Monotone matrix norms
[Ciarlet 2.2-10]
- Let $\mathscr{S}_n$ be the set of symmetric matrices and $\mathscr{S}_n^+$ the subset of non-negative definite symmetric matrices. A matrix norm $\|\cdot\|$ is said to be monotone if
$$A\in\mathscr{S}_n^+ \;\wedge\; B-A\in\mathscr{S}_n^+ \;\Rightarrow\; \|A\| \leq \|B\|.$$
Show that the norms $\|\cdot\|_2$ and $\|\cdot\|_F$ (Frobenius norm) are monotone.
- More generally, show that if a matrix norm $\|\cdot\|$ is invariant under unitary transformations, that is, if $\|A\| = \|AU\| = \|UA\|$ for every unitary matrix $U$, then it is monotone.
- Let $\|\cdot\|$ be a monotone norm and $\mbox{cond}(\cdot)$ the condition number function associated with it. Prove that
$$A,B\in\mathscr{S}_n^* \;\Rightarrow\; \mbox{cond}(A+B) \leq \max\left\{\mbox{cond}(A),\;\mbox{cond}(B)\right\},$$
where $\mathscr{S}_n^*$ denotes the subset of positive definite symmetric matrices.
I have already proved (1), and I have also shown that $\lambda_i(A) \leq \lambda_i(B)$ for all $i=1,2,\ldots,n$ whenever $A,\,B-A\in\mathscr{S}_n^+$. But I have had trouble proving (2) and (3). For (2), I proved that
\begin{eqnarray*}
\|A\| & = & \|U^*AU\| = \|\mbox{diag}(\lambda_i(A))\|,\\[0.3cm]
\|B\| & = & \|V^*BV\| = \|\mbox{diag}(\lambda_i(B))\|,
\end{eqnarray*}
but I don't know what to do next. Please help me; thanks in advance.
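As a quick numerical sanity check (not a proof) of (1) and of the eigenvalue inequality above, here is a NumPy sketch I used on random pairs with $A$ and $B-A$ non-negative definite; the random construction is just for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

for _ in range(1000):
    n = 5
    # A and B - A are built as Gram matrices, hence symmetric non-negative definite.
    M = rng.standard_normal((n, n))
    N = rng.standard_normal((n, n))
    A = M @ M.T
    B = A + N @ N.T

    # Eigenvalues of symmetric matrices, sorted in ascending order.
    lam_A = np.linalg.eigvalsh(A)
    lam_B = np.linalg.eigvalsh(B)

    tol = 1e-9
    assert np.all(lam_A <= lam_B + tol)                                # lambda_i(A) <= lambda_i(B)
    assert np.linalg.norm(A, 2) <= np.linalg.norm(B, 2) + tol          # spectral norm is monotone
    assert np.linalg.norm(A, 'fro') <= np.linalg.norm(B, 'fro') + tol  # Frobenius norm is monotone
```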
linear-algebra norm
edited Feb 26 '17 at 3:21 by Martin Argerami
asked Apr 27 '13 at 17:56 by FASCH
1 Answer
For part 2:
To simplify notation, define a function $g:\mathbb R^n\to\mathbb R$ by $g(x_1,\ldots,x_n)=\|\text{diag}(x_j)\|$. Since $\|\cdot\|$ is a unitarily invariant matrix norm, we have that

- $g$ is a norm on $\mathbb R^n$;
- $g(x_1,\ldots,x_n)=g(|x_1|,\ldots,|x_n|)$ (this comes from the unitary invariance);
- $g(x_1,\ldots,x_n)=g(x_{\sigma(1)},\ldots,x_{\sigma(n)})$ for any permutation $\sigma$.

Such a $g$ is called a gauge function.
Now, if $t\in[0,1]$, then (writing $x=(x_1,\ldots,x_n)$)
\begin{align}
g(tx_1,x_2,\ldots,x_n)&=g\left(\frac{1+t}2\,x+\frac{1-t}2\,(-x_1,x_2,\ldots,x_n)\right)\\
&\leq\frac{1+t}2\,g(x)+\frac{1-t}2\,g(-x_1,x_2,\ldots,x_n)\\
&=\frac{1+t}2\,g(x)+\frac{1-t}2\,g(x)=g(x).
\end{align}
Applying the above inductively, we get
$$
g(t_1x_1,\ldots,t_nx_n)\leq g(x)
$$
whenever $t_1,\ldots,t_n\in[0,1]$.
Since $0\leq\lambda_j(A)\leq\lambda_j(B)$ for all $j$, we have $\lambda_j(A)=t_j\,\lambda_j(B)$ for appropriate $t_1,\ldots,t_n\in[0,1]$.
Then
\begin{align}
\|A\|&=\|\text{diag}(\lambda_j(A))\|=\|\text{diag}(t_j\,\lambda_j(B))\|\\
&\leq \|\text{diag}(\lambda_j(B))\|=\|B\|.
\end{align}
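As a quick numerical illustration of this monotonicity (not part of the proof), one can test a unitarily invariant norm other than $\|\cdot\|_2$ and $\|\cdot\|_F$, say the nuclear norm (sum of the singular values); the random construction below is chosen only for the test:

```python
import numpy as np

rng = np.random.default_rng(1)

def nuclear_norm(X):
    # Sum of the singular values: a unitarily invariant matrix norm.
    return np.linalg.svd(X, compute_uv=False).sum()

for _ in range(1000):
    n = 6
    M = rng.standard_normal((n, n))
    N = rng.standard_normal((n, n))
    A = M @ M.T          # non-negative definite
    B = A + N @ N.T      # B - A is non-negative definite
    assert nuclear_norm(A) <= nuclear_norm(B) + 1e-9
```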
For part 3, I know of the original proof by Marshall and Olkin (1973). Assume $\text{cond}(A)\leq\text{cond}(B)$. Let $A'=A/\|A\|$, $B'=B/\|B\|$, and $t=\frac{\|A\|}{\|A\|+\|B\|}$.
We have, in the new notation, that $$\tag{0}\|(A')^{-1}\|\leq\|(B')^{-1}\|.$$ And
$$\tag{1}
\|tA'+(1-t)B'\|\leq t\|A'\|+(1-t)\|B'\|=1.
$$
Also, as the inverse is convex on the set of positive-definite matrices,
$$\tag{2}
(tA'+(1-t)B')^{-1}\leq t(A')^{-1}+(1-t)(B')^{-1}.
$$
Thus, using $(2)$ together with the monotonicity of the norm, and then $(0)$,
\begin{align}\tag{3}
\|(tA'+(1-t)B')^{-1}\|&\leq\|t(A')^{-1}+(1-t)(B')^{-1}\|\\
&\leq t\|(A')^{-1}\|+(1-t)\|(B')^{-1}\|\\
&\leq\|(B')^{-1}\|.
\end{align}
Now, combining $(1)$ and $(3)$,
\begin{align}\tag{4}
\|tA'+(1-t)B'\|\,\|(tA'+(1-t)B')^{-1}\|&\leq \|(tA'+(1-t)B')^{-1}\|\leq\|(B')^{-1}\|.
\end{align}
If we now use the definitions of $A'$ and $B'$, we get
$$
tA'+(1-t)B'=\frac1{\|A\|+\|B\|}\,\left(A+B\right),\qquad (B')^{-1}=\|B\|\,B^{-1}.
$$
Since the scalar factors cancel on the left-hand side, we may thus rewrite $(4)$ as
$$
\|A+B\|\,\|(A+B)^{-1}\|\leq \|B\|\,\|B^{-1}\|,
$$
as desired.
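Again as a numerical sanity check only, here is a sketch of the final inequality with the spectral norm standing in for the monotone norm; the random positive definite construction is my own choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def cond2(X):
    # Condition number in the spectral norm: ||X|| * ||X^{-1}||.
    return np.linalg.cond(X, 2)

for _ in range(1000):
    n = 5
    M = rng.standard_normal((n, n))
    N = rng.standard_normal((n, n))
    # Symmetric positive definite matrices; the small multiple of the
    # identity keeps them safely away from singularity.
    A = M @ M.T + 0.1 * np.eye(n)
    B = N @ N.T + 0.1 * np.eye(n)
    assert cond2(A + B) <= max(cond2(A), cond2(B)) + 1e-8
```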
edited Apr 13 '17 at 12:21 by Community♦
answered Feb 26 '17 at 3:21 by Martin Argerami