Proofs of determinants of block matrices [duplicate]
This question already has an answer here:
Determinant of a block lower triangular matrix (6 answers)
I know that there are three important results for taking determinants of block matrices:
$$\begin{align}\det \begin{bmatrix}
A & B \\
0 & D
\end{bmatrix} &= \det(A) \cdot \det(D) & (1) \\ \\
\det \begin{bmatrix}
A & B \\
C & D
\end{bmatrix} &\neq \det(AD - CB) \text{ in general} & (2) \\ \\
\det \begin{bmatrix}
A & B \\
C & D
\end{bmatrix} &= \det \begin{bmatrix}
A & B \\
0 & D - CA^{-1}B
\end{bmatrix} \\ \\
&= \underbrace{\det(A)\cdot \det\left(D-CA^{-1}B\right)}_\text{if $A^{-1}$ exists} \\ \\
&= \underbrace{\det\left(AD-CB\right)}_\text{if $AC=CA$} & (3)
\end{align}$$
Now I understand that in result $(3)$ row operations are being performed to bring the matrix into the form we see in $(1)$, but I can't seem to convince myself that result $(1)$ is true in the first place.
Furthermore, in result $(3)$ I understand that $\det(A)\cdot \det\left(D-CA^{-1}B\right) = \det\left(A(D-CA^{-1}B)\right) = \det(AD-CB)$ via the product rule for determinants. I also understand that we need $A^{-1}$ to exist for the initial row operation that reduces the matrix to an upper triangular form $U$, and that we require $AC = CA$ so that, by commutativity, $ACA^{-1}B$ reduces to $CB$.
Can someone provide proofs of results $(1)$ and $(2)$? I can't seem to find proofs of them in any of the textbooks at my disposal.
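(Not part of the original question: the following minimal NumPy sketch checks $(1)$ and the $AC = CA$ case of $(3)$ numerically, taking $C$ to be a polynomial in $A$ so that the blocks commute.)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
C = 2 * A + np.eye(2)          # a polynomial in A, so AC = CA
B = rng.standard_normal((2, 2))
D = rng.standard_normal((2, 2))

# Result (1): block upper triangular matrix
T = np.block([[A, B], [np.zeros((2, 2)), D]])
print(np.isclose(np.linalg.det(T), np.linalg.det(A) * np.linalg.det(D)))  # True

# Result (3): det M = det(AD - CB) when AC = CA
M = np.block([[A, B], [C, D]])
print(np.isclose(np.linalg.det(M), np.linalg.det(A @ D - C @ B)))  # True
```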
linear-algebra matrices determinant proof-explanation
asked Aug 27 '16 at 21:39 · Perturbative
marked as duplicate by Rahul, Shailesh, Joey Zou, R_D, Claude Leibovici Aug 28 '16 at 6:13
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
It reminds me of this MO answer… – Watson, Feb 2 at 14:02
3 Answers
To prove $(1)$, it suffices to note that
$$
\pmatrix{A & B\\ 0 & D} = \pmatrix{A & 0\\ 0 & D} \pmatrix{I & A^{-1}B\\ 0 & I}
$$
From here, it suffices to note that the second matrix is upper triangular with unit diagonal, so its determinant is $1$, and to compute the determinant of the first, block-diagonal matrix: the Leibniz expansion shows it equals $\det(A)\det(D)$. (Strictly, the factorization assumes $A$ is invertible; the singular case follows by continuity.)
For an example showing that equality fails in $(2)$, consider the matrix
$$
\pmatrix{
0&1&0&0\\
0&0&1&0\\
0&0&0&1\\
1&0&0&0
} =
\pmatrix{B & B^T\\ B^T & B}
$$
For an example where the diagonal blocks are invertible, add $I$ to the whole matrix.
answered Aug 27 '16 at 21:46, edited Aug 27 '16 at 21:53 · Omnomnomnom · accepted
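(A quick numerical check of this counterexample, not part of the original answer: here $B$ is the nilpotent block above, so $AD - CB = B^2 - (B^T)^2 = 0$, while the full matrix is an odd permutation.)

```python
import numpy as np

B = np.array([[0, 1],
              [0, 0]])   # nilpotent: B @ B = 0
Bt = B.T

# M = [[B, B^T], [B^T, B]] is the 4x4 cyclic permutation matrix above
M = np.block([[B, Bt],
              [Bt, B]])

print(np.linalg.det(M))                # -1.0: a 4-cycle is an odd permutation
print(np.linalg.det(B @ B - Bt @ Bt))  # 0.0: det(AD - CB), so the two differ
```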
Proof of the 3rd identity.
It is a consequence of the following "block diagonalization" identity:
$$\pmatrix{
A&B\\
C&D
}=\pmatrix{
I&0\\
CA^{-1}&I
}\pmatrix{
A&0\\
0&S
}\pmatrix{
I&A^{-1}B\\
0&I
} \quad\text{with } S := D-CA^{-1}B$$
($S$ is the Schur complement of $A$; see https://en.wikipedia.org/wiki/Schur_complement.)
Then, it suffices to take determinants on both sides.
answered Aug 27 '16 at 22:37, edited Mar 31 at 5:20 · Jean Marie
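(Not part of the original answer: a short NumPy sketch that checks both the block factorization and the resulting determinant identity on random matrices.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))

Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B                  # Schur complement of A
I, Z = np.eye(n), np.zeros((n, n))

# the three "block LDU" factors from the identity above
L = np.block([[I, Z], [C @ Ainv, I]])
Mid = np.block([[A, Z], [Z, S]])
U = np.block([[I, Ainv @ B], [Z, I]])

M = np.block([[A, B], [C, D]])
print(np.allclose(L @ Mid @ U, M))    # True: the factorization holds
print(np.isclose(np.linalg.det(M),
                 np.linalg.det(A) * np.linalg.det(S)))  # True: det M = det(A) det(S)
```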
Suppose block $A$ has dimension $r$ and block $D$ has dimension $s$. Use the Leibniz definition of the determinant $\lvert c_{i,j}\rvert$, $1\le i,j\le r+s$:
$$\begin{vmatrix} A&C\\ 0& D\end{vmatrix} = \sum_{\sigma\in\mathfrak S_{r+s}} \operatorname{sgn}(\sigma) \prod_{1\le j\le r+s} c_{\sigma(j),j}.$$
Now the non-zero terms are those such that, if $1\le j\le r$, then $1\le \sigma(j)\le r$. Similarly, if $r+1\le j\le r+s$, then $r+1\le \sigma(j)\le r+s$. So the non-zero terms are those for which the permutation $\sigma\in \mathfrak S_{r+s}$ is the concatenation of a permutation in $\mathfrak S_r$ and a permutation in $\mathfrak S_s$, and clearly the signature of $\sigma$ is the product of the signatures of its two factors.
The first formula for the determinant then follows by distributivity.
answered Aug 27 '16 at 22:20, edited Aug 27 '16 at 22:42 · Bernard
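(A brute-force check of this argument, not part of the original answer: the sketch below evaluates the Leibniz sum directly and confirms that every permutation mixing the two blocks contributes a zero term.)

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
r, s = 2, 2
M = np.block([[rng.standard_normal((r, r)), rng.standard_normal((r, s))],
              [np.zeros((s, r)),            rng.standard_normal((s, s))]])

def sgn(p):
    # signature of a permutation tuple, via its cycle decomposition
    seen, sign = set(), 1
    for i in range(len(p)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j, length = p[j], length + 1
        sign *= (-1) ** (length - 1)
    return sign

leibniz = 0.0
for p in itertools.permutations(range(r + s)):
    term = sgn(p) * np.prod([M[p[j], j] for j in range(r + s)])
    if any(p[j] >= r for j in range(r)):
        assert term == 0     # sigma mixes the blocks: it picks up a zero entry
    leibniz += term

print(np.isclose(leibniz, np.linalg.det(M)))  # True
```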