Derivative with respect to Symmetric Matrix
I realize that derivatives with respect to symmetric matrices have been well covered in prior questions. Still, I find that the numerical results do not agree with what I understand as the theory.
Minka states (page 4) that for a symmetric matrix $\Sigma$ we have the following result
$$\frac{d\log|\Sigma|}{d\Sigma} = 2\Sigma^{-1} - (\Sigma^{-1}\circ I)$$
stemming from the constraint that $d\Sigma$ must be symmetric.
This is in contrast to the result
$$\frac{d\log|\Sigma|}{d\Sigma} = \Sigma^{-1}$$ if $\Sigma$ is not constrained to be symmetric.
I checked this using finite differences, and my result agrees with the latter, $\frac{d\log|\Sigma|}{d\Sigma} = \Sigma^{-1}$, rather than the former, $\frac{d\log|\Sigma|}{d\Sigma} = 2\Sigma^{-1} - (\Sigma^{-1}\circ I)$. Moreover, this paper by Dwyer (1967, see Table 2) gives $\Sigma^{-1}$ without mentioning any correction for symmetry. Why, then, is the correction for the symmetry constraint given? What is going on here?
Here is the code and output (notice that the off-diagonal entries do not match the symmetry-corrected formula):
> library(numDeriv)
>
> A <- matrix(c(4,.4,.4,2), 2, 2)
> q <- nrow(A)
>
> f <- function(A) log(det(A))
>
> matrix(grad(f, A), q, q)
[,1] [,2]
[1,] 0.25510204 -0.05102041
[2,] -0.05102041 0.51020408
> 2*solve(A)-diag(diag(solve(A)))
[,1] [,2]
[1,] 0.2551020 -0.1020408
[2,] -0.1020408 0.5102041
> solve(A)
[,1] [,2]
[1,] 0.25510204 -0.05102041
[2,] -0.05102041 0.51020408
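The discrepancy can also be reproduced without `numDeriv` by parameterizing the matrix by its three free entries $(p,q,r)$ instead of all four. A minimal sketch in Python/NumPy (a translation of the R check above; the helper `logdet_sym` is my own name, not part of any library):

```python
import numpy as np

def logdet_sym(p, q, r):
    # log-determinant of the symmetric matrix [[p, q], [q, r]]
    return np.log(p * r - q * q)

p, q, r = 4.0, 0.4, 2.0
h = 1e-6

# central finite differences in the three free parameters
dp = (logdet_sym(p + h, q, r) - logdet_sym(p - h, q, r)) / (2 * h)
dq = (logdet_sym(p, q + h, r) - logdet_sym(p, q - h, r)) / (2 * h)
dr = (logdet_sym(p, q, r + h) - logdet_sym(p, q, r - h)) / (2 * h)

A = np.array([[p, q], [q, r]])
Ainv = np.linalg.inv(A)
# Minka's symmetry-corrected formula: 2*Sigma^{-1} - (Sigma^{-1} o I)
minka = 2 * Ainv - np.diag(np.diag(Ainv))

print(dp, dq, dr)                 # gradient in the 3 free parameters
print(minka[0, 1])                # off-diagonal of the corrected formula
```

When the off-diagonal entry is treated as a single variable (so both $a_{1,2}$ and $a_{2,1}$ move together), the finite-difference derivative in $q$ matches the off-diagonal of $2\Sigma^{-1} - (\Sigma^{-1}\circ I)$, i.e. twice the entry of $\Sigma^{-1}$.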
matrix-calculus
asked 2 days ago
jds
1 Answer
Consider $B$, a symmetric matrix of dimension $2$; $B=\begin{pmatrix}x&y\\y&z\end{pmatrix}\in S_2$ may be identified with the row vector $[x,y,z]$.
Now let $f:A\in S_2\cap GL_2 \rightarrow \log(\det(A))$. Its derivative is $Df_A:H\in S_2\rightarrow \operatorname{tr}(HA^{-1})=\langle H,A^{-1}\rangle$, where $\langle\cdot,\cdot\rangle$ is the standard scalar product on matrices. Then $H=\begin{pmatrix}h&k\\k&l\end{pmatrix}$ is identified with $[h,k,l]$ and $A^{-1}=\begin{pmatrix}a&b\\b&c\end{pmatrix}$ with $[a,b,c]$.
$Df_A(H)=ha+2kb+lc=[a,2b,c][h,k,l]^T$.
Finally $\nabla f(A)=[a,2b,c]$, which is identified with $\begin{pmatrix}a&2b\\2b&c\end{pmatrix}=2A^{-1}-A^{-1}\circ I$.
i) Note that the derivative is a linear map and the gradient is its associated matrix; more precisely, it is the transpose of the associated matrix (a vector).
ii) About your example, for $A=\begin{pmatrix}p&q\\q&r\end{pmatrix}$: the command `matrix(grad(f, A), q, q)` does not see that $a_{1,2}=a_{2,1}$ and treats the problem as having $4$ variables, whereas there are only three.
In fact, the derivatives with respect to $p,q,r$ are $0.2551020, -0.1020408, 0.5102041$. More generally, you must multiply by $2$ the derivatives with respect to $a_{i,j}$ when $i\neq j$ and keep the others.
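The identity $Df_A(H)=\operatorname{tr}(HA^{-1})=ha+2kb+lc$ can be checked numerically by differentiating $\log\det$ along a symmetric direction $H$. A short Python/NumPy sketch (the particular direction $H$ is an arbitrary choice of mine):

```python
import numpy as np

A = np.array([[4.0, 0.4], [0.4, 2.0]])
H = np.array([[0.3, -0.2], [-0.2, 0.1]])  # arbitrary symmetric direction
t = 1e-6

# directional derivative of log det along H (central difference)
num = (np.log(np.linalg.det(A + t * H))
       - np.log(np.linalg.det(A - t * H))) / (2 * t)

# tr(H A^{-1}) = h*a + 2*k*b + l*c in the answer's notation
Ainv = np.linalg.inv(A)
analytic = np.trace(H @ Ainv)

print(num, analytic)  # the two values agree
```

Because the $(1,2)$ and $(2,1)$ entries of $H$ are tied together, the cross term $kb$ appears twice in the trace, which is exactly where the factor of $2$ on the off-diagonal comes from.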
Thank you for the answer. However, I am not following the last four lines. In particular, where is $2lb$ coming from? Why not also $2kc$? In the last line, why do you switch to $\nabla f$ rather than $Df$? Sorry, I think I am missing something. Also, why is the finite-difference result different (see original question)?
– jds
15 hours ago
Yes, I swapped $k$ and $l$. For the other questions, see my edit.
– loup blanc
11 hours ago
edited 11 hours ago
answered yesterday
loup blanc