Derivative with respect to Symmetric Matrix























I realize that derivatives with respect to symmetric matrices have been well covered in prior questions. Still, I find that the numerical results do not agree with my understanding of the theory.



Minka states (page 4) that for a symmetric matrix $\Sigma$ we have the following result

$$\frac{d\log|\Sigma|}{d\Sigma} = 2\Sigma^{-1} - (\Sigma^{-1}\circ I),$$

stemming from the constraint that $d\Sigma$ must be symmetric.
This is in contrast to the result

$$\frac{d\log|\Sigma|}{d\Sigma} = \Sigma^{-1}$$

which holds if $\Sigma$ is not constrained to be symmetric.



I checked this using finite differences, and my result agrees with the latter, $\frac{d\log|\Sigma|}{d\Sigma} = \Sigma^{-1}$, rather than the former, $\frac{d\log|\Sigma|}{d\Sigma} = 2\Sigma^{-1} - (\Sigma^{-1}\circ I)$. Also, this paper by Dwyer (1967, see Table 2) gives the answer as $\Sigma^{-1}$ without mentioning any correction for symmetry. Why, then, is the correction for the symmetry constraint given? What is going on here?



Here is the code and output (notice that the off-diagonal entries of the symmetric formula disagree with the finite-difference result):



> library(numDeriv)
>
> A <- matrix(c(4, .4, .4, 2), 2, 2)
> q <- nrow(A)
>
> f <- function(A) log(det(A))
>
> matrix(grad(f, A), q, q)
            [,1]        [,2]
[1,]  0.25510204 -0.05102041
[2,] -0.05102041  0.51020408
> 2*solve(A) - diag(diag(solve(A)))
           [,1]       [,2]
[1,]  0.2551020 -0.1020408
[2,] -0.1020408  0.5102041
> solve(A)
            [,1]        [,2]
[1,]  0.25510204 -0.05102041
[2,] -0.05102041  0.51020408
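The same check can be reproduced with a hand-rolled central-difference loop; here is a sketch in Python with numpy (my own translation of the R snippet above, not part of the original post). Like `numDeriv::grad`, it perturbs each of the four matrix entries independently, ignoring the symmetry constraint:

```python
import numpy as np

def log_det(A):
    return np.log(np.linalg.det(A))

A = np.array([[4.0, 0.4],
              [0.4, 2.0]])

# Central finite differences, perturbing each of the 4 entries independently
# (this is what numDeriv's grad() does: it does not enforce symmetry).
eps = 1e-6
G = np.zeros_like(A)
for i in range(2):
    for j in range(2):
        Ap, Am = A.copy(), A.copy()
        Ap[i, j] += eps
        Am[i, j] -= eps
        G[i, j] = (log_det(Ap) - log_det(Am)) / (2 * eps)

print(G)                 # matches inv(A), not the "symmetric" formula
print(np.linalg.inv(A))
```

As in the R output, the unconstrained finite-difference gradient reproduces $\Sigma^{-1}$ entry by entry.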









matrix-calculus

asked 2 days ago by jds
1 Answer






























Consider $B$, a symmetric matrix of dimension $2$; $B=\begin{pmatrix}x&y\\y&z\end{pmatrix}\in S_2$ may be identified with the row vector $[x,y,z]$.



Now let $f:A\in S_2\cap GL_2 \rightarrow \log(\det(A))$. Its derivative is $Df_A:H\in S_2\rightarrow \operatorname{tr}(HA^{-1})=\langle H,A^{-1}\rangle$, where $\langle\cdot,\cdot\rangle$ is the standard scalar product on matrices. Then $H=\begin{pmatrix}h&k\\k&l\end{pmatrix}$ is identified with $[h,k,l]$ and $A^{-1}=\begin{pmatrix}a&b\\b&c\end{pmatrix}$ with $[a,b,c]$.



$Df_A(H)=ha+2kb+lc=[a,2b,c]\,[h,k,l]^T$.



Finally $\nabla f(A)=[a,2b,c]$, which is identified with $\begin{pmatrix}a&2b\\2b&c\end{pmatrix}=2A^{-1}-A^{-1}\circ I$.



i) Note that the derivative is a linear map and the gradient is its associated matrix; more precisely, it is the transpose of the associated matrix (a vector).



ii) About your example, for $A=\begin{pmatrix}p&q\\q&r\end{pmatrix}$: the command `matrix(grad(f, A), q, q)` does not see that $a_{1,2}=a_{2,1}$, and so treats the problem as having $4$ independent variables, whereas there are only three.



In fact, the derivatives with respect to $p,q,r$ are $0.2551020, -0.1020408, 0.5102041$. More generally, you must multiply by $2$ the finite-difference derivatives with respect to $a_{i,j}$ when $i\neq j$ and keep the others unchanged.
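This reconciliation can be checked numerically; below is a sketch in Python with numpy (my own illustration, not from the original answer), differentiating with respect to only the three free parameters $p,q,r$ of the symmetric matrix. The off-diagonal derivative doubles, matching $2A^{-1}-A^{-1}\circ I$:

```python
import numpy as np

def f(p, q, r):
    # log det of the symmetric matrix [[p, q], [q, r]]
    A = np.array([[p, q], [q, r]])
    return np.log(np.linalg.det(A))

p, q, r = 4.0, 0.4, 2.0
eps = 1e-6

# Central differences in the three free parameters only;
# perturbing q moves BOTH off-diagonal entries at once.
dp = (f(p + eps, q, r) - f(p - eps, q, r)) / (2 * eps)
dq = (f(p, q + eps, r) - f(p, q - eps, r)) / (2 * eps)
dr = (f(p, q, r + eps) - f(p, q, r - eps)) / (2 * eps)

Ainv = np.linalg.inv(np.array([[p, q], [q, r]]))
target = 2 * Ainv - np.diag(np.diag(Ainv))  # 2*A^{-1} - A^{-1} ∘ I
print([dp, dq, dr])  # ≈ [0.2551, -0.1020, 0.5102]
```

Here `dq` matches the off-diagonal entry of the "symmetric" formula, exactly twice the value that `numDeriv::grad` reports when it perturbs $a_{1,2}$ and $a_{2,1}$ separately.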





























          • Thank you for the answer. However, I am not following the last 4 lines. In particular, where is 2lb coming from? why not also 2kc? In the last line why do you switch to ∇(f) rather than Df? Sorry I think I am missing something. Also, why is the finite difference result different (see original question)?
            – jds
            15 hours ago












• Yes, I swapped $k$ and $l$. For the other questions, cf. my edit.
            – loup blanc
            11 hours ago











answered yesterday, edited 11 hours ago, by loup blanc











