4th-order tensors: double dot product and inverse computation
I am currently working on a subject that involves a lot of fourth-order tensor computations, including the double dot product and the inverse of fourth-order tensors.
First, the definitions, so that we are on the same page. What I call the double dot product is:
$$ (A:B)_{ijkl} = A_{ijmn}B_{mnkl} $$
and for the double dot product between a fourth-order tensor and a second-order tensor:
$$ (A:s)_{ij} = A_{ijkl}s_{kl}$$
using the convention of summation over repeated indices.
What I call the identity of fourth-order tensors is the only tensor such that:
$$ A:I = I:A = A $$
It is defined by $ I = \delta_{ik}\delta_{jl}\, e_{i} \otimes e_{j} \otimes e_{k} \otimes e_{l} $.
What I call the inverse of a fourth-order tensor is the inverse with respect to the double dot product, that is, the inverse of $A$ is the only tensor $B$ such that $A:B = B:A = I$.
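These definitions are easy to check numerically. A minimal sketch, assuming NumPy (all function names are mine):

```python
import numpy as np

def ddot44(A, B):
    # (A:B)_{ijkl} = A_{ijmn} B_{mnkl}
    return np.einsum('ijmn,mnkl->ijkl', A, B)

def ddot42(A, s):
    # (A:s)_{ij} = A_{ijkl} s_{kl}
    return np.einsum('ijkl,kl->ij', A, s)

d = np.eye(3)
I4 = np.einsum('ik,jl->ijkl', d, d)   # I_{ijkl} = delta_ik delta_jl

A = np.random.rand(3, 3, 3, 3)
assert np.allclose(ddot44(A, I4), A)  # A : I = A
assert np.allclose(ddot44(I4, A), A)  # I : A = A
```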
The double dot product is easy to compute if you don't think about the efficiency of the code: just create an array and loop over the four indices. Computing the inverse is something else. Every tensor I use has the minor symmetries $A_{ijkl} = A_{jikl} = A_{ijlk}$, so I thought I would use the Mandel representation for second- and fourth-order tensors mentioned on Wikipedia. The fourth-order tensor can be put into a $6 \times 6$ matrix with the following components:
$$ [C] =
\begin{bmatrix}
C_{1111} & C_{1122} & C_{1133} & \sqrt{2}C_{1123} & \sqrt{2}C_{1131} & \sqrt{2}C_{1112}\\
C_{2211} & C_{2222} & C_{2233} & \sqrt{2}C_{2223} & \sqrt{2}C_{2231} & \sqrt{2}C_{2212}\\
C_{3311} & C_{3322} & C_{3333} & \sqrt{2}C_{3323} & \sqrt{2}C_{3331} & \sqrt{2}C_{3312}\\
\sqrt{2}C_{2311} & \sqrt{2}C_{2322} & \sqrt{2}C_{2333} & 2C_{2323} & 2C_{2331} & 2C_{2312}\\
\sqrt{2}C_{3111} & \sqrt{2}C_{3122} & \sqrt{2}C_{3133} & 2C_{3123} & 2C_{3131} & 2C_{3112}\\
\sqrt{2}C_{1211} & \sqrt{2}C_{1222} & \sqrt{2}C_{1233} & 2C_{1223} & 2C_{1231} & 2C_{1212}
\end{bmatrix}
$$
$C$ is a fourth-order tensor with minor symmetries and $[C]$ is its Mandel representation. According to several sources, the point of Mandel's representation is that the usual matrix-matrix and matrix-vector products coincide with the double dot product of fourth-order tensors, and that the inverses in the two respective spaces (fourth-order tensors and $6 \times 6$ matrices) coincide as well, that is
$$
[A:B] = [A]\cdot[B] \qquad \qquad (1)
$$
and
$$
[A^{-1}] = [A]^{-1} \qquad \qquad (2)
$$
where $\cdot$ is the usual matrix-matrix product. But it doesn't work, or at least there must be something I don't understand. If I put the identity fourth-order tensor defined above into Mandel's notation, I get the following matrix:
$$ [I] =
\begin{bmatrix}
1&0&0&0&0&0\\
0&1&0&0&0&0\\
0&0&1&0&0&0\\
0&0&0&2&0&0\\
0&0&0&0&2&0\\
0&0&0&0&0&2
\end{bmatrix}
$$
which is obviously different from the identity of $6 \times 6$ matrices, so if I compute $[C]\cdot[I]$ using the usual matrix-matrix product I won't get back the same $[C]$.
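This observation is easy to reproduce. A minimal sketch, assuming NumPy, where `mandel` (my name) implements the $6 \times 6$ layout above:

```python
import numpy as np

# index pairs in the order (11, 22, 33, 23, 31, 12), 0-based
PAIRS = [(0, 0), (1, 1), (2, 2), (1, 2), (2, 0), (0, 1)]
W = np.array([1, 1, 1, np.sqrt(2), np.sqrt(2), np.sqrt(2)])

def mandel(C):
    # 6x6 Mandel representation of a 4th-order tensor
    return np.array([[W[a] * W[b] * C[i, j, k, l]
                      for b, (k, l) in enumerate(PAIRS)]
                     for a, (i, j) in enumerate(PAIRS)])

d = np.eye(3)
I4 = np.einsum('ik,jl->ijkl', d, d)  # I_{ijkl} = delta_ik delta_jl

# applying the Mandel formula to this identity gives diag(1,1,1,2,2,2),
# not the 6x6 identity matrix
assert np.allclose(mandel(I4), np.diag([1, 1, 1, 2, 2, 2]))
```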
I also wrote a little script to check relations (1) and (2), but I was not able to confirm them with random $4^{th}$-order tensors possessing the minor symmetries.
What am I missing here?
Thanks a lot for your help and the discussions to come :)
linear-algebra abstract-algebra matrices tensor-products tensors
Are you sure about the presence of the $2$ coefficients? For instance, in Voigt notation note how the stress tensor is not given such coefficients, whereas the strain tensor is. Look especially at the stiffness tensor here.
– user3658307
May 10 '17 at 19:01
Well, in every website, book, or piece of literature I found, the Mandel notation is exactly as I wrote it. The Voigt notation you're suggesting doesn't have any factors, but according to every source it doesn't preserve the double dot product or the inverse when using the usual matrix-matrix product.
– Experience111
May 10 '17 at 20:22
There's definitely an issue with your identity tensor since $[I][I] \ne [I]$. These representations may depend on covariance/contravariance ... check out this review by Helnwein.
– user3658307
May 10 '17 at 22:53
@user3658307 What I know for sure is that $I = \delta_{ik}\delta_{jl}\, e_{i} \otimes e_{j} \otimes e_{k} \otimes e_{l}$ is the proper identity tensor for the double dot product, as I checked it with several random fourth-order tensors. However, you're right that the representation as I gave it is not the right one, so there must be an issue. I already came across the review you suggested but didn't read it extensively; I'll check it out again.
– Experience111
May 11 '17 at 6:46
What I'm sure of so far is that the Mandel representation is correct, i.e. $[A:B] = [A]\cdot[B]$, because I checked it by hand. The problem is definitely in my identity tensor. I'm not so sure anymore about the relation $I = \delta_{ik}\delta_{jl}\, e_{i} \otimes e_{j} \otimes e_{k} \otimes e_{l}$. But when I use $\frac{1}{2}(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk})\, e_{i} \otimes e_{j} \otimes e_{k} \otimes e_{l}$, my computation for the double dot product fails: $A:I \neq A$.
– Experience111
May 11 '17 at 7:38
asked May 10 '17 at 18:13 by Experience111
2 Answers
I'll answer my own question, since I was able to find the solution to my problem with the help of one commentator. The definition of the identity tensor $I = \delta_{ik}\delta_{jl}\, e_{i} \otimes e_{j} \otimes e_{k} \otimes e_{l}$ is correct, but it does not lead to a tensor with minor symmetries.
My mistake was in using this definition of the identity and applying the Mandel transformation to it. The Mandel transformation preserves the double dot product and the inverse if and only if the transformed tensors have the minor symmetries. As suggested in the review by Helnwein mentioned by @user3658307, whom I thank for his help, the definition one should use for the identity tensor in the case of minor symmetries is:
$$
I = \frac{1}{2}(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk})\, e_{i} \otimes e_{j} \otimes e_{k} \otimes e_{l}
$$
This tensor has the minor symmetries and can thus be put into the Mandel representation, which yields:
$$
[I] = \begin{bmatrix} 1&0&0&0&0&0 \\ 0&1&0&0&0&0 \\ 0&0&1&0&0&0 \\ 0&0&0&1&0&0 \\ 0&0&0&0&1&0 \\ 0&0&0&0&0&1 \end{bmatrix}
$$
If you are interested in a more rigorous presentation of this subject (matrix representations of tensors), I highly recommend reading the aforementioned review. In particular, one should be really careful about the covariance and contravariance of the tensors being handled, since in some particular cases their matrix representations can differ even if $A^{ij}{}_{kl} = A_{ij}{}^{kl}$.
Note on the inverse:
My original goal was to find an easy way to invert fourth-order tensors with minor symmetries using the usual inversion algorithms for matrices. This is not always possible in the general case, since the matrix representation of a general fourth-order tensor possessing only the minor symmetries is not always invertible in the space of $6 \times 6$ matrices. However, it is known that for a random $n \times n$ matrix with real coordinates, the 'probability' of it being invertible is $1$ in the sense of the Lebesgue measure. In linear elasticity, or physics in general, one should thus be able to compute an inverse in every case.
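To make the recipe above concrete, here is a sketch of the full round trip, assuming NumPy (`mandel` and `unmandel` are my names; the pair ordering follows the $6 \times 6$ layout in the question):

```python
import numpy as np

# index pairs in the order (11, 22, 33, 23, 31, 12), 0-based
PAIRS = [(0, 0), (1, 1), (2, 2), (1, 2), (2, 0), (0, 1)]
W = np.array([1, 1, 1, np.sqrt(2), np.sqrt(2), np.sqrt(2)])

def mandel(C):
    # 6x6 Mandel representation of a minor-symmetric 4th-order tensor
    return np.array([[W[a] * W[b] * C[i, j, k, l]
                      for b, (k, l) in enumerate(PAIRS)]
                     for a, (i, j) in enumerate(PAIRS)])

def unmandel(M):
    # rebuild a minor-symmetric 4th-order tensor from its Mandel matrix
    C = np.zeros((3, 3, 3, 3))
    for a, (i, j) in enumerate(PAIRS):
        for b, (k, l) in enumerate(PAIRS):
            v = M[a, b] / (W[a] * W[b])
            C[i, j, k, l] = C[j, i, k, l] = C[i, j, l, k] = C[j, i, l, k] = v
    return C

# symmetric identity: I_{ijkl} = (d_ik d_jl + d_il d_jk) / 2
d = np.eye(3)
I_sym = 0.5 * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d))
assert np.allclose(mandel(I_sym), np.eye(6))  # maps to the 6x6 identity

rng = np.random.default_rng(0)
A = rng.random((3, 3, 3, 3))
A = 0.25 * (A + A.transpose(1, 0, 2, 3) + A.transpose(0, 1, 3, 2)
              + A.transpose(1, 0, 3, 2))     # enforce minor symmetries
Ainv = unmandel(np.linalg.inv(mandel(A)))    # relation (2)
assert np.allclose(np.einsum('ijmn,mnkl->ijkl', A, Ainv), I_sym)
```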
In this modern computer age, I'm not sure why anyone is enamored by these old compressed $6 \times 6$ formats anymore.
At the cost of slightly more storage, there is a perfectly reversible way to flatten any $3 \times 3 \times 3 \times 3$ tensor into a $9 \times 9$ matrix
$$ [C] =
\begin{bmatrix}
C_{1111} & C_{1121} & C_{1131} & C_{1112} & C_{1122} & C_{1132} & C_{1113} & C_{1123} & C_{1133}\\
C_{2111} & C_{2121} & C_{2131} & C_{2112} & C_{2122} & C_{2132} & C_{2113} & C_{2123} & C_{2133}\\
C_{3111} & C_{3121} & C_{3131} & C_{3112} & C_{3122} & C_{3132} & C_{3113} & C_{3123} & C_{3133}\\
C_{1211} & C_{1221} & C_{1231} & C_{1212} & C_{1222} & C_{1232} & C_{1213} & C_{1223} & C_{1233}\\
C_{2211} & C_{2221} & C_{2231} & C_{2212} & C_{2222} & C_{2232} & C_{2213} & C_{2223} & C_{2233}\\
C_{3211} & C_{3221} & C_{3231} & C_{3212} & C_{3222} & C_{3232} & C_{3213} & C_{3223} & C_{3233}\\
C_{1311} & C_{1321} & C_{1331} & C_{1312} & C_{1322} & C_{1332} & C_{1313} & C_{1323} & C_{1333}\\
C_{2311} & C_{2321} & C_{2331} & C_{2312} & C_{2322} & C_{2332} & C_{2313} & C_{2323} & C_{2333}\\
C_{3311} & C_{3321} & C_{3331} & C_{3312} & C_{3322} & C_{3332} & C_{3313} & C_{3323} & C_{3333}
\end{bmatrix}
$$
and to flatten a $3 \times 3$ matrix into a $9 \times 1$ vector:
$$ [S] = \operatorname{vec}(S) =
\begin{bmatrix}
S_{11} \\ S_{21} \\ S_{31} \\ S_{12} \\ S_{22} \\ S_{32} \\ S_{13} \\ S_{23} \\ S_{33}
\end{bmatrix}
$$
This is a very clean and easy-to-code transformation, since it doesn't require goofy scale factors like $2$ and $\sqrt{2}$ on three-quarters of the terms.
Finally, the matrix inverse $[C]^{-1}$, should it exist, can be reconstituted into a fourth-order tensor. Not in some probabilistic sense, but in every single case.
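In NumPy this flattening is just a Fortran-order `reshape`; a sketch (function names are mine, the layout matches the matrix above):

```python
import numpy as np

def flat99(C):
    # [C]_{(ij),(kl)} = C_{ijkl}, pairs ordered 11,21,31,12,22,32,13,23,33
    return C.reshape(9, 9, order='F')

def vec(S):
    return S.reshape(9, order='F')

rng = np.random.default_rng(1)
A = rng.random((3, 3, 3, 3))
B = rng.random((3, 3, 3, 3))
S = rng.random((3, 3))

# both double dot products become ordinary matrix products
assert np.allclose(flat99(np.einsum('ijmn,mnkl->ijkl', A, B)),
                   flat99(A) @ flat99(B))
assert np.allclose(vec(np.einsum('ijkl,kl->ij', A, S)), flat99(A) @ vec(S))

# and the inverse reconstitutes exactly, whenever [A] is invertible
Ainv = np.linalg.inv(flat99(A)).reshape(3, 3, 3, 3, order='F')
I4 = np.einsum('ik,jl->ijkl', np.eye(3), np.eye(3))
assert np.allclose(np.einsum('ijmn,mnkl->ijkl', A, Ainv), I4)
```

Note that `flat99` maps the fourth-order identity $\delta_{ik}\delta_{jl}$ to the ordinary $9 \times 9$ identity matrix, which is why no scale factors appear.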
While you can indeed represent a fourth-order tensor using a $9 \times 9$ matrix, I think the issue is when you want to preserve the operations. You will probably be able to invert the $9 \times 9$ matrix representation of the fourth-order tensor, but I'm not sure it will match the representation of the inverse of the tensor. I would be interested if you had any source about this :)
– Experience111
Dec 10 '18 at 7:52
The equivalence of the operations $$C:S \Longleftrightarrow [C]\cdot[S]$$ is preserved by the above mapping. Look at the ordering of the first index pair in the columns of $C$ and $S$. It is the same ordering as the last index pair in the rows of $C$. The dot product is the sum over the index pair; the double-dot product sums over the same pair of indices.
– greg
Dec 10 '18 at 11:31
Also note that the fourth-order identity tensor in your original question gets mapped to the standard $9 \times 9$ identity matrix, because there is only one identity. The other fourth-order tensors mentioned in your answer are isotropic, but they're not identity tensors.
– greg
Dec 10 '18 at 11:38
add a comment |
Your Answer
StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
});
});
}, "mathjax-editing");
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "69"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f2275151%2f4th-order-tensors-double-dot-product-and-inverse-computation%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
2 Answers
2
active
oldest
votes
2 Answers
2
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
I'll answer my own question since I was able to find the solution to my problem with the help of one commentator. The definition of the identity tensor $I = delta_{ik}delta_{jl} e_{i} otimes e_{j} otimes e_{k} otimes e_{l}$ is correct but it does not lead to a tensor with minor symmetries.
My mistake was in using this definition of the identity and applying the Mandel transformation to it. The Mandel transformation preserves the double dot product and the inverse if and only if the transformed tensors have the minor symmetries. As suggested in the review by Helnwein mentioned by @user3658307 whom I thank for his help, the definition one should use for the identity tensor in the case of minor symmetries is :
$$
I = frac{1}{2}( delta_{ik}delta_{jl} + delta_{il}delta_{jk})e_{i} otimes e_{j} otimes e_{k} otimes e_{l}
$$
Which has the minor symmetries and can thus be put into the Mandel representation that yields :
$$
[I] = begin{bmatrix} 1 & 0&0&0&0&0 \ 0&1&0&0&0&0\0&0&1&0&0&0\0&0&0&1&0&0\0&0&0&0&1&0\0&0&0&0&0&1end{bmatrix}
$$
If you are interested in a more rigorous presentation of this subject (matrix representation of tensors), I highly recommend reading the aforementioned review. In particular, one should be really careful of the covariance and contravariance of the tensors he or she is handling as in some particular cases, their matrix representation can differ even if $A^{ij}_{kl} = A_{ij}^{kl}$.
Note on the inverse :
My original goal was to find an easy way to inverse fourth order tensors with minor symmetries using usual inversion algorithms for matrices. It is not always possible in the general case since the matrix representation of a general fourth order tensor possessing only minor symmetries is not always invertible in the space of $6times 6$ matrices. However it is known that given a random $n times n$ matrix with real coordinates, the 'probability' of it being invertible is $1$ in the sense of the Lebesgue measure. In linear elasticity or physics in general, one should thus be able to compute an inverse in every case.
$endgroup$
add a comment |
edited Jun 30 '17 at 12:04
answered May 11 '17 at 9:29
Experience111
$begingroup$
In this modern computer age, I'm not sure why anyone is enamored by these old compressed $\,6\times 6\,$ formats anymore.
At the cost of slightly more storage, there is a perfectly reversible way to flatten any $\,3\times 3\times 3\times 3\,$ tensor into a $\,9\times 9\,$ matrix:
$$ [C] =
\begin{bmatrix}
C_{1111} & C_{1121} & C_{1131} & C_{1112} & C_{1122} & C_{1132} & C_{1113} & C_{1123} & C_{1133}\\
C_{2111} & C_{2121} & C_{2131} & C_{2112} & C_{2122} & C_{2132} & C_{2113} & C_{2123} & C_{2133}\\
C_{3111} & C_{3121} & C_{3131} & C_{3112} & C_{3122} & C_{3132} & C_{3113} & C_{3123} & C_{3133}\\
C_{1211} & C_{1221} & C_{1231} & C_{1212} & C_{1222} & C_{1232} & C_{1213} & C_{1223} & C_{1233}\\
C_{2211} & C_{2221} & C_{2231} & C_{2212} & C_{2222} & C_{2232} & C_{2213} & C_{2223} & C_{2233}\\
C_{3211} & C_{3221} & C_{3231} & C_{3212} & C_{3222} & C_{3232} & C_{3213} & C_{3223} & C_{3233}\\
C_{1311} & C_{1321} & C_{1331} & C_{1312} & C_{1322} & C_{1332} & C_{1313} & C_{1323} & C_{1333}\\
C_{2311} & C_{2321} & C_{2331} & C_{2312} & C_{2322} & C_{2332} & C_{2313} & C_{2323} & C_{2333}\\
C_{3311} & C_{3321} & C_{3331} & C_{3312} & C_{3322} & C_{3332} & C_{3313} & C_{3323} & C_{3333}
\end{bmatrix}
$$
And to flatten a $\,3\times 3\,$ matrix into a $\,9\times 1\,$ vector:
$$ [S] = {\rm vec}(S) =
\begin{bmatrix}
S_{11} \\ S_{21} \\ S_{31} \\ S_{12} \\ S_{22} \\ S_{32} \\ S_{13} \\ S_{23} \\ S_{33}
\end{bmatrix}
$$
This is a very clean and easy-to-code transformation, since it doesn't require goofy scale factors like $(2, \sqrt{2})$ on three-quarters of the terms.
Finally, the matrix inverse $[C]^{-1}$, should it exist, can be reconstituted into a 4th-order tensor. Not in some probabilistic sense, but in every single case.
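The flattening described above can be sketched in a few lines of NumPy; the column-major (Fortran-order) pairing of indices is an assumption read off from the matrix layout shown, and no symmetry of the tensor is required:

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.standard_normal((3, 3, 3, 3))   # arbitrary 4th-order tensor
S = rng.standard_normal((3, 3))         # arbitrary 2nd-order tensor

# [C]_{(i+3j),(k+3l)} = C_ijkl and vec(S)_{(k+3l)} = S_kl (0-based indices):
# merging (j,i) into the row and (l,k) into the column reproduces the
# column-major pair ordering 11, 21, 31, 12, 22, 32, 13, 23, 33.
C9 = C.transpose(1, 0, 3, 2).reshape(9, 9)
s9 = S.reshape(9, order='F')            # column-major flatten of S

# The matrix-vector product reproduces the double dot product C : S.
assert np.allclose((C9 @ s9).reshape(3, 3, order='F'),
                   np.einsum('ijkl,kl->ij', C, S))

# The 9x9 matrix inverse reconstitutes the tensor inverse: C : Cinv = I,
# where I_ijkl = d_ik d_jl is the full (unsymmetrized) identity.
Cinv = np.linalg.inv(C9).reshape(3, 3, 3, 3).transpose(1, 0, 3, 2)
I4 = np.einsum('ik,jl->ijkl', np.eye(3), np.eye(3))
assert np.allclose(np.einsum('ijmn,mnkl->ijkl', C, Cinv), I4)
```

Because the map is a plain relabelling with no scale factors, the double dot product corresponds exactly to the matrix product and the tensor inverse to the matrix inverse, as the two assertions check.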
$endgroup$
$begingroup$
While you can indeed represent a fourth-order tensor using a 9x9 matrix, I think the issue is whether the operations are preserved. You will probably be able to invert the 9x9 matrix representation of the fourth-order tensor, but I'm not sure it will match the representation of the inverse of the tensor. I would be interested if you had any source about this :)
$endgroup$
– Experience111
Dec 10 '18 at 7:52
$begingroup$
The equivalence of the operations $$C:S \Longleftrightarrow [C]\cdot[S]$$ is preserved by the above mapping. Look at the ordering of the first index pair in the columns of $C$ and $S$. It is the same ordering as the last index pair in the rows of $C$. The dot product is the sum over the index pair; the double-dot product sums over the same pair of indices.
$endgroup$
– greg
Dec 10 '18 at 11:31
$begingroup$
Also note that the fourth-order identity tensor in your original question gets mapped to the standard $9\times 9$ identity matrix -- because there is only one identity. The other fourth-order tensors mentioned in your answer are isotropic, but they're not identity tensors.
$endgroup$
– greg
Dec 10 '18 at 11:38
edited Dec 5 '18 at 6:33
answered Dec 5 '18 at 6:03
greg
$begingroup$
Are you sure about the presence of the $2$ coefficients? For instance, in Voigt notation note how the stress tensor is not given such coefficients, whereas the strain tensor is. Look especially at the stiffness tensor here.
$endgroup$
– user3658307
May 10 '17 at 19:01
$begingroup$
Well, in every website, book, or paper I have found, the Mandel notation is exactly as I wrote it. The Voigt notation you're suggesting doesn't have any factors, but according to every source, it doesn't preserve the double dot product or the inverse when using the usual matrix-matrix product.
$endgroup$
– Experience111
May 10 '17 at 20:22
$begingroup$
There's definitely an issue with your identity tensor since $[I][I] \ne [I]$. These representations may depend on covariance/contravariance ... check out this review by Helnwein.
$endgroup$
– user3658307
May 10 '17 at 22:53
$begingroup$
@user3658307 What I know for sure is that $I = \delta_{ik}\delta_{jl} e_{i} \otimes e_{j} \otimes e_{k} \otimes e_{l}$ is the proper identity tensor for the double dot product, as I checked it with several random 4th-order tensors. However, you're right that the representation as I gave it is not the right representation, so there must be an issue. I already came across the review you suggested but didn't read it extensively; I'll check it out again.
$endgroup$
– Experience111
May 11 '17 at 6:46
$begingroup$
What I'm sure of so far is that the Mandel representation is correct, i.e. $[A:B] = [A]\cdot[B]$, because I checked it by hand. The problem is definitely in my identity tensor. I'm not so sure anymore about the relation $I = \delta_{ik}\delta_{jl} e_{i} \otimes e_{j} \otimes e_{k} \otimes e_{l}$. But when I use $\frac{1}{2}(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}) e_{i} \otimes e_{j} \otimes e_{k} \otimes e_{l}$, my computation for the double dot product fails: $A:I \neq A$.
$endgroup$
– Experience111
May 11 '17 at 7:38