How many ways are there to prove Cayley-Hamilton Theorem?











I see many proofs of the Cayley-Hamilton theorem in textbooks and on the net, so I would like to know how many proofs there are of this important and widely applicable theorem.










linear-algebra abstract-algebra reference-request big-list cayley-hamilton






edited Apr 13 '17 at 12:58 by Community
asked Apr 23 '16 at 14:38 by user217174
  • Avoid demanding that answers have a certain form. If you don't allow for references or links, you'll miss out on other proofs. Also, there is no reason for demanding that an answer contain only one proof. What's the goal of this? – Pedro Tamaroff, Apr 23 '16 at 14:57

  • That seems like an arbitrary thing to ask for. – Pedro Tamaroff, Apr 23 '16 at 15:05

  • At any rate, do not expect people to comply with this demand. – Pedro Tamaroff, Apr 23 '16 at 15:08

  • @PedroTamaroff you can delete your comments. – user217174, May 4 '16 at 22:16

  • Also, I would avoid asking moderators to delete their comments :p – fonini, May 4 '16 at 23:10














3 Answers

















My favorite: let $k$ be your ground field, and let $A = k[X_{ij}]_{1\leqslant i,j\leqslant n}$ be the ring of polynomials in $n^2$ indeterminates over $k$, and $K = \operatorname{Frac}(A)$.



Then put $M = (X_{ij})_{ij}\in M_n(A)$, the "generic matrix".



For any $N=(a_{ij})_{ij}\in M_n(k)$, there is a unique $k$-algebra morphism $\varphi_N:A\to k$ defined by $\varphi_N(X_{ij}) = a_{ij}$; applied entrywise to matrices, it satisfies $\varphi_N(M)=N$.



Then the characteristic polynomial of $M$ is separable (i.e. $M$ has $n$ distinct eigenvalues in an algebraic closure $\widetilde{K}$ of $K$). Indeed, otherwise its resultant $\operatorname{Res}(\chi_M)$ would be zero, so for any $N\in M_n(k)$ we would have $\operatorname{Res}(\chi_N) = \operatorname{Res}(\chi_{\varphi_N(M)}) = \varphi_N(\operatorname{Res}(\chi_M)) = 0$, and no matrix $N\in M_n(k)$ would have distinct eigenvalues (but obviously some do: just take a diagonal matrix with distinct diagonal entries).



It's easy to show that matrices with separable characteristic polynomial satisfy Cayley-Hamilton (because they are diagonalizable in an algebraic closure), so $M$ satisfies Cayley-Hamilton.



Now for any $N\in M_n(k)$, $\chi_N(N) = \varphi_N(\chi_M(M)) = \varphi_N(0) = 0$.
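
The whole argument comes down to the single identity $\chi_M(M)=0$ for the generic matrix, and one can watch that identity happen symbolically. Here is a minimal sketch of that check (my own addition, assuming SymPy is available; the answer itself uses no software, and $n=3$ is chosen only to keep the computation fast): build the generic matrix, compute its characteristic polynomial, and evaluate that polynomial at the matrix with Horner's scheme.

    # Symbolic check that the generic matrix satisfies its own characteristic
    # polynomial (SymPy assumed; illustration only, not part of the proof).
    import sympy as sp

    n = 3
    X = sp.symbols(f"x0:{n*n}")            # n^2 independent indeterminates X_ij
    M = sp.Matrix(n, n, X)                 # the "generic matrix"

    lam = sp.symbols("lam")
    chi = M.charpoly(lam)                  # characteristic polynomial chi_M
    coeffs = chi.all_coeffs()              # [1, c_{n-1}, ..., c_0], highest degree first

    # Evaluate chi_M at M via Horner's scheme for matrix polynomials.
    acc = sp.zeros(n, n)
    for c in coeffs:
        acc = acc * M + c * sp.eye(n)

    print(acc.applyfunc(sp.expand))        # prints the 3 x 3 zero matrix

Specializing the indeterminates $X_{ij}$ to the entries of any concrete matrix (the morphism $\varphi_N$ above) then gives $\chi_N(N)=0$.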






answered Apr 23 '16 at 14:53 by Captain Lama; edited Apr 23 '16 at 22:46 by user217174
  • What's a $k$-algebra morphism? Can you elaborate on that step? – littleO, Apr 23 '16 at 23:28

  • @littleO A morphism that preserves the operations: it is linear (where $A$ and $k$ are regarded as vector spaces over $k$) and preserves multiplication and the unit; in other words, it is a unital ring homomorphism between $A$ and $k$ viewed as rings. – yago, Aug 5 '16 at 13:42



















Here is a neat proof from Qiaochu Yuan's answer to this question:




If $L$ is diagonalizable with eigenvalues $\lambda_1, \dots, \lambda_n$, then it's clear that $(L - \lambda_1) \dots (L - \lambda_n) = 0$, which is the Cayley-Hamilton theorem for $L$. But the Cayley-Hamilton theorem is a "continuous" fact: for an $n \times n$ matrix it asserts that $n^2$ polynomial functions of the $n^2$ entries of $L$ vanish. And the diagonalizable matrices are dense (over $\mathbb{C}$). Hence we get Cayley-Hamilton in general.
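
As a sanity check of this continuity/density argument (my addition, not part of the quoted answer; NumPy assumed), the sketch below evaluates $\chi_A(A)$ along a family of matrices with distinct eigenvalues converging to a non-diagonalizable Jordan block. Each entry of $\chi_A(A)$ is a polynomial in the entries of $A$, so its vanishing on the dense set of diagonalizable matrices persists in the limit.

    # Numerical illustration of the density argument (NumPy assumed; the Jordan
    # block and the diagonal perturbation are arbitrary choices of mine).
    import numpy as np

    def char_poly_at(A):
        """Evaluate the characteristic polynomial of A at A (Horner's scheme)."""
        coeffs = np.poly(A)                        # [1, c_{n-1}, ..., c_0]
        acc = np.zeros_like(A, dtype=complex)
        I = np.eye(A.shape[0])
        for c in coeffs:
            acc = acc @ A + c * I
        return acc

    J = np.array([[2.0, 1.0], [0.0, 2.0]])         # Jordan block: not diagonalizable
    for eps in [1e-1, 1e-3, 1e-6, 0.0]:
        A = J + eps * np.diag([1.0, 2.0])          # eps > 0 gives distinct eigenvalues
        print(eps, np.linalg.norm(char_poly_at(A)))  # stays ~0 all the way to eps = 0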







edited Apr 13 '17 at 12:21; community wiki (3 revs, user217174)
  • This proof also works over any integral domain, by passing to the algebraic closure of the field of fractions and using that the diagonalizable matrices are dense in the Zariski topology (which is regular; that is enough to emulate Hausdorffness, so that morphisms are uniquely determined by their values on a dense subset). – Tobias Kildetoft, Apr 23 '16 at 18:54


















One can prove this theorem using the fact that every linear map on a finite-dimensional complex vector space is triangularizable, i.e. its matrix is upper triangular with respect to some basis $\{v_1,\dots,v_n\}$.



So if $T$ is such a linear map, there are scalars $\lambda_1,\dots,\lambda_n$ such that

$$T(v_1)=\lambda_1 v_1$$
$$T(v_2)=a_{21} v_1+\lambda_2 v_2$$
$$\vdots$$
$$T(v_n)=a_{n1}v_1+a_{n2}v_2+\dots+\lambda_n v_n$$



By computation we find that the operator $S=(T-\lambda_1)(T-\lambda_2)\cdots(T-\lambda_n)$ annihilates every $v_i$, and so $S\equiv 0$; since the $\lambda_i$ are the eigenvalues of $T$ counted with multiplicity, $S$ is exactly the characteristic polynomial of $T$ evaluated at $T$.

For more details, see Herstein's Topics in Algebra.
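
A quick numerical version of this argument (my addition, assuming NumPy; it is not part of Herstein's exposition): over $\mathbb{C}$ the $\lambda_i$ can be read off as the eigenvalues of $T$ counted with multiplicity, and multiplying the factors $(T-\lambda_i I)$ together annihilates everything, up to floating-point error.

    # Check that the product of (T - lambda_i I) over all eigenvalues vanishes
    # (NumPy assumed; the random 4 x 4 matrix is just an arbitrary example).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    T = rng.standard_normal((n, n))        # a random real matrix, viewed over C

    lams = np.linalg.eigvals(T)            # eigenvalues, with multiplicity
    S = np.eye(n, dtype=complex)
    for lam in lams:
        S = S @ (T - lam * np.eye(n))      # S = (T - lam_1 I) ... (T - lam_n I)

    print(np.linalg.norm(S))               # ~ 1e-13: S is numerically zero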






answered May 3 '16 at 12:23 by Sh.R; edited May 3 '16 at 15:35