Minimal polynomial algorithm
In our textbooks we are given the following algorithm:
Let $V$ be a vector space of dimension $n$ over a field $K$ ($\mathbb R$ or $\mathbb C$) and let $A : V \to V$ be a linear map. In the sequence $I, A, A^2, A^3, \ldots, A^n$ we pick a non-zero matrix entry of the first member ($I$) and use it to "eliminate" the corresponding entry in each of the other members of the sequence. This yields $A_{11}, A_{12}, \ldots, A_{1n}$, where
$$A_{1k} = A^k - \beta_{1k} I, \qquad k = 1, \ldots, n.$$
If $A_{11} \neq 0$, we apply the same step to the sequence $A_{11}, A_{12}, \ldots, A_{1n}$: we again pick a non-zero entry of $A_{11}$ and use it to eliminate the corresponding entry in the other members. This yields $A_{22}, A_{23}, \ldots, A_{2n}$, where
$$A_{2k} = A_{1k} - \beta_{2k} A_{11}, \qquad k = 2, \ldots, n.$$
We repeat this process until we reach $A_{jj} = 0$ with $A_{j-1,j-1} \neq 0$, $j \leq n$. The result is the triangular scheme
$$\begin{array}{cccccc}
I & A & A^2 & A^3 & \cdots & A^n \\
  & A_{11} & A_{12} & A_{13} & \cdots & A_{1n} \\
  &        & A_{22} & A_{23} & \cdots & A_{2n} \\
  &        &        & \ddots &        & \vdots \\
  &        &        &        &        & A_{jj}
\end{array}$$
To get our minimal polynomial we just need to "unroll" $A_{jj}$; precisely, $0 = A_{jj} = A_{j-1,j} - \beta_{jj} A_{j-1,j-1} = \ldots$
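For what it's worth, here is a small Python sketch of the procedure above (a sketch only: the function names are mine, not from the notes, and exact rational arithmetic is used so that the "non-zero element" tests are reliable):

    from fractions import Fraction

    def mat_mul(X, Y):
        # Multiply two n x n matrices given as lists of lists.
        n = len(X)
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    def minimal_polynomial(A):
        # Coefficients c_0, ..., c_m of the monic minimal polynomial of A,
        # found by eliminating the flattened powers I, A, ..., A^n exactly
        # as in the triangular scheme above.
        n = len(A)
        A = [[Fraction(x) for x in row] for row in A]
        powers = [[[Fraction(int(i == j)) for j in range(n)] for i in range(n)]]
        for _ in range(n):
            powers.append(mat_mul(powers[-1], A))
        flat = [[x for row in P for x in row] for P in powers]
        # coeffs[s] records how the (reduced) s-th member is expressed in
        # terms of the original powers I, A, ..., A^n.
        coeffs = [[Fraction(int(i == s)) for i in range(n + 1)]
                  for s in range(n + 1)]
        for step in range(n + 1):
            v = flat[step]
            pivot = next((p for p in range(n * n) if v[p] != 0), None)
            if pivot is None:
                # The reduced member is the zero matrix; its coefficient row
                # is the monic vanishing combination c_0 I + ... + A^step = 0.
                return coeffs[step][:step + 1]
            # Clear the pivot position in all later members of the sequence.
            for later in range(step + 1, n + 1):
                f = flat[later][pivot] / v[pivot]
                flat[later] = [a - f * b for a, b in zip(flat[later], v)]
                coeffs[later] = [a - f * b
                                 for a, b in zip(coeffs[later], coeffs[step])]
        # Unreachable: by Cayley-Hamilton the degree is at most n.

For example, minimal_polynomial([[2, 0], [0, 2]]) returns [Fraction(-2, 1), Fraction(1, 1)], i.e. $p(x) = x - 2$.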
Now it says that it is obvious that the sequence $I, A_{11}, A_{22}, \ldots, A_{j-1,j-1}$ is linearly independent, but I can't really see why. Also, how do we know we can't get a monic polynomial $p(x)$ of lower degree such that $p(A) = 0$?
Thank you for your time.
linear-algebra vector-spaces minimal-polynomials
asked Dec 22 '18 at 19:49, edited Dec 22 '18 at 20:33 – James Groon
Which textbook?
– lhf
Dec 22 '18 at 20:02
@lhf It's more like a script (lecture notes) than a book, I think, and it's written in Croatian, but it is used as teaching material at my college. If you're still interested, take a look at link (21st slide/page).
– James Groon
Dec 22 '18 at 20:18
One is basically doing Gaussian elimination "by rows", where each matrix is interpreted as a "column" vector (you could concatenate all columns of a matrix to make it one long column; all we care about here is linear dependence among matrices). Each time, choose in your current matrix a position with a nonzero entry as pivot, and clear out that position in the remainder of the row. This is not a very convenient method for computing a minimal polynomial, though, and you must cater for the possibility that at some point no pivot can be found.
– Marc van Leeuwen
Dec 27 '18 at 10:13
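To see the viewpoint of this comment in action, a rough floating-point sketch (the function name and tolerance are illustrative only; for a rigorous computation exact arithmetic, as in the sketch above, is preferable):

    import numpy as np

    def min_poly_degree(A, tol=1e-9):
        # Degree of the minimal polynomial of A, detected as the first k
        # for which the flattened powers I, A, ..., A^k become linearly
        # dependent (a rank drop among the stacked "row" vectors).
        n = A.shape[0]
        P = np.eye(n)
        rows = [P.ravel()]
        for k in range(1, n + 1):
            P = P @ A
            rows.append(P.ravel())
            if np.linalg.matrix_rank(np.array(rows), tol=tol) < len(rows):
                return k
        return n  # never reached: Cayley-Hamilton bounds the degree by n

For example, min_poly_degree(np.array([[0., 1.], [0., 0.]])) returns 2, since the minimal polynomial of that nilpotent matrix is $x^2$.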
1 Answer
For convenience, we define $A_{00} = I$. The matrices $A_{00}, A_{11}, A_{22}, \ldots, A_{j-1,j-1}$ are linearly independent, because each matrix $A_{kk}$ has a non-zero element at a position $(a_k, b_k)$ where all matrices $A_{rs}$, $s \geq r > k$, have a zero element.
Now let us take a look at the sum
$$
S = c_0 A_{00} + c_1 A_{11} + \ldots + c_{j-1} A_{j-1,j-1}.
$$
For this sum to be the zero matrix, the entry of $S$ at position $(a_0, b_0)$ must be $0$. Therefore $c_0 = 0$, because the other addends do not change anything at this position of the matrix. The entry of $S$ at position $(a_1, b_1)$ must also be $0$; therefore $c_1 = 0$. By induction, we can show that $c_0 = \ldots = c_{j-1} = 0$, from which it follows that the matrices $A_{00}, A_{11}, A_{22}, \ldots, A_{j-1,j-1}$ are linearly independent.
For each $k \in \{0, \ldots, j-1\}$, we know that the matrices $A_{00}, A_{11}, A_{22}, \ldots, A_{kk}$ are linear combinations of the matrices $I, A, A^2, \ldots, A^k$. If the minimal polynomial $p$ had degree $k < j$, the equation $p(A) = 0$ would induce a non-trivial linear combination of $A_{00}, A_{11}, A_{22}, \ldots, A_{kk}$ which equals the zero matrix. This contradicts the fact that the matrices $A_{00}, A_{11}, A_{22}, \ldots, A_{kk}$ are linearly independent.
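Concretely, unrolling the construction gives
$$A^{m} = A_{mm} + \sum_{r=1}^{m} \beta_{rm}\, A_{r-1,r-1}.$$
So if a monic polynomial $p(x) = x^{k} + c_{k-1} x^{k-1} + \ldots + c_0$ with $k < j$ satisfied $p(A) = 0$, substituting these expansions and collecting terms would give
$$0 = p(A) = A_{kk} + \bigl(\text{a linear combination of } A_{00}, \ldots, A_{k-1,k-1}\bigr),$$
a vanishing combination in which $A_{kk}$ has coefficient $1$, hence a non-trivial one.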
answered Dec 24 '18 at 1:19 – Reinhard Meier
Thank you for your answer, but I still have a couple of questions. 1) What is the middle section for? Shouldn't the fact that a linearly dependent ordered set $\{a_1, a_2, \ldots, a_k\}$ implies the existence of $1 \leq p \leq k$ such that $a_p = c_1 a_1 + c_2 a_2 + \ldots + c_{p-1} a_{p-1}$ be sufficient for linear independence? (if we order our set like this: $\{A_{j-1,j-1}, \ldots, A_{00}\}$, that is). 2) I'm not sure how a non-trivial linear combination of $I, A, A^2, \ldots, A^k$ which equals $0$ would induce a non-trivial combination of $A_{00}, A_{11}, A_{22}, \ldots, A_{kk}$ which also equals $0$.
– James Groon
Dec 24 '18 at 12:03
1) To really make the linear independence obvious, I preferred to use a standard argument: the elements are linearly independent if the only linear combination with result $0$ is the trivial linear combination with all coefficients being $0$. 2) We have $A^k = A_{kk} + \sum_{j=1}^{k} \beta_{jk} A_{j-1,j-1}$. Plug this into the equation $p(A) = 0$, rearrange in order to combine the coefficients of the $A_{jj}$, and observe that the coefficient of the $A_{jj}$ with the greatest $j$ is $1$.
– Reinhard Meier
Dec 27 '18 at 11:02
I see, that explains it all. Really appreciated.
– James Groon
Dec 27 '18 at 11:50