What is the dimension of $\{X\in M_{n,n}(F);\ AX=XA=0\}$?














Let $A$ be a fixed $n\times n$ matrix over a field $F$. We can look at the subspace
$$W=\{X\in M_{n,n}(F);\ AX=XA=0\}$$
of the matrices which fulfill both $AX=0$ and $XA=0$.



Looking at these equations we see that every column of $X$ has to fulfill $A\vec c=\vec 0$. (Let us say we're working with column vectors.) Similarly, for the rows we get $\vec r^T A=\vec 0^T$. This tells us that the possible columns/rows of the matrix $X$ have to lie in a subspace of dimension $n-\operatorname{rank} A$ (the right/left null space of $A$).



At least in some cases it is almost immediately possible to find $W$, or at least $\dim W$.




  • Obviously, if $A$ is invertible, then $W=\{0\}$ and $\dim W=0$.

  • Another trivial case is $A=0$, which gives us $W=M_{n,n}$ and $\dim W=n^2$.

  • A slightly less trivial but still simple case is $\operatorname{rank} A=n-1$. In this case the conditions on rows/columns give us one-dimensional spaces, so there are non-zero vectors $\vec r$, $\vec c$ such that each row has to be a multiple of $\vec r^T$ and each column has to be a multiple of $\vec c$. Up to a scalar multiple, there is only one way to get such a matrix, so $W$ is generated by the matrix $\vec c\vec r^T$ and $\dim W=1$.
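The rank $n-1$ case is easy to check numerically. Here is a small sketch (assuming NumPy; a random matrix of rank $n-1$ is built as a product of thin factors, and the null vectors are read off from the SVD):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# A random n x n matrix of rank n-1, built as a product of thin factors
A = rng.standard_normal((n, n - 1)) @ rng.standard_normal((n - 1, n))

# Right/left null vectors of A from the SVD (last singular value is ~0)
U, s, Vt = np.linalg.svd(A)
c = Vt[-1]       # A @ c is (numerically) zero
r = U[:, -1]     # r @ A is (numerically) zero

X = np.outer(c, r)  # the matrix c r^T described above
assert np.allclose(A @ X, 0, atol=1e-8) and np.allclose(X @ A, 0, atol=1e-8)
```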


The general case seems to be a bit more complicated. If we denote $k=n-\operatorname{rank} A$, we can use the same argument to see that there are $k$ linearly independent vectors $\vec c_1,\dots,\vec c_k$ such that the columns have to be linear combinations of these vectors. Similarly, each row can be chosen only from the span of linearly independent vectors $\vec r_1,\dots,\vec r_k$. (This is again just a direct consequence of $A\vec c=\vec 0$ and $\vec r^T A=\vec 0^T$.)



Using these vectors we can get $k^2$ matrices $$A_{ij}=\vec c_i \vec r_j^T$$
for $i,j\in\{1,2,\dots,k\}$. Unless I missed something, showing that these matrices are linearly independent is not too difficult. So we should get that $$\dim W \ge k^2 = (n-\operatorname{rank} A)^2.$$
It is not obvious to me whether these matrices actually generate $W$. (And perhaps something can be said about the dimension of $W$ without exhibiting a basis.)
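The linear independence of the $k^2$ matrices $\vec c_i\vec r_j^T$ can also be verified numerically; a sketch (assuming NumPy, with null-space bases taken from the SVD):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, 3)) @ rng.standard_normal((3, n))  # rank 3
k = n - np.linalg.matrix_rank(A)                               # here k = 2

# Orthonormal bases of the right and left null spaces of A
U, s, Vt = np.linalg.svd(A)
C = Vt[n - k:].T      # columns c_1..c_k with A @ c_i ~ 0
R = U[:, n - k:]      # columns r_1..r_k with r_j^T @ A ~ 0

# Flatten each matrix c_i r_j^T into a row vector and check independence
M = np.array([np.outer(C[:, i], R[:, j]).ravel()
              for i in range(k) for j in range(k)])
assert np.linalg.matrix_rank(M) == k * k   # k^2 independent matrices in W
```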



You may notice that in the three examples above (with $k=0,1,n$) we got the equality $\dim W=(n-\operatorname{rank} A)^2$.



Another possible way to look at this problem is to use the linear map
$$f\colon M_{n,n} \to M_{n,n}\oplus M_{n,n},\qquad X\mapsto(AX,XA).$$
Then $W=\operatorname{Ker} f$, so we are basically asking for the dimension of the kernel of this map.
So to find $\dim W$ it would suffice to find $\dim\operatorname{Im} f$. However, this does not seem to be easier than the original formulation of the problem.



It is also possible to see this as a system of $2n^2$ linear equations ($n^2$ from $AX=0$ and $n^2$ from $XA=0$) in the $n^2$ unknowns $x_{11}, x_{12}, \dots, x_{nn}$. If we try to use this line of thinking, the difficult part seems to be determining how many of those equations are linearly dependent.
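This linear system can be written down explicitly with the vectorization identities $\operatorname{vec}(AX)=(I\otimes A)\operatorname{vec}(X)$ and $\operatorname{vec}(XA)=(A^T\otimes I)\operatorname{vec}(X)$: stacking the two Kronecker products gives the coefficient matrix, and its nullity is $\dim W$. A numerical sketch (NumPy assumed):

```python
import numpy as np

def dim_W(A):
    """Nullity of the stacked system [I (x) A; A^T (x) I] vec(X) = 0."""
    n = A.shape[0]
    M = np.vstack([np.kron(np.eye(n), A), np.kron(A.T, np.eye(n))])
    return n * n - np.linalg.matrix_rank(M)

rng = np.random.default_rng(2)
n, r = 5, 2
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank r
assert dim_W(A) == (n - r) ** 2  # matches the conjectured (n - rank A)^2
```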



Question: What can be said about the dimension of the subspace $W$? Is it equal to $(n-\operatorname{rank} A)^2$? Is it determined just by the rank of $A$? If not, what are the best possible bounds we can get if we know only the rank of $A$ and have no further information about $A$?





Motivation for this question was working on an exercise which asked for the dimensions of the spaces $W_1$, $W_2$, $W_1\cap W_2$ and $W_1+W_2$, where $W_1$ and $W_2$ were determined by the conditions $AX=0$ and $XA=0$, respectively. Since the matrix $A$ was given, in this exercise it was possible to find a basis of $W_1\cap W_2$ explicitly. (The exercise was probably intended just to make students accustomed to basic computations such as finding a basis, using Grassmann's formula, etc.) Still, I was wondering how much we can say just from knowing the rank of $A$, without going through all the computations.










      linear-algebra matrices vector-spaces linear-transformations matrix-equations






      edited Dec 23 '18 at 9:36









      Batominovski











      asked Oct 26 '18 at 15:43









      Martin Sleziak























          4 Answers



















          There are invertible matrices $P$ and $Q$ such that $A=PJQ$ where
          $J=\begin{pmatrix}I_r&0\\0&0\end{pmatrix}$ with $I_r$ an identity matrix of size $r=\operatorname{rank}(A)$.
          Then $AX=0$ iff $PJQX=0$ iff $J(QXP)=0$. Likewise $XA=0$ iff $(QXP)J=0$.
          Let $Y=QXP$. Then $YJ=JY=0$ iff $Y=\begin{pmatrix}0&0\\0&*\end{pmatrix}$. So the dimension
          of admissible $Y$ (and so of admissible $X$) is $(n-r)^2$.
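The change of variables in this answer can be sanity-checked numerically; a sketch (assuming NumPy; random Gaussian $P$ and $Q$ are invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 4, 2
P = rng.standard_normal((n, n))   # invertible with probability 1
Q = rng.standard_normal((n, n))
J = np.diag([1.0] * r + [0.0] * (n - r))
A = P @ J @ Q                     # a rank-r matrix in the form A = PJQ

# Any Y supported on the bottom-right (n-r) x (n-r) block has YJ = JY = 0,
# so X = Q^{-1} Y P^{-1} satisfies AX = XA = 0
Y = np.zeros((n, n))
Y[r:, r:] = rng.standard_normal((n - r, n - r))
X = np.linalg.inv(Q) @ Y @ np.linalg.inv(P)
assert np.allclose(A @ X, 0, atol=1e-8) and np.allclose(X @ A, 0, atol=1e-8)
```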
















            Yes, the dimension is always $(n - \operatorname{rank}(A))^2$. Here's one justification.





            For the convenience of eigenvalue stuff, I assume that $F$ is algebraically closed, or at least that we can appeal to the existence of its algebraic closure.



            Let $V_0$ denote the subspace $V_0 = \{X: AX = XA\}$. That is, $V_0$ is the solution space to the Sylvester equation $AX - XA = 0$. By using some vectorization tricks, we can see that $V_0$ is spanned by the matrices of the form $xy^T$ such that $Ax = \lambda x$ and $A^Ty = \lambda y$ for some $\lambda \in \bar F$. We can see that $\dim(V_0) = \sum d_k^2$ where $d_k$ is the geometric multiplicity of the $k$th eigenvalue.



            Some care is required in showing that this set spans $V_0$ for a non-diagonalizable $A$. One way to see what happens is to compute the kernel of $I \otimes A - A^T \otimes I$, taking $A$ to be in Jordan canonical form.



            The space $W$ that you're looking for is the intersection of $V_0$ with the kernel of $X \mapsto AX$. This is spanned by the matrices $xy^T$ such that $x \in \ker(A)$ and $y \in \ker(A^T)$. Your conclusion follows.
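The non-diagonalizable caveat discussed in the comments below can be made concrete: for a single nilpotent Jordan block, the kernel of $I\otimes A - A^T\otimes I$ is larger than the sum of squared geometric multiplicities would suggest. A small numerical check (NumPy assumed):

```python
import numpy as np

# 2x2 nilpotent Jordan block: eigenvalue 0 with geometric multiplicity 1
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
n = A.shape[0]

K = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))
dim_V0 = n * n - np.linalg.matrix_rank(K)   # dim of {X : AX = XA}

assert dim_V0 == 2   # commutant of a single Jordan block has dimension n
# the sum of squared geometric multiplicities (1^2 = 1) underestimates it
```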






























            • $begingroup$
              Hi Omno, I just read your post. I think that your proof is valid only for $A$ diagonalizable. Otherwise, the $xy^T$ do not span $V_0$ because $\dim(V_0)>\sum_k d_k^2$ (you must use squares of differences of dimensions of iterated kernels).
              $endgroup$
              – loup blanc
              Dec 23 '18 at 12:54










            • $begingroup$
              @loupblanc hence the “some care is required” paragraph
              $endgroup$
              – Omnomnomnom
              Dec 23 '18 at 16:22










            • $begingroup$
              Yes, of course.
              $endgroup$
              – loup blanc
              Dec 23 '18 at 17:43




















            Here is a generalized version where you may be dealing with infinite-dimensional vector spaces. For a given linear map $T:V\to V$ on a vector space $V$, I give a description of all linear maps $S:V\to V$ such that $ST=TS=0$.



            Let $V$ be a vector space over a field $F$ and let $T:V\to V$ be a linear transformation. Define $L_T:\operatorname{End}_F(V)\to \operatorname{End}_F(V)\oplus \operatorname{End}_F(V)$ via
            $$L_T(S)=(ST,TS).$$
            We claim that there exists an isomorphism $\varphi\colon \ker L_T\to \operatorname{Hom}_F(\operatorname{coim} T,\ker T)$ of vector spaces, where $\operatorname{coim} T$ is the coimage of $T$: $$\operatorname{coim} T=V/\operatorname{im} T.$$




            Observe that $\operatorname{im} S\subseteq \ker T$ and $\operatorname{im} T\subseteq \ker S$ for all $S\in\ker L_T$. Let $\pi:V\to \operatorname{coim} T$ be the canonical projection $v\mapsto v+\operatorname{im} T$. For $S\in \ker L_T$, we see that $S:V\to\ker T$ factors through $\pi$, i.e., $S=\tilde{S}\circ \pi$ for a unique linear map $\tilde{S}:\operatorname{coim} T\to\ker T$.
            We define $\varphi:\ker L_T\to \operatorname{Hom}_F(\operatorname{coim} T,\ker T)$ in the obvious manner: $S\mapsto \tilde{S}$. This map is clearly an isomorphism, with inverse map $$\varphi^{-1}(X)=X\circ\pi$$ for all $X\in \operatorname{Hom}_F(\operatorname{coim} T,\ker T)$. The claim is now justified.




            The nullity $\operatorname{null} T$ of $T$ is the dimension of the kernel of $T$. The corank $\operatorname{cork} T$ of $T$ is the dimension of $\operatorname{coim} T$. In the case $\operatorname{null} T<\infty$ or $\operatorname{cork} T<\infty$,
            $$\operatorname{Hom}_F(\operatorname{coim} T,\ker T)\cong (\ker T)\otimes_F (\operatorname{coim} T)^*,$$
            where the isomorphism is natural, so
            $$\operatorname{null} L_T=\dim_F \ker L_T=(\operatorname{null} T)\big(\dim_F(\operatorname{coim} T)^*\big)$$
            in this case. In particular, if $\operatorname{cork} T<\infty$, we have $(\operatorname{coim} T)^*\cong \operatorname{coim} T$, so that
            $$\operatorname{null} L_T=(\operatorname{null} T)\big(\dim_F(\operatorname{coim} T)^*\big)=(\operatorname{null} T)(\dim_F\operatorname{coim} T)=(\operatorname{null} T)(\operatorname{cork} T).$$
            In particular, when $V$ is finite-dimensional, we have $\operatorname{cork} T<\infty$, and by the rank-nullity theorem we get $\operatorname{cork} T=\operatorname{null} T=\dim_F V-\operatorname{rank} T$, and so
            $$\operatorname{null} L_T=\dim_F \ker L_T=(\dim_F V-\operatorname{rank} T)^2$$
            as the OP conjectures. (But if $V$ is infinite-dimensional, for any pair $(m,k)$ of non-negative integers, there exists $T\in\operatorname{End}_F(V)$ with nullity $m$ and corank $k$.)




            Here is an example of $T:V\to V$ with nullity $m$ and corank $k$ when $V$ is infinite-dimensional. Pick a basis $B$ of $V$. Since $B$ is infinite, it has a countable subset $\{b_1,b_2,b_3,\ldots\}$. Let $Y$ be the span of $\{b_1,b_2,b_3,\ldots\}$ and $Z$ the span of $B\setminus\{b_1,b_2,b_3,\ldots\}$. Then $V=Y\oplus Z$. Define $T:V\to V$ as follows: $$T\left(\sum_{i=1}^\infty s_i b_i+z\right)=\sum_{i=1}^\infty s_{m+i} b_{k+i}+z$$ for all $s_1,s_2,s_3,\ldots\in F$ with only finitely many non-zero terms and for all $z\in Z$. We have $\ker T=\operatorname{span}\{b_1,b_2,\ldots,b_m\}$ and $V=(\operatorname{im} T)\oplus \operatorname{span}\{b_1,b_2,\ldots,b_k\}$, so $T$ has nullity $m$ and corank $k$.




            The situation is not so straightforward when $T$ has infinite corank. If $\operatorname{null} T<\infty$, then we already know that
            $$\operatorname{null} L_T= (\operatorname{null} T)\big(\dim_F(\operatorname{coim} T)^*\big).$$
            From this mathoverflow thread, $\dim_F(\operatorname{coim} T)^*=|F|^{\operatorname{cork} T}$. So, we have two cases when $\operatorname{null} T$ is finite but $\operatorname{cork} T$ is infinite:
            $$\operatorname{null} L_T= \begin{cases}0&\text{if }\operatorname{null} T=0,\\
            |F|^{\operatorname{cork} T}&\text{if } 0<\operatorname{null} T<\infty.\end{cases}$$

            If both $\operatorname{null} T$ and $\operatorname{cork} T$ are infinite, we can use the result from the same mathoverflow thread to prove that
            $$\operatorname{null} L_T=\dim_F\operatorname{Hom}_F(\operatorname{coim} T,\ker T)=\max\left\{|F|^{\operatorname{cork} T},(\operatorname{null} T)^{\operatorname{cork} T}\right\}.$$





            Even more generally, let $U$ and $V$ be vector spaces over $F$. For $R\in\operatorname{End}_F(U)$ and $T\in\operatorname{End}_F(V)$, define $L_{R}^T:\operatorname{Hom}_F(U,V)\to\operatorname{Hom}_F(U,V)\oplus \operatorname{Hom}_F(U,V)$ by $$L_R^T(S)=(SR,TS).$$ (That is, when $U=V$, we have $L_T=L_T^T$.) Then, there exists an isomorphism of vector spaces
            $$\varphi:\ker L_R^T\to \operatorname{Hom}_F(\operatorname{coim} R,\ker T).$$
            In particular, if $U$ and $V$ are both finite-dimensional, then
            $$\operatorname{null} L_R^T=\dim_F\ker L_R^T=(\operatorname{cork} R)(\operatorname{null} T)=(\dim_F U-\operatorname{rank} R)(\dim_F V-\operatorname{rank} T).$$
            In general,
            $$\operatorname{null} L_R^T=\begin{cases}(\operatorname{cork} R)(\operatorname{null} T)&\text{if }\operatorname{cork} R<\infty,\\
            0&\text{if }\operatorname{null} T=0,\\
            |F|^{\operatorname{cork} R}&\text{if } 0<\operatorname{null} T<\infty\ \wedge\ \operatorname{cork} R=\infty,\\
            \max\left\{|F|^{\operatorname{cork} R},(\operatorname{null} T)^{\operatorname{cork} R}\right\}&\text{if }\operatorname{null} T=\infty\ \wedge\ \operatorname{cork} R=\infty.
            \end{cases}$$





            This is my old proof that $\operatorname{null} L_T=(\operatorname{null} T)(\operatorname{cork} T)$ when $T$ has finite nullity and finite corank.
            Suppose that $T$ has finite nullity $m$ and finite corank $k$; I claim that $L_T$ also has finite nullity $mk$.



            For $S\in\ker L_T$, we see that $\operatorname{im} S\subseteq \ker T$ and $\operatorname{im} T\subseteq \ker S$. Because $T$ has finite nullity $m$, it follows that $S$ has finite rank $r\leq m$. Therefore,
            $$S=v_1\otimes \phi_1+v_2\otimes \phi_2+\ldots+v_r\otimes \phi_r$$
            for some linearly independent $v_1,v_2,\ldots,v_r\in \ker T$ and some linearly independent $\phi_1,\phi_2,\ldots,\phi_r\in V^*=\operatorname{Hom}_F(V,F)$. Since $v_1,v_2,\ldots,v_r$ are linearly independent, $$\ker S=\bigcap_{i=1}^r\ker \phi_i.$$
            Therefore, $\operatorname{im} T$ must be contained in $\ker \phi_i$ for all $i=1,2,\ldots,r$.



            Since $T$ has finite corank $k$, $W=V/\operatorname{im} T$ is a finite-dimensional vector space of dimension $k$. Note that each $\phi_i$ factors through $\pi$. That is, $\phi_i=\psi_i\circ \pi$, where $\pi:V\to V/\operatorname{im} T=W$ is the canonical projection and $\psi_i\in W^*=\operatorname{Hom}_F(W,F)$. We can now conclude that each $S\in \ker L_T$ is of the form
            $$\sum_{i=1}^r v_i\otimes (\psi_i\circ \pi),$$
            where $v_1,v_2,\ldots,v_r\in \ker T$ are linearly independent and $\psi_1,\psi_2,\ldots,\psi_r\in W^*=\left(V/\operatorname{im} T\right)^*$ are linearly independent.



            Define the linear map $f:(\ker T)\otimes_F W^*\to\ker L_T$ in the obvious manner:
            $$v\otimes \psi\mapsto v\otimes (\psi\circ\pi).$$
            By the observation in the previous paragraph, $f$ is surjective. By choosing a basis of $\ker T$, say $\{x_1,x_2,\ldots,x_m\}$, we see that an element in $\ker f$ must take the form
            $$\sum_{i=1}^m x_i\otimes \alpha_i$$
            for some $\alpha_i\in W^*$. Since $x_1,\ldots,x_m$ are linearly independent, we must have $\alpha_i\circ \pi=0$ for all $i$. But this means $\alpha_i=0$, as $\pi$ is surjective. Thus $\ker f=\{0\}$, and so $f$ is injective. Hence,
            $$\ker L_T\cong (\ker T)\otimes_F W^*=(\ker T)\otimes_F (V/\operatorname{im} T)^*.$$
            This establishes the assertion that $L_T$ has nullity $mk$.


















              One can consider $U=\{(A,B)\in M_n\times M_n;\ AB=BA=0\}$ and $V=\{(A,B)\in M_n\times M_n;\ AB=0\}$.



              $U,V$ are closed algebraic sets stratified by $\operatorname{rank}(A)$.



              Let $W_r$ be the algebraic set of matrices of rank $r$; from $\dim(W_r)=r(2n-r)$, we deduce that the dimension of a stratum of $U$ is $(n-r)^2+r(2n-r)=n^2$. In particular, the strata all have the same dimension and $\dim(U)=n^2$.



              You might think $V$ has about the same dimension as $U$, for example $\dim(V)=\dim(U)+O(n)$. This is not the case; recall that, when $AB=0$, we may have $\operatorname{rank}(BA)=n/2$.



              Using Lord Shark the Unknown's post, we obtain that the dimension of a stratum of $V$ is $d_r=[r(n-r)+(n-r)^2]+r(2n-r)=n^2+nr-r^2$, which depends on $r$.



              Since $\max_r(d_r)$ is attained at $r=n/2$, we deduce that $\dim(V)=\lfloor 5n^2/4\rfloor$.
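The arithmetic here, $\max_{0\le r\le n}(n^2+nr-r^2)=\lfloor 5n^2/4\rfloor$ over integers $r$, is easy to confirm with a quick brute-force check:

```python
# d_r = n^2 + n*r - r^2 is maximized near r = n/2; over integer r the
# maximum equals floor(5 n^2 / 4) for every n
for n in range(1, 50):
    best = max(n * n + n * r - r * r for r in range(n + 1))
    assert best == (5 * n * n) // 4
```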



              Now we can seek the singular locus of $U$ or $V$.

















              $endgroup$













                Your Answer





                StackExchange.ifUsing("editor", function () {
                return StackExchange.using("mathjaxEditing", function () {
                StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
                StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
                });
                });
                }, "mathjax-editing");

                StackExchange.ready(function() {
                var channelOptions = {
                tags: "".split(" "),
                id: "69"
                };
                initTagRenderer("".split(" "), "".split(" "), channelOptions);

                StackExchange.using("externalEditor", function() {
                // Have to fire editor after snippets, if snippets enabled
                if (StackExchange.settings.snippets.snippetsEnabled) {
                StackExchange.using("snippets", function() {
                createEditor();
                });
                }
                else {
                createEditor();
                }
                });

                function createEditor() {
                StackExchange.prepareEditor({
                heartbeatType: 'answer',
                autoActivateHeartbeat: false,
                convertImagesToLinks: true,
                noModals: true,
                showLowRepImageUploadWarning: true,
                reputationToPostImages: 10,
                bindNavPrevention: true,
                postfix: "",
                imageUploader: {
                brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
                contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
                allowUrls: true
                },
                noCode: true, onDemand: true,
                discardSelector: ".discard-answer"
                ,immediatelyShowMarkdownHelp:true
                });


                }
                });














                draft saved

                draft discarded


















                StackExchange.ready(
                function () {
                StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f2972321%2fwhat-is-the-dimension-of-x-in-m-n-nf-ax-xa-0%23new-answer', 'question_page');
                }
                );

                Post as a guest















                Required, but never shown

























                4 Answers
There are invertible matrices $P$ and $Q$ such that $A=PJQ$, where
$J=\begin{pmatrix}I_r&0\\0&0\end{pmatrix}$ with $I_r$ an identity matrix of size $r=\operatorname{rank}(A)$.
Then $AX=0$ iff $PJQX=0$ iff $J(QXP)=0$. Likewise, $XA=0$ iff $(QXP)J=0$.
Let $Y=QXP$. Then $YJ=JY=0$ iff $Y=\begin{pmatrix}0&0\\0&*\end{pmatrix}$. So the dimension
of admissible $Y$ (and hence of admissible $X$) is $(n-r)^2$.
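The argument above can be sanity-checked numerically. The following sketch (an added illustration, not part of the original answer) builds $A=PJQ$ with $J=\operatorname{diag}(I_r,0)$ and random $P$, $Q$, then computes $\dim W$ as the nullity of the stacked linear system for $AX=0$ and $XA=0$:

```python
import numpy as np

# Numerical sanity check (a sketch, assuming NumPy): build A = P J Q with
# J = diag(I_r, 0), then compute dim{X : AX = XA = 0} as the nullity of
# the stacked vectorized system.
rng = np.random.default_rng(0)
n, r = 6, 4

J = np.zeros((n, n))
J[:r, :r] = np.eye(r)
P = rng.standard_normal((n, n))  # invertible with probability 1
Q = rng.standard_normal((n, n))
A = P @ J @ Q                    # rank(A) = r

# In column-major vec convention: vec(AX) = (I kron A) vec(X) and
# vec(XA) = (A^T kron I) vec(X); W is the kernel of the stacked matrix.
I = np.eye(n)
M = np.vstack([np.kron(I, A), np.kron(A.T, I)])
dim_W = n * n - np.linalg.matrix_rank(M)
print(dim_W, (n - r) ** 2)  # both equal 4
```

The nullity of the stacked matrix does not depend on the vec convention, since row-major and column-major vectorizations differ only by a permutation of coordinates.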






answered Oct 26 '18 at 17:06 by Lord Shark the Unknown























Yes, the dimension is always $(n - \operatorname{rank}(A))^2$. Here's one justification.

For the convenience of the eigenvalue arguments, I assume that $F$ is algebraically closed, or at least that we can appeal to the existence of its algebraic closure $\bar F$.

Let $V_0$ denote the subspace $V_0 = \{X : AX = XA\}$; that is, $V_0$ is the solution space of the Sylvester equation $AX - XA = 0$. Using some vectorization tricks, we can see that $V_0$ is spanned by the matrices of the form $xy^T$ such that $Ax = \lambda x$ and $A^Ty = \lambda y$ for some $\lambda \in \bar F$. It follows that $\dim(V_0) = \sum_k d_k^2$, where $d_k$ is the geometric multiplicity of the $k$th eigenvalue.

Some care is required in showing that these matrices span $V_0$ for a non-diagonalizable $A$. One way to verify this is to compute the kernel of $I \otimes A - A^T \otimes I$, taking $A$ to be in Jordan canonical form.

The space $W$ that you're looking for is the intersection of $V_0$ with the kernel of $X \mapsto AX$. This is spanned by the matrices $xy^T$ with $x \in \ker(A)$ and $y \in \ker(A^T)$. Your conclusion follows.
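The vectorization computation can be illustrated concretely (an added sketch, not part of the original answer). Taking $A$ diagonal keeps all arithmetic exact: with eigenvalues $(0,0,1,2,3)$ the geometric multiplicities are $d=(2,1,1,1)$, so $\dim V_0 = \sum_k d_k^2 = 7$, while $\dim W = (n-\operatorname{rank}A)^2 = 4$:

```python
import numpy as np

# Added illustration of the vectorization argument, using a diagonal A so
# every rank computation is exact: eigenvalues (0, 0, 1, 2, 3).
n = 5
A = np.diag([0.0, 0.0, 1.0, 2.0, 3.0])
I = np.eye(n)

# V0 = {X : AX = XA} is the kernel of I kron A - A^T kron I.
commutant = np.kron(I, A) - np.kron(A.T, I)
dim_V0 = n * n - np.linalg.matrix_rank(commutant)
print(dim_V0)  # sum of d_k^2 = 4 + 1 + 1 + 1 = 7

# W adds the conditions AX = 0 and XA = 0.
both = np.vstack([np.kron(I, A), np.kron(A.T, I)])
dim_W = n * n - np.linalg.matrix_rank(both)
print(dim_W)   # (n - rank A)^2 = (5 - 3)^2 = 4
```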






• Hi Omno, I just read your post. I think that your proof is valid only for $A$ diagonalizable. Otherwise, the $xy^T$ do not span $V_0$ because $\dim(V_0) > \sum_k d_k^2$ (you must use squares of differences of dimensions of iterated kernels). – loup blanc, Dec 23 '18 at 12:54

• @loupblanc hence the “some care is required” paragraph – Omnomnomnom, Dec 23 '18 at 16:22

• Yes, of course. – loup blanc, Dec 23 '18 at 17:43
















edited Oct 26 '18 at 16:58; answered Oct 26 '18 at 16:52 by Omnomnomnom












Here is a generalized version that also covers infinite-dimensional vector spaces. For a given linear map $T:V\to V$ on a vector space $V$, I describe all linear maps $S:V\to V$ such that $ST=TS=0$.



Let $V$ be a vector space over a field $F$ and let $T:V\to V$ be a linear transformation. Define $L_T:\operatorname{End}_F(V)\to \operatorname{End}_F(V)\oplus \operatorname{End}_F(V)$ via
$$L_T(S)=(ST,TS).$$
We claim that there exists an isomorphism $\varphi\colon \ker L_T\to \operatorname{Hom}_F(\operatorname{coim} T,\ker T)$ of vector spaces, where $\operatorname{coim} T$ is the coimage of $T$: $$\operatorname{coim} T=V/\operatorname{im}T.$$




Observe that $\operatorname{im}S\subseteq \ker T$ and $\operatorname{im}T\subseteq \ker S$ for all $S\in\ker L_T$. Let $\pi:V\to \operatorname{coim}T$ be the canonical projection $v\mapsto v+\operatorname{im}T$. For $S\in \ker L_T$, the map $S:V\to\ker T$ factors through $\pi$, i.e., $S=\tilde{S}\circ \pi$ for a unique linear map $\tilde{S}:\operatorname{coim}T\to\ker T$.
We define $\varphi:\ker L_T\to \operatorname{Hom}_F(\operatorname{coim} T,\ker T)$ in the obvious manner: $S\mapsto \tilde{S}$. This map is an isomorphism, with inverse $$\varphi^{-1}(X)=X\circ\pi$$ for all $X\in \operatorname{Hom}_F(\operatorname{coim} T,\ker T)$. The claim is now justified.




The nullity $\operatorname{null} T$ of $T$ is the dimension of the kernel of $T$. The corank $\operatorname{cork}T$ of $T$ is the dimension of $\operatorname{coim} T$. In the case $\operatorname{null}T<\infty$ or $\operatorname{cork}T<\infty$,
$$\operatorname{Hom}_F(\operatorname{coim} T,\ker T)\cong (\ker T)\otimes_F (\operatorname{coim}T)^*,$$
where the isomorphism is natural, so
$$\operatorname{null}L_T=\dim_F \ker L_T=(\operatorname{null}T)\big(\dim_F(\operatorname{coim}T)^*\big)$$
in this case. In particular, if $\operatorname{cork}T<\infty$, we have $(\operatorname{coim}T)^*\cong \operatorname{coim}T$, so that
$$\operatorname{null}L_T=(\operatorname{null}T)\big(\dim_F(\operatorname{coim}T)^*\big)=(\operatorname{null}T)(\dim_F\operatorname{coim}T)=(\operatorname{null}T)(\operatorname{cork}T).$$
Specifically, when $V$ is finite dimensional, we have $\operatorname{cork}T<\infty$, and by the rank–nullity theorem, $\operatorname{cork}T=\operatorname{null}T=\dim_F V-\operatorname{rank}T$, so
$$\operatorname{null}L_T=\dim_F \ker L_T=(\dim_F V-\operatorname{rank}T)^2,$$
as the OP conjectures. (But if $V$ is infinite dimensional, then for any pair $(m,k)$ of non-negative integers, there exists $T\in\operatorname{End}_F(V)$ with nullity $m$ and corank $k$.)




Here is an example of $T:V\to V$ with nullity $m$ and corank $k$ when $V$ is infinite dimensional. Pick a basis $B$ of $V$. Since $B$ is infinite, it has a countable subset $\{b_1,b_2,b_3,\ldots\}$. Let $Y$ be the span of $\{b_1,b_2,b_3,\ldots\}$ and $Z$ the span of $B\setminus\{b_1,b_2,b_3,\ldots\}$. Then $V=Y\oplus Z$. Define $T:V\to V$ as follows: $$T\left(\sum_{i=1}^\infty s_i b_i+z\right)=\sum_{i=1}^\infty s_{m+i} b_{k+i}+z$$ for all $s_1,s_2,s_3,\ldots\in F$ with only finitely many non-zero terms and for all $z\in Z$. We have $\ker T=\operatorname{span}\{b_1,b_2,\ldots,b_m\}$ and $V=(\operatorname{im} T)\oplus \operatorname{span}\{b_1,b_2,\ldots,b_k\}$, so $T$ has nullity $m$ and corank $k$.




The situation is not so straightforward when $T$ has infinite corank. If $\operatorname{null}T<\infty$, then we already know that
$$\operatorname{null}L_T= (\operatorname{null}T)\big(\dim_F(\operatorname{coim}T)^*\big).$$
From this MathOverflow thread, $\dim_F(\operatorname{coim}T)^*=|F|^{\operatorname{cork}T}$. So we have two cases when $\operatorname{null}T$ is finite but $\operatorname{cork}T$ is infinite:
$$\operatorname{null}L_T= \begin{cases}0&\text{if } \operatorname{null}T=0,\\
|F|^{\operatorname{cork}T}&\text{if } 0<\operatorname{null}T<\infty.\end{cases}$$

If both $\operatorname{null}T$ and $\operatorname{cork}T$ are infinite, we can use the result from the same MathOverflow thread to prove that
$$\operatorname{null}L_T=\dim_F\operatorname{Hom}_F(\operatorname{coim} T,\ker T)=\max\left\{|F|^{\operatorname{cork}T},(\operatorname{null}T)^{\operatorname{cork}T}\right\}.$$





Even more generally, let $U$ and $V$ be vector spaces over $F$. For $R\in\operatorname{End}_F(U)$ and $T\in\operatorname{End}_F(V)$, define $L_{R}^T:\operatorname{Hom}_F(U,V)\to\operatorname{Hom}_F(U,V)\oplus \operatorname{Hom}_F(U,V)$ by $$L_R^T(S)=(SR,TS).$$ (In particular, when $U=V$, we have $L_T=L_T^T$.) Then there exists an isomorphism of vector spaces
$$\varphi:\ker L_R^T\to \operatorname{Hom}_F(\operatorname{coim}R,\ker T).$$
In particular, if $U$ and $V$ are both finite dimensional, then
$$\operatorname{null} L_R^T=\dim_F\ker L_R^T=(\operatorname{cork}R)(\operatorname{null} T)=(\dim_FU-\operatorname{rank}R)(\dim_FV-\operatorname{rank}T).$$
In general,
$$\operatorname{null}L_R^T=\begin{cases}(\operatorname{cork} R)(\operatorname{null}T)&\text{if } \operatorname{cork}R<\infty,\\
0&\text{if } \operatorname{null} T=0,\\
|F|^{\operatorname{cork}R}&\text{if } 0<\operatorname{null} T<\infty \text{ and } \operatorname{cork}R=\infty,\\
\max\left\{|F|^{\operatorname{cork}R},(\operatorname{null} T)^{\operatorname{cork}R}\right\}&\text{if } \operatorname{null}T=\infty \text{ and } \operatorname{cork}R=\infty.
\end{cases}$$





This is my old proof that $\operatorname{null}L_T=(\operatorname{null}T)(\operatorname{cork}T)$ when $T$ has finite nullity and finite corank.
Suppose that $T$ has finite nullity $m$ and finite corank $k$; I claim that $L_T$ has finite nullity $mk$.

For $S\in\ker L_T$, we see that $\operatorname{im} S\subseteq \ker T$ and $\operatorname{im} T\subseteq \ker S$. Because $T$ has finite nullity $m$, it follows that $S$ has finite rank $r\leq m$. Therefore,
$$S=v_1\otimes \phi_1+v_2\otimes \phi_2+\ldots+v_r\otimes \phi_r$$
for some linearly independent $v_1,v_2,\ldots,v_r\in \ker T$ and some linearly independent $\phi_1,\phi_2,\ldots,\phi_r\in V^*=\operatorname{Hom}_F(V,F)$. Since $v_1,v_2,\ldots,v_r$ are linearly independent, $$\ker S=\bigcap_{i=1}^r\ker \phi_i.$$
Therefore, $\operatorname{im} T$ must be contained in $\ker \phi_i$ for each $i=1,2,\ldots,r$.

Since $T$ has finite corank $k$, $W=V/\operatorname{im} T$ is a finite-dimensional vector space of dimension $k$. Note that each $\phi_i$ factors through the canonical projection $\pi:V\to V/\operatorname{im} T=W$; that is, $\phi_i=\psi_i\circ \pi$ with $\psi_i\in W^*=\operatorname{Hom}_F(W,F)$. We can now conclude that each $S\in \ker L_T$ is of the form
$$\sum_{i=1}^r v_i\otimes (\psi_i\circ \pi),$$
where $v_1,v_2,\ldots,v_r\in \ker T$ are linearly independent and $\psi_1,\psi_2,\ldots,\psi_r\in W^*=\left(V/\operatorname{im} T\right)^*$ are linearly independent.

Define the linear map $f:(\ker T)\otimes_F W^*\to\ker L_T$ in the obvious manner:
$$v\otimes \psi\mapsto v\otimes (\psi\circ\pi).$$
By the observation in the previous paragraph, $f$ is surjective. By choosing a basis of $\ker T$, say $\{x_1,x_2,\ldots,x_m\}$, we see that an element of $\ker f$ must take the form
$$\sum_{i=1}^m x_i\otimes \alpha_i$$
for some $\alpha_i\in W^*$. Since $x_1,\ldots,x_m$ are linearly independent, we must have $\alpha_i\circ \pi=0$ for all $i$. But this means $\alpha_i=0$, as $\pi$ is surjective. Thus $\ker f=\{0\}$, and so $f$ is injective. Hence,
$$\ker L_T\cong (\ker T)\otimes_F W^*=(\ker T)\otimes_F (V/\operatorname{im} T)^*.$$
This establishes the assertion that $L_T$ has nullity $mk$.
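The rectangular, finite-dimensional case can be checked numerically (an added sketch, not part of the original answer): for $R$ on $U=F^p$ and $T$ on $V=F^q$, the space of maps $S:U\to V$ with $SR=0$ and $TS=0$ should have dimension $(\operatorname{cork}R)(\operatorname{null}T)$.

```python
import numpy as np

# Added sanity check of the rectangular case: R has rank 3 (cork 2),
# T has rank 1 (nullity 3), so the solution space should have dimension 6.
p, q = 5, 4
R = np.zeros((p, p))
R[:3, :3] = np.eye(3)   # rank 3, so cork R = p - 3 = 2
T = np.zeros((q, q))
T[0, 0] = 1.0           # rank 1, so null T = q - 1 = 3

# S is q x p; in column-major vec convention:
# vec(SR) = (R^T kron I_q) vec(S),  vec(TS) = (I_p kron T) vec(S).
M = np.vstack([np.kron(R.T, np.eye(q)), np.kron(np.eye(p), T)])
nullity = p * q - np.linalg.matrix_rank(M)
print(nullity, (p - 3) * (q - 1))  # both equal 6
```

With these diagonal choices the check can also be read off directly: $SR=0$ zeroes the first three columns of $S$ and $TS=0$ zeroes its first row, leaving a $3\times 2$ block of free entries.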






                        share|cite|improve this answer











                        $endgroup$


















                          2












                          $begingroup$

                          Here is a generalized version where you may be dealing with infinite dimensional vector spaces. For a given linear map $T:Vto V$ on a vector space $V$, I have a description of all linear maps $S:Vto V$ such that $ST=TS=0$.



                          Let $V$ be a vector space over a field $F$ and let $T:Vto V$ be a linear transformation. Define $L_T:operatorname{End}_F(V)to operatorname{End}_F(V)oplus operatorname{End}_F(V)$ via
                          $$L_T(S)=(ST,TS).$$
                          We claim that there exists an isomorphism $varphi: ker L_Tto operatorname{Hom}_F(operatorname{coim} T,ker T)$ of vector spaces, where $operatorname{coim} T$ is the coimage of $T$: $$operatorname{coim} T=V/operatorname{im}T.$$




                          Observe that $operatorname{im}Ssubseteq ker T$ and $operatorname{im}Tsubseteq ker S$ for all $Sinker L_T$. Let $pi:Vto operatorname{coim}T$ be the canonical projection $vmapsto v+operatorname{im}T$. For $Sin ker L_T$, we see that $S:Vtoker T$ factors through $pi$, i.e., $S=tilde{S}circ pi$ for a unique linear map $tilde{S}:operatorname{coim}Ttoker T$.
                          We define $varphi:ker L_Tto operatorname{Hom}_F(operatorname{coim} T,ker T)$ in the obvious manner: $Smapsto tilde{S}$. This map is clearly an isomorphism with the inverse map $$varphi^{-1}(X)=Xcircpi$$ for all $Rin operatorname{Hom}_F(operatorname{coim} T,ker T)$. The claim is now justified.




                          The nullity $operatorname{null} T$ of $T$ is the dimension of the kernel of $T$. The corank $operatorname{cork}T$ of $T$ is the dimension of $operatorname{coim} T$. In the case $operatorname{null}T<infty$ or $operatorname{cork}T<infty$,
                          $$operatorname{Hom}_F(operatorname{coim} T,ker T)cong (ker T)otimes_F (operatorname{coim}T)^*,$$
                          where the isomorphism is natural, so
                          $$operatorname{null}L_T=dim_F ker L_T=(operatorname{null}T)big(dim_F(operatorname{coim}T)^*big)$$
                          in this case. In particular, if $operatorname{cork}T<infty$, we have $(operatorname{coim}T)^*cong operatorname{coim}T$, so that
                          $$operatorname{null}L_T=(operatorname{null}T)big(dim_F(operatorname{coim}T)^*big)=(operatorname{null}T)(dim_Foperatorname{coim}T)=(operatorname{null}T)(operatorname{cork}T).$$
                          Particularly, when $V$ is finite dimensional, we have $operatorname{cork}T<infty$, and by the rank-nullity theorem, we get $operatorname{cork}T=operatorname{null}T=dim_F V-operatorname{rank}T$, and so
                          $$operatorname{null}L_T=dim_F ker L_T=(dim_F V-operatorname{rank}T)^2$$
                          as the OP conjectures. (But if $V$ is infinite dimensional, for any pair $(m,k)$ of non-negative integers, there exists $Tinoperatorname{End}_F(V)$ with nullity $m$ and corank $k$.)




                          Here is example of $T:Vto V$ with nullity $m$ and corank $k$ when $V$ is infinite dimensional. Pick a basis $B$ of $V$. Since $B$ is infinite, it has a countable subset ${b_1,b_2,b_3,ldots}$. Let $Y$ be the span of ${b_1,b_2,b_3,ldots}$ and $Z$ the span of $Bsetminus{b_1,b_2,b_3,ldots}$. Then, $V=Yoplus Z$. Define $T:Vto V$ as follows: $$Tleft(sum_{i=1}^infty s_i b_i+zright)=sum_{i=1}^infty s_{m+i} b_{k+i}+z$$ for all $s_1,s_2,s_3,ldotsin F$ with only finitely many non-zero terms and for all $zin Z$. We have $ker T=operatorname{span}{b_1,b_2,ldots,b_m}$ and $V=(operatorname{im} T)oplus operatorname{span}{b_1,b_2,ldots,b_k}$, so $T$ has nullity $m$ and corank $k$.




                          The situation is not so straightforward when $T$ has infinite corank. If $operatorname{null}T<infty$, then we already know that
                          $$operatorname{null}L_T= (operatorname{null}T)big(dim_F(operatorname{coim}T)^*big),.$$
                          From this mathoverflow thread, $dim_F(operatorname{coim}T)^*=|F|^{operatorname{cork}T}$. So, we have two cases when $operatorname{null}T$ is finite but $operatorname{cork}T$ is infinite:
                          $$operatorname{null}L_T= begin{cases}0&text{if} operatorname{null}T=0,\
                          |F|^{operatorname{cork}T}&text{if} 0<operatorname{null}T<infty.end{cases}$$

                          If both $operatorname{null}T$ and $operatorname{cork}T$ are infinite, we can use the result from the same mathoverflow thread to prove that
                          $$operatorname{null}L_T=operatorname{Hom}_F(operatorname{coim} T,ker T)=maxleft{|F|^{operatorname{cork}T},(operatorname{null}T)^{operatorname{cork}T}right}.$$





                          Even more generally, let $U$ and $V$ be vector spaces over $F$. For $Rinoperatorname{End}_F(U)$ and $Tinoperatorname{End}_F(V)$, define $L_{R}^T:operatorname{Hom}_F(U,V)tooperatorname{Hom}_F(U,V)oplus operatorname{Hom}_F(U,V)$ by $$L_R^T(S)=(SR,TS).$$ (That is, when $U=V$, we have $L_T=L_T^T$.) Then, there exists an isomorphism of vector spaces
                          $$varphi:ker L_R^Tto operatorname{Hom}_F(operatorname{coim}R,ker T).$$
                          In particular, if $U$ and $V$ are both finite dimensional, then
                          $$operatorname{null} L_R^T=dim_Fker L_R^T=(operatorname{cork}R)(operatorname{null} T)=(dim_FU-operatorname{rank}R)(dim_FV-operatorname{rank}T).$$
                          In general,
                          $$operatorname{null}L_R^T=begin{cases}(operatorname{cork} R)(operatorname{null}T)&text{if} operatorname{cork}R<infty,\
                          0&text{if} operatorname{null} T=0,\
                          |F|^{operatorname{cork}R}&text{if} 0<operatorname{null} T<infty wedge operatorname{cork}R=infty,\
                          maxleft{|F|^{operatorname{cork}R},(operatorname{null} T)^{operatorname{cork}R}right}&text{if} operatorname{null}T=infty wedge operatorname{cork}R=infty.
                          end{cases}$$





                          This is my old proof that $operatorname{null}L_T=(operatorname{null}T)(operatorname{cork}T)$ when $T$ has finite nullity and finite corank.
                          Suppose that $T$ has finite nullity $m$ and finite corank $k$, I claim that $L_T$ also has finite nullity $mk$.



                          For $Sinker L_T$, we see that $operatorname{im} Ssubseteq ker T$ and $operatorname{im} Tsubseteq ker S$. Because $T$ has finite nullity $m$, it follows that $S$ has finite rank $rleq m$. Therefore,
                          $$S=v_1otimes phi_1+v_2otimes phi_2+ldots+v_rotimes phi_r$$
                          for some linearly independent $v_1,v_2,ldots,v_rin ker T$ and for some linearly independent $phi_1,phi_2,ldots,phi_rin V^*=operatorname{Hom}_F(V,F)$. Since $v_1,v_2,ldots,v_r$ are linearly independent, $$ker S=bigcap_{i=1}^rker phi_i.$$
                          Therefore, $operatorname{im} T$ must be contained in $ker phi_i$ for all $i=1,2,ldots,r$.



                          Since $T$ has finite corank $k$, $W=V/operatorname{im} T$ is a finite dimensional vector space of dimension $k$. Note that each $phi_i$ factors through $operatorname{im} T$. That is, $phi_i=psi_icirc pi$, where $pi:Vto V/operatorname{im} T=W$ is the canonical projection and $psi_iin W^*=operatorname{Hom}_F(W,F)$. We can now conclude that each $Sin ker L_T$ is of the form
                          $$sum_{i=1}^r v_iotimes (psi_icirc pi),$$
                          where $v_1,v_2,ldots,v_rin ker T$ are linearly independent and $psi_1,psi_2,ldots,psi_rin W^*=left(V/operatorname{im} Tright)^*$ are linearly independent.



                          Define the linear map $f:(ker T)otimes_F W^*toker L_T$ in the obvious manner:
                          $$votimes psimapsto votimes (psicircpi).$$
                          By the observation in the previous paragraph, $f$ is surjective. By choosing a basis of $ker T$, say ${x_1,x_2,ldots,x_m}$, we see that an element in $ker f$ must take the form
                          $$sum_{i=1}^m x_iotimes alpha_i$$
                          for some $alpha_iin W^*$. Since $x_1,ldots,x_m$ are linearly independent, we must have that $alpha_icirc pi=0$ for all $i$. But this means $alpha_i=0$ as $pi$ is surjective. Thus, $ker f={0}$, and so $f$ is injective. Hence,
                          $$ker L_Tcong (ker T)otimes_F W^*=(ker T)otimes_F (V/operatorname{im} T)^*.$$
                          This establishes the assertion that $L_T$ has nullity $mk$.






                          share|cite|improve this answer











                          $endgroup$
















                            2












                            2








                            2





                            $begingroup$

                            Here is a generalized version where you may be dealing with infinite dimensional vector spaces. For a given linear map $T:Vto V$ on a vector space $V$, I have a description of all linear maps $S:Vto V$ such that $ST=TS=0$.



Let $V$ be a vector space over a field $F$ and let $T:V\to V$ be a linear transformation. Define $L_T:\operatorname{End}_F(V)\to \operatorname{End}_F(V)\oplus \operatorname{End}_F(V)$ via
$$L_T(S)=(ST,TS).$$
We claim that there exists an isomorphism $\varphi:\ker L_T\to \operatorname{Hom}_F(\operatorname{coim} T,\ker T)$ of vector spaces, where $\operatorname{coim} T$ is the coimage of $T$: $$\operatorname{coim} T=V/\operatorname{im}T.$$




Observe that $\operatorname{im}S\subseteq \ker T$ and $\operatorname{im}T\subseteq \ker S$ for all $S\in\ker L_T$. Let $\pi:V\to \operatorname{coim}T$ be the canonical projection $v\mapsto v+\operatorname{im}T$. For $S\in \ker L_T$, we see that $S:V\to\ker T$ factors through $\pi$, i.e., $S=\tilde{S}\circ \pi$ for a unique linear map $\tilde{S}:\operatorname{coim}T\to\ker T$.
We define $\varphi:\ker L_T\to \operatorname{Hom}_F(\operatorname{coim} T,\ker T)$ in the obvious manner: $S\mapsto \tilde{S}$. This map is clearly an isomorphism, with inverse $$\varphi^{-1}(X)=X\circ\pi$$ for all $X\in \operatorname{Hom}_F(\operatorname{coim} T,\ker T)$. The claim is now justified.




The nullity $\operatorname{null} T$ of $T$ is the dimension of the kernel of $T$. The corank $\operatorname{cork}T$ of $T$ is the dimension of $\operatorname{coim} T$. In the case $\operatorname{null}T<\infty$ or $\operatorname{cork}T<\infty$,
$$\operatorname{Hom}_F(\operatorname{coim} T,\ker T)\cong (\ker T)\otimes_F (\operatorname{coim}T)^*,$$
where the isomorphism is natural, so
$$\operatorname{null}L_T=\dim_F \ker L_T=(\operatorname{null}T)\big(\dim_F(\operatorname{coim}T)^*\big)$$
in this case. In particular, if $\operatorname{cork}T<\infty$, we have $(\operatorname{coim}T)^*\cong \operatorname{coim}T$, so that
$$\operatorname{null}L_T=(\operatorname{null}T)\big(\dim_F(\operatorname{coim}T)^*\big)=(\operatorname{null}T)(\dim_F\operatorname{coim}T)=(\operatorname{null}T)(\operatorname{cork}T).$$
In particular, when $V$ is finite dimensional, we have $\operatorname{cork}T<\infty$, and by the rank-nullity theorem $\operatorname{cork}T=\operatorname{null}T=\dim_F V-\operatorname{rank}T$, so
$$\operatorname{null}L_T=\dim_F \ker L_T=(\dim_F V-\operatorname{rank}T)^2,$$
as the OP conjectures. (If $V$ is infinite dimensional, then for any pair $(m,k)$ of non-negative integers there exists $T\in\operatorname{End}_F(V)$ with nullity $m$ and corank $k$.)
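In the finite dimensional case this count can be checked numerically. Below is a minimal sketch (assuming NumPy is available; the function name `dim_W` is mine): with the column-major identities $\operatorname{vec}(AX)=(I\otimes A)\operatorname{vec}(X)$ and $\operatorname{vec}(XA)=(A^T\otimes I)\operatorname{vec}(X)$, the space $\{X: AX=XA=0\}$ is the null space of a stacked $2n^2\times n^2$ matrix.

```python
import numpy as np

def dim_W(A):
    """Dimension of {X : AX = XA = 0} for a square matrix A.

    Using column-major vec: vec(AX) = (I (x) A) vec(X) and
    vec(XA) = (A^T (x) I) vec(X), so W is the null space of the
    stacked Kronecker matrix M below.
    """
    n = A.shape[0]
    I = np.eye(n)
    M = np.vstack([np.kron(I, A),      # encodes AX = 0
                   np.kron(A.T, I)])   # encodes XA = 0
    return n * n - np.linalg.matrix_rank(M)

rng = np.random.default_rng(0)
n, r = 6, 4
# random rank-r matrix: product of n x r and r x n generic factors
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
assert dim_W(A) == (n - r) ** 2   # matches (dim V - rank T)^2
```

The two trivial cases from the question ($A$ invertible gives $0$, $A=0$ gives $n^2$) fall out of the same function.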




Here is an example of $T:V\to V$ with nullity $m$ and corank $k$ when $V$ is infinite dimensional. Pick a basis $B$ of $V$. Since $B$ is infinite, it has a countable subset $\{b_1,b_2,b_3,\ldots\}$. Let $Y$ be the span of $\{b_1,b_2,b_3,\ldots\}$ and $Z$ the span of $B\setminus\{b_1,b_2,b_3,\ldots\}$. Then $V=Y\oplus Z$. Define $T:V\to V$ by $$T\left(\sum_{i=1}^\infty s_i b_i+z\right)=\sum_{i=1}^\infty s_{m+i} b_{k+i}+z$$ for all $s_1,s_2,s_3,\ldots\in F$ with only finitely many non-zero terms and for all $z\in Z$. We have $\ker T=\operatorname{span}\{b_1,b_2,\ldots,b_m\}$ and $V=(\operatorname{im} T)\oplus \operatorname{span}\{b_1,b_2,\ldots,b_k\}$, so $T$ has nullity $m$ and corank $k$.




The situation is not so straightforward when $T$ has infinite corank. If $\operatorname{null}T<\infty$, then we already know that
$$\operatorname{null}L_T= (\operatorname{null}T)\big(\dim_F(\operatorname{coim}T)^*\big).$$
From this mathoverflow thread, $\dim_F(\operatorname{coim}T)^*=|F|^{\operatorname{cork}T}$. So we have two cases when $\operatorname{null}T$ is finite but $\operatorname{cork}T$ is infinite:
$$\operatorname{null}L_T= \begin{cases}0&\text{if }\operatorname{null}T=0,\\
|F|^{\operatorname{cork}T}&\text{if } 0<\operatorname{null}T<\infty.\end{cases}$$

If both $\operatorname{null}T$ and $\operatorname{cork}T$ are infinite, we can use the result from the same mathoverflow thread to prove that
$$\operatorname{null}L_T=\dim_F\operatorname{Hom}_F(\operatorname{coim} T,\ker T)=\max\left\{|F|^{\operatorname{cork}T},(\operatorname{null}T)^{\operatorname{cork}T}\right\}.$$





Even more generally, let $U$ and $V$ be vector spaces over $F$. For $R\in\operatorname{End}_F(U)$ and $T\in\operatorname{End}_F(V)$, define $L_{R}^T:\operatorname{Hom}_F(U,V)\to\operatorname{Hom}_F(U,V)\oplus \operatorname{Hom}_F(U,V)$ by $$L_R^T(S)=(SR,TS).$$ (In particular, when $U=V$, we have $L_T=L_T^T$.) Then there exists an isomorphism of vector spaces
$$\varphi:\ker L_R^T\to \operatorname{Hom}_F(\operatorname{coim}R,\ker T).$$
In particular, if $U$ and $V$ are both finite dimensional, then
$$\operatorname{null} L_R^T=\dim_F\ker L_R^T=(\operatorname{cork}R)(\operatorname{null} T)=(\dim_FU-\operatorname{rank}R)(\dim_FV-\operatorname{rank}T).$$
In general,
$$\operatorname{null}L_R^T=\begin{cases}(\operatorname{cork} R)(\operatorname{null}T)&\text{if }\operatorname{cork}R<\infty,\\
0&\text{if }\operatorname{null} T=0,\\
|F|^{\operatorname{cork}R}&\text{if } 0<\operatorname{null} T<\infty \wedge \operatorname{cork}R=\infty,\\
\max\left\{|F|^{\operatorname{cork}R},(\operatorname{null} T)^{\operatorname{cork}R}\right\}&\text{if }\operatorname{null}T=\infty \wedge \operatorname{cork}R=\infty.
\end{cases}$$
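The finite dimensional rectangular formula $(\operatorname{cork}R)(\operatorname{null}T)$ can be verified by the same vectorization trick. A small sketch (assuming NumPy; the function name `dim_ker_LRT` is mine): here $S$ is a $\dim V\times\dim U$ matrix, and $\operatorname{vec}(SR)=(R^T\otimes I)\operatorname{vec}(S)$, $\operatorname{vec}(TS)=(I\otimes T)\operatorname{vec}(S)$ in column-major vec.

```python
import numpy as np

def dim_ker_LRT(R, T):
    """Nullity of S -> (SR, TS) for S in Hom(U, V), i.e. S of shape (p, q)."""
    q = R.shape[0]   # dim U
    p = T.shape[0]   # dim V
    M = np.vstack([np.kron(R.T, np.eye(p)),   # encodes SR = 0
                   np.kron(np.eye(q), T)])    # encodes TS = 0
    return p * q - np.linalg.matrix_rank(M)

rng = np.random.default_rng(1)
q, p = 5, 7          # dim U, dim V
rR, rT = 3, 4        # target ranks of R and T
R = rng.standard_normal((q, rR)) @ rng.standard_normal((rR, q))
T = rng.standard_normal((p, rT)) @ rng.standard_normal((rT, p))
# (cork R)(null T) = (dim U - rank R)(dim V - rank T)
assert dim_ker_LRT(R, T) == (q - rR) * (p - rT)
```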





This is my old proof that $\operatorname{null}L_T=(\operatorname{null}T)(\operatorname{cork}T)$ when $T$ has finite nullity and finite corank.
Suppose that $T$ has finite nullity $m$ and finite corank $k$; I claim that $L_T$ then has finite nullity $mk$.



For $S\in\ker L_T$, we see that $\operatorname{im} S\subseteq \ker T$ and $\operatorname{im} T\subseteq \ker S$. Because $T$ has finite nullity $m$, it follows that $S$ has finite rank $r\leq m$. Therefore,
$$S=v_1\otimes \phi_1+v_2\otimes \phi_2+\ldots+v_r\otimes \phi_r$$
for some linearly independent $v_1,v_2,\ldots,v_r\in \ker T$ and some linearly independent $\phi_1,\phi_2,\ldots,\phi_r\in V^*=\operatorname{Hom}_F(V,F)$. Since $v_1,v_2,\ldots,v_r$ are linearly independent, $$\ker S=\bigcap_{i=1}^r\ker \phi_i.$$
Therefore, $\operatorname{im} T$ must be contained in $\ker \phi_i$ for all $i=1,2,\ldots,r$.



Since $T$ has finite corank $k$, $W=V/\operatorname{im} T$ is a finite dimensional vector space of dimension $k$. Note that each $\phi_i$ factors through $\operatorname{im} T$; that is, $\phi_i=\psi_i\circ \pi$, where $\pi:V\to V/\operatorname{im} T=W$ is the canonical projection and $\psi_i\in W^*=\operatorname{Hom}_F(W,F)$. We can now conclude that each $S\in \ker L_T$ is of the form
$$\sum_{i=1}^r v_i\otimes (\psi_i\circ \pi),$$
where $v_1,v_2,\ldots,v_r\in \ker T$ are linearly independent and $\psi_1,\psi_2,\ldots,\psi_r\in W^*=\left(V/\operatorname{im} T\right)^*$ are linearly independent.



Define the linear map $f:(\ker T)\otimes_F W^*\to\ker L_T$ in the obvious manner:
$$v\otimes \psi\mapsto v\otimes (\psi\circ\pi).$$
By the observation in the previous paragraph, $f$ is surjective. Choosing a basis $\{x_1,x_2,\ldots,x_m\}$ of $\ker T$, we see that an element of $\ker f$ must take the form
$$\sum_{i=1}^m x_i\otimes \alpha_i$$
for some $\alpha_i\in W^*$. Since $x_1,\ldots,x_m$ are linearly independent, we must have $\alpha_i\circ \pi=0$ for all $i$. But this means $\alpha_i=0$, as $\pi$ is surjective. Thus $\ker f=\{0\}$, so $f$ is injective. Hence,
$$\ker L_T\cong (\ker T)\otimes_F W^*=(\ker T)\otimes_F (V/\operatorname{im} T)^*.$$
This establishes the assertion that $L_T$ has nullity $mk$.
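This proof is constructive: in matrix terms the basis $x_i\otimes(\psi_j\circ\pi)$ becomes the set of outer products $\vec c_i\vec r_j^T$ from the question, with $\vec c_i$ a basis of $\ker A$ and $\vec r_j$ a basis of $\ker A^T$. A quick numerical sketch (assuming NumPy and SciPy's `null_space`; variable names are mine) builds those $k^2$ matrices and confirms they are an independent family inside $W$:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
n, r = 6, 4
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

C = null_space(A)      # columns form a basis of ker A    (n x k)
Rt = null_space(A.T)   # columns form a basis of ker A^T  (n x k)
k = C.shape[1]

# candidate basis of W: the outer products c_i r_j^T
basis = [np.outer(C[:, i], Rt[:, j]) for i in range(k) for j in range(k)]
for X in basis:
    assert np.allclose(A @ X, 0) and np.allclose(X @ A, 0)

# linear independence: stack the vectorized matrices and check the rank
B = np.stack([X.ravel() for X in basis])
assert np.linalg.matrix_rank(B) == k * k
```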
                            edited Dec 23 '18 at 9:42
                            Batominovski
                            answered Oct 26 '18 at 17:35
                            user593746
One can consider $U=\{(A,B)\in M_n\times M_n; AB=BA=0\}$ and $V=\{(A,B)\in M_n\times M_n; AB=0\}$.

$U,V$ are closed algebraic sets, stratified by $\operatorname{rank}(A)$.

Let $W_r$ be the algebraic set of matrices of rank $r$; from $\dim(W_r)=r(2n-r)$, we deduce that the dimension of a stratum is $(n-r)^2+r(2n-r)=n^2$. In particular, the strata all have the same dimension and $\dim(U)=n^2$.

One might think that $V$ has about the same dimension as $U$, for example $\dim(V)=\dim(U)+O(n)$. This is not the case; recall that, even when $AB=0$, we may have $\operatorname{rank}(BA)=n/2$.

Using Lord Shark the Unknown's post, we obtain that the dimension of a stratum is $d_r=[r(n-r)+(n-r)^2]+r(2n-r)=n^2+nr-r^2$, which depends on $r$.

Since $\max(d_r)$ is attained at $r=n/2$, we deduce that $\dim(V)=\lfloor 5n^2/4\rfloor$.

Now one can seek the singular locus of $U$ or $V$.
                                    edited Dec 23 '18 at 16:04
                                    answered Dec 23 '18 at 15:46
loup blanc