Fast algorithm for solving system of linear equations











up vote
20
down vote

favorite
5












I have a system of $N$ linear equations, $Ax=b$, in $N$ unknowns (where $N$ is large).



If I am interested in the solution for only one of the unknowns, what are the best approaches?



For example, assume $N=50{,}000$ and we want the solution for $x_1$ through $x_{100}$ only. Is there any trick that avoids the full $O(N^{3})$ cost of Gaussian elimination or matrix inversion?






























  • 2




    Is $A$ full or sparse? Are you doing this once or many times with the same $A$?
    – joriki
    Apr 1 '11 at 16:56










  • A is not sparse and has many nonzero elements; however, the coefficients themselves are derived from a smaller set of variables.
    – mghandi
    Apr 2 '11 at 3:37












  • About your second question: yes, I'm doing this many times. I am familiar with the LU decomposition trick to speed things up when the coefficient matrix is unchanged. Are there any other tricks to make this even more efficient?
    – mghandi
    Apr 2 '11 at 3:51
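The factor-once/solve-many idea mentioned in this comment might look like the following minimal sketch (assuming SciPy is available; the matrix and sizes here are made up for the example):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))  # stand-in for the unchanged coefficient matrix

# Factor once: O(n^3)
lu, piv = lu_factor(A)

# Every additional right-hand side then costs only O(n^2)
b1 = rng.standard_normal(n)
b2 = rng.standard_normal(n)
x1 = lu_solve((lu, piv), b1)
x2 = lu_solve((lu, piv), b2)
```

If only $x_1,\dots,x_{100}$ are needed for many right-hand sides, one can also precompute those 100 rows of $A^{-1}$ by solving $A^T y_i = e_i$ with the same factorization (`lu_solve` accepts `trans=1` for the transposed system), after which each answer is 100 dot products.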






  • 1




    You may apply iterative methods (CG if your matrix is SPD, GMRES or something similar otherwise). You may also want to ask at scicomp.stackexchange.com.
    – Dirk
    Dec 15 '12 at 17:43
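The iterative route this comment sketches might look like the following (a toy SPD example with SciPy; for nonsymmetric $A$, `gmres` from the same module applies):

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(1)
n = 1000
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)   # made-up SPD test matrix
b = rng.standard_normal(n)

# Conjugate gradients: each iteration costs one matrix-vector product,
# so the total cost is (#iterations) * O(n^2) rather than O(n^3)
x, info = cg(A, b)
```

`info == 0` signals convergence; the iteration count depends on the conditioning of $A$, so in practice a preconditioner (the `M` argument of `cg`) often matters more than the choice of solver.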















linear-algebra systems-of-equations numerical-linear-algebra






edited Jun 16 at 21:07









Rodrigo de Azevedo

asked Apr 1 '11 at 14:47









mghandi











3 Answers

















up vote
3
down vote



accepted
+50










There is a way to reduce the complexity and make the system solvable in parallel. It is called diakoptics (a method invented by Gabriel Kron). The method's primary use is for large electrical networks with few interconnections, such as power grids, but you should be able to adapt it.

For the case below, the complexity is reduced from $O(n^3)$ to $O(2(\frac{n}{2})^3) = O(\frac{1}{4}n^3)$; the impact can be much greater if the system is divided into more subsystems. With $s$ equally sized subsystems and $c$ interconnection points, the complexity is $O(c^3) + O(\frac{n^3}{s^2})$.

In short: assume you have an $N \times N$ system that can be divided into two subsystems with one connection point (plus a reference, in the electrical setting). The connection points are $m$ and $n$. Assume for simplicity that the subsystems have sizes $N_1 = N_2 = N/2$. First solve them separately:

$\mathbf{A}_1^{-1} = \mathbf{B}_1$

$\mathbf{A}_2^{-1} = \mathbf{B}_2$

The next step is to put them back together, which is done with the help of the so-called "Thevenin matrix" (in our case it is $1 \times 1$). You can look up the exact principle for higher orders (more connection points), but for this example it is
\begin{align}
\mathbf{B}_{TH} = B_{1mm} + B_{2nn} - 2B_{mn}
\end{align}
For our case we have $B_{mn} = 0$. Now we need the subsystem solutions $x_1$ and $x_2$ to form the coefficients $b_{th}$:

$x_{th} = x_{1m} - x_{2n}$

$b_{p} = \mathbf{B}_{TH}^{-1} x_{th}$

\begin{align}
\mathbf{b}_{th} = \begin{bmatrix} 0 & \cdots & b_{p} & \cdots & -b_{p} & \cdots & 0 \end{bmatrix}^T
\end{align}

The vector $\mathbf{b}_{th}$ has nonzero entries only at positions $m$ and $N/2 + n$. Now we can finally find the solution $\mathbf{x}$ for the whole system:
\begin{align}
\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} - \begin{bmatrix} B_1 & 0 \\ 0 & B_2 \end{bmatrix} \mathbf{b}_{th}
\end{align}

I'm more used to the engineering notation with $Z$, $I$, $U$ and so on, so excuse any non-standard symbol usage.
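Kron's recombination step is closely related to the Woodbury/Sherman-Morrison identity: a block-diagonal solve plus a low-rank correction. Here is a hedged numpy sketch of that general idea (not Kron's exact electrical formulation; the rank-1 coupling between the blocks is invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6  # size of each block
A1 = rng.standard_normal((n, n)) + n * np.eye(n)
A2 = rng.standard_normal((n, n)) + n * np.eye(n)

# Full system: block-diagonal part D plus a rank-1 coupling u v^T
D = np.block([[A1, np.zeros((n, n))], [np.zeros((n, n)), A2]])
u = np.zeros(2 * n); u[0] = 1.0   # coupling touches the first block's rows
v = np.zeros(2 * n); v[n] = 1.0   # ... and the second block's columns
A = D + np.outer(u, v)
b = rng.standard_normal(2 * n)

# Solve using only the two half-size block solves (reusable, parallelizable)
def solve_D(rhs):
    return np.concatenate([np.linalg.solve(A1, rhs[:n]),
                           np.linalg.solve(A2, rhs[n:])])

y = solve_D(b)
z = solve_D(u)
# Sherman-Morrison: (D + u v^T)^{-1} b = y - z (v^T y) / (1 + v^T z)
x = y - z * (v @ y) / (1.0 + v @ z)
```

Only the two half-size solves plus $O(n)$ vector work are needed, which is the same $O(2(n/2)^3)$ saving described in the answer.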


































    up vote
    9
    down vote













    This is an important open mathematical question.

    The Wikipedia page https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm#Sub-cubic_algorithms shows how the fastest known asymptotic algorithm has evolved over time.

    This raises the question: how close can we get to $O(n^2)$, which is of course a lower bound, since we have to read the $2n^2$ inputs at least once?

    Proving a tight lower bound, even non-constructively, would make you instantly famous. Wikipedia lists it as an "Unsolved problem in computer science".

    The constant factors of the asymptotically faster algorithms are so large, though, that they are impractical for most matrix sizes found in practice today, and I doubt this will change if new algorithms are found. Therefore any practical real-world answer comes down to optimizing for a given machine model.

    Personal guess about solving for single variables: I don't think you can reduce the complexity like that in general, since the entire system is coupled. How would you know that your solution for some of the variables is consistent with a solution of the entire system?
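For intuition, the first step on that sub-cubic timeline, Strassen's $O(n^{\log_2 7}) \approx O(n^{2.81})$ algorithm, can be sketched as a toy recursion (power-of-two sizes only; real implementations pad inputs and tune the cutoff):

```python
import numpy as np

def strassen(A, B, cutoff=32):
    """Multiply square power-of-two matrices using 7 recursive products
    instead of the naive 8, giving the O(n^2.81) recurrence."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

The extra additions are exactly the constant-factor overhead the answer mentions: Strassen only wins past a few hundred rows, and the later algorithms on the timeline have far larger crossover points.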





























    • Here is the wiki to the family of algorithms you are referring to: en.wikipedia.org/wiki/Coppersmith%E2%80%93Winograd_algorithm Also worth looking at en.wikipedia.org/wiki/Strassen_algorithm
      – Novice C
      Jul 4 '17 at 20:37




















    up vote
    8
    down vote













    Unless your matrix is sparse or structured (e.g. Vandermonde, Hankel, or one of the other named matrix families that admit a fast solution method), there is not much hope of doing better than $O(n^3)$ work. Even if you restrict yourself to solving for just one of the 50,000 variables, Cramer's rule demands computing two determinants, and computing a determinant costs at least as much as decomposing/inverting the matrix in the first place.
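A toy check of the Cramer point (numpy computes `det` via an LU factorization internally, so each determinant already costs as much as a full solve):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Cramer's rule for x_1 alone: replace column 1 of A by b,
# then take a ratio of two n x n determinants
A1 = A.copy()
A1[:, 0] = b
x1_cramer = np.linalg.det(A1) / np.linalg.det(A)

x_full = np.linalg.solve(A, b)  # agrees with the Cramer value for x_1
```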



























answered Apr 7 '14 at 14:55 by WalyKu
edited Nov 19 at 21:29; answered Dec 15 '12 at 16:41 by Ciro Santilli 新疆改造中心 六四事件 法轮功
answered Apr 4 '11 at 3:12 by Juan Joder