Why do quasi-Newton methods such as DFP and BFGS have poor performance on ill-conditioned problems, even if...












I've been reading in the literature that quasi-Newton methods such as DFP and BFGS perform poorly on ill-conditioned problems, but I don't understand why. I have been trying these methods on an ill-conditioned quadratic problem, and they don't converge in p+1 iterations (which is supposed to be one of the properties of quasi-Newton methods on quadratic problems) but take a few more. Why is that? Thank you for your help.
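For reference, a minimal sketch of this kind of experiment (using SciPy's `minimize` with `method='BFGS'` on a synthetic ill-conditioned quadratic; the matrix, tolerances, and dimensions below are only illustrative, not the actual problem):

```python
# Illustrative sketch only: BFGS on an ill-conditioned quadratic
# f(x) = 1/2 x^T A x, whose exact minimizer is x = 0.
import numpy as np
from scipy.optimize import minimize

n = 10
# Diagonal Hessian with eigenvalues spanning six orders of magnitude,
# so cond(A) is about 1e6.
A = np.diag(np.logspace(0, 6, n))

def f(x):
    return 0.5 * x @ A @ x

def grad(x):
    return A @ x

x0 = np.ones(n)
res = minimize(f, x0, jac=grad, method="BFGS",
               options={"gtol": 1e-8, "maxiter": 500})

# In exact arithmetic with exact line searches, BFGS/DFP minimize an
# n-dimensional quadratic in at most n steps; with an inexact (Wolfe)
# line search and floating-point roundoff, an ill-conditioned A usually
# needs noticeably more iterations (or stalls before reaching gtol).
print("iterations:", res.nit, "converged:", res.success)
```

Widening the exponent range in `np.logspace` makes the conditioning worse, and the iteration count typically grows further.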










      optimization data-science quadratic hessian






      asked Nov 24 '18 at 23:30









Kathryn Schutte

          1 Answer
Ill-conditioning is a general problem for optimization algorithms. Since quasi-Newton methods are based on Newton's method, let's consider Newton's method first. Ill-conditioning hurts in two main ways:

1. It causes numerical instability (e.g., roundoff errors) that accumulates over the iterations.

2. It slows down convergence, because the contours of the objective are stretched into long, narrow valleys.

The standard Newton method also has to apply the inverse of the Hessian; when the condition number is large, the small eigenvalues of the Hessian become huge after inversion, so any error in the gradient is strongly amplified and the computed step becomes numerically unstable.
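As a rough numerical illustration of this point (a sketch using NumPy on synthetic symmetric positive-definite matrices, not code from the question): solving the Newton system H d = g in floating point loses roughly log10(cond(H)) significant digits, so the computed step degrades as conditioning worsens.

```python
# Sketch: accuracy of a Newton-type solve H d = g degrades with cond(H).
import numpy as np

rng = np.random.default_rng(0)
n = 50
d_true = rng.standard_normal(n)          # the step we would like to recover

for cond in (1e2, 1e8, 1e14):
    # Symmetric positive definite H with a prescribed condition number.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    H = Q @ np.diag(np.logspace(0, np.log10(cond), n)) @ Q.T

    g = H @ d_true                        # right-hand side (a "gradient")
    d = np.linalg.solve(H, g)             # computed Newton step

    rel_err = np.linalg.norm(d - d_true) / np.linalg.norm(d_true)
    print(f"cond(H) ~ {cond:.0e}   relative error in the step ~ {rel_err:.1e}")
```

Roughly, the relative error grows in proportion to cond(H) times machine epsilon, so a condition number approaching 1/eps (about 1e16 in double precision) makes the step essentially meaningless, and quasi-Newton updates built from such steps inherit and accumulate this error.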



Quasi-Newton methods inherit the same issues. Because they build their inverse-Hessian approximation iteratively from gradient differences, they are somewhat more robust to roundoff and can even converge a bit faster, but that does not eliminate the problem, hence the poor performance on ill-conditioned problems. Note also that the finite-termination property you mention (at most p+1 iterations on a quadratic) holds only in exact arithmetic with exact line searches; with an inexact (e.g., Wolfe) line search and floating-point roundoff, a few extra iterations are expected, and the worse the conditioning, the larger this effect.






                answered Nov 26 '18 at 1:13









Michael Glazunov
