Why is the Absolute value / modulus function used?
Why is the absolute value function (modulus function) $|x|$ used? What are its uses?

For example, the square of a number is always positive whether the number itself is positive or negative, so why use the modulus at all when $X^2$ already gives a positive result for any $X$?

Tags: absolute-value
Asked Dec 22 '18 at 19:20 by Dan; edited Dec 22 '18 at 19:30.

  • $x^2\ne|x|$, so if I want the positive value of $x$, how would I "just do" $x^2$?
    – John Douma, Dec 22 '18 at 19:26

  • I'm just giving an example. What is the use of the modulus function?
    – Dan, Dec 22 '18 at 19:27

  • Obviously: getting the absolute value of a number.
    – Henrik, Dec 22 '18 at 19:33

  • It has many uses. Have you had calculus? It is used in definitions where we only care about the distance between two points, regardless of which one is greater, e.g. $|x-c|<\delta \implies |f(x)-f(c)|<\epsilon$.
    – John Douma, Dec 22 '18 at 19:33

  • The absolute value has use in absolutely every application of mathematics, pun intended.
    – Matt Samuel, Dec 22 '18 at 20:47
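
To make the point in the first comment concrete, here is a tiny Python snippet (an illustration added here, not part of the original thread): squaring a number and taking its absolute value are different operations, related by $|x|=\sqrt{x^2}$.

```python
import math

x = -3.0
print(x ** 2)   # 9.0  -- squaring changes the magnitude
print(abs(x))   # 3.0  -- the absolute value only drops the sign

# The two operations are related by |x| = sqrt(x^2).
assert abs(x) == math.sqrt(x ** 2)
```
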
4 Answers

In the context of real numbers, the absolute value of a number is used in many ways, but perhaps most elementarily it is used to write numbers in a canonical form. Every real number $a\ne 0$ is uniquely equal to $\pm\left|a\right|$. So if we define the sign function $s\colon \mathbb{R}\setminus\{0\}\to\{+,-\}$ by $s(a)=+$ if $a>0$ and $s(a)=-$ if $a<0$, then for all $a\ne 0$ in $\mathbb{R}$ we have $a=s(a)\cdot\left|a\right|$. In a sense this is a way to build all the reals from the positive ones. This is all just a special case of the polar representation of complex numbers, a representation of utmost importance.

– Ittay Weiss, answered Dec 22 '18 at 19:48
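
As a small illustration of this decomposition (a minimal Python sketch added here, not part of the original answer; the helper `sign` is ad hoc), every nonzero real factors as its sign times its absolute value, and the complex polar form $z=|z|e^{i\arg z}$ generalizes it:

```python
import cmath

def sign(a: float) -> float:
    """The s(a) of the answer above: +1 for positive reals, -1 for negative ones."""
    return 1.0 if a > 0 else -1.0

# Real case: every nonzero a equals sign(a) * |a|.
for a in (3.7, -2.5, -0.001):
    assert a == sign(a) * abs(a)

# Complex case: z = |z| * e^{i*arg(z)} is the polar representation.
z = complex(-3.0, 4.0)
assert cmath.isclose(z, abs(z) * cmath.exp(1j * cmath.phase(z)))
```
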
One use of it is to define the distance between numbers. For example, in calculus, you may want to say "the distance between $x$ and $y$ is less than $1$". The way to write that mathematically is $|x-y|<1$. And you want to write it mathematically so you can work with it mathematically.

– Ovi, answered Dec 22 '18 at 19:34; edited Dec 23 '18 at 12:17 by amWhy
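
A quick check of this reading of $|x-y|$ as a distance (an illustrative Python snippet, not part of the original answer): the result does not depend on which of the two numbers is larger.

```python
def distance(x: float, y: float) -> float:
    """Distance between two real numbers on the number line."""
    return abs(x - y)

assert distance(2.0, 5.0) == distance(5.0, 2.0) == 3.0
assert distance(-1.5, 1.5) == 3.0
print(distance(0.3, 1.0) < 1)  # True: "the distance between 0.3 and 1.0 is less than 1"
```
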
The notation $\vert x\vert$ for the absolute value of $x$ was introduced by Weierstrass in 1841:

    K. Weierstrass, Mathematische Werke, Vol. I (Berlin, 1894), p. 67.

Quoted from [1]:

    ...There has been a real need in analysis for a convenient symbolism for "absolute value" of a given number, or "absolute number," and the two vertical bars introduced in 1841 by Weierstrass, as in $\vert z\vert$, have met with wide adoption;...

Extra information: "absolute" is from the Latin absoluere, "to free from"; hence suggesting: to free from its sign.

[1] Florian Cajori, A History of Mathematical Notations (Two volumes bound as one), Dover Publications, 1993.

My take on a usage example of absolute value:
$$
\min(x,y)=\frac{1}{2}\bigl((x+y)-|x-y|\bigr)
$$
$$
\max(x,y)=\frac{1}{2}\bigl((x+y)+|x-y|\bigr)
$$

– Picaud Vincent, answered Dec 22 '18 at 19:41; edited Dec 23 '18 at 9:27 by WAF

  • @HansLundmark me too, that was the reason why I have another reference, let me write it down. Done.
    – Picaud Vincent, Dec 22 '18 at 19:50
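
The two min/max identities above are easy to check numerically. Here is a small Python sketch (added as an illustration, with randomly chosen test values) verifying them:

```python
import math
import random

def min_via_abs(x: float, y: float) -> float:
    # min(x, y) = ((x + y) - |x - y|) / 2
    return 0.5 * ((x + y) - abs(x - y))

def max_via_abs(x: float, y: float) -> float:
    # max(x, y) = ((x + y) + |x - y|) / 2
    return 0.5 * ((x + y) + abs(x - y))

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    assert math.isclose(min_via_abs(x, y), min(x, y), abs_tol=1e-12)
    assert math.isclose(max_via_abs(x, y), max(x, y), abs_tol=1e-12)
```
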
Because both of them are useful.

You explicitly mentioned the square function, so I want to give some examples. The main idea is that the non-differentiability of $|\cdot|$ at zero is useful in minimization problems.

Estimators

We know that the arithmetic mean $\hat{\mu}=\frac{1}{n}\sum_{i=1}^n x_i$ solves

$$\min_{\mu} \, \sum_{i=1}^n (x_i-\mu)^2,$$

but it is less well-known that the median solves

$$\min_{\mu} \, \sum_{i=1}^n |x_i-\mu|.$$
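
As a quick numerical illustration (a Python sketch with a made-up sample, not part of the original answer), a brute-force scan over candidate centers recovers the sample mean as the minimizer of the squared loss and the sample median as the minimizer of the absolute loss:

```python
import statistics

data = [1.0, 2.0, 2.0, 3.0, 10.0]  # made-up sample with one outlier

def sum_sq(mu):   # sum of squared deviations
    return sum((x - mu) ** 2 for x in data)

def sum_abs(mu):  # sum of absolute deviations
    return sum(abs(x - mu) for x in data)

grid = [i / 1000 for i in range(12001)]                  # candidate centers 0.000 .. 12.000
print(min(grid, key=sum_sq), statistics.mean(data))      # 3.6 and 3.6
print(min(grid, key=sum_abs), statistics.median(data))   # 2.0 and 2.0
```
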



Signal Processing

Let's use image processing as an example. Suppose $g$ is a given, noisy image. We want to find a smoother image $f$ which looks like $g$.

The harmonic $L^2$ minimization model solves

$$-\Delta f + f = g,$$

which turns out to be equivalent to solving the minimization problem

$$\min_{f} \, \int_{\Omega} (f(x,y)-g(x,y))^2 \, dx\,dy + \int_{\Omega} |\nabla f(x,y)|^2 \, dx\,dy.$$

An enhanced version is the ROF model. It solves

$$\min_{f} \, \frac{1}{2}\int_{\Omega} (f(x,y)-g(x,y))^2 \, dx\,dy + \lambda \int_{\Omega} |\nabla f(x,y)| \, dx\,dy.$$

Notice that for an appropriate $\lambda$, the two models differ only in whether the gradient penalty is squared. Another remark is that $|\cdot|$ denotes the Euclidean norm when the argument is a vector; the idea still applies, since the norm remains non-differentiable at zero.



Model Selection

In the classical model selection problem, we are given a set of predictors and a response (in vector form), and we want to decide which predictors are useful. One way is to choose a "good" subset of predictors; another is to shrink the regression coefficients.

The classical regression model solves the minimization problem

$$\min_{\beta_0,\dots,\beta_p} \sum_{i=1}^n \Big(y_i-\beta_0-\sum_{j=1}^p \beta_j x_{ij}\Big)^2.$$

Ridge regression solves

$$\min_{\beta_0,\dots,\beta_p} \sum_{i=1}^n \Big(y_i-\beta_0-\sum_{j=1}^p \beta_j x_{ij}\Big)^2+\lambda \sum_{j=1}^p \beta_j^2,$$

so that larger $\beta_j$ incur a larger penalty.

Another version is the lasso, which solves

$$\min_{\beta_0,\dots,\beta_p} \sum_{i=1}^n \Big(y_i-\beta_0-\sum_{j=1}^p \beta_j x_{ij}\Big)^2+\lambda \sum_{j=1}^p |\beta_j|.$$

– tonychow0929, answered Dec 24 '18 at 5:49
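
To see concretely why the non-smooth $|\cdot|$ penalty behaves differently from the squared penalty, here is a small one-dimensional Python sketch (an illustration added here, not part of the original answer): the squared penalty shrinks a coefficient proportionally, while the absolute-value penalty soft-thresholds it, setting small coefficients exactly to zero. In essence, this is the mechanism behind the sparsity of the lasso and the edge preservation of the ROF model above.

```python
def ridge_shrink(v: float, lam: float) -> float:
    """Minimizer of 0.5*(b - v)**2 + lam*b**2 (squared penalty): proportional shrinkage."""
    return v / (1.0 + 2.0 * lam)

def soft_threshold(v: float, lam: float) -> float:
    """Minimizer of 0.5*(b - v)**2 + lam*abs(b) (absolute-value penalty)."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0  # small inputs are set exactly to zero -> sparsity

for v in (3.0, 0.4, -0.2, -5.0):
    print(v, ridge_shrink(v, lam=0.5), soft_threshold(v, lam=0.5))
```
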