Minimum gradient of curve required to bound fixed area.
I'm trying to calculate



$$\inf\big\{\Vert f'\Vert_\infty : f\in C^1[0, 1],\ f \ge 0,\ f(0)=a,\ f(1)=b,\ \Vert f \Vert_1 = D\big\}$$



Intuition tells me that the infimum should be the gradient of a straight line segment to a point $(0.5, y)$, with $y$ chosen appropriately to bound the correct area. However, I can't think how to go about proving this.



Also, if I wanted to introduce a further constraint on higher-order derivatives, say $\Vert f''\Vert_\infty < \lambda$, how could I plug this in?
Tag: real-analysis
edited Dec 4 '18 at 11:16 by user1894205

asked Dec 4 '18 at 9:44 by user1894205
1 Answer
This is a partial solution (not very detailed in one part) for sufficiently large $D$: the condition $f \ge 0$ is ignored at first, and we later check for which $D$ the minimizing $f$ actually fulfills this condition.



Similar to what the OP suggested, we want to find a point $(x_0,y_0)$ with $0 < x_0 < 1$ and consider the piecewise linear function



$$f_0(x) =
\begin{cases}
a+\frac{y_0-a}{x_0}x, &\text{for } 0 \le x \le x_0, \\
y_0+\frac{b-y_0}{1-x_0}(x-x_0), &\text{for } x_0 \le x \le 1.
\end{cases}$$



First we note that this is well defined, as both branches give $f_0(x_0)=y_0$; in addition, $f_0(0)=a$ and $f_0(1)=b$ hold.



This $f_0$ is usually not differentiable at $x_0$, but it should be clear that it can be approximated by a sequence $g_n(x)$ of $C^1$ functions that change $f_0$ only in a smaller and smaller neighbourhood of $x_0$ and monotonically interpolate the derivative from the initial slope $\frac{y_0-a}{x_0}$ to the final slope $\frac{b-y_0}{1-x_0}$. That means $\lim_{n\to \infty} \Vert g_n\Vert_1 = \Vert f_0\Vert_1$ can be achieved, as well as $\lim_{n\to \infty} \Vert g'_n\Vert_\infty = \Vert f'_0\Vert_\infty$.
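To make this approximation concrete, here is a small numeric sketch (my own addition, with hypothetical example values for $a$, $b$, $(x_0,y_0)$; the answer does not fix any): the kink of $f_0$ is replaced by a quadratic blend on $[x_0-\varepsilon, x_0+\varepsilon]$, so the derivative runs linearly from one slope to the other, $\sup|g'| = \max(|m_1|,|m_2|)$ stays fixed, and the $L^1$ gap shrinks like $O(\varepsilon^2)$.

```python
import math

# Hypothetical example values (not from the original answer): endpoints a, b
# and a meeting point (x0, y0) for the two segments of f_0.
a, b = 1.0, 0.0
x0, y0 = 0.3, 1.7
m1 = (y0 - a) / x0          # slope of the first branch
m2 = (b - y0) / (1 - x0)    # slope of the second branch

def f0(x):
    """The piecewise linear f_0 from the answer (kink at x0)."""
    return a + m1 * x if x <= x0 else y0 + m2 * (x - x0)

def g(x, eps):
    """C^1 smoothing g_n: on [x0-eps, x0+eps] the derivative runs linearly
    from m1 to m2 (a quadratic blend), so sup|g'| = max(|m1|, |m2|);
    outside that window g equals f0."""
    if abs(x - x0) >= eps:
        return f0(x)
    t = x - (x0 - eps)
    return f0(x0 - eps) + m1 * t + (m2 - m1) * t * t / (4 * eps)

def l1_norm(func, n=20000):
    """Trapezoid rule on [0, 1]; equals the L1 norm since func >= 0 here."""
    h = 1.0 / n
    return sum(0.5 * h * (func(i * h) + func((i + 1) * h)) for i in range(n))

base = l1_norm(f0)
gaps = [abs(l1_norm(lambda x: g(x, eps)) - base) for eps in (0.1, 0.01, 0.001)]
# gaps shrink like O(eps^2): ||g_n||_1 -> ||f_0||_1 while sup|g'_n| stays fixed
```

Shrinking the window is exactly the sequence $g_n$ described above: the derivative bound never exceeds the larger of the two slopes, while the $L^1$ perturbation vanishes.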



What we want to find is the point $(x_0,y_0)$ that satisfies two conditions:



$$\Vert f_0\Vert_1 = D,$$



$$\frac{y_0-a}{x_0} = - \frac{b-y_0}{1-x_0}.$$



The first should be clear; the second means that the two branches give $f'_0$ the same absolute value.



I'll skip the details of actually determining those values: the first equation leads to a linear equation in $x_0$ and $y_0$, the second to an equation containing the term $x_0y_0$ and otherwise linear terms. Geometric considerations show that this has exactly one solution with $0 < x_0 < 1$ if $D \neq \frac{a+b}2$ (in that case every $(x_0,y_0)$ that lies on the line segment from $(0,a)$ to $(1,b)$ is a solution).
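As a check on this claim, here is my own sketch of the computation (not part of the answer; it assumes $f_0 \ge 0$, so that $\Vert f_0\Vert_1$ is the trapezoid area $x_0\frac{a+y_0}{2} + (1-x_0)\frac{y_0+b}{2}$). Eliminating $y_0 = 2D - b - (a-b)x_0$ from the area equation turns the equal-slope condition into the quadratic $(a-b)x_0^2 + 2(b-D)x_0 + D - \frac{a+b}{2} = 0$:

```python
import math

def meeting_point(a, b, D):
    """Solve for (x0, y0): equal-|slope| condition plus area = D.
    Assumes f_0 >= 0, so ||f_0||_1 is the area of two trapezoids:
        x0*(a + y0)/2 + (1 - x0)*(y0 + b)/2 = D,
        (y0 - a)/x0 = -(b - y0)/(1 - x0).
    Eliminating y0 = 2D - b - (a - b)*x0 gives
        (a - b)*x0^2 + 2*(b - D)*x0 + D - (a + b)/2 = 0."""
    A, B, C = a - b, 2 * (b - D), D - (a + b) / 2
    if abs(A) < 1e-12:           # a == b: the quadratic degenerates to linear
        x0 = -C / B              # this gives x0 = 1/2, matching the OP's intuition
    else:
        roots = [(-B + s * math.sqrt(B * B - 4 * A * C)) / (2 * A) for s in (1, -1)]
        x0 = next(r for r in roots if 0 < r < 1)   # unique root in (0, 1)
    y0 = 2 * D - b - (a - b) * x0
    return x0, y0

# Hypothetical example: a = 1, b = 0, D = 1.
x0, y0 = meeting_point(1.0, 0.0, 1.0)
slope_left = (y0 - 1.0) / x0            # first branch
slope_right = (0.0 - y0) / (1.0 - x0)   # second branch, equal magnitude
```

Note the degenerate case $a = b$ lands exactly at $x_0 = \tfrac12$, which is the OP's conjectured midpoint.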



That means we have now found a function $f_e$, equal to $f_0$ for this specific choice of $(x_0,y_0)$, that fulfills the conditions and has a certain $\Vert f'_e\Vert_\infty$.



We now prove that $\Vert f'_e\Vert_\infty$ is the infimum under consideration. Depending on the sign of $D-\frac{a+b}2$ the proof is slightly different; let's start with $D \ge \frac{a+b}2$. Then, by the definition of $f_e$ and the condition $\Vert f_e\Vert_1=D$, $f_e$ lies at or above the line $y=a+(b-a)x$ that connects $(0,a)$ and $(1,b)$. It also means the first branch of $f_e$ is increasing, while the second is decreasing.



Let $f$ be an arbitrary function that fulfills the conditions to be considered for the infimum. If $f(x_1) > f_e(x_1)$ for some $0 < x_1 \le x_0$, then by the mean value theorem there is an $x_2 \in (0,x_1)$ with $f'(x_2)=\frac{f(x_1)-f(0)}{x_1} > \frac{f_e(x_1)-f_e(0)}{x_1} = \Vert f'_e\Vert_\infty$, which implies $\Vert f'\Vert_\infty > \Vert f'_e\Vert_\infty$.



Similarly, if $f(x_1) > f_e(x_1)$ for some $x_0 \le x_1 < 1$, then there would be an $x_2 \in (x_1,1)$ with $f'(x_2)=\frac{f(1)-f(x_1)}{1-x_1} < \frac{f_e(1)-f_e(x_1)}{1-x_1} = -\Vert f'_e\Vert_\infty$. Again (recalling that $f'(x_2)$ is negative), this would imply $\Vert f'\Vert_\infty > \Vert f'_e\Vert_\infty$.



Recap: we've shown that any function $f$ that fulfills the conditions and aims for a smaller $\Vert f'\Vert_\infty$ than our $f_e$ has to lie at or below $f_e$. Now if it is equal to $f_e$, it can't have a smaller $\Vert f'\Vert_\infty$. If it is not equal, it must be smaller at some point $x_3$, and by continuity in some neighborhood of $x_3$. But that means $\Vert f\Vert_1 < D$, which is a contradiction.



The case $D \le \frac{a+b}2$ works basically the same way: one now shows that any $f$ that goes below $f_e$ has a higher $\Vert f'\Vert_\infty$ than $f_e$, and that the opposite case implies $\Vert f\Vert_1 > D$.



Now, in all of this the condition $f \ge 0$ was dropped. If our $f_e$ fulfills that condition, the stated infimum is obviously also the infimum with the condition added. This can be checked in a given case by comparing $y_0$ with $0$.



If not, then a more detailed analysis is needed. Basically, if $D$ is too small, $f$ needs to go down to $0$ fast, so that the integral doesn't exceed $D$. In that case, a three-piece piecewise linear function seems the most natural idea: from $(0,a)$ to $(x_1,0)$ to $(x_2,0)$ to $(1,b)$. Again, the derivatives in the first and last branches should have equal absolute value and opposite signs.
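In this small-$D$ regime the three-piece ansatz even yields a closed form. The following derivation is my own sketch (it assumes $a, b > 0$ and writes $s$ for the common absolute slope of the two non-flat segments):

$$\frac{a}{x_1} = \frac{b}{1-x_2} = s,
\qquad
\frac{a\, x_1}{2} + \frac{b\,(1-x_2)}{2} = D.$$

Substituting $x_1 = a/s$ and $1-x_2 = b/s$ into the area condition gives

$$\frac{a^2 + b^2}{2s} = D
\quad\Longrightarrow\quad
s = \Vert f'_e\Vert_\infty = \frac{a^2+b^2}{2D},$$

which is admissible (i.e. $x_1 \le x_2$) precisely when $D \le \frac{a^2+b^2}{2(a+b)}$.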
• That's great, thanks, and it also agrees with the intuition I had, including the case where $D$ is small (I envisaged the same method of interpolating a point at $x=0.5$, but enforcing $f>0$ splits it into three line segments). Some rough-and-ready calculations along these lines lead to some nasty-looking algebra. However, I was hoping there was a more high-powered method of deriving the function, which might allow the additional constraint on higher derivatives to be mixed in.
  – user1894205, Dec 5 '18 at 8:44
answered Dec 4 '18 at 13:06 by Ingix