What is the rate of growth of $M_n := \max_{1 \le i \le n} U_i^{(n)}$, where $U_i^{(n)} \sim...$


























On pp. 370-374 (see this previous question) of Cramér's 1946 Mathematical Methods of Statistics, the author shows that for any continuous distribution $P$ with distribution function $F$, if $X_1, \dots, X_n \sim P$ are i.i.d., then the random variable



$$ \Xi_n \overset{\text{def}}{=} n (1 - F(M_n)) \,, \quad \text{where } M_n \overset{\text{def}}{=} \max_{1 \le i \le n} X_i \,, $$



has the same distribution as the maximum of $n$ i.i.d. random variables $U_1, \dots, U_n$, where $U_i \sim \operatorname{Uniform}(0,n)$, inasmuch as he shows that they have the same density (cf. p. 49 of Keener, Theoretical Statistics).



Cramér later says that we can assume $\Xi_n$ "is bounded", but in what sense precisely is that true?




Question: What is the almost sure rate of growth of the random variables $\Xi_n$? Of $\log(\Xi_n)$?



In particular, is it true that $\log(\Xi_n) = o(\log\log n)$ almost surely?




$\log(\Xi_n) = o(\log\log n)$ a.s. appears to be sufficient to show that $\max_{1 \le i \le n} X_i \sim \sqrt{2 \log n}$ a.s. when $X_i \sim \mathscr{N}(0,1)$, using the argument found here, hence my interest.



In a community wiki "answer" below I have posted what I believe to be a proof that $\Xi_n = o(\log n)$ a.s., at least when $X_i \sim \mathscr{N}(0,1)$, an assumption which would be considered acceptable for any accepted answer. Note that it appears to be sufficient to show, for all $\varepsilon > 0$, that $\Xi_n = o((\log n)^{\varepsilon})$ a.s. in order to conclude $\log(\Xi_n) = o(\log\log n)$ a.s.



Note: For each $n$, $\Xi_n$ has the distribution of the maximum of $n$ i.i.d. uniforms supported on $[0,n]$ (not $[0,1]$ or $[0,\theta]$ for some fixed constant $\theta$), so this problem does not seem completely trivial.
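As a sanity check on the quantities in play, here is a hedged Monte Carlo sketch (standard library only; the seed, sample size, and helper name are arbitrary choices of mine, not anything from Cramér): it draws one path of standard Gaussians, computes $\Xi_n = n(1 - \Phi(M_n))$, and compares $M_n$ with $\sqrt{2 \log n}$:

```python
import math
import random

def std_normal_cdf(x):
    # Phi(x) via the error function from the standard library
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(12345)          # fixed seed so the run is reproducible
n = 200_000                 # one (arbitrary) sample size
m_n = max(random.gauss(0.0, 1.0) for _ in range(n))

xi_n = n * (1.0 - std_normal_cdf(m_n))     # Xi_n = n(1 - F(M_n))

print(m_n / math.sqrt(2.0 * math.log(n)))  # roughly 1
print(xi_n, math.log(n))                   # Xi_n is typically far below log n
```

On a typical run the ratio $M_n / \sqrt{2\log n}$ is near $1$ while $\Xi_n$ is of order $1$, far below $\log n$; of course one path proves nothing about almost sure rates, which is what the question asks.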






































probability-theory asymptotics probability-limit-theorems order-statistics upper-lower-bounds






      edited Dec 19 '18 at 16:59







      Chill2Macht

















      asked Dec 11 '18 at 0:10









          1 Answer
First let's go through a chain of trivialities in order to rephrase the problem in a way which makes it easier to solve. Throughout, write $Z_n := M_n = \max_{1 \le i \le n} X_i$ for the sample maximum, and note that $\Xi_n \ge 0$ by definition:



$$\Xi_n = o(\log n) \quad \iff \quad \lim_{n \to \infty} \frac{\Xi_n}{\log n} = 0 \quad \iff \\ \forall \varepsilon > 0, \ \frac{\Xi_n}{\log n} > \varepsilon \text{ only finitely many times} \quad \iff \\ \forall \varepsilon > 0, \quad \Xi_n > \varepsilon \log n \text{ only finitely many times} \,.$$



          One also has that:



$$\Xi_n > \varepsilon \log n \quad \iff \quad n(1 - F(Z_n)) > \varepsilon \log n \quad \iff \quad 1 - F(Z_n) > \frac{\varepsilon \log n}{n} \\ \iff \quad F(Z_n) < 1 - \frac{\varepsilon \log n}{n} \quad \iff \quad Z_n \le \inf \left\{ y : F(y) \ge 1 - \frac{\varepsilon \log n}{n} \right\} \,. $$



          Correspondingly, define for all $n$:



$$ u_n^{(\varepsilon)} = \inf \left\{ y : F(y) \ge 1 - \frac{\varepsilon \log n}{n} \right\} \,. $$
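Since $F$ is continuous (and, in the Gaussian case of interest, strictly increasing), $u_n^{(\varepsilon)}$ is just the quantile of $F$ at level $1 - \varepsilon \log n / n$. Here is a minimal numerical sketch for $F = \Phi$; the function names and the bisection bracket are my own assumptions, not anything from the references:

```python
import math

def std_normal_cdf(x):
    # Phi(x) via the error function from the standard library
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def u_n(n, eps, cdf=std_normal_cdf):
    """Bisection for inf{y : F(y) >= 1 - eps*log(n)/n}.

    F is continuous and strictly increasing here, so this is just
    the quantile of F at that level."""
    target = 1.0 - eps * math.log(n) / n
    lo, hi = -10.0, 10.0   # ample bracket for any reasonable n
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf(mid) >= target:
            hi = mid
        else:
            lo = mid
    return hi

u = u_n(10**6, 0.5)
# by construction n*(1 - F(u)) = eps*log(n) up to bisection/rounding error
print(u)
```

By design $1 - F(u_n^{(\varepsilon)}) = \varepsilon \log n / n$ exactly (up to numerical error), which is the identity used when the series from the theorem below is evaluated.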



Therefore the above steps show that:

$$\Xi_n = o(\log n) \text{ a.s.} \quad \iff \quad \mathbb{P}(\Xi_n = o(\log n)) = 1 \quad \iff \\
\mathbb{P}(\forall \varepsilon > 0, \ \Xi_n > \varepsilon \log n \text{ only finitely many times}) = 1 \\
\iff \mathbb{P}(\forall \varepsilon > 0, \ Z_n \le u_n^{(\varepsilon)} \text{ only finitely many times}) = 1 \\
\iff \mathbb{P}(\exists \varepsilon > 0 : Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) = 0 \,,$$

where the last step negates the "for all $\varepsilon$, only finitely many times" event.



Notice that we can write:

$$ \{ \exists \varepsilon > 0 : Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} = \bigcup_{\varepsilon > 0} \{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} \,.$$



The sequences $u_n^{(\varepsilon)}$ become uniformly larger as $\varepsilon$ decreases, so the events $$\{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \}$$ increase as $\varepsilon$ decreases to $0$; by this monotonicity the union over all $\varepsilon > 0$ equals the union over the countable family $\varepsilon = 1/k$. Continuity of measure along monotone sequences of events then allows us to conclude that:



$$\mathbb{P}(\exists \varepsilon > 0 : Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) = \\
\mathbb{P}\left( \bigcup_{\varepsilon > 0} \{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} \right) = \\
\mathbb{P}\left( \lim_{\varepsilon \downarrow 0} \{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} \right) = \\
\lim_{\varepsilon \downarrow 0} \mathbb{P}(Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) \,.$$



Therefore it suffices to show that, for all $\varepsilon > 0$,



$$\mathbb{P}(Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) = 0 \,,$$



because the limit of a constant sequence is that constant.



Here is a somewhat sledgehammer result:




Theorem 4.3.1, p. 252 of Galambos, The Asymptotic Theory of Extreme Order Statistics, 2nd edition. Let $X_1, X_2, \dots$ be i.i.d. random variables with common nondegenerate and continuous distribution function $F(x)$, and let $u_n$ be a nondecreasing sequence such that $n(1 - F(u_n))$ is also nondecreasing. Then, for $u_n < \sup \{ x : F(x) < 1 \}$, $$\mathbb{P}(Z_n \le u_n \text{ infinitely often}) = 0 \ \text{ or } \ 1$$
according as
$$\sum_{j=1}^{\infty} [1 - F(u_j)] \exp(-j[1 - F(u_j)]) < \infty \ \text{ or } \ = \infty \,. $$




The proof is technical and takes around five pages, but ultimately it is a corollary of one of the Borel-Cantelli lemmas. I may get around to condensing it, keeping only the parts needed for this analysis and only the assumptions that hold in the Gaussian case, which may (or may not) be shorter, and typing it up here, but holding your breath is not recommended. Note that in our case the right endpoint $\omega(F) = \sup\{x : F(x) < 1\}$ is $+\infty$, so that condition is vacuous, and $n(1 - F(u_n^{(\varepsilon)})) = \varepsilon \log n$, which is clearly nondecreasing.



Anyway, the point is that, appealing to this theorem, it suffices to show that the series converges; and indeed, using continuity of $F$ (so that $1 - F(u_j^{(\varepsilon)}) = \varepsilon \log j / j$):



$$\sum_{j=1}^{\infty} [1 - F(u_j^{(\varepsilon)})] \exp(-j[1 - F(u_j^{(\varepsilon)})]) = \sum_{j=1}^{\infty} \left[ \frac{\varepsilon \log j}{j} \right] \exp(-\varepsilon \log j) = \varepsilon \sum_{j=1}^{\infty} \frac{\log j}{j^{1 + \varepsilon}} < +\infty \,. $$



Note that logarithmic growth is slower than power-law growth with any positive exponent (logarithms and exponentials preserve monotonicity, so $\log \log n \le \alpha \log n \iff \log n \le n^{\alpha}$, and the former inequality holds for all $n$ large enough because $\log n \le n$ together with a change of variables). Hence $\log j \le j^{\varepsilon/2}$ for all $j$ large enough, and since finitely many terms do not affect convergence, we have:



$$ \sum_{j=1}^{\infty} \frac{\log j}{j^{1+\varepsilon}} \le \sum_{j=1}^{\infty} \frac{j^{\varepsilon/2}}{j^{1+\varepsilon}} = \sum_{j=1}^{\infty} \frac{1}{j^{1+\varepsilon/2}} < +\infty \,,$$



since the $p$-series is known to converge for all $p > 1$, and $\varepsilon > 0$ of course implies $1 + \varepsilon/2 > 1$.
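A quick numeric sanity check on this convergence (with the arbitrary choice $\varepsilon = 1/2$; the helper name is mine): doubling the number of terms barely moves the partial sum of $\sum_j \log j / j^{1+\varepsilon}$.

```python
import math

def partial_sum(num_terms, eps):
    # partial sum of  sum_{j >= 1} log(j) / j^(1 + eps)
    return sum(math.log(j) / j ** (1.0 + eps) for j in range(1, num_terms + 1))

eps = 0.5
s_small = partial_sum(100_000, eps)
s_large = partial_sum(200_000, eps)
# the increment matches the tail integral ~ (2 log N + 4)/sqrt(N),
# consistent with convergence (here to a value just below 4)
print(s_small, s_large)
```

Of course a finite computation cannot prove convergence; the comparison with the $p$-series above is what does that.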



Thus, using the above theorem, we have shown that for all $\varepsilon > 0$, $\mathbb{P}(Z_n \le u_n^{(\varepsilon)} \text{ i.o.}) = 0$, which, to recapitulate, means that $\Xi_n = o(\log n)$ almost surely.



It still remains to show that $\log \Xi_n = o(\log \log n)$. This does not follow from the above, since, e.g.,



$$ \frac{\log n}{n} = o(\log n) \,, \quad \text{yet} \quad \log\!\left(\frac{\log n}{n}\right) = \log \log n - \log n \ne o(\log \log n) \,. $$



However, given a sequence $x_n$ (bounded away from zero, so that $\log x_n$ does not diverge to $-\infty$), if one can show that $x_n = o((\log n)^{\delta})$ for arbitrary $\delta > 0$, then it does follow that $\log(x_n) = o(\log \log n)$. Ideally I would like to show this for $\Xi_n$ using the above lemma (assuming it is even true), but I am not able to (as of yet).
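The counterexample above is easy to see numerically; in this sketch (the constants are arbitrary choices of mine) the ratio $\log(x_n)/\log\log n$ for $x_n = \log n / n$ drifts off to $-\infty$ instead of vanishing:

```python
import math

# x_n = log(n)/n is o(log n), yet log(x_n) is NOT o(log log n):
# the ratio log(x_n) / log(log(n)) diverges to -infinity.
ratios = []
for k in (2, 4, 8, 16):
    n = 10 ** k
    x_n = math.log(n) / n
    ratios.append(math.log(x_n) / math.log(math.log(n)))
print(ratios)  # strictly decreasing, well below 0
```

This is exactly why $\Xi_n = o(\log n)$ alone is not enough, while $\Xi_n = o((\log n)^{\delta})$ for every $\delta > 0$ (together with a lower bound) would be.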






          share|cite|improve this answer











          $endgroup$













            Your Answer





            StackExchange.ifUsing("editor", function () {
            return StackExchange.using("mathjaxEditing", function () {
            StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
            StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
            });
            });
            }, "mathjax-editing");

            StackExchange.ready(function() {
            var channelOptions = {
            tags: "".split(" "),
            id: "69"
            };
            initTagRenderer("".split(" "), "".split(" "), channelOptions);

            StackExchange.using("externalEditor", function() {
            // Have to fire editor after snippets, if snippets enabled
            if (StackExchange.settings.snippets.snippetsEnabled) {
            StackExchange.using("snippets", function() {
            createEditor();
            });
            }
            else {
            createEditor();
            }
            });

            function createEditor() {
            StackExchange.prepareEditor({
            heartbeatType: 'answer',
            autoActivateHeartbeat: false,
            convertImagesToLinks: true,
            noModals: true,
            showLowRepImageUploadWarning: true,
            reputationToPostImages: 10,
            bindNavPrevention: true,
            postfix: "",
            imageUploader: {
            brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
            contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
            allowUrls: true
            },
            noCode: true, onDemand: true,
            discardSelector: ".discard-answer"
            ,immediatelyShowMarkdownHelp:true
            });


            }
            });














            draft saved

            draft discarded


















            StackExchange.ready(
            function () {
            StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3034681%2fwhat-is-the-rate-of-growth-of-m-n-max-1-le-i-le-n-u-in-where-u%23new-answer', 'question_page');
            }
            );

            Post as a guest















            Required, but never shown

























            1 Answer
            1






            active

            oldest

            votes








            1 Answer
            1






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes









            0












            $begingroup$

            First let's go through a chain of trivialities in order to rephrase the problem in a way which makes it easier to solve (note that by definition $Xi_n ge 0$):



            $$Xi_n = o(log n) quad iff quad lim_{n to infty} frac{Xi_n}{log n} = 0 quad iff quad \ forall varepsilon > 0, frac{Xi_n}{log n} > varepsilon text{ only finitely many times} quad iff \ forall varepsilon >0, quad Xi_n > varepsilon log n text{ only finitely many times} ,.$$



            One also has that:



            $$Xi_n > varepsilon log n quad iff quad n(1 - F(Z_n)) > varepsilon log n quad iff quad 1 - F(Z_n) > frac{varepsilon log n}{n} \ iff quad F(Z_n) < 1 - frac{varepsilon log n}{n} quad iff quad Z_n le inf left{ y: F(y) ge 1 - frac{varepsilon log n}{n} right} ,. $$



            Correspondingly, define for all $n$:



            $$ u_n^{(varepsilon)} = inf left{ y: F(y) ge 1 - frac{varepsilon log n}{n} right} ,. $$



            Therefore the above steps show us that:



            $$Xi_n = o(log n) text{ a.s.} quad iff quad mathbb{P}(Xi_n = o(log n)) = 1 quad iff quad \
            mathbb{P}(forall varepsilon > 0 , Xi_n > varepsilon log n text{ only finitely many times}) = 1 \
            iff mathbb{P}(forall varepsilon > 0, Z_n le u_n^{(varepsilon)} text{ only finitely many times}) = 1 \
            iff mathbb{P}(forall varepsilon >0, Z_n le u_n^{(varepsilon)} text{ infinitely often}) =0 ,. $$



            Notice that we can write:



            $$ { forall varepsilon >0, Z_n le u_n^{(varepsilon)} text{ infinitely often} } = bigcap_{varepsilon > 0} { Z_n le u_n^{(varepsilon)} text{ infinitely often} } ,.$$



            The sequences $u_n^{(varepsilon)}$ become uniformly larger as $varepsilon$ decreases, so we can conclude that the events $${ Z_n le u_n^{(varepsilon)} text{ infinitely often} } $$ are decreasing (or at least somehow monotonic) as $varepsilon$ goes to $0$. Therefore the probability axiom regarding monotonic sequences of events allows us to conclude that:



            $$mathbb{P}(forall varepsilon >0, Z_n le u_n^{(varepsilon)} text{ infinitely often}) = \
            mathbb{P} left( bigcap_{varepsilon > 0} { Z_n le u_n^{(varepsilon)} text{ infinitely often} } right) = \
            mathbb{P} left( lim_{varepsilon downarrow 0} { Z_n le u_n^{(varepsilon)} text{ infinitely often} } right) = \
            lim_{varepsilon downarrow 0} mathbb{P}(Z_n le u_n^{(varepsilon)} text{ infinitely often}) ,.$$



            Therefore it suffices to show that for all $varepsilon >0$,



            $$mathbb{P}(Z_n le u_n^{(varepsilon)} text{ infinitely often}) = 0 $$



            because of course the limit of any constant sequence is the constant.



            Here is somewhat of a sledgehammer result:




            Theorem 4.3.1., p. 252 of Galambos, The Asymptotic Theory of Extreme Order Statistics, 2nd edition. Let $X_1, X_2, dots$ be i.i.d. variables with common nondegenerate and continuous distribution function $F(x)$, and let $u_n$ be a nondecreasing sequence such that $n(1 - F(u_n))$ is also nondecreasing. Then, for $u_n < sup { x: F(x) <1 }$, $$mathbb{P}(Z_n le u_n text{ infinitely often}) =0 text{ or }1 $$
            according as
            $$sum_{j=1}^{+infty}[1 - F(u_j)]exp(-j[1-F(u_j)]) < +infty text{ or }=+infty ,. $$




            The proof is technical and takes around five pages, but ultimately it turns out to be a corollary of one of the Borel-Cantelli lemmas. I may get around to trying to condense the proof to only use the part required for this analysis as well as only the assumptions which hold in the Gaussian case, which may be shorter (but maybe it isn't) and type it up here, but holding your breath is not recommended. Note that in this case $omega(F)=+infty$, so that condition is vacuous, and $n(1-F(n))$ is $varepsilon log n$ thus clearly non-decreasing.



            Anyway the point being that, appealing to this theorem, if we can show that:



            $$sum_{j=1}^{+infty}[1 - F(u_j^{(varepsilon)})]exp(-j[1-F(u_j^{(varepsilon)})]) = sum_{j=1}^{+infty}left[ frac{varepsilon log j}{j} right]exp(-varepsilon log j) = varepsilon sum_{j=1}^{+infty} frac{ log j}{j^{1 + varepsilon}} < + infty ,. $$



            Note that since logarithmic growth is slower than any power law growth for any positive power law exponent (logarithms and exponentials are monotonicity preserving, so $log log n le alpha log n iff log n le n^{alpha}$ and the former inequality can always be seen to hold for all $n$ large enough due to the fact that $log n le n$ and a change of variables), we have that:



            $$ sum_{j=1}^{+infty} frac{log j}{j^{1 + varepsilon}} le sum_{j=1}^{+infty} frac{j^{varepsilon/2}}{j^{1 + varepsilon}} = sum_{j=1}^{+infty} frac{1}{j^{1 + varepsilon/2}} < +infty ,,$$



            since the p-series is known to converge for all $p>1$, and $varepsilon >0$ of course implies $1 + varepsilon/2 > 1$.



            Thus using the above theorem we have shown that for all $varepsilon >0$, $mathbb{P}(Z_n le u_n^{(varepsilon)} text{ i.o.}) = 0$, which to recapitulate should mean that $Xi_n = o(log n)$ almost surely.



            We need to show still that $log Xi_n = o(log log n)$. This doesn't follow from the above, since, e.g.,



            $$ frac{1}{n} log n = o(log n) ,, - log n + log log n not= o(log n) ,. $$



            However, given a sequence $x_n$, if one can show that $x_n = o( (log n)^{delta})$ for arbitrary $delta >0$, then it does follow that $log(x_n) = o(log log n)$. Ideally I would like to be able to show this for $Xi_n$ using the above lemma (assuming it's even true), but am not able to (as of yet).






            share|cite|improve this answer











            $endgroup$


















              0












              $begingroup$

              First let's go through a chain of trivialities in order to rephrase the problem in a way which makes it easier to solve (note that by definition $Xi_n ge 0$):



              $$Xi_n = o(log n) quad iff quad lim_{n to infty} frac{Xi_n}{log n} = 0 quad iff quad \ forall varepsilon > 0, frac{Xi_n}{log n} > varepsilon text{ only finitely many times} quad iff \ forall varepsilon >0, quad Xi_n > varepsilon log n text{ only finitely many times} ,.$$



              One also has that:



              $$Xi_n > varepsilon log n quad iff quad n(1 - F(Z_n)) > varepsilon log n quad iff quad 1 - F(Z_n) > frac{varepsilon log n}{n} \ iff quad F(Z_n) < 1 - frac{varepsilon log n}{n} quad iff quad Z_n le inf left{ y: F(y) ge 1 - frac{varepsilon log n}{n} right} ,. $$



              Correspondingly, define for all $n$:



              $$ u_n^{(varepsilon)} = inf left{ y: F(y) ge 1 - frac{varepsilon log n}{n} right} ,. $$



              Therefore the above steps show us that:



              $$Xi_n = o(log n) text{ a.s.} quad iff quad mathbb{P}(Xi_n = o(log n)) = 1 quad iff quad \
              mathbb{P}(forall varepsilon > 0 , Xi_n > varepsilon log n text{ only finitely many times}) = 1 \
              iff mathbb{P}(forall varepsilon > 0, Z_n le u_n^{(varepsilon)} text{ only finitely many times}) = 1 \
              iff mathbb{P}(forall varepsilon >0, Z_n le u_n^{(varepsilon)} text{ infinitely often}) =0 ,. $$



              Notice that we can write:



              $$ { forall varepsilon >0, Z_n le u_n^{(varepsilon)} text{ infinitely often} } = bigcap_{varepsilon > 0} { Z_n le u_n^{(varepsilon)} text{ infinitely often} } ,.$$



              The sequences $u_n^{(varepsilon)}$ become uniformly larger as $varepsilon$ decreases, so we can conclude that the events $${ Z_n le u_n^{(varepsilon)} text{ infinitely often} } $$ are decreasing (or at least somehow monotonic) as $varepsilon$ goes to $0$. Therefore the probability axiom regarding monotonic sequences of events allows us to conclude that:



              $$mathbb{P}(forall varepsilon >0, Z_n le u_n^{(varepsilon)} text{ infinitely often}) = \
              mathbb{P} left( bigcap_{varepsilon > 0} { Z_n le u_n^{(varepsilon)} text{ infinitely often} } right) = \
              mathbb{P} left( lim_{varepsilon downarrow 0} { Z_n le u_n^{(varepsilon)} text{ infinitely often} } right) = \
              lim_{varepsilon downarrow 0} mathbb{P}(Z_n le u_n^{(varepsilon)} text{ infinitely often}) ,.$$



              Therefore it suffices to show that for all $varepsilon >0$,



              $$mathbb{P}(Z_n le u_n^{(varepsilon)} text{ infinitely often}) = 0 $$



              because of course the limit of any constant sequence is the constant.



              Here is somewhat of a sledgehammer result:




              Theorem 4.3.1., p. 252 of Galambos, The Asymptotic Theory of Extreme Order Statistics, 2nd edition. Let $X_1, X_2, dots$ be i.i.d. variables with common nondegenerate and continuous distribution function $F(x)$, and let $u_n$ be a nondecreasing sequence such that $n(1 - F(u_n))$ is also nondecreasing. Then, for $u_n < sup { x: F(x) <1 }$, $$mathbb{P}(Z_n le u_n text{ infinitely often}) =0 text{ or }1 $$
              according as
              $$sum_{j=1}^{+infty}[1 - F(u_j)]exp(-j[1-F(u_j)]) < +infty text{ or }=+infty ,. $$




              The proof is technical and takes around five pages, but ultimately it turns out to be a corollary of one of the Borel-Cantelli lemmas. I may get around to trying to condense the proof to only use the part required for this analysis as well as only the assumptions which hold in the Gaussian case, which may be shorter (but maybe it isn't) and type it up here, but holding your breath is not recommended. Note that in this case $omega(F)=+infty$, so that condition is vacuous, and $n(1-F(n))$ is $varepsilon log n$ thus clearly non-decreasing.



              Anyway the point being that, appealing to this theorem, if we can show that:



              $$sum_{j=1}^{+infty}[1 - F(u_j^{(varepsilon)})]exp(-j[1-F(u_j^{(varepsilon)})]) = sum_{j=1}^{+infty}left[ frac{varepsilon log j}{j} right]exp(-varepsilon log j) = varepsilon sum_{j=1}^{+infty} frac{ log j}{j^{1 + varepsilon}} < + infty ,. $$



              Note that since logarithmic growth is slower than any power law growth for any positive power law exponent (logarithms and exponentials are monotonicity preserving, so $log log n le alpha log n iff log n le n^{alpha}$ and the former inequality can always be seen to hold for all $n$ large enough due to the fact that $log n le n$ and a change of variables), we have that:



              $$ sum_{j=1}^{+infty} frac{log j}{j^{1 + varepsilon}} le sum_{j=1}^{+infty} frac{j^{varepsilon/2}}{j^{1 + varepsilon}} = sum_{j=1}^{+infty} frac{1}{j^{1 + varepsilon/2}} < +infty ,,$$



              since the p-series is known to converge for all $p>1$, and $varepsilon >0$ of course implies $1 + varepsilon/2 > 1$.



              Thus using the above theorem we have shown that for all $varepsilon >0$, $mathbb{P}(Z_n le u_n^{(varepsilon)} text{ i.o.}) = 0$, which to recapitulate should mean that $Xi_n = o(log n)$ almost surely.



              We need to show still that $log Xi_n = o(log log n)$. This doesn't follow from the above, since, e.g.,



              $$ frac{1}{n} log n = o(log n) ,, - log n + log log n not= o(log n) ,. $$



              However, given a sequence $x_n$, if one can show that $x_n = o( (log n)^{delta})$ for arbitrary $delta >0$, then it does follow that $log(x_n) = o(log log n)$. Ideally I would like to be able to show this for $Xi_n$ using the above lemma (assuming it's even true), but am not able to (as of yet).






              share|cite|improve this answer











              $endgroup$
















                0












                0








                0





                $begingroup$

                First let's go through a chain of trivialities in order to rephrase the problem in a way which makes it easier to solve (note that by definition $Xi_n ge 0$):



                $$Xi_n = o(log n) quad iff quad lim_{n to infty} frac{Xi_n}{log n} = 0 quad iff quad \ forall varepsilon > 0, frac{Xi_n}{log n} > varepsilon text{ only finitely many times} quad iff \ forall varepsilon >0, quad Xi_n > varepsilon log n text{ only finitely many times} ,.$$



                One also has that:



                $$Xi_n > varepsilon log n quad iff quad n(1 - F(Z_n)) > varepsilon log n quad iff quad 1 - F(Z_n) > frac{varepsilon log n}{n} \ iff quad F(Z_n) < 1 - frac{varepsilon log n}{n} quad iff quad Z_n le inf left{ y: F(y) ge 1 - frac{varepsilon log n}{n} right} ,. $$



                Correspondingly, define for all $n$:



                $$ u_n^{(varepsilon)} = inf left{ y: F(y) ge 1 - frac{varepsilon log n}{n} right} ,. $$



                Therefore the above steps show us that:



                $$Xi_n = o(log n) text{ a.s.} quad iff quad mathbb{P}(Xi_n = o(log n)) = 1 quad iff quad \
                mathbb{P}(forall varepsilon > 0 , Xi_n > varepsilon log n text{ only finitely many times}) = 1 \
                iff mathbb{P}(forall varepsilon > 0, Z_n le u_n^{(varepsilon)} text{ only finitely many times}) = 1 \
                iff mathbb{P}(forall varepsilon >0, Z_n le u_n^{(varepsilon)} text{ infinitely often}) =0 ,. $$



                Notice that we can write:



                $$ { forall varepsilon >0, Z_n le u_n^{(varepsilon)} text{ infinitely often} } = bigcap_{varepsilon > 0} { Z_n le u_n^{(varepsilon)} text{ infinitely often} } ,.$$



First, let's go through a chain of trivialities in order to rephrase the problem in a way which makes it easier to solve (note that by definition $\Xi_n \ge 0$):



$$\Xi_n = o(\log n) \quad \iff \quad \lim_{n \to \infty} \frac{\Xi_n}{\log n} = 0 \quad \iff \quad \\ \forall \varepsilon > 0, \ \frac{\Xi_n}{\log n} > \varepsilon \text{ only finitely many times} \quad \iff \\ \forall \varepsilon > 0, \quad \Xi_n > \varepsilon \log n \text{ only finitely many times} \,.$$



                One also has that:



$$\Xi_n > \varepsilon \log n \quad \iff \quad n(1 - F(Z_n)) > \varepsilon \log n \quad \iff \quad 1 - F(Z_n) > \frac{\varepsilon \log n}{n} \\ \iff \quad F(Z_n) < 1 - \frac{\varepsilon \log n}{n} \quad \iff \quad Z_n \le \inf \left\{ y : F(y) \ge 1 - \frac{\varepsilon \log n}{n} \right\} \,. $$



                Correspondingly, define for all $n$:



$$ u_n^{(\varepsilon)} = \inf \left\{ y : F(y) \ge 1 - \frac{\varepsilon \log n}{n} \right\} \,. $$
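In the Gaussian case $F = \Phi$ this threshold is easy to evaluate numerically. A minimal sketch in Python (standard library only; the function name `u_n` and the sample values of $n$ and $\varepsilon$ are just for illustration):

```python
from math import log
from statistics import NormalDist  # NormalDist() is the standard normal

def u_n(n: int, eps: float) -> float:
    """Threshold u_n^(eps): the (1 - eps*log(n)/n)-quantile of Phi."""
    return NormalDist().inv_cdf(1 - eps * log(n) / n)

# By construction, n * (1 - Phi(u_n^(eps))) = eps * log(n),
# and the threshold grows slowly with n (roughly like sqrt(2 log n)):
for n in (10**2, 10**4, 10**6):
    print(n, round(u_n(n, eps=1.0), 4))
```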



                Therefore the above steps show us that:



$$\Xi_n = o(\log n) \text{ a.s.} \quad \iff \quad \mathbb{P}(\Xi_n = o(\log n)) = 1 \quad \iff \quad \\
\mathbb{P}(\forall \varepsilon > 0, \ \Xi_n > \varepsilon \log n \text{ only finitely many times}) = 1 \\
\iff \mathbb{P}(\forall \varepsilon > 0, \ Z_n \le u_n^{(\varepsilon)} \text{ only finitely many times}) = 1 \\
\iff \mathbb{P}(\forall \varepsilon > 0, \ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) = 0 \,. $$



                Notice that we can write:



$$ \left\{ \forall \varepsilon > 0, \ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \right\} = \bigcap_{\varepsilon > 0} \left\{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \right\} \,.$$



The sequences $u_n^{(\varepsilon)}$ become uniformly larger as $\varepsilon$ decreases, so the events $$\left\{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \right\}$$ are nested (monotone) as $\varepsilon$ goes to $0$. The continuity of probability measures along monotone sequences of events therefore allows us to conclude that:



$$\mathbb{P}(\forall \varepsilon > 0, \ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) = \\
\mathbb{P} \left( \bigcap_{\varepsilon > 0} \left\{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \right\} \right) = \\
\mathbb{P} \left( \lim_{\varepsilon \downarrow 0} \left\{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \right\} \right) = \\
\lim_{\varepsilon \downarrow 0} \mathbb{P}(Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) \,.$$



Therefore it suffices to show that for all $\varepsilon > 0$,



$$\mathbb{P}(Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) = 0 \,,$$



because the limit of a constant sequence is that constant.



                Here is somewhat of a sledgehammer result:




Theorem 4.3.1, p. 252 of Galambos, The Asymptotic Theory of Extreme Order Statistics, 2nd edition. Let $X_1, X_2, \dots$ be i.i.d. variables with common nondegenerate and continuous distribution function $F(x)$, and let $u_n$ be a nondecreasing sequence such that $n(1 - F(u_n))$ is also nondecreasing. Then, for $u_n < \sup \{ x : F(x) < 1 \}$, $$\mathbb{P}(Z_n \le u_n \text{ infinitely often}) = 0 \text{ or } 1 $$
according as
$$\sum_{j=1}^{+\infty}[1 - F(u_j)]\exp(-j[1-F(u_j)]) < +\infty \text{ or } = +\infty \,. $$




The proof is technical and takes around five pages, but ultimately it turns out to be a corollary of one of the Borel–Cantelli lemmas. I may get around to condensing the proof to only the part required for this analysis, using only the assumptions which hold in the Gaussian case (which may or may not be shorter), and typing it up here, but holding your breath is not recommended. Note that in this case $\omega(F) = +\infty$, so that condition is vacuous, and $n(1 - F(u_n^{(\varepsilon)}))$ is $\varepsilon \log n$, thus clearly nondecreasing.



The point being that, appealing to this theorem, it suffices to show that:



$$\sum_{j=1}^{+\infty}[1 - F(u_j^{(\varepsilon)})]\exp(-j[1-F(u_j^{(\varepsilon)})]) = \sum_{j=1}^{+\infty}\left[ \frac{\varepsilon \log j}{j} \right]\exp(-\varepsilon \log j) = \varepsilon \sum_{j=1}^{+\infty} \frac{\log j}{j^{1 + \varepsilon}} < + \infty \,. $$



Note that logarithmic growth is slower than power-law growth for any positive exponent (logarithms and exponentials preserve monotonicity, so $\log \log n \le \alpha \log n \iff \log n \le n^{\alpha}$, and the former inequality holds for all $n$ large enough because $\log n \le n$ together with a change of variables). In particular $\log j \le j^{\varepsilon/2}$ for all sufficiently large $j$, which suffices for convergence, so that:



$$ \sum_{j=1}^{+\infty} \frac{\log j}{j^{1 + \varepsilon}} \le \sum_{j=1}^{+\infty} \frac{j^{\varepsilon/2}}{j^{1 + \varepsilon}} = \sum_{j=1}^{+\infty} \frac{1}{j^{1 + \varepsilon/2}} < +\infty \,,$$



since the $p$-series is known to converge for all $p > 1$, and $\varepsilon > 0$ of course implies $1 + \varepsilon/2 > 1$.
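As a purely numerical sanity check (not a proof), the partial sums of the dominating series flatten out quickly; a sketch, with $\varepsilon = 0.5$ and the truncation points chosen arbitrarily:

```python
from math import log

def partial_sum(eps: float, terms: int) -> float:
    """Partial sum of sum_{j>=1} log(j) / j^(1 + eps)."""
    return sum(log(j) / j ** (1 + eps) for j in range(1, terms + 1))

# Successive partial sums change less and less, consistent with
# convergence for any eps > 0:
s_small = partial_sum(0.5, 10_000)
s_large = partial_sum(0.5, 100_000)
print(s_small, s_large, s_large - s_small)
```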



Thus, using the above theorem, we have shown that for all $\varepsilon > 0$, $\mathbb{P}(Z_n \le u_n^{(\varepsilon)} \text{ i.o.}) = 0$, which, to recapitulate, should mean that $\Xi_n = o(\log n)$ almost surely.
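This is also easy to eyeball in simulation (Gaussian case; a single sample path with a fixed seed, so this only illustrates the claimed behavior rather than providing evidence for the almost-sure statement):

```python
import random
from math import log
from statistics import NormalDist

random.seed(0)
phi = NormalDist()  # standard normal

running_max = float("-inf")
ratios = {}
checkpoints = {10**3, 10**4, 10**5}
for n in range(1, 10**5 + 1):
    running_max = max(running_max, random.gauss(0.0, 1.0))
    if n in checkpoints:
        xi_n = n * (1 - phi.cdf(running_max))  # Xi_n = n(1 - F(M_n))
        ratios[n] = xi_n / log(n)

# Xi_n / log(n) should be small along the path:
for n in sorted(ratios):
    print(n, round(ratios[n], 4))
```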



We still need to show that $\log \Xi_n = o(\log \log n)$. This doesn't follow from the above, since, e.g.,



$$ \frac{1}{n} \log n = o(\log n) \,, \quad \text{yet} \quad \log\!\left( \frac{1}{n} \log n \right) = - \log n + \log \log n \not= o(\log \log n) \,. $$



However, given a sequence $x_n$, if one can show that $x_n = o((\log n)^{\delta})$ for arbitrary $\delta > 0$, then it does follow that $\log(x_n) = o(\log \log n)$. Ideally I would like to be able to show this for $\Xi_n$ using the above theorem (assuming it's even true), but have not been able to (as of yet).
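For completeness, the easy direction of that implication can be spelled out (assuming in addition that $x_n$ is bounded away from $0$, so that $\log x_n$ does not diverge to $-\infty$): if for every $\delta > 0$ one has $x_n \le (\log n)^{\delta}$ for all sufficiently large $n$, then

$$ \log x_n \le \delta \log \log n \quad \text{for all } n \text{ large enough} \,, \qquad \text{so} \qquad \limsup_{n \to \infty} \frac{\log x_n}{\log \log n} \le \delta \,, $$

and letting $\delta \downarrow 0$ gives $\limsup_{n \to \infty} \log x_n / \log \log n \le 0$; the assumed lower bound on $x_n$ handles the $\liminf$.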







                answered Dec 11 '18 at 0:10


























                community wiki





                Chill2Macht






























