Finding a probability by conditioning

Suppose $X_i$, $i=1,2,3,\ldots$ are independent random variables with common distribution $F(x)$. Let $N$ be a geometric random variable with parameter $\alpha$, and suppose each $X_i$ is independent of $N$. Let $M = \max(X_1,\ldots,X_N)$. Find $P(M \leq x)$ by conditioning on $N$, and then find it in a different way. (Hint: for the second part, compute $P(M \leq x \mid N = 1)$ and $P(M \leq x \mid N > 1)$.)



Attempt:

By conditioning, we have

$$ P(M \leq x) = \sum_{i=1}^{\infty} P(M \leq x \mid N=i )\, P(N=i) $$
$$ = \sum_{i=1}^{\infty} \frac{ P(M \leq x,\; N = i ) }{P(N=i)} \cdot P(N=i) = \sum_{i=1}^{\infty} P( X_1 \leq x, X_2 \leq x, \ldots, X_i \leq x,\; N=i)$$



Since the $X_i$ are independent of each other and of $N$, we have

$$ \sum F(x)^N P(N=i) = F^N(x) \sum P(N=i) = F^N(x) $$



Now, for the second part, I'm not quite sure what they mean. Notice that by independence we would have

$$ P(M \leq x \mid N = 1) = P(M \leq x ) $$

$$ P(M \leq x \mid N > 1) = P(M \leq x )$$

Am I misunderstanding the hint?














  • In your first part calculation, note that you are using the dummy variable $i$ in the argument of the pmf $\Pr\{N = i\}$. So when you are simplifying, you should have $F(x)^i$ inside the summand and cannot pull it out.
    – BGM
    Nov 30 at 6:31










  • $M$ is not independent of $N$.
    – d.k.o.
    Dec 2 at 8:40
















asked Nov 30 at 6:06









Jimmy Sabater









1 Answer

































\begin{align}
P(M \le x) &= \sum_{n=1}^\infty P(M \le x \mid N=n)P(N=n)\\
&= \sum_{n=1}^\infty P(\max(X_1, \ldots, X_n ) \le x )(1-\alpha)^{n-1}\alpha\\
&= \frac{\alpha}{1-\alpha}\sum_{n=1}^\infty [F(x)(1-\alpha)]^n \\
&= \frac{\alpha}{1-\alpha}\cdot\frac{F(x)(1-\alpha)}{1-F(x)(1-\alpha)}\\
&= \frac{F(x)\alpha}{1-F(x)(1-\alpha)}
\end{align}
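As a quick sanity check of this closed form (my addition, not part of the original answer), one can simulate $M$ directly. The sketch below takes $X_i \sim \mathrm{Uniform}(0,1)$, so $F(x) = x$; the distribution choice and helper names are just for illustration.

```python
import random

# Monte Carlo check of P(M <= x) = alpha*F(x) / (1 - (1-alpha)*F(x)),
# illustrated with X_i ~ Uniform(0,1), so F(x) = x.
def simulate_M(alpha, rng):
    n = 1
    while rng.random() > alpha:  # N ~ Geometric(alpha) on {1, 2, ...}
        n += 1
    return max(rng.random() for _ in range(n))

def empirical_vs_formula(alpha, x, trials=200_000, seed=0):
    rng = random.Random(seed)
    empirical = sum(simulate_M(alpha, rng) <= x for _ in range(trials)) / trials
    formula = alpha * x / (1 - (1 - alpha) * x)
    return empirical, formula
```

For moderate $\alpha$ and $x$ the empirical frequency and the formula agree to within Monte Carlo error.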



Notice that since $N$ follows a geometric distribution, it has the memoryless property; that is,



$$P(N-1 > n \mid N>1)=\frac{P(N>1+n)}{P(N>1)}=\frac{(1-\alpha)^{n+1}}{(1-\alpha)}=P(N > n)$$
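A small numerical illustration of this identity (my addition, not from the original answer): compute the tails by summing the geometric pmf $P(N=k)=(1-\alpha)^{k-1}\alpha$ and compare the conditional tail with $P(N>n)$.

```python
# Check P(N-1 > n | N > 1) = P(N > n) for N ~ Geometric(alpha),
# computing tails by summing the pmf P(N = k) = (1-alpha)**(k-1) * alpha.
def tail(alpha, n, terms=5000):
    # P(N > n), summed from the pmf (truncated; the remainder is negligible)
    return sum((1 - alpha) ** (k - 1) * alpha for k in range(n + 1, n + 1 + terms))

def conditional_tail(alpha, n):
    # P(N - 1 > n | N > 1) = P(N > n + 1) / P(N > 1)
    return tail(alpha, n + 1) / tail(alpha, 1)
```

Both quantities also match the closed-form tail $(1-\alpha)^n$, which is exactly the memoryless property used above.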



Hence,
\begin{align}
P(M \le x) &= P(M \le x \mid N=1)P(N=1)+P(M \le x \mid N>1)P(N>1)\\
&= F(x)\alpha + (1-\alpha) P(M \le x \mid N > 1)\\
&= F(x) \alpha +(1-\alpha)P(X_1 \le x)P(\max(X_2, \ldots, X_N) \le x \mid N > 1)\\
&= F(x)\alpha + (1-\alpha)F(x)P(\max(X_1, \ldots, X_{N-1} )\le x \mid N >1)\\
&= F(x) \alpha + (1-\alpha) F(x) P(M \le x)
\end{align}



Hence $$(1-(1-\alpha)F(x))P(M \le x) = F(x) \alpha$$



$$P( M \le x) = \frac{F(x) \alpha}{1-(1-\alpha)F(x)}$$



























  • On the second line, why is $P( M \leq x \mid N = n ) = P( \max (X_1,\ldots,X_n) \leq x) $?
    – Jimmy Sabater
    Dec 3 at 5:28












  • $P(M \le x \mid N=n) = P(\max(X_1, \ldots, X_N ) \le x \mid N=n) = P(\max(X_1, \ldots, X_n ) \le x \mid N=n)=P(\max(X_1, \ldots, X_n ) \le x)$
    – Siong Thye Goh
    Dec 3 at 5:46










  • In the fourth equality of the second derivation of $P(M \leq x)$, how does $X_1$ appear again, and how was it pulled out of the max function?
    – Neymar
    Dec 4 at 3:47










  • $X_2, \ldots, X_N \mid N > 1$ and $X_1, \ldots, X_{N-1} \mid N>1$ follow the same distribution.
    – Siong Thye Goh
    Dec 4 at 3:56










  • But I'm not given that the $X_i$ are identically distributed.
    – Neymar
    Dec 4 at 4:00











answered Dec 2 at 9:04









Siong Thye Goh












