Understanding the concept of conditional probability [duplicate]
This question already has an answer here:
Trying to derive a result on conditional probability (1 answer)
We have $X_1, X_2, \ldots$ independent random variables with common distribution $F(x)$, and $N$ a geometric random variable independent of the $X_i$'s. Let $M = \max(X_1, \ldots, X_N)$. I'm trying to understand the following:
I'm having trouble understanding the first and third equalities. This is how I view the third equality:
$$ P(M \leq x, N=n \mid N > 1 ) = \frac{P(M \leq x, N=n, N>1 )}{P(N > 1) } = \frac{P(M\leq x \mid N=n, N>1)\,P(N=n, N>1)}{P(N>1)} = \frac{P(M\leq x \mid N=n, N>1)\, P(N=n \mid N>1)\,P(N>1)}{P(N>1)}= P(M\leq x \mid N=n, N>1)\, P(N=n \mid N>1) $$
Is this the correct reasoning? Also, does the first equality follow by definition?
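To convince myself, here is a quick simulation sketch (the concrete choices $p = 0.4$ for the geometric parameter, $X_i \sim \mathrm{Uniform}(0,1)$ so $F(x) = x$, $x = 0.7$ and $n = 3$ are my own, not from the quoted derivation) comparing a Monte Carlo estimate of $P(M \leq x, N = n \mid N > 1)$ with the factorized form $F(x)^n \, P(N = n \mid N > 1)$:

import random

# Simulation sketch: estimate P(M <= x, N = n | N > 1) and compare it with the
# factorized form P(M <= x | N = n, N > 1) * P(N = n | N > 1).
# Assumptions (not from the original post): N is geometric on {1, 2, ...} with
# success probability p, and X_i ~ Uniform(0, 1), so F(x) = x and
#   P(M <= x | N = n) = x**n,   P(N = n | N > 1) = (1 - p)**(n - 2) * p  for n >= 2.
random.seed(0)
p, x, n, trials = 0.4, 0.7, 3, 500_000

hits = total_gt1 = 0
for _ in range(trials):
    # Draw N as the index of the first success in a sequence of Bernoulli(p) trials.
    N = 1
    while random.random() > p:
        N += 1
    if N > 1:
        total_gt1 += 1
        M = max(random.random() for _ in range(N))  # M = max(X_1, ..., X_N)
        if N == n and M <= x:
            hits += 1

simulated = hits / total_gt1
factorized = x**n * (1 - p)**(n - 2) * p
print(simulated, factorized)  # should agree up to Monte Carlo noise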
probability
edited Dec 4 '18 at 22:40
asked Dec 4 '18 at 22:20
Neymar
marked as duplicate by Did
Dec 4 '18 at 22:42
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
Please stop multiplying the duplicates on the exact same problem and try to concentrate on understanding at least some of the multiple explanations you already received about it.
– Did
Dec 4 '18 at 22:43
1 Answer
The first equality is expressing the Law of Total Probability (in essence).
The event $M \leq x$ is the same as the event $M \leq x \wedge N \in \{1, 2, 3, \ldots\}$ (because the second part is just "$N$ takes a valid value"). You can then break the second part into the disjoint events $N = 1$, $N = 2$, $N = 3$, etc., and then the probability of the overall event is the sum of the individual probabilities.
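In symbols, conditioning on $N > 1$ throughout as in the quoted derivation (so the sum presumably starts at $n = 2$), this decomposition reads
$$ P(M \leq x \mid N > 1) = \sum_{n=2}^{\infty} P(M \leq x, N = n \mid N > 1). $$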
The third equality is then using the normal rule of conditional probability: $P(A \wedge B) = P(A \mid B) P(B)$. All that is happening is that it's already conditioned on $N > 1$, but that essentially just changes the "universe" we're working in (i.e. for the sake of these probabilities, we are working in a universe where we already know that $N > 1$). So, if we ignore the $\mid N > 1$ part, it becomes:
$P(M \leq x \wedge N = n) = P(M \leq x \mid N = n) P(N = n)$
But where we have two things we're conditioning on, that winds up being expressed as the intersection of the two events, i.e. $P((\cdot \mid N = n) \mid N > 1) = P(\cdot \mid N = n, N > 1)$.
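Spelled out with $A = \{M \leq x\}$, $B = \{N = n\}$ and $C = \{N > 1\}$, this is the chain rule applied inside the conditioned universe,
$$ P(A, B \mid C) = \frac{P(A, B, C)}{P(C)} = \frac{P(A \mid B, C)\,P(B, C)}{P(C)} = P(A \mid B, C)\,P(B \mid C), $$
which matches the chain of equalities in the question. (Note also that $\{N = n\} \subseteq \{N > 1\}$ for $n \geq 2$, so conditioning on both events is the same as conditioning on $N = n$ alone.)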
answered Dec 4 '18 at 22:35
ConMan
I don't understand why the first equality is the law of total probability. Don't we have $$ P(A) = \sum P(A \mid B) P(B)? $$ Where is the $B$ in this case?
– Neymar
Dec 4 '18 at 22:41
No, because we're not summing over a bunch of conditions; we're breaking up the event $A$ itself.
– ConMan
Dec 5 '18 at 0:19
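In other words, the $B$'s here can presumably be taken to be the events $B_n = \{N = n\}$; breaking up the event $A$ itself and summing conditionals then give the same thing:
$$ P(A) = \sum_{n} P(A \cap B_n) = \sum_{n} P(A \mid B_n)\,P(B_n). $$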