Intuition behind independence and conditional probability












I have a good intuition that $A$ is independent of $B$ if $P(A \vert B) = P(A)$, and I see how you can easily derive from this that it must hold that $P(A,B) = P(A)P(B)$.



But the first statement is not normally taken as a definition; instead, the second is.



What is the intuition, or even the derivation, behind defining $A$ and $B$ as independent iff $P(A,B) = P(A)P(B)$?



The kind of explanation I am looking for is one similar to that given by Jaynes for the definition of conditional probability in the first chapter of Probability Theory: The Logic of Science; even a Kolmogorov-style axiomatic explanation would help.
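For reference, the derivation alluded to above: taking $P(A \vert B) = P(A,B)/P(B)$ as the definition of conditional probability, and assuming $P(B) > 0$,

$$P(A \vert B) = P(A) \iff \frac{P(A,B)}{P(B)} = P(A) \iff P(A,B) = P(A)\,P(B).$$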










Tags: probability, definition, motivation

asked Oct 4 '12 at 5:22 by zenna
edited Jan 2 at 13:14 by nbro
3 Answers


Answer (5 votes)
Arguing from the intuitive idea of probability (be it frequentist, Bayesian, or à la Jaynes), what can we say about $P(AB)$? Let us assume that $P(A) \le P(B)$. Since $AB \subseteq B$, we can safely deduce that $P(AB) \le P(B)$. By looking at well-known and elementary examples, it is easy to be convinced that $P(AB)$ can attain any value between $0$ and $P(B)$. But examining these cases shows that the extreme values, close to $0$ or close to $P(B)$, are obtained when the information that $B$ has occurred either severely conflicts with $A$ occurring (to get close to $0$) or strongly correlates with $A$ occurring (to get close to $P(B)$).



Now, more mathematically, one value in the range of $P(AB)$ that appears naturally is, of course, $P(A)P(B)$, so it is natural to investigate when that value occurs. Notice that it is symmetric in $A$ and $B$. Since the exact location of $P(AB)$ in its possible range is highly sensitive to whether, and how, $A$ and $B$ influence each other, the special value $P(A)P(B)$, being symmetric in its arguments, must mean that the mutual influences are neutral. That neutrality is another way of thinking about independence. Thus we turn the intuition into a definition and say that $A$ and $B$ are independent if, and only if, $P(AB) = P(A)P(B)$.






answered Oct 4 '12 at 6:38 by Ittay Weiss, edited May 20 '15 at 9:00
– Assad Ebrahim (May 19 '14 at 8:15): +1 -- this is a "motivated" definition, in the sense of the OP's question, i.e. it presents a plausible story of how one might have proceeded and arrived at the current definitions. Very nice!






– Christian Bueno (May 20 '15 at 5:54): This is a great explanation. And it works where conditional probability often is undefined (i.e. when $P(B)=0$).
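A concrete illustration of the extremes and of the neutral product value described in the answer above (the die and the particular events are chosen here purely as an example): roll a fair die and let $A = \{2,4,6\}$, so $P(A) = \tfrac{1}{2}$. Then

$$B = \{1,3\}: \qquad P(AB) = 0,$$
$$B = \{2,4\}: \qquad P(AB) = P(B) = \tfrac{1}{3},$$
$$B = \{1,2,3,4\}: \qquad P(AB) = \tfrac{1}{3} = \tfrac{1}{2} \cdot \tfrac{2}{3} = P(A)\,P(B).$$

In the first case $B$ rules out $A$, in the second $B$ guarantees $A$ (the upper extreme $P(AB) = P(B)$), and in the third, knowing that $B$ occurred leaves the odds of $A$ unchanged: $P(A \vert B) = \tfrac{1/3}{2/3} = \tfrac{1}{2} = P(A)$. That symmetric middle case is exactly the neutrality the product condition captures.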





















Answer (0 votes)
They are equivalent when $P(B) \neq 0$.



The problem with $P(A|B) = P(A)$ is figuring out what $P(A|B)$ would mean if $P(B) = 0$.






answered Oct 4 '12 at 5:32 by user57159
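A standard way to see the issue (the specific example here is illustrative): let $X$ be uniform on $[0,1]$ and take $B = \{X = \tfrac{1}{2}\}$, so $P(B) = 0$. For every event $A$,

$$P(A \cap B) \le P(B) = 0 = P(A)\,P(B),$$

so the product definition applies without any division and declares $A$ and $B$ independent, whereas $P(A|B) = P(A \cap B)/P(B)$ is the indeterminate ratio $0/0$.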





















Answer (-2 votes)
As a definition of independence, $P(A,B) = P(A)P(B)$ uses intuitively simple concepts: the probability that both events happen and the probabilities that each of them happens. It may not be intuitive to everyone why this is the definition, but what it says is intuitive.



$P(A \vert B) = P(A)$ uses the less intuitively simple concept of conditional probability, which itself needs both definition and understanding.






answered Oct 4 '12 at 14:18 by Henry, edited Jan 2 at 15:18