Markov chain with expected values and time optimization


























Maybe I'm approaching this completely the wrong way, but I can't seem to grasp how to do this easily and I hope you can help. The problem itself is, I think, quite complex, so I will split it into three parts. Part 1 I think I have solved, and I would just like to check whether my reasoning is correct. Part 2 is an intermediate step that would possibly help in solving Part 3 and would be an acceptable result if Part 3 cannot be solved. Part 3, however, is the true problem, so if you want to take a swing at that immediately without reading the rest, please be my guest.



Problem description



PART 1



I have a number of states (this can be a large number); as an example, let's say three: A, B and C. These can be modeled as a Markov chain as shown below.



[Markov model diagram]



Aside from the usual transition probabilities, there is another aspect: every state has a value, noted in square brackets in the diagram. From this we can derive the transition matrix P and the value vector V.



$$ P = \left[
\begin{array}{ccc}
0 & 0.3 & 0.7\\
0.1 & 0 & 0.9\\
0.3 & 0.7 & 0
\end{array}
\right],
\qquad
V = \left[
\begin{array}{ccc}
5 & 1 & 2
\end{array}
\right]
$$



The first step of the problem is to get the expected value of each of these states. My approach would be to first calculate the stationary distribution of the chain and then multiply those probabilities with the corresponding values. So something like this:




In order to find the stationary distribution we need to find the
eigenvalues of the transpose of the transition matrix: $$ \det\left(
\begin{array}{ccc} -\lambda & 0.1 & 0.3\\
0.3 & -\lambda & 0.7\\
0.7 & 0.9 & -\lambda \end{array} \right) = 0 $$



The eigenvalue $\lambda = 1$ yields the only all-positive eigenvector
$(0.38, 0.81, 1)$, so the stationary distribution is:
$$ S = \frac{1}{0.38 + 0.81 + 1} \cdot \left(\begin{array}{ccc}0.38 & 0.81 & 1\end{array}\right) = \left(\begin{array}{ccc}0.17 & 0.37 & 0.46\end{array}\right)$$



As such, the expected values are:
$$ V \circ S = \left(\begin{array}{ccc}0.85 & 0.37 & 0.91\end{array}\right)$$
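
For reference, here is a minimal numerical sketch of this computation (Python/NumPy; the variable names are mine, and the printed numbers differ from the hand-rounded ones above only in the last decimals):

```python
import numpy as np

# Transition matrix P and state values V from the example above
P = np.array([[0.0, 0.3, 0.7],
              [0.1, 0.0, 0.9],
              [0.3, 0.7, 0.0]])
V = np.array([5.0, 1.0, 2.0])

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# i.e. eigenvector of P.T, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))   # pick the eigenvalue closest to 1
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()

print(pi)       # ~ [0.174, 0.371, 0.455]
print(V * pi)   # elementwise product ~ [0.87, 0.37, 0.91]
print(V @ pi)   # ~ 2.15: long-run average value per step, if a single number is wanted
```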




So the first question is simply: "Is this the right approach?"



PART 2



Now that we have the basic foundation, the problem is expanded. The next problem can be easily put in words:




What is the chance of reaching state Y before reaching state Z, given a starting state X?




Is there a way this can be calculated for any three states? Note: this does not have to be a general solution for all X, Y and Z, since X and Z will be fixed states. So instead it becomes: given states X and Z, what is the chance that we still reach Y before we reach Z? Another note: any state other than Z may be revisited.



PART 3



The ultimate goal I'm trying to achieve is this:




  • We are at a starting state A

  • We will at some point end at Z



We are looking for the state S with the highest value that we will still visit with only a minimal expected number of steps remaining before we reach Z. So:



$ A \rightarrow $ [n steps] $ \rightarrow S \rightarrow $ [m steps] $ \rightarrow Z $



where we try to find a minimal $m$ combined with a maximal value for $S$.




As such, my thought is that, at the least, there will have to be some function that adjusts the value based on the number of steps taken, or at least a weighting function between the value of S and the number of steps between S and Z.










Tags: optimization markov-chains expected-value






asked Dec 4 '18 at 15:55, edited Dec 4 '18 at 17:58 – Remy Kabel




1 Answer


















          Part 1.



It is a correct approach.



          Part 2.



The trick is to make states Y and Z absorbing, i.e. redirect all the arrows that go out of Y or Z back to themselves. So your example (if X, Y, Z = A, B, C) will turn into this:
[modified Markov model with B and C absorbing]



After that you use the standard technique for absorbing Markov chains to determine the probability of ending in the absorbing state Y (or Z) if you start from state X. Write a comment if you have trouble understanding the formulas in the link.
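
For concreteness, a minimal sketch of that standard absorbing-chain computation (Python/NumPy; the function name and the choice X, Y, Z = A, B, C are just for illustration): split the transition matrix into the transient-to-transient block $Q$ and the transient-to-absorbing block $R$, then $B = (I - Q)^{-1}R$ gives the absorption probabilities.

```python
import numpy as np

def prob_reach_y_before_z(P, y, z, x):
    """P(hit state y before state z | start in transient state x), by making y and z absorbing."""
    n = P.shape[0]
    absorbing = [y, z]
    transient = [i for i in range(n) if i not in absorbing]
    Q = P[np.ix_(transient, transient)]   # transient -> transient block
    R = P[np.ix_(transient, absorbing)]   # transient -> absorbing block
    N = np.linalg.inv(np.eye(len(transient)) - Q)   # fundamental matrix (I - Q)^-1
    B = N @ R                                       # absorption probabilities
    return B[transient.index(x), 0]                 # column 0 corresponds to y

P = np.array([[0.0, 0.3, 0.7],
              [0.1, 0.0, 0.9],
              [0.3, 0.7, 0.0]])

# Example: X, Y, Z = A, B, C (indices 0, 1, 2)
print(prob_reach_y_before_z(P, y=1, z=2, x=0))   # 0.3 for this chain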



          Part 3.



I didn't quite get what you want, but I believe that an essential part of it is the same: if reaching Z is the end of the game for you, you can make this state absorbing and then calculate whatever you want: expected number of steps, probabilities, etc.
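
As a sketch of that last idea (assuming the example chain, Z = C made absorbing, and the criterion $J([S], m_S) = [S] - m_S$ from the comments below as just one possible choice):

```python
import numpy as np

P = np.array([[0.0, 0.3, 0.7],
              [0.1, 0.0, 0.9],
              [0.3, 0.7, 0.0]])
V = np.array([5.0, 1.0, 2.0])

z = 2                                              # make Z = C absorbing
transient = [i for i in range(len(P)) if i != z]
Q = P[np.ix_(transient, transient)]
N = np.linalg.inv(np.eye(len(transient)) - Q)      # fundamental matrix (I - Q)^-1
m = np.zeros(len(P))
m[transient] = N @ np.ones(len(transient))         # expected steps from each state until Z is hit

# One possible trade-off criterion (see the comments): J(S) = V[S] - m_S.
# Exclude Z itself from the search if it should not count as a candidate S.
J = V - m
best = int(np.argmax(J))
print(m)          # ~ [1.34, 1.13, 0.0]
print(J, best)    # ~ [3.66, -0.13, 2.0], best state index 0 (A)
```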






answered Dec 4 '18 at 16:48 – Vasily Mitch

• Hi, thanks so much for replying! I have clarified my description of Part 3. Indeed, I'll have to make Z absorbing, but I don't yet see how I could get to a min/max problem between the value and the number of steps. – Remy Kabel, Dec 4 '18 at 18:06










• It's still not clear what function $J([S], m_S)$ you are trying to maximize. In the general case, you can't have both $[S]$ maximal and $m$ minimal. So if the number of states isn't ridiculously large, you calculate $m_S$ for each state and then see where your chosen criterion $J([S], m_S)$ is maximal. – Vasily Mitch, Dec 4 '18 at 19:15












• You can select $J=[S]/m$ or $J=[S]-m$ as examples and see whether they give you what you would intuitively expect. – Vasily Mitch, Dec 4 '18 at 19:17










• I'll give it a try, thanks! In any case, I've marked your answer! – Remy Kabel, Dec 4 '18 at 23:45










