Expected time till absorption in a specific state of a Markov chain

This question is a follow-up to "Expected number of steps for reaching a specific absorbing state in an absorbing Markov chain" because I don't understand the answer given there. I think I need to see a concrete example.



Suppose I play red and black at a casino. On each play, I either win the amount I staked, with probability $p<\frac12$, or I lose my stake. Let's say I start with a bankroll of $2$ and I decide to play until I have won $3$ or lost everything. My strategy is to bet just enough to reach my goal, or everything I have, whichever is less.



We have a Markov chain with $6$ states, $2$ of which are absorbing. There are well-known methods for determining the probability of winning and the average number of plays I make, but what if I want to know the average number of plays I make given that I reach my goal? The transition matrix, with $q=1-p$, is
$$\begin{bmatrix}
1&0&0&0&0&0\\
q&0&p&0&0&0\\
q&0&0&0&p&0\\
0&q&0&0&0&p\\
0&0&0&q&0&p\\
0&0&0&0&0&1
\end{bmatrix}$$
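To make the "well-known methods" concrete, here is a short numpy sketch of the two computations I can already do, via the fundamental matrix $N=(I-Q)^{-1}$ of the absorbing chain; the value $p=0.4$ is purely illustrative.

    import numpy as np

    # The computations I already know how to do, via the fundamental matrix.
    # Transient states are bankrolls 1,2,3,4; absorbing states are 0 and 5.
    p, q = 0.4, 0.6                     # illustrative value of p < 1/2

    Q = np.array([                      # transient -> transient
        [0, p, 0, 0],                   # from 1: win -> 2
        [0, 0, 0, p],                   # from 2: win -> 4
        [q, 0, 0, 0],                   # from 3: lose -> 1
        [0, 0, q, 0],                   # from 4: lose -> 3
    ])
    R = np.array([                      # transient -> absorbing (cols: 0, 5)
        [q, 0],
        [q, 0],
        [0, p],
        [0, p],
    ])

    N = np.linalg.inv(np.eye(4) - Q)    # fundamental matrix
    B = N @ R                           # absorption probabilities
    t = N @ np.ones(4)                  # expected plays, regardless of outcome

    print("P(win | start at 2) =", B[1, 1])
    print("E[plays until game ends, from 2] =", t[1])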



Henning Makholm suggests two approaches, neither of which I can follow. (This is because of my limitations, and is in no way intended as a criticism of the answer.) The first assumes that I have figured out the probability of winning starting from every possible bankroll. Then we are to compute new transition probabilities that describe the experience of the winners, as I understand it, and compute the time to absorption in the new chain. Let $p_k$ be the probability of winning if my bankroll is $k$. How should I calculate the new transition matrix?
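I'm not sure this is right (that is part of what I'm asking), but my best guess at "the experience of the winners" is the rescaling $\hat p_{jk} = p_{jk}\,w_k/w_j$, where $w_j$ is the winning probability from state $j$ (what I called $p_k$ above; renamed in the code to avoid clashing with the transition probability $p$). A sketch of that guess, with $p=0.4$ purely for illustration:

    import numpy as np

    # Tentative reading of the first method (this rescaling is my guess,
    # not something I'm sure of): condition on winning by weighting each
    # transition j -> k by w_k / w_j, where w_j = P(reach state 5 from j).
    p, q = 0.4, 0.6
    P = np.array([
        [1, 0, 0, 0, 0, 0],
        [q, 0, p, 0, 0, 0],
        [q, 0, 0, 0, p, 0],
        [0, q, 0, 0, 0, p],
        [0, 0, 0, q, 0, p],
        [0, 0, 0, 0, 0, 1],
    ], dtype=float)

    Q = P[1:5, 1:5]                     # transient block (states 1..4)
    w = np.zeros(6)
    w[5] = 1.0
    # Win probabilities solve (I - Q) w = (probability of winning in one step)
    w[1:5] = np.linalg.solve(np.eye(4) - Q, P[1:5, 5])

    # Conditioned chain on the states that can still win: 1,2,3,4,5.
    keep = [1, 2, 3, 4, 5]              # state 0 drops out since w[0] = 0
    Phat = np.array([[P[j, k] * w[k] / w[j] for k in keep] for j in keep])
    assert np.allclose(Phat.sum(axis=1), 1)   # rows sum to 1, a good sign

    # Expected plays until absorption in the conditioned chain, from state 2:
    Qhat = Phat[:4, :4]
    t = np.linalg.solve(np.eye(4) - Qhat, np.ones(4))
    print("E[plays | win, start at 2] =", t[1])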



Henning Makholm gives an alternative method if there is only one starting state we're interested in, and I'm only interested in the actual case where I start in state $2$. He says, "first set up a system of equations that compute for each state the expected number of times one will encounter that state before being absorbed." If we let $e_k$ be this number for state $k=1,2,3,4$, how do we construct the equations relating the $e_k$? I can see how to do this if we only care about the number of plays until the game ends, but how do I make it reflect only the number of plays until winning?
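For reference, here is the unconditional version I can set up, where $e = s(I-Q)^{-1}$ counts expected visits regardless of the outcome; what I can't see is how to make the analogous equations count only the winners' plays.

    import numpy as np

    # Expected visits e_k to each transient state k before absorption by
    # either boundary, starting from state 2.  These satisfy e = s + e Q
    # (s = indicator of the start state), i.e. e = s (I - Q)^{-1}.
    p, q = 0.4, 0.6
    Q = np.array([
        [0, p, 0, 0],
        [0, 0, 0, p],
        [q, 0, 0, 0],
        [0, 0, q, 0],
    ])
    s = np.array([0.0, 1.0, 0.0, 0.0])           # start in state 2
    e = np.linalg.solve((np.eye(4) - Q).T, s)    # solves e (I - Q) = s
    print({k: ek for k, ek in zip([1, 2, 3, 4], e)})
    print("E[total plays] =", e.sum())           # matches N·1 from above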







asked Jul 15 at 1:20 · edited Jul 15 at 1:35
saulspatz

Comments

  • Did you deliberately choose an example where each state has only two successors? I think in this case, in the first method you can set up a system of linear equations for the transition probabilities conditional on winning; but that won't work in the same way if you have a more general transition matrix, as you'll have too many unknowns and too few equations.
    – joriki
    Jul 15 at 7:20










  • @joriki No, I didn't. I wanted to make a simple example, so that it wasn't asking too much to request a solution, but it didn't occur to me that I might be over-simplifying the problem.
    – saulspatz
    Jul 15 at 14:06













