Expected time till absorption in specific state of a Markov chain
This question is a follow-up to Expected number of steps for reaching a specific absorbing state in an absorbing Markov chain because I don't understand the answer given there. I think I need to see a concrete example.
Suppose I play red and black at a casino. On each play, I either win the amount I staked, with probability $p<\frac12$, or I lose my stake. Let's say I start with a bankroll of $2$ and I decide to play until I have won $3$ or lost everything. My strategy is to bet just enough to reach my goal, or everything I have, whichever is less.
We have a Markov chain with $6$ states, $2$ of which are absorbing. There are well-known methods for determining the probability of winning and the average number of plays I make, but what if I want to know the average number of plays given that I reach my goal? The transition matrix, with $q=1-p$, is $$\begin{bmatrix}
1&0&0&0&0&0\\
q&0&p&0&0&0\\
q&0&0&0&p&0\\
0&q&0&0&0&p\\
0&0&0&q&0&p\\
0&0&0&0&0&1
\end{bmatrix}$$
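For concreteness, here is how I compute those standard quantities numerically. This is just a sketch in NumPy; the states are ordered by bankroll $0,\dots,5$, and the value $p=0.4$ is an arbitrary example.

```python
import numpy as np

p = 0.4          # per-play win probability (any p < 1/2 will do)
q = 1 - p

# Full transition matrix; states are bankrolls 0..5, with 0 and 5 absorbing.
P = np.array([
    [1, 0, 0, 0, 0, 0],
    [q, 0, p, 0, 0, 0],
    [q, 0, 0, 0, p, 0],
    [0, q, 0, 0, 0, p],
    [0, 0, 0, q, 0, p],
    [0, 0, 0, 0, 0, 1],
], dtype=float)

transient = [1, 2, 3, 4]
absorbing = [0, 5]
Q = P[np.ix_(transient, transient)]   # transient -> transient part
R = P[np.ix_(transient, absorbing)]   # transient -> absorbing part

N = np.linalg.inv(np.eye(4) - Q)      # fundamental matrix
B = N @ R                             # absorption probabilities
t = N @ np.ones(4)                    # expected plays until absorption

print("P(win | start at 2) =", B[1, 1])   # row for state 2, column for state 5
print("E[plays | start at 2] =", t[1])
```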
Henning Makholm suggests two approaches, neither of which I can follow. (This is because of my limitations, and is in no way intended as a criticism of the answer.) The first assumes that I have figured out the probability of winning starting with every possible bankroll. Then we are to compute new transition probabilities that describe the experience of the winners, as I understand it, and compute the time to absorption in the new chain. Let $p_k$ be the probability of winning if my bankroll is $k$. How should I calculate the new transition matrix?
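My best guess, from what I've read about conditioning a chain on its eventual absorption state (the Doob $h$-transform, if I have the terminology right), is that the new matrix should be $\hat P_{jk} = P_{jk}\,p_k/p_j$, with $p_5=1$ and $p_0=0$, so that transitions into state $0$ disappear. Is that what's intended? Continuing the sketch above (with `h` standing in for the $p_k$, to avoid clashing with the scalar `p`):

```python
# Condition the chain on winning -- my guess at the construction.
# h[i] = probability of eventually reaching state 5 from state i.
h = np.zeros(6)
h[5] = 1.0
h[transient] = B[:, 1]            # win probabilities from the sketch above

# Phat[j, k] = P[j, k] * h[k] / h[j] on the states winners can visit.
live = np.where(h > 0)[0]         # states 1..5; state 0 drops out
Phat = P[np.ix_(live, live)] * h[live] / h[live][:, None]
print(Phat.sum(axis=1))           # each row sums to 1: a genuine Markov chain

Qhat = Phat[:4, :4]               # transient part (states 1..4) of new chain
t_hat = np.linalg.solve(np.eye(4) - Qhat, np.ones(4))
print("E[plays | win, start at 2] =", t_hat[1])
```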
Henning Makholm gives an alternative method if there is only one starting state we're interested in, and I'm only interested in the actual case where I start in state $2$. He says, "first set up a system of equations that compute for each state the expected number of times one will encounter that state before being absorbed." If we let $e_k$ be this number for state $k=1,2,3,4,$ how do we construct the equations relating the $e_k?$ I can see how to do this if we only care about the number of plays until the game ends (sketch below), but how do I make it reflect only the number of plays until winning?
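For the record, here is the version I do understand, continuing the sketch above: the unconditional visit counts satisfy $e_k = [k=2] + \sum_j e_j P_{jk}$, the sum running over transient $j$. What I can't see is how to modify this so that it counts only the plays of the eventual winners.

```python
# Expected number of visits e[k] to each transient state before absorption,
# starting from state 2: solve e (I - Q) = delta, delta the start indicator.
delta = np.array([0.0, 1.0, 0.0, 0.0])         # states 1..4; start is state 2
e = np.linalg.solve((np.eye(4) - Q).T, delta)  # transpose turns it into Ax = b
print("expected visits to 1..4:", e)
print("E[plays | start at 2] =", e.sum())      # agrees with t[1] above
```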
markov-chains
asked Jul 15 at 1:20 by saulspatz (edited Jul 15 at 1:35)
Did you deliberately choose an example where each state has only two successors? I think in this case, in the first method you can set up a system of linear equations for the transition probabilities conditional on winning; but that won't work in the same way if you have a more general transition matrix, as you'll have too many unknowns and too few equations.
– joriki
Jul 15 at 7:20
@joriki No I didn't. I wanted to make a simple example, so that it wasn't asking too much to request a solution, but it didn't occur to me that I might be over-simplifying the problem.
– saulspatz
Jul 15 at 14:06