Expected number of steps for reaching a specific absorbing state in an absorbing Markov chain
For a finite Markov chain with one or more absorbing states, there are well-known methods for determining either the probability of ending up in a specific absorbing state or the expected number of steps until absorption in any absorbing state. However, I've not found a way to determine the expected number of steps to reach a specific absorbing state.
For instance, in the classic Drunkard's Walk, the drunkard can be absorbed at either home or the bar. What I would want to determine is how many steps it takes, on average, for a drunkard who eventually gets home to actually get there. Ideally the answer could be presented in a form like "There's an X% chance the drunkard will end up at home, taking on average Y steps, and a Z% chance the drunkard will end up at the bar after an average of W steps," or equivalent.
I've not yet found a good way to solve this; I'm at a bit of a loss where to start, and if the answer exists out there, I haven't found the right Google search terms to land on it.
Thanks!
linear-algebra markov-chains
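For reference, the "well-known methods" mentioned in the question are the standard fundamental-matrix computations. A minimal sketch, assuming a symmetric drunkard's walk on positions 0–4 with the bar at 0 and home at 4 both absorbing; the NumPy setup is purely illustrative.

```python
import numpy as np

# Symmetric drunkard's walk on positions 0..4; 0 (bar) and 4 (home) are absorbing.
# Q: transitions among the transient positions 1, 2, 3; R: transient -> absorbing.
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],          # columns: bar, home
              [0.0, 0.0],
              [0.0, 0.5]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected visits before absorption
B = N @ R                          # B[i, k] = P(absorbed in state k | start at transient state i)
t = N @ np.ones(3)                 # expected steps until absorption in *any* absorbing state

print(B[1])  # starting in the middle: [0.5, 0.5]
print(t[1])  # starting in the middle: 4.0 expected steps until absorbed somewhere
```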
asked Jul 14 at 23:33
user2710532
I think your stationary vector should give you the distribution over absorbing states... the number of steps is roughly proportional to the "eigenvalue gap" between the stationary vector and the eigenvector with the second-largest eigenvalue.
– mm8511
Jul 14 at 23:52
If you look up "eigenvalue gap Markov chain" on Google, this should do the trick.
– mm8511
Jul 14 at 23:53
1 Answer
Once you have the probability at each point for ending up in your selected absorbing state, you can compute new transition probabilities everywhere that describe the behavior of exactly those drunkards who will eventually end up in that state.
Then compute the expected time to absorption for the new chain.
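A minimal sketch of this conditioning step, assuming the same symmetric drunkard's walk on positions 0–4 as in the question (bar at 0 and home at 4 absorbing), conditioned on reaching home. The matrices and NumPy usage are illustrative assumptions, not code from the answer itself.

```python
import numpy as np

# Transient positions 1, 2, 3; absorbing positions bar (0) and home (4).
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],        # columns: bar, home
              [0.0, 0.0],
              [0.0, 0.5]])

N = np.linalg.inv(np.eye(3) - Q)
h = (N @ R)[:, 1]                # h[i] = P(eventually reach home | currently at transient state i)

# New transition probabilities describing exactly those drunkards who end up at home:
#   q_home(i, j) = q(i, j) * h[j] / h[i]
Q_home = Q * h[np.newaxis, :] / h[:, np.newaxis]

# Expected time to absorption in the conditioned chain.
t_home = np.linalg.solve(np.eye(3) - Q_home, np.ones(3))

print(h)        # [0.25, 0.5, 0.75]
print(t_home)   # [5.0, 4.0, 2.33...]: expected steps to reach home, given that the drunkard does
```

So a drunkard starting in the middle reaches home with probability 0.5, and those who do take 4 steps on average, which is exactly the "X% chance, Y steps on average" form asked for.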
Alternatively, if you have one starting state you're interested in, first set up a system of equations that computes, for each state, the expected number of times one will visit that state before being absorbed. With these numbers you can then find the coefficients for a system of equations that gives the "expected number of steps taken so far" for a uniformly selected drunkard who has just reached each specific state.
This gives you the time-to-reach for all states at once, at the expense of only working for one starting state (or probability distribution for the starting state).
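One tentative way to carry this out, for the same walk with the drunkard starting in the middle: let v[j] be the expected number of visits to j before absorption and a[j] the expected sum of arrival times over those visits, so that a[j] / v[j] is the average number of steps taken so far on reaching j; for an absorbing state this is exactly the conditional expected time to reach it. This interpretation and the NumPy code below are an illustrative reading of the answer, not its own implementation.

```python
import numpy as np

# Transient positions 1, 2, 3; absorbing positions bar (0) and home (4).
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],        # columns: bar, home
              [0.0, 0.0],
              [0.0, 0.5]])
start = 1                        # start in the middle (index 1 among the transient states)

N = np.linalg.inv(np.eye(3) - Q)
v = N[start, :]                  # expected number of visits to each transient state

# Each arrival at j at time t+1 comes from a visit to some state i at time t and contributes t+1,
# giving the linear system  a = Q^T (a + v).
a = np.linalg.solve(np.eye(3) - Q.T, Q.T @ v)

p_abs = (N @ R)[start, :]        # probability of ending at [bar, home]
a_abs = R.T @ (a + v)            # arrival-time mass carried into each absorbing state

print(p_abs)                     # [0.5, 0.5]
print(a_abs / p_abs)             # [4.0, 4.0]: expected steps to reach the bar / home, given you do
```

By symmetry both conditional expectations are 4 here, and the same single computation yields a[j] / v[j] for every state at once, as described.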
edited Jul 15 at 0:32
answered Jul 15 at 0:02
Henning Makholm
I'm really interested in this, but I can't see how to carry out the computations you describe. I'm going to post a question with a concrete example, and I hope you'll indicate how to go about solving it.
– saulspatz
Jul 15 at 0:37
I posted my question at math.stackexchange.com/questions/2852105/… I hope you'll have a look.
– saulspatz
Jul 15 at 1:20