Markov Chains Stat 430

Outline
• transition matrix
• k step transitions
• absorbing/transient states
• fundamental matrix
Markov Chains
• Set of states S = {s1, s2, ..., sn}
• The chain starts in one of these states and moves successively from one state to another. Each move is called a step.
• If the chain is in state si at the moment, it moves with a probability pij into state sj.
• The probabilities pij are called transition probabilities.
Example
According to Kemeny, Snell, and Thompson, the Land of
Oz is blessed by many things, but not by good weather:
They never have two nice days in a row.
If they have a nice day, they are just as likely to have
snow as rain the next day.
If they have snow or rain, they have an even chance of
having the same the next day.
If there is a change from snow or rain, only half of the
time is this a change to a nice day.
Take as states the kinds of weather R, N, and S and
derive transition probabilities from the information
above.
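The rules above can be translated directly into a transition matrix. A minimal sketch in Python (assuming NumPy is available), with the states ordered R, N, S:

```python
import numpy as np

# Transition matrix for the Land of Oz weather, states ordered R, N, S.
# Row i gives the distribution of tomorrow's weather given today's state i.
P = np.array([
    [0.50, 0.25, 0.25],  # Rain: even chance of rain again; a change is nice or snow, half each
    [0.50, 0.00, 0.50],  # Nice: never two nice days in a row; rain or snow equally likely
    [0.25, 0.25, 0.50],  # Snow: even chance of snow again; a change is nice or rain, half each
])

# Each row must be a probability distribution.
assert np.allclose(P.sum(axis=1), 1.0)
```

Reading off the rows is the whole derivation: "even chance of the same" fixes the diagonal for R and S, and "only half of the changes are to a nice day" splits the remaining mass.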
Multiplication rule
• Let P be the transition matrix of a Markov
chain.
The ij-th entry of the matrix P^n gives the
probability that the Markov chain, starting
in state si, will be in state sj after n steps.
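As a sketch of the multiplication rule, one can raise the Oz matrix from the previous example to a power (assuming NumPy) and watch the rows settle toward a common limiting distribution:

```python
import numpy as np

# k-step transition probabilities: entry (i, j) of P^n is the probability
# of being in state j after n steps, starting from state i.
P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])

P6 = np.linalg.matrix_power(P, 6)
print(P6.round(3))
# Every row is already close to (0.4, 0.2, 0.4): after six days the weather
# distribution barely depends on where the chain started.
```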
Absorbing/Transient
• A state si of a Markov chain is called
absorbing if it is impossible to leave it (i.e.,
pii = 1).
• A Markov chain is absorbing if it has at least
one absorbing state, and if from every state
it is possible to go to an absorbing state
(not necessarily in one step).
• In an absorbing Markov chain, a state which
is not absorbing is called transient.
Absorbing Chains
• The transition matrix of an absorbing
Markov chain can be written as:
                absorb.   transient
   absorb.    [   Id          0    ]
   transient  [   R           Q    ]
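A minimal sketch of the canonical form, using a hypothetical example not from these slides (a random walk on {0, 1, 2, 3, 4} where 0 and 4 are absorbing and the interior states step left or right with probability 1/2), with the states reordered so the absorbing ones come first:

```python
import numpy as np

# Random walk on {0,...,4}; rows/columns ordered [0, 4, 1, 2, 3]
# so that the absorbing states 0 and 4 come first.
P = np.array([
    #  0    4    1    2    3
    [1.0, 0.0, 0.0, 0.0, 0.0],  # 0: absorbing   -> block [ Id | 0 ]
    [0.0, 1.0, 0.0, 0.0, 0.0],  # 4: absorbing
    [0.5, 0.0, 0.0, 0.5, 0.0],  # 1: transient   -> block [ R  | Q ]
    [0.0, 0.0, 0.5, 0.0, 0.5],  # 2: transient
    [0.0, 0.5, 0.0, 0.5, 0.0],  # 3: transient
])

R = P[2:, :2]  # transient -> absorbing transitions
Q = P[2:, 2:]  # transient -> transient transitions
```

Reordering the states does not change the chain; it only makes the blocks Id, 0, R, Q visible.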
Absorbing Chains
• The n step transition matrix of an
absorbing chain can be written as:
                absorb.   transient
   absorb.    [   Id          0    ]
   transient  [   *           Q^n  ]

For n -> infty, Q^n -> 0.
Fundamental Matrix
• The matrix (Id - Q) has an inverse N, which can be
written as
N = Id + Q + Q^2 + Q^3 + ...
• The ij-entry nij of the matrix N is the
expected number of times the chain is in
state sj, given that it starts in state si.
The initial state is counted if i = j.
• N is called the fundamental matrix of the
chain.
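A sketch of the fundamental matrix (assuming NumPy), computed for a hypothetical example not from these slides: a random walk on {0, 1, 2, 3, 4} with absorbing endpoints, whose transient states are 1, 2, 3.

```python
import numpy as np

# Q: transient-to-transient block for the random walk on {0,...,4}
# with absorbing endpoints; transient states ordered 1, 2, 3.
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])

# Fundamental matrix N = (Id - Q)^{-1}.
N = np.linalg.inv(np.eye(3) - Q)
print(N)
# N[i, j] = expected number of visits to transient state j, starting from i
# (the start state itself counted once).

# Sanity check: N agrees with a truncation of the series Id + Q + Q^2 + ...
series = sum(np.linalg.matrix_power(Q, k) for k in range(200))
assert np.allclose(N, series)
```

The truncated-series check is exactly the identity N = Id + Q + Q^2 + ..., which converges because Q^n -> 0.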
Time to Absorption
• Let ti be the expected number of steps
before the chain is absorbed, given that the
chain starts in state si, and let t be the
column vector whose ith entry is ti.
Then t = Nc,
where c is a column vector all of whose
entries are 1.
Absorption Probability
• The probability bij that the chain, starting
in transient state si, will be absorbed in
absorbing state sj, is the ij-entry of
B = NR
• where N is the fundamental matrix and R is
the sub-matrix of P from the canonical form.
Tennis
Consider the game of tennis when deuce is reached. If
a player wins the next point, he has advantage. On the
following point, he either wins the game or the game
returns to deuce. Assume that for any point, player A
has probability .6 of winning the point and player B
has probability .4 of winning the point.
(a) Set this up as a Markov chain with state
1: A wins; 2: B wins; 3: advantage A; 4: deuce; 5: advantage B.
(b) Find the absorption probabilities.
(c) At deuce, find the expected duration of the game
and the probability that B will win.
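A worked sketch of the tennis problem (assuming NumPy), putting the pieces together: set up R and Q from the canonical form with absorbing states {1: A wins, 2: B wins} and transient states {3: adv A, 4: deuce, 5: adv B}, then compute the fundamental matrix N, the absorption times t = Nc, and the absorption probabilities B = NR.

```python
import numpy as np

p, q = 0.6, 0.4  # point-winning probabilities for A and B

# Transient states ordered [adv A, deuce, adv B]; absorbing [A wins, B wins].
R = np.array([[p,   0.0],   # adv A -> A wins with prob p
              [0.0, 0.0],   # deuce cannot end in one step
              [0.0, q  ]])  # adv B -> B wins with prob q
Q = np.array([[0.0, q,   0.0],   # adv A -> deuce with prob q
              [p,   0.0, q  ],   # deuce -> adv A or adv B
              [0.0, p,   0.0]])  # adv B -> deuce with prob p

N = np.linalg.inv(np.eye(3) - Q)  # fundamental matrix
t = N @ np.ones(3)                # expected steps to absorption, t = Nc
B = N @ R                         # absorption probabilities, B = NR

deuce = 1  # index of deuce among the transient states
print("expected duration from deuce:", t[deuce])  # 2/0.52, about 3.85 points
print("P(B wins from deuce):", B[deuce, 1])       # 0.16/0.52, about 0.308
```

So at deuce the game lasts about 3.85 more points on average, and B wins with probability 4/13 ≈ 0.308 (A with 9/13 ≈ 0.692), larger than B's point probability gap would suggest only in A's favor: the deuce structure amplifies the stronger player's edge.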