Markov Chains and Absorbing States

"My beard is a Markov Chain"
Andrey Markov, 1856-1922

Nathan Hechtman
About Markov
• Russian mathematician
• Helped prove the central limit theorem
• Specialized in stochastic processes and probability
Transition Diagrams: Bayesian Probability Maps
• Transition diagrams are conditional probability trees with one repeated process
• That process is expressed in a network of conditional probabilities emerging from states (nodes)
[Figure: transition diagram with edges labeled by probabilities (e.g. 1.0)]
Transition Diagrams as Matrices
• The network corresponds to the matrix of transformation of the Markov chain
• Entry (i, j) is the probability of going from node i to node j in a single step

      S0  S1  S2  S3  S4  S5  S6  S7
S0  [  0   1   0   0   0   0   0   0 ]
S1  [  0   0  .2  .8   0   0   0   0 ]
S2  [  0   0   0   0   1   0   0   0 ]
S3  [  0   0   0   0  .1  .4  .5   0 ]
S4  [  0   0   0   0   1   0   0   0 ]
S5  [  0   0   0   0   0   0   0   1 ]
S6  [  0   0   0   0   0   0   0   1 ]
S7  [  0   0   0   0   0   0   0   1 ]
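As a sketch of how this matrix can be handled in code (a NumPy example; the array is transcribed from the diagram above, and the variable names are mine):

```python
import numpy as np

# Matrix of transformation for the 8-state chain above.
# Entry (i, j) is the probability of going from state Si to state Sj
# in a single step.
P = np.array([
    [0, 1,  0,  0,  0,  0,  0, 0],  # S0
    [0, 0, .2, .8,  0,  0,  0, 0],  # S1
    [0, 0,  0,  0,  1,  0,  0, 0],  # S2
    [0, 0,  0,  0, .1, .4, .5, 0],  # S3
    [0, 0,  0,  0,  1,  0,  0, 0],  # S4
    [0, 0,  0,  0,  0,  0,  0, 1],  # S5
    [0, 0,  0,  0,  0,  0,  0, 1],  # S6
    [0, 0,  0,  0,  0,  0,  0, 1],  # S7
])

# Each row is a conditional probability distribution, so it must sum to 1.
assert np.allclose(P.sum(axis=1), 1)
```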
Simple Markov Chain:
• Initial state matrix, matrix of transformation, and power
• The final state matrix (the matrix resulting after n steps) can be calculated very efficiently this way: multiply the initial state matrix by the matrix of transformation raised to the nth power
• The matrix of transformation will be square

                                  [  0   1   0   0   0   0   0   0 ] n
                                  [  0   0  .2  .8   0   0   0   0 ]
                                  [  0   0   0   0   1   0   0   0 ]
 [ C0 C1 C2 C3 C4 C5 C6 C7 ]  x   [  0   0   0   0  .1  .4  .5   0 ]
                                  [  0   0   0   0   1   0   0   0 ]
                                  [  0   0   0   0   0   0   0   1 ]
                                  [  0   0   0   0   0   0   0   1 ]
                                  [  0   0   0   0   0   0   0   1 ]
      Initial state matrix             Matrix of transformation
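The multiplication above can be checked numerically. A minimal sketch (NumPy; starting in S0 with certainty, and `n = 50` is an arbitrary large power I chose, not from the slides):

```python
import numpy as np

# Matrix of transformation, rows/columns in order S0..S7.
P = np.array([
    [0, 1,  0,  0,  0,  0,  0, 0],
    [0, 0, .2, .8,  0,  0,  0, 0],
    [0, 0,  0,  0,  1,  0,  0, 0],
    [0, 0,  0,  0, .1, .4, .5, 0],
    [0, 0,  0,  0,  1,  0,  0, 0],
    [0, 0,  0,  0,  0,  0,  0, 1],
    [0, 0,  0,  0,  0,  0,  0, 1],
    [0, 0,  0,  0,  0,  0,  0, 1],
])

# Initial state matrix: the chain starts in S0 with probability 1.
v = np.array([1., 0., 0., 0., 0., 0., 0., 0.])

# Final state matrix after n steps: v x P^n.
n = 50
final = v @ np.linalg.matrix_power(P, n)
```

Every path in this chain reaches an absorbing state within four steps, so for large n all of the probability mass sits on S4 and S7.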
Ergodic/Irreducible Chains
• Every node in the transition diagram leads to and from every other node with a nonzero probability
• It does so in a finite number of steps, but not necessarily one step
• Equivalently, all states communicate: the chain forms a single communicating class
[Figures: Irreducible (ergodic), Irreducible (ergodic), Reducible (non-ergodic)]
Periodic Markov Chains
• Periodic Markov chains repeat in cycles of length greater than one
• Periodic chains are a special case of ergodic Markov chains
• For example, the matrix below cycles with period 2, so its odd powers return the matrix itself:

[ 1  0  0 ] 2n+1    [ 1  0  0 ]
[ 0  0  1 ]       = [ 0  0  1 ]
[ 0  1  0 ]         [ 0  1  0 ]
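A quick numerical check of the period-2 behaviour (NumPy sketch; the matrix is the one above):

```python
import numpy as np

M = np.array([[1, 0, 0],
              [0, 0, 1],
              [0, 1, 0]])

# Swapping the last two states twice undoes the swap, so M^2 = I:
# even powers give the identity, odd powers give M back.
assert np.allclose(np.linalg.matrix_power(M, 2), np.eye(3))
assert np.allclose(np.linalg.matrix_power(M, 2 * 3 + 1), M)
```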
Regular Markov Chains and Steady State
• The MOT raised to some power n has all positive entries
• Regular chains converge to a steady-state matrix
• Finding the steady-state matrix (v is the steady-state matrix, P is the MOT):

vP = v
v(P - I) = 0

For example, with a 2-state chain:

            [ .8-1    .2  ]
 [C1  C2] x [  .6    .4-1 ]  =  [0  0]

            [ -.2   .2 ]
 [C1  C2] x [  .6  -.6 ]  =  [0  0]

Together with C1 + C2 = 1, this gives the steady-state matrix [.75  .25].
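One way to solve vP = v together with the normalization C1 + C2 = 1 numerically (a least-squares sketch in NumPy; the setup as an overdetermined system is my choice, not the slides'):

```python
import numpy as np

# The 2x2 matrix of transformation from the example above.
P = np.array([[.8, .2],
              [.6, .4]])

# v(P - I) = 0 transposes to (P - I)^T v^T = 0; append a row of ones
# to enforce that the entries of v sum to 1.
M = np.vstack([(P - np.eye(2)).T, np.ones((1, 2))])
b = np.array([0., 0., 1.])
v, *_ = np.linalg.lstsq(M, b, rcond=None)
```

The result agrees with the slide's steady state [.75 .25].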
An Analogy for Absorbing States: Ford and the Bistro
Absorbing States
• An absorbing state is one the chain can never leave: its row in the matrix of transformation is all zeros except for a 1 on the diagonal
• In the transition diagram, an absorbing node has no 'children' other than itself
• In an absorbing chain, every node has a pathway to at least one absorbing state
• A chain with absorbing states reachable from other states is reducible, not irreducible
• For example, S4 and S7:

      S0  S1  S2  S3  S4  S5  S6  S7
S0  [  0   1   0   0   0   0   0   0 ]
S1  [  0   0  .2  .8   0   0   0   0 ]
S2  [  0   0   0   0   1   0   0   0 ]
S3  [  0   0   0   0  .1  .4  .5   0 ]
S4  [  0   0   0   0   1   0   0   0 ]
S5  [  0   0   0   0   0   0   0   1 ]
S6  [  0   0   0   0   0   0   0   1 ]
S7  [  0   0   0   0   0   0   0   1 ]
The standard form of the transition matrix
• An absorbing Markov chain can be expressed with a standard-form transition matrix
• Absorbing states, such as S4 and S7, are moved to the top and left
• Recall: absorbing states appear as rows of zeros with a 1 on the diagonal

Transition Matrix:

      S0  S1  S2  S3  S4  S5  S6  S7
S0  [  0   1   0   0   0   0   0   0 ]
S1  [  0   0  .2  .8   0   0   0   0 ]
S2  [  0   0   0   0   1   0   0   0 ]
S3  [  0   0   0   0  .1  .4  .5   0 ]
S4  [  0   0   0   0   1   0   0   0 ]
S5  [  0   0   0   0   0   0   0   1 ]
S6  [  0   0   0   0   0   0   0   1 ]
S7  [  0   0   0   0   0   0   0   1 ]

Standard form Transition Matrix:

      S4  S7  S0  S1  S2  S3  S5  S6
S4  [  1   0   0   0   0   0   0   0 ]
S7  [  0   1   0   0   0   0   0   0 ]
S0  [  0   0   0   1   0   0   0   0 ]
S1  [  0   0   0   0  .2  .8   0   0 ]
S2  [  1   0   0   0   0   0   0   0 ]
S3  [ .1   0   0   0   0   0  .4  .5 ]
S5  [  0   1   0   0   0   0   0   0 ]
S6  [  0   1   0   0   0   0   0   0 ]
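Reordering a transition matrix into standard form is just a simultaneous row-and-column permutation. A NumPy sketch (the `order` list mirrors the slide's choice of putting S4 and S7 first; the names are mine):

```python
import numpy as np

# Original transition matrix, rows/columns in order S0..S7.
P = np.array([
    [0, 1,  0,  0,  0,  0,  0, 0],  # S0
    [0, 0, .2, .8,  0,  0,  0, 0],  # S1
    [0, 0,  0,  0,  1,  0,  0, 0],  # S2
    [0, 0,  0,  0, .1, .4, .5, 0],  # S3
    [0, 0,  0,  0,  1,  0,  0, 0],  # S4
    [0, 0,  0,  0,  0,  0,  0, 1],  # S5
    [0, 0,  0,  0,  0,  0,  0, 1],  # S6
    [0, 0,  0,  0,  0,  0,  0, 1],  # S7
])

# Absorbing states first, then the rest: S4, S7, S0, S1, S2, S3, S5, S6.
order = [4, 7, 0, 1, 2, 3, 5, 6]
P_std = P[np.ix_(order, order)]  # permute rows and columns together
```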
The Standard Form continued
• Four parts: I, 0, R, Q
• The standard form raised to the kth power asymptotically approaches P, the limiting matrix

       S4   S7 | S0   S1   S2   S3   S5   S6
S4  [   1    0 |  0    0    0    0    0    0 ]   I | Zero Matrix
S7  [   0    1 |  0    0    0    0    0    0 ]
    ----------------------------------------
S0  [   0    0 |  0    1    0    0    0    0 ]
S1  [   0    0 |  0    0   .2   .8    0    0 ]
S2  [   1    0 |  0    0    0    0    0    0 ]   R | Q
S3  [  .1    0 |  0    0    0    0   .4   .5 ]
S5  [   0    1 |  0    0    0    0    0    0 ]
S6  [   0    1 |  0    0    0    0    0    0 ]

Standard form transition matrix
• I, 0: no chance of leaving absorbing states
• R: probabilities of entering absorbing states
• Q: probabilities of entering other (preabsorbing) states
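Given a standard-form matrix, the R and Q blocks fall out by slicing (a sketch; `k` is the number of absorbing states, a name I introduce here):

```python
import numpy as np

# Standard-form matrix from the slides: rows/columns ordered
# S4, S7, S0, S1, S2, S3, S5, S6.
P_std = np.array([
    [ 1, 0, 0, 0,  0,  0,  0,  0],  # S4
    [ 0, 1, 0, 0,  0,  0,  0,  0],  # S7
    [ 0, 0, 0, 1,  0,  0,  0,  0],  # S0
    [ 0, 0, 0, 0, .2, .8,  0,  0],  # S1
    [ 1, 0, 0, 0,  0,  0,  0,  0],  # S2
    [.1, 0, 0, 0,  0,  0, .4, .5],  # S3
    [ 0, 1, 0, 0,  0,  0,  0,  0],  # S5
    [ 0, 1, 0, 0,  0,  0,  0,  0],  # S6
])

k = 2                # number of absorbing states
R = P_std[k:, :k]    # 6x2: probabilities of entering absorbing states
Q = P_std[k:, k:]    # 6x6: probabilities among the preabsorbing states
```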
Are you absorbed, yet?
F, the fundamental matrix
• F = (I - Q)^-1 is known as the fundamental matrix

From the standard form above, Q is the 6x6 block over the preabsorbing states S0, S1, S2, S3, S5, S6:

      [ 0   1   0   0   0   0 ]          [ 1  -1    0    0    0    0 ]
      [ 0   0  .2  .8   0   0 ]          [ 0   1  -.2  -.8    0    0 ]
 Q =  [ 0   0   0   0   0   0 ]   I-Q =  [ 0   0    1    0    0    0 ]
      [ 0   0   0   0  .4  .5 ]          [ 0   0    0    1  -.4  -.5 ]
      [ 0   0   0   0   0   0 ]          [ 0   0    0    0    1    0 ]
      [ 0   0   0   0   0   0 ]          [ 0   0    0    0    0    1 ]

                     [ 1   1  .2  .8  .32  .4 ]
                     [ 0   1  .2  .8  .32  .4 ]
 F  =  (I-Q)^-1  =   [ 0   0   1   0   0    0 ]
                     [ 0   0   0   1  .4   .5 ]
                     [ 0   0   0   0   1    0 ]
                     [ 0   0   0   0   0    1 ]
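The inverse above can be verified directly (NumPy sketch; Q is transcribed from the slide):

```python
import numpy as np

# Q block: preabsorbing states in order S0, S1, S2, S3, S5, S6.
Q = np.array([
    [0, 1,  0,  0,  0,  0],
    [0, 0, .2, .8,  0,  0],
    [0, 0,  0,  0,  0,  0],
    [0, 0,  0,  0, .4, .5],
    [0, 0,  0,  0,  0,  0],
    [0, 0,  0,  0,  0,  0],
])

# Fundamental matrix F = (I - Q)^-1.
F = np.linalg.inv(np.eye(6) - Q)
```

Because Q is nilpotent here (every path leaves the preabsorbing states within a few steps), F also equals the finite sum I + Q + Q^2 + Q^3.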
Property of F: expected time before absorption
• Row i of F = (I - Q)^-1 sums to the expected number of periods before entering an absorbing state (any absorbing state), starting from preabsorbing state i
• The sum of each row:

[ 1   1  .2  .8  .32  .4 ]   3.72
[ 0   1  .2  .8  .32  .4 ]   2.72
[ 0   0   1   0   0    0 ]   1
[ 0   0   0   1  .4   .5 ]   1.9
[ 0   0   0   0   1    0 ]   1
[ 0   0   0   0   0    1 ]   1
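The row sums follow in one line from F (NumPy sketch, starting from the same Q block as above):

```python
import numpy as np

Q = np.array([
    [0, 1,  0,  0,  0,  0],
    [0, 0, .2, .8,  0,  0],
    [0, 0,  0,  0,  0,  0],
    [0, 0,  0,  0, .4, .5],
    [0, 0,  0,  0,  0,  0],
    [0, 0,  0,  0,  0,  0],
])
F = np.linalg.inv(np.eye(6) - Q)

# Expected number of steps before absorption, starting from each
# preabsorbing state (order S0, S1, S2, S3, S5, S6).
expected_steps = F.sum(axis=1)
```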
P, the limiting matrix

       [ Identity    Zero Matrix ]
 P  =  [                         ]
       [    FR       Zero Matrix ]

• Finding FR:

 [ 1   1  .2  .8  .32  .4 ]   [  0   0 ]     [ .28  .72 ]
 [ 0   1  .2  .8  .32  .4 ]   [  0   0 ]     [ .28  .72 ]
 [ 0   0   1   0   0    0 ]   [  1   0 ]     [  1    0  ]
 [ 0   0   0   1  .4   .5 ] x [ .1   0 ]  =  [ .1   .9  ]
 [ 0   0   0   0   1    0 ]   [  0   1 ]     [  0    1  ]
 [ 0   0   0   0   0    1 ]   [  0   1 ]     [  0    1  ]
             F                    R               FR
       S4   S7   S0  S1  S2  S3  S5  S6
S4  [   1    0    0   0   0   0   0   0 ]
S7  [   0    1    0   0   0   0   0   0 ]
S0  [ .28  .72    0   0   0   0   0   0 ]
S1  [ .28  .72    0   0   0   0   0   0 ]
S2  [   1    0    0   0   0   0   0   0 ]
S3  [  .1   .9    0   0   0   0   0   0 ]
S5  [   0    1    0   0   0   0   0   0 ]
S6  [   0    1    0   0   0   0   0   0 ]
P
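Assembling the limiting matrix numerically (NumPy sketch; Q and R are transcribed from the slides, and `P_lim` is my name for the limiting matrix in standard-form layout):

```python
import numpy as np

# Blocks from the standard form (preabsorbing order S0, S1, S2, S3, S5, S6).
Q = np.array([
    [0, 1,  0,  0,  0,  0],
    [0, 0, .2, .8,  0,  0],
    [0, 0,  0,  0,  0,  0],
    [0, 0,  0,  0, .4, .5],
    [0, 0,  0,  0,  0,  0],
    [0, 0,  0,  0,  0,  0],
])
R = np.array([
    [ 0, 0],
    [ 0, 0],
    [ 1, 0],
    [.1, 0],
    [ 0, 1],
    [ 0, 1],
])

F = np.linalg.inv(np.eye(6) - Q)
FR = F @ R  # absorption probabilities into S4 (col 0) and S7 (col 1)

# Limiting matrix: identity over the absorbing states, FR below,
# zeros everywhere else.
P_lim = np.zeros((8, 8))
P_lim[:2, :2] = np.eye(2)
P_lim[2:, :2] = FR
```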
Interpreting P
• Entry (i, j) is the probability of going from state i to state j after an infinite number of steps
• Starting in state S0, there is a 72% chance of ending up in S7
• Starting in state S2, there is a 100% chance of ending up in S4
• There is no chance of ending up in a non-absorbing state

       S4   S7   S0  S1  S2  S3  S5  S6
S4  [   1    0    0   0   0   0   0   0 ]
S7  [   0    1    0   0   0   0   0   0 ]
S0  [ .28  .72    0   0   0   0   0   0 ]
S1  [ .28  .72    0   0   0   0   0   0 ]
S2  [   1    0    0   0   0   0   0   0 ]
S3  [  .1   .9    0   0   0   0   0   0 ]
S5  [   0    1    0   0   0   0   0   0 ]
S6  [   0    1    0   0   0   0   0   0 ]
P
Sources
• MDPs: https://www.youtube.com/watch?v=i0o-ui1N35U
• https://www.youtube.com/watch?v=uvYTGEZQTEs
• Feller, William. An Introduction to Probability Theory and Its Applications. Tokyo: C.E. Tuttle, 1957. 338-51. Print.
• Anderson, David. "Markov Chains." Interactive Markov Chains Lecture Notes in Computer Science (2002): 35-55. Web.
• Wilde, Joshua. "Linear Algebra III: Eigenvalues and Markov Chains." Eigenvalues, Eigenvectors, and Diagonalizability (2002): 35-55. Web.
• http://www.avcsl.com/large-yellow-jumbo-sponge-bone-shape.html
• http://www.ssc.wisc.edu/~jmontgom/absorbingchains.pdf
• "Andrey Andreyevich Markov | Russian Mathematician." Encyclopedia Britannica Online. Encyclopedia Britannica, n.d. Web. 24 Nov. 2015.
In Soviet Russia,
questions ask you!