Markov Chains
Jian He

Outline
• Probability Review
• Stochastic Processes
• Definition
• Classification of States
• Limiting Probabilities
• Time-reversible Markov Chains
• Branching Processes
• Random Walk
• Markov Decision Processes
• Hidden Markov Chains
• Continuous-time Markov Chains

Probability Review
• Conditional probability: P[A \mid B] = P[AB] / P[B]
• Expectation of a r.v.: E[X] = \sum_{x: p(x) > 0} x\, p(x)
• Bayes' theorem: P[A_i \mid B] = P[B \mid A_i] P[A_i] / \sum_{i=1}^{n} P[B \mid A_i] P[A_i]
• Moment generating function: \phi(t) = E[e^{tX}], with \phi^{(n)}(0) = E[X^n], n \ge 1
• CDF vs PDF: F(a) = P\{X \in (-\infty, a]\} = \int_{-\infty}^{a} f(x)\, dx

Stochastic Processes
• A collection of random variables \{X(t), t \in T\}
• t is often interpreted as time, while X(t) is the state of the process at time t
• T is called the index set of the process; F_{X(t)}(x) = P\{X(t) \le x\}
• Example: the number of jobs waiting in a queue as a function of time

Definition
• Stochastic process \{X_n, n = 0, 1, 2, \ldots\}
• Finite or countable number of possible values (states)
• Markov property:
  P\{X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_1 = i_1, X_0 = i_0\} = P\{X_{n+1} = j \mid X_n = i\} = P_{ij}

One-step Transition Probability
• Homogeneous Markov chain: P\{X_{n+1} = j \mid X_n = i\} = P\{X_n = j \mid X_{n-1} = i\} for all n
• One-step transition probability matrix:
  P = \begin{pmatrix} P_{00} & P_{01} & P_{02} & \cdots \\ P_{10} & P_{11} & P_{12} & \cdots \\ \vdots & \vdots & \vdots & \\ P_{i0} & P_{i1} & P_{i2} & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix}

Chapman-Kolmogorov Equations
• n-step transition probabilities: P_{ij}^{(n)} = P\{X_{n+k} = j \mid X_k = i\}, n \ge 0, i, j \ge 0
• Chapman-Kolmogorov equations: P_{ij}^{(n+m)} = \sum_{k=0}^{\infty} P_{ik}^{(n)} P_{kj}^{(m)} for all n, m \ge 0 and all i, j
• In matrix form: P^{(n)} = P^{n}

Classification of States
• Accessibility: state j is accessible from state i (i \to j) if P_{ij}^{(n)} > 0 for some n \ge 0
• Communicate: if i \to j and j \to i, then i \leftrightarrow j
• Properties of communication: i \leftrightarrow i; if i \leftrightarrow j, then j \leftrightarrow i; if i \leftrightarrow j and j \leftrightarrow k, then i \leftrightarrow k
• Class: a set of states that all communicate with each other
• Irreducible: the chain has only one class

Classification of States (cont.)
• Recurrent state: \sum_{n=1}^{\infty} P_{ii}^{n} = \infty
• Transient state: \sum_{n=1}^{\infty} P_{ii}^{n} < \infty
• State i is recurrent if and only if, starting in state i, the expected number of time periods that the process is in state i is infinite

Recurrent vs Transient
• If state i is recurrent and state i communicates with state j, then state j is recurrent
• At least one of the states must be recurrent in a finite-state Markov chain
• All states of a finite irreducible Markov chain are recurrent

Random Walk
• A Markov chain with state space i = 0, \pm 1, \pm 2, \ldots
• Transition probabilities: P_{i,i+1} = p = 1 - P_{i,i-1}, i = 0, \pm 1, \pm 2, \ldots
• Irreducible, so the states are either all transient or all recurrent
• P_{00}^{2n} \sim (4p(1-p))^{n} / \sqrt{\pi n}, so \sum_{n=1}^{\infty} P_{00}^{n} = \infty if and only if p = 1/2

Random Walk (cont.)
• Symmetric random walk: each step goes in either direction with equal probability p = 1/2; recurrent
• Two-dimensional symmetric random walk:
  P_{(i,j),(i+1,j)} = P_{(i,j),(i-1,j)} = P_{(i,j),(i,j+1)} = P_{(i,j),(i,j-1)} = 1/4
  P_{00}^{2n} \sim 1/(\pi n), so \sum_{n=1}^{\infty} P_{00}^{2n} = \infty and the walk is again recurrent

Limiting Probabilities
• Period of state i: d = \gcd\{n \ge 1 : P_{ii}^{n} > 0\}
• Aperiodic: d = 1
• If state i has period d, and states i and j communicate, then state j also has period d
• Positive recurrent: starting in state i, the expected return time to i is finite
• Ergodic: positive recurrent and aperiodic

Limiting Probabilities (cont.)
• For an irreducible ergodic Markov chain, \lim_{n \to \infty} P_{ij}^{n} exists and is independent of i
• Definition: \pi_j = \lim_{n \to \infty} P_{ij}^{n}, j \ge 0
• Property: \pi_j = \sum_{i=0}^{\infty} \pi_i P_{ij}, j \ge 0, with \sum_{j=0}^{\infty} \pi_j = 1
• \pi_j equals the long-run proportion of time that the process will be in state j

Limiting Probabilities (Example)
• Transition probability matrix
  P = \begin{pmatrix} 0.5 & 0.4 & 0.1 \\ 0.3 & 0.4 & 0.3 \\ 0.2 & 0.3 & 0.5 \end{pmatrix}
• Stationary equations:
  \pi_0 = 0.5\pi_0 + 0.3\pi_1 + 0.2\pi_2
  \pi_1 = 0.4\pi_0 + 0.4\pi_1 + 0.3\pi_2
  \pi_2 = 0.1\pi_0 + 0.3\pi_1 + 0.5\pi_2
  \pi_0 + \pi_1 + \pi_2 = 1
• Solution: \pi_0 = 21/62, \pi_1 = 23/62, \pi_2 = 18/62 (verified numerically in the sketch below)
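A quick way to check the stationary distribution of this example numerically; a minimal sketch using NumPy, with the matrix P taken from the slide above:

```python
import numpy as np

# Transition matrix from the example slide.
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# Solve pi = pi P together with sum(pi) = 1 as a linear system:
# stack (P^T - I) with a row of ones enforcing normalization.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)                            # approx [0.3387, 0.3710, 0.2903]
print(np.array([21, 23, 18]) / 62)   # matches the slide's 21/62, 23/62, 18/62
```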
Limiting Probabilities (Property)
• Let \{X_n, n \ge 1\} be an irreducible Markov chain with stationary probabilities \pi_j, j \ge 0, and let r be a bounded function on the state space. Then, with probability 1,
  \lim_{N \to \infty} \frac{\sum_{n=1}^{N} r(X_n)}{N} = \sum_{j=0}^{\infty} r(j)\,\pi_j
• If we suppose that we earn a reward r(j) whenever the chain is in state j, then our average reward per unit time is \sum_j r(j)\pi_j

Advanced Random Walk
• Markov chain with states 0, 1, \ldots, n having P_{0,1} = 1, P_{i,i+1} = p, P_{i,i-1} = q = 1 - p, 1 \le i < n
• N_i: the number of additional transitions that it takes the chain, when it first enters state i, until it enters state i+1
• \mu_i = E[N_i]
• Goal: the number of transitions N_{0,n} that it takes the chain to go from state 0 to state n, with N_{0,n} = \sum_{i=0}^{n-1} N_i

Mean Number of Transitions
• \mu_i = 1 + q \cdot E[\text{additional transitions to reach } i+1 \mid \text{chain moves to } i-1]
• \mu_i = 1 + q\,E[N_{i-1}^{*} + N_i^{*}] = 1 + q(\mu_{i-1} + \mu_i), hence \mu_i = 1/p + (q/p)\mu_{i-1}
• With \alpha = q/p and \mu_0 = 1: \mu_i = \frac{1}{p}\sum_{j=0}^{i-1} \alpha^{j} + \alpha^{i}, and
  E[N_{0,n}] = 1 + \sum_{i=1}^{n-1} \Big( \frac{1}{p}\sum_{j=0}^{i-1} \alpha^{j} + \alpha^{i} \Big)

Mean Number of Transitions (cont.)
• When p = 1/2: E[N_{0,n}] = n^2
• When p \ne 1/2, with \alpha = q/p: E[N_{0,n}] = \frac{n(1-\alpha^{2}) - 2\alpha(1-\alpha^{n})}{(1-\alpha)^{2}}
• p < 1/2: E[N_{0,n}] is an exponentially increasing function of n
• p > 1/2: for large n, E[N_{0,n}] is essentially linear in n

Branching Processes
• A population consisting of individuals able to produce offspring of the same kind
• Each individual independently produces j new offspring with probability P_j, j \ge 0, by the end of its lifetime
• X_n: the size of the nth generation
• \{X_n, n = 0, 1, 2, \ldots\} is a Markov chain

Extinction
• State 0 is a recurrent (absorbing) state: P_{00} = 1
• All other states are transient if P_0 > 0
• With mean offspring number \mu = \sum_{j=0}^{\infty} j P_j and X_0 = 1: E[X_n] = \mu\,E[X_{n-1}] = \mu^{n}
• Extinction probability \pi_0 = \lim_{n \to \infty} P\{X_n = 0 \mid X_0 = 1\} satisfies \pi_0 = \sum_{j=0}^{\infty} \pi_0^{j} P_j

Age-dependent Branching Processes
• Attach another random variable, the lifetime ("age") T, to each individual
• Exponentially distributed lifetime: f_T(t) = \lambda e^{-\lambda t}, t \ge 0

Mean Number of Individuals
• Let m(t) = E[Z(t)], the expected number of individuals alive at time t
• Theorem: there exist constants \beta > 0 and \alpha > 0 such that m(t) \sim \beta e^{\alpha t} whenever the mean offspring number exceeds 1

Branching Processes (Example)
• Service capacity of a P2P system in the transient regime (Yang and de Veciana, INFOCOM 2004)
• N_d(t): the number of peers available to serve document d at time t

Service Capacity
• Basic branching-process model (figure omitted)

Time Reversible Markov Chains
• Stationary ergodic Markov chain with transition probabilities P_{ij} and stationary probabilities \pi_j
• Is the reversed sequence \{X_n, X_{n-1}, X_{n-2}, \ldots, X_0\} also a Markov chain, and with what transition probabilities?
• Transition probabilities of the reversed chain: Q_{ij} = P\{X_m = j \mid X_{m+1} = i\} = \pi_j P_{ji} / \pi_i
• Theorem: the chain is time reversible iff Q_{ij} = P_{ij}, i.e. \pi_i P_{ij} = \pi_j P_{ji} for all i, j

Reversibility (Example)
• Random walk on an arbitrary connected graph with a positive weight w_{ij} on each edge (i, j) (the slide's small example graph is omitted)
• P_{ij} = w_{ij} / \sum_j w_{ij}
• \pi_i = \sum_j w_{ij} / \sum_i \sum_j w_{ij} satisfies \pi_i P_{ij} = \pi_j P_{ji}, so the walk is time reversible (see the numerical illustration below)
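A small numerical illustration of the weighted-graph random walk above; a sketch only, and the 4-node weight matrix is made up for illustration since the slide's figure is not reproduced here:

```python
import numpy as np

# Symmetric edge weights w_ij of a small connected graph
# (illustrative values; not the graph from the original slide).
W = np.array([[0, 2, 1, 0],
              [2, 0, 3, 1],
              [1, 3, 0, 4],
              [0, 1, 4, 0]], dtype=float)

row_sums = W.sum(axis=1)
P = W / row_sums[:, None]        # P_ij = w_ij / sum_j w_ij
pi = row_sums / W.sum()          # pi_i = sum_j w_ij / sum_ij w_ij

# Detailed balance pi_i P_ij = pi_j P_ji holds, so the walk is time reversible,
# and pi is the stationary distribution.
flux = pi[:, None] * P
print(np.allclose(flux, flux.T))   # True: detailed balance
print(np.allclose(pi @ P, pi))     # True: pi is stationary
```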
Reversibility (Property)
• An ergodic Markov chain for which P_{ij} = 0 whenever P_{ji} = 0 is time reversible if and only if, starting in state i, any path back to i has the same probability as the reversed path. That is,
  P_{i,i_1} P_{i_1,i_2} \cdots P_{i_k,i} = P_{i,i_k} P_{i_k,i_{k-1}} \cdots P_{i_1,i} for all states i, i_1, \ldots, i_k

Reversibility (Extension)
• Irreducible Markov chain with transition probabilities P_{ij}
• Theorem: if one can find numbers \pi_i \ge 0 with \sum_i \pi_i = 1 and a transition probability matrix Q = [Q_{ij}] such that \pi_i P_{ij} = \pi_j Q_{ji} for all i, j, then the Q_{ij} are the transition probabilities of the reversed chain and the \pi_i are the stationary probabilities of both chains

Markov Decision Processes
• After observing the state of the process, an action must be chosen:
  P\{X_{n+1} = j \mid X_0, a_0, X_1, a_1, \ldots, X_n = i, a_n = a\} = P_{ij}(a)
• Limiting probabilities \pi_{ia} (long-run proportion of time the process is in state i and action a is chosen):
  \pi_{ia} \ge 0 for all i, a; \quad \sum_i \sum_a \pi_{ia} = 1; \quad \sum_a \pi_{ja} = \sum_i \sum_a \pi_{ia} P_{ij}(a) for all j

Hidden Markov Chains
• A finite set of signals; a chain in state j emits signal s with probability p(s \mid j), where \sum_s p(s \mid j) = 1
• We observe the sequence of emitted signals, not the underlying states:
  P\{S_n = s \mid X_1, S_1, \ldots, X_{n-1}, S_{n-1}, X_n = j\} = p(s \mid j)
• Conditional probability of the state given the observed signals S^n = (S_1, \ldots, S_n):
  P\{X_n = j \mid S^n = s^n\} = \frac{P\{S^n = s^n, X_n = j\}}{P\{S^n = s^n\}} = \frac{P\{S^n = s^n, X_n = j\}}{\sum_i P\{S^n = s^n, X_n = i\}}

Continuous-Time Markov Chains
• Definition:
  P\{X(t+s) = j \mid X(s) = i, X(u) = x(u), 0 \le u < s\} = P\{X(t+s) = j \mid X(s) = i\}
• T_i: the amount of time the process stays in state i before making a transition
• P\{T_i > s + t \mid T_i > s\} = P\{T_i > t\}: memoryless, so T_i is exponentially distributed

Continuous-Time Markov Chains (Property)
• The amount of time the process spends in a state before making a transition into another state is exponentially distributed
• Embedded transition probabilities: P_{ii} = 0 for all i, and \sum_j P_{ij} = 1 for all i

Limiting Probabilities (Continuous-Time Version)
• The mean time the process stays in state i is 1/v_i
• Transition rates: q_{ij} = v_i P_{ij}
• Balance equations: v_j P_j = \sum_{k \ne j} q_{kj} P_k for all j, with \sum_j P_j = 1
• v_j P_j: the rate at which the process leaves state j
• \sum_{k \ne j} q_{kj} P_k: the rate at which the process enters state j

Birth and Death Process
• The state is the number of people in the system
• Whenever there are n persons in the system, the time until the next arrival is exponentially distributed with mean 1/\lambda_n and the time until the next departure is exponentially distributed with mean 1/\mu_n
• A continuous-time Markov chain

Birth and Death Process (cont.)
• Embedded transition probabilities:
  P_{01} = 1, \quad P_{i,i+1} = \frac{\lambda_i}{\lambda_i + \mu_i}, \quad P_{i,i-1} = \frac{\mu_i}{\lambda_i + \mu_i}
• Balance equations:
  \lambda_0 P_0 = \mu_1 P_1
  (\lambda_n + \mu_n) P_n = \mu_{n+1} P_{n+1} + \lambda_{n-1} P_{n-1}, \quad n \ge 1
• Limiting probabilities:
  P_n = \frac{\lambda_0 \lambda_1 \cdots \lambda_{n-1}}{\mu_1 \mu_2 \cdots \mu_n} P_0, \quad n \ge 1
  P_0 = \Big(1 + \sum_{n=1}^{\infty} \frac{\lambda_0 \lambda_1 \cdots \lambda_{n-1}}{\mu_1 \mu_2 \cdots \mu_n}\Big)^{-1}

M/M/1 System
• Poisson arrivals (i.e., exponential interarrival times) and exponentially distributed service times: interarrival density \lambda e^{-\lambda t}, service density \mu e^{-\mu t}

M/M/1 System (cont.)
• Limiting probabilities, with \rho = \lambda/\mu < 1:
  P_0 = 1 - \rho, \quad P_n = (1 - \rho)\rho^{n}
• E[\text{number of customers in the system}] = \sum_{i=0}^{\infty} i P_i = \frac{\rho}{1 - \rho}, where \rho = \lambda/\mu is the system utilization (a numerical check follows after the references)

References
• Sheldon M. Ross, Introduction to Probability Models, 9th Edition, 2007. ISBN 978-7-115-160232/O1.
• G. R. Grimmett and D. R. Stirzaker, Probability and Random Processes, 2nd Edition, Clarendon Press, Oxford, 1992.
• X. Yang and G. de Veciana, "Service Capacity of Peer to Peer Networks," IEEE INFOCOM, 2004.
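A quick numerical check of the M/M/1 limiting probabilities above; a minimal sketch in which the rates \lambda = 1 and \mu = 2 (so \rho = 0.5) are chosen only for illustration and the infinite sums are truncated:

```python
import numpy as np

# Illustrative rates (assumed for this sketch only): lambda = 1, mu = 2, so rho = 0.5 < 1.
lam, mu = 1.0, 2.0
rho = lam / mu

# Closed-form M/M/1 limiting probabilities P_n = (1 - rho) * rho^n, truncated for the check.
n = np.arange(200)
P = (1 - rho) * rho**n

print(np.isclose(P.sum(), 1.0))               # probabilities sum to one
print(np.allclose(lam * P[:-1], mu * P[1:]))  # birth-death balance: lambda*P_n = mu*P_{n+1}
print((n * P).sum(), rho / (1 - rho))         # mean number in system matches rho/(1-rho)
```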