
Introduction to Probability Models
Markov Chains
Limiting Probabilities
ZIENAB B. MOHAMWD
135555
Example 4.22 (Mean Pattern Times in Markov Chain Generated Data)
Consider an irreducible Markov chain {Xn, n ≥ 0} with transition
probabilities Pi,j and stationary probabilities πj, j ≥ 0. Starting in state r, we
are interested in determining the expected number of transitions until the
pattern i1, i2, ..., ik appears. That is, with

N(i1, ..., ik) = min{n ≥ k : Xn−k+1 = i1, ..., Xn = ik}

we are interested in E[N(i1, ..., ik) | X0 = r].
Note that even if i1 = r, the initial state X0 is not considered part of the
pattern sequence. Let μ(i, i1) be the mean number of transitions for the
chain to enter state i1, given that the initial state is i, i ≥ 0. The quantities
μ(i, i1) can be determined as the solution of the following set of
equations, obtained by conditioning on the first transition out of state i:

μ(i, i1) = 1 + Σ_{j≠i1} Pi,j μ(j, i1),  i ≥ 0
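These hitting-time equations can be solved numerically. A minimal Python sketch, using fixed-point iteration of the conditioning equation above (the 3-state transition matrix P is an invented illustration, not from the text):

```python
# Iterate mu <- 1 + sum_{j != target} P[i][j] * mu[j]; the update is a
# contraction for an irreducible chain, so it converges to the hitting times.

def mean_hitting_times(P, target, iters=5000):
    """mu[i] = mean number of transitions to enter `target` starting from i."""
    n = len(P)
    mu = [0.0] * n
    for _ in range(iters):
        mu = [1.0 + sum(P[i][j] * mu[j] for j in range(n) if j != target)
              for i in range(n)]
    return mu

# Invented 3-state chain used only for illustration.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.3, 0.4]]
mu = mean_hitting_times(P, 0)   # mean transitions to enter state 0 from each i
```

A direct linear solve of the same equations would work equally well; iteration just keeps the sketch dependency-free.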
For the Markov chain {Xn, n ≥ 0} associate a corresponding Markov chain, which we will
refer to as the k-chain, whose state at any time is the sequence of the most recent k
states of the original chain. (For instance, if k = 3 and X2 = 4, X3 = 1, X4 = 1, then the state
of the k-chain at time 4 is (4, 1, 1).) Let π(j1, ..., jk) be the stationary probabilities for the
k-chain. Because π(j1, ..., jk) is the proportion of time that the state of the original
Markov chain k units ago was j1 and the following k−1 states, in sequence, were
j2, ..., jk, we can conclude that

π(j1, ..., jk) = π_{j1} P_{j1,j2} ⋯ P_{j_{k−1},jk}
Moreover, because the mean number of transitions between successive visits of the
k-chain to the state i1, i2, ..., ik is equal to the inverse of the stationary probability of that
state, we have that

E[number of transitions between successive visits to i1, ..., ik] = 1/π(i1, ..., ik)    (4.13)
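The product formula for the k-chain's stationary probabilities is easy to compute. A short Python sketch (the 2-state chain and its stationary vector pi are an invented illustration; pi is assumed already known):

```python
from itertools import product

def k_chain_stationary(pi, P, seq):
    """pi(j1,...,jk) = pi_{j1} * P_{j1,j2} * ... * P_{j_{k-1},j_k}."""
    p = pi[seq[0]]
    for a, b in zip(seq, seq[1:]):
        p *= P[a][b]
    return p

# Invented 2-state chain; pi solves pi = pi P (pi0 = 0.4/0.7 = 4/7).
P = [[0.7, 0.3], [0.4, 0.6]]
pi = [4 / 7, 3 / 7]

# Sanity check: the k-chain probabilities over all length-3 sequences sum to 1.
total = sum(k_chain_stationary(pi, P, s) for s in product([0, 1], repeat=3))
```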
Let A(i1,...,im) be the additional number of transitions needed until the pattern appears,
given that the first m transitions have taken the chain into states X1=i1,...,Xm =im.
We will now consider whether the pattern has overlaps, where we say that the pattern
i1, i2, ..., ik has an overlap of size j, j < k, if the sequence of its final j elements is the
same as that of its first j elements. That is, it has an overlap of size j if

(i_{k−j+1}, ..., ik) = (i1, ..., ij),  j < k
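Finding the largest overlap is a simple suffix-against-prefix scan; a minimal Python sketch of that check:

```python
def largest_overlap(pattern):
    """Largest j < k with (i_{k-j+1},...,i_k) == (i_1,...,i_j); 0 if none."""
    k = len(pattern)
    for j in range(k - 1, 0, -1):       # try the largest candidate first
        if pattern[k - j:] == pattern[:j]:
            return j
    return 0
```

For example, the pattern 1, 2, 3, 1, 2, 3, 1, 2 used later in the text has largest overlap 5 (namely 1, 2, 3, 1, 2).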
Case 1: The pattern i1, i2, ..., ik has no overlaps. Because there is no overlap, Equation
(4.13) yields that

E[N(i1, ..., ik) | X0 = ik] = 1/π(i1, ..., ik)

Because the time until the pattern occurs is equal to the time until the chain enters state
i1 plus the additional time, we may write

E[N(i1, ..., ik) | X0 = ik] = μ(ik, i1) + E[A(i1)]

The preceding two equations imply

E[A(i1)] = 1/π(i1, ..., ik) − μ(ik, i1)

Using that

E[N(i1, ..., ik) | X0 = r] = μ(r, i1) + E[A(i1)]

gives the result

E[N(i1, ..., ik) | X0 = r] = μ(r, i1) + 1/π(i1, ..., ik) − μ(ik, i1)

where

π(i1, ..., ik) = π_{i1} P_{i1,i2} ⋯ P_{i_{k−1},ik}
Case 2: Now suppose that the pattern has overlaps and let its largest overlap be of size
s. In this case the number of transitions between successive visits of the k-chain to the
state i1, i2, ..., ik is equal to the additional number of transitions of the original chain until
the pattern appears given that it has already made s transitions with the results
X1 = i1, ..., Xs = is. Therefore, from Equation (4.13),

E[A(i1, ..., is)] = 1/π(i1, ..., ik)
We can now repeat the same procedure on the pattern i1,...,is, continuing to do so until
we reach one that has no overlap, and then apply the result from Case 1.
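The peel-off procedure can be coded directly. A Python sketch (the function names and the i.i.d. test chain are invented for illustration; mu is assumed to hold the hitting times μ(i, j) and pi the stationary probabilities):

```python
def largest_overlap(pattern):
    k = len(pattern)
    for j in range(k - 1, 0, -1):
        if pattern[k - j:] == pattern[:j]:
            return j
    return 0

def expected_pattern_time(pi, P, mu, pattern, r):
    """E[N(pattern) | X0 = r]: repeatedly peel the largest overlap (Case 2)
    until a no-overlap pattern remains, then apply Case 1."""
    def seq_prob(seq):                       # k-chain stationary probability
        p = pi[seq[0]]
        for a, b in zip(seq, seq[1:]):
            p *= P[a][b]
        return p

    extra = 0.0                              # accumulates the E[A(...)] terms
    pat = list(pattern)
    while True:
        s = largest_overlap(pat)
        if s == 0:                           # Case 1: E[A(i1)] = 1/pi(pat) - mu(ik, i1)
            extra += 1.0 / seq_prob(pat) - mu[pat[-1]][pat[0]]
            break
        extra += 1.0 / seq_prob(pat)         # Case 2: E[A(i1..is)] = 1/pi(pat)
        pat = pat[:s]                        # recurse on the overlap prefix
    return mu[r][pattern[0]] + extra

# Cross-check against the i.i.d. special case (P[i][j] = p[j], mu[i][j] = 1/p[j]);
# states 0, 1, 2 and the probabilities p are invented for illustration.
p = [0.5, 0.3, 0.2]
P = [p, p, p]
mu = [[1.0 / pj for pj in p] for _ in p]
val = expected_pattern_time(p, P, mu, [0, 1, 2, 0, 1, 2, 0, 1], r=2)
```

In the i.i.d. case the result should collapse to the closed form derived at the end of the example.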
For instance, suppose the desired pattern is 1, 2, 3, 1, 2, 3, 1, 2. Its largest overlap is of
size s = 5 (namely, 1, 2, 3, 1, 2), so

E[A(1, 2, 3, 1, 2)] = 1/π(1, 2, 3, 1, 2, 3, 1, 2)

Because the largest overlap of the pattern (1, 2, 3, 1, 2) is of size 2, the same argument as
in the preceding gives

E[A(1, 2)] = 1/π(1, 2, 3, 1, 2)

Because the pattern (1, 2) has no overlap, we obtain from Case 1 that

E[A(1)] = 1/π(1, 2) − μ(2, 1)

Putting it together yields

E[N(1, 2, 3, 1, 2, 3, 1, 2) | X0 = r]
  = μ(r, 1) + 1/π(1, 2) − μ(2, 1) + 1/π(1, 2, 3, 1, 2) + 1/π(1, 2, 3, 1, 2, 3, 1, 2)
If the generated data is a sequence of independent and identically distributed random
variables, with each value equal to j with probability Pj, then the Markov chain has
Pi,j = Pj. In this case, πj = Pj. Also, because the time to go from state i to state j is a
geometric random variable with parameter Pj, we have μ(i, j) = 1/Pj. Thus, since the
terms μ(r, 1) = 1/P1 and μ(2, 1) = 1/P1 cancel, the expected number of data values that
need to be generated before the pattern 1, 2, 3, 1, 2, 3, 1, 2 appears would be

E[N] = 1/(P1 P2) + 1/(P1² P2² P3) + 1/(P1³ P2³ P3²)
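The same formula family is easy to sanity-check by simulation for a short pattern: for the no-overlap pattern (1, 2) it reduces to E[N] = 1/(P1 P2). A Python sketch under the i.i.d. assumption (the fair-coin probabilities and trial count are invented for illustration):

```python
import random
random.seed(42)

def waiting_time(pattern, probs):
    """Number of i.i.d. draws (values 1..len(probs)) until `pattern` appears."""
    values = list(range(1, len(probs) + 1))
    recent, n = [], 0
    while recent[-len(pattern):] != pattern:
        recent.append(random.choices(values, weights=probs)[0])
        n += 1
    return n

# For the no-overlap pattern (1, 2) with P1 = P2 = 1/2, E[N] = 1/(P1*P2) = 4.
probs = [0.5, 0.5]
est = sum(waiting_time([1, 2], probs) for _ in range(40000)) / 40000
```

The empirical mean should sit close to the theoretical value of 4.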
The following result is quite useful.

Proposition 4.3 Let {Xn, n ≥ 1} be an irreducible Markov chain with stationary
probabilities πj, j ≥ 0, and let r be a bounded function on the state space. Then, with
probability 1,

lim_{N→∞} (1/N) Σ_{n=1}^{N} r(Xn) = Σ_j r(j) πj
Proof If we let aj(N) be the amount of time the Markov chain spends in state j during
time periods 1, ..., N, then

Σ_{n=1}^{N} r(Xn) = Σ_j aj(N) r(j)

Since aj(N)/N → πj, the result follows from the preceding upon dividing by N and then
letting N → ∞.
If we suppose that we earn a reward r(j) whenever the chain is in state j, then
Proposition 4.3 states that our average reward per unit time is Σ_j r(j) πj.
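Proposition 4.3 can be illustrated by simulating a chain and comparing the empirical average reward to Σ_j r(j)πj. A Python sketch (the 2-state chain, rewards, and run length are invented for illustration):

```python
import random
random.seed(7)

# Invented 2-state chain; reward r[j] is earned on each visit to state j.
P = [[0.7, 0.3], [0.4, 0.6]]
pi = [4 / 7, 3 / 7]        # stationary distribution (solves pi = pi P)
r = [1.0, 5.0]

N = 200_000
x, total = 0, 0.0
for _ in range(N):
    x = random.choices([0, 1], weights=P[x])[0]
    total += r[x]
avg = total / N                                  # long-run average reward
theory = sum(r[j] * pi[j] for j in range(2))     # = 19/7
```

With probability 1 the empirical average converges to the stationary value as N grows, which is exactly the content of the proposition.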