9.2: Regular Markov Chains

© Dr Oksana Shatalov, Fall 2010
DEFINITION 1. A transition matrix (stochastic matrix) is said to be regular if some power of
T has all positive entries (i.e. strictly greater than zero). The Markov chain represented by T is
called a regular Markov chain.
It can be shown that if a zero occurs in the same position in two successive powers of the matrix, then it will appear in that position for all higher powers of the matrix.
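The fact above also gives a stopping rule for testing regularity by computer: raise T to successive powers and look for one with all strictly positive entries. The sketch below uses NumPy (an assumption of this sketch, not part of the notes); the loop bound (n - 1)^2 + 1 comes from a known bound on when a regular matrix's powers must become positive, and the two test matrices are illustrations, not the ones from the examples below.

```python
import numpy as np

def is_regular(T, max_power=None):
    """Return True if some power of the stochastic matrix T
    has all strictly positive entries."""
    n = T.shape[0]
    # For an n x n regular matrix, a strictly positive power must
    # appear by power (n - 1)**2 + 1, so the loop can stop there.
    limit = max_power or (n - 1) ** 2 + 1
    P = T.copy()
    for _ in range(limit):
        if np.all(P > 0):
            return True
        P = P @ T
    return False

# A zero entry does not by itself rule out regularity...
M1 = np.array([[0.0, 0.5],
               [1.0, 0.5]])   # M1^2 already has all positive entries
# ...but the identity matrix stays the identity at every power.
M2 = np.array([[1.0, 0.0],
               [0.0, 1.0]])
print(is_regular(M1), is_regular(M2))   # True False
```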
EXAMPLE 2. Are the transition matrices

        | 0.1  0.1  0.3 |          | 1  0  0 |
    A = | 0.1  0.2  0.5 |      B = | 0  1  0 |
        | 0.8  0.7  0.2 |          | 0  0  1 |

    C = | 0.8  0 |                 | 1  0.1  0.3 |
        | 0.2  1 |             D = | 0  0.2  0.5 |
                                   | 0  0.7  0.2 |

regular?
One of many applications of Markov chains is making long-range predictions. Long-range prediction is not possible with every transition matrix; for a regular transition matrix, however, it is always possible.
EXAMPLE 3. (This example is worked out in its entirety for you to study.) Study the long-term trends for the rural/urban problem in Section 9.1. In other words, if the trend continues, determine, in the long run, the population distribution in the state.
SOLUTION: Recall that we have:

              R    U
    T =  R | 0.9  0.4 |          X0 =  R | 0.7 |
         U | 0.1  0.6 |                U | 0.3 |

    After 10 years:  X1 = T X0 = | 0.75 |
                                 | 0.25 |

    After 20 years:  X2 = T X1 = T^2 X0 = | 0.775 |
                                          | 0.225 |

    After 30 years:  X3 = T X2 = T^3 X0 = | 0.7875 |
                                          | 0.2125 |
Proceeding further, we obtain the following vectors:

    X10 = T^10 X0 = | 0.7999023438 |   ...   X20 = T^20 X0 = | 0.7999999046 |
                    | 0.2000976563 |                         | 0.2000000954 |
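The repeated multiplications above are easy to reproduce numerically. A minimal sketch (NumPy is an assumption of this sketch, not part of the notes):

```python
import numpy as np

T = np.array([[0.9, 0.4],
              [0.1, 0.6]])   # rural/urban transition matrix
X = np.array([0.7, 0.3])     # X0: 70% rural, 30% urban

# Apply T repeatedly; after m steps X equals T^m X0.
for m in range(1, 21):
    X = T @ X
    if m in (1, 2, 3, 10, 20):
        print(f"X{m} = {X}")
# The printed vectors approach [0.8, 0.2], matching the values above.
```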
If this trend continues, about 80% will be rural and 20% will be urban.
In the long run, the probability distribution vector Xm approaches the probability distribution vector

    Xm → | 0.8 |
         | 0.2 |

This is called a steady-state (or limiting) distribution vector.
As m gets larger and larger,

    T^m → L = | 0.8  0.8 |
              | 0.2  0.2 |

This is called the steady-state matrix for the system. Note that the entries of any row of L are all equal. The steady state would be reached regardless of the initial state of the system:

    For X0 = |   p   |  we have  L X0 = | 0.8  0.8 | |   p   | = | 0.8 |
             | 1 - p |                  | 0.2  0.2 | | 1 - p |   | 0.2 |
• Finding the Steady-State Distribution Vector: Let T be a regular stochastic matrix. Then the
steady-state distribution vector X may be found by solving the matrix equation
TX = X
together with the condition that the sum of the elements of the vector X be equal to 1.
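This procedure translates directly into a small linear system: the equations of (T - I)X = 0 are dependent, so one of them can be replaced by the normalization condition. A sketch, again assuming NumPy:

```python
import numpy as np

def steady_state(T):
    """Solve T X = X together with sum(X) = 1."""
    n = T.shape[0]
    A = T - np.eye(n)    # rows of (T - I) X = 0; one row is redundant
    A[-1, :] = 1.0       # replace the last row with: x_1 + ... + x_n = 1
    b = np.zeros(n)
    b[-1] = 1.0          # right-hand side of the normalization condition
    return np.linalg.solve(A, b)

T = np.array([[0.9, 0.4],
              [0.1, 0.6]])
print(steady_state(T))   # ≈ [0.8, 0.2]
```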
EXAMPLE 4. Find the steady-state vector for the transition matrix

    T = | 0.9  0.4 |
        | 0.1  0.6 |

from the rural/urban problem (again...).
EXAMPLE 5. Find the steady-state distribution vector for

    A = | 0.3  0.8 |
        | 0.7  0.2 |

EXAMPLE 6. Find the steady-state distribution vector for

    T = | 0.6  0.3  0   |
        | 0    0.3  0.4 |
        | 0.4  0.4  0.6 |

EXAMPLE 7. A psychologist conducts an experiment in which a mouse is placed in a T-maze, where it has a choice at the T-junction of turning left and receiving a reward (cheese) or turning right and receiving a mild shock. At the end of each trial a record is kept of the mouse's response. It is observed that the mouse is as likely to turn left (state 1) as right (state 2) during the first trial. In subsequent trials, however, the observation is made that if the mouse turned left in the previous trial, then the probability that it will turn left in the next trial is 0.8, whereas the probability that it will turn right is 0.2. If the mouse turned right in the previous trial, then the probability that it will turn right in the next trial is 0.1, whereas the probability that it will turn left is 0.9. In the long run, what percentage of the time will the mouse turn left at the T-junction?
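As a numerical check (not a substitute for the hand solution via TX = X), the data of Example 7 can be put into a transition matrix with the same column convention as above: column j holds the probabilities of the next turn given the previous one. A sketch assuming NumPy:

```python
import numpy as np

# State 1 = turn left, state 2 = turn right.
# Columns: previous turn; rows: next turn.
T = np.array([[0.8, 0.9],
              [0.2, 0.1]])

X = np.array([0.5, 0.5])   # first trial: left and right equally likely
for _ in range(50):
    X = T @ X               # distribution after each subsequent trial
print(X)   # ≈ [0.818, 0.182]: the mouse turns left about 9/11 of the time
```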