
Probability
Chapter 16: Markov chains
Lesson 5 – Using Matrices to Represent Conditional Probability
In this chapter we will use matrices to help us determine the long-term probability of events occurring.
Example 1:
Let’s pretend Monica plays netball and she is a shooter. Monica is a bit of a confidence player, so if she shoots her first goal she has a greater chance of shooting her next goal; however, if she misses, she has less chance of shooting her next goal.
The probabilities look like this:
[Diagram: if Monica shot her previous goal, she has a probability of 3/5 of shooting her next goal.]
Calculate the probability that she shoots one goal from two attempts.
We can complete this problem in 2 different ways:
1. Using the Probability Rules
2. Using a Tree Diagram
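As a sketch of the tree-diagram approach, the Python snippet below enumerates the two paths that give exactly one goal (goal then miss, and miss then goal). Only the 3/5 figure comes from the lesson; the probability of scoring on the first attempt and the probability of scoring after a miss are not reproduced above, so the values used here are assumptions for illustration only.

```python
# Worked sketch of Example 1 via the tree-diagram approach.
# Only P_GOAL_AFTER_GOAL is given in the lesson; the other two
# probabilities are assumed values for illustration.
P_FIRST_GOAL = 0.5        # assumed: chance Monica scores her first attempt
P_GOAL_AFTER_GOAL = 3/5   # from the lesson: scores given previous goal
P_GOAL_AFTER_MISS = 2/5   # assumed: scores given previous miss

def prob_exactly_one_goal():
    """P(exactly one goal in two attempts) = P(goal, miss) + P(miss, goal)."""
    p_goal_then_miss = P_FIRST_GOAL * (1 - P_GOAL_AFTER_GOAL)
    p_miss_then_goal = (1 - P_FIRST_GOAL) * P_GOAL_AFTER_MISS
    return p_goal_then_miss + p_miss_then_goal

print(prob_exactly_one_goal())  # 0.5*0.4 + 0.5*0.4 = 0.4 with these assumed values
```

Each term is one branch of the tree diagram: multiply along the branch, then add the branches that match the event.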
Now consider how we might use matrices to represent probability.
[Diagram: a transition matrix. It is made up of conditional probabilities, and each column adds to 1.]
Examples:
Suppose that the probability of snow on any one day is conditional on whether or not it
snowed on the preceding day. The probability that it will snow on a particular day given that
it snowed on the day before is 0.65, and the probability that it will snow on a particular day
given that it did not snow on the day before is 0.3. If the probability that it will snow on
Friday is 0.6, what is the probability that it will snow on Saturday?
Now calculate the probability that it will snow on Sunday.
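The snow example can be checked with a few lines of Python that apply the law of total probability, using only the numbers given above:

```python
# Snow example: P(snow | snow yesterday) = 0.65,
# P(snow | no snow yesterday) = 0.3, and P(snow on Friday) = 0.6.
def next_day_snow(p_today):
    """Law of total probability: condition on whether it snows today."""
    return 0.65 * p_today + 0.3 * (1 - p_today)

p_friday = 0.6
p_saturday = next_day_snow(p_friday)   # 0.65*0.6 + 0.3*0.4 = 0.51
p_sunday = next_day_snow(p_saturday)   # 0.65*0.51 + 0.3*0.49 = 0.4785
print(p_saturday, p_sunday)
```

Note that Sunday’s probability is found by feeding Saturday’s answer back into the same rule, which is exactly what repeated multiplication by a transition matrix does.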
Markov Chains
We describe a Markov chain as a set of states S. The process starts in one of these states and moves successively from one state to another. Each move is called a step. If the chain is currently in state x_n, then it moves to state x_(n+1) at the next step with a fixed probability, which does not depend on which states the chain was in before the current state. These probabilities are called transition probabilities. An initial probability distribution, defined on S, specifies the starting state. Usually this is done by specifying a particular state as the starting state.
A Markov chain is like a frog jumping on lily pads. The frog starts on one of the pads and then jumps from lily pad to lily pad with the appropriate transition probabilities.
Last lesson we started to use matrices to find probabilities of long-term events. The processes these transition matrices describe are called Markov chains.
Setting up a transition matrix
A car rental firm has two branches, one in Bendigo and one in Colac. Cars are usually
rented and returned in the same town. However, a small percentage of cars rented in
Bendigo each week are returned in Colac, and vice versa. The diagram below describes
what happens on a weekly basis.
From week to week:
0.8 of cars rented each week in Bendigo are returned to Bendigo
0.2 of cars rented each week in Bendigo are returned to Colac
0.1 of cars rented each week in Colac are returned to Bendigo
0.9 of cars rented each week in Colac are returned to Colac.
These results can be summarized in a matrix.
This matrix is an example of a transition matrix. It describes the way in which transitions are made between two states: the rental car is based in Bendigo, or the rental car is based in Colac.
Note: The columns in a transition matrix will always add to one (100%), because all the possibilities must be taken into account. In this case, if 80% of cars are returned to Bendigo, then the other 20% must be returned to Colac.
The car rental company now plans to buy 90 new cars and base 50 in Bendigo and 40 in
Colac. Given this pattern of rental car returns, the questions the manager would like
answered are:


1. If we start off with 50 cars in Bendigo and 40 cars in Colac, how many cars will be available for rent at Bendigo and at Colac after 2 weeks?
2. What will happen in the long term? Will the number of cars available for rent each week at each location vary from week to week, or will it settle down to some fixed value?
S_n = T^n S_0
where S_0 is the initial state matrix, T is the transition matrix, and S_n is the state matrix after n steps.
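Applying this rule to the car rental example can be sketched in plain Python (no matrix library needed for a 2 × 2 case), starting from 50 cars in Bendigo and 40 in Colac:

```python
# Car rental example: state vector S = [cars in Bendigo, cars in Colac].
# Each column of T says where this week's cars from one town end up.
T = [[0.8, 0.1],   # returned to Bendigo (from Bendigo, from Colac)
     [0.2, 0.9]]   # returned to Colac  (from Bendigo, from Colac)

def step(T, s):
    """One application of the transition matrix: S_(n+1) = T * S_n."""
    return [T[0][0]*s[0] + T[0][1]*s[1],
            T[1][0]*s[0] + T[1][1]*s[1]]

def after_n_weeks(T, s0, n):
    s = s0
    for _ in range(n):
        s = step(T, s)
    return s

s0 = [50, 40]
print(after_n_weeks(T, s0, 2))    # [39.8, 50.2] after two weeks
print(after_n_weeks(T, s0, 100))  # settles towards the steady state [30, 60]
```

This answers both of the manager’s questions: after two weeks about 40 cars are at Bendigo and 50 at Colac, and in the long run the fleet settles at 30 cars in Bendigo and 60 in Colac regardless of the starting split.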
Suppose that the initial population of the inner city in a town is 35 000 and that of the
suburban area is 65 000. If every year 40% of the inner city population moves to the
suburbs, while 30% of the suburban population moves to the inner part of the city,
determine:
a) the transition matrix T, that can be used to represent this information
b) the estimated number of people living in the suburbs after 4 years, assuming that the
total population remains constant. Give your answer to the nearest 100 people.
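The population example can be worked the same way, iterating the transition matrix year by year with the figures given above:

```python
# Population example: S = [inner city, suburbs], total 100 000 people.
# Each year 40% of the inner city moves out and 30% of the suburbs moves in.
T = [[0.6, 0.3],   # stay in inner city / move in from suburbs
     [0.4, 0.7]]   # move out to suburbs / stay in suburbs

def evolve(T, s, years):
    for _ in range(years):
        s = [T[0][0]*s[0] + T[0][1]*s[1],
             T[1][0]*s[0] + T[1][1]*s[1]]
    return s

inner, suburbs = evolve(T, [35_000, 65_000], 4)
print(round(suburbs, -2))  # about 57 200 people in the suburbs after 4 years
```

The exact value after four years is 57 206.5, which rounds to 57 200 to the nearest 100 people.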
Steady state of a Markov Chain
As n becomes large, the state matrix S_n of a Markov chain settles down to a fixed vector called the steady state. We can verify this result using the example of the car rental company again.
Example:
Suppose that there are two dentists in a country town, Dr Hatchet and Dr Youngblood.
Each year, 10% of the patients of Dr Hatchet move to Dr Youngblood, while 18% of patients
move from Dr Youngblood to Dr Hatchet. Suppose initially, 50% of patients go to Dr
Hatchet and 50% go to Dr Youngblood. Find the percentage of patients, correct to one
decimal place, who will eventually be attending each dentist if this pattern continues
indefinitely.
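One straightforward way to estimate the steady state is to iterate the transition matrix until the distribution stops changing. A sketch for the dentist example, using the percentages given above:

```python
# Dentist example: S = [fraction with Dr Hatchet, fraction with Dr Youngblood].
T = [[0.90, 0.18],   # stay with Hatchet / move over from Youngblood
     [0.10, 0.82]]   # move to Youngblood / stay with Youngblood

s = [0.5, 0.5]       # initially patients split 50/50
for _ in range(200):  # iterate until the distribution stops changing
    s = [T[0][0]*s[0] + T[0][1]*s[1],
         T[1][0]*s[0] + T[1][1]*s[1]]

hatchet, youngblood = [round(100 * p, 1) for p in s]
print(hatchet, youngblood)  # 64.3% with Dr Hatchet, 35.7% with Dr Youngblood
```

The same answer comes from the steady-state condition directly: in the long run the flows balance, 0.10 h = 0.18 y with h + y = 1, giving h = 0.18/0.28 ≈ 64.3%.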
Comparing run length for Bernoulli sequences and Markov chains
A Bernoulli sequence is one where each trial is independent and the result can only be a success or a failure. We will compare this with a Markov chain, where the outcomes are also described as a success or a failure, but each outcome depends on the one before it.
The number of times that the same outcome is observed in sequence is called the length of run
for the sequence.
Examples:
The manager of a shop knows from experience that 60% of her customers will use a credit card to
pay for their purchases. Find the probability that:
a) the next three customers will use a credit card, and the fourth will pay cash
b) the next three customers pay cash, and the fourth will use a credit card
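Because the customers form a Bernoulli sequence, each part is just a product of independent probabilities:

```python
# Credit-card example: each customer independently uses a card with p = 0.6.
p_card = 0.6
p_cash = 1 - p_card

# a) three card payments, then one cash payment
p_a = p_card**3 * p_cash   # 0.6^3 * 0.4 = 0.0864
# b) three cash payments, then one card payment
p_b = p_cash**3 * p_card   # 0.4^3 * 0.6 = 0.0384
print(p_a, p_b)
```

In a Markov chain the factors would instead be conditional probabilities that depend on the previous customer’s choice, which is exactly the contrast this section draws.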