Lecture notes

Mar. 21
Announcements
HW #7 is now due on Friday, March 23.
Office hours tomorrow (Thursday) as usual.
6.6 Time reversibility
Analogue of Section 4.8: A continuous-time ergodic Markov chain is
time-reversible with limiting probabilities π_i if and only if

    \pi_i q_{ij} = \pi_j q_{ji}

for all pairs i, j.
(The rate of i → j transitions per unit time equals the rate of j → i transitions.)
This is detailed balance.
Notes: Note the clear similarity to the concept of detailed balance from
Chapter 4.
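As a short side derivation (not on the slides): detailed balance implies the
ordinary balance equations, since summing over i gives

    \sum_{i \ne j} \pi_i q_{ij} = \pi_j \sum_{i \ne j} q_{ji} = \pi_j v_j,

where v_j = \sum_{i \ne j} q_{ji} is the total rate of leaving state j; this says
the rate into j equals the rate out of j. The converse fails in general, which is
why time reversibility is a genuinely stronger property.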
6.6 Time reversibility
Compare the balance equations to the detailed balance equations using
a birth-death process.
Solve for π_i using balance.
Solve for π_i using detailed balance.
Notes: In the case of a birth-death process, which we have argued
previously does satisfy detailed balance, the balance (ergodic)
equations are not too difficult to solve. However, we showed in class that
the detailed balance equations are even easier! This example also gives
a nice, simple way to compare “overall” balance with detailed balance.
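As a sketch of the detailed-balance computation (with the usual birth rates
λ_i and death rates μ_i), the detailed balance equations for a birth-death
process read

    \pi_i \lambda_i = \pi_{i+1} \mu_{i+1}, \qquad i = 0, 1, 2, \ldots,

so each probability follows from the previous one by a single division:

    \pi_i = \pi_0 \prod_{k=1}^{i} \frac{\lambda_{k-1}}{\mu_k},

with π_0 fixed by the normalization \sum_i \pi_i = 1. By contrast, each balance
equation couples π_i to both of its neighbors and must be untangled by induction.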
6.5 Limiting probabilities
Theorem: If a continuous-time M.C. {X_t : t ≥ 0} is irreducible and has a
stationary distribution π, then

    \lim_{t \to \infty} P_{ij}(t) = \pi_j

for all j.
NB: Establishing positive recurrence is often complicated for
continuous-time MCs.
Notes: This result comes from the Durrett book referenced in the final
slide. It is a nice, simple result that is stated only indirectly in Ross.
6.5 Limiting probabilities
Theorem: If a continuous-time M.C. {X_t : t ≥ 0} has rate (generator)
matrix R, then π is a stationary distribution if and only if πR = 0.
Notes: In combination with the theorem on the previous slide, this gives
what is often the easiest way to find limiting probabilities: find any
solution to πR = 0, then show that the MC is irreducible; together, these
results imply that the solution is unique and equal to the limiting
probabilities.
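As a concrete sketch in R (the 2 × 2 generator below is an invented example,
not one from the lecture), π can be found by solving πR = 0 together with the
normalization \sum_i \pi_i = 1:

    # Hypothetical 2x2 generator (rate) matrix: rows sum to zero.
    R <- matrix(c(-2,  2,
                   1, -1), nrow = 2, byrow = TRUE)

    # pi %*% R = 0 with sum(pi) = 1: transpose the system and append
    # a row of ones for the normalization constraint.
    A <- rbind(t(R), rep(1, nrow(R)))
    b <- c(rep(0, nrow(R)), 1)
    pi_hat <- qr.solve(A, b)  # least-squares solve of the stacked system
    pi_hat                    # (1/3, 2/3) for this generator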
LOLN-like results
Theorem: If a continuous-time M.C. {X_t : t ≥ 0} is irreducible and has a
stationary distribution π, and if g : Ω → R satisfies E_π|g(X)| < ∞, then
as t → ∞,

    \frac{1}{t} \int_0^t g(X_s)\, ds \to E_\pi[g(X)] \quad \text{with probability 1.}
Compare with Proposition 4.3 (the last proposition in Section 4.4).
Notes: In class, I had omitted the phrase “with probability 1,” without
which the statement is a bit difficult to interpret, because the left-hand
side is a RANDOM function of t, rather than a deterministic one. We also
discussed the intuition of the left-hand side: it is the total value of the
function g(X_s), weighted by the length of time spent in each state of the
Markov chain, accumulated from zero to t and then divided by t.
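A minimal simulation sketch in R (the two-state chain, its rates, and g below
are invented for illustration): simulate the chain by drawing exponential
holding times and compare the time average of g(X_s) with E_π g(X).

    set.seed(1)
    q <- c(2, 1)        # rate of leaving state 1 is 2, state 2 is 1
    g <- function(x) x  # hypothetical function of the state
    t_end <- 1e4
    state <- 1; t_now <- 0; total <- 0
    while (t_now < t_end) {
      hold <- rexp(1, rate = q[state])   # exponential holding time
      hold <- min(hold, t_end - t_now)   # truncate at the horizon
      total <- total + g(state) * hold   # accumulate g weighted by time in state
      t_now <- t_now + hold
      state <- 3 - state                 # with two states, jump to the other one
    }
    total / t_end  # close to E_pi g(X) = (1/3)(1) + (2/3)(2) = 5/3

For this chain, detailed balance gives π = (1/3, 2/3), so the time average
should settle near 5/3.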
6.8 Computing the transition probabilities
To find P(t) (as opposed to P(∞)), we can use
    P(t) = \exp\{Rt\} = \sum_{i=0}^{\infty} \frac{(Rt)^i}{i!}.
Notes: Matrix exponentials are not very easy to calculate in general.
However, there is software available to do it, such as the expm function
in the Matrix package for R.
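For instance (reusing the hypothetical generator from the πR = 0 sketch
above), a minimal computation with the Matrix package looks like this:

    library(Matrix)

    # Hypothetical generator from the earlier sketch.
    R <- Matrix(c(-2,  2,
                   1, -1), nrow = 2, byrow = TRUE)

    P_t <- expm(R * 0.5)     # transition probability matrix P(0.5)
    rowSums(as.matrix(P_t))  # each row of P(t) sums to 1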
Further reading on Markov chains
Durrett, Essentials of Stochastic Processes
Lange, Applied Probability
Notes: I would call each of these books more difficult than Ross in terms
of mathematical level. The Durrett book is a standard reference for
stochastic processes. The Lange book is much more diverse, with only a
couple of chapters on Markov chains; its Markov chain material is thus
much more concise, though still quite thorough.