Math 554
Introduction to Stochastic Processes
Instructor: Alex Roitershtein
Iowa State University
Department of Mathematics
Fall 2015
Solutions to Homework #3
1.15 (a)
For any state $j \neq i$, we have:
$$r(j) = E_i\Bigl[\sum_{n=0}^{T-1} I(X_n = j)\Bigr] = E_i\Bigl[\sum_{n=0}^{\infty} I(X_n = j,\, n \leq T-1)\Bigr] = \sum_{n=0}^{\infty} P_i(X_n = j,\, T > n) = \sum_{n=1}^{\infty} P_i(X_n = j,\, T > n),$$
where in the last step we used the fact that $P_i(X_0 = j) = 0$ for $j \neq i$.
Hence, using the Markov property,
$$rP(j) = \sum_{k\in\Omega} r(k)P(k,j) = \sum_{n=0}^{\infty}\sum_{k\in\Omega} P_i(X_n = k,\, T > n)\,P(k,j) = \sum_{n=0}^{\infty} P_i(X_{n+1} = j,\, T > n+1) = \sum_{n=1}^{\infty} P_i(X_n = j,\, T > n) = r(j).$$
On the other hand, r(i) = 1 and
$$rP(i) = \sum_{k\in\Omega} r(k)P(k,i) = \sum_{n=0}^{\infty}\sum_{k\in\Omega} P_i(X_n = k,\, T > n)\,P(k,i) = \sum_{n=0}^{\infty} P_i(T = n+1) = \sum_{n=1}^{\infty} P_i(T = n) = 1.$$
Thus r = rP.
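As a quick numerical sanity check (not part of the required argument), the identity $r = rP$ can be verified by simulation for a small chain; the 3-state transition matrix in the sketch below is an arbitrary illustrative choice.

# Monte Carlo sketch: estimate r(j) = E_i[# visits to j before returning to i] and check r = rP.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])   # arbitrary irreducible 3-state chain
i = 0                             # reference state
trials = 200_000
r = np.zeros(3)
for _ in range(trials):
    visits = np.zeros(3)
    visits[i] += 1                # X_0 = i is counted, so r(i) = 1 automatically
    x = i
    while True:
        x = rng.choice(3, p=P[x])
        if x == i:                # T = first return time to i; X_T is not counted
            break
        visits[x] += 1
    r += visits
r /= trials
print(r)                          # estimate of the vector r
print(r @ P)                      # should agree with r up to Monte Carlo error

Up to simulation noise, the two printed vectors coincide, in line with part (a).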
1.15 (b)
Notice that the events $\{X_n = j\}$, $j \in \Omega$, are disjoint and together exhaust the whole sample space. In particular, $\sum_{j\in\Omega} I(X_n = j) = 1$.
Therefore, we have:
$$\sum_{j\in\Omega} r(j) = \sum_{j\in\Omega} E_i\Bigl[\sum_{n=0}^{T-1} I(X_n = j)\Bigr] = E_i\Bigl[\sum_{n=0}^{T-1}\sum_{j\in\Omega} I(X_n = j)\Bigr] = E_i\Bigl[\sum_{n=0}^{T-1} 1\Bigr] = E_i(T).$$
1.15 (c)
The claim is implied by the results in (a) and (b), since $\pi$ is the unique (up to a multiplicative factor) left eigenvector of $P$ corresponding to the eigenvalue 1. More precisely, it follows from (a) that $r = c\pi$ for a constant $c \neq 0$ (the constant cannot be zero, for instance, because $r(i) = 1$). It then follows from (b) that $c = E_i(T)$. Since $r(i) = 1$, we conclude that
$$1 = r(i) = E_i(T)\cdot \pi(i),$$
and hence $\pi(i) = 1/E_i(T)$, as desired. In fact, in addition, we can also see now that
$$r(j) = E_i(T)\cdot \pi(j) = \frac{\pi(j)}{\pi(i)},$$
the result that we stated and justified heuristically in class.
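As another optional sanity check, the relation $\pi(i) = 1/E_i(T)$ can be verified exactly (no simulation) for the same arbitrary 3-state matrix used above; the sketch solves the standard first-step equations for the mean hitting times and computes $\pi$ as the normalized left eigenvector of $P$ for the eigenvalue 1.

# Exact check of pi(i) = 1 / E_i(T) for an arbitrary small chain (illustrative only).
import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
N = P.shape[0]
i = 0

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

# Mean hitting times h_j = E_j(time to reach i) for j != i solve (I - Q) h = 1,
# where Q is P restricted to the states other than i.
idx = [j for j in range(N) if j != i]
h = np.linalg.solve(np.eye(N - 1) - P[np.ix_(idx, idx)], np.ones(N - 1))

# Mean return time: E_i(T) = 1 + sum over k != i of P(i, k) h_k.
EiT = 1 + P[i, idx] @ h
print(EiT, 1 / pi[i])             # the two numbers should coincide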
1.16
Identify for a moment state 0 with N. Then the event $\{X_{T_N} = k\}$ for $k = 1, \ldots, N-1$ is the (disjoint) union of the following two events:
$$A := \{X_{T_N} = k,\ \text{the walk has visited } k+1 \text{ but never crossed the bond } (k, k+1) \text{ before time } T_N\}$$
and
$$B := \{X_{T_N} = k,\ \text{the walk has visited } k-1 \text{ but never crossed the bond } (k, k-1) \text{ before time } T_N\}.$$
Consider first the event A. Identify now $N - j$ with $-j$ for $j = 1, \ldots, N-k$. In particular, $k+1 = N - (N-k-1)$ is now identified with $-(N-k-1)$. Cut the circle at the point $k$ and unfold it into the interval $[-(N-k), k]$. Let $Q_i$ be the probability distribution of a simple symmetric nearest-neighbor random walk $Y_n$ on this interval, starting at $Y_0 = i$. Let $\tau_k$ be the first hitting time of site $k$. Then, using the Markov property,
$$P(A) = Q_0\bigl(\tau_{-(N-k-1)} < \tau_k\bigr)\times Q_{-(N-k-1)}\bigl(\tau_k < \tau_{-(N-k)}\bigr).$$
Using the solution to the gambler's ruin problem, we obtain:
$$P(A) = \frac{k}{N-1}\cdot\frac{1}{N} = \frac{k}{N(N-1)}.$$
Similar arguments show that
$$P(B) = \frac{N-k}{N(N-1)},$$
and hence $P(X_{T_N} = k) = P(A) + P(B) = \frac{1}{N-1}$. If you want to consider the degenerate case $k = N-1$ separately, you can use the above argument for $k = 1, \ldots, N-2$ and then observe that
$$P(X_{T_N} = N-1) = 1 - \sum_{k=1}^{N-2} P(X_{T_N} = k).$$
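A short simulation (again only an illustrative check, with the cycle length $N = 6$ chosen arbitrarily) confirms that the last newly visited site of the walk on the circle is uniformly distributed over the $N-1$ sites other than the starting point.

# Monte Carlo sketch: last new site of simple random walk on a cycle of N sites.
import numpy as np

rng = np.random.default_rng(1)
N = 6
trials = 50_000
counts = np.zeros(N, dtype=int)
for _ in range(trials):
    x = 0
    visited = {0}
    while len(visited) < N:       # run until every site has been visited
        x = (x + rng.choice([-1, 1])) % N
        visited.add(x)
    counts[x] += 1                # x = X_{T_N}, the last newly visited site
print(counts[1:] / trials)        # each entry should be close to 1/(N-1) = 0.2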
1.17 (a) Let $\Omega = \{1, \ldots, N\}$. At the first step the walk moves from 1 to some state other than 1. We have $P(i, 1) = \frac{1}{N-1}$ for any $i \in \Omega$, $i \neq 1$. Thus, for any $m \in \mathbb{N}$ (after the first step the walk must avoid 1 for $m-1$ steps and then jump to 1),
$$P(T = m+1) = \Bigl(\frac{N-2}{N-1}\Bigr)^{m-1}\cdot\frac{1}{N-1}.$$
This is a geometric distribution with parameter $\frac{1}{N-1}$. In particular, for any $m \in \mathbb{N}$,
$$P(T > m) = \sum_{n=m}^{\infty} P(T = n+1) = \frac{1}{N-1}\sum_{n=m}^{\infty}\Bigl(\frac{N-2}{N-1}\Bigr)^{n-1} = \Bigl(\frac{N-2}{N-1}\Bigr)^{m-1}.$$
Thus, by virtue of formula (1.13) in the textbook, and since $P(T > 0) = P(T > 1) = 1$, we have
$$E(T) = \sum_{m=0}^{\infty} P(T > m) = 2 + \sum_{m=2}^{\infty}\Bigl(\frac{N-2}{N-1}\Bigr)^{m-1} = 2 + \frac{N-2}{N-1}\cdot\frac{1}{1 - \frac{N-2}{N-1}} = N.$$
By a symmetry argument, $\pi(j) = \frac{1}{N}$ for any $j \in \Omega$, and hence (1.11) holds in this particular instance.
1.17 (b) In fact, the proof in (a) tells us that
$$P(T = m+1 \mid X_0 = 1) = P(T = m \mid X_0 = i)$$
for any $i \neq 1$. In words, $T - 1$ under $\{X_0 = 1\}$ is distributed as $T$ under $\{X_0 = i\}$. In particular,
$$E(T \mid X_0 = 2) = E(T \mid X_0 = 1) - 1 = N - 1.$$
1.17 (c)
Let
$$R_n = \{i \in \Omega : X_k = i \text{ for some } k = 0, \ldots, n\}$$
be the range of the random walk at time $n$. Let $r_n = |R_n|$ be the cardinality (number of elements) of the set $R_n$. Finally, let $\tau_1 = 0$ and, for $n = 1, \ldots, N-1$,
$$\tau_{n+1} = \inf\{k > \tau_n : r_k = n+1\}.$$
Thus $\tau_n$ is the first time $k \geq 0$ such that $r_k = n$. We thus want to compute $E(\tau_N)$. The rest of the argument is very similar to that of part (a).
Notice that $g_n := \tau_{n+1} - \tau_n$ are geometric random variables with
$$P(g_n = m) = \Bigl(\frac{n-1}{N-1}\Bigr)^{m-1}\cdot\frac{N-n}{N-1}, \qquad m \in \mathbb{N}.$$
Hence
$$E(g_n) = \sum_{m=0}^{\infty} P(g_n > m) = 1 + \sum_{m=1}^{\infty}\Bigl(\frac{n-1}{N-1}\Bigr)^m = 1 + \frac{n-1}{N-1}\cdot\frac{1}{1 - \frac{n-1}{N-1}} = \frac{N-1}{N-n},$$
and, after the change of index $j = N - k$,
$$E(\tau_N) = \sum_{k=1}^{N-1} E(g_k) = \sum_{k=1}^{N-1}\frac{N-1}{N-k} = (N-1)\sum_{j=1}^{N-1}\frac{1}{j} \sim N\log N,$$
as $N \to \infty$.
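The exact formula $E(\tau_N) = (N-1)\sum_{j=1}^{N-1} 1/j$ can be compared against a simulated cover time of the same chain; the sketch below uses the arbitrary value $N = 10$.

# Sketch: cover time of the uniform-jump chain vs. the formula (N-1) * (1 + 1/2 + ... + 1/(N-1)).
import numpy as np

rng = np.random.default_rng(3)
N = 10
trials = 20_000

def cover_time():
    x, seen, n = 1, {1}, 0
    while len(seen) < N:          # run until every state has been visited
        x = rng.choice([j for j in range(1, N + 1) if j != x])
        seen.add(x)
        n += 1
    return n

formula = (N - 1) * sum(1.0 / j for j in range(1, N))
print(formula, np.mean([cover_time() for _ in range(trials)]))   # the two should be close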
1.18 Combining together the results mentioned in the textbook about the stationary distribution of this Markov chain and the result of Exercise 1.15(c), we obtain that $E(T) = 52!$ in seconds. Hence, in years,
$$E(T) = \frac{52!}{60^2 \cdot 24 \cdot 365}.$$
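The arithmetic is easy to carry out exactly; under the solution's convention of one shuffle per second and a 365-day year, the short sketch below evaluates the expression above.

# Sketch of the arithmetic: 52! seconds expressed in years (365-day year, as in the solution).
import math

seconds = math.factorial(52)            # E(T) = 52! shuffles, one per second
years = seconds / (60**2 * 24 * 365)    # seconds per year = 60^2 * 24 * 365
print(f"{years:.3e}")                   # roughly 2.6e60 years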
1.19
Let $X_n$, $n \in \mathbb{N}$, be the result of the $n$-th toss of the coin. Define $Y_n$ as follows:
$$Y_n = \max\{j \in [0, n] : X_n = \cdots = X_{n-j+1} = \text{HEAD}\}.$$
Thus $Y_n$ is the length of the current run of HEADs, which might be anything between zero (when $X_n = \text{TAIL}$) and $n$ (when no TAIL occurred in the first $n$ trials). We are interested in
$$T = \inf\{n \in \mathbb{N} : Y_n = 4\},$$
and can consider the Markov chain $Y_n$ only until time $T$. In other words, we can consider a Markov chain $Y_n$ on the state space $\{0, 1, 2, 3, 4\}$, with $Y_0 = 0$, an absorbing state at 4, and transition kernel
$$P = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 & 0 \\ 1/2 & 0 & 1/2 & 0 & 0 \\ 1/2 & 0 & 0 & 1/2 & 0 \\ 1/2 & 0 & 0 & 0 & 1/2 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$$
If ui = E(T |Y0 = i), then
$$\begin{aligned}
u_0 &= 1 + \frac{u_0}{2} + \frac{u_1}{2},\\
u_1 &= 1 + \frac{u_0}{2} + \frac{u_2}{2},\\
u_2 &= 1 + \frac{u_0}{2} + \frac{u_3}{2},\\
u_3 &= 1 + \frac{u_0}{2}.
\end{aligned}$$
Substituting the last row into the third one, we obtain the following system of equations:
$$\begin{aligned}
u_0 &= 1 + \frac{u_0}{2} + \frac{u_1}{2},\\
u_1 &= 1 + \frac{u_0}{2} + \frac{u_2}{2},\\
u_2 &= \frac{3}{2} + \frac{3u_0}{4}.
\end{aligned}$$
Substituting the last row into the second one, we obtain that
$$u_0 = 1 + \frac{u_0}{2} + \frac{u_1}{2} \qquad\text{and}\qquad u_1 = \frac{7}{4} + \frac{7u_0}{8}.$$
Therefore,
$$u_0 = 1 + \frac{u_0}{2} + \frac{u_1}{2} = \frac{15}{8} + \frac{15u_0}{16}.$$
It follows from the last identity that u0 = 30.
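For completeness, the same answer can be obtained by solving the four first-step equations numerically; the short sketch below is only a check of the algebra.

# Sketch: solve u_i = 1 + (1/2) u_0 + (1/2) u_{i+1}, i = 0,...,3, with u_4 = 0.
import numpy as np

A = np.zeros((4, 4))
b = np.ones(4)
for i in range(4):
    A[i, i] += 1.0
    A[i, 0] -= 0.5            # a TAIL resets the run:  -(1/2) u_0
    if i + 1 < 4:
        A[i, i + 1] -= 0.5    # a HEAD extends the run: -(1/2) u_{i+1}
u = np.linalg.solve(A, b)
print(u[0])                   # expected number of tosses until HHHH; prints 30 (up to rounding)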