Math 554: Introduction to Stochastic Processes
Instructor: Alex Roitershtein
Iowa State University, Department of Mathematics
Fall 2015

Solutions to Homework #4

2.1 Write the equations $\alpha = P\alpha$ and $\pi = \pi P$ for, respectively, $\alpha(x)$ and $\pi(x)$. A straightforward manipulation shows that these are the same equations as for the corresponding functions $\alpha$ and $\pi$ associated with the random walk reflected at zero and such that
\[
\tilde p := p(x, x+1) = \frac{p(1-q)}{p(1-q) + q(1-p)}
\qquad \text{and} \qquad
\tilde q := p(x, x-1) = \frac{q(1-p)}{p(1-q) + q(1-p)}.
\]
Therefore (either solve the equations from scratch or refer to the formulas for the random walk's $\alpha$ on p. 50 and for the random walk's $\pi$ on p. 52):

1. The chain is positive recurrent if $\tilde p < 1/2$ (i.e., $p < q$), null recurrent if $\tilde p = 1/2$ (i.e., $p = q$), and transient if $\tilde p > 1/2$ (i.e., $p > q$).

2. Furthermore, in the first case
\[
\pi(x) = \frac{1 - 2\tilde p}{1 - \tilde p} \left( \frac{\tilde p}{1 - \tilde p} \right)^{x},
\]
while in the third case
\[
\alpha(x) = \left( \frac{1 - \tilde p}{\tilde p} \right)^{x}.
\]

2.2 Let $T$ be the time of return to zero, that is, $T = \inf\{k \in \mathbb{N} : X_k = 0\}$. Then
\[
E(T \mid X_0 = 0) = \sum_{n=1}^{\infty} p_n (1 + n) = 1 + \sum_{n=1}^{\infty} n p_n.
\]
Hence the chain is positive recurrent if and only if $\mu := \sum_{n=1}^{\infty} n p_n < \infty$. If $\mu < \infty$, then the stationary (limiting) distribution $\pi$ is as follows:
\[
\pi_0 = \frac{1}{E(T \mid X_0 = 0)} = \frac{1}{1 + \mu}
\qquad \text{and} \qquad
\pi_n = \pi_0 p_n + \pi_{n+1},
\]
which implies $\pi_n = \pi_0 - \pi_0 \sum_{k=1}^{n-1} p_k = \pi_0 \sum_{k=n}^{\infty} p_k$.

2.3 It suffices to show that there is a probability distribution $\pi$ such that $\pi = \pi P$. The last equation yields
\[
\pi_0 = \frac{1}{3} \sum_{n=0}^{\infty} \pi_n = \frac{1}{3}
\qquad \text{and} \qquad
\pi_n = \frac{2}{3}\, \pi_{n-1} \quad \text{for all } n \in \mathbb{N}.
\]
Thus
\[
\pi_n = \frac{1}{3} \cdot \left( \frac{2}{3} \right)^{n}.
\]

2.5 (a) First solution: The event $\{Y = -k\}$ for $k \in \mathbb{N}$ is the event that the random walk $(X_n)_{n \ge 0}$ crossed at least once the edge from $j$ to $j-1$ for each $j = 0, -1, \ldots, -k+1$, but never reached the site $-k-1$. In other words,
\[
\{Y = -k\} = \bigcap_{j=-k+1}^{0} \{\exists\, n : X_n = j,\ X_{n+1} = j-1\} \cap \{\nexists\, n : X_n = -k,\ X_{n+1} = -k-1\}.
\]
Thus
\[
P(Y = -k) = \big(1 - \alpha(1)\big)^{k}\, \alpha(1) = \left( \frac{1-p}{p} \right)^{k} \cdot \frac{2p-1}{p},
\]
where the function $\alpha(x)$ is introduced on p.
50 of the textbook and computed for the underlying random walk, with $z = 0$, in the Example on p. 51; in particular, $1 - \alpha(1) = \frac{1-p}{p}$ is the probability that the walk ever moves one site below its current position.

Alternative solution: Let $T_x = \inf\{k \in \mathbb{N} : X_k = x\}$ be the time of the first visit to $x \in \mathbb{Z}$. First, using gambler's ruin probabilities, observe that for any $k, N \in \mathbb{N}$,
\[
P_0(T_{-k} > T_N) = \frac{1 - \left( \frac{1-p}{p} \right)^{k}}{1 - \left( \frac{1-p}{p} \right)^{N+k}}.
\]
This, together with the fact that the random walk is transient to the right (and hence $P(T_x < \infty) = 1$ for all $x \in \mathbb{N}$), implies that
\[
P_0(T_{-k} = +\infty) = E_0\big[\mathbf{1}(T_{-k} = +\infty)\big] = E_0\Big[\lim_{N \to \infty} \mathbf{1}(T_{-k} > T_N)\Big]
= \lim_{N \to \infty} E_0\big[\mathbf{1}(T_{-k} > T_N)\big] = \lim_{N \to \infty} P_0(T_{-k} > T_N) = 1 - \left( \frac{1-p}{p} \right)^{k}.
\]
It follows that for $k \ge 0$,
\[
P(Y > -k) = P_0(T_{-k} = +\infty) = 1 - \left( \frac{1-p}{p} \right)^{k}
\]
and
\[
P(Y = -k) = P(Y > -k-1) - P(Y > -k) = \left( \frac{1-p}{p} \right)^{k} - \left( \frac{1-p}{p} \right)^{k+1} = \frac{2p-1}{p} \cdot \left( \frac{1-p}{p} \right)^{k}.
\]

(b) Let $T_0 = 0$ and $\tau_n = T_n - T_{n-1}$ for $n \in \mathbb{N}$. Then the $\tau_n$ are identically distributed, and hence
\[
e(k) = E\Big[\sum_{j=1}^{k} \tau_j\Big] = \sum_{j=1}^{k} E(\tau_j) = k\, e(1).
\]

(c) Proving that $e(1) < \infty$ is somewhat beyond the scope of the course; for instance, one can estimate $E\big[\min\{T_1, M\}\big]$ for a constant $M \in \mathbb{N}$ using first-step analysis (see below) and then let $M$ tend to infinity. The fact can also be derived, for instance, from the law of large numbers and renewal arguments establishing that
\[
\lim_{n \to \infty} \frac{e(n)}{n} = \Big( \lim_{n \to \infty} \frac{X_n}{n} \Big)^{-1} = \big( E_0(X_1) \big)^{-1} = (p - q)^{-1}.
\]
(Notice that the above formula and the law of large numbers for the i.i.d. increments $T_k - T_{k-1}$ actually imply that $e(1) = (p-q)^{-1} = (2p-1)^{-1}$.)

Now, assuming that we know that $e(1) < \infty$, we can proceed as follows:
\[
T_1 = T_1 \cdot \mathbf{1}_{\{X_1 = 1\}} + T_1 \cdot \mathbf{1}_{\{X_1 = -1\}} = \mathbf{1}_{\{X_1 = 1\}} + T_1 \cdot \mathbf{1}_{\{X_1 = -1\}},
\]
and hence, since by part (b) the expected time to climb the two levels from $-1$ to $1$ is $2e(1)$,
\[
e(1) = E(T_1) = E\big(\mathbf{1}_{\{X_1 = 1\}}\big) + E\big(T_1 \cdot \mathbf{1}_{\{X_1 = -1\}}\big) = p + q\, E(1 + T_1 \mid X_0 = -1) = p + q\big(1 + 2e(1)\big) = 1 + 2q\, e(1),
\]
so that $e(1) = (1 - 2q)^{-1} = (2p-1)^{-1}$.

(d) For $p = 1/2$, the result in (c) reads $e(1) = 1 + e(1)$, which implies $e(1) = +\infty$.
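The answer to 2.3 can be sanity-checked numerically. The sketch below assumes (inferring the chain from the equations in 2.3) that from every state n the chain jumps to 0 with probability 1/3 and to n + 1 with probability 2/3, truncates the state space, and verifies pi = pi P; the truncation level N = 60 is an arbitrary choice.

```python
import numpy as np

N = 60  # truncation level (arbitrary); the neglected tail mass is (2/3)^N
# Assumed chain behind Problem 2.3: from every state n,
# jump to 0 with probability 1/3 and to n+1 with probability 2/3.
P = np.zeros((N, N))
for n in range(N):
    P[n, 0] += 1/3
    P[n, min(n + 1, N - 1)] += 2/3   # lump all states >= N-1 into the last state
# Candidate stationary distribution pi_n = (1/3)(2/3)^n,
# with the geometric tail collected into the last coordinate.
pi = (1/3) * (2/3) ** np.arange(N)
pi[-1] = (2/3) ** (N - 1)
assert abs(pi.sum() - 1.0) < 1e-12           # pi is a probability distribution
assert np.max(np.abs(pi @ P - pi)) < 1e-12   # pi = pi P on the lumped chain
print("max |pi P - pi| =", np.max(np.abs(pi @ P - pi)))
```

Lumping the tail into the last state keeps the check exact: the geometric tail of pi is itself stationary for the lumped transition matrix, so the residual is pure floating-point noise.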
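The distribution of Y found in 2.5 (a) can be spot-checked by simulation. In the sketch below, p = 0.75 is an arbitrary choice, so that P(Y = 0) = (2p - 1)/p = 2/3; the running minimum over a finite horizon is used as a proxy for Y, which is accurate because the walk drifts to plus infinity and its minimum settles early.

```python
import random

def running_min(p, steps, rng):
    """Minimum of a p-biased walk started at 0 over a finite horizon."""
    x = mn = 0
    for _ in range(steps):
        x += 1 if rng.random() < p else -1
        mn = min(mn, x)
    return mn

rng = random.Random(1)
p = 0.75                      # arbitrary choice with p > 1/2
n_walks, steps = 20000, 200   # horizon long enough for the minimum to settle
freq0 = sum(running_min(p, steps, rng) == 0 for _ in range(n_walks)) / n_walks
print(f"P(Y = 0): simulated {freq0:.3f} vs formula {(2*p - 1)/p:.3f}")
```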
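The identity e(1) = (2p - 1)^{-1} from 2.5 (c) can be checked by Monte Carlo in the same spirit. The sketch below samples T_1 directly; p = 0.75 is again an arbitrary choice, for which the formula gives e(1) = 2.

```python
import random

def hitting_time_of_one(p, rng):
    """Sample T_1 = inf{n : X_n = 1} for a p-biased walk started at 0 (p > 1/2)."""
    x = t = 0
    while x < 1:
        x += 1 if rng.random() < p else -1
        t += 1
    return t

rng = random.Random(0)
p = 0.75
n_samples = 20000
est = sum(hitting_time_of_one(p, rng) for _ in range(n_samples)) / n_samples
exact = 1 / (2 * p - 1)   # e(1) = (2p - 1)^{-1} = 2 for p = 0.75
print(f"Monte Carlo estimate {est:.3f} vs formula {exact:.3f}")
```

Since T_1 has finite variance for p > 1/2, the sample mean concentrates tightly around the exact value.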