Solutions to MTL106 Tutorials
I Semester 2021-22
Rishabh Dhiman
Compiled on 2021-11-14 16:31:43+05:30
Contents
1 Basic Probability
2 Random Variables
3 Functions of Random Variables (Incomplete)
4 Random Vectors (Incomplete)
5 Moments (Incomplete)
6 Limiting Probabilities
7 Introduction to Stochastic Processes
8 Discrete Time Markov Chains (Incomplete)
9 Continuous Time Markov Chains
10 Simple Markovian Queues
1 Basic Probability
Problem 1. Items coming off a production line are marked defective (D) or non-defective (N). Items are observed and their condition noted. This is continued until two consecutive defectives are produced or four items have been checked, whichever occurs first. Describe the sample space for this experiment.
Solution. Building the possibilities as a tree (Figure 1.1: construction of all solutions), the sample space is

    Ω = {DD, NDD, DNDD, DNDN, DNND, DNNN, NDND, NDNN, NNDD, NNDN, NNND, NNNN}.
Problem 2. Let Ω = {0, 1, 2, . . . }. Let F be the collection of subsets of Ω that are either finite or
whose complement is finite. Is F a σ-field? Justify your answer.
Solution. No. Consider the set S = {2i | i ∈ Ω} = ∪_{i∈Ω} {2i}. If F were a σ-field, then S would belong to F, being a countable union of elements of F; however, S ∉ F, as neither S nor its complement is finite.
Problem 3. Consider Ω = {(x, y) : 0 ≤ x ≤ 1, 0 ≤ y ≤ 1}. Let F be the largest σ-field over Ω. For any rectangular event R = {(u, v) : a ≤ u < b, c ≤ v < d} ⊆ Ω, define P(R) = area of R = (b − a)(d − c). Let T be the triangular region T = {(x, y) : x ≥ 0, y ≥ 0, x + y < 1}. Show that T is an event, and find P(T), using the axiomatic definition of probability.
Solution. We first show that T ∈ F. Suppose T ∉ F; then we could add T to F and extend it to a strictly larger σ-field, contradicting the maximality of F.

Considering how the probability measure is defined, we should take F to be the Borel σ-field over ℝ², which is what we do from now on.
Define

    L_n := ∪_{i=0}^{n−1} [i/n, (i+1)/n) × [0, 1 − (i+1)/n)  and  R_n := ∪_{i=0}^{n−1} [i/n, (i+1)/n) × [0, 1 − i/n).

Now, since L_n ⊆ T ⊆ R_n and P is a probability measure,

    P(L_n) ≤ P(T) ≤ P(R_n) ⟹ lim_{n→∞} P(L_n) ≤ P(T) ≤ lim_{n→∞} P(R_n).

Finally,

    lim_{n→∞} P(R_n) = lim_{n→∞} Σ_{i=0}^{n−1} (1/n)(1 − i/n) = ∫_0^1 (1 − x) dx = 1/2,

and similarly, lim_{n→∞} P(L_n) = 1/2. Thus, we get

    P(T) = 1/2.
Remark. In the above proof, we assumed that T lies in the Borel σ-field of ℝ². This can be proved by showing that lim_{n→∞} L_n = T. Alternatively, use the fact that any open set can be written as a countable union of open rectangles, and do a few extra operations to show that the given half-open triangle and rectangles are in the algebra generated by open sets, i.e., can be represented using finite union and complement operations.
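As a quick numerical illustration of the squeeze above (this sketch is my addition, not part of the original solution; Python assumed), one can compute the areas of L_n and R_n directly and watch both approach 1/2:

# Riemann-sum check: areas of L_n and R_n squeeze P(T) = 1/2.
def lower_upper(n):
    lower = sum((1 / n) * (1 - (i + 1) / n) for i in range(n))  # area of L_n
    upper = sum((1 / n) * (1 - i / n) for i in range(n))        # area of R_n
    return lower, upper

for n in (10, 100, 10_000):
    lo, hi = lower_upper(n)
    print(f"n={n:6d}  area(L_n)={lo:.6f}  area(R_n)={hi:.6f}")
# Both columns tend to 0.5 as n grows.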
Problem 4. Let A₁, A₂, . . . , A_N be a system of completely independent events, i.e.,

    P(∩_{j=1}^r A_{i_j}) = ∏_{j=1}^r P(A_{i_j}),  r = 2, 3, . . . , N.

Assume that P(A_n) = 1/(n+1), n = 1, 2, . . . , N.

(a) Find the probability that exactly one of the A_i's occurs.

(b) Find the probability that at most two of the A_i's occur.
Solution. Note that

    ∏_{i=1}^N P(A_i^c) = ∏_{i=1}^N i/(i+1) = 1/(N+1).

(a) Exactly one event occurs if one of the events A_i ∩ (∪_{j=1, j≠i}^N A_j)^c, 1 ≤ i ≤ N, occurs. By independence,

    P(A_i ∩ (∪_{j=1, j≠i}^N A_j)^c) = (P(A_i)/P(A_i^c)) ∏_{j=1}^N P(A_j^c) = (1/i) · 1/(N+1).

Therefore, the answer is (1/(N+1)) Σ_{i=1}^N 1/i.

(b) Exactly two events occur if one of the events A_i ∩ A_j ∩ (∪_{k∉{i,j}} A_k)^c, 1 ≤ i < j ≤ N, occurs.

    P(A_i ∩ A_j ∩ (∪_{k∉{i,j}} A_k)^c) = (P(A_i)P(A_j)/(P(A_i^c)P(A_j^c))) ∏_{k=1}^N P(A_k^c) = (1/(ij)) · 1/(N+1).

The probability of exactly two events is therefore (1/(N+1)) Σ_{1≤i<j≤N} 1/(ij).

Thus, the probability of at most two events (none, exactly one, or exactly two) is given by

    (1/(N+1)) (1 + Σ_{i=1}^N 1/i + Σ_{1≤i<j≤N} 1/(ij)).
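A brute-force sanity check of the closed forms above (my addition, not in the original): for small N one can enumerate all 2^N outcomes exactly with Fraction arithmetic.

from itertools import combinations, product
from fractions import Fraction

def exact_counts(N):
    p = [Fraction(1, n + 1) for n in range(1, N + 1)]
    probs = {k: Fraction(0) for k in range(N + 1)}
    # Enumerate which events occur; complete independence gives the product form.
    for outcome in product([0, 1], repeat=N):
        w = Fraction(1)
        for occurred, pi in zip(outcome, p):
            w *= pi if occurred else 1 - pi
        probs[sum(outcome)] += w
    return probs

N = 5
probs = exact_counts(N)
harm = sum(Fraction(1, i) for i in range(1, N + 1))
pairs = sum(Fraction(1, i * j) for i, j in combinations(range(1, N + 1), 2))
print(probs[1] == harm / (N + 1))                              # exactly one
print(probs[0] + probs[1] + probs[2] == (1 + harm + pairs) / (N + 1))  # at most two

Both comparisons print True.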
Problem 5. Let (Ω, F, P) be a probability space and A₁, A₂, . . . , A_n ∈ F with P(∩_{i=1}^{n−1} A_i) > 0. Prove that

    P(∩_{i=1}^n A_i) = P(A₁) P(A₂ | A₁) P(A₃ | A₁ ∩ A₂) · · · P(A_n | ∩_{i=1}^{n−1} A_i).
Proof. We take the empty intersection to be Ω, that is, ∩_{S∈∅} S = Ω. (This is to deal with P(A₁) not being of the form P(A_k | ∩_{i=1}^{k−1} A_i).)

We prove it by induction on n.

• For the base case n = 1, P(A₁) = P(A₁ | Ω).

• For the inductive hypothesis, assume that

    P(∩_{i=1}^{n−1} A_i) = ∏_{k=1}^{n−1} P(A_k | ∩_{i=1}^{k−1} A_i).

• Now note that P(A ∩ B) = P(A | B) P(B). Therefore,

    P(∩_{i=1}^n A_i) = P(A_n ∩ ∩_{i=1}^{n−1} A_i)
                     = P(A_n | ∩_{i=1}^{n−1} A_i) P(∩_{i=1}^{n−1} A_i)
                     = P(A_n | ∩_{i=1}^{n−1} A_i) ∏_{k=1}^{n−1} P(A_k | ∩_{i=1}^{k−1} A_i)
                     = ∏_{k=1}^n P(A_k | ∩_{i=1}^{k−1} A_i).
Problem 6. If A₁, A₂, . . . , A_n are n events, then show that

    Σ_{i=1}^n P(A_i) − Σ_{1≤i<j≤n} P(A_i ∩ A_j) ≤ P(∪_{i=1}^n A_i) ≤ Σ_{i=1}^n P(A_i).
Proof. We first prove the right inequality by inducting on n.

• For the base case n = 1, P(A₁) ≤ P(A₁), so we are done.

• For the inductive hypothesis, assume that P(∪_{i=1}^{n−1} A_i) ≤ Σ_{i=1}^{n−1} P(A_i).

• Note that P(A ∪ B) = P(A) + P(B) − P(A ∩ B) ≤ P(A) + P(B). So,

    P(∪_{i=1}^n A_i) ≤ P(∪_{i=1}^{n−1} A_i) + P(A_n) ≤ Σ_{i=1}^n P(A_i).

Next we prove the inequality on the left, again by induction on n.

• For the base case n = 1, P(A₁) ≤ P(A₁).

• For the inductive hypothesis, assume that

    Σ_{i=1}^{n−1} P(A_i) − Σ_{1≤i<j<n} P(A_i ∩ A_j) ≤ P(∪_{i=1}^{n−1} A_i).

• We now use the result proved earlier,

    Σ_{i=1}^n P(A_i) − Σ_{1≤i<j≤n} P(A_i ∩ A_j) ≤ P(∪_{i=1}^{n−1} A_i) + P(A_n) − Σ_{i=1}^{n−1} P(A_i ∩ A_n)
                                                ≤ P(∪_{i=1}^{n−1} A_i) + P(A_n) − P(∪_{i=1}^{n−1} (A_i ∩ A_n))
                                                = P(∪_{i=1}^{n−1} A_i) + P(A_n) − P(A_n ∩ ∪_{i=1}^{n−1} A_i)
                                                = P(∪_{i=1}^n A_i).
Problem 7. Let Ω = {a, b, c, d}, F = {∅, {a}, {b, c}, {d}, {a, b, c}, {b, c, d}, {a, d}, Ω} and let P be a function from F to [0, 1] with P({a}) = 2/7, P({b, c}) = 3/5 and P({d}) = β. Find the value of β such that P is a probability on (Ω, F).
Solution.
1 = P (Ω) = P ({a} ∪ {b, c} ∪ {d})
= P ({a}) + P ({b, c}) + P ({d})
= 2/7 + 3/5 + β
=⇒ β = 4/35.
Problem 8. Prove that, for any two events A and B,
P (A ∩ B) ≥ P (A) + P (B) − 1.
Proof.
P (A ∩ B) = P (A) + P (B) − P (A ∪ B) ≥ P (A) + P (B) − 1.
Problem 9. Consider a gambler who on each independent bet either wins 1 with probability 1/3 or loses 1 with probability 2/3. The gambler will quit either when he or she is winning a total of 10 or after 50 plays. Find the probability that the gambler plays exactly 14 times.
Solution. The last turn must have been a win, otherwise the gambler would have stopped at an earlier
step. Suppose they lost l and won w games in the first 13 tries. Then,
w − l = 9, w + l = 13 =⇒ w = 11, l = 2.
Pick the two positions where the gambler lost. The first of these must occur in the first 10 tries and the second must occur in the first 12, otherwise the score reaches 10 before the end. Thus, the number of valid position pairs is

    C(10, 2) + 2·C(10, 1) = 65.

Therefore, the probability is given by

    65 × (1/3)^12 × (2/3)² = 260/3^14.
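An exact enumeration check of 260/3^14 (my addition; enumerating the 2^14 possible win/loss sequences):

from fractions import Fraction
from itertools import product

# Keep sequences where the score first hits +10 exactly at play 14.
p_win = Fraction(1, 3)
total = Fraction(0)
for seq in product((1, -1), repeat=14):
    score, stop = 0, None
    for i, step in enumerate(seq, start=1):
        score += step
        if score == 10:
            stop = i
            break
    if stop == 14:
        wins = sum(s == 1 for s in seq)
        total += p_win**wins * (1 - p_win)**(14 - wins)
print(total, Fraction(260, 3**14))   # both equal 260/4782969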
Problem 10. Let Ω = {4, 3, 2, 1},
(a) Find three different σ-algebras {Fn } for n = 1, 2, 3 such that F1 ⊂ F2 ⊂ F3 .
(b) Further, create a set function P : F3 → R such that (Ω, F3 , P ) is a probability space.
Solution.
(a) Take F1 = {∅, Ω}, F2 = {∅, {1}, {2, 3, 4}, Ω}, F3 = 2Ω .
(b) Define P (S) = #S/#Ω.
Problem 11. Suppose that the number of passengers for a limousine pickup is thought to be either
1, 2, 3, or 4, each with equal probability, and the number of pieces of luggage of each passenger is
thought to be 1 or 2, with equal probability, independently for different passengers. What is the
probability that there will be five or more pieces of luggage?
Solution. Suppose that there are p passengers and r of them bring 2 pieces of luggage. The probability of this happening is

    (1/4) · C(p, r) · (1/2)^p.

The number of pieces of luggage is p + r, thus the probability of bringing 5 or more pieces is

    Σ_{p=1}^4 Σ_{r: p+r≥5} (1/4) C(p, r) (1/2)^p
        = Σ_{r≥2} (1/4) C(3, r) (1/2)³ + Σ_{r≥1} (1/4) C(4, r) (1/2)⁴
        = (2³/2)/(2³ · 4) + (2⁴ − 1)/(4 · 2⁴)
        = 23/64.
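A direct enumeration check of 23/64 (my addition; 1–4 passengers equally likely, each independently bringing 1 or 2 bags):

from fractions import Fraction
from itertools import product

total = Fraction(0)
for p in range(1, 5):
    for bags in product((1, 2), repeat=p):
        if sum(bags) >= 5:
            total += Fraction(1, 4) * Fraction(1, 2**p)
print(total)   # 23/64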
Problem 12. Let Ω = ℝ and F be the Borel σ-field on ℝ. For each interval I ⊆ ℝ with end points a and b (a ≤ b), let

    P(I) = ∫_a^b (1/π) · 1/(1 + x²) dx.

Does P define a probability on the measurable space (Ω, F)? Justify your answer.
Solution. Yes, P defines a probability measure on (Ω, F).

Note that

    ∫_a^b 1/(1 + x²) dx = tan⁻¹(b) − tan⁻¹(a).

Extend the function to F → ℝ, that is,

    P(S) = ∫_S (1/π) · 1/(1 + x²) dx

for any S ∈ F.

We show that P is a probability measure:

1. For any S ∈ F, P(S) = ∫_S (1/π) · 1/(1 + x²) dx ≥ 0.

2. P(Ω) = ∫_{−∞}^{+∞} (1/π) · 1/(1 + x²) dx = (1/π) tan⁻¹(x) |_{−∞}^{+∞} = 1.
3. We now wish to show that P is σ-additive. Let {T_n | n ∈ ℕ} ⊆ F be a countable collection of pairwise disjoint sets. We wish to show that

       P(∪_{n∈ℕ} T_n) = Σ_{n∈ℕ} P(T_n).

   Let S_n = ∪_{k=1}^n T_k and S = lim_{n→∞} S_n. Consider the functions

       f_n(x) := (1/π) · 1/(1 + x²) · 1_{S_n}(x),

   where 1_X refers to the indicator function of the set X.

   Note that f_n(x) ≥ 0 and f_{n+1}(x) ≥ f_n(x). Thus, from the Monotone Convergence Theorem,

       lim_{n→∞} ∫_ℝ f_n(x) dx = ∫_ℝ lim_{n→∞} f_n(x) dx.

   The LHS gives us

       lim_{n→∞} ∫_ℝ f_n(x) dx = lim_{n→∞} ∫_ℝ (1/π) · 1/(1 + x²) · 1_{S_n}(x) dx
                               = lim_{n→∞} ∫_{S_n} (1/π) · 1/(1 + x²) dx
                               = lim_{n→∞} Σ_{k=1}^n ∫_{T_k} (1/π) · 1/(1 + x²) dx
                               = Σ_{k∈ℕ} P(T_k).

   For the RHS, note that lim_{n→∞} 1_{S_n} = 1_S, so

       ∫_ℝ lim_{n→∞} f_n(x) dx = ∫_ℝ (1/π) · 1/(1 + x²) · 1_S(x) dx
                               = ∫_S (1/π) · 1/(1 + x²) dx
                               = P(∪_{n∈ℕ} T_n).
Thus, P defines a probability measure over (Ω, F).
Problem 13. Let (Ω, F, P ) be a probability space. Let {An } be a nondecreasing sequence of elements
in F. Prove that
    P(lim_{n→∞} A_n) = lim_{n→∞} P(A_n).
Proof. Let B₁ = A₁ and B_n = A_n \ A_{n−1} for n > 1.

Note that B_i ∩ B_j = ∅ for i ≠ j, and

    ∪_{k=1}^n B_k = A_n ⟹ lim_{n→∞} ∪_{k=1}^n B_k = lim_{n→∞} A_n.

Therefore,

    P(lim_{n→∞} A_n) = P(lim_{n→∞} ∪_{k=1}^n B_k)
                     = lim_{n→∞} Σ_{k=1}^n P(B_k)
                     = P(A₁) + lim_{n→∞} Σ_{k=2}^n P(A_k \ A_{k−1})
                     = P(A₁) + lim_{n→∞} Σ_{k=2}^n (P(A_k) − P(A_{k−1}))
                     = lim_{n→∞} P(A_n).
Problem 14. Let Ω = {s₁, s₂, s₃, s₄} and P{s₁} = 1/6, P{s₂} = 1/5, P{s₃} = 1/3, P{s₄} = 3/10. Define

    A_n = {s₁, s₃} if n is odd;  {s₂, s₄} if n is even.

Find P(lim inf_{n→∞} A_n) and P(lim sup_{n→∞} A_n).

Solution. lim inf_{n→∞} A_n = ∅ and lim sup_{n→∞} A_n = Ω, so

    P(lim inf_{n→∞} A_n) = 0 and P(lim sup_{n→∞} A_n) = 1.
Problem 15. Let ω be a complex cube root of unity with ω ≠ 1. A fair die is thrown three times. If x, y and z are the numbers obtained on the die, find the probability that ω^x + ω^y + ω^z = 0.

Solution.

    ω^x + ω^y + ω^z = 0 ⟺ {x, y, z} ≡ {0, 1, 2} (mod 3),

i.e., the three residues mod 3 are distinct. Since each residue is uniform on {0, 1, 2}, the probability is 3!/3³ = 2/9.
Problem 16. An urn contains balls numbered from 1 to N . A ball is randomly drawn
(a) What is the probability that the number on the ball is divisible by 3 or 4?
(b) What happens to the probability in the previous question when N → ∞?
Solution.

(a) Let A₃ and A₄ be the events that the ball drawn is divisible by 3 and 4 respectively.

    P(A₃ ∪ A₄) = P(A₃) + P(A₄) − P(A₃ ∩ A₄) = (⌊N/3⌋ + ⌊N/4⌋ − ⌊N/12⌋)/N.

(b) In the limit as N tends to infinity,

    lim_{N→∞} (⌊N/3⌋ + ⌊N/4⌋ − ⌊N/12⌋)/N = 1/3 + 1/4 − 1/12 = 1/2.
Problem 17. Consider the flights starting from Delhi to Bombay. In these flights, 90% leave on time
and arrive on time, 6% leave on time and arrive late, 1% leave late and arrive on time and 3% leave
late and arrive late. What is the probability that, given a flight leaves late, it will arrive on time?
Solution. The conditional probability that a flight arrives on time given it left late is

    P(arrives on time | left late) = P(arrives on time ∩ left late)/P(left late) = 1%/(1% + 3%) = 1/4.
Problem 18. Let A and B be two independent events. Prove or disprove that the pairs A and B^c, and A^c and B^c, are independent events.
Solution. They are independent,
P (Ac )P (B) = (1 − P (A))P (B)
= P (B) − P (A ∩ B)
= P (B \ (A ∩ B))
= P (((Ac ∩ B) ∪ (A ∩ B)) \ (A ∩ B))
= P (Ac ∩ B).
So (A, B) being independent implies (Ac , B) and (B, A) are independent pairs.
Thus, we get that independence of (A, B) =⇒ (B, A) =⇒ (B c , A) =⇒ (A, B c ) =⇒ (Ac , B c )
are all independent pairs.
Problem 19. Pick a number x at random out of the integers 1 through 30. Let A be the event that x
is even, B that x is divisible by 3 and C that x is divisible by 5. Are the events A, B and C pairwise
independent? Further, are the events A, B and C mutually independent?
Solution.

    P(A) = 1/2, P(B) = 1/3, P(C) = 1/5,
    P(A ∩ B) = 1/6, P(B ∩ C) = 1/15, P(C ∩ A) = 1/10,
    P(A ∩ B ∩ C) = 1/30.

Since P(A ∩ B) = P(A)P(B), P(B ∩ C) = P(B)P(C), P(C ∩ A) = P(C)P(A), and P(A ∩ B ∩ C) = P(A)P(B)P(C), the events are both pairwise and mutually independent.
Problem 20. Let C_i, i = 1, 2, . . . , k, be a partition of the sample space Ω. For any events A and B, find

    Σ_{i=1}^k P(C_i | B) P(A | B ∩ C_i).

Solution.

    Σ_{i=1}^k P(C_i | B) P(A | B ∩ C_i) = Σ_{i=1}^k (P(C_i ∩ B)/P(B)) · (P(A ∩ B ∩ C_i)/P(B ∩ C_i))
                                        = (Σ_{i=1}^k P(A ∩ B ∩ C_i))/P(B)
                                        = P(A ∩ B ∩ (∪_{i=1}^k C_i))/P(B)
                                        = P(A ∩ B)/P(B)
                                        = P(A | B).
Problem 21. The first generation of particles is the collection of off-springs of a given particle. The
next generation is formed by the off-springs of these members. If the probability that a particle has k
offsprings (splits into k parts) is pk , where p0 = 0.4, p1 = 0.3, p2 = 0.3, find the probability that there
is no particle in second generation. Assume particles act independently and identically irrespective of
the generation.
Solution. The probability is given by

    Σ_{k≥0} p_k p₀^k = 0.4 + 0.3 × 0.4 + 0.3 × 0.4² = 0.568.
Problem 22. A and B throw a pair of unbiased dice alternatively with A starting the game. The
game ends when either A or B wins. A wins if he throws 6 before B throws 7. B wins if he throws
7 before A throws 6. What is the probability that A wins the game? Note that “A throws 6” means
the sum of values of the two dice is 6. Similarly “B throws 7”.
Solution. The probability that A throws 6 is 5/36, and the probability that B throws 7 is 6/36. Let p be the probability that A wins. Then

    p = 5/36 + (1 − 5/36)(1 − 6/36) p ⟺ p = (5/36)/(1 − (1 − 5/36)(1 − 6/36)) = 30/61.
Problem 23. In a meeting at the UNO 40 members from under-developed countries and 4 from
developed ones sit in a row. What is the probability no two adjacent members are representatives of
developed countries?
Solution. Seat the 40 members from under-developed countries first; this creates 41 gaps (including the two ends). Then place the 4 developed-country members in distinct gaps. The probability is

    40! × 4! × C(41, 4) / 44!.
Problem 24. A random walker starts at 0 on the x-axis and at each time unit moves 1 step to the
right or 1 step to the left with probability 0.5. Find the probability that, after 4 moves, the walker is
more than 2 steps from the starting position.
Solution. Let l and r be the number of steps towards left and right respectively. We have
l + r = 4 and |r − l| > 2 =⇒ l = 0 or r = 0.
Thus, it happens with probability 2 × 2−4 = 1/8.
Problem 25. The coefficients a, b and c of the quadratic equation ax2 + bx + c = 0 are determined
by rolling a fair die three times in a row. What is the probability that both the roots of the equation
are real? What is the probability that both roots of the equation are complex?
Solution. Let D = b² − 4ac. Then

    P(D ≥ 0) = Σ_{k=1}^6 P(D ≥ 0 | b = k) P(b = k)
             = (1/6) Σ_{k=1}^6 P(ac ≤ k²/4 | b = k)
             = (1/6) · (1/36) · (0 + 1 + 3 + 8 + 14 + 17)
             = 43/216.

Thus, the probability that the roots are real is 43/216 and that they are complex is 173/216.
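A quick enumeration check of 43/216 (my addition; all 6³ ordered rolls (a, b, c)):

from fractions import Fraction
from itertools import product

# Count rolls with non-negative discriminant b^2 - 4ac.
real = sum(b * b - 4 * a * c >= 0 for a, b, c in product(range(1, 7), repeat=3))
print(Fraction(real, 6**3))   # 43/216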
Problem 26. An electronic assembly consists of two subsystems, say A and B. From previous testing procedures, the following probabilities are assumed to be known:

    P(A fails) = 0.20, P(A and B both fail) = 0.15, P(B fails alone) = 0.15.

Evaluate the following probabilities: (a) P(A fails | B has failed), (b) P(A fails alone | A or B fails).
Solution. Let X, Y be the event that A, B fail respectively. We have the following relations
P (X) = 0.20, P (X ∩ Y ) = 0.15, and P (Y \ X) = 0.15.
(a) The event that A fails given B has failed is P(X | Y):

    P(X | Y) = P(X ∩ Y)/P(Y) = P(X ∩ Y)/(P(Y \ X) + P(X ∩ Y)) = 0.15/0.30 = 1/2.
(b) The event that only A fails, given at least one of A or B fails, is P((X \ Y) | (X ∪ Y)):

    P((X \ Y) | (X ∪ Y)) = P((X \ Y) ∩ (X ∪ Y))/P(X ∪ Y)
                         = P(X \ Y)/P(X ∪ Y)
                         = (P(X) − P(X ∩ Y))/(P(X) + P(Y \ X))
                         = (0.20 − 0.15)/(0.20 + 0.15) = 1/7.
Problem 27. An aircraft has four engines, two on each wing. The aircraft can land using at least two engines. Assume that the reliability of each engine is R = 0.93 to complete a mission, and that engine failures are independent.
(a) Obtain the mission reliability of the aircraft.
(b) If at least one functioning engine must be on each wing, what is the mission reliability?
Solution. Each engine has probability 1 − R of failing.

1. The probability that exactly k engines don't fail is C(4, k)(1 − R)^{4−k}R^k. Thus, the reliability of the mission is

    Σ_{k≥2} C(4, k)(1 − R)^{4−k}R^k = 6(1 − R)²R² + 4(1 − R)R³ + R⁴ ≈ 0.9987.

2. The probability that exactly k engines don't fail on a single wing is C(2, k)(1 − R)^{2−k}R^k. Therefore, the reliability of the mission is

    (Σ_{k≥1} C(2, k)(1 − R)^{2−k}R^k)² = (2(1 − R)R + R²)² ≈ 0.990224.
Problem 28. Four lamps are located in a circle. Each lamp can fail with probability q, independently of all the others. The system is operational if no two adjacent lamps fail. Obtain an expression for the system reliability.

Solution. Note that at most 2 lamps can fail: a single failed lamp can be anywhere, while two failed lamps must be diagonally opposite each other. Therefore the system reliability is

    (1 − q)⁴ + 4(1 − q)³q + 2(1 − q)²q².
Problem 29. An urn contains b black balls and r red balls. One of the ball is drawn at random, but
when it is put back in the urn c additional balls of the same colour are put in with it. Now suppose
that we draw another ball. Find the probability that the first ball drawn was black given that the
second ball drawn was red?
Solution. Let B_i and R_i be the events that a black and a red ball, respectively, is drawn on the i-th draw.

    P(R₂) = P(R₂ | B₁)P(B₁) + P(R₂ | R₁)P(R₁) = (r/(r+b+c)) · (b/(r+b)) + ((r+c)/(r+b+c)) · (r/(r+b)).

    P(B₁ | R₂) = P(R₂ | B₁)P(B₁)/P(R₂)
               = [(r/(r+b+c)) · (b/(r+b))] / [(r/(r+b+c)) · (b/(r+b)) + ((r+c)/(r+b+c)) · (r/(r+b))]
               = b/(b + r + c).
Problem 30. The base and altitude of a right triangle are obtained by picking points randomly from
[0, a] and [0, b], respectively. Find the probability that the area of the triangle so formed will be less
than ab/4?
Solution. Given independent random variables x uniform on [0, a] and y uniform on [0, b], the area of the triangle is xy/2, so we wish to find the probability that xy < ab/2.

    P(xy < ab/2) = (1/b) ∫_0^b P(x < ab/(2y) | y = t) dt
                 = (1/b) ∫_0^b (1/a) min(a, ab/(2t)) dt
                 = (1/b) ∫_0^b min(1, b/(2t)) dt
                 = (1/b) (∫_0^{b/2} dt + ∫_{b/2}^b (b/(2t)) dt)
                 = (1/2)(1 + log 2).
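A Monte Carlo check of (1 + log 2)/2 ≈ 0.8466 (my addition; the specific values of a and b are arbitrary, since the answer does not depend on them):

import random, math

rng = random.Random(1)
a, b, trials = 3.0, 7.0, 1_000_000
hits = sum(rng.uniform(0, a) * rng.uniform(0, b) / 2 < a * b / 4 for _ in range(trials))
print(hits / trials, (1 + math.log(2)) / 2)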
Problem 31. A batch of N transistors is dispatched from a factory. To control the quality of the
batch the following checking procedure is used; a transistor is chosen at random from the batch,
tested and placed on one side. This procedure is repeated until either a pre-set number n (n < N ) of
transistors have passed the test (in which case the batch is accepted) or one transistor fails (in this
case the batch is rejected). Suppose that the batch actually contains exactly D faulty transistors.
Find the probability that the batch will be accepted.
Solution. Extend the process so that you keep checking until all N transistors have been checked. Now, the batch is accepted iff the first n transistors all pass the test. This happens iff all the faulty transistors are checked after the first n turns. Thus, the probability of the batch being accepted is

    C(N − n, D) / C(N, D).
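A simulation check of the acceptance probability (my addition; the values of N, n, D below are arbitrary):

from fractions import Fraction
from math import comb
import random

N, n, D = 20, 8, 3
exact = Fraction(comb(N - n, D), comb(N, D))

rng = random.Random(0)
trials = 200_000
accepted = 0
for _ in range(trials):
    batch = [1] * D + [0] * (N - D)       # 1 marks a faulty transistor
    rng.shuffle(batch)
    passed = 0
    for item in batch:
        if item:                           # a faulty transistor fails the test: reject
            break
        passed += 1
        if passed == n:                    # n good ones in a row: accept
            accepted += 1
            break
print(float(exact), accepted / trials)     # the two numbers should be close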
2 Random Variables
Problem 1. Consider a probability space (Ω, F, P ) with Ω = {0, 1, 2}, F = {∅, {0}, {1, 2}, Ω},
P ({0}) = 0.5 = P ({1, 2}). Give an example of a real-valued function on Ω that is not a random
variable. Justify your answer.
Solution. Consider the real-valued function X : Ω → R given by X(i) = i.
The set {ω | X(ω) ≤ 2} = {0, 1} ∉ F, so X is not a random variable.
Problem 2. Let Ω = {1, 2, 3}. Let F be a σ-algebra on Ω, so that X(w) = w + 2 is a random
variable. Find F.
Solution. Consider the sets {X ≤ 3} = {1}, {X ≤ 4} = {1, 2}, {X ≤ 5} = {1, 2, 3}, all of these must
lie in F, so F = 2Ω .
Problem 3. Do the following functions define distribution functions?

(a) F(x) = 0, x < 0;  x, 0 ≤ x ≤ 1/2;  1, x > 2.

(b) F(x) = tan⁻¹(x)/π for x ∈ ℝ.

(c) F(x) = 0, x < 0;  1 − e^{−x}, x ≥ 0.
Solution.

(a) No, lim_{x→(1/2)⁺} F(x) = 1/2 ≠ 1 = F(3/2).

(b) No, F(−1) < 0.
(c) Yes.
• F is non-decreasing,
• it is right-continuous at every point,
• limx→−∞ F (x) = 0 and,
• limx→+∞ F (x) = 1.
Problem 4. Consider the random variable X that represents the number of people who are hospitalized or die in a single head-on collision on the road in front of a particular spot in a year. The distribution of such random variables is typically obtained from historical data. Without getting into the statistical aspects involved, let us suppose that the cumulative distribution function of X is as follows:
    x    :  0      1      2      3      4      5      6      7      8      9      10
    F(x) :  0.250  0.546  0.898  0.932  0.955  0.972  0.981  0.989  0.995  0.998  1.000
Find (a) P (X = 10), (b) P (X ≤ 5 | X > 2).
Solution.
(a) P (X = 10) = P (X ≤ 10) − P (X < 10) = 1.000 − 0.998 = 0.002.
(b)

    P(X ≤ 5 | X > 2) = P(2 < X ≤ 5)/P(X > 2) = (P(X ≤ 5) − P(X ≤ 2))/(1 − P(X ≤ 2)) = (0.972 − 0.898)/(1 − 0.898) ≈ 0.7255.


Problem 5. Let X be an rv having the cdf

    F(x) = 0, x < −1;  (1 + x)/9, −1 ≤ x < 0;  (2 + x²)/9, 0 ≤ x < 2;  1, x ≥ 2.

Find P(X ∈ E) where E is (−1, 0] ∪ (1, 2).

Solution. Note that P(X ∈ E) = P(X ∈ (−1, 0]) + P(X ∈ (1, 2)).

    P(X ∈ (−1, 0]) = P(X ≤ 0) − P(X ≤ −1) = F(0) − F(−1) = 2/9 − 0 = 2/9.

    P(X ∈ (1, 2)) = P(X < 2) − P(X ≤ 1) = lim_{x→2⁻} F(x) − F(1) = 6/9 − 3/9 = 3/9.

So, P(X ∈ E) = 5/9.
Problem 6. Let X be a random variable with cumulative distribution function given by

    F_X(x) = 0, x < 1;  1/25, 1 ≤ x < 2;  x²/10, 2 ≤ x < 3;  1, x ≥ 3.

Determine a discrete distribution function F_d, a continuous distribution function F_c, and constants α, β such that F_X(x) = αF_d(x) + βF_c(x).

Solution. For a function f, we use the short-hand f(x⁻) to refer to lim_{t→x⁻} f(t).

For any x, we have that

    F(x) − F(x⁻) = αF_d(x) + βF_c(x) − αF_d(x⁻) − βF_c(x⁻) = α(F_d(x) − F_d(x⁻)),

where we use F_c(x) = F_c(x⁻) by the continuity of F_c.

Consider the jump discontinuities of F:

    F(1) − F(1⁻) = 1/25,
    F(2) − F(2⁻) = 9/25,
    F(3) − F(3⁻) = 1/10.

Using the above results we get that

    αF_d(1) = F(1) − F(1⁻) = 1/25,
    αF_d(2) = αF_d(1) + F(2) − F(2⁻) = 2/5,
    αF_d(3) = αF_d(2) + F(3) − F(3⁻) = 1/2.

Since lim_{x→+∞} F_d(x) = 1, we see that

    lim_{x→+∞} αF_d(x) = 1/2 ⟹ α = 1/2.

To compute β, note that lim_{x→+∞} F(x) = 1, which means

    1 = lim_{x→∞} F(x) = α lim_{x→∞} F_d(x) + β lim_{x→∞} F_c(x) ⟹ 1 = α + β ⟹ β = 1/2.

Finally, to compute F_c we simply solve the equation F = αF_d + βF_c. This gives us α = 1/2, β = 1/2,

    F_d(x) = 0, x < 1;  2/25, 1 ≤ x < 2;  4/5, 2 ≤ x < 3;  1, x ≥ 3,

and

    F_c(x) = 0, x < 2;  (x² − 4)/5, 2 ≤ x < 3;  1, x ≥ 3.
Problem 7. Let X be an rv such that P(X = 2) = 1/4 and its CDF is given by

    F_X(x) = 0, x < −3;  α(x + 3), −3 ≤ x < 2;  3/4, 2 ≤ x < 4;  βx², 4 ≤ x < 8/√3;  1, x ≥ 8/√3.
(a) Find α, β if 2 is the only jump discontinuity of F .
(b) Compute P (X < 3 | X ≥ 2).
Solution.

(a) To find α, we use the value of P(X = 2):

    P(X = 2) = F_X(2) − lim_{x→2⁻} F_X(x) ⟺ 1/4 = 3/4 − 5α ⟺ α = 1/10.

To find β, we use continuity at 4:

    lim_{x→4⁻} F_X(x) = F_X(4) ⟹ 3/4 = β × 4² ⟹ β = 3/64.

(b)

    P(X < 3 | X ≥ 2) = P(2 ≤ X < 3)/P(X ≥ 2)
                     = (P(X < 3) − P(X < 2))/(1 − P(X < 2))
                     = (lim_{x→3⁻} F_X(x) − lim_{x→2⁻} F_X(x))/(1 − lim_{x→2⁻} F_X(x))
                     = (3/4 − 1/2)/(1 − 1/2) = 1/2.
Problem 8. An airline knows that 5 percent of the people making reservation on a certain flight
will not show up. Consequently, their policy is to sell 52 tickets for a flight that can hold only 50
passengers. Assume that passengers show up at the airport independently of each other. What is the probability that there will be a seat available for every passenger who shows up?
Solution. Let X be the random variable denoting the number of people who show up; we wish to find P(X ≤ 50). Note that X has a binomial distribution with n = 52 and success probability 0.95.

    P(X ≤ 50) = 1 − P(X > 50) = 1 − P(X = 51) − P(X = 52) = 1 − C(52, 51) × 0.05 × 0.95^51 − 0.95^52 ≈ 0.74.
Problem 9. The probability of hitting an aircraft is 0.001 for each shot. Assume that the number
of hits when n shots are fired is a random variable having a binomial distribution. How many shots
should be fired so that the probability of hitting with two or more shots is above 0.95?
Solution. Let X be the random variable denoting the number of shots that hit,

    P(X ≥ 2) = 1 − P(X < 2) = 1 − C(n, 0) 0.999^n − C(n, 1) 0.999^{n−1} × 0.001.

We wish to pick the smallest n such that

    P(X ≥ 2) ≥ 0.95 ⟺ 0.999^{n−1}(0.001n + 0.999) ≤ 0.05.

We binary search on this n:

• For n = 10000, we get 0.00049,
• For n = 5000, we get 0.0403,
• For n = 2500, we get 0.287144,
• For n = 3750, we get 0.111588,
• For n = 4375, we get 0.0675,
• For n = 4687, we get 0.0523,
• For n = 4843, we get 0.0459892,
• For n = 4765, we get 0.049058,
• For n = 4726, we get 0.0506649,
• For n = 4745, we get 0.0498759,
• For n = 4735, we get 0.0502897,
• For n = 4740, we get 0.0500824,
• For n = 4742, we get 0.049997,
• For n = 4741, we get 0.050041.

Therefore, the answer is n = 4742.
Remark. I don't think you have to do these calculations in an exam; I was just having fun with scripting.
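A sketch of such a script (my reconstruction, not the author's original code) that bisects for the smallest n:

# Find the smallest n with 0.999**(n-1) * (0.001*n + 0.999) <= 0.05,
# i.e. P(X >= 2) >= 0.95 for X ~ Binomial(n, 0.001); the LHS is decreasing in n.
def f(n):
    return 0.999 ** (n - 1) * (0.001 * n + 0.999)

lo, hi = 1, 10_000            # f(lo) > 0.05 and f(hi) <= 0.05
while hi - lo > 1:
    mid = (lo + hi) // 2
    if f(mid) <= 0.05:
        hi = mid
    else:
        lo = mid
print(hi, f(hi), f(hi - 1))   # 4742, ~0.049997, ~0.050041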
Problem 10. A reputed publisher claims that in the handbooks published by them misprints occur
at the rate of 0.0024 per page. What is the probability that in a randomly chosen handbook of 300 pages, the third misprint will occur after examining 100 pages?
Solution. Let X and Y be the random variables representing the number of misprints in the first 100 pages and in the remaining 200 pages, respectively.

The probability that the third misprint occurs after the first hundred pages is given by

    P(X = 0)P(Y ≥ 3) + P(X = 1)P(Y ≥ 2) + P(X = 2)P(Y ≥ 1).

Let n = 100, m = 200 and p = 0.0024. The probabilities P(X = k) and P(Y ≥ k) are given by

    P(X = k) = C(n, k) p^k (1 − p)^{n−k},
    P(Y ≥ k) = 1 − P(Y < k) = 1 − Σ_{i=0}^{k−1} C(m, i) p^i (1 − p)^{m−i}.

Therefore, the required probability equals

    (1 − p)^n (1 − (1 − p)^m − mp(1 − p)^{m−1} − C(m, 2) p²(1 − p)^{m−2})
    + np(1 − p)^{n−1} (1 − (1 − p)^m − mp(1 − p)^{m−1})
    + C(n, 2) p²(1 − p)^{n−2} (1 − (1 − p)^m).
Problem 11. Let 0 < p < 1 and N be a positive integer. Let X ∼ B(N, p/N). Find lim_{N→∞} (1 − p/N)^N, if it exists.
Remark. I have no idea why this problem exists.
Problem 12. In a torture test, a light switch is turned on and off until it fails. If the probability that
the switch will fail any time it is turned ‘on’ or ‘off’ is 0.001, what is a probability that the switch
will fail after it has been turned on or off 1200 times?
Solution. The switch failing eventually is a sure event, so the probability that it fails after it has been turned on or off 1200 times equals the probability that it does not fail in the first 1200 switchings, which is given by (1 − 0.001)^1200.
Problem 13. Let X be a Poisson random variable with parameter λ. Show that P(X = i) increases monotonically and then decreases monotonically as i increases, reaching its maximum when i is the largest integer not exceeding λ.

Proof. P(X = i) = λ^i e^{−λ}/i! by definition of the Poisson distribution. For any x ∈ ℕ,

    λ^{x−1}/(x − 1)! ≥ λ^x/x! ⟺ x ≥ λ.

Therefore, P(X = i) increases while i < λ and then decreases.
Problem 14. For what values of α, p does the following function represent a probability mass function
pX (x) = αpx , x = 0, 1, 2, . . . . Prove that the random variable having such a probability mass function
satisfies the following memoryless property P (X > a + s | X > a) = P (X ≥ s).
Solution. For p_X(x) to be a probability mass function we require it to be non-negative, so α, p ≥ 0. We also require that

    Σ_{x=0}^∞ αp^x = 1.

If p ≥ 1, then the LHS diverges, so p < 1. Now,

    Σ_{x=0}^∞ αp^x = α/(1 − p) ⟹ α = 1 − p.

So, p_X(x) = (1 − p)p^x is a probability mass function for any 0 ≤ p < 1.

Note that

    P(X > s) = Σ_{x=s+1}^∞ p_X(x) = p^{s+1}.

So,

    P(X > a + s | X > a) = P(X > a + s and X > a)/P(X > a) = p^{a+s+1}/p^{a+1} = p^s = P(X ≥ s).
Remark. I assumed a and s to be non-negative integers otherwise the result is just not true.
Problem 15. Consider a random experiment of choosing a point in the annular disc of inner radius
r1 and outer radius r2 (r1 < r2 ). Let X be the distance of the chosen point from the center of the
annular disc. Find the pdf of X.
Solution. The probability that a point is chosen in a measurable region S is given by (area of S)/(total area). So the probability that X ≤ x is given by

    F_X(x) = P(X ≤ x) = (πx² − πr₁²)/(πr₂² − πr₁²) = (x² − r₁²)/(r₂² − r₁²)   for r₁ ≤ x ≤ r₂.

The pdf f_X(x) is given by

    f_X(x) = dF_X(x)/dx = 2x/(r₂² − r₁²)   for r₁ ≤ x ≤ r₂,

and f_X(x) = 0 otherwise.
Problem 16. Let X be an absolutely continuous random variable with density function f . Prove
that the random variables X and −X have the same distribution function if and only if f (x) = f (−x)
for all x ∈ R.
Solution. The distribution function of X is given by

    F_X(x) = P(X ≤ x) = ∫_{−∞}^x f(t) dt.

The distribution function of −X is given by

    F_{−X}(x) = P(−X ≤ x) = P(X ≥ −x) = ∫_{−x}^∞ f(t) dt.

Substituting u = −t, we get

    F_{−X}(x) = ∫_{−∞}^x f(−u) du.

So,

    F_X(x) = F_{−X}(x) ∀x ∈ ℝ ⟺ ∫_{−∞}^x f(t) dt = ∫_{−∞}^x f(−t) dt ∀x ∈ ℝ ⟺ f(x) = f(−x) ∀x ∈ ℝ.
Problem 17. The life time (in hours) of a certain piece of equipment is a continuous random variable X having pdf

    f_X(x) = xe^{−x/100}/10⁴, x > 0;  0, otherwise.

If four pieces of this equipment are selected independently of each other from a lot, what is the probability that at least two of them have life length more than 20 hours?
Solution. The probability that a given piece of equipment has life length more than 20 hours is

    p := ∫_{20}^∞ f_X(x) dx = (6/5)e^{−1/5}.

The probability that 2 or more of them have life length more than 20 hours is

    1 − (1 − p)⁴ − 4p(1 − p)³ ≈ 0.9999788.
Problem 18. Suppose that f and g are density functions and that 0 < λ < 1 is a constant. (a) Is λf + (1 − λ)g a probability density function? (b) Is fg (i.e., fg(x) = f(x)g(x)) a probability density function? Explain.
Solution.

(a) For any x,

    λf(x) + (1 − λ)g(x) ≥ λ · 0 + (1 − λ) · 0 = 0.

Along with this,

    ∫_ℝ (λf + (1 − λ)g) = λ ∫_ℝ f + (1 − λ) ∫_ℝ g = λ + (1 − λ) = 1.

So, λf + (1 − λ)g is a probability density function.

(b) fg is not necessarily a pdf: pick f to be the uniform density on (0, 1) and g the uniform density on (−1, 0); then fg is identically zero.
Problem 19. A system has a very large number (can be assumed to be infinite) of components. The probability that one of these components will fail in the interval (a, b) is e^{−a/T} − e^{−b/T}, independently of the others, where T > 0 is a constant. Find the mean and variance of the number of failures in the interval (0, T/4).
Remark. I have no idea what the question means.
Problem 20. A student arrives at the bus stop at 6:00 AM sharp, knowing that the bus will arrive at a moment uniformly distributed between 6:00 AM and 6:20 AM.

(a) What is the probability that the student must wait more than five minutes?

(b) If at 6:10 AM the bus has not arrived yet, what is the probability that the student has to wait at least five more minutes?
Solution. Scale the time to the interval [0, 20] (minutes after 6:00 AM). The pdf of the arrival time is f(x) = 1/20 for x ∈ [0, 20].

(a) This is given by ∫_5^{20} f(x) dx = 3/4.

(b) The probability that the bus has not arrived by 6:10 is ∫_{10}^{20} f(x) dx = 1/2. The probability that you have to wait at least 5 minutes more is ∫_{15}^{20} f(x) dx = 1/4. Thus, the conditional probability is (1/4)/(1/2) = 1/2.
Problem 21. The time to failure of certain units is exponentially distributed with parameter λ. At
time t = 0, n identical units are put in operation. The units operate, so that failure of any unit is not
affected by the behavior of the other units. For any t > 0, let N_t be the random variable whose value is the number of units still in operation at time t. Find the distribution of this random variable.
Solution. The probability that a device doesn't fail by time t is given by 1 − ∫_0^t λe^{−λx} dx = e^{−λt}. So the distribution of N_t is given by

    P(N_t = k) = C(n, k)(e^{−λt})^k(1 − e^{−λt})^{n−k} for 0 ≤ k ≤ n,  and 0 otherwise.
Problem 22. Consider the marks of MTL 106 examination. Suppose that marks are distributed
normally with mean 76 and standard deviation 15. 15% of the best students obtained A as grade and
10% of the worst students fail in the course. (a) Find the minimum mark to obtain A as a grade.
(b) Find the minimum mark to pass the course.
Solution. Let µ = 76 and σ = 15; the pdf is given by

    f(x) = exp(−(1/2)((x − µ)/σ)²) / (σ√(2π)).

(a) Let a be the minimum mark to get an A grade. We require ∫_a^∞ f(x) dx = 0.15.

(b) Let p be the minimum mark to pass. We require ∫_p^∞ f(x) dx = 0.90.

I don't really know if there is a way to compute these values without a computer (binary searching on a and p, and numerically computing these integrals), as the integrals ∫ f(x) dx cannot be written as an elementary function; look up the error function.
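The cutoffs can be obtained numerically; here is a small sketch (my addition) using the inverse CDF from Python's standard library:

from statistics import NormalDist

# Grade cutoffs for marks ~ N(76, 15^2): top 15% get an A, bottom 10% fail.
marks = NormalDist(mu=76, sigma=15)
a_cutoff = marks.inv_cdf(1 - 0.15)   # minimum mark for an A  (~91.5)
pass_cutoff = marks.inv_cdf(0.10)    # minimum mark to pass   (~56.8)
print(round(a_cutoff, 2), round(pass_cutoff, 2))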
Problem 23. Suppose that the life length of two electronic devices say D1 and D2 have normal
distributions N (40, 36) and N (45, 9) respectively. (a) If a device is to be used for 45 hours, which
device would be preferred? (b) If it is to be used for 42 hours which one should be preferred?
Handwavy argument. We want it to have a longer life length than the required time so it makes more
sense to choose D2 in both the cases.


Problem 24. Let X be a rv with cdf

    F(x) = 0, x < 0;  p + (1 − p)(1 − e^{−λx}), 0 ≤ x < 4;  1, 4 ≤ x < ∞,

with 0 < p < 1 and λ > 0. Find the mean of X.
Solution. We use the following result.

Claim. For an arbitrary cdf F over ℝ, the expectation is given by

    E[X] = ∫_0^∞ (1 − F(x)) dx − ∫_{−∞}^0 F(x) dx.

Proof. For any condition X, let 1[X] be 1 if the condition X is true, 0 if it is false.

Note that for x ≥ 0,

    x = ∫_0^x dt = ∫_0^{+∞} 1[t < x] dt.

For any x ≤ 0,

    x = −∫_x^0 dt = −∫_{−∞}^0 1[t ≥ x] dt.

For the continuous case, say f is the pdf. For x ≥ 0,

    ∫_0^{+∞} x f(x) dx = ∫_0^{+∞} ∫_0^{+∞} 1[t < x] dt f(x) dx
                       = ∫_0^{+∞} ∫_0^{+∞} 1[t < x] f(x) dx dt
                       = ∫_0^{+∞} ∫_t^{+∞} f(x) dx dt
                       = ∫_0^{+∞} (1 − F(t)) dt.

Similarly,

    ∫_{−∞}^0 x f(x) dx = −∫_{−∞}^0 ∫_{−∞}^0 1[t ≥ x] dt f(x) dx
                       = −∫_{−∞}^0 ∫_{−∞}^0 1[t ≥ x] f(x) dx dt
                       = −∫_{−∞}^0 ∫_{−∞}^t f(x) dx dt
                       = −∫_{−∞}^0 F(t) dt.

We could swap the order of the integrals as all the terms were non-negative, so Fubini's theorem applies. Therefore,

    E[X] = ∫_{−∞}^{+∞} x f(x) dx = ∫_{−∞}^0 x f(x) dx + ∫_0^∞ x f(x) dx = ∫_0^∞ (1 − F(x)) dx − ∫_{−∞}^0 F(x) dx.

For the discrete case where X is over the non-negative integers, let p(x) be the pmf:

    Σ_{x=0}^∞ x p(x) = Σ_{x=0}^∞ Σ_{t=0}^∞ 1[t < x] p(x) = Σ_{t=0}^∞ Σ_{x=0}^∞ 1[t < x] p(x) = Σ_{t=0}^∞ Σ_{x=t+1}^∞ p(x) = Σ_{t=0}^∞ (1 − F(t)).

You can prove a similar result for the negative part.

After proving the continuous case and the discrete case over non-negative integers, I could try to formalize it for mixed random variables, but I don't want to; might see it in the future (it would also be easier if I knew measure theory, which might let me bypass taking these cases).

So, the expectation here is given by

    ∫_0^4 (1 − F(x)) dx = ∫_0^4 (1 − p)e^{−λx} dx = (1 − p)(1 − e^{−4λ})/λ.
Problem 25. Let X be a random variable having a Poisson distribution with parameter λ. Prove that, for n = 1, 2, . . . , E[X^n] = λE[(X + 1)^{n−1}].

Solution.

    λE[(X + 1)^{n−1}] = λ Σ_{k=0}^∞ P(X = k)(k + 1)^{n−1}
                      = λ Σ_{k=0}^∞ (λ^k e^{−λ}/k!)(k + 1)^{n−1}
                      = Σ_{k=0}^∞ (k + 1)^n λ^{k+1}e^{−λ}/(k + 1)!
                      = Σ_{k=1}^∞ k^n λ^k e^{−λ}/k!
                      = Σ_{k=0}^∞ k^n λ^k e^{−λ}/k!
                      = Σ_{k=0}^∞ P(X = k) k^n
                      = E[X^n].
Problem 26. Prove that for any random variable X, E[X 2 ] ≥ [E[X]]2 . Discuss the nature of X when
one has equality.
Proof.

    E[(X − E[X])²] = E[X² − 2XE[X] + E[X]²] = E[X²] − 2E[X]E[X] + E[X]² = E[X²] − E[X]² ≥ 0.

We achieve equality precisely when X is (almost surely) a constant.
Problem 27. Let X be a random variable with P(a ≤ X ≤ b) = 1, where −∞ < a < b < ∞. Show that Var(X) ≤ (b − a)²/4.
Proof. Consider the function f(t) given by

    f(t) = E[(X − t)²] = E[X²] − 2E[X]t + t².

Since f(t) is a quadratic in t, it attains its minimum at t = E[X]. Now,

    Var(X) = f(E[X]) ≤ f((a + b)/2)
           = (1/4) E[((X − a) − (b − X))²]
           ≤ (1/4) E[((X − a) + (b − X))²]
           = (b − a)²/4,

where the second inequality uses X − a ≥ 0 and b − X ≥ 0 almost surely.
Problem 28. Consider a random variable X with E(X) = 1 and E(X 2 ) = 1.
(a) Find E[(X − E(X))4 ] if it exists.
(b) Find P (0.4 < X < 1.7) and P (X = 0).
Solution. From problem 26, we know that X = E[X] always. So,
(a) E[(X − E(X))4 ] = 0.
(b) P (0.4 < X < 1.7) = 1 and P (X = 0) = 0.
Problem 29. Let X be a continuous type random variable with pdf

    f(x) = α + βx², 0 ≤ x ≤ 1;  0, otherwise.

If E(X) = 3/5, find the values of α and β.

Solution. Since f is a pdf,

    ∫_ℝ f(x) dx = 1 ⟺ α + β/3 = 1.

Since E[X] = 3/5,

    ∫_ℝ x f(x) dx = 3/5 ⟺ α/2 + β/4 = 3/5.

Together, these imply α = 3/5 and β = 6/5.
Problem 30. Let X ∼ P (λ) such that P (X = 0) = e−1 . Find Var(X).
Solution. It’s known that for a poisson distribution P (λ), P (X = 0) = e−λ and Var(X) = λ. So, here
we get λ = 1 and Var(X) = 1.
3 Functions of Random Variables (Incomplete)
Problem 1. X has a uniform distribution over the set of integers
{−n, −(n − 1), . . . , −1, 0, 1, . . . , (n − 1), n}.
Find the distribution of (a) |X| (b) X 2 (c) 1/(1 + |X|)
Solution.

(a) For any k ∈ {1, 2, . . . , n},

    P(|X| = k) = P(X = k or X = −k) = P(X = k) + P(X = −k) = 2/(2n + 1).

For k = 0, P(|X| = 0) = P(X = 0) = 1/(2n + 1).

(b)

    P(X² = x) = P(|X| = √x) = 2/(2n + 1), x ∈ {1², 2², . . . , n²};  1/(2n + 1), x = 0;  0, otherwise.

(c)

    P(1/(1 + |X|) = x) = P(|X| = 1/x − 1) = 2/(2n + 1), x ∈ {1/(k + 1) | 1 ≤ k ≤ n};  1/(2n + 1), x = 1;  0, otherwise.
Problem 2. Let P[X ≤ 0.49] = 0.75, where X is a continuous type RV with some CDF over (0, 1). If Y = 1 − X, find k such that P[Y ≤ k] = 0.25.

Solution.

    P[X ≤ 0.49] = P[1 − X ≥ 0.51] = P[Y ≥ 0.51] = 1 − P[Y < 0.51] ⟹ P[Y < 0.51] = 0.25.

So k = 0.51 if the CDF of Y is left-continuous at 0.51 (i.e., P[Y = 0.51] = 0); otherwise, no such k exists.
Problem 3. Let X be uniformly distributed random variable on the interval (0, 1). Define Y =
a + (b − a)X, a < b. Find the distribution of Y .
Solution.
Problem 4.
Problem 5.
Problem 6.
Problem 7.
Problem 8.
Problem 9.
Problem 10.
Problem 11.
Problem 12.
Problem 13.
Problem 14.
Problem 15.
Problem 16.
Problem 17.
Problem 18. Let U be a uniformly distributed random variable on the interval [0, 1]. Find the probability that the quadratic equation x² + 4Ux + 1 = 0 has two distinct real roots x₁ and x₂.

Solution. We have two distinct real roots if

    (4U)² − 4 > 0 ⟺ |U| > 1/2.

Since U ∈ [0, 1], this happens if U ∈ (1/2, 1], which has probability 1/2 of occurring.
Problem 19.
Problem 20. Let X be a continuous type random variable with strictly increasing CDF FX .
(a) What is the distribution of X?
(b) What is the distribution of the random variable Y = − ln(FX (X))?


0,
Problem 21. Let X be a continuous type random variable having the pdf f (x) = 12 ,

 1
x≤0
0<x≤1 .
, 1<x<∞
kx2
(a) Find k.
(b) Find the pdf of Y =
Solution.
1
X.
(a) Integrating over R, we see that
Z
Z 1
Z ∞
1
1
1 1
1=
f (x) dx =
dx +
= +
=⇒ k = 2.
2
kx
2 k
R
0 2
1
(b) Since, g(x) = 1/x is a decreasing function over x > 0 and the support of X is (0, ∞), we can
use the Theorem discussed in Lecture 11,

y≤0

0,
dg −1 (y)
1
1
−1
1
fY (y) = −fX (g (y)) ·
= fX
· 2 = 2y2 , 0 < 1/y ≤ 1 ⇐⇒ y ≥ 1 .

dy
y
y
1
1 < 1/y < ∞ ⇐⇒ y < 1
2
Problem 22.
Problem 23. Let X be exponentially distributed random variable with parameter λ > 0.
(a) Find P (|X − 1| > 1 | X > 1)
(b) Explain whether there exists a random variable Y = g(X) such that the cumulative distribution
function of Y has uncountably infinite discontinuity points. Justify your answer.
Solution. (a) P(X > 1) = ∫_1^∞ λe^{−λx} dx = e^{−λ}; along with this,

    P(|X − 1| > 1 and X > 1) = P(X > 2) = ∫_2^∞ λe^{−λx} dx = e^{−2λ},

which together imply that P(|X − 1| > 1 | X > 1) = e^{−λ}.
(b) I don’t think so, I think you can in general prove that the cdf of any random variable can only
have countable discontinuities.
Problem 24. Let X be a continuous random variable having the probability density function (pdf)
fX (x) = kx2 e−x+1 , x > 1
(a) Find k.
(b) Determine E[X].
(c) Find the pdf of Y = X 2 .
Solution. I assume that f_X(x) = 0 for x ≤ 1.

(a) Since f_X is a pdf,

    1 = ∫_ℝ f_X(x) dx = ∫_1^∞ kx²e^{−x+1} dx = ∫_0^∞ k(t + 1)²e^{−t} dt = ∫_0^∞ k(t² + 2t + 1)e^{−t} dt = k(Γ(3) + 2Γ(2) + Γ(1)) ⟹ k = 1/5.

(b) Similarly,

    E[X] = (1/5) ∫_0^∞ (t + 1)³e^{−t} dt = (Γ(4) + 3Γ(3) + 3Γ(2) + Γ(1))/5 = 16/5.

(c) Since g(x) = x² is increasing over x > 0 and the support of X is (1, ∞), we can use the theorem discussed in Lecture 9:

    f_Y(y) = f_X(√y) · d√y/dy = √y e^{1−√y}/10

for y > 1 and 0 otherwise.
Problem 25. Let X ∼ N (0, σ 2 ) be a random variable. Find the moment generating function for the
random variable X. Deduce the moments of order n about zero for the random variable X from the
above results.
Solution. The moment generating function of X is given by

    M_X(t) = E[e^{tX}] = ∫_{−∞}^{+∞} e^{tx} · exp(−x²/(2σ²))/√(2πσ²) dx
           = ∫_{−∞}^{+∞} exp(−(x − tσ²)²/(2σ²) + (tσ)²/2)/√(2πσ²) dx
           = exp(t²σ²/2) ∫_{−∞}^{+∞} exp(−(x − tσ²)²/(2σ²))/√(2πσ²) dx
           = exp(t²σ²/2),

where the remaining integral evaluates to 1 as it is the integral of the pdf of the N(tσ², σ²) distribution.

Since M_X(t) = Σ_{n≥0} E[X^n]t^n/n! and

    M_X(t) = Σ_{n≥0} (t²σ²/2)^n/n! = Σ_{n≥0} (σ^{2n}(2n)!/(2^n n!)) · t^{2n}/(2n)!,

we get that

    E[X^n] = 0 if n is odd;  σ^n n!/(2^{n/2}(n/2)!) = σ^n (n − 1)!! if n is even.
n
Problem 26. The moment generating function (MGF) of a random variable X is given by

    M_X(t) = 1/6 + (1/4)e^t + (1/3)e^{2t} + (1/4)e^{3t}.

If µ is the mean and σ² is the variance of X, what is the value of P(µ − σ < X < µ)?

Solution. Note that the random variable Y whose pmf is given by

    P(Y = k) = 1/6, k = 0;  1/4, k = 1;  1/3, k = 2;  1/4, k = 3;  0, otherwise

has the same mgf as X. Since M_X(t) has a non-zero radius of convergence, by the uniqueness theorem discussed in Lecture 11, X and Y have the same distribution.

Now,

    µ = E[X] = 0/6 + 1/4 + 2/3 + 3/4 = 5/3,
    E[X²] = 0/6 + 1/4 + 4/3 + 9/4 = 23/6,
    σ = √(E[X²] − E[X]²) = √(19/18).

Since µ − σ ≈ 0.63 and µ ≈ 1.66,

    P(µ − σ < X < µ) = P(X = 1) = 1/4.
Problem 27. Let X be a random variable such that P(X > x) = q^⌊x⌋ for x ≥ 0 and P(X > x) = 1 for x < 0, where 0 < q < 1 is a constant and ⌊x⌋ is the integral part of x. Determine the pmf/pdf of X, as applicable to this case.

Solution. P(X ≤ x) has discontinuities at the positive integers and is constant between them, so X has a pmf. For a positive integer k,

    P(X = k) = P(k − 1 < X ≤ k) = P(X > k − 1) − P(X > k) = q^{k−1}(1 − q).
Problem 28. Let ln(X) be a normally distributed random variable with mean 0 and variance 2. Find the pdf of X.

Solution. Let Y = ln(X), that is, X = exp(Y). We know the pdf of Y:

    f_Y(y) = exp(−y²/4)/√(4π).

Since exp is an increasing function, from the theorem discussed in Lecture 9, the pdf of X is given by

    f_X(x) = f_Y(ln x) · d(ln x)/dx = exp(−(ln x)²/4)/(√(4π) x)

for x > 0 and 0 otherwise.
Problem 29. Prove that if X is a continuous type rv such that E[X^r] exists, then E[X^s] exists for all 0 < s < r.

Proof. Say X has pdf f. Note that E[X^s] exists if and only if E[|X|^s] is finite.

    E[|X|^s] = ∫_{−∞}^{+∞} |x|^s f(x) dx
             = ∫_{−∞}^{−1} |x|^s f(x) dx + ∫_{−1}^{+1} |x|^s f(x) dx + ∫_{+1}^{+∞} |x|^s f(x) dx
             ≤ ∫_{−∞}^{−1} |x|^r f(x) dx + ∫_{−1}^{+1} |x|^s f(x) dx + ∫_{+1}^{+∞} |x|^r f(x) dx
             = E[|X|^r] − ∫_{−1}^{+1} |x|^r f(x) dx + ∫_{−1}^{+1} |x|^s f(x) dx.

The last two integrals are finite (each is at most ∫_{−1}^{+1} f(x) dx ≤ 1, since |x|^r, |x|^s ≤ 1 on [−1, 1]). So, if E[X^r] exists then E[|X|^s] is finite and hence E[X^s] exists as well.
Problem 30.
Problem 31. Let Φ be the characteristic function of a random variable X. Prove that
1 − |Φ(2u)|2 ≤ 4(1 − |Φ(u)|2 ).
Proof. First we write Φ in terms of sin and cos,
Problem 32.
Problem 33. Let X be a random variable having a binomial distribution with parameters n and p. Prove that

    E[1/(X + 1)] = (1 − (1 − p)^{n+1}) / ((n + 1)p).

Proof. Let f(x) = (1 + x)^n. Then

    f(x) = Σ_{k=0}^n C(n, k) x^k ⟹ g(x) := (1/x) ∫_0^x f(t) dt = Σ_{k=0}^n C(n, k) x^k/(k + 1) = ((1 + x)^{n+1} − 1)/((n + 1)x).

Hence

    E[1/(X + 1)] = Σ_{k=0}^n C(n, k) p^k (1 − p)^{n−k} · 1/(k + 1)
                 = (1 − p)^n g(p/(1 − p))
                 = (1 − (1 − p)^{n+1})/((n + 1)p).
Problem 34. Let X be a continuous random variable with CDF F_X(x). Define Y = F_X(X).

(a) Find the distribution of Y.
(b) Find the variance of Y, if it exists.
(c) Find the characteristic function of Y.

Solution.

(a) Since F_X is a CDF, Y ∈ [0, 1]. For x ∈ [0, 1],

    F_Y(x) = P(Y ≤ x) = P(F_X(X) ≤ x) = P(X ≤ sup{y | F_X(y) ≤ x}) = F_X(sup{y | F_X(y) ≤ x}) = x.

The last equality holds as F_X is continuous and hence surjective onto [0, 1]. So, the pdf of Y is

    f_Y(y) = 1, y ∈ [0, 1];  0, otherwise.

(b)

    E[Y] = ∫_{−∞}^{+∞} y f_Y(y) dy = ∫_0^1 y dy = 1/2,
    E[Y²] = ∫_{−∞}^{+∞} y² f_Y(y) dy = ∫_0^1 y² dy = 1/3,
    Var(Y) = E[Y²] − E[Y]² = 1/12.

(c)

    ϕ_Y(t) = E[e^{itY}] = ∫_0^1 e^{ity} dy = (e^{it} − 1)/(it).
Problem 35. Suppose that X is a continuous random variable having the following pdf:

    f(x) = e^{x}/2, x ≤ 0;  e^{−x}/2, x > 0.

Let Y = |X|. Obtain E(Y) and Var(Y).

Solution. Note that f(x) = exp(−|x|)/2.

    E[Y] = E[|X|] = ∫_{−∞}^{+∞} |x| e^{−|x|}/2 dx = ∫_0^∞ x e^{−x} dx = Γ(2) = 1,
    E[Y²] = E[X²] = ∫_{−∞}^{+∞} x² e^{−|x|}/2 dx = ∫_0^∞ x² e^{−x} dx = Γ(3) = 2,
    Var(Y) = E[Y²] − E[Y]² = 1.
Problem 36. The mgf of an r.v. X is given by M_X(t) = exp(µ(e^t − 1)). (a) What is the distribution of X? (b) Find P(µ − 2σ < X < µ + 2σ), given µ = 4.

Solution. (a) We prove in the next problem that a Poisson random variable has this mgf. Since the given M_X(t) has a non-zero radius of convergence, we are done by the uniqueness theorem.

A way to guess that the distribution is Poisson would be to look at G_X(z) = E[z^X]. Since

    G_X(e^t) = E[e^{tX}] = M_X(t),

if M_X(ln z) can be represented by a power series in z, we have found the probability distribution:

    G_X(z) = M_X(ln z) = e^{µ(z−1)} = Σ_{n≥0} (e^{−µ}µ^n/n!) z^n.

The coefficients of this power series tell us that X is Poisson.

(b) Since it is a Poisson distribution, σ² = µ = 4.

    P(µ − 2σ < X < µ + 2σ) = P(0 < X < 8) = Σ_{k=1}^7 e^{−4}4^k/k!.
Problem 37. Let X be a discrete type rv with moment generating function M_X(t) = a + be^{2t} + ce^{4t}, E(X) = 3, Var(X) = 2. Find E(2^X).

Solution. Since M_X^{(n)}(0) = E[X^n],

    M_X(0) = E[1] ⟹ a + b + c = 1,
    M_X′(0) = E[X] ⟹ 2b + 4c = 3,
    M_X″(0) = E[X²] ⟹ 4b + 16c = 11,

which together imply a = 1/8, b = 1/4, c = 5/8. Then

    E[2^X] = E[e^{X ln 2}] = M_X(ln 2) = a + b · 2² + c · 2⁴ = 1/8 + 1 + 10 = 11.125.
Problem 38. Let X be a random variable with Poisson distribution with parameter λ. Show that the characteristic function of X is ϕ_X(t) = exp[λ(e^{it} − 1)]. Hence, compute E(X²), Var(X) and E(X³).

Solution. The characteristic function is given by

    ϕ_X(t) = E[e^{itX}] = Σ_{x=0}^∞ (e^{−λ}λ^x/x!) e^{itx} = e^{−λ} Σ_{x=0}^∞ (e^{it}λ)^x/x! = e^{−λ} e^{λe^{it}} = e^{λ(e^{it}−1)}.

The moment-generating function, M_X(t) = Σ_{n≥0} E[X^n]t^n/n!, is given by

    M_X(t) = ϕ_X(−it) = exp[λ(e^t − 1)],
    M_X′(t) = λ exp[λ(e^t − 1) + t],
    M_X″(t) = λ(λe^t + 1) exp[λ(e^t − 1) + t],
    M_X‴(t) = λ(λe^t + 1)² exp[λ(e^t − 1) + t] + λ²e^t exp[λ(e^t − 1) + t].

Hence

    E[X] = M_X′(0) = λ,
    E[X²] = M_X″(0) = λ(λ + 1),
    E[X³] = M_X‴(0) = λ(λ + 1)² + λ²,
    Var(X) = E[X²] − E[X]² = λ.
4 Random Vectors (Incomplete)
Sadly I lost the files where I had written up these solutions :')
5 Moments (Incomplete)
Problem 1. Let X1 and X2 be independent exponential distributed random variables with parameters
5 and 4 respectively. Define X(1) = min{X1 , X2 } and X(2) = max{X1 , X2 }.
(a) Find Var(X(1) ). (b) Find the distribution of X(1) . (c) Find E[X(2) ].
Solution.

(b) We find the distribution of X_(1) first:

    P(X_(1) > x) = P(min{X₁, X₂} > x) = P(X₁ > x, X₂ > x) = P(X₁ > x)P(X₂ > x)
    ⟹ f_{X_(1)}(x) = −dP(X_(1) > x)/dx
                    = −P(X₁ > x) · dP(X₂ > x)/dx − P(X₂ > x) · dP(X₁ > x)/dx
                    = P(X₁ > x)f_{X₂}(x) + P(X₂ > x)f_{X₁}(x)
                    = 5e^{−5x} · e^{−4x} + 4e^{−4x} · e^{−5x}
                    = 9e^{−9x}.

(a) Since X_(1) is exponentially distributed with parameter 9, Var(X_(1)) = 1/9² = 1/81.

(c) Since X₁ + X₂ = X_(1) + X_(2),

    E[X_(2)] = E[X₁] + E[X₂] − E[X_(1)] = 1/5 + 1/4 − 1/9 = 61/180.
Problem 2. Let X and Y be two non-negative continuous random variables having respective CDFs F_X and F_Y. Suppose that for some constants a and b > 0, F_X(x) = F_Y((x − a)/b). Determine E[X] in terms of E[Y].

Solution. Note that X and bY + a have the same distribution, as

    F_X(x) = F_Y((x − a)/b) = P(Y ≤ (x − a)/b) = P(bY + a ≤ x) = F_{bY+a}(x).

Therefore,

    E[X] = E[bY + a] = bE[Y] + a.
Problem 3. Let X be a random variable having an exponential distribution with parameter 1/2. Let Z be a random variable having a normal distribution with mean 0 and variance 1. Assume that X and Z are independent random variables. (a) Find the pdf of T = Z/√(X/2). (b) Compute E[T] and Var(T).
Solution. Note that

    f_{X,Z}(x, z) = f_X(x)f_Z(z) = (1/2)e^{−x/2} × (1/√(2π)) e^{−z²/2}

for x ≥ 0 and all z, and 0 otherwise.

(a) We find the joint pdf of (X, T). The map (X, Z) → (X, T) is invertible, as Z = T√(X/2). The determinant of the Jacobian of the map (X, T) → (X, Z) is

    det J = (∂X/∂X)(∂Z/∂T) − (∂X/∂T)(∂Z/∂X) = ∂Z/∂T = √(X/2).

Now,

    f_{X,T}(x, t) = f_{X,Z}(x, t√(x/2)) |det J|
                  = (1/2) exp(−x/2) × (1/√(2π)) exp(−t²x/4) × √(x/2)

for x > 0, and 0 otherwise.

The pdf of T is given by

    f_T(t) = ∫_ℝ f_{X,T}(x, t) dx
           = ∫_0^∞ (1/2) exp(−x/2) × (1/√(2π)) exp(−t²x/4) × √(x/2) dx
           = (1/(4√π)) ∫_0^∞ √x exp(−(t²/2 + 1) x/2) dx
           = (1/(4√π)) × (2√2/(t²/2 + 1)^{3/2}) ∫_0^∞ √u exp(−u) du     (setting u = (t²/2 + 1)x/2)
           = (1/(4√π)) × (2√2/(t²/2 + 1)^{3/2}) × Γ(3/2)
           = 1/(t² + 2)^{3/2}   for all t ∈ ℝ.

(b) The expectation of T is

    E[T] = ∫_{−∞}^{+∞} t/(t² + 2)^{3/2} dt = 0.

For the second moment, let t = √2 tan θ:

    E[T²] = ∫_{−∞}^{+∞} t²/(t² + 2)^{3/2} dt
          = ∫_{−π/2}^{+π/2} (2 tan²θ · √2 sec²θ)/(2^{3/2} sec³θ) dθ
          = ∫_{−π/2}^{+π/2} tan²θ cos θ dθ
          = ∫_{−π/2}^{+π/2} sec θ dθ − ∫_{−π/2}^{+π/2} cos θ dθ.

The integral ∫_{−π/2}^{+π/2} cos θ dθ converges, whereas ∫_{−π/2}^{+π/2} sec θ dθ does not, so E[T²] and hence Var(T) is infinite.
Problem 4. Let X and Y be two identically distributed random variables such that Var(X) and Var(Y) exist. Prove or disprove that Var((X + Y)/2) ≤ Var(X).

Proof. Note that

    Var((X + Y)/2) = (E[(X + Y)²] − E[X + Y]²)/4
                   = (E[X²] + E[Y²] + 2E[XY] − E[X]² − E[Y]² − 2E[X]E[Y])/4
                   = (Var(X) + Var(Y) + 2Cov(X, Y))/4.

Since X and Y are identically distributed, Var(X) = Var(Y), so

    Var(X) = (Var(X) + Var(Y) + 2√(Var(X)Var(Y)))/4
           ≥ (Var(X) + Var(Y) + 2Cov(X, Y))/4          (Tutorial 4, Problem 26)
           = Var((X + Y)/2).
Problem 5. Let X and Y be i.i.d. random variables, each having a N(0, 1) distribution. Calculate E[(X + Y)⁴ | X − Y].

Solution. We claim that X + Y and X − Y are independent and that X + Y, X − Y ∼ N(0, 2).

Let A = X + Y, B = X − Y. The determinant of the Jacobian of the map (X, Y) → (A, B) is

    det J = (∂A/∂X)(∂B/∂Y) − (∂A/∂Y)(∂B/∂X) = (1)(−1) − (1)(1) = −2.

Thus, the joint pdf of (A, B) is

    f_{AB}(a, b) = f_{XY}(x, y)/|det J|
                 = (1/2) · (1/√(2π)) exp(−(a + b)²/8) · (1/√(2π)) exp(−(a − b)²/8)
                 = (1/(√2·√(2π))) exp(−a²/4) · (1/(√2·√(2π))) exp(−b²/4).

By integrating with respect to a and b we see that A, B ∼ N(0, 2) and f_{AB}(a, b) = f_A(a)f_B(b), that is, A and B are independent. Therefore,

    E[A⁴ | B] = E[A⁴] = (1/(2√π)) ∫_{−∞}^{+∞} a⁴ exp(−a²/4) da
              = (1/√π) ∫_0^∞ a⁴ exp(−a²/4) da
              = (16/√π) ∫_0^∞ u^{3/2} exp(−u) du     (setting u = a²/4)
              = (16/√π) · Γ(5/2)
              = 12.
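A Monte Carlo check of the value 12 (my addition); by the independence argument above, E[(X + Y)⁴ | X − Y] = E[(X + Y)⁴]:

import random

rng = random.Random(0)
n = 1_000_000
s = 0.0
for _ in range(n):
    x, y = rng.gauss(0, 1), rng.gauss(0, 1)
    s += (x + y) ** 4
print(s / n)   # ~12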
Problem 6. Let X₁, . . . , X₅ be a random sample from N(0, σ²). Find a constant c such that

    Y = c(X₁ − X₂)/√(X₃² + X₄² + X₅²)

has a t-distribution. Also, find E[Y].

Solution. A t-distributed random variable has the form Z/√(V/ν) where Z ∼ N(0, 1), V ∼ χ²(ν), and V and Z are independent.

For independent normal random variables X ∼ N(µ_x, σ_x²) and Y ∼ N(µ_y, σ_y²),

    X + Y ∼ N(µ_x + µ_y, σ_x² + σ_y²) and aX ∼ N(aµ_x, a²σ_x²).

Therefore, X₁ − X₂ ∼ N(0, 2σ²), so

    (X₁ − X₂)/√(2σ²) ∼ N(0, 1),

and since X_i/σ ∼ N(0, 1),

    (X₃² + X₄² + X₅²)/σ² ∼ χ²(3).

Therefore,

    ((X₁ − X₂)/(√2 σ)) / √((X₃² + X₄² + X₅²)/(3σ²))

has a t-distribution; that is, c = √(3/2).

Since V and Z are independent, the mean is

    E[Y] = E[Z/√(V/ν)] = E[Z]E[√(ν/V)] = 0 · E[√(ν/V)] = 0.

Note that E[√(ν/V)] is finite, as the variable is non-negative and V is not always zero.
Problem 7. The metro train arrives at the station near your home every quarter hour starting at 5:00 AM. You walk into the station every morning at a time uniformly distributed between 7:10 and 7:30 AM, i.e., U([7:10, 7:30]).

(a) Find the distribution of the time you have to wait for the first train to arrive.
(b) Also, find the mean waiting time.

Solution.

(a) If you arrive during [7:10, 7:15] you wait till 7:15, else you wait till 7:30. This tells us the waiting time T (in minutes) has the distribution

    f_T(t) = 1/10, 0 ≤ t < 5;  1/20, 5 ≤ t < 15;  0, otherwise.

(b) The mean is given by

    E[T] = ∫_ℝ t f_T(t) dt = ∫_0^5 t/10 dt + ∫_5^{15} t/20 dt = 5/4 + 5 = 6.25.
Problem 8. Let X and Y be iid random variables, each uniformly distributed on (2, 3). Find E[X/Y].

Solution. The mean of 1/Y is given by

    E[1/Y] = ∫_2^3 dy/y = ln(3/2).

Since X and Y are independent,

    E[X/Y] = E[X]E[1/Y] = (5/2) ln(3/2).
Problem 9. Let X and Y be two random variables such that ρ(X, Y) = 1/2, Var(X) = 1 and Var(Y) = 4. Compute Var(X − 3Y).

Solution.

    ρ(X, Y) = 1/2 ⟹ Cov(X, Y) = (1/2)√(Var(X)Var(Y)) = 1.

Therefore,

    Var(X − 3Y) = E[(X − 3Y)²] − E[X − 3Y]²
                = E[X²] − E[X]² + 9E[Y²] − 9E[Y]² − 6E[XY] + 6E[X]E[Y]
                = Var(X) + 9Var(Y) − 6Cov(X, Y)
                = 31.
Problem 10. Let X₁, X₂, . . . , X_n be iid random variables with E[X₁] = µ and Var(X₁) = σ². Define

    X̄ = (Σ_{i=1}^n X_i)/n,  and  S² = Σ_{i=1}^n (X_i − X̄)²/(n − 1).

Find (a) Var(X̄) (b) E[S²].

Solution.

(a) The means of X̄ and X̄² are

    E[X̄] = E[(Σ_{i=1}^n X_i)/n] = (Σ_{i=1}^n E[X_i])/n = nµ/n = µ,

    E[X̄²] = E[(Σ_i Σ_j X_iX_j)/n²] = (Σ_i Σ_j E[X_iX_j])/n²
           = (Σ_i E[X_i²] + Σ_{i≠j} E[X_iX_j])/n²
           = (n(σ² + µ²) + n(n − 1)µ²)/n²
           = σ²/n + µ².

Therefore, the variance is

    Var(X̄) = E[X̄²] − E[X̄]² = σ²/n.

(b) The expected value of X_iX̄ is

    E[X_iX̄] = E[(Σ_{j=1}^n X_iX_j)/n] = (Σ_{j=1}^n E[X_iX_j])/n = (E[X_i²] + Σ_{j≠i} E[X_i]E[X_j])/n = σ²/n + µ².

The expected value of S² is

    E[S²] = E[Σ_{i=1}^n (X_i − X̄)²/(n − 1)]
          = Σ_{i=1}^n E[(X_i − X̄)²]/(n − 1)
          = Σ_{i=1}^n (E[X_i²] − 2E[X_iX̄] + E[X̄²])/(n − 1)
          = Σ_{i=1}^n ((σ² + µ²) − 2(σ²/n + µ²) + (σ²/n + µ²))/(n − 1)
          = Σ_{i=1}^n σ²(1 − 1/n)/(n − 1)
          = σ².
Problem 11. Pick the point (X, Y) uniformly in the triangle {(x, y) | 0 ≤ x ≤ 1 and 0 ≤ y ≤ x}. Calculate E[(X − Y)² | X].

Solution. Since (X, Y) is picked uniformly (the triangle has area 1/2), the joint pdf is

    f_{X,Y}(x, y) = 2, 0 ≤ y ≤ x ≤ 1;  0, otherwise.

The pdf of X is given by

    f_X(x) = ∫_0^x 2 dy = 2x

for x ∈ [0, 1] and 0 otherwise, so Y | X = x is uniform on [0, x].

Therefore, the conditional expectation is

    E[(X − Y)² | X = x] = ∫_ℝ (x − y)² f_{Y|X}(y, x) dy = ∫_0^x (x − y)² · (1/x) dy = x²/3

for x ∈ [0, 1].
Problem 12. Find E[Y | X] where (X, Y) is jointly distributed with joint pdf

    f(x, y) = (y/(1 + x)⁴) exp(−y/(1 + x)), x, y ≥ 0;  0, otherwise.

Solution. The pdf of X is given by

    f_X(x) = ∫_0^∞ (y/(1 + x)⁴) exp(−y/(1 + x)) dy = 1/(1 + x)²

for x ≥ 0, and 0 otherwise.

Now, E[Y | X] is given by

    E[Y | X = x] = ∫_0^∞ y f_{Y|X}(y, x) dy
                 = ∫_0^∞ (y²/(1 + x)²) exp(−y/(1 + x)) dy
                 = (1 + x) ∫_0^∞ t² exp(−t) dt      (setting t = y/(1 + x))
                 = Γ(3)(1 + x)
                 = 2(1 + x).
Problem 13. Let X have a beta distribution, i.e., its pdf is f_X(x) = x^{a−1}(1 − x)^{b−1}/β(a, b), 0 < x < 1, and let Y given X = x have a binomial distribution with parameters (n, x). Find the regression of X on Y. Is the regression linear?

Solution. As Y | X ∼ Binomial(n, X) and X ∼ Beta(a, b),

    f_{X,Y}(x, y) = f_{Y|X}(y, x)f_X(x) = (1/β(a, b)) C(n, y) x^{a+y−1}(1 − x)^{n−y+b−1}.

The pmf of Y is

    f_Y(y) = ∫_0^1 f_{X,Y}(x, y) dx = (β(a + y, n − y + b)/β(a, b)) C(n, y).

The pdf of X | Y is thus given by

    f_{X|Y}(x, y) = f_{X,Y}(x, y)/f_Y(y) = x^{a+y−1}(1 − x)^{n−y+b−1}/β(a + y, n − y + b).

Therefore, the regression of X on Y is given by

    E[X | Y = y] = ∫_0^1 x^{a+y}(1 − x)^{n−y+b−1}/β(a + y, n − y + b) dx = β(a + y + 1, n − y + b)/β(a + y, n − y + b) = (a + y)/(n + a + b),

where 0 ≤ y ≤ n is an integer. The regression is linear.
Problem 14. Let X ∼ Exp(λ). Find E[X | X > y] and E[X − y | X > y].

Solution. The conditional density f_{X|X>y}(x, y) is given by

    f_{X|X>y}(x, y) = λe^{−λ(x−y)}, x > y;  0, otherwise.

Thus,

    E[X | X > y] = ∫_y^∞ x · λe^{−λ(x−y)} dx = ∫_0^∞ (t + y) · λe^{−λt} dt = y + 1/λ,

and the expected value E[X − y | X > y] = E[X | X > y] − E[y | X > y] = 1/λ.
Problem 15. Consider n independent trials, where each trial results in outcome i with probability p_i = 1/3, i = 1, 2, 3. Let X_i denote the number of trials that result in outcome i amongst these n trials. Find the distribution of X₂. Find the conditional expectation of X₁ given X₂ > 0. Also determine Cov(X₁, X₂ | X₂ ≤ 1).

Solution. For i ∈ {1, 2, 3} the probability

    P(X_i = k) = C(n, k) p_i^k (1 − p_i)^{n−k} = C(n, k) 2^{n−k}/3^n.

Note that X₁ | (X₂ = 0) ∼ Binom(n, 1/2), as in each such trial either outcome 1 or outcome 3 occurs, with equal probability.

By the Law of Total Expectation,

    E[X₁] = E[X₁ | X₂ = 0]P(X₂ = 0) + E[X₁ | X₂ > 0]P(X₂ > 0)
    ⟹ E[X₁ | X₂ > 0] = (E[X₁] − E[X₁ | X₂ = 0]P(X₂ = 0))/P(X₂ > 0)
                      = (n/3 − (n/2)(2/3)^n)/(1 − (2/3)^n)
                      = n · (3^{n−1} − 2^{n−1})/(3^n − 2^n).

For finding Cov(X₁, X₂ | X₂ ≤ 1), we first find

    E[X₂ | X₂ ≤ 1] = (0 · P(X₂ = 0) + 1 · P(X₂ = 1))/P(X₂ ≤ 1) = (n2^{n−1}/3^n)/(2^n/3^n + n2^{n−1}/3^n) = n/(n + 2),

    E[X₁X₂ | X₂ ≤ 1] = (Σ_{k=0}^{n−1} k P(X₁ = k, X₂ = 1))/P(X₂ ≤ 1)
                     = (Σ_{k=0}^{n−1} k · C(n, 1)(1/3) · C(n − 1, k)(1/3)^k(2/3)^{n−k−1}) / ((2/3)^n + n2^{n−1}/3^n)
                     = (n/3)(n − 1)/3 / (2^n/3^n + n2^{n−1}/3^n)
                     = (n(n − 1)/9)/(2^n/3^n + n2^{n−1}/3^n)
                     = n(n − 1)3^{n−2}/(2^n + n2^{n−1}).

The sum was evaluated using the fact that (n − 1)/3 is the mean of Binom(n − 1, 1/3).

Therefore,

    Cov(X₁, X₂ | X₂ ≤ 1) = E[X₁X₂ | X₂ ≤ 1] − E[X₁]E[X₂ | X₂ ≤ 1]
                         = n(n − 1)3^{n−2}/(2^n + n2^{n−1}) − (n/3) · n2^{n−1}/(2^n + n2^{n−1})
                         = (n(n − 1)3^{n−2} − n²2^{n−1}/3)/(2^n + n2^{n−1}).
Remark. Pretty sure there’s something wrong with the value of Cov(X1 , X2 | X2 ≤ 1), I don’t want
to try and fix it though.
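Since the conditional covariance above was corrected, a small simulation is a reasonable cross-check. The sketch below (assuming numpy; n and the number of replications are arbitrary illustrative choices) estimates Cov(X1, X2 | X2 ≤ 1) and compares it with −n/(n + 2)².

```python
# Rough simulation check of Cov(X1, X2 | X2 <= 1) = -n/(n+2)^2
# for n iid trials uniform over the outcomes {1, 2, 3}.
import numpy as np

rng = np.random.default_rng(1)
n, reps = 6, 2_000_000
trials = rng.integers(1, 4, size=(reps, n))   # each trial is 1, 2 or 3
x1 = (trials == 1).sum(axis=1)
x2 = (trials == 2).sum(axis=1)
mask = x2 <= 1                                # condition on {X2 <= 1}
cov = np.cov(x1[mask], x2[mask])[0, 1]
print(cov, -n / (n + 2) ** 2)                 # both should be close to -0.094
```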
Problem 16.
(a) Show that Cov(X, Y ) = Cov(X, E[Y | X]).
(b) Suppose that, for constants a and b, E[Y | X] = a + bX. Show that b = Cov(X, Y )/ Var(X).
Solution.
(a) Since E[E[Y | X]] = E[Y],
Cov(X, E[Y | X]) = E[X E[Y | X]] − E[X] E[E[Y | X]]
= E[E[XY | X]] − E[X]E[Y]
= E[XY] − E[X]E[Y]
= Cov(X, Y).
(b) By the above result,
Cov(X, Y) = Cov(X, E[Y | X]) = Cov(X, a + bX) = b Cov(X, X) = b Var(X), hence b = Cov(X, Y)/Var(X).
Problem 17. Let X be a random variable which is uniformly distributed over the interval (0, 1). Let Y be chosen from the interval (0, X] according to the pdf f(y | x) = 1/x for 0 < y ≤ x, and 0 otherwise. Find E[Y^k | X] and E[Y^k] for any fixed positive integer k.
Solution. The expectation of Y^k | X is given by
E[Y^k | X = x] = ∫_0^x (y^k/x) dy = x^k/(k + 1),
and the expectation of Y^k can be determined from the Law of Total Expectation as
E[Y^k] = E[E[Y^k | X]] = ∫_0^1 x^k/(k + 1) dx = 1/(k + 1)².
Problem 18. Suppose that a signal X, standard normal distributed, is transmitted over a noisy channel so that the received measurement is Y = X + W, where W follows a normal distribution with mean 0 and variance σ² and is independent of X. Find f_{X|Y}(x | y) and E[X | Y = y].
Solution. Since X ∼ N(0, 1), W ∼ N(0, σ²) and X, W are independent, Y ∼ N(0, 1 + σ²).
We find the joint pdf of (X, Y). The map (X, W) → (X, Y) is invertible as W = Y − X. The determinant of the Jacobian of the map (X, Y) → (X, W) is
det J = det [∂X/∂X  ∂X/∂Y ; ∂W/∂X  ∂W/∂Y] = det [1 0 ; −1 1] = 1.
Thus, the joint pdf is
f_{X,Y}(x, y) = f_{X,W}(x, w) |det J| = f_{X,W}(x, y − x) = f_X(x) f_W(y − x)
= (1/(2πσ)) exp(−(x² + (y − x)²/σ²)/2).
The conditional pdf is
f_{X|Y}(x | y) = f_{X,Y}(x, y)/f_Y(y)
= (√(1 + σ²)/(σ√(2π))) exp(−(1/2)(x² + (x − y)²/σ² − y²/(1 + σ²)))
= (√(1 + σ²)/(σ√(2π))) exp(−((1 + σ²)/(2σ²)) (x − y/(1 + σ²))²).
Therefore, X | (Y = y) ∼ N(y/(1 + σ²), σ²/(1 + σ²)). Hence, the mean is
E[X | Y = y] = y/(1 + σ²).
Problem 19. Suppose X follows Exp(1). Given X = x, Y is a uniformly distributed rv on the interval [0, x]. Find the value of E[Y].
Solution. The conditional expectation E[Y | X] = X/2. By Law of Total Expectation,
E[Y ] = E[E[Y | X]] = E[X/2] = 1/2.
Problem 20.
Problem 21.
Problem 22.
Problem 23. Suppose you participate in a chess tournament in which you play until you lose a game.
Suppose you are a very average player, each game is equally likely to be a win, a loss or a tie. You
collect 2 points for each win, 1 point for each tie and 0 points for each loss. The outcome of each
game is independent of the outcome of every other game. Let Xi be the number of points you earn
for game i and let Y equal the total number of points earned in the tournament. Find the moment
generating function MY (t) and hence compute E[Y ].
Solution.
(a) Let N be the number of games played; since you play until the first loss and each game is lost with probability 1/3, N ∼ Geometric(1/3) and the N-th game contributes 0 points. Given that a game is not lost, it is a win or a tie with equal probability, so the points Z_i earned in each of the first N − 1 games are iid on {1, 2} with
M_Z(t) = E[e^{tZ_i}] = (e^t + e^{2t})/2.
The conditional expectation of e^{tY} given N is
E[exp(tY) | N = n] = E[exp(t Σ_{i=1}^{n−1} Z_i)] = M_Z(t)^{n−1}.
By the Law of Total Expectation,
M_Y(t) = E[e^{tY}] = E[E[e^{tY} | N]] = Σ_{n≥1} (1/3)(2/3)^{n−1} M_Z(t)^{n−1} = (1/3)/(1 − (2/3)M_Z(t)) = 1/(3 − e^t − e^{2t}),
valid for t in a neighbourhood of 0.
(b) The expectation of Y is
E[Y] = M_Y'(0) = (e^t + 2e^{2t})/(3 − e^t − e^{2t})² evaluated at t = 0, which gives E[Y] = 3.
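A short simulation of the tournament is an easy way to corroborate E[Y] = 3 (and, as a bonus, P(Y = 0) = 1/3, which is what the MGF above predicts as t → −∞). This is only a sketch, assuming numpy; the number of replications is arbitrary.

```python
# Simulate: win/tie/loss equally likely each game, stop at first loss.
import numpy as np

rng = np.random.default_rng(3)
totals = []
for _ in range(200_000):
    y = 0
    while True:
        g = rng.integers(3)      # 0 = loss, 1 = tie (1 point), 2 = win (2 points)
        if g == 0:
            break
        y += g
    totals.append(y)
totals = np.array(totals)
print(totals.mean(), (totals == 0).mean())   # ≈ 3 and ≈ 1/3
```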
Problem 24.
Problem 25. Let X1, X2, . . . , Xn be independent and let ln(Xi) have normal distribution N(2i, 1), i = 1, 2, . . . , n. Let W = X1^α X2^{2α} · · · Xn^{nα}, where α > 0 is a constant. Determine E(W), Var(W) and the pdf of W.
Solution. For independent normal random variables X ∼ N(µ_x, σ_x²) and Y ∼ N(µ_y, σ_y²),
X + Y ∼ N(µ_x + µ_y, σ_x² + σ_y²) and aX ∼ N(aµ_x, a²σ_x²).
Also, for X ∼ N(µ, σ²), the mgf of X is
M_X(t) = exp(µt + σ²t²/2).
Therefore,
ln(W) = Σ_{k=1}^n kα ln(X_k) ∼ N(2α Σ_{k=1}^n k², α² Σ_{k=1}^n k²);
for convenience, let µ = 2α Σ_{k=1}^n k² and σ² = α² Σ_{k=1}^n k².
The expected value of W is
E[W] = E[e^{ln W}] = M_{ln W}(1) = exp(µ + σ²/2).
The variance of W is
Var(W) = E[W²] − E[W]² = M_{ln W}(2) − E[W]² = exp(2µ + 2σ²) − exp(2µ + σ²).
The pdf of W is
f_W(w) = f_{ln W}(ln w) |d(ln w)/dw| = (1/(σw√(2π))) exp(−(ln w − µ)²/(2σ²)) for w > 0.
Problem 26. Let (X, Y ) be a two-dimensional continuous type random variables. Assume that,
E[X], E[Y ] and E[XY ] exist. Suppose that, E[X | Y = y] does not depend on y. Find E[XY ].
Solution. Let c = E[X | Y ], c is independent of Y . By Law of Total Expectation,
E[X] = E[E[X | Y ]] = E[c] = c
therefore, c = E[X].
By Law of Total Expectation,
E[XY ] = E[E[XY | Y ]] = E[Y E[X | Y ]] = E[Y E[X]] = E[X]E[Y ].
Problem 27.
Problem 28. Let X and Y be two discrete random variables with
P (X = x1 ) = p1 , P (X = x2 ) = 1 − p1 , 0 < p1 < 1;
and
P (Y = y1 ) = p2 , P (Y = y2 ) = 1 − p2 , 0 < p2 < 1.
If the correlation coefficient between X and Y is zero, check whether X and Y are independent random
variables.
Proof. We show that they are independent.
Note that Cov(X, Y) = Cov(X − a, Y − b) for any constants a, b, so Cov(X − x1, Y − y1) = Cov(X, Y) = 0.
Assuming x1 ≠ x2 and y1 ≠ y2, observe that
E[(X − x1)(Y − y1)] = P(X = x2, Y = y2)(x2 − x1)(y2 − y1),
E[X − x1] E[Y − y1] = P(X = x2)(x2 − x1) P(Y = y2)(y2 − y1).
Since Cov(X − x1, Y − y1) = 0, these two quantities are equal, and dividing by (x2 − x1)(y2 − y1) gives
P(X = x2, Y = y2) = P(X = x2)P(Y = y2).
Similarly, looking at Cov(X − xi, Y − yj) for the other pairs (i, j) gives the desired result.
Problem 29.
Problem 30. A real function g(x) is non-negative and satisfies the inequality g(x) ≥ b > 0 for all x ≥ a. Prove that for a random variable X, if E[g(X)] exists then P(X ≥ a) ≤ E[g(X)]/b.
Proof. Note that {X ≥ a} ⊆ {g(X) ≥ b}. By Markov's inequality,
P(X ≥ a) ≤ P(g(X) ≥ b) ≤ E[g(X)]/b.
Problem 31.
Problem 32.
6 Limiting Probabilities
Problem 1. Let {Xn} be a sequence of independent random variables defined by
P{Xn = 0} = 1 − 1/n and P{Xn = 1} = 1/n, n = 1, 2, . . .
(a) Find the distribution of X such that Xn → X almost surely.
(b) Find the distribution of X such that Xn → X in probability.
Solution. (a) No such X exists; (the standard) proof uses the Second Borel–Cantelli Lemma, which we haven't done in lectures.
(b) Consider the constant random variable X = 0. We claim that Xn → X in probability. For any ε > 0,
P{|Xn − X| > ε} = P{Xn = 1} = 1/n if ε < 1, and 0 if ε ≥ 1,
so lim_{n→∞} P{|Xn − X| > ε} = 0. Therefore, Xn → 0 in probability.
Problem 2. For each n ≥ 1, let Xn be a uniformly distributed random variable over the set {0, 1/n, 2/n, . . . , (n − 1)/n, 1}. Prove that Xn converges to U[0, 1] in distribution.
Solution. Note that nXn ∈ Z. For any x ∈ [0, 1],
F_{Xn}(x) = P(Xn ≤ x) = P(nXn ≤ nx) = Σ_{0≤k≤nx} P(nXn = k) = Σ_{0≤k≤⌊nx⌋} P(nXn = k) = (⌊nx⌋ + 1)/(n + 1).
For x < 0, F_{Xn}(x) = 0, and F_{Xn}(x) = 1 for x > 1.
Therefore,
lim_{n→∞} F_{Xn}(x) = 0 for x < 0, x for 0 ≤ x < 1, and 1 for x ≥ 1,
which is the distribution function of U[0, 1].
Problem 3. Let (Ω, F, P) = ([0, 1], B(R) ∩ [0, 1], U([0, 1])). Let {Xn, n = 2, 3, . . . } be a sequence of random variables with Xn ∼ U([1/2 − 1/n, 1/2 + 1/n]). Prove or disprove that Xn → X in distribution, with X = 1/2.
Proof. The distribution function of Xn is given by
F_{Xn}(x) = 0 for x < 1/2 − 1/n,
F_{Xn}(x) = (x − (1/2 − 1/n))/(2/n) = (2nx − n + 2)/4 for 1/2 − 1/n ≤ x < 1/2 + 1/n,
F_{Xn}(x) = 1 for x ≥ 1/2 + 1/n.
In the limiting case,
lim_{n→∞} F_{Xn}(1/2) = lim_{n→∞} (2n · (1/2) − n + 2)/4 = 1/2.
For x < 1/2, F_{Xn}(x) = 0 once n > 1/(1/2 − x), hence lim_{n→∞} F_{Xn}(x) = 0. Similarly, solving for x > 1/2, we get lim_{n→∞} F_{Xn}(x) = 1. Thus,
lim_{n→∞} F_{Xn}(x) = 0 for x < 1/2, 1/2 for x = 1/2, and 1 for x > 1/2.
The distribution function of X = 1/2 is given by
F_X(x) = 0 for x < 1/2 and 1 for x ≥ 1/2.
Since lim_{n→∞} F_{Xn}(x) = F_X(x) at all points x at which F_X is continuous, Xn → X in distribution.
Problem 4. Let X1, X2, . . . be a sequence of i.i.d. random variables such that Xi ∼ N(0, 1). Define Sn = Σ_{i=1}^n Xi, n = 1, 2, . . . . Then, as n → ∞, Sn/n converges in probability to X. Find X.
Solution. It converges to their mean, which is 0, by the Weak Law of Large Numbers.
Problem 5. Consider polling of n voters and record the fraction Sn of those polled who are in favour of a particular candidate. If p is the fraction of the entire voter population that supports this candidate, then Sn = (X1 + X2 + · · · + Xn)/n, where the Xi are i.i.d. random variables with B(1, p). How many voters should be sampled so that our estimate Sn is within 0.02 of p with probability at least 0.90?
Solution. Note that E[Xi] = p and Var(Xi) = p(1 − p). By the CLT, for sufficiently large n, we can approximate
√n (Sn − p)/√(p(1 − p))
by N(0, 1). We wish to find the minimum n such that
P(|Sn − p| ≤ 0.02) ≥ 0.90.
Note that
P(Sn − p ≤ 0.02) = P(√n (Sn − p)/√(p(1 − p)) ≤ 0.02 √(n/(p(1 − p)))) ≈ Φ(0.02 √(n/(p(1 − p)))).
Therefore,
P(|Sn − p| ≤ 0.02) ≈ Φ(0.02 √(n/(p(1 − p)))) − Φ(−0.02 √(n/(p(1 − p)))) = 2Φ(0.02 √(n/(p(1 − p)))) − 1.
So we wish to find the smallest n such that Φ(0.02 √(n/(p(1 − p)))) ≥ 0.95. From the Z-table,
0.02 √(n/(p(1 − p))) ≈ 1.65 =⇒ n ≈ 6806.25 p(1 − p).
For this n to be valid for all values of p, it must hold where p(1 − p) is maximum, that is at p = 1/2, which gives
n ≥ 6806.25 × (1/2) × (1/2) = 1701.5625.
Problem 6. Suppose that 30 electronic devices say D1 , D2 , . . . , D30 are used in the following manner.
As soon as D1 fails, D2 becomes operative. When D2 fails, D3 becomes operative etc. Assume that the
time to failure of Di is an exponentially distributed random variable with parameter = 0.1 (hour)−1 .
Let T be the total time of operation of the 30 devices. What is the probability that T exceeds 350
hours?
Solution. Let Xi be the time in hours for which the i-th device is active, Xi ∼ Exponential(0.1). So E[Xi] = 10 and Var(Xi) = 100. By the CLT, we can approximate
(√n/10)(Σ_{i=1}^n Xi/n − 10)
by N(0, 1) for sufficiently large n.
We wish to find
P(Σ_{i=1}^{30} Xi > 350) = P((√30/10)(Σ_{i=1}^{30} Xi/30 − 10) > 5√30/30) ≈ 1 − Φ(5/√30).
Problem 7. Let X ∼ Bin(n, p). Use the CLT to find n such that: P [X > n/2] ≤ 1 − α. Calculate
the value of n when α = 0.90 and p = 0.45.
Solution. Let Xi ∼ Bin(1, p) be i.i.d.; note that Σ_{i=1}^n Xi ∼ Bin(n, p).
Since E[Xi] = p and Var(Xi) = p(1 − p), by the CLT, for sufficiently large n,
√(n/(p(1 − p))) (Σ_{i=1}^n Xi/n − p) is approximately N(0, 1).
Now, setting p = 0.45,
P(X > n/2) = P(X/n − 0.45 > 0.05)
= P(√(n/(0.45 × 0.55)) (X/n − 0.45) > √n × 0.05/√(0.45 × 0.55))
≈ 1 − Φ(0.05 √(n/0.2475)).
We require
P(X > n/2) ≈ 1 − Φ(x) ≤ 1 − α =⇒ Φ(x) ≥ α = 0.90.
From the Z-table, this happens when
0.05 √(n/0.2475) = x ≥ 1.29 =⇒ n ≥ 164.746,
so n = 165 suffices.
Problem 8. Use the CLT to show that lim_{n→∞} e^{−n} Σ_{i=0}^n n^i/i! = 0.5.
Solution. Let Xi ∼ Poisson(1) be i.i.d. variables; then
X = Σ_{i=1}^n Xi ∼ Poisson(n).
By the CLT, for sufficiently large n, (X − n)/√n is approximately N(0, 1). Therefore,
e^{−n} Σ_{i=0}^n n^i/i! = P(X ≤ n) = P((X − n)/√n ≤ 0),
so
lim_{n→∞} e^{−n} Σ_{i=0}^n n^i/i! = lim_{n→∞} P((X − n)/√n ≤ 0) = Φ(0) = 1/2.
Problem 9. A person puts a few one rupee coins into a piggy-bank each day. The number of one rupee coins added on any given day is equally likely to be 1, 2, 3, 4, 5 or 6, and is independent from day to day. Find an approximate probability that it takes at least 80 days to collect 300 rupees. The final answer can be in terms of Φ(z), where
Φ(z) = (1/√(2π)) ∫_{−∞}^z e^{−t²/2} dt.
Solution. Let Xi be the number of coins added on the i-th day. Then
µ = E[Xi] = (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5,
E[Xi²] = (1² + 2² + 3² + 4² + 5² + 6²)/6 = 91/6,
σ² = Var(Xi) = E[Xi²] − E[Xi]² = 35/12.
Let X̄ = (Σ_{i=1}^n Xi)/n. By the CLT, √n (X̄ − µ)/σ can be approximated by N(0, 1) for sufficiently large n. Therefore,
P(Σ_{i=1}^n Xi ≤ 300) = P(nX̄ ≤ 300) = P(√n (X̄ − µ)/σ ≤ √n (300/n − µ)/σ) ≈ Φ((300/√n − 3.5√n)/√(35/12)).
We wish to find the probability that it takes at least 80 days to collect 300 rupees, which is equivalent to not collecting 300 rupees in the first 79 days, that is,
P(Σ_{i=1}^{79} Xi < 300) ≈ Φ(1.5481).
Remark. The answer key gives the probability P(Σ_{i=1}^{80} Xi < 300), which I believe is wrong.
Problem 10. Suppose that Xi , i = 1, 2, . . . , 30 are independent random variables each having a
Poisson distribution with parameter 0.01. Let S = X1 + X2 + · · · + X30 .
(a) Using central limit theorem evaluate P (S ≥ 3).
(b) Compare the answer in (a) with exact value of this probability.
Solution. Note that E[Xi] = Var(Xi) = 0.01.
(a) By the CLT, we can approximate
√30 (S/30 − 0.01)/√0.01
by N(0, 1). Now
P(S ≥ 3) = P(√30 (S/30 − 0.01)/√0.01 ≥ √30 (3/30 − 0.01)/√0.01) ≈ 1 − Φ(0.9√30),
which is on the order of 4 × 10⁻⁷.
(b) We see that S ∼ Poisson(0.3). Therefore,
P(S ≥ 3) = 1 − P(S < 3) = 1 − e^{−0.3} Σ_{i=0}^2 0.3^i/i! ≈ 0.003599.
The CLT approximation is very poor here: the mean 0.3 of S is far too small for the normal approximation to be accurate.
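Both numbers are one-liners with scipy; a minimal sketch assuming scipy is available.

```python
# Compare the CLT approximation with the exact Poisson value for P(S >= 3).
from scipy.stats import norm, poisson

mu = 30 * 0.01                               # S ~ Poisson(0.3)
clt = 1 - norm.cdf((3 - mu) / mu**0.5)       # ≈ 4.1e-7
exact = 1 - poisson.cdf(2, mu)               # ≈ 0.003599
print(clt, exact)
```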
Problem 11. Let X1, X2, . . . be iid random variables, each having pmf P(Xi = 1) = 7/9 = 1 − P(Xi = 0). Let Yi = Xi + Xi², i = 1, 2, . . . . Use the central limit theorem to evaluate P(Σ_{i=1}^{30} Yi > 60) approximately.
Solution. Note that E[Yi] = 14/9 and Var(Yi) = 56/81. By the CLT,
√30 (Σ_{i=1}^{30} Yi/30 − 14/9)/√(56/81)
can be approximated by N(0, 1). Now,
P(Σ_{i=1}^{30} Yi > 60) = P(√30 (Σ_{i=1}^{30} Yi/30 − 14/9)/√(56/81) > √30 (60/30 − 14/9)/√(56/81)) ≈ 1 − Φ(2√(15/7)).
Problem 12. Consider the dining hall of Aravali Hostel, IIT Delhi, which serves dinner to its hostel students only. They are seated at 12-seat tables. The mess secretary observes over a long period of time that 95 percent of the time there are between six and nine full tables of students, and the remainder of the time the numbers are equally likely to fall above or below this range. Assume that each student decides to come with a given probability p, and that the decisions are independent. How many students are there? What is p?
Solution. Suppose that there are n students, and let Xi be the indicator variable for whether the i-th student arrives, Xi ∼ B(1, p). Note that E[Xi] = p and Var(Xi) = p(1 − p).
We are given that
P(Σ_{i=1}^n Xi < 6 × 12) = 0.025 and P(Σ_{i=1}^n Xi > 9 × 12) = 0.025.
By the CLT, we can approximate
√n (Σ_{i=1}^n Xi/n − p)/√(p(1 − p))
by N(0, 1). Therefore,
P(Σ_{i=1}^n Xi < 72) ≈ Φ(√n (72/n − p)/√(p(1 − p))),
P(Σ_{i=1}^n Xi > 108) ≈ 1 − Φ(√n (108/n − p)/√(p(1 − p))).
From the Z-table,
Φ(x) = 0.025 =⇒ x ≈ −1.96 and 1 − Φ(x) = 0.025 =⇒ x ≈ 1.96.
Let q = 1.96. Substituting these values,
(108 − np)/√(np(1 − p)) = +q and (72 − np)/√(np(1 − p)) = −q.
Adding the two equations gives np = 90, and then (108 − 90)/√(90(1 − p)) = q, so
p = 1 − 18²/(90q²) ≈ 0.0629 and n = 90/p ≈ 1431.06,
i.e. there are about 1431 students.
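The two closing formulas are easy to evaluate directly; a minimal sketch in Python (no external libraries needed beyond numpy, which is used only for convenience).

```python
# Recover p and n from np = 90 and 18 = 1.96 * sqrt(n p (1 - p)).
q = 1.96
p = 1 - 18**2 / (90 * q**2)
n = 90 / p
print(p, n)          # ≈ 0.0629 and ≈ 1431
```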
Problem 13. Let X1, X2, . . . be a sequence of independent and identically distributed random variables with mean 1 and variance 1600, and assume that these variables are non-negative. Let Y = Σ_{k=1}^{100} Xk.
(a) What does Markov's inequality tell you about the probability P(Y ≥ 900)?
(b) Use the central limit theorem to approximate the probability P(Y ≥ 900).
Solution.
(a) By Markov's inequality,
P(Y ≥ 900) ≤ E[Y]/900 = 100/900 = 1/9.
(b) By the CLT, we can approximate
10(Y/100 − 1)/40
by N(0, 1). Thus,
P(Y ≥ 900) = P(10(Y/100 − 1)/40 ≥ 2) ≈ 1 − Φ(2) ≈ 0.02275.
Problem 14. A person stands on the street and sells newspapers. Assume that each of the people
passing by buys a newspaper independently with probability 13 . Let X denote the number of people
passing past the seller during the time until he sells his first 100 copies of the newspaper. Using CLT,
find P (X ≤ 300) approximately.
Solution. Let Xi be the rv representing the number of people passing the seller after he has sold the (i − 1)-th paper and until he sells the i-th paper. The Xi are i.i.d., Xi ∼ Geometric(1/3), and X = Σ_{i=1}^{100} Xi.
Note that E[Xi] = 3 and Var(Xi) = 6. By the CLT, we can approximate
10(X/100 − 3)/√6
by N(0, 1). Therefore,
P(X ≤ 300) = P(10(X/100 − 3)/√6 ≤ 0) ≈ Φ(0) = 0.5.
Problem 15. Let X1, X2, . . . be iid random variables, each having Bernoulli distribution with parameter 8/9.
(a) Find the distribution of Yi = Xi + Xi², i = 1, 2, . . . .
(b) Use the central limit theorem to evaluate P(Σ_{i=1}^{20} Yi > 20) approximately.
Solution.
(a) The pmf of Yi is given by
P(Yi = 0) = 1/9, P(Yi = 2) = 8/9, and P(Yi = k) = 0 otherwise.
(b) The mean E[Yi] = 16/9 and Var(Yi) = 32/81. By the CLT, we can approximate
√20 (Σ_{i=1}^{20} Yi/20 − 16/9)/√(32/81)
by N(0, 1). Therefore,
P(Σ_{i=1}^{20} Yi > 20) = P(√20 (Σ_{i=1}^{20} Yi/20 − 16/9)/√(32/81) > −7√10/4) ≈ 1 − Φ(−7√10/4).
Problem 16. Let X1, X2, . . . , Xn be n independent Poisson distributed random variables with means 1, 2, . . . , n respectively. Find an x in terms of t such that
P((Sn − n²/2)/n ≤ t) ≈ Φ(x) for sufficiently large n,
where Φ is the CDF of N(0, 1).
Solution. Let Yi be i.i.d. rvs which are Poisson distributed with parameter 1. Note that E[Yi] = Var(Yi) = 1 and that
Sn = Σ_{i=1}^n Xi ∼ Poisson(Σ_{i=1}^n i) = Poisson(n(n + 1)/2).
Therefore, Sn and Σ_{i=1}^{n(n+1)/2} Yi have the same distribution. By the CLT we can approximate
(Sn/(n(n + 1)/2) − 1) √(n(n + 1)/2)
as N(0, 1). Therefore,
P((Sn − n²/2)/n ≤ t) = P(Sn ≤ nt + n²/2)
= P((Sn/(n(n + 1)/2) − 1) √(n(n + 1)/2) ≤ ((nt + n²/2)/(n(n + 1)/2) − 1) √(n(n + 1)/2))
≈ Φ(((nt + n²/2)/(n(n + 1)/2) − 1) √(n(n + 1)/2))
= Φ(√(n/(2(n + 1))) (2t − 1)).
Problem 17. Using the MGF, find the limit of the Binomial distribution with parameters n and p as n → ∞ such that np = λ, so that p → 0.
Solution. The MGF of a binomial distribution X ∼ B(n, p) is given by
M_X(t) = E[e^{tX}] = Σ_{x=0}^n e^{tx} C(n, x) p^x (1 − p)^{n−x} = (pe^t + 1 − p)^n.
On substituting p = λ/n, we see that
lim_{n→∞} M_X(t) = lim_{n→∞} (1 + λ(e^t − 1)/n)^n = exp(λ(e^t − 1)).
The MGF of a Poisson variable Y with parameter λ is given by
M_Y(t) = E[e^{tY}] = Σ_{n≥0} e^{−λ} (λe^t)^n/n! = exp(λ(e^t − 1)).
Thus, as n → ∞ the Binomial distribution B(n, λ/n) approaches Poisson(λ).
7 Introduction to Stochastic Processes
Problem 1. Trace the path of the following stochastic processes:
(a) {Wk | k ∈ T} where Wk is the time that the k-th customer has to wait in the system before service and T = {1, 2, . . . }.
(b) {X(t) | t ∈ T} where X(t) is the number of jobs in the system at time t, T = {t | 0 ≤ t < ∞}.
(c) {Y(t) | t ∈ T} where Y(t) denotes the cumulative service requirement of all jobs in the system at time t, T = {t | 0 ≤ t < ∞}.
Problem 2. Suppose that X1, X2, . . . are iid random variables each having N(0, σ²). Let {Sn | n = 1, 2, . . . } be a stochastic process where Sn = exp(Σ_{i=1}^n Xi − nσ²/2). Find E[Sn] for all n.
Solution. Note that
E[exp(Xn − σ²/2)] = E[exp(Xn)] e^{−σ²/2} = M_{Xn}(1) e^{−σ²/2} = 1.
Therefore,
E[Sn] = E[exp(Σ_{i=1}^n Xi − nσ²/2)] = E[Π_{i=1}^n exp(Xi − σ²/2)] = Π_{i=1}^n E[exp(Xi − σ²/2)] = 1.
Problem 3. Let X(t) = A0 + A1 t + A2 t2 , where Ai ’s are uncorrelated random variables with mean
0 and variance 1. Find the mean function and covariance function of X(t).
Solution. The mean function is
E[X(t)] = E[A0 + A1 t + A2 t²] = E[A0] + E[A1]t + E[A2]t² = 0.
The covariance function is
Cov(X(s), X(t)) = E[X(s)X(t)] − E[X(s)]E[X(t)]
= E[(A0 + A1 s + A2 s²)(A0 + A1 t + A2 t²)]
= Σ_{i=0}^2 Σ_{j=0}^2 E[Ai Aj] s^i t^j
= Σ_{i=0}^2 E[Ai²] s^i t^i + Σ_{i≠j} E[Ai]E[Aj] s^i t^j
= Σ_{i=0}^2 s^i t^i
= 1 + st + (st)².
Problem 4. Consider the process Xt = A cos(wt)+B sin(wt) where A and B are uncorrelated random
variables with mean 0 and variance 1, and w is a positive constant. Is {Xt | t ≥ 0} covariance/wide-sense stationary?
Solution. The process is wide-sense stationary as
• The mean of Xt
E[Xt ] = E[A cos(wt) + B sin(wt)] = E[A] cos(wt) + E[B] sin(wt) = 0
is independent of time.
• The covariance function,
Cov(X(s), X(t)) = E[X(s)X(t)] − E[X(s)]E[X(t)]
= E[(A cos(ws) + B sin(ws))(A cos(wt) + B sin(wt))]
= E[A2 ] cos(ws) cos(wt) + E[A]E[B](cos(ws) sin(wt) + sin(ws) cos(wt))
+ E[B 2 ] sin(ws) sin(wt)
= cos(ws) cos(wt) + sin(ws) sin(wt)
= cos(w(s − t))
= Cov(X(s − t), X(0))
only depends on the difference of times.
• And the second moment is finite as
E[(X(t))2 ] = E[X(t)]2 + Cov(X(t), X(t)) = cos(w(t − t)) = 1.
Problem 5. In a communication system, the carrier signal at the receiver is modeled by Y(t) = X(t) cos(2πt + Θ), where {X(t) | t ≥ 0} is a zero-mean, wide-sense stationary process and Θ is a uniformly distributed random variable on the interval (−π, π). Assume that Θ is independent of the process {X(t) | t ≥ 0}. Is {Y(t) | t ≥ 0} wide-sense stationary? Justify your answer.
Solution. The process is wide-sense stationary, as
• The mean,
E[Y(t)] = E[X(t)] E[cos(2πt + Θ)] = 0,
is independent of time.
• The covariance function,
Cov(Y(s), Y(t)) = E[Y(s)Y(t)] − E[Y(s)]E[Y(t)]
= E[X(s)X(t) cos(2πt + Θ) cos(2πs + Θ)]
= E[X(s)X(t)] E[cos(2πt + Θ) cos(2πs + Θ)]
= Cov(X(s), X(t)) ∫_{−π}^{+π} (1/2π) cos(2πt + θ) cos(2πs + θ) dθ
= (1/2) cos(2π(s − t)) Cov(X(s), X(t)).
Since X is WSS, Cov(X(s), X(t)) = Cov(X(s − t), X(0)), so Cov(Y(s), Y(t)) depends only on the difference s − t.
• Finally, the second moment is finite, as
E[Y(t)²] = Cov(Y(t), Y(t)) + E[Y(t)]² = (1/2) Cov(X(t), X(t)) = (1/2) E[X(t)²],
which is finite as X is WSS.
Problem 6. Let X and Y be iid random variables each having uniform distribution on interval
[−π, +π]. Let Z(t) = sin(Xt + Y ) for t ≥ 0. Is {Z(t), t ≥ 0} covariance stationary?
Solution. The process is wide-sense stationary, as
• The mean,
E[Z(t)] = E[sin(Xt + Y)] = E[E[sin(xt + Y) | X = x]] = E[0] = 0,
is independent of time.
• Note that
∫_{−π}^{+π} sin(a + x) sin(b + x) dx = π cos(a − b).
Therefore, the covariance function
Cov(Z(s), Z(t)) = E[Z(s)Z(t)] − E[Z(s)]E[Z(t)]
= E[sin(Xs + Y) sin(Xt + Y)]
= E[E[sin(xs + Y) sin(xt + Y) | X = x]]
= E[(1/2) cos(X(s − t))]
= 1/2 if s − t = 0, and (1/2) sin(π(s − t))/(π(s − t)) if s − t ≠ 0,
depends only on the difference in times, i.e. Cov(Z(s), Z(t)) = Cov(Z(s − t), Z(0)).
• The second moment is finite, as
E[Z(t)²] = Cov(Z(t), Z(t)) + E[Z(t)]² = 1/2.
Problem 7. Consider the random telegraph signal, denoted by X(t), which jumps between two states, 0 and 1, according to the following rules. At time t = 0, the signal X(t) starts in either state with equal probability, i.e., P(X(0) = 0) = P(X(0) = 1) = 1/2, and the switching times are decided by a Poisson process {Y(t) | t ≥ 0} with parameter λ, independent of X(0). At time t, the signal is
X(t) = (1/2)(1 − (−1)^{X(0)+Y(t)}), t > 0.
Is {X(t) | t ≥ 0} covariance/wide-sense stationary?
Solution. The process is wide-sense stationary, as
• The mean,
E[X(t)] = 1/2 − (1/2) E[(−1)^{X(0)}] E[(−1)^{Y(t)}] = 1/2 − (1/2) · 0 · E[(−1)^{Y(t)}] = 1/2,
is independent of time.
• If N ∼ Poisson(µ), then
E[(−1)^N] = Σ_{k≥0} (−1)^k e^{−µ} µ^k/k! = e^{−2µ}.
The covariance function for s > t is
Cov(X(s), X(t)) = E[X(s)X(t)] − E[X(s)]E[X(t)]
= (E[(1 − (−1)^{X(0)+Y(s)})(1 − (−1)^{X(0)+Y(t)})] − 1)/4
= E[(−1)^{2X(0)+Y(s)+Y(t)}]/4
= E[(−1)^{2X(0)}] E[(−1)^{Y(s)+Y(t)}]/4
= E[(−1)^{Y(s)−Y(t)}]/4   (as X(0), Y(s), Y(t) ∈ Z)
= e^{−2λ(s−t)}/4,   (as Y(s) − Y(t) ∼ Poisson(λ(s − t)))
which depends only on the difference s − t, as desired.
• The second moment is finite, as
E[X(t)²] = Cov(X(t), X(t)) + E[X(t)]² = 1/4 + 1/4 = 1/2.
Problem 8. Let {X(t) | 0 ≤ t ≤ T } be a stochastic process such that E[X(t)] = 0 and E[X(t)2 ] = 1
for all t ∈ [0, T ]. Find the upper bound of |E[X(t)X(t + h)]| for any h > 0 and t ∈ [0, T − h].
Solution. We show that it is bounded above by 1:
E[X(t)X(t + h)] = E[(X(t)² + X(t + h)² − (X(t + h) − X(t))²)/2] = 1 − (1/2) E[(X(t + h) − X(t))²] ≤ 1,
E[X(t)X(t + h)] = E[((X(t + h) + X(t))² − X(t)² − X(t + h)²)/2] = (1/2) E[(X(t + h) + X(t))²] − 1 ≥ −1.
Therefore,
|E[X(t)X(t + h)]| ≤ 1.
Remark. This also follows immediately from the Cauchy–Schwarz inequality.
Problem 9. Let A be a positive random variable that is independent of a strictly stationary random
process {X(t) | t ≥ 0}. Show that Y (t) = AX(t) is also strictly stationary random process.
Proof. I assume that A is a discrete random variable; the same idea works for the continuous case, just replace the sum with an integral.
Let S be the support of A. For any τ, t1, t2, . . . , tn ∈ R and n ∈ N,
F_{Y(t1),...,Y(tn)}(y1, . . . , yn) = P(Y(t1) ≤ y1, . . . , Y(tn) ≤ yn)
= Σ_{a∈S} P(A = a, Y(t1) ≤ y1, . . . , Y(tn) ≤ yn)
= Σ_{a∈S} P(A = a, X(t1) ≤ y1/a, . . . , X(tn) ≤ yn/a)
= Σ_{a∈S} P(A = a) P(X(t1) ≤ y1/a, . . . , X(tn) ≤ yn/a)
= Σ_{a∈S} P(A = a) F_{X(t1),...,X(tn)}(y1/a, . . . , yn/a)
= Σ_{a∈S} P(A = a) F_{X(t1+τ),...,X(tn+τ)}(y1/a, . . . , yn/a)
= F_{Y(t1+τ),...,Y(tn+τ)}(y1, . . . , yn),
and hence {Y(t) | t ≥ 0} is a strictly stationary process.
Problem 10. Is the stochastic process {X(t) | t ∈ T} stationary, whose probability distribution under a certain condition is given by
P{X(t) = n} = (at)^{n−1}/(1 + at)^{n+1} for n = 1, 2, . . . , and P{X(t) = 0} = at/(1 + at)?
Solution. (We assume a ≠ 0; if a = 0 then P(X(t) = 1) is not well defined.) The process is not strictly stationary, as its second moment is not independent of time:
E[X(t)²] = Σ_{n≥1} n² (1/(1 + at)²) (at/(1 + at))^{n−1} = 2at + 1.
8 Discrete Time Markov Chains (Incomplete)
Problem 1. The owner of a local one-chair barber shop is thinking of expanding because there seem
to be too many people waiting. Observations indicate that in the time required to cut one person’s
hair there may be 0, 1 and 2 arrivals with probability 0.3, 0.4 and 0.3 respectively. The shop has a
fixed capacity of six people whose hair is being cut. Let X(t) be the number of people in the shop at
any time t and Xn = X(t+
n ) be the number of people in the shop after the time instant of completion
of the n-th person’s hair cut. Prove that {Xn | n = 1, 2, . . . } is a Markov chain assuming i.i.d arrivals.
Find its one step transition probability matrix.
Solution. Let Yn be the number of people that arrive in the time interval (t⁺_{n−1}, t⁺_n]. We see that
Xn = min(6, max(Xn−1 − 1, 0) + Yn),
as the maximum capacity of the shop is 6 and if Xn−1 > 0 we cut the hair of one person, who then leaves the barbershop.
We see that Xn = f(Xn−1, Yn) where the Yn are i.i.d. and independent of X0, so
P(Xn+1 = j | X1, X2, . . . , Xn = i) = P(f(i, Yn+1) = j | X1, X2, . . . , Xn = i)
= P(f(i, Yn+1) = j)
= P(f(Xn, Yn+1) = j | Xn = i)
= P(Xn+1 = j | Xn = i).
We get the second equality by noting that (X1, X2, . . . , Xn) is a function of (X0, Y1, Y2, . . . , Yn), which is independent of Yn+1. Hence {Xn} is a Markov chain.
Over the state space {0, 1, 2, 3, 4, 5, 6},
Xn | (Xn−1 = 0) = Yn,
Xn | (Xn−1 = i) = i − 1 + Yn for 1 ≤ i ≤ 5,
Xn | (Xn−1 = 6) = 5 + Yn if Yn ∈ {0, 1}, and 6 if Yn = 2,
so the one-step transition probability matrix is
P =
[0.3 0.4 0.3 0   0   0   0  ]
[0.3 0.4 0.3 0   0   0   0  ]
[0   0.3 0.4 0.3 0   0   0  ]
[0   0   0.3 0.4 0.3 0   0  ]
[0   0   0   0.3 0.4 0.3 0  ]
[0   0   0   0   0.3 0.4 0.3]
[0   0   0   0   0   0.3 0.7].
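The matrix can also be generated mechanically from the update rule, which is a useful sanity check. A minimal sketch, assuming numpy; the dictionary of arrival probabilities is taken straight from the problem.

```python
# Build the one-step matrix directly from Xn = min(6, max(Xn-1 - 1, 0) + Yn).
import numpy as np

arrivals = {0: 0.3, 1: 0.4, 2: 0.3}      # P(Yn = y)
P = np.zeros((7, 7))
for i in range(7):
    for y, p in arrivals.items():
        j = min(6, max(i - 1, 0) + y)
        P[i, j] += p
print(P)                                  # matches the matrix above; rows sum to 1
```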
Problem 2. Let X0 be an integer-valued random variable, P (X0 = 0) = 1, that is independent of
the i.i.d. sequence Z1 , Z2 , . . . , where P (Zn = 1) = p, P (Zn = −1) = q, and P (Zn = 0) = 1 − (p + q).
Let Xn = max(0, Xn−1 + Zn ), n = 1, 2, . . . . Prove that {Xn , n = 0, 1, . . . } is a discrete time Markov
chain. Write the one-step transition probability matrix or draw the state transition diagram for this
Markov chain.
Solution. We see that Xn = f(Xn−1, Zn) where the Zn are i.i.d.; the same argument as in the previous problem shows that {Xn} is Markov.
The transition probabilities are given by
P(Xn = i | Xn−1 = j) = P(Zn = i − j) for j > 0,
P(Xn = 0 | Xn−1 = 0) = P(Zn ∈ {0, −1}) = 1 − p, P(Xn = 1 | Xn−1 = 0) = P(Zn = 1) = p,
and P(Xn = i | Xn−1 = 0) = 0 otherwise.
Thus, the transition matrix over the state space {0, 1, 2, . . . } is
P =
[1 − p  p          0          0          0          · · ·]
[q      1 − (p+q)  p          0          0          · · ·]
[0      q          1 − (p+q)  p          0          · · ·]
[0      0          q          1 − (p+q)  p          · · ·]
[0      0          0          q          1 − (p+q)  · · ·]
[⋮      ⋮          ⋮          ⋮          ⋮          ⋱  ].
Problem 3. Suppose that a machine can be in two states: 0 = working and 1 = out of order on a day.
The probability that a machine is working on a particular day depends on the state of the machine
during two previous days. Specifically assume that P (X(n + 1) = 0 | X(n − 1) = j, X(n) = k) = qjk ,
j, k ∈ {0, 1} where X(n) is the state of the machine on day n.
(a) Show that {X(n) | n = 1, 2, . . . } is not a discrete Markov chain.
(b) Define a new state space for the problem by taking the pairs (j, k) where j and k are 0 or 1. We
say that machine is in state (j, k) on day n if the machine is in state j on day (n − 1) and in
state k on day n. Show that with this changed state space the system is a discrete time Markov
chain.
(c) Suppose the machine was working on Monday and Tuesday. What is the probability that it will
be working on Thursday?
Solution. In this question it is implicitly assumed that time-homogeneity,
P(Xn+1 = i | Xn = j) = P(Xn = i | Xn−1 = j),
automatically implies the Markov property,
P(Xn+1 = i | X0, X1, . . . , Xn−1, Xn = j) = P(Xn+1 = i | Xn = j),
which it does not.
(a) Actually, {X(n)} is a Markov chain if q0k = q1k for all k ∈ {0, 1}.
If q0k ≠ q1k for some k ∈ {0, 1}, then for {X(n)} to be a Markov chain we would need
P(X(n + 1) = 0 | X(n) = k) = P(X(n + 1) = 0 | X(n − 1) = j, X(n) = k) = qjk
for all j ∈ {0, 1}, which would force q0k = q1k. Hence {X(n)} is not a Markov chain in general.
(b) The random vectors (X(n), X(n + 1)) satisfy
P((X(n), X(n + 1)) = (a, b) | (X(n − 1), X(n)) = (c, d)) = 0 if a ≠ d,
and for a = d it equals P(X(n + 1) = b | X(n) = a, X(n − 1) = c), which is q_{ca} for b = 0 and 1 − q_{ca} for b = 1.
This depends on the past only through (X(n − 1), X(n)), so with this state space the system is a DTMC.
(c) The transition matrix over the state space {(0, 0), (0, 1), (1, 0), (1, 1)} is
P =
[q00  1 − q00  0    0      ]
[0    0        q01  1 − q01]
[q10  1 − q10  0    0      ]
[0    0        q11  1 − q11].
The machine working on Monday and Tuesday means the pair chain starts in state (0, 0), and Thursday is two steps later. We compute
(1, 0, 0, 0) P² = (q00, 1 − q00, 0, 0) P = (q00², q00(1 − q00), (1 − q00)q01, (1 − q00)(1 − q01)).
The machine works on Thursday if the pair is (0, 0) or (1, 0), so summing the first and third entries gives the required probability
q00² + q01(1 − q00).
Problem 4. The transition probability matrix of a discrete time Markov chain {Xn, n = 0, 1, . . . } having three states 1, 2 and 3 is
P =
[0.3 0.4 0.3]
[0.6 0.2 0.2]
[0.5 0.4 0.1],
and the initial distribution is π = (0.7, 0.2, 0.1).
(a) Compute P(X2 = 3). (b) Compute P(X3 = 2, X2 = 3, X1 = 3, X0 = 2).
Solution.
(a) Compute the third entry of πP², which comes out to 0.212.
(b) By repeatedly applying the Markov property,
P(X3 = 2, X2 = 3, X1 = 3, X0 = 2) = P(X3 = 2 | X2 = 3) P(X2 = 3 | X1 = 3) P(X1 = 3 | X0 = 2) P(X0 = 2),
which is 0.4 × 0.1 × 0.2 × 0.2 = 0.0016.
Problem 5. Consider a time-homogeneous discrete time Markov chain {Xn, n = 0, 1, . . . } with state space S = {0, 1, 2, 3, 4} and one-step transition probability matrix
P =
[1   0   0   0   0  ]
[0.5 0   0.5 0   0  ]
[0   0.5 0   0.5 0  ]
[0   0   0.5 0   0.5]
[0   0   0   0   1  ].
(a) Classify the states of the chain as transient, positive recurrent or null recurrent.
(b) When P(X0 = 2) = 1, find the expected number of times the Markov chain visits state 1 before being absorbed.
(c) When P(X0 = 1) = 1, find the probability that the Markov chain is absorbed in state 0.
Solution. The state diagram is a path on {0, 1, 2, 3, 4}: states 0 and 4 are absorbing, and each of the states 1, 2, 3 moves one step left or right with probability 1/2 each.
(a) From the state diagram we see that the states 0 and 4 are positive recurrent, and 1, 2, 3 are transient. There are no null recurrent states in a finite Markov chain.
(b) Let mi be the expected number of visits to state 1 before absorption, starting from state i.
Since we never visit any other state after reaching 0 or 4, m0 = m4 = 0. Writing down the one-step equations for the remaining states,
m1 = 1 + (1/2) · 0 + (1/2) m2,
m2 = (1/2) m1 + (1/2) m3,
m3 = (1/2) m2 + (1/2) · 0.
Solving these gives m2 = 1.
(c) Let pi be the probability that the Markov chain is absorbed in state 0, starting at state i. Clearly p0 = 1 and p4 = 0. Setting up the equations for the rest,
p1 = (1/2) p0 + (1/2) p2,
p2 = (1/2) p1 + (1/2) p3,
p3 = (1/2) p2 + (1/2) p4,
which on solving gives p1 = 3/4.
Problem 6. Consider a DTMC with transition probability matrix
P =
[0.4 0.6 0  ]
[0.4 0   0.6]
[0   0.4 0.6].
Find the stationary distribution for this Markov chain.
Solution. A stationary distribution is a left eigenvector of the transition matrix with eigenvalue 1, so we solve
[x, y, z] P = [x, y, z]
along with x + y + z = 1 to get
π = [4, 6, 9]/19.
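The same stationary distribution drops out of a small linear solve. A minimal sketch, assuming numpy; the system πP = π, Σπ = 1 is solved in the least-squares sense, which here has an exact solution.

```python
# Solve pi P = pi together with sum(pi) = 1.
import numpy as np

P = np.array([[0.4, 0.6, 0.0],
              [0.4, 0.0, 0.6],
              [0.0, 0.4, 0.6]])
A = np.vstack([P.T - np.eye(3), np.ones(3)])   # (P^T - I) pi = 0 and sum = 1
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi, np.array([4, 6, 9]) / 19)            # both ≈ [0.2105, 0.3158, 0.4737]
```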
Problem 7. Two gamblers, A and B, bet on successive independent tosses of an unbiased coin that
lands heads up with probability p. If the coin turns up heads, gambler A wins a rupee from gambler
B, and if the coin turns up tails, gambler B wins a rupee from gambler A. Thus the total number
of rupees among the two gamblers stays fixed, say N . The game stops as soon as either gambler is
ruined; i.e., is left with no money! Assume the initial fortune of gambler A is i. Let Xn be the amount
of money gambler A has after the n-th toss. If Xn = 0, then gambler A is ruined and the game stops.
If Xn = N , then gambler B is ruined and the game stops. Otherwise the game continues. Prove that
{Xn , n = 0, 1, . . . } is a discrete time Markov chain. Write the one-step transition probability matrix
or draw the state transition diagram for this Markov chain.
Solution. The random variable Xn satisfies
Xn+1 | (Xn = 0) = 0,
Xn+1 | (Xn = x) = x + 1 if the coin turns up heads, and x − 1 if it turns up tails, for 0 < x < N,
Xn+1 | (Xn = N) = N.
Since the coin tosses are independent of everything else, {Xn} is Markov over the state space {0, 1, . . . , N}.
The one-step transition probabilities are
p_{0,0} = 1, p_{N,N} = 1, and p_{i,i+1} = p, p_{i,i−1} = 1 − p for 0 < i < N;
that is, the state transition diagram is the path 0 ← 1 ↔ 2 ↔ · · · ↔ N − 1 → N with absorbing barriers at 0 and N, right steps having probability p and left steps probability 1 − p.
Problem 8. One way of spreading information on a network uses a rumor-spreading paradigm.
Suppose that there are 5 hosts currently on the network. Initially, one host begins with a message.
In every round, each host that has the message contacts another host chosen independently and
uniformly at random from the other 4 hosts, and sends the message to that host. The process stops when all hosts have the message. Model this process as a discrete time Markov chain with
(a) Xn the host (i = 1, 2, . . . , 5) who received the message at the end of the n-th round;
(b) Yn the number of hosts having the message at the end of the n-th round.
Find the one-step transition probability matrix for each of the above discrete time Markov chains. Classify the states of the chains as transient, positive recurrent or null recurrent.
Solution. (a) I take Xn to be the host who last received the message. By the definition of the process,
P(Xn = i | Xn−1 = j) = 1/4 if i ≠ j, and 0 if i = j.
Since all pairs of states communicate and the chain is finite, all states are positive recurrent.
(b) The state space of Yn is {1, 2, 3, 4, 5}. Note that at each step Yn+1 ∈ {Yn, Yn + 1}; the former occurs if the contacted host already has the message and the latter occurs if a new host is contacted. We see that
P(Yn+1 = k | Yn = k) = (k − 1)/4,
as each informed host contacts one of the k − 1 other informed hosts with probability (k − 1)/4. This gives the transition matrix
P =
[0 1   0   0   0  ]
[0 1/4 3/4 0   0  ]
[0 0   1/2 1/2 0  ]
[0 0   0   3/4 1/4]
[0 0   0   0   1  ].
Clearly state 5 is absorbing and hence positive recurrent; the states 1, 2, 3, 4 are transient, since from each of them the chain eventually reaches 5 and never returns.
Problem 9. For j = 0, 1, . . . , let P_{j,j+2} = v_j and P_{j,0} = 1 − v_j define the transition probability matrix of a Markov chain. Discuss the character of the states of this chain.
Solution. I assume that 0 < v_i < 1 for all i.
The transition matrix is
P =
[1 − v0  0  v0  0   0   0   · · ·]
[1 − v1  0  0   v1  0   0   · · ·]
[1 − v2  0  0   0   v2  0   · · ·]
[1 − v3  0  0   0   0   v3  · · ·]
[⋮                          ⋱   ].
There is no transition from an even state to an odd state; however, each odd state has a transition to 0, and every other transition from an odd state is to a strictly greater odd state. This tells us that the chain never returns to an odd state after starting from it, which implies that the odd states are transient. Formally, for odd i,
f_{i,i} = P(∃ n ≥ 1 : Xn = i | X0 = i)
= 1 − P(Xn ≠ i ∀ n ≥ 1 | X0 = i)
= 1 − Σ_{j≥0} P(Xn ≠ i ∀ n ≥ 1 | X1 = j) P(X1 = j | X0 = i)
= 1 − P(Xn ≠ i ∀ n ≥ 1 | X1 = 0) P(X1 = 0 | X0 = i) − P(Xn ≠ i ∀ n ≥ 1 | X1 = i + 2) P(X1 = i + 2 | X0 = i)
= 1 − 1 · (1 − v_i) − 1 · v_i
= 0,
where P(Xn ≠ i ∀ n ≥ 1 | X1 = 0) = 1 because there is no transition from an even to an odd state, and P(Xn ≠ i ∀ n ≥ 1 | X1 = i + 2) = 1 because there is no transition from an odd state to a smaller odd state.
Clearly the even states form a closed communicating class. We give conditions for it to be transient, positive recurrent or null recurrent.
The probability that a chain starting at 0 returns to 0 is
f_{00} = P(∃ n ≥ 1 : Xn = 0 | X0 = 0)
= 1 − P(Xn ≠ 0 ∀ n ≥ 1 | X0 = 0)
= 1 − P(Xi = 2i ∀ i ≥ 0)
= 1 − Π_{i≥0} v_{2i}.
We get the third equality by noticing that if j is the smallest index such that Xj ≠ 2j, then Xj−1 = 2(j − 1), and since Xj ∈ {0, Xj−1 + 2} we must have Xj = 0, which isn't allowed.
1. If Π_{i≥0} v_{2i} > 0, then f_{00} < 1 and the class is transient.
2. If Π_{i≥0} v_{2i} = 0, then f_{00} = 1 and the class is recurrent.
Now suppose the class is positive recurrent. Since the class is aperiodic (there is a self-transition at 0), a positive stationary distribution exists. (Here we consider the chain restricted to the even states, so we can treat it as an irreducible Markov chain.)
Let this distribution be π; it satisfies
Σ_{i≥0} π_{2i} = 1,
π_0 = Σ_{i≥0} (1 − v_{2i}) π_{2i},
π_{2i+2} = v_{2i} π_{2i} for i ≥ 0.
The third equation tells us that π_{2i} = π_0 Π_{j=0}^{i−1} v_{2j}. So,
π_0 = Σ_{i≥0} (π_{2i} − v_{2i} π_{2i}) = 1 − Σ_{i≥0} v_{2i} π_{2i} = 1 − π_0 Σ_{i≥0} Π_{j=0}^{i} v_{2j},
hence
π_0 = 1/(1 + Σ_{i≥0} Π_{j=0}^{i} v_{2j}) and π_{2k} = (Π_{j=0}^{k−1} v_{2j})/(1 + Σ_{i≥0} Π_{j=0}^{i} v_{2j}).
We require π_{2k} > 0, which happens iff Σ_{i≥0} Π_{j=0}^{i} v_{2j} < ∞.
So, if Σ_{i≥0} Π_{j=0}^{i} v_{2j} < ∞, then all the even states are positive recurrent; if Π_{i≥0} v_{2i} = 0 but the sum diverges, they are null recurrent.
Problem 10.
Problem 11.
Problem 12.
Problem 13.
Problem 14.
Problem 15.
Problem 16.
Problem 17.
Problem 18.
Problem 19.
Problem 20.
Problem 21.
Problem 22.
Problem 23.¹ Given a DTMC {Xn, n = 0, 1, 2, . . . } with state space {1, 2, 3, 4} and one-step transition probability matrix
P =
[1    0   0    0   ]
[0.5  0   0.25 0.25]
[0.25 0.5 0    0.25]
[0    0   0    1   ].
(a) Draw the state transition diagram for this DTMC model.
(b) Find the expected number of transitions until you reach state 4, given the initial state is 2.
(c) Compute the probability that you eventually reach state 1 given the initial state is 2.
Solution.
(a) The state transition diagram: states 1 and 4 are absorbing (self-loops of probability 1); from state 2 the chain moves to 1, 3, 4 with probabilities 1/2, 1/4, 1/4; from state 3 it moves to 1, 2, 4 with probabilities 1/4, 1/2, 1/4.
(c) Reordering the states as 1, 4, 2, 3, the one-step probability matrix of this chain is
[1   0   0   0  ]
[0   1   0   0  ]
[1/2 1/4 0   1/4]
[1/4 1/4 1/2 0  ].
Let
A = [1/2 1/4; 1/4 1/4] and B = [0 1/4; 1/2 0]
be the submatrices representing the transitions from the transient states {2, 3} to the absorbing states {1, 4}, and between the transient states {2, 3}, respectively.
The fundamental matrix is
M = (I − B)⁻¹ = [1 −1/4; −1/2 1]⁻¹ = [8/7 2/7; 4/7 8/7],
with state space {2, 3}. The matrix
G = MA = [8/7 2/7; 4/7 8/7][1/2 1/4; 1/4 1/4] = [9/14 5/14; 4/7 3/7],
with row space {2, 3} and column space {1, 4}, gives the probability G_{i,j} of being absorbed in state j given that you start in state i.
We require G_{2,1} = 9/14 ≈ 0.6428.
(b) Let F be the event that we end in state 4. From the matrix G computed in part (c), we know that
P(F | X0 = 2) = 5/14, P(F | X0 = 3) = 3/7,
and obviously P(F | X0 = 4) = 1 and P(F | X0 = 1) = 0.
Let N be the number of transitions; we have to find E[N | X0 = 2, F]. Let mi = E[N | X0 = i, F].
From the Law of Total Expectation,
E[N | X0 = i, F] = Σ_j P(X1 = j | X0 = i, F) E[N | F, X0 = i, X1 = j].
We compute
P(X1 = j | X0 = i, F) = P(X1 = j, X0 = i, F)/P(X0 = i, F)
= P(F | X1 = j, X0 = i) P(X1 = j | X0 = i) P(X0 = i)/(P(F | X0 = i) P(X0 = i))
= (P(F | X1 = j)/P(F | X0 = i)) P(X1 = j | X0 = i)
= (P(F | X0 = j)/P(F | X0 = i)) P(X1 = j | X0 = i),
where the third equality follows by noting that the event F is nothing but Xn = 4 for some n, and then applying the Markov property.
Now note that for i ∉ {1, 4}, using the Markov property,
E[N | X1 = j, X0 = i, F] = E[N | X1 = j, F] + 1 = E[N | X0 = j, F] + 1.
Writing down the equations, with m4 = 0 and with transitions to state 1 having conditional probability 0 given F,
m2 = 1 + ((3/7)/(5/14)) (1/4) m3 + (1/(5/14)) (1/4) m4 = 1 + (3/10) m3,
m3 = 1 + ((5/14)/(3/7)) (1/2) m2 + (1/(3/7)) (1/4) m4 = 1 + (5/12) m2.
Solving these equations, we get
m2 = 52/35 ≈ 1.4857.
¹ I decided to reword the problem for obvious reasons. You can't model mental health as a DTMC.
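The fundamental-matrix bookkeeping above is easy to reproduce numerically; a minimal sketch, assuming numpy, with B and A taken from the reordered matrix in part (c).

```python
# Absorption probabilities via the fundamental matrix M = (I - B)^{-1}.
import numpy as np

B = np.array([[0.0, 0.25], [0.5, 0.0]])     # transient {2,3} -> transient {2,3}
A = np.array([[0.5, 0.25], [0.25, 0.25]])   # transient {2,3} -> absorbing {1,4}
M = np.linalg.inv(np.eye(2) - B)            # fundamental matrix
G = M @ A                                   # absorption probabilities
print(G)                                    # [[9/14, 5/14], [4/7, 3/7]]
print(G[0, 0], 9 / 14)                      # part (c): start in 2, absorb in 1
```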
9 Continuous Time Markov Chains
Problem 1. Consider a CTMC with
Q =
[−5  3  2]
[ 1 −3  2]
[ 2  4 −6]
and initial distribution (0, 1, 0). Find P(τ > t) where τ denotes the first transition time of the Markov chain.
Solution. The holding time of the second state is distributed as Exponential(−q₂₂) = Exponential(3). Thus, P(τ > t) = e^{−3t}.
Problem 2. Suppose the arrival at a counter form a time homogeneous Poisson process with parameter λ and suppose each arrival is of type A or of type B with respective probabilities p and 1 − p.
Let X(t) be the type of the last arrival before time t. Write down the forward Kolmogorov equations
for the stochastic process {X(t), t ≥ 0}. Find the time dependent system state probabilities.
Solution. The process can be written as a CTMC on the state space {A, B} with generator
Q =
[−(1 − p)λ  (1 − p)λ]
[ pλ        −pλ     ].
The forward Kolmogorov equation is
P′(t) = P(t)Q.
Looking at the entry P_{A,A}(t),
P′_{A,A}(t) = −(1 − p)λ P_{A,A}(t) + pλ P_{A,B}(t).
Since P_{A,A}(t) + P_{A,B}(t) = 1,
P′_{A,A}(t) = −(1 − p)λ P_{A,A}(t) + pλ(1 − P_{A,A}(t)) = pλ − λ P_{A,A}(t),
so
d(e^{λt} P_{A,A}(t))/dt = pλ e^{λt} =⇒ e^{λt} P_{A,A}(t) − 1 = p(e^{λt} − 1) =⇒ P_{A,A}(t) = p + (1 − p)e^{−λt}.
By the same argument for the second row (and since rows sum to 1),
P(t) =
[p + (1 − p)e^{−λt}   (1 − p)(1 − e^{−λt})]
[p(1 − e^{−λt})       (1 − p) + pe^{−λt}  ].
Problem 3.
(a) Let {N(t), t ≥ 0} be a Poisson process with rate λ. For any s, t ≥ 0, find
P(N(t + s) − N(t) = k | N(u), 0 ≤ u ≤ t).
(b) Let {N(t), t ≥ 0} be a Poisson process with rate 5. Compute
P(N(2.5) = 15, N(3.7) = 21, N(4.3) = 21).
Solution. Note that Poisson processes have independent increments, so that for disjoint intervals [a, b) and [c, d),
P(N(b) − N(a) = x, N(d) − N(c) = y) = P(N(b) − N(a) = x) P(N(d) − N(c) = y).
(a) For any u ∈ [0, t] the intervals [0, u) and [t, t + s) are disjoint, so
P(N(t + s) − N(t) = k | N(u), 0 ≤ u ≤ t) = P(N(t + s) − N(t) = k) = e^{−λs} (λs)^k/k!.
(b)
P(N(2.5) = 15, N(3.7) = 21, N(4.3) = 21)
= P(N(2.5) − N(0) = 15, N(3.7) − N(2.5) = 6, N(4.3) − N(3.7) = 0)
= P(N(2.5) − N(0) = 15) P(N(3.7) − N(2.5) = 6) P(N(4.3) − N(3.7) = 0)
= e^{−5×2.5} (5 × 2.5)¹⁵/15! · e^{−5×1.2} (5 × 1.2)⁶/6! · e^{−5×0.6} (5 × 0.6)⁰/0!.
Problem 4. Let {N (t), t ≥ 0} be a Poisson process with parameter λ. Let T1 denote the time of the
first event and Tn denote the time between (n − 1)-th and n-th events. Find P (T2 > t | T1 = s).
Solution. Since the inter-arrival times in a Poisson process are exponentially distributed i.i.d. variables,
P (T2 > t | T1 = s) = P (T2 > t) = e−λt .
Problem 5. Let {X(t), t ≥ 0} be a Poisson process with parameter λ and X(0) = j where j is a
positive integer. Consider the random variable Tj = inf{t : X(t) = j + 1}, i.e., Tj is the time of
occurrence of the first jump after the j-th jump, j = 1, 2, . . .
(a) Find the distribution of T1 .
(b) Find the joint distribution of (T2016 , T2017 , T2018 ).
Solution. The interarrival time in a Poisson process is exponentially distributed.
(a) So, T1 ∼ Exponential(λ).
(b) The interarrival times are iid Exponential(λ), so the joint pdf is the product of the individual pdfs: f(t₁, t₂, t₃) = λ³ e^{−λ(t₁+t₂+t₃)} for t₁, t₂, t₃ ≥ 0.
Problem 6. Assume the life times of N = 400 soldiers are iid following an exponential distribution
with parameter µ, then the process of the number of surviving soldiers by time t, {X(t), t ≥ 0}, is a
pure death process with death rates µi = iµ, i = 1, 2, . . . , N . Assume that, X(0) = N .
(a) Find P (X(t) = N − 1).
(b) Let SN be the time of the death of the last member of the population, i.e., SN is the time to
extinction. Find E[SN ].
Solution.
(a) From Problem 17, X(t) has distribution B(N, e^{−µt}); hence
P(X(t) = N − 1) = C(N, N − 1)(e^{−µt})^{N−1}(1 − e^{−µt}) = N e^{−(N−1)µt}(1 − e^{−µt}).
(b) Let Ti be the time required to go from state i to state i − 1. Since it is a pure death process, the transition i → i + 1 is not possible and Ti is precisely the holding time of state i. Therefore, Ti ∼ Exponential(iµ). Now
S_N = Σ_{i=1}^N Ti,
as S_N is nothing but the time required to go N → N − 1 → · · · → 1 → 0. Therefore,
E[S_N] = Σ_{i=1}^N E[Ti] = (1/µ) Σ_{i=1}^N 1/i.
The official solution given is
(1/µ) Σ_{i=1}^N ((−1)^{i−1}/i) C(N, i).
This is equivalent to the solution above, as
Σ_{i=1}^N ((−1)^{i−1}/i) C(N, i) = Σ_{i=1}^N C(N, i) ∫_0^1 (−x)^{i−1} dx
= ∫_0^1 (1 − Σ_{i=0}^N C(N, i)(−x)^i)/x dx
= ∫_0^1 (1 − (1 − x)^N)/x dx
= ∫_0^1 (1 − u^N)/(1 − u) du
= ∫_0^1 Σ_{i=1}^N u^{i−1} du
= Σ_{i=1}^N 1/i.
Problem 7. Consider a population, denoted by {X(t), t ≥ 0}, in which each individual gives birth
after an exponential time of parameter λ, all independently. Suppose X(0) = 1. Then, find the mean
population size at any t > 0.
Solution 1. We model it as a CTMC with states {1, 2, . . . }. The generator matrix is given by
q_{i,i} = −iλ and q_{i,i+1} = iλ.
Indeed, if Xj is the time until the j-th person gives birth, then the time required to move from state i to state i + 1 is
min(X1, X2, . . . , Xi).
Since Xj ∼ Exponential(λ) and they are independent, min(X1, . . . , Xi) ∼ Exponential(iλ), and hence the rate of movement from state i to i + 1 is iλ.
We now write the forward Kolmogorov equation,
p′_{i,j}(t) = p_{i,j}(t) q_{j,j} + p_{i,j−1}(t) q_{j−1,j} = λ(−j p_{i,j}(t) + (j − 1) p_{i,j−1}(t)).
We wish to find p_{1,j}(t); write it simply as p_j(t). We prove by induction that
p_j(t) = e^{−λt}(1 − e^{−λt})^{j−1}.
For the base case,
p′_1(t) = −λ p_1(t) =⇒ p_1(t) = e^{−λt}.
For higher values,
p′_j(t) = λ(−j p_j(t) + (j − 1) p_{j−1}(t))
⇐⇒ d(e^{jλt} p_j(t))/dt = (j − 1)λ e^{jλt} p_{j−1}(t)
=⇒ p_j(t) = (j − 1)λ e^{−jλt} ∫_0^t e^{jλs} p_{j−1}(s) ds
= (j − 1)λ e^{−jλt} ∫_0^t e^{(j−1)λs}(1 − e^{−λs})^{j−2} ds
= (j − 1) e^{−jλt} ∫_1^{e^{λt}} (u − 1)^{j−2} du
= e^{−λt}(1 − e^{−λt})^{j−1},
where we use the substitution u = e^{λs}.
Since p_j(t) = P(X(t) = j), X(t) is distributed geometrically with parameter e^{−λt}. Therefore, the mean population size is E[X(t)] = e^{λt}.
Solution 2. As in the previous solution, model it as a CTMC. Let Ti be the time spent in state i; as described above, Ti ∼ Exponential(iλ).
We see that
X(t) > N ⇐⇒ Σ_{i=1}^N Ti ≤ t.
We prove by induction that
P(Σ_{i=1}^N Ti ≤ t) = (1 − e^{−λt})^N for t ≥ 0.
The base case N = 0 is trivial. For higher values,
P(Σ_{i=1}^N Ti ≤ t) = ∫_0^∞ P(Σ_{i=1}^{N−1} Ti ≤ t − x | T_N = x) f_{T_N}(x) dx
= ∫_0^∞ P(Σ_{i=1}^{N−1} Ti ≤ t − x) f_{T_N}(x) dx
= ∫_0^t (1 − e^{−λ(t−x)})^{N−1} Nλ e^{−Nλx} dx
= N ∫_{e^{−λt}}^1 (u − e^{−λt})^{N−1} du
= (1 − e^{−λt})^N,
where we use the substitution u = e^{−λx}.
Now,
P(X(t) = j) = P(X(t) > j − 1) − P(X(t) > j) = e^{−λt}(1 − e^{−λt})^{j−1},
and we get that the mean is e^{λt}.
Solution 3. Model it as a CTMC and define Ti as in the previous solution. We find the CDF of Σ_{i=1}^N Ti combinatorially, using the fact that Ti ∼ Exponential(iλ).
From the argument in Problem 6(b), we can view Σ_{i=1}^N Ti as having the same distribution as the time to extinction of a population of size N in which each individual's time to death is independent of the others and distributed as Exponential(λ); that is, we can write
Σ_{i=1}^N Ti =_d max_{1≤i≤N} Si,
where Si represents the i-th individual's extinction time, and the Si are iid Exponential(λ).
Now
P(Σ_{i=1}^N Ti ≤ t) = P(max_i Si ≤ t) = P(S1 ≤ t, . . . , S_N ≤ t) = Π_{i=1}^N P(Si ≤ t) = (1 − e^{−λt})^N.
As in the previous solution, the mean is e^{λt}.
Solution 4. Let F(t) = E[X(t)] be the mean population at time t. Using the notation and result of Solution 1,
p′_j(t) = −jλ p_j(t) + (j − 1)λ p_{j−1}(t).
Now,
F(t) = Σ_{j≥1} j P(X(t) = j) = Σ_{j≥1} j p_j(t),
so
F′(t) = Σ_{j≥1} j p′_j(t)
= Σ_{j≥1} (−j²λ p_j(t) + j(j − 1)λ p_{j−1}(t))
= −λE[X(t)²] + λ Σ_{j≥1} ((j − 1)² + (j − 1)) p_{j−1}(t)
= −λE[X(t)²] + λE[X(t)²] + λE[X(t)]
= λF(t),
so E[X(t)] = F(t) = e^{λt}, as F(0) = 1.
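Because the holding time in state n is Exponential(nλ), the Yule process is also very easy to simulate, which gives an independent check of E[X(t)] = e^{λt}. A minimal sketch, assuming numpy; the values of λ, t and the number of replications are illustrative.

```python
# Simulate the Yule process (each individual splits at rate lam) and check E[X(t)].
import numpy as np

rng = np.random.default_rng(7)
lam, t_end, reps = 1.0, 1.5, 20_000
sizes = []
for _ in range(reps):
    n, t = 1, 0.0
    while True:
        t += rng.exponential(1.0 / (lam * n))   # holding time in state n
        if t > t_end:
            break
        n += 1
    sizes.append(n)
print(np.mean(sizes), np.exp(lam * t_end))      # both ≈ 4.48
```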
Problem 8. A dental surgery has two operation rooms. The service times are assumed to be independent, exponentially distributed with mean 15 minutes. Mr. Ram arrives when both operation
rooms are empty. Mr. Rajesh arrives 10 minutes later while Ram is still under medical treatment.
Another 20 minutes later Mr. Ajit arrives and both Ram and Rajesh are still under treatment. No
other patient arrives during this 30 minute interval.
(a) What is the probability that the medical treatment will be completed for Ajit before Ram?
(b) Derive the distribution function of the waiting time in the system for Ajit.
Solution. Let A, B, C be the times in minutes spent by Ram, Rajesh and Ajit in the operation room, respectively. We know that A, B, C are iid, each with distribution Exponential(1/15).
The condition that both Ram and Rajesh are still under treatment when Ajit arrives is equivalent to conditioning on (A > 30, B > 20).
Let X, Y be the times spent under treatment by Ram and Rajesh after Ajit's arrival, given that they are still under treatment when Ajit arrives; that is, X = (A − 30) | (A > 30) and Y = (B − 20) | (B > 20). By the memoryless property of the exponential distribution, X, Y ∼ Exponential(1/15).
(a) Ajit will be treated before Ram if C + Y < X. Thus, the probability is
P(X > C + Y) = P(X > C + Y | X > Y)P(X > Y) + P(X > C + Y | X ≤ Y)P(X ≤ Y)
= P(X > C + Y | X > Y)P(X > Y)   (as X > C + Y is not possible if X ≤ Y)
= P(X > C)P(X > Y)
= (1/2)(1/2) = 1/4.
We get P(X > C) = P(X > Y) = 1/2 as X, Y, C are iid continuous random variables.
(b) The time spent waiting by Ajit is min(X, Y), as he gets a room as soon as one of Ram and Rajesh leaves. Thus, its distribution is Exponential(2/15), since the minimum of independent exponential rvs is exponential with the sum of the rates; its distribution function is 1 − e^{−2t/15}, t ≥ 0.
Remark. In the first part I used the memorylessness of the exponential distribution,
P(X > a + b | X > a) = P(X > b),
with a, b random. More precisely, if X ∼ Exponential(λ) and A, B are random variables such that X, A, B are all independent and B ≥ 0, then
P(X > A + B) = P(X > A)P(X > B), and hence P(X > A + B | X > A) = P(X > B).
For a proof,
P(X > A + B) = ∫∫ P(X > A + B | A = a, B = b) dF_A(a) dF_B(b)
= ∫∫ P(X > a + b) dF_A(a) dF_B(b)
= ∫∫ e^{−(a+b)λ} dF_A(a) dF_B(b)
= (∫ e^{−aλ} dF_A(a))(∫ e^{−bλ} dF_B(b))
= P(X > A)P(X > B).
Problem 9. Four workers share an office that contains four telephones. At any time, each worker
is either ‘working’ or ‘on the phone’. Each ‘working’ period of worker i lasts for an exponentially
distributed time with rate λi , and each ‘on the phone’ period lasts for an exponentially distributed
time with rate µi , i = 1, 2, 3, 4. Let Xi (t) equal 1 if worker i is working at time t, and let it be 0
otherwise. Let X(t) = (X1 (t), X2 (t), X3 (t), X4 (t)).
(a) Argue that {X(t), t ≥ 0} is a continuous-time Markov chain and give its infinitesimal rates.
(b) What proportion of time are all workers 'working'?
Solution.
(a) Each {Xi(t), t ≥ 0} is a CTMC, as the holding time in each state is exponential: working periods last Exponential(λi) and 'on the phone' periods last Exponential(µi). Hence
P(Xi(t) | Xi(s), Xi(s′)) = P(Xi(t) | Xi(s)) for s′ < s < t.
By the independence of the workers,
P(X(t) | X(s), X(s′)) = Π_{i=1}^4 P(Xi(t) | Xi(s), Xi(s′)) = Π_{i=1}^4 P(Xi(t) | Xi(s)) = P(X(t) | X(s))
for s′ < s < t, and hence {X(t), t ≥ 0} is a CTMC.
The infinitesimal rates are given by
q_{(0,x,y,z),(1,x,y,z)} = µ1 and q_{(1,x,y,z),(0,x,y,z)} = λ1,
with analogous rates for the other three coordinates, and all other off-diagonal rates equal to 0.
(b) The CTMC {Xi(t), t ≥ 0} has generator
Q =
[−µi  µi ]
[ λi −λi ]
over the states {0, 1}. The limiting distribution satisfies πQ = 0; solving gives
π0 = λi/(λi + µi) and π1 = µi/(λi + µi).
Now the long-run proportion of time during which all workers are working is
lim_{t→∞} P(X(t) = (1, 1, 1, 1)) = Π_{i=1}^4 P(Xi(∞) = 1) = Π_{i=1}^4 µi/(λi + µi).
Problem 10. Suppose that you arrive at a single-teller bank to find seven other customers in the
bank, one being served (First Come First service basis) and the other six waiting in line. You join the
end of the line. Assume that, service times are independent and exponential distributed with rate µ.
Model this situation as a birth and death process.
(a) What is the distribution of the time spent by you in the bank?
(b) What is the expected amount of time you will spend in the bank?
Solution. Let Ti, 1 ≤ i ≤ 8, be the time the i-th person in line (including you) spends being served. It is given that the Ti are independent, Ti ∼ Exponential(µ).
(a) The time you spend in the bank is Σ_{i=1}^8 Ti ∼ Gamma(8, µ) (an Erlang distribution).
(b) The expected time is E[Σ_{i=1}^8 Ti] = Σ_{i=1}^8 E[Ti] = 8/µ.
Problem 11. Consider a Poisson process with parameter λ. Let T1 be the time of occurrence of the
first event and let N (T1 ) denote the number of events occurred in the next T1 units of time. Find the
mean and variance of N (T1 )T1 .
Solution. N(t) ∼ Poisson(λt) and T1 ∼ Exponential(λ). Therefore the mean is
E[N(T1)T1] = E[E[N(T1)T1 | T1]] = E[T1 E[N(T1) | T1]] = λE[T1²] = 2/λ.
For the variance, we find the second moment,
E[T1² N(T1)²] = E[T1² E[N(T1)² | T1]] = E[T1²((λT1)² + λT1)] = λ²E[T1⁴] + λE[T1³] = λ² · 4!/λ⁴ + λ · 3!/λ³ = 30/λ².
Thus, the variance is Var(N(T1)T1) = 30/λ² − (2/λ)² = 26/λ².
Problem 12. Let {X(t), t ≥ 0} be a pure birth process with λn = nλ, n = 1, 2, . . . , λ0 = λ; µn =
0, n = 0, 1, 2, . . . . Find the conditional probability that X(t) = n given that X(0) = i (1 ≤ i ≤ n).
Also, find the mean of this conditional distribution.
Solution 1. In Problem 7, we proved that X(t) ∼ Geometric(e^{−λt}) if X(0) = 1.
Now, for 1 ≤ j ≤ i, let Xj(t) be the number of descendants of the j-th initial individual at time t (including that individual). The total number of people at time t is
X(t) = Σ_{j=1}^i Xj(t).
Since Xj(t) ∼ Geometric(e^{−λt}) and they are iid, X(t) ∼ NB(i, e^{−λt}) (negative binomial), that is,
P(X(t) = n | X(0) = i) = C(n − 1, i − 1)(e^{−λt})^i (1 − e^{−λt})^{n−i} for n ≥ i.
The mean of this conditional distribution is E[X(t)] = i e^{λt}.
Solution 2. From Problem 7, we have the Kolmogorov equations
p′_{i,j}(t) = λ(−j p_{i,j}(t) + (j − 1) p_{i,j−1}(t)).
We induct on j to prove that
p_{i,j}(t) = C(j − 1, i − 1)(e^{−λt})^i (1 − e^{−λt})^{j−i}
for j ≥ i, and 0 otherwise.
For j = i,
p′_{i,i}(t) = −iλ p_{i,i}(t) =⇒ p_{i,i}(t) = e^{−iλt}.
For higher values, we do the same manipulations as in Problem 7 to get
p_{i,j}(t) = (j − 1)λ e^{−jλt} ∫_0^t e^{jλs} p_{i,j−1}(s) ds
= (j − 1)λ e^{−jλt} ∫_0^t e^{jλs} C(j − 2, i − 1)(e^{−λs})^i (1 − e^{−λs})^{j−1−i} ds
= (j − 1) C(j − 2, i − 1) λ e^{−jλt} ∫_0^t e^{λs}(e^{λs} − 1)^{j−1−i} ds
= (j − 1) C(j − 2, i − 1) e^{−jλt} ∫_1^{e^{λt}} (u − 1)^{j−1−i} du
= (j − 1) C(j − 2, i − 1) e^{−jλt} (e^{λt} − 1)^{j−i}/(j − i)
= C(j − 1, i − 1) e^{−iλt}(1 − e^{−λt})^{j−i},
where we use the substitution u = e^{λs}.
Problem 13. Suppose that people immigrate into a territory according to a time-homogeneous Poisson process with parameter λ = 1 per day. Let X(t) be the number of people who immigrate on or before time t. What is the probability that the elapsed time between the 100-th and 101-th arrivals exceeds two days?
Solution. As the process is Poisson, the interarrival time has distribution Exponential(1), and thus the probability that it exceeds 2 days is e^{−2}.
Problem 14. A rural telephone switch has C circuits available to carry C calls. A new call is blocked if all circuits are busy. Suppose calls have durations which are exponentially distributed with mean 1/µ, and the inter-arrival times of calls are exponentially distributed with mean 1/λ. Assume calls arrive independently and are served independently. Model this process as a birth and death process and write the forward Kolmogorov equations for it. Also find the probability that a call is blocked when the system is in steady state.
Solution. This can be modelled as a CTMC with X(t) the number of circuits that are currently busy; the state space is {0, 1, . . . , C} and the generator matrix has
q_{i,i−1} = iµ and q_{i,i+1} = λ for 0 < i < C, q_{0,1} = λ, q_{C,C−1} = Cµ,
with every other off-diagonal entry 0 (the diagonal entries make each row sum to 0). The forward Kolmogorov equations P′(t) = P(t)Q read
p′_{i,j}(t) = λ p_{i,j−1}(t) − (λ + jµ) p_{i,j}(t) + (j + 1)µ p_{i,j+1}(t) for 0 < j < C,
with the obvious modifications at the boundary states j = 0 and j = C.
The limiting distribution satisfies πQ = 0, that is,
−λπ0 + µπ1 = 0,
λπ_{i−1} − (λ + iµ)πi + (i + 1)µπ_{i+1} = 0 for 0 < i < C,
λπ_{C−1} − CµπC = 0.
Setting ρ = λ/µ, we see that πi = ρπ_{i−1}/i, which implies
πn = (ρ^n/n!) π0.
Since Σ_{i=0}^C πi = 1,
πn = (ρ^n/n!)/(Σ_{i=0}^C ρ^i/i!).
A call is blocked if all circuits are busy, that is X(t) = C, so the blocking probability is
πC = (ρ^C/C!)/(Σ_{i=0}^C ρ^i/i!).
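The blocking probability πC is the classical Erlang-B formula, which is a one-liner to evaluate. A minimal sketch in plain Python; the offered load ρ = 5 and C = 8 circuits are arbitrary illustrative values.

```python
# Erlang-B blocking probability pi_C = (rho^C / C!) / sum_{i=0}^{C} rho^i / i!.
from math import factorial

def erlang_b(rho: float, C: int) -> float:
    terms = [rho**i / factorial(i) for i in range(C + 1)]
    return terms[-1] / sum(terms)

print(erlang_b(rho=5.0, C=8))    # ≈ 0.070 for lambda/mu = 5 with 8 circuits
```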
Problem 15. Consider the random telegraph signal, denoted by X(t), jumps between two states, −1
and 1, according to the following rules. At time t = 0, the signal X(t) start with equal probability
for the two states, i.e., P (X(0) = −1) = P (X(0) = 1) = 1/2, and let the switching times be decided
by a Poisson process {N (t), t ≥ 0} with parameter λ independent of X(0). At time t, the signal
X(t) = X(0)(−1)N (t) , t > 0.
Write the Kolmogorov forward equations for the continuous time Markov chain {X(t), t ≥ 0}. Find
the time-dependent probability distribution of X(t) for any time t.
Solution 1. As the process is Poisson, the interarrival time is Exponential(λ); thus the holding time is Exponential(λ) and the rate is λ. For the state space {-1, +1}, the rate matrix is
Q = \begin{pmatrix} -λ & λ \\ λ & -λ \end{pmatrix}.
The forward Kolmogorov equation tells us that P'(t) = P(t)Q, that is,
p'_{-1,-1}(t) = -λp_{-1,-1}(t) + λp_{-1,+1}(t),
p'_{-1,+1}(t) = -λp_{-1,+1}(t) + λp_{-1,-1}(t),
p'_{+1,-1}(t) = -λp_{+1,-1}(t) + λp_{+1,+1}(t),
p'_{+1,+1}(t) = -λp_{+1,+1}(t) + λp_{+1,-1}(t).
Along with this, we have p_{i,-1}(t) + p_{i,+1}(t) = 1 and p_{i,j}(0) = δ_{i,j} for i, j ∈ {-1, +1}. So the first equation simply reduces to
p'_{-1,-1}(t) = λ(1 - 2p_{-1,-1}(t))
=⇒ e^{2λt}(p'_{-1,-1}(t) + 2λp_{-1,-1}(t)) = λe^{2λt}
=⇒ \frac{d(e^{2λt} p_{-1,-1}(t))}{dt} = λe^{2λt}
=⇒ e^{2λt} p_{-1,-1}(t) - 1 = \frac{1}{2}(e^{2λt} - 1)
=⇒ p_{-1,-1}(t) = \frac{1}{2} + \frac{1}{2} e^{-2λt}.
We get
P(t) = \begin{pmatrix} \frac12 + \frac12 e^{-2λt} & \frac12 - \frac12 e^{-2λt} \\ \frac12 - \frac12 e^{-2λt} & \frac12 + \frac12 e^{-2λt} \end{pmatrix}.
So,
P(X(t) = +1) = P(X(t) = 1 | X(0) = -1)P(X(0) = -1) + P(X(t) = 1 | X(0) = 1)P(X(0) = 1)
= \frac12(\frac12 - \frac12 e^{-2λt}) + \frac12(\frac12 + \frac12 e^{-2λt})
= \frac12,
and similarly P(X(t) = -1) = \frac12.
Solution 2. Here's an alternate way to get p_{i,j}(t) without solving the Kolmogorov equations:
P(X(t) = +1 | X(0) = +1) = P((-1)^{N(t)} = +1)
= P(N(t) ≡ 0 mod 2)
= \sum_{n ≥ 0} P(N(t) = 2n)
= e^{-λt} \sum_{n ≥ 0} \frac{(λt)^{2n}}{(2n)!}
= e^{-λt} \cdot \frac{e^{λt} + e^{-λt}}{2}
= \frac{1 + e^{-2λt}}{2}.
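Remark. A quick Monte Carlo sketch (not in the original solution) that checks this formula by sampling N(t) from a Poisson distribution; the values λ = 0.8 and t = 1.5 are arbitrary assumptions.

    import numpy as np

    lam, t = 0.8, 1.5                              # assumed illustrative values
    rng = np.random.default_rng(0)

    n_t = rng.poisson(lam * t, size=1_000_000)     # number of switches by time t
    x_t = (-1) ** n_t                              # X(t) given X(0) = +1
    print("empirical  P(X(t)=+1 | X(0)=+1):", (x_t == 1).mean())
    print("theoretical value              :", (1 + np.exp(-2 * lam * t)) / 2)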
Problem 16. Same as 3.
Problem 17. The birth and death process {X(t), t ≥ 0} is said to be a pure death process if λi = 0
for all i. Suppose µi = iµ, i = 1, 2, 3, . . . and initially X0 = n. Show that X(t) has B(n, p) distribution
with p = e−µt .
Solution 1. Modeling it as a CTMC, q_{i,i-1} = iµ and q_{i,i} = -iµ. The (backward) Kolmogorov equations tell us that
p'_{i,j}(t) = iµ(p_{i-1,j}(t) - p_{i,j}(t)).
We induct on i to prove that
p_{i,j}(t) = \binom{i}{j} (e^{-µt})^j (1 - e^{-µt})^{i-j},
i.e., that X(t) given X(0) = i is Binomial(i, e^{-µt}).
For i = j,
p'_{i,i}(t) = -iµ p_{i,i}(t) =⇒ p_{i,i}(t) = e^{-iµt}.
For the base case i = 0, we see that p_{0,0}(t) = 1. For higher values of i and j < i,
p'_{i,j}(t) = iµ(p_{i-1,j}(t) - p_{i,j}(t))
=⇒ e^{iµt}(p'_{i,j}(t) + iµ p_{i,j}(t)) = e^{iµt} iµ p_{i-1,j}(t)
=⇒ \frac{d(e^{iµt} p_{i,j}(t))}{dt} = e^{iµt} iµ p_{i-1,j}(t)
=⇒ e^{iµt} p_{i,j}(t) = iµ \int_0^t e^{iµs} p_{i-1,j}(s) \, ds
= iµ \int_0^t e^{iµs} \binom{i-1}{j} (e^{-µs})^j (1 - e^{-µs})^{i-j-1} \, ds
= iµ \binom{i-1}{j} \int_0^t e^{µs} (e^{µs} - 1)^{i-j-1} \, ds
= i \binom{i-1}{j} \int_1^{e^{µt}} (u - 1)^{i-j-1} \, du
= i \binom{i-1}{j} \frac{(e^{µt} - 1)^{i-j}}{i - j}
=⇒ p_{i,j}(t) = \binom{i}{j} (e^{-µt})^j (1 - e^{-µt})^{i-j},
where we use the substitution u = e^{µs}.
Solution 2. Let X_i(t) be 1 if the i-th individual survives till time t, and 0 otherwise. Since the holding time of a CTMC is exponential, X_i(t) ∼ Bernoulli(e^{-µt}).
Now the total population is X(t) = \sum_{i=1}^{n} X_i(t). Since the X_i(t) are iid Bernoulli variables, we see that X(t) ∼ Binomial(n, e^{-µt}).
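Remark. A minimal simulation sketch (not in the original solution) of the pure death process, using the iid-lifetime view of Solution 2: each individual's lifetime is Exponential(µ), so it survives past t with probability e^{-µt}. The values n = 10, µ = 0.5, t = 1.2 are assumed for illustration.

    import math, random

    n, mu, t = 10, 0.5, 1.2                    # assumed illustrative values
    p = math.exp(-mu * t)

    trials = 200_000
    survivors = [sum(random.expovariate(mu) > t for _ in range(n)) for _ in range(trials)]

    print("empirical mean:", sum(survivors) / trials, " binomial mean:", n * p)
    k = 6
    emp = survivors.count(k) / trials
    theory = math.comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"P(X(t)={k}): empirical {emp:.4f}, binomial {theory:.4f}")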
Problem 18. Consider a taxi station where taxis and customers arrive independently in accordance
with Poisson processes with respective rates of one and two per minute. A taxi will wait no matter
how many other taxis are in the system. Moreover, an arriving customer that does not find a taxi also
will wait no matter how many other customers are in the system. Note that a taxi can accommodate
ONLY ONE customer, on a first-come, first-served basis. Define
X(t) = \begin{cases} -n & \text{if } n \text{ taxis are waiting for customers at time } t, \\ n & \text{if } n \text{ customers are waiting for taxis at time } t. \end{cases}
(a) Write the generator matrix Q or draw the state transition diagram for the process {X(t), t ≥ 0}.
(b) Write the forward Kolmogorov equations for the Markov process {X(t), t ≥ 0}.
(c) Does a unique equilibrium probability distribution of the process exist? Justify your answer.
Solution.
(a) The state space is Z and the generator matrix is
q_{i,i+1} = 2, q_{i,i} = -3, and q_{i,i-1} = 1.
The state transition diagram is the birth–death chain on Z in which every state i has an arrow to i + 1 with rate 2 and an arrow to i - 1 with rate 1.
(b) The forward Kolmogorov equation is given by P'(t) = P(t)Q, which gives
p'_{i,j}(t) = 2p_{i,j-1}(t) - 3p_{i,j}(t) + p_{i,j+1}(t).
(c) Since 2/1 > 1, the embedded Markov chain is transient. Therefore, a unique limiting distribution doesn't exist.
Problem 19. Accidents on Delhi roads involving Blueline buses follow a Poisson process with rate 9 per month of 30 days. In a randomly chosen month of 30 days,
(a) What is the probability that there are exactly 4 accidents in the first 15 days?
(b) Given that exactly 4 accidents occurred in the first 15 days, what is the probability that all the
four occurred in the last 7 days out of these 15 days?
Solution. If time is measured in days, it's a Poisson process with rate 9/30 per day.
(a) N(15) ∼ Poisson(4.5), so
P(N(15) = 4) = e^{-4.5} \cdot \frac{(4.5)^4}{4!} ≈ 0.1898.
(b) We wish to find
P(N(15) - N(8) = 4 | N(15) = 4) = P(N(8) = 0 | N(15) = 4)
= P(N(15) = 4 | N(8) = 0) \cdot \frac{P(N(8) = 0)}{P(N(15) = 4)}
= P(N(15) - N(8) = 4 | N(8) = 0) \cdot \frac{P(N(8) = 0)}{P(N(15) = 4)}
= P(N(15) - N(8) = 4) \cdot \frac{P(N(8) = 0)}{P(N(15) = 4)}
= P(N(7) = 4) \cdot \frac{P(N(8) = 0)}{P(N(15) = 4)}
= e^{-7 \times 9/30} \frac{(7 \times 9/30)^4}{4!} \cdot \frac{e^{-8 \times 9/30}}{e^{-4.5}(4.5)^4/4!}
= (7/15)^4 ≈ 0.04742.
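Remark. Both numbers can be reproduced with a few lines of Python (λ = 9/30 per day, as in the solution); a sketch:

    import math

    lam = 9 / 30                                   # accidents per day

    def poisson_pmf(k, mean):
        return math.exp(-mean) * mean**k / math.factorial(k)

    # (a) exactly 4 accidents in the first 15 days
    print(poisson_pmf(4, lam * 15))                # about 0.1898

    # (b) all 4 of those accidents fall in the last 7 of the 15 days
    p_b = poisson_pmf(4, lam * 7) * poisson_pmf(0, lam * 8) / poisson_pmf(4, lam * 15)
    print(p_b, (7 / 15) ** 4)                      # both about 0.0474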
10 Simple Markovian Queues
Problem 1. Consider a M/M/1 queuing model. Find the waiting time distribution for any customer
in this model. Deduce the mean waiting time from the above distribution.
Solution. Let W be the waiting time in the steady state, µ be the service rate, λ be the arrival rate,
ρ = λ/µ and N be the number of people in the system.
From problem 4(a), we know that P (N = n) = (1 − ρ)ρn .
We have W = 0 exactly when the system is empty, that is, P(W = 0) = P(N = 0) = 1 - ρ.
Now we compute P(0 < W ≤ t). Let S_i be the time it takes to serve the i-th person, so that
W | (N = n) = \sum_{i=1}^{n} S_i.
Since the S_i are iid and S_i ∼ Exponential(µ), W | (N = n) ∼ Gamma(n, µ). Therefore,
P(0 < W ≤ t) = \sum_{n ≥ 1} P(0 < W ≤ t | N = n) P(N = n)
= \sum_{n ≥ 1} \int_0^t \frac{µ^n x^{n-1} e^{-µx}}{(n - 1)!} (1 - ρ)ρ^n \, dx
= \int_0^t (1 - ρ) e^{-µx} \sum_{n ≥ 1} \frac{λ^n x^{n-1}}{(n - 1)!} \, dx
= \int_0^t (1 - ρ) e^{-µx} \cdot λ e^{λx} \, dx
= ρ(1 - e^{-(µ-λ)t}).
Now the CDF is
F_W(t) = P(W ≤ t) = P(W = 0) + P(0 < W ≤ t) = 1 - ρe^{-(µ-λ)t}
for t ≥ 0, and 0 for t < 0.
The mean of W is given by
E[W] = \int_0^∞ (1 - F_W(t)) \, dt = \int_0^∞ ρe^{-(µ-λ)t} \, dt = \frac{ρ}{µ - λ}.
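Remark. The waiting-time law just derived can be checked by simulating the queue with Lindley's recursion W_{n+1} = max(0, W_n + S_n - A_{n+1}). This is a sketch, not part of the original solution; the rates λ = 2, µ = 3 are arbitrary assumptions.

    import random

    lam, mu = 2.0, 3.0                     # assumed illustrative rates (rho = 2/3)
    rho = lam / mu

    N = 500_000
    w, total, zero = 0.0, 0.0, 0
    for _ in range(N):
        total += w
        zero += (w == 0.0)
        # Lindley's recursion: waiting time of the next customer
        w = max(0.0, w + random.expovariate(mu) - random.expovariate(lam))

    print("empirical E[W]  :", total / N, " theory:", rho / (mu - lam))
    print("empirical P(W=0):", zero / N, " theory:", 1 - rho)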
Problem 2. Consider a M/M/1 queuing model with arrival rate λ and service rate µ. Find the
service rate where customers arrive at a rate of 3 per minute, given that 95% of the time the queue
contains less than 10 customers.
Solution. The condition is equivalent to saying that there are 10 or fewer people in the system 95% of the time, which is equivalent to \sum_{n=0}^{10} π_n ≥ 0.95. From problem 4(a), we know that π_n = ρ^n(1 - ρ) where ρ = λ/µ. So, we have to find a µ such that
\sum_{n=0}^{10} (1 - ρ)ρ^n = 1 - ρ^{11} ≥ 0.95 =⇒ ρ^{11} ≤ 0.05 =⇒ µ ≥ \frac{3}{0.05^{1/11}} = 3.939.
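Remark. The bound can also be checked numerically; a short sketch with λ = 3 per minute as given:

    lam = 3.0                                  # arrivals per minute
    mu = 3.0 / 0.05 ** (1 / 11)                # closed form from above
    print(mu)                                  # about 3.939

    # sanity check: with this mu, P(at most 10 in system) is exactly 0.95
    rho = lam / mu
    print(sum((1 - rho) * rho**n for n in range(11)))   # 0.95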
Problem 3. Consider an M/M/1/∞ queueing model. Suppose at time 0 there are i > 0 customers
in the queue. Let T denote the time taken to serve the first customer and S denote the time of the
next arrival. Find the probability of the event {T < S}.
Solution. We know that T ∼ Exponential(µ), S ∼ Exponential(λ) and they are independent. So,
P(S > T) = \int_0^∞ P(S > t | T = t) f_T(t) \, dt = \int_0^∞ e^{-λt} \cdot µe^{-µt} \, dt = \frac{µ}{λ + µ}.
Problem 4. Consider a M/M/1 queuing model with arrival rate λ and service rate µ.
(a) Derive the expression for π_n, the steady state probability that there are n customers in the system.
(b) Find the average time spent in the queue by any customer.
(c) Find the service rate where customers arrive at a rate of 3 per minute, given that 95% of the
time the queue contains less than 10 customers.
Solution.
(a) The state transition diagram is the birth–death chain on {0, 1, 2, . . .} with arrival rate λ from state i to i + 1 and service rate µ from state i + 1 to i.
So the steady state distribution satisfies
-λπ_0 + µπ_1 = 0,
λπ_{i-1} - (λ + µ)π_i + µπ_{i+1} = 0 for i > 0.
If we let ρ = λ/µ, we see that π_{i+1} = ρπ_i by inducting on i. Now,
π_n = ρ^n π_0.
Since \sum_{n ≥ 0} π_n = 1, therefore
π_0 = \frac{1}{\sum_{n ≥ 0} ρ^n} = 1 - ρ =⇒ π_n = (1 - ρ)ρ^n
for ρ < 1. If ρ ≥ 1, then a unique limiting distribution doesn't exist.
(b) From problem 1, the average time spent in the queue is
\frac{ρ}{µ - λ} = \frac{λ}{µ(µ - λ)} = \frac{1}{µ - λ} - \frac{1}{µ} = \frac{ρ^2}{λ(1 - ρ)}.
(c) Same as problem 2.
Problem 5. Patients visit a doctor in accordance with a Poisson process at the rate of 8 per hour,
and the time the doctor takes to examine any patient is exponential with mean 6 minutes. All arriving patients are attended by the doctor.
(a) Find the probability that the patient has to wait on arrival.
(b) Find the expected total time spent (including the service time) by any patient who visits the
doctor.
Solution. Let λ = 8/hour and µ = 1/(6 min) = 10/hour, be the arrival and service rate, and ρ = λ/µ.
(a) The probability that the patient has to wait is 1 − π0 = ρ = 4/5.
(b) The total time is the sum of the waiting time and the service time. We have proved that the expected value of the former is \frac{1}{µ-λ} - \frac{1}{µ}, and the latter is Exponential(µ) with mean \frac{1}{µ}. The total expected time is
\frac{1}{µ - λ} = \frac{1}{2} hour = 30 minutes.
Problem 6. Consider a multiplexer that collects traffic formed by messages arriving according to
exponentially distributed interarrival times. The multiplexer is formed by a buffer and a transmission
line. Assume that, the transmission time of a message is exponentially distributed with the mean
value 10 ms. From measurements on the state of the buffer, we know that the idle buffer probability
is 0.8.
(a) What is the underlying queueing model?
(b) What is the mean delay (waiting time) for the message?
Solution. (a) The queueing model is an M/M/1 model with service rate λ = 100Hz. The buffer is
idle if the system has 0 or 1 messages in it, that is π0 + π1 = 0.8. Therefore,
√
√
(1 − ρ) + (1 − ρ)ρ = 0.8 =⇒ ρ2 = 0.2 =⇒ ρ = 0.2 =⇒ µ = 100/ 0.2Hz
where µ is the arrival rate and ρ = λ/µ.
(b) The mean waiting time is
1
1
− = 3.618ms
µ−λ µ
.
Problem 7. Consider that two identical M/M/1 queuing systems with the same rates λ, µ are in
operation side by side (with separate queues) in a premises. Find the distribution of the total number,
N, in the two systems taken together in the long run.
Solution. Let N_1, N_2 be the numbers in the two systems, and note that N_1 and N_2 are independent. We know that N_i ∼ Geometric(1 - λ/µ). Therefore, the total number
N = N_1 + N_2 ∼ NB(2, 1 - λ/µ),
that is,
P(N = n) = (n + 1) \left(1 - \frac{λ}{µ}\right)^2 \left(\frac{λ}{µ}\right)^n.
Problem 8. Prove that mean time spent in an M/M/1 system having arrival rate λ and service rate
2µ is less than the mean time spent in an M/M/2 system with arrival rate λ and each service rate µ.
Solution. We find π_n for an M/M/c system first. The generator matrix satisfies
q_{i,i-1} = min(c, i)µ, q_{i,i+1} = λ, and q_{i,i} = -(q_{i,i-1} + q_{i,i+1}).
Therefore, we get the equations
0 = -λπ_0 + µπ_1,
0 = λπ_{i-1} - (λ + iµ)π_i + (i + 1)µπ_{i+1} for 1 ≤ i < c,
0 = λπ_{i-1} - (λ + cµ)π_i + cµπ_{i+1} for i ≥ c.
Let ρ = \frac{λ}{cµ}. By inducting on i, we see that π_i = cρπ_{i-1}/i for i ≤ c and π_i = ρπ_{i-1} for i > c. These combine to give us
π_i = \frac{(cρ)^i}{i!} π_0 for 0 ≤ i ≤ c, and π_i = \frac{c^c ρ^i}{c!} π_0 for i > c,
along with \sum_{i ≥ 0} π_i = 1.
The mean number of customers in the queue is given by
L_q = \sum_{i ≥ c} (i - c)π_i = \frac{(cρ)^c π_0}{c!} \sum_{i ≥ 0} iρ^i = \frac{c^c ρ^{c+1}}{c!(1 - ρ)^2} π_0.
By Little's formula, the mean time in the queue is T_q = L_q/λ. Adding the mean service time, we see that the mean total time in the system is
T = \frac{c^c ρ^{c+1}}{λ c!(1 - ρ)^2} π_0 + \frac{1}{µ},
where
π_0 = \left[\sum_{i=0}^{c-1} \frac{(cρ)^i}{i!} + \sum_{i ≥ c} \frac{c^c ρ^i}{c!}\right]^{-1} = \left[\sum_{i=0}^{c-1} \frac{(cρ)^i}{i!} + \frac{(cρ)^c}{c!(1 - ρ)}\right]^{-1}.
Now, the mean times spent in the two systems are
T_1 = \frac{ρ^2}{λ(1 - ρ)} + \frac{1}{2µ}, and T_2 = \frac{2ρ^3}{λ(1 - ρ^2)} + \frac{1}{µ},
where ρ = λ/(2µ).
We wish to prove that T_1 < T_2:
T_2 - T_1 = \frac{2ρ^3}{λ(1 - ρ^2)} + \frac{1}{µ} - \frac{ρ^2}{λ(1 - ρ)} - \frac{1}{2µ}
=⇒ λ(T_2 - T_1) = \frac{λ}{2µ} - \frac{ρ^2}{1 + ρ} = ρ - \frac{ρ^2}{1 + ρ} = \frac{ρ}{1 + ρ} > 0
=⇒ T_2 > T_1.
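Remark. A short numerical sketch of the comparison, using the general M/M/c expressions derived above; the rates λ = 3 and µ = 2 are arbitrary assumptions (any λ < 2µ works).

    import math

    def mmc_mean_time(lam, mu, c):
        # Mean time in system for an M/M/c queue with per-server rate mu.
        rho = lam / (c * mu)
        pi0 = 1.0 / (sum((c * rho)**i / math.factorial(i) for i in range(c))
                     + (c * rho)**c / (math.factorial(c) * (1 - rho)))
        Lq = c**c * rho**(c + 1) / (math.factorial(c) * (1 - rho)**2) * pi0
        return Lq / lam + 1 / mu          # Little's formula plus mean service time

    lam, mu = 3.0, 2.0                     # assumed illustrative rates
    T1 = mmc_mean_time(lam, 2 * mu, 1)     # single fast server, rate 2*mu
    T2 = mmc_mean_time(lam, mu, 2)         # two servers of rate mu
    print(T1, T2, T1 < T2)                 # T1 is smaller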
Problem 9. Write the backward Kolmogorov equations for the M/M/1/N (N > 1) queuing model.
Derive the steady state probability for the above queuing model.
Solution. The generator matrix for this model is
q_{i,i-1} = µ and q_{i,i+1} = λ
for 0 < i < N, q_{0,1} = λ, q_{N,N-1} = µ, and 0 for everything else other than q_{i,i}.
The backward Kolmogorov equation is P'(t) = QP(t), which gives
p'_{0,j}(t) = -λp_{0,j}(t) + λp_{1,j}(t),
p'_{i,j}(t) = µp_{i-1,j}(t) - (µ + λ)p_{i,j}(t) + λp_{i+1,j}(t) for 0 < i < N,
p'_{N,j}(t) = µp_{N-1,j}(t) - µp_{N,j}(t).
While I did write the backward Kolmogorov equations since the question asked for them, they don't help in finding the stationary distribution: substituting lim_{t→∞} p_{i,j}(t) = π_j makes each equation reduce to 0 = 0, which we already knew.
Using the forward Kolmogorov equations, we see that the stationary state satisfies the equations
0 = -λπ_0 + µπ_1,
0 = λπ_{i-1} - (λ + µ)π_i + µπ_{i+1} for 0 < i < N,
0 = λπ_{N-1} - µπ_N.
Let ρ = λ/µ. By induction on i, we get that π_{i+1} = ρπ_i for 0 ≤ i < N. Therefore, π_i = ρ^i π_0. Now,
\sum_{i=0}^{N} π_i = 1 =⇒ π_0 = \left[\sum_{i=0}^{N} ρ^i\right]^{-1} = \begin{cases} \frac{1-ρ}{1-ρ^{N+1}} & ρ ≠ 1 \\ \frac{1}{N+1} & ρ = 1 \end{cases}.
Therefore,
π_i = \begin{cases} \frac{ρ^i(1-ρ)}{1-ρ^{N+1}} & ρ ≠ 1 \\ \frac{1}{N+1} & ρ = 1 \end{cases}.
Problem 10. Ms. H. R. Cutt runs a one-person, unisex hair salon. She does not make appointments,
but runs the salon on a first-come, first-served basis. She finds that she is extremely busy on Saturday
mornings, so she is considering hiring a part-time assistant and even possibly moving to a larger
building. Having obtained a master’s degree in operations research (OR) prior to embarking upon her
career, she elects to analyze the situation carefully before making a decision. She thus keeps careful
records for a succession of Saturday mornings and finds that customers seem to arrive according to a
Poisson process with a mean arrival rate of 5/hr. Because of her excellent reputation, customers were
always willing to wait. The data further showed that customer processing time (aggregated female
and male) was exponentially distributed with an average of 10 min. Cutt is interested in calculating the following measures:
(a) What is the average number of customers in the shop?
(b) What is the average number of customers waiting for a haircut?
(c) What is the percentage of time an arrival can walk right in without having to wait at all?
(d) If the waiting room has only four seats at present, what is the probability that a customer will not be able to find a seat and will have to stand?
Solution. We can model it as an M/M/1 system with service rate µ = 6/hour and arrival rate λ = 5/hour. The steady state distribution of the number of people is given by
π_n = ρ^n(1 - ρ)
where ρ = λ/µ = 5/6.
(a) The average number of customers in the shop is
L_s = \sum_{n ≥ 0} nπ_n = \sum_{n ≥ 0} nρ^n(1 - ρ) = \frac{ρ}{1 - ρ} = 5.
(b) The average number of people waiting is
L_q = \sum_{n ≥ 1} (n - 1)π_n = \sum_{n ≥ 1} nπ_n - \sum_{n ≥ 1} π_n = L_s - (1 - π_0) = 5 - 5/6 = 25/6.
(c) This happens when the number of people in the shop is zero, that is, for π_0 = 1/6.
(d) This happens if there are more than 5 people in the shop, that is,
\sum_{i > 5} π_i = (1 - ρ) \sum_{i > 5} ρ^i = ρ^6 = (5/6)^6 ≈ 0.334.
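Remark. The four quantities can be reproduced with a few lines of Python (λ = 5/hr, µ = 6/hr as in the solution); a sketch:

    lam, mu = 5.0, 6.0
    rho = lam / mu

    Ls = rho / (1 - rho)                   # (a) mean number in the shop
    pi0 = 1 - rho
    Lq = Ls - (1 - pi0)                    # (b) mean number waiting
    print(Ls, Lq, pi0, rho**6)             # 5.0, 25/6, 1/6, (5/6)^6 ~ 0.335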
Problem 11. City Hospital’s eye clinic offers free vision tests every Wednesday evening. There are
three ophthalmologists on duty. A test takes, on the average, 20 min, and the actual time is found to
be approximately exponentially distributed around this average. Clients arrive according to a Poisson
process with a mean of 6/hr, and patients are taken on a first-come, first-served basis. The hospital
planners are interested in knowing:
(a) What is the average number of people waiting?
(b) What is the average amount of time a patient spends at the clinic?
Solution. It can be modelled as an M/M/3 system with service rate µ = 3/hour and an arrival rate
λ = 6/hour. Let ρ = λ/(3µ) = 2/3.
(a) As proved in problem 8, the mean number of people in the queue is
L_q = \frac{3^3 ρ^4}{3!(1 - ρ)^2} \times \left[1 + (3ρ) + \frac{(3ρ)^2}{2} + \frac{3^3 ρ^3}{3!(1 - ρ)}\right]^{-1} = \frac{8}{9}.
(b) The total time is the sum of the service time and the waiting time, so the mean total time T_s is T_q + 1/µ, where T_q is the time spent in the queue.
Using Little's formula, L_q = λT_q. Therefore,
T_s = T_q + \frac{1}{µ} = \frac{L_q}{λ} + \frac{1}{µ} = \frac{13}{27} hour ≈ 28.88 minutes.
Problem 12. Consider New Delhi International Airport. Suppose that it has three runways. Airplanes have been found to arrive at the rate of 20 per hour. It is estimated that each landing takes 3 minutes. Assume a Poisson process for arrivals and an exponential distribution for landing times. Without loss of generality, assume that the system is modeled as a birth and death process.
What is the steady state probability that there is no waiting time to land? What is the expected
number of airplanes waiting to land? Find the expected waiting time to land.
Solution. We'll use the results proved in problem 8.
It can be modelled as an M/M/3 system with service rate µ = 20/hour and an arrival rate λ = 20/hour. Let ρ = λ/(3µ) = 1/3.
There is no waiting time if there are at most 2 other planes in the system, that is, the probability is π_0 + π_1 + π_2, where
π_0 = \left[1 + (3ρ) + \frac{(3ρ)^2}{2} + \frac{(3ρ)^3}{3!(1 - ρ)}\right]^{-1} = \left[1 + 1 + \frac{1}{2} + \frac{1}{4}\right]^{-1} = \frac{4}{11}.
Now for i ≤ 3, π_i = (3ρ)^i π_0/i!. So the probability is
π_0 + π_1 + π_2 = \frac{10}{11}.
The expected number of planes in the queue is
L_q = \frac{3^3 ρ^4}{3!(1 - ρ)^2} π_0 = \frac{1}{22}.
Using Little's formula, the expected waiting time is
T_q = \frac{L_q}{λ} = \frac{1}{440} hour ≈ 8.18 seconds.
Problem 13. Consider a telephone switching system consisting of n trunks with an infinite caller
population. We assume that an arrival call is lost if all trunks are busy.
(a) Find the expression for πn the steady state probability that n trunks are busy.
(b) The above system must be designed with a number of trunks, n, that guarantees a blocking probability of at most 2%. The following data are available: The arrival stream is
Poisson with rate 6 per minute. Each call has a duration modeled by an exponentially distributed
variable with mean value of 3 minutes. Find the smallest possible value of n.
Solution. We model it as an M/M/n/n queue with service rate µ and arrival rate λ. Let ρ = λ/µ.
(a) We proved in Problem 14 of Tutorial 9 that
π_i = \frac{ρ^i/i!}{\sum_{j=0}^{n} ρ^j/j!}.
(b) The call is blocked if all n trunks are busy, that is, the probability is π_n. We are given ρ = \frac{6}{1/3} = 18. We have to find the smallest n such that π_n ≤ 0.02.
This comes out to be n = 26; I used a computer to find it (a sketch of the search follows).
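Remark. One way to do the search is the standard Erlang-B recursion B(0) = 1, B(n) = ρB(n-1)/(n + ρB(n-1)), which computes the same π_n; a minimal sketch with ρ = 18 as in the problem:

    rho = 18.0                                   # offered load in Erlangs

    def smallest_trunks(rho, target):
        # Erlang-B recursion: B(0) = 1, B(n) = rho*B(n-1) / (n + rho*B(n-1))
        B, n = 1.0, 0
        while B > target:
            n += 1
            B = rho * B / (n + rho * B)
        return n, B

    print(smallest_trunks(rho, 0.02))            # (26, ~0.017)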
Problem 14. A service center consists of two servers, each working at an exponential rate of two
services per hour. Customers arrive at a Poisson rate of three per hour. Assuming a system capacity of at most three customers:
(a) What fraction of potential customers enter the system?
(b) What would the value of part (a) be if there were only a single server whose rate was twice as fast (that is, µ = 4)?
Solution. (a) We model it as an M/M/2/3 queue with service rate µ = 2/hour and arrival rate λ = 3/hour. The state transition diagram is the birth–death chain on {0, 1, 2, 3} with arrival rate λ throughout and service rates µ, 2µ, 2µ out of states 1, 2, 3.
We find the steady state distribution π using the relations
-λπ_0 + µπ_1 = 0,
λπ_0 - (λ + µ)π_1 + 2µπ_2 = 0,
λπ_1 - (λ + 2µ)π_2 + 2µπ_3 = 0,
λπ_2 - 2µπ_3 = 0.
We get the relations
π_1 = ρπ_0, π_2 = ρπ_1/2, and π_3 = ρπ_2/2,
where ρ = λ/µ. These imply that
π_1 = ρπ_0, π_2 = ρ^2 π_0/2, and π_3 = ρ^3 π_0/4.
Since \sum π_i = 1, we see that
π_0 = \left[1 + ρ + ρ^2/2 + ρ^3/4\right]^{-1}.
A customer enters the system if the system is not full, that is, the probability is
1 - π_3 = 1 - \frac{ρ^3/4}{1 + ρ + ρ^2/2 + ρ^3/4} = \frac{116}{143} ≈ 0.8111.
(b) We model it as an M/M/1/3 queue with service rate µ = 4/hour and arrival rate λ = 3/hour. The state transition diagram is the birth–death chain on {0, 1, 2, 3} with arrival rate λ and service rate µ between adjacent states.
We find the steady state distribution π using the relations
-λπ_0 + µπ_1 = 0,
λπ_0 - (λ + µ)π_1 + µπ_2 = 0,
λπ_1 - (λ + µ)π_2 + µπ_3 = 0,
λπ_2 - µπ_3 = 0.
We get the relations
π_1 = ρπ_0, π_2 = ρπ_1, and π_3 = ρπ_2,
where ρ = λ/µ. These imply that
π_1 = ρπ_0, π_2 = ρ^2 π_0, and π_3 = ρ^3 π_0.
Since \sum π_i = 1, we see that
π_0 = \left[1 + ρ + ρ^2 + ρ^3\right]^{-1}.
A customer enters the system if the system is not full, that is, the probability is
1 - π_3 = 1 - \frac{ρ^3}{1 + ρ + ρ^2 + ρ^3} = \frac{148}{175} ≈ 0.8457.
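Remark. Both acceptance probabilities follow from a generic finite birth–death solver; a sketch using exact fractions and the rates above:

    from fractions import Fraction

    def stationary(birth, death):
        # birth[i]: rate from state i to i+1; death[i]: rate from state i+1 down to i
        pi = [Fraction(1)]
        for i in range(len(birth)):
            pi.append(pi[-1] * Fraction(birth[i]) / Fraction(death[i]))
        total = sum(pi)
        return [p / total for p in pi]

    # (a) M/M/2/3: lambda = 3, service rates mu, 2mu, 2mu with mu = 2
    pi_a = stationary([3, 3, 3], [2, 4, 4])
    print(1 - pi_a[3])            # 116/143

    # (b) M/M/1/3: lambda = 3, mu = 4
    pi_b = stationary([3, 3, 3], [4, 4, 4])
    print(1 - pi_b[3])            # 148/175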
Problem 15. Consider an automobile emission inspection station with three inspection stalls, each
with room for only one car. It is reasonable to assume that cars wait in such a way that when a
stall becomes vacant, the car at the head of the line pulls up to it. The station can accommodate
at most four cars waiting (seven in the station) at one time. The arrival pattern is Poisson with a
mean of one car every minute during the peak periods. The service time is exponential with mean
6 min. The chief inspector wishes to know the average number in the system during peak periods,
the average time spent (including service), and the expected number per hour that cannot enter the
station because of full capacity.
Solution. We model it as an M/M/3/7 queue with service rate µ = 1/6 min^{-1} and arrival rate λ = 1 min^{-1}. From the steady-state equations we get the relations
π_1 = ρπ_0, π_2 = ρπ_1/2, and π_i = ρπ_{i-1}/3 for 3 ≤ i ≤ 7,
where ρ = λ/µ = 6. These in turn imply that
π_1 = ρπ_0, and π_i = \frac{ρ^i π_0}{2 \times 3^{i-2}} for 2 ≤ i ≤ 7.
Since \sum π_i = 1,
π_0 = \left[1 + ρ + \sum_{i=2}^{7} \frac{ρ^i}{2 \times 3^{i-2}}\right]^{-1} = \frac{1}{1141}.
The average number in the system is
L_s = \sum_{i=0}^{7} iπ_i = \left[ρ + \frac{1}{2}\sum_{i=2}^{7} \frac{iρ^i}{3^{i-2}}\right] π_0 = \left[6 + \frac{9}{2}\sum_{i=2}^{7} i2^i\right] π_0 = \frac{6918}{1141} ≈ 6.0631.
The effective arrival rate is λ_eff = λ(1 - π_7), that is,
λ_eff = 1 - \frac{576}{1141} = \frac{565}{1141} ≈ 0.49518 min^{-1}.
Using Little's formula, the mean total time spent is given by
T_s = \frac{L_s}{λ_eff} = \frac{6918}{565} min ≈ 12.2442 min.
The rate at which cars cannot enter due to full capacity is λπ_7. Thus, the expected number of cars turned away in an hour is
60 min \times 1 min^{-1} \times \frac{576}{1141} ≈ 30.289.
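Remark. The arithmetic above is easy to reproduce; a sketch using the same state probabilities (λ = 1/min, µ = 1/6 per min, so ρ = 6):

    from fractions import Fraction

    rho = Fraction(6)                           # lambda/mu = 6
    # unnormalised weights pi_i / pi_0 for the M/M/3/7 chain
    w = [Fraction(1), rho]
    for i in range(2, 8):
        w.append(w[-1] * rho / min(i, 3))
    total = sum(w)                              # 1141
    pi = [x / total for x in w]

    Ls = sum(i * pi[i] for i in range(8))       # 6918/1141
    lam_eff = 1 - pi[7]                         # effective arrival rate, lambda = 1/min
    print(float(Ls))                            # ~6.0631
    print(float(Ls / lam_eff))                  # mean time in system ~12.24 min
    print(float(60 * pi[7]))                    # cars turned away per hour ~30.29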
Problem 16. In a parking lot with N spaces the incoming traffic is according to a Poisson process with rate λ, but only as long as empty spaces are available. The occupancy times have an exponential
distribution with mean 1/µ. Let X(t) be the number of occupied parking spaces at time t.
(a) Determine rate matrix Λ and the forward Kolmogorov equations for the Markov process X(t).
(b) Determine the limiting equilibrium probability distribution of the process.
Solution. (a) We model it as an M/M/N/N queue with service rate µ and arrival rate λ. The rate matrix is
Λ = Q = \begin{pmatrix}
-λ & λ & 0 & 0 & \cdots & 0 & 0 & 0 \\
µ & -λ - µ & λ & 0 & \cdots & 0 & 0 & 0 \\
0 & 2µ & -λ - 2µ & λ & \cdots & 0 & 0 & 0 \\
0 & 0 & 3µ & -λ - 3µ & \cdots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \cdots & -λ - (N - 2)µ & λ & 0 \\
0 & 0 & 0 & 0 & \cdots & (N - 1)µ & -λ - (N - 1)µ & λ \\
0 & 0 & 0 & 0 & \cdots & 0 & Nµ & -Nµ
\end{pmatrix}.
The forward Kolmogorov equations are P'(t) = P(t)Λ, that is,
p'_{i,0}(t) = -λp_{i,0}(t) + µp_{i,1}(t),
p'_{i,j}(t) = λp_{i,j-1}(t) - (λ + jµ)p_{i,j}(t) + (j + 1)µp_{i,j+1}(t) for 0 < j < N,
p'_{i,N}(t) = λp_{i,N-1}(t) - Nµp_{i,N}(t).
(b) We computed it in Problem 13.
Problem 17. A toll bridge with 10 booths at the entrance can be modeled as a 10 server Markovian
queueing system with infinite capacity. Assume that the vehicle arrival follows a Poisson process
with parameter 8 per minute and the service times are independent exponentially distributed random
variables with mean 1 minute.
(a) Draw the state transition diagram for a birth and death process for the system.
(b) Find the limiting state probabilities.
(c) If 2 more booths are installed, i.e., 12 booths in total, what is the maximum arrival rate such that
the limiting state probabilities exist?
Solution. We model it as an M/M/10 queue with arrival rate λ = 8 min^{-1} and service rate µ = 1 min^{-1}.
(a) The state transition diagram is the birth–death chain on {0, 1, 2, . . .} with arrival rate λ in every state and service rate min(i, 10)µ out of state i.
(b) We computed the limiting probabilities for a general M/M/c queue in problem 8.
(c) The limiting probabilities exist in an M/M/c queue as long as ρ = \frac{λ}{cµ} < 1. That is, the maximum arrival rate λ_max satisfies
λ_max < 12µ = 12 min^{-1}.
Problem 18. Consider the central library of IIT Delhi where there are 4 terminals. These terminals
can be used to obtain information about the available literature in the library. If all terminals are
occupied when someone wants information, then that person will not wait but leave immediately (to
look for the required information somewhere else). A user session on a terminal takes an exponentially distributed time with average 2.5 minutes. Since the number of potential users is large, it is reasonable
to assume that users arrive according to a Poisson process. On average 25 users arrive per hour.
(a) Determine the steady state probability that i terminals are occupied, i = 0, 1, 2, 3, 4.
(b) What is the mean time spent in the system by any person?
(c) How many terminals are required such that at most 5% of the arriving users find all terminals
occupied?
Solution. Model it as an M/M/4/4 queue with service rate µ = 24 hour^{-1} and arrival rate λ = 25 hour^{-1}.
(a) We computed it for a general M/M/n/n queue in Problem 13.
(b) Every person in the system is currently in a user session (there is no waiting room), therefore the mean time spent in the system is 2.5 minutes.
(c) If there are n terminals, we model it as an M/M/n/n queue. A user will find all terminals occupied if the system is in state n. Therefore, we have to find the smallest n such that π_n ≤ 0.05. We know that
π_n = \frac{ρ^n/n!}{\sum_{i=0}^{n} ρ^i/i!},
where ρ = λ/µ = 25/24.
This comes out to be true for n = 4 (a sketch of the computation follows).
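Remark. The search in part (c) is a few lines of Python with the loss formula above (ρ = 25/24); a sketch:

    import math

    rho = 25 / 24

    def p_all_busy(n, rho):
        # probability that all n terminals are occupied (Erlang loss formula)
        return (rho**n / math.factorial(n)) / sum(rho**i / math.factorial(i) for i in range(n + 1))

    for n in range(1, 7):
        print(n, round(p_all_busy(n, rho), 4))
    # n = 4 is the first value with probability <= 0.05 (about 0.017)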