Unit 4

1. Random Process
2. Markov Chain
3. Poisson Process
Chapter 1
Random Process
1.1 Classification of random processes
Consider a random experiment with outcomes s ∈ S, where S is the sample space. If to every outcome s ∈ S we assign a real-valued time function X(t, s), then the collection of such time functions is called a random process or stochastic process.
Definition 1.1.1 A random process is a collection of random variables {X(t, s)}
that are functions of a real variable, namely time t where s ∈ S and t ∈ T ( S :
sample space and T : parameter set or index set)
Types of random process
Definition 1.1.2 (Discrete random sequence) If both T and S are discrete,
the random process is called a discrete random sequence.
Example 1.1.3 If Xn represents the outcome of the nth toss of a fair die,
then {Xn , n ≥ 1} is a discrete random sequence, since T = {1, 2, 3, . . .} and
S = {1, 2, 3, 4, 5, 6} . If X(t) = the number of defective items found at trial
t = 1, 2, 3, . . . then {X(t)} is a discrete random sequence.
Definition 1.1.4 (Discrete random process) If T is continuous and S is
discrete, the random process is called a discrete random process.
Example 1.1.5 If X(t) represents the number of telephone calls received in the
interval (0, t), then {X(t)} is a discrete random process, since S = {0, 1, 2, . . .} is discrete and T is continuous.
Also if X(t) = the number of defective items found at time t ≥ 0 , then {X(t)}
is a discrete random process.
Definition 1.1.6 (Continuous random sequence) If T is discrete and S is
continuous, the random process is called a continuous random sequence.
Example 1.1.7 If X(t) = amount of rainfall measured at trial, t = 1, 2, 3, . . . ,
then X(t) is a continuous random sequence.
Example 1.1.8 If Xn represents the temperature at the end of the nth hour of
a day, then Xn , 1 ≤ n ≤ 24 , is a continuous random sequence.
Definition 1.1.9 (Continuous random process) If both T and S are continuous, the random process is called a continuous random process.
Example 1.1.10 If X(t) represents the maximum temperature at a place in the
interval (0, t) , {X(t)} is a continuous random process.
Example 1.1.11 If X(t) = amount of rainfall measured at time t , t ≥ 0 , then
{X(t)} is a continuous random process.
Statistical Averages or Ensemble Averages
Let {X(t)} be a random process.
1. The mean of {X(t)} is defined by E[X(t)] = µX(t) = ∫_{−∞}^{∞} x fX(x, t) dx,
where X(t) is treated as a random variable for a fixed value of t.
2. The auto-correlation (A.C.F.) of {X(t)} is defined by
RXX(t1, t2) = E[X(t1)X(t2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 fX(x1, x2; t1, t2) dx1 dx2
3. The auto co-variance of {X(t)} is defined by
CXX (t1 , t2 ) = E[{X(t1 )−µX (t1 )}{X(t2 )−µX (t2 )}] = RXX (t1 , t2 )−µX (t1 )µX (t2 )
where µX (t1 ) = E[X(t1 )] and µX (t2 ) = E[X(t2 )]
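As a numerical illustration (not from the text), the ensemble averages above can be estimated by averaging over many simulated realizations. The sketch below uses the process X(t) = A sin(ωt + φ) with φ uniform on (0, 2π), which appears later in Example 1.4.2; the values of A, ω, t1, t2 are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ensemble of sample functions X(t) = A*sin(w*t + phi), phi ~ U(0, 2*pi).
A, w = 2.0, 1.5
n_paths = 100_000
phi = rng.uniform(0.0, 2.0 * np.pi, size=n_paths)

def X(t):
    return A * np.sin(w * t + phi)        # one value per realization

t1, t2 = 0.7, 1.9
mean_t1 = X(t1).mean()                    # estimate of mu_X(t1) = E[X(t1)]
acf = np.mean(X(t1) * X(t2))              # estimate of R_XX(t1, t2)
autocov = acf - mean_t1 * X(t2).mean()    # estimate of C_XX(t1, t2)

print(mean_t1)     # close to 0
print(acf)         # close to (A**2 / 2) * cos(w * (t1 - t2))
print(autocov)
```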
Definition 1.1.12 (Stationary process) If certain probability distributions or averages do not depend on t, then the random process {X(t)} is called stationary.
1.2 Types of stationarity
Definition 1.2.1 (Strict-Sense Stationary (SSS)) A random process is called strict-sense stationary if all its finite-dimensional distributions are invariant under translation of the time parameter, i.e.,
fX(x1, x2, . . . , xn; t1, t2, . . . , tn) = fX(x1, x2, . . . , xn; t1 + ∆, t2 + ∆, . . . , tn + ∆)
for any n, any t1, t2, . . . , tn and any real number ∆.
First order stationary
Definition 1.2.2 A random process {X(t)} is said to be a first-order stationary process if f(x, t1 + c) = f(x, t1) for any c. That is, the first-order density of a stationary process {X(t)} is independent of time t. Thus E[X(t)] = µ, a constant, in a first-order stationary random process.
Second order stationary
Definition 1.2.3 A random process {X(t)} is said to be a second-order stationary process if f(x1, x2; t1, t2) = f(x1, x2; t1 + c, t2 + c) for any c. That is, the second-order density must be invariant under translation of time.
Wide-Sense Stationary (or) Weakly Stationary (or) Co-variance Stationary
Definition 1.2.4 A random process {X(t)} is called Wide-Sense Stationary (WSS) if its mean is a constant and the auto-correlation depends only on the time difference. That is, E[X(t)] = a constant and E[X(t)X(t + τ)] = RXX(τ) depends only on τ. It follows that the auto-covariance of a WSS process also depends only on the time difference τ. Thus CXX(t, t + τ) = RXX(τ) − µX².
Definition 1.2.5 (Jointly WSS processes) Two processes {X(t)} and {Y(t)} are called jointly WSS if each is WSS and their cross-correlation depends only on the time difference τ. That is, RXY(t, t + τ) = E[X(t)Y(t + τ)] = RXY(τ). It follows that the cross-covariance of jointly WSS processes {X(t)} and {Y(t)} also depends only on the time difference τ. Thus CXY(t, t + τ) = RXY(τ) − µX µY.
Remark 1.2.6 For a two dimensional random process {X(t), Y (t) ; t ≥ 0} , we
define the following
1. Mean = E[X(t)] = µX(t)
2. Auto-correlation = E[X(t1)X(t2)]
3. Cross-correlation = E[X(t1)Y(t2)]
4. Auto-covariance = E[{X(t1) − µX(t1)}{X(t2) − µX(t2)}]
5. Cross-covariance = E[{X(t1) − µX(t1)}{Y(t2) − µY(t2)}]
Definition 1.2.7 A random process that is not stationary in any sense is called
an evolutionary random process.
1.3 Ergodic random process
Time averages of a random process
1. The time average of a sample function X(t) of a random process {X(t)} is defined as
X̄_T = lim_{T→∞} (1/2T) ∫_{−T}^{T} X(t) dt.
2. The time-averaged auto-correlation of the random process {X(t)} is defined by
Z_T = R̄XX(t, t + τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} X(t)X(t + τ) dt.
Definition 1.3.1 (Ergodic random process) A random process {X(t)} is said
to be ergodic random process if its ensemble averages are equal to appropriate time
averages.
Definition 1.3.2 (Mean-Ergodic process) A random process {X(t)} is said to be mean ergodic if the ensemble mean is equal to the time-average mean. If E[X(t)] = µ and X̄_T = lim_{T→∞} (1/2T) ∫_{−T}^{T} X(t) dt, then µ = X̄_T with probability one.
Definition 1.3.3 (Correlation Ergodic) A random process {X(t)} is said to
be correlation ergodic if ensembled A.C.F is equal to time averaged A.C.F.
Definition 1.3.4 (Cross-Correlation Function) For a two-dimensional random process {X(t), Y(t) : t ≥ 0}, the cross-correlation function RXY(τ) is defined by RXY(t, t + τ) = E[X(t)Y(t + τ)].
Properties of the cross-correlation function:
(i) RYX(τ) = RXY(−τ).
(ii) |RXY(τ)| ≤ √(RXX(0) RYY(0)).
(iii) |RXY(τ)| ≤ (1/2){RXX(0) + RYY(0)}.
(iv) The processes {X(t)} and {Y(t)} are said to be orthogonal if RXY(τ) = 0.
(v) If the processes {X(t)} and {Y(t)} are independent, then RXY(τ) = µX µY.
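The time averages above can also be illustrated numerically. The following sketch (not from the text) approximates the limit T → ∞ by one long sample path of X(t) = A cos(ωt + θ); A, ω, τ and the discretisation step are arbitrary choices, and the results match the ensemble averages derived later for this process (Example 1.5.2), illustrating ergodicity.

```python
import numpy as np

rng = np.random.default_rng(1)

# One sample path of X(t) = A*cos(w*t + theta), theta ~ U(0, 2*pi) fixed for the path.
A, w = 2.0, 1.5
theta = rng.uniform(0.0, 2.0 * np.pi)

T, dt = 5_000.0, 0.01
t = np.arange(-T, T, dt)
x = A * np.cos(w * t + theta)

time_mean = x.mean()                      # ~ (1/2T) * integral of X(t) dt
tau = 0.8
lag = int(round(tau / dt))
time_acf = np.mean(x[:-lag] * x[lag:])    # ~ (1/2T) * integral of X(t) X(t+tau) dt

print(time_mean)   # near 0, the ensemble mean
print(time_acf)    # near (A**2 / 2) * cos(w * tau), the ensemble A.C.F.
```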
1.4 Examples of Stationary Processes
Example 1.4.1 Examine whether the Poisson process {X(t)} given by the probability law
P{X(t) = r} = e^{−λt}(λt)^r / r!,  r = 0, 1, 2, . . .
is covariance stationary.
Solution: The Poisson process has the probability distribution of a Poisson distribution with parameter λt. Then E[X(t)] = λt and E[X²(t)] = λ²t² + λt. Both are functions of t. Since E[X(t)] is not constant, the Poisson process is not covariance stationary.
Example 1.4.2 Show that the random process X(t) = A sin(ωt + φ), where A and ω are constants and φ is a random variable uniformly distributed in (0, 2π), is first-order stationary. Also find the auto-correlation function of the process.
Solution: Given X(t) = A sin(ωt + φ).
Claim: E[X(t)] is constant.
E[X(t)] = ∫_{−∞}^{∞} X(t) f(φ) dφ        (1)
Since φ is uniformly distributed in (0, 2π),
f(φ) = 1/2π for 0 < φ < 2π, and 0 otherwise.        (2)
Substituting (2) in (1), we have
E[X(t)] = ∫_0^{2π} X(t) (1/2π) dφ        (3)
= (A/2π) ∫_0^{2π} sin(ωt + φ) dφ
= (A/2π) [−cos(ωt + φ)]_0^{2π}
= (A/2π) [−cos(ωt + 2π) + cos(ωt)]
= (A/2π) [−cos(ωt) + cos(ωt)] = 0, a constant.
Hence {X(t)} is a first-order stationary process.
Auto-correlation function: By definition,
RXX(t, t + τ) = E[X(t)X(t + τ)]
= E[A sin(ωt + φ) · A sin(ω(t + τ) + φ)]
= A² E[sin(ωt + φ) sin(ωt + ωτ + φ)]        (4)
= (A²/2) E[cos(ωt + φ − (ωt + ωτ + φ)) − cos(2ωt + ωτ + 2φ)]
= (A²/2) E[cos(−ωτ) − cos(2ωt + ωτ + 2φ)]
= (A²/2) E[cos ωτ] − (A²/2) E[cos(2ωt + ωτ + 2φ)]        (5)
Applying the density (2) to the second term of (5), we get
RXX(t, t + τ) = (A²/2) cos ωτ − (A²/2) ∫_0^{2π} cos(2ωt + ωτ + 2φ) (1/2π) dφ
= (A²/2) cos ωτ − (A²/8π) [sin(2ωt + ωτ + 2φ)]_0^{2π}
= (A²/2) cos ωτ − (A²/8π) [sin(2ωt + ωτ) − sin(2ωt + ωτ)]
= (A²/2) cos ωτ.
Hence the auto-correlation function is RXX(τ) = (A²/2) cos ωτ.
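A quick Monte Carlo check of this result (a sketch, not from the text; the values of A, ω, t and τ are arbitrary) can be done by averaging over many realizations of φ:

```python
import numpy as np

rng = np.random.default_rng(2)

A, w = 3.0, 2.0
t, tau = 1.3, 0.6
phi = rng.uniform(0.0, 2.0 * np.pi, size=500_000)

x_t = A * np.sin(w * t + phi)
x_t_tau = A * np.sin(w * (t + tau) + phi)

print(x_t.mean())                      # ~ 0, the first-order mean
print(np.mean(x_t * x_t_tau))          # ~ (A**2 / 2) * cos(w * tau)
print((A**2 / 2) * np.cos(w * tau))    # theoretical A.C.F. value
```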
Example 1.4.3 Given a random variable Y with characteristic function φ(ω) =
E[eiωY ] = E[cos ωY +i sinωY ] and a random process defined by X(t) = cos(λt+
Y ) , show that the random process {X(t)} is stationary in the wide sense if
φ(1) = φ(2) = 0 .
Solution:
E[X(t)] = E[cos(λt + Y)]
= E[cos λt cos Y − sin λt sin Y]
= cos λt E[cos Y] − sin λt E[sin Y]        (1)
Since φ(1) = 0, E[cos Y + i sin Y] = 0.
Therefore E[cos Y] = E[sin Y] = 0        (2)
Using (2) in (1), we get E[X(t)] = 0        (3)
E[X(t1)X(t2)] = E[cos(λt1 + Y) cos(λt2 + Y)]
= E[(cos λt1 cos Y − sin λt1 sin Y)(cos λt2 cos Y − sin λt2 sin Y)]
= E[cos λt1 cos λt2 cos²Y + sin λt1 sin λt2 sin²Y − (cos λt1 sin λt2 + sin λt1 cos λt2) cos Y sin Y]
= cos λt1 cos λt2 E[cos²Y] + sin λt1 sin λt2 E[sin²Y] − sin λ(t1 + t2) E[cos Y sin Y]
= cos λt1 cos λt2 E[(1 + cos 2Y)/2] + sin λt1 sin λt2 E[(1 − cos 2Y)/2] − (1/2) sin λ(t1 + t2) E[sin 2Y]        (4)
Since φ(2) = 0, E[cos 2Y + i sin 2Y] = 0 and so E[cos 2Y] = 0 = E[sin 2Y]        (5)
Using (5) in (4), we get
E[X(t1)X(t2)] = R(t1, t2) = (1/2) cos λt1 cos λt2 + (1/2) sin λt1 sin λt2
= (1/2) cos λ(t1 − t2).        (6)
From (3) and (6), the mean is a constant and the A.C.F. is a function of τ = t1 − t2 only.
Hence {X(t)} is a WSS process.
Example 1.4.4 Two random processes {X(t)} and {Y(t)} are defined by X(t) = A cos ωt + B sin ωt and Y(t) = B cos ωt − A sin ωt. Show that {X(t)} and {Y(t)} are jointly WSS if A and B are uncorrelated random variables with zero means and the same variances and ω is a constant.
Solution: Given E(A) = E(B) = 0
V ar(A) = V ar(B) ⇒ E(A2 ) = E(B 2 ) = k(say)
Since A and B are uncorrelated, E(AB) = 0
Let us prove that X(t) and Y (t) are individually WSS processes.
E[X(t)] = E[A cos ωt + B sin ωt] = cos ωt E(A) + sin ωt E(B) = 0 , a constant
R(t1 , t2 ) = E[X(t1 ) X(t2 )]
= E[(A cos ωt1 + B sin ωt1 )(A cos ωt2 + B sin ωt2 )]
= E[A2 cos ωt1 cos ωt2 +AB cos ωt1 sin ωt2 +AB sin ωt1 cos ωt2 +B 2 sin ωt1 sin ωt2 ]
= E(A2 ) cos ωt1 cos ωt2 + E(B 2 ) sin ωt1 sin ωt2
= k[cos ωt1 cos ωt2 + sin ωt1 sin ωt2 ] , since E(A2 ) = E(B 2 ) = k and
E(AB) = 0
R(t1 , t2 ) = kcosω(t1 − t2 ) ,a function of τ = t1 − t2
Hence {X(t)} is a WSS process.
Similarly, E[Y(t)] = 0, a constant, and RYY(t1, t2) = E[(B cos ωt1 − A sin ωt1)(B cos ωt2 − A sin ωt2)] = k cos ω(t1 − t2), so {Y(t)} is also a WSS process.
Now consider the cross-correlation:
RXY(t1, t2) = E[X(t1) Y(t2)]
= E[(A cos ωt1 + B sin ωt1 )(B cos ωt2 − A sin ωt2 )]
= E[AB cos ωt1 cos ωt2 −A2 cos ωt1 sin ωt2 +B 2 sin ωt1 cos ωt2 −AB sin ωt1 sin ωt2 ]
= E[B 2 sin ωt1 cos ωt2 − A2 cos ωt1 sin ωt2 ]
= k sin ω(t1 − t2 ) .
Therefore RXY (t1 , t2 ) is a function of τ = t1 − t2 and so {X(t)} and {Y (t)}
are jointly WSS process.
Example 1.4.5 If {X(t)} is a WSS process with auto correlation R(τ ) = Ae−α|τ | ,
determine the second order moment of the RV X(8) − X(5) .
Solution: The second moment of X(8) − X(5) is given by
E[{X(8) − X(5)}²] = E[X²(8)] + E[X²(5)] − 2E[X(8)X(5)]        (1)
Given R(τ) = Ae^{−α|τ|}. Then R(t1, t2) = Ae^{−α|t1−t2|} and
E[X²(t)] = R(t, t) = A, so
E[X²(8)] = E[X²(5)] = A        (2)
E[X(8)X(5)] = R(8, 5) = Ae^{−3α}        (3)
Using (2) and (3) in (1), we get E[{X(8) − X(5)}²] = 2A − 2Ae^{−3α} = 2A(1 − e^{−3α}).
Example 1.4.6 Show that the random process X(t) = A cos(ωt + θ) is a WSS process if A and ω are constants and θ is a uniformly distributed random variable in (0, 2π).
Proof. The pdf of the uniform distribution is f(θ) = 1/2π, 0 ≤ θ ≤ 2π.
First we show that the mean is a constant.
E[X(t)] = ∫_0^{2π} X(t) (1/2π) dθ
= (1/2π) ∫_0^{2π} A cos(ωt + θ) dθ
= (A/2π) [sin(ωt + θ)]_0^{2π}
= (A/2π) [sin ωt − sin ωt]
= 0, a constant.
Therefore the mean is a constant.
Now E[X(t)X(t + τ)] = RXX(t, t + τ) = E[A cos(ωt + θ) · A cos(ωt + ωτ + θ)]
= (1/2) E[2A² cos(ωt + θ) cos(ωt + ωτ + θ)]
= (A²/2) E[cos ωτ + cos(2ωt + ωτ + 2θ)]
= (A²/2) cos ωτ + (A²/2) E[cos(2ωt + ωτ + 2θ)]
= (A²/2) cos ωτ + (A²/4π) ∫_0^{2π} cos(2ωt + ωτ + 2θ) dθ
= (A²/2) cos ωτ + (A²/4π) [sin(2ωt + ωτ + 2θ)/2]_0^{2π}
= (A²/2) cos ωτ + (A²/8π)(0)
= (A²/2) cos ωτ, a function of τ only.
Therefore the mean is a constant and the auto-correlation function depends only on τ, and so X(t) is a WSS process.
Example 1.4.7 Consider a random process {X(t)} defined by X(t) = Y cos(ωt + θ), where Y and θ are independent random variables uniformly distributed over (−A, A) and (−π, π) respectively.
a) Find E[X(t)].
b) Find the auto-correlation function RXX(t, t + τ) of X(t).
c) Is the process X(t) WSS?
Solution
(a) The pdf of Y is f(y) = 1/2A, −A < y < A, and the pdf of θ is f(θ) = 1/2π, −π < θ < π.
E[X(t)] = E[Y cos(ωt + θ)] = E[Y] E[cos(ωt + θ)], by independence.
Now E[Y] = ∫_{−A}^{A} y (1/2A) dy = (1/2A)[y²/2]_{−A}^{A} = (1/4A)(A² − A²) = 0.
Therefore E[X(t)] = 0, a constant.
(b) RXX(t, t + τ) = E[X(t)X(t + τ)] = E[Y cos(ωt + θ) · Y cos(ωt + ωτ + θ)]
= E[Y²] E[cos(ωt + θ) cos(ωt + ωτ + θ)]
Since Var(Y) = σ² = E[Y²] − (E[Y])², we have E[Y²] = σ².
Therefore RXX(t, t + τ) = (σ²/2) E[2 cos(ωt + θ) cos(ωt + ωτ + θ)]
= (σ²/2) E[cos ωτ + cos(2ωt + ωτ + 2θ)]
= (σ²/2) cos ωτ + (σ²/4π) ∫_{−π}^{π} cos(2ωt + ωτ + 2θ) dθ
= (σ²/2) cos ωτ + (σ²/8π) [sin(2ωt + ωτ + 2θ)]_{−π}^{π}
= (σ²/2) cos ωτ + (σ²/8π)(0) = (σ²/2) cos ωτ.
Hence RXX(τ) = (σ²/2) cos ωτ, a function of τ only.
(c) Yes, X(t) is a WSS process, because the mean is constant and RXX(τ) is a function of τ only.
Example 1.4.8 The probability distribution of the process {X(t)} is given by
P(X(t) = n) = (at)^{n−1}/(1 + at)^{n+1} for n = 1, 2, 3, . . ., and P(X(t) = 0) = at/(1 + at).
Show that it is not stationary.
Solution
E[X(t)] = Σ n P(n) = 0 · at/(1 + at) + 1 · 1/(1 + at)² + 2 · at/(1 + at)³ + 3 · (at)²/(1 + at)⁴ + . . .
= [1/(1 + at)²] [1 + 2u + 3u² + . . .], where u = at/(1 + at)
= [1/(1 + at)²] (1 − u)^{−2}
= [1/(1 + at)²] [1 − at/(1 + at)]^{−2}
= [1/(1 + at)²] [(1 + at − at)/(1 + at)]^{−2}
= [1/(1 + at)²] (1 + at)² = 1.
Therefore E[X(t)] = 1, a constant.
E[X²(t)] = Σ n² P(n) = Σ_{n=1}^{∞} {n(n + 1) − n} (at)^{n−1}/(1 + at)^{n+1}
= [1/(1 + at)²] [Σ_{n=1}^{∞} n(n + 1) (at/(1 + at))^{n−1} − Σ_{n=1}^{∞} n (at/(1 + at))^{n−1}]
= [1/(1 + at)²] [2 (1 − at/(1 + at))^{−3} − (1 − at/(1 + at))^{−2}],
since (1 − x)^{−3} = Σ_{n=1}^{∞} [n(n + 1)/2] x^{n−1} and (1 − x)^{−2} = Σ_{n=1}^{∞} n x^{n−1}
= [1/(1 + at)²] [2(1 + at)³ − (1 + at)²]
= 2(1 + at) − 1 = 1 + 2at.
Therefore Var(X(t)) = E[X²(t)] − (E[X(t)])² = 1 + 2at − 1 = 2at, a function of t, and so {X(t)} is not stationary.
Example 1.4.9 A random process {X(t)} has four sample functions X(t, s1) = cos t, X(t, s2) = − cos t, X(t, s3) = sin t, X(t, s4) = − sin t, all of which are equally likely. Show that it is a WSS process.
Solution
E[X(t)] = (1/4) Σ_{i=1}^{4} X(t, si) = (1/4)[cos t − cos t + sin t − sin t] = 0
A.C.F. RXX(t1, t2) = E[X(t1)X(t2)] = (1/4) Σ_{i=1}^{4} X(t1, si) X(t2, si)
= (1/4)[cos t1 cos t2 + cos t1 cos t2 + sin t1 sin t2 + sin t1 sin t2]
= (1/2)[cos t1 cos t2 + sin t1 sin t2] = (1/2) cos(t2 − t1) = (cos τ)/2, where τ = t2 − t1.
Since E[X(t)] is constant and RXX(t1, t2) is a function of t2 − t1 only, the process {X(t)} is WSS.
Example 1.4.10 Consider a random process X(t) = P + Qt, where P and Q are independent random variables with E(P) = p, E(Q) = q, Var(P) = σ1² and Var(Q) = σ2². Find E[X(t)], R(t1, t2) and C(t1, t2). Is {X(t)} stationary?
Solution: Given the random process X(t) = P + Qt. Then
E[X(t)] = E(P) + t E(Q) = p + qt
A.C.F. R(t1, t2) = E[X(t1)X(t2)]
= E[(P + Qt1)(P + Qt2)]
= E[P² + PQ(t1 + t2) + Q² t1 t2]
= E[P²] + E[PQ](t1 + t2) + E[Q²] t1 t2
Since P and Q are independent, E(PQ) = E(P)E(Q) = pq, and
E(P²) = Var(P) + [E(P)]² = σ1² + p²
E(Q²) = Var(Q) + [E(Q)]² = σ2² + q²
Therefore R(t1, t2) = σ1² + p² + pq(t1 + t2) + t1 t2 (σ2² + q²)
= σ1² + t1 t2 σ2² + p² + t1 t2 q² + pq(t1 + t2)
E[X²(t)] = E[P² + Q²t² + 2PQt]
= E(P²) + t² E(Q²) + 2t E(P)E(Q)
= σ1² + p² + t²(σ2² + q²) + 2tpq
Var[X(t)] = E[X²(t)] − (E[X(t)])²
= σ1² + p² + t²(σ2² + q²) + 2tpq − (p + qt)² = σ1² + t²σ2²
Since E[X(t)] and Var[X(t)] are functions of time t, the random process {X(t)} is not stationary in any sense; it is evolutionary.
Auto-covariance:
CXX(t1, t2) = RXX(t1, t2) − E[X(t1)] E[X(t2)]
= σ1² + t1 t2 σ2² + p² + t1 t2 q² + pq(t1 + t2) − (p + qt1)(p + qt2).
Therefore CXX(t1, t2) = σ1² + t1 t2 σ2².
Example 1.4.11 If X(t) = Y cos t + Z sin t for all t, where Y and Z are independent binary random variables, each of which assumes the values −1 and 2 with probabilities 2/3 and 1/3 respectively, prove that {X(t)} is wide-sense stationary.
Solution: E(Y) = (−1)(2/3) + 2(1/3) = 0
E(Y²) = (−1)²(2/3) + 4(1/3) = 2
Var(Y) = E(Y²) − [E(Y)]² = 2
Similarly, E(Z) = 0 and E(Z²) = Var(Z) = 2.
E[X(t)] = E[Y cos t + Z sin t] = E[Y] cos t + E[Z] sin t = 0
RXX(t, t + τ) = E[X(t)X(t + τ)]
= E[(Y cos t + Z sin t)(Y cos(t + τ) + Z sin(t + τ))]
= E[Y²] cos t cos(t + τ) + E[Z²] sin t sin(t + τ) + E[YZ]{cos t sin(t + τ) + sin t cos(t + τ)}
= E[Y²] cos(t + τ) cos t + E[Z²] sin(t + τ) sin t + E(Y)E(Z) sin(2t + τ)
= 2[cos(t + τ) cos t + sin(t + τ) sin t] + 0
Therefore RXX(τ) = 2 cos τ.
Since E[X(t)] is a constant and the A.C.F. is a function of τ only, {X(t)} is a WSS process.
Example 1.4.12 If X(t) = R cos(ωt + φ), where R and φ are independent random variables with E(R) = 2 and Var(R) = 6, and φ is uniformly distributed in (−π, π), prove that {X(t)} is a WSS process.
Solution: Since φ is uniformly distributed in (−π, π), the pdf of φ is
f(φ) = 1/2π for −π < φ < π, and 0 otherwise.
Now E[X(t)] = E(R) E[cos(ωt + φ)] = 2 ∫_{−π}^{π} (1/2π) cos(ωt + φ) dφ
= (1/π)[sin(ωt + φ)]_{−π}^{π}
= (1/π)[sin(ωt + π) + sin(π − ωt)]
= (1/π)[−sin ωt + sin ωt] = 0
RXX(t, t + τ) = E[X(t)X(t + τ)]
= E[R² cos(ωt + φ) cos(ωt + ωτ + φ)]
= E[R²] E[cos(ωt + φ) cos(ωt + ωτ + φ)]
Since Var(R) = 6, we get E(R²) = Var(R) + [E(R)]² = 6 + 4 = 10.
Therefore RXX(t, t + τ) = (10/2) E[2 cos(ωt + φ) cos(ωt + ωτ + φ)]
= 5 E[cos ωτ + cos(2ωt + ωτ + 2φ)]
= 5 cos ωτ + (5/2π) ∫_{−π}^{π} cos(2ωt + ωτ + 2φ) dφ
= 5 cos ωτ + (5/4π)[sin(2ωt + ωτ + 2φ)]_{−π}^{π} = 5 cos ωτ + 0
RXX(t, t + τ) = 5 cos ωτ, a function of τ only.
Since E[X(t)] is a constant and the A.C.F. is a function of τ only, {X(t)} is a WSS process.
Example 1.4.13 Given a random variable Ω with density f(ω) and another random variable φ uniformly distributed in (−π, π) and independent of Ω, and X(t) = a cos(Ωt + φ), prove that {X(t)} is a WSS process.
Solution:
E[X(t)] = a E[cos(Ωt + φ)]
= a E[ E{cos(Ωt + φ) | Ω} ]
= a E[cos Ωt E(cos φ) − sin Ωt E(sin φ)]
= a E[cos Ωt (1/2π) ∫_{−π}^{π} cos φ dφ − sin Ωt (1/2π) ∫_{−π}^{π} sin φ dφ]
= a E[cos Ωt (0) − sin Ωt (0)] = 0.
E[X(t1)X(t2)] = a² E[cos(Ωt1 + φ) cos(Ωt2 + φ)]
= a² E[ E{cos Ωt1 cos Ωt2 cos²φ + sin Ωt1 sin Ωt2 sin²φ − sin Ω(t1 + t2) sin φ cos φ | Ω} ]
= a² E[cos Ωt1 cos Ωt2 (1/2π) ∫_{−π}^{π} cos²φ dφ + sin Ωt1 sin Ωt2 (1/2π) ∫_{−π}^{π} sin²φ dφ − sin Ω(t1 + t2) (1/4π) ∫_{−π}^{π} sin 2φ dφ]
= (a²/2) E[cos Ωt1 cos Ωt2 + sin Ωt1 sin Ωt2]
= (a²/2) E[cos Ω(t1 − t2)].
Therefore RXX(t1, t2) is a function of t1 − t2 only, whatever the density f(ω) may be. Hence {X(t)} is a WSS process.
Example 1.4.14 Show that the random process X(t) = A sin t + B cos t , where
A and B are independent random variables with zero means and equal standard
deviations is stationary of the second order.
Solution:
The A.C.F. of a second-order stationary process is a function of the time difference τ only and not of absolute time. Consider RXX(t, t + τ) = E[X(t)X(t + τ)].
RXX(t, t + τ) = E[(A sin t + B cos t)(A sin(t + τ) + B cos(t + τ))]
= E[A²] sin(t + τ) sin t + E[B²] cos(t + τ) cos t + E[AB]{sin t cos(t + τ) + sin(t + τ) cos t}
Since A and B are independent with E(A) = E(B) = 0, E(AB) = E(A)E(B) = 0, and since Var(A) = Var(B) = σ², E(A²) = E(B²) = σ².
Therefore RXX(t, t + τ) = σ²[sin(t + τ) sin t + cos(t + τ) cos t]
= σ² cos(t + τ − t)
= σ² cos τ.
Therefore the A.C.F. is a function of the time difference τ only. Hence {X(t)} is stationary of order two.
Example 1.4.15 Consider a random process Y(t) = X(t) cos(ωt + θ), where X(t) is wide-sense stationary, θ is uniformly distributed in (−π, π) and is independent of X(t), and ω is a constant. Prove that Y(t) is wide-sense stationary.
Solution:
E[Y(t)] = E[X(t)] E[cos(ωt + θ)]
= E[X(t)] {(1/2π) ∫_{−π}^{π} cos(ωt + θ) dθ} = E[X(t)](0) = 0.
RYY(t, t + τ) = E[X(t)X(t + τ)] E[cos(ωt + ωτ + θ) cos(ωt + θ)]
= (RXX(τ)/2) E[cos ωτ + cos(2ωt + ωτ + 2θ)], since {X(t)} is WSS.
Therefore RYY(t, t + τ) = (RXX(τ)/2) cos ωτ + (RXX(τ)/2) (1/2π) ∫_{−π}^{π} cos(2ωt + ωτ + 2θ) dθ
= (RXX(τ)/2) cos ωτ + (RXX(τ)/2)(0)
= (RXX(τ)/2) cos ωτ.
Therefore the A.C.F. of Y(t) is a function of τ only.
Since E[Y(t)] is a constant and the A.C.F. of Y(t) is a function of τ only, {Y(t)} is a WSS process.
Example 1.4.16 Let X(t) = A cos λt + B sin λt , where A and B are independent normally distributed random variables N (0, σ 2 ) . Obtain the covariance
function of the process {X(t) : −∞ < t < ∞} . Is {X(t)} covariance stationary?
Solution:
E[X(t)] = E[A] cos λt + E[B] sin λt
Since A and B are independent normal random variables N (0, σ 2 ) , E(A) =
E(B) = 0 and V (A) = V (B) = σ 2 .
and E(A2 ) = E(B 2 ) = σ 2 . Thus E[X(t)] = 0 .
The A.C.F of {X(t)} is given by
E[X(t)X(s)] = E[{A cos λt + B sin λt} {A cos λs + B sin λs}]
= E[A2 ] cos λt cos λs + E[B 2 ] sin λt sin λs + E[AB]{cos λt sin λs + sin λt cos λs}
Since E(A2 ) = E(B 2 ) = σ 2 and E(AB) = E(A)E(B) = 0 ,
R(t, s) = σ 2 {cos λt cos λs + sin λt sin λs} = σ 2 cos λ(t − s)
Covariance C(t, s) = R(t, s) − E[X(t)]E[X(s)] = R(t, s) = σ 2 cos λ(t − s)
Since covariance function is a function of τ = t−s only, and E[X(t)] is constant,
{X(t)} is covariance stationary.
Example 1.4.17 Consider a random process X(t) = B cos(50t + φ) , where B
and φ are independent random variables. B is a random variable with mean 0
and variance 1 . φ is uniformly distributed in the interval ( −π, π ). Show that
{X(t)} is WSS.
Solution:
E(B) = 0 and Var(B) = 1 ⇒ E(B²) = 1.
E[X(t)] = E[B] E[cos(50t + φ)] ⇒ E[X(t)] = 0.
RXX(t1, t2) = E[B cos(50t1 + φ) · B cos(50t2 + φ)]
= (E[B²]/2) E[2 cos(50t1 + φ) cos(50t2 + φ)]
= (1/2) E[cos 50(t1 − t2)] + (1/2) E[cos(50t1 + 50t2 + 2φ)]
⇒ RXX(t1, t2) = (1/2) cos 50(t1 − t2) + (1/4π) ∫_{−π}^{π} cos(50t1 + 50t2 + 2φ) dφ
= (1/2) cos 50(t1 − t2) + (1/8π)[sin(50t1 + 50t2 + 2φ)]_{−π}^{π} = (1/2) cos 50(t1 − t2).
Therefore RXX(t1, t2) is a function of the time difference t1 − t2.
Since E[X(t)] is a constant and the A.C.F. is a function of τ = t1 − t2 only, {X(t)} is WSS.
Example 1.4.18 Consider a random process defined on a finite sample space with three sample points by the three sample functions X(t, s1) = 3, X(t, s2) = 3 cos t and X(t, s3) = 4 sin t, all of which are equally likely, i.e. P(si) = 1/3 for all i. Compute the mean and A.C.F. Is the process strict-sense stationary? Is it wide-sense stationary?
Solution:
Mean = Σ_{i=1}^{3} X(t, si) P(si) = (1/3)(3) + (1/3)(3 cos t) + (1/3)(4 sin t) = 1 + cos t + (4/3) sin t
A.C.F. = E[X(t1)X(t2)] = Σ_{i=1}^{3} X(t1, si) X(t2, si) P(si)
= (1/3)[9 + 9 cos t1 cos t2 + 16 sin t1 sin t2] = 3 + 3 cos(t1 − t2) + (7/3) sin t1 sin t2
⇒ RXX(t1, t2) is not a function of t1 − t2 only.
Since E[X(t)] is not a constant, X(t) is not WSS. Also, since RXX(t1, t2) is not a function of t2 − t1 only, X(t) is not SSS.
1.5 Examples of Ergodic Processes
Example 1.5.1 The WSS process {X(t)} is given by X(t) = 10 cos(100t + θ), where θ is uniformly distributed over (−π, π). Prove that {X(t)} is correlation ergodic.
Solution:
Ensemble A.C.F.:
RXX(t, t + τ) = E[X(t)X(t + τ)]
= E[10 cos(100t + 100τ + θ) · 10 cos(100t + θ)]
= 50 E[2 cos(100t + 100τ + θ) cos(100t + θ)]
= 50 E[cos(100τ) + cos(200t + 100τ + 2θ)]
= 50 cos(100τ) + (50/2π) ∫_{−π}^{π} cos(200t + 100τ + 2θ) dθ
= 50 cos(100τ) + (50/4π)[sin(200t + 100τ + 2θ)]_{−π}^{π}
= 50 cos(100τ) + (50/4π)(0) = 50 cos(100τ).
Time-averaged A.C.F.:
lim_{T→∞} Z_T = lim_{T→∞} (1/2T) ∫_{−T}^{T} X(t)X(t + τ) dt
= lim_{T→∞} (1/2T) ∫_{−T}^{T} 100 cos(100t + θ) cos(100t + 100τ + θ) dt
= lim_{T→∞} (25/T) ∫_{−T}^{T} cos(100τ) dt + lim_{T→∞} (25/T) ∫_{−T}^{T} cos(200t + 100τ + 2θ) dt
= 50 cos(100τ) + lim_{T→∞} (25/T) [sin(200t + 100τ + 2θ)/200]_{−T}^{T}
= 50 cos(100τ).
Therefore lim_{T→∞} Z_T = RXX(τ) = 50 cos(100τ).
Since the ensemble A.C.F. equals the time-averaged A.C.F., {X(t)} is correlation ergodic.
Example 1.5.2 Prove that the random process {X(t)} defined by X(t) = A cos(ωt + θ), where A and ω are constants and θ is uniformly distributed over (0, 2π), is ergodic in both the mean and the auto-correlation function.
Solution:
Ensemble mean and A.C.F.:
E[X(t)] = (A/2π) ∫_0^{2π} cos(ωt + θ) dθ = (A/2π)[sin(ωt + θ)]_0^{2π} = (A/2π)[sin ωt − sin ωt] = 0
A.C.F. RXX(t, t + τ) = E[X(t)X(t + τ)] = (A²/2) E[2 cos(ωt + ωτ + θ) cos(ωt + θ)]
= (A²/2) E[cos ωτ + cos(2ωt + ωτ + 2θ)]
= (A²/2) cos ωτ + (A²/4π) ∫_0^{2π} cos(2ωt + ωτ + 2θ) dθ
= (A²/2) cos ωτ + (A²/8π)[sin(2ωt + ωτ + 2θ)]_0^{2π}
= (A²/2) cos ωτ.
Therefore RXX(τ) = (A²/2) cos ωτ.
Time-averaged mean and A.C.F.:
Time-averaged mean X̄_T = lim_{T→∞} (1/2T) ∫_{−T}^{T} X(t) dt
= lim_{T→∞} (A/2T) ∫_{−T}^{T} cos(ωt + θ) dt
= lim_{T→∞} [A sin(ωt + θ)/2ωT]_{−T}^{T} = 0.
Therefore X̄_T = 0.
Time-averaged A.C.F. = lim_{T→∞} (1/2T) ∫_{−T}^{T} X(t + τ)X(t) dt
= lim_{T→∞} (A²/2T) ∫_{−T}^{T} cos(ωt + ωτ + θ) cos(ωt + θ) dt
= lim_{T→∞} (A²/4T) ∫_{−T}^{T} {cos ωτ + cos(2ωt + ωτ + 2θ)} dt
= lim_{T→∞} (A²/4T)(cos ωτ)(2T) + lim_{T→∞} (A²/4T)[sin(2ωt + ωτ + 2θ)/2ω]_{−T}^{T}
= (A²/2) cos ωτ + 0
= (A²/2) cos ωτ.
Since the ensemble mean equals the time-averaged mean, {X(t)} is mean ergodic. Also, since the ensemble A.C.F. equals the time-averaged A.C.F., {X(t)} is correlation ergodic.
Example 1.5.3 {X(t)} is the random telegraph signal process with E[X(t)] = 0 and R(τ) = e^{−2λ|τ|}. Find the mean and variance of the time average of {X(t)} over (−T, T). Is it mean ergodic?
Solution:
Mean of the time average of {X(t)}:
X̄_T = (1/2T) ∫_{−T}^{T} X(t) dt
Therefore E[X̄_T] = (1/2T) ∫_{−T}^{T} E[X(t)] dt = 0, since E[X(t)] = 0.
To find Var(X̄_T): since C(τ) = R(τ) = e^{−2λ|τ|},
Var(X̄_T) = (1/T) ∫_0^{2T} (1 − τ/2T) C(τ) dτ = (1/T) ∫_0^{2T} (1 − τ/2T) e^{−2λτ} dτ
= (1/T) ∫_0^{2T} e^{−2λτ} dτ − (1/2T²) ∫_0^{2T} τ e^{−2λτ} dτ
= (1/T) [−e^{−2λτ}/2λ]_0^{2T} − (1/2T²) [−τ e^{−2λτ}/2λ − e^{−2λτ}/4λ²]_0^{2T}
= (1/2λT)(1 − e^{−4λT}) + (1/2λT) e^{−4λT} + (1/8λ²T²)(e^{−4λT} − 1)
Therefore Var(X̄_T) = 1/(2λT) + (1/8λ²T²)(e^{−4λT} − 1).
Since lim_{T→∞} Var(X̄_T) = 0, the process is mean ergodic.
Example 1.5.4 Let {X(t) : t ≥ 0} be a random process defined as follows: let N(t) be the total number of points in the interval (0, t), say N(t) = k, and set X(t) = 1 if k is even and X(t) = −1 if k is odd. Find the A.C.F. of X(t). Also, if P(A = 1) = P(A = −1) = 1/2 and A is independent of X(t), find the A.C.F. of Y(t) = A X(t).
Solution:
The probability law of {N(t)} is P[N(t) = k] = e^{−λt}(λt)^k/k!, k = 0, 1, 2, . . .
Then P[X(t) = 1] = P[N(t) is even] = P[N(t) = 0] + P[N(t) = 2] + . . .
= e^{−λt}[1 + (λt)²/2! + (λt)⁴/4! + . . .] = e^{−λt} cosh λt.
P[X(t) = −1] = P[N(t) is odd] = P[N(t) = 1] + P[N(t) = 3] + . . .
= e^{−λt}[λt/1! + (λt)³/3! + . . .] = e^{−λt} sinh λt.
For t1 > t2, with τ = t1 − t2,
P[X(t1) = 1, X(t2) = 1] = P[X(t1) = 1 / X(t2) = 1] × P[X(t2) = 1] = (e^{−λτ} cosh λτ)(e^{−λt2} cosh λt2)
P[X(t1) = −1, X(t2) = −1] = (e^{−λτ} cosh λτ)(e^{−λt2} sinh λt2)
P[X(t1) = 1, X(t2) = −1] = (e^{−λτ} sinh λτ)(e^{−λt2} sinh λt2)
P[X(t1) = −1, X(t2) = 1] = (e^{−λτ} sinh λτ)(e^{−λt2} cosh λt2).
Then P[X(t1)X(t2) = 1] = e^{−λτ} cosh λτ and P[X(t1)X(t2) = −1] = e^{−λτ} sinh λτ.
Therefore R(t1, t2) = 1 × e^{−λτ} cosh λτ − 1 × e^{−λτ} sinh λτ = e^{−2λτ} = e^{−2λ(t1−t2)}.
To find the A.C.F. of Y(t) = A X(t):
E(A) = (1)P[A = 1] + (−1)P[A = −1] = 1/2 − 1/2 = 0
E(A²) = (1)²P[A = 1] + (−1)²P[A = −1] = 1/2 + 1/2 = 1
RYY(t1, t2) = E[A²X(t1)X(t2)] = E(A²) RXX(t1, t2) = 1 × e^{−2λτ} = e^{−2λ(t1−t2)}.
Chapter 2
Markov Process and Markov Chain
2.1 Basic Definitions
Definition 2.1.1 (Markov Process) A random process {X(t)} is called a Markov
process if P[X(tn) = an / X(tn−1) = an−1, X(tn−2) = an−2, . . . , X(t2) = a2, X(t1) = a1] = P[X(tn) = an / X(tn−1) = an−1] for all t1 < t2 < . . . < tn. In other words, if the
future behavior of a process depends on the present value but not on the past, then
the process is called a Markov process.
Example 2.1.2 The probability that it rains today depends only on the weather conditions that existed during the last two days and not on earlier weather conditions.
Definition 2.1.3 (Markov Chain) If the above condition is satisfied for all n, then the process {Xn}, n = 0, 1, 2, . . . is called a Markov chain and the values a1, a2, . . . , an are called the states of the Markov chain. In other words, a discrete-parameter Markov process is called a Markov chain.
Definition 2.1.4 (One-Step Transition Probability) The conditional transition probability P [Xn = aj /Xn−1 = ai ] is called the one-step transition probability from state ai to state aj at the nth step and is denoted by Pij (n − 1, n) .
Definition 2.1.5 (Homogeneous Markov Chain) If the one-step transition
probability does not depend on the step i.e., Pij (n − 1, n) = Pij (m − 1, m) , the
Markov chain is called a homogeneous Markov chain.
Definition 2.1.6 (Transition Probability Matrix) When the Markov chain is homogeneous, the one-step transition probability is denoted by pij. The matrix P = (pij) is called the transition probability matrix (tpm), satisfying the conditions (i) pij ≥ 0 and (ii) Σ_j pij = 1 for all i, i.e., the sum of the elements of any row of the tpm is 1.
Definition 2.1.7 (n-Step Transition Probability) The conditional probability that the process is in state aj at step n, given that it was in state ai at step 0, i.e., P[Xn = aj / X0 = ai], is called the n-step transition probability and is denoted by Pij^(n). That is, Pij^(n) = P[Xn = aj / X0 = ai].
Chapman-Kolmogorov theorem:
If P is the tpm of a homogeneous Markov chain, then the n-step tpm P^(n) is equal to P^n. Thus [Pij^(n)] = [Pij]^n.
Definition 2.1.8 (Regular Markov Chain) A stochastic matrix P is said to
be a regular matrix if all the entries of P m (for some positive integer m) are
positive. A homogeneous Markov chain is said to be regular if its tpm is regular.
Definition 2.1.9 (Steady-state distribution) If a homogeneous Markov chain is regular, then every sequence of state probability distributions approaches a unique fixed distribution of the Markov chain. That is, lim_{n→∞} P^(n) = π, where P^(n) = (p1^(n), p2^(n), . . . , pk^(n)) and π = (π1, π2, π3, . . . , πk).
If P is the tpm of the regular Markov chain and π = (π1, π2, π3, . . . , πk) is the steady-state distribution, then πP = π and π1 + π2 + π3 + . . . + πk = 1.
2.2 Classification of states of a Markov chain
Definition 2.2.1 If pij^(n) > 0 for some n and for all i and j, then every state can be reached from every other state. When this condition is satisfied, the Markov chain is said to be irreducible. The tpm of an irreducible chain is an irreducible matrix. Otherwise the chain is said to be non-irreducible or reducible.
Definition 2.2.2 State i of a Markov chain is called a return state if pii^(n) > 0 for some n > 1.
Definition 2.2.3 The period di of a return state i is defined as the greatest common divisor of all m such that pii^(m) > 0, i.e., di = GCD{m : pii^(m) > 0}. State i is said to be periodic with period di if di > 1 and aperiodic if di = 1. Obviously state i is aperiodic if pii ≠ 0.
Definition 2.2.4 (Recurrence time probability) The probability that the chain returns to state i, starting from state i, for the first time at the nth step is called the recurrence time probability or the first return time probability. It is denoted by fii^(n). {n, fii^(n)}, n = 1, 2, 3, . . . is the distribution of the recurrence time of state i.
If Fii = Σ_{n=1}^{∞} fii^(n) = 1, then return to state i is certain, and µii = Σ_{n=1}^{∞} n fii^(n) is called the mean recurrence time of state i.
Definition 2.2.5 A state i is said to be persistent or recurrent if the return to state i is certain, i.e., if Fii = 1.
The state i is said to be transient if the return to state i is uncertain, i.e., Fii < 1.
The state i is said to be non-null persistent if its mean recurrence time µii is finite and null persistent if µii = ∞.
A non-null persistent and aperiodic state is called ergodic.
Remark 2.2.6
1. If a Markov chain is irreducible, all its states are of the same type. They are all transient, all null persistent or all non-null persistent. All its states are either aperiodic or periodic with the same period.
2. If a Markov chain is finite and irreducible, all its states are non-null persistent.
Definition 2.2.7 (Absorbing state) A state i is called an absorbing state if pii = 1 and pij = 0 for i ≠ j.
Calculation of joint probability
P [X0 = a, X1 = b, . . . Xn−2 = i, Xn−1 = j, Xn = k]
= P [Xn = k/Xn−1 = j]P [Xn−1 = j/Xn−2 = i] . . . P [X1 = b/X0 = a]P [X0 = a]
= Pjk Pij . . . Pab P (X0 = a)
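The joint probability of a path is thus the initial probability multiplied by the successive one-step transition probabilities. The following sketch (not from the text) computes such a path probability; the matrix, initial vector and path are illustrative values borrowed from the examples below.

```python
import numpy as np

# Probability of a specific path X0=a, X1=b, ..., Xn=k under a homogeneous Markov chain.
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])      # tpm, rows sum to 1
p0 = np.array([0.7, 0.2, 0.1])       # initial distribution P(X0 = i)

path = [0, 1, 2, 1]                  # X0=0, X1=1, X2=2, X3=1 (0-based state labels)

prob = p0[path[0]]
for i, j in zip(path[:-1], path[1:]):
    prob *= P[i, j]                  # multiply the one-step transition probabilities

print(prob)                          # P(X0=0, X1=1, X2=2, X3=1)
```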
2.3 Examples
Example 2.3.1 Consider a Markov chain with transition probability matrix
P =
[0.5 0.4 0.1]
[0.3 0.4 0.3]
[0.2 0.3 0.5]
Find the steady-state probabilities of the system.
Solution: Let the invariant probabilities of P be π = (π1, π2, π3).
By the property of π, πP = π:
(π1 π2 π3)
[0.5 0.4 0.1]
[0.3 0.4 0.3]  = (π1 π2 π3)
[0.2 0.3 0.5]
0.5π1 + 0.3π2 + 0.2π3 = π1 ⇒ 0.5π1 − 0.3π2 − 0.2π3 = 0        (1)
0.4π1 + 0.4π2 + 0.3π3 = π2 ⇒ 0.4π1 − 0.6π2 + 0.3π3 = 0        (2)
0.1π1 + 0.3π2 + 0.5π3 = π3 ⇒ 0.1π1 + 0.3π2 − 0.5π3 = 0        (3)
(2) + (3) ⇒ 0.5π1 − 0.3π2 − 0.2π3 = 0, which is (1), so only two of these equations are independent.
Since π is a probability distribution, π1 + π2 + π3 = 1        (4)
(4) × 0.3 ⇒ 0.3π1 + 0.3π2 + 0.3π3 = 0.3        (5)
(1) + (5) ⇒ 0.8π1 + 0.1π3 = 0.3        (6)
(1) + (3) ⇒ 0.6π1 − 0.7π3 = 0        (7)
(6) × 7 ⇒ 5.6π1 + 0.7π3 = 2.1        (8)
(7) + (8) ⇒ 6.2π1 = 2.1 ⇒ π1 = 21/62 ≈ 0.339
Put π1 = 21/62 in (7): 0.6(21/62) = 0.7π3 ⇒ π3 = 18/62 ≈ 0.290
Put π1 and π3 in (4): π2 = 1 − 21/62 − 18/62 = 23/62 ≈ 0.371
Hence the invariant probabilities of P are (π1, π2, π3) = (21/62, 23/62, 18/62) ≈ (0.339, 0.371, 0.290).
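The steady-state vector can be checked numerically as the normalised left eigenvector of P for eigenvalue 1; the sketch below (not from the text) does this for the matrix of Example 2.3.1.

```python
import numpy as np

P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

vals, vecs = np.linalg.eig(P.T)                        # left eigenvectors of P
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])   # eigenvector for eigenvalue 1
pi = pi / pi.sum()                                     # normalise to a probability vector

print(pi)             # ~ [0.339, 0.371, 0.290]
print(pi @ P - pi)    # ~ [0, 0, 0], i.e. pi P = pi
```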
Example 2.3.2 At an intersection, a working traffic light will be out of order the next day with probability 0.07, and an out-of-order traffic light will be working the next day with probability 0.88. Let Xn = 1 if on day n the traffic light works and Xn = 0 if on day n it does not work. Is {Xn; n = 0, 1, 2, . . .} a Markov chain? If so, write the transition probability matrix.
Solution: Yes, {Xn; n = 0, 1, 2, . . .} is a Markov chain with state space {0, 1}, since the state on day n + 1 depends only on the state on day n. With the states ordered (0, 1), the transition probability matrix is
P =
[0.12 0.88]
[0.07 0.93]
Example 2.3.3 Let {Xn} be a Markov chain with state space {1, 2, 3}, initial probability vector P(0) = (0.7, 0.2, 0.1) and one-step transition probability matrix
P =
[0.1 0.5 0.4]
[0.6 0.2 0.2]
[0.3 0.4 0.3]
Compute P(X2 = 3) and P(X3 = 2, X2 = 3, X1 = 3, X0 = 2).
Solution:
P(2) = P² =
[0.43 0.31 0.26]
[0.24 0.42 0.34]
[0.36 0.35 0.29]
(i) P[X2 = 3] = Σ_{i=1}^{3} P[X2 = 3 / X0 = i] P[X0 = i]
Given P(0) = (0.7, 0.2, 0.1), so P[X0 = 1] = 0.7, P[X0 = 2] = 0.2 and P[X0 = 3] = 0.1.
Therefore P[X2 = 3] = P13^(2) P(X0 = 1) + P23^(2) P(X0 = 2) + P33^(2) P(X0 = 3)
= 0.26 × 0.7 + 0.34 × 0.2 + 0.29 × 0.1
= 0.182 + 0.068 + 0.029 = 0.279
(ii) P{X1 = 3 / X0 = 2} = P23 = 0.2        (1)
P{X1 = 3, X0 = 2} = P{X1 = 3 / X0 = 2} × P{X0 = 2} = 0.2 × 0.2 = 0.04  (by (1))        (2)
P{X2 = 3, X1 = 3, X0 = 2} = P{X2 = 3 / X1 = 3, X0 = 2} × P{X1 = 3, X0 = 2}
= P{X2 = 3 / X1 = 3} × P{X1 = 3, X0 = 2}  (by the Markov property)
= 0.3 × 0.04 = 0.012  (by (2))
P{X3 = 2, X2 = 3, X1 = 3, X0 = 2} = P{X3 = 2 / X2 = 3, X1 = 3, X0 = 2} × P{X2 = 3, X1 = 3, X0 = 2}
= P{X3 = 2 / X2 = 3} × P{X2 = 3, X1 = 3, X0 = 2}
= 0.4 × 0.012 = 0.0048.
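Both answers can be verified with matrix powers and the path-probability rule; a minimal sketch (not from the text) follows, using 0-based indices so state 2 is index 1 and state 3 is index 2.

```python
import numpy as np

P = np.array([[0.1, 0.5, 0.4],
              [0.6, 0.2, 0.2],
              [0.3, 0.4, 0.3]])
p0 = np.array([0.7, 0.2, 0.1])        # P(X0 = 1), P(X0 = 2), P(X0 = 3)

P2 = P @ P
print(p0 @ P2)                        # distribution of X2; third entry ~ 0.279

# P(X3=2, X2=3, X1=3, X0=2): state 2 -> index 1, state 3 -> index 2
prob = p0[1] * P[1, 2] * P[2, 2] * P[2, 1]
print(prob)                           # ~ 0.0048
```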
Example 2.3.4 A fair die is tossed repeatedly. If Xn denotes the maximum of the numbers occurring in the first n tosses, find the transition probability matrix P of the Markov chain {Xn}. Find also P² and P(X2 = 6).
Solution: The state space is {1, 2, 3, 4, 5, 6}. Let Xn = the maximum of the numbers occurring in the first n tosses; suppose, say, Xn = 3. Then
Xn+1 = 3 if the (n+1)th toss results in 1, 2 or 3,
Xn+1 = 4 if the (n+1)th toss results in 4,
Xn+1 = 5 if the (n+1)th toss results in 5,
Xn+1 = 6 if the (n+1)th toss results in 6.
Therefore P{Xn+1 = 3 / Xn = 3} = 1/6 + 1/6 + 1/6 = 3/6 and P{Xn+1 = i / Xn = 3} = 1/6 for i = 4, 5, 6.
Proceeding similarly for the other states, the transition probability matrix of the chain is
P = (1/6) ×
[1 1 1 1 1 1]
[0 2 1 1 1 1]
[0 0 3 1 1 1]
[0 0 0 4 1 1]
[0 0 0 0 5 1]
[0 0 0 0 0 6]
P² = (1/36) ×
[1 3 5 7 9 11]
[0 4 5 7 9 11]
[0 0 9 7 9 11]
[0 0 0 16 9 11]
[0 0 0 0 25 11]
[0 0 0 0 0 36]
The initial state probability distribution is P(0) = (1/6, 1/6, 1/6, 1/6, 1/6, 1/6), since all the values 1, 2, . . . , 6 are equally likely on the first toss.
P{X2 = 6} = Σ_{i=1}^{6} P{X2 = 6 / X0 = i} P{X0 = i}
= (1/6) Σ_{i=1}^{6} Pi6^(2)
= (1/6)(1/36)(11 + 11 + 11 + 11 + 11 + 36)
= 91/216.
Example 2.3.5 Three girls G1, G2, G3 are throwing a ball to each other. G1 always throws the ball to G2 and G2 always throws the ball to G3, but G3 is just as likely to throw the ball to G2 as to G1. Prove that the process is Markovian. Find the transition matrix and classify the states.
Solution: The state of Xn (the girl holding the ball after the nth throw) depends only on the state of Xn−1, but not on the states of Xn−2, Xn−3, . . . or earlier states. Therefore {Xn} is a Markov chain, with transition probability matrix (states G1, G2, G3)
P =
[0   1   0]
[0   0   1]
[1/2 1/2 0]
Now
P² =
[0   0   1  ]
[1/2 1/2 0  ]
[0   1/2 1/2]
P³ =
[1/2 1/2 0  ]
[0   1/2 1/2]
[1/4 1/4 1/2]
Since P11^(3) > 0, P12^(1) > 0, P13^(2) > 0, P21^(2) > 0, P22^(2) > 0, P23^(1) > 0, P31^(1) > 0, P32^(1) > 0 and P33^(2) > 0, every state can be reached from every other state. Therefore the chain is irreducible.
Further,
P⁴ =
[0   1/2 1/2]
[1/4 1/4 1/2]
[1/4 1/2 1/4]
P⁵ =
[1/4 1/4 1/2]
[1/4 1/2 1/4]
[1/8 3/8 1/2]
P⁶ =
[1/4 1/2 1/4]
[1/8 3/8 1/2]
[1/4 3/8 3/8]
and so on. Pii^(3), Pii^(5), Pii^(6), . . . are > 0 for i = 1, 2, 3, and GCD{3, 5, 6, . . .} = 1.
Therefore state 1 (state G1) has period 1, i.e. it is aperiodic, and likewise for the other states. Since the chain is finite, irreducible and aperiodic, all its states are non-null persistent and ergodic.
Example 2.3.6 Find the nature of the states of the Markov chain with the tpm
P =
[0   1   0  ]
[1/2 0   1/2]
[0   1   0  ]
Solution:
P² = P · P =
[1/2 0 1/2]
[0   1 0  ]
[1/2 0 1/2]
P³ = P² · P =
[0   1 0  ]
[1/2 0 1/2]
[0   1 0  ]
= P, and P⁴ = P³ · P = P², and so on.
In general, P^(2n) = P² and P^(2n+1) = P.
Also P00^(2) > 0, P01^(1) > 0, P02^(2) > 0,
P10^(1) > 0, P11^(2) > 0, P12^(1) > 0,
P20^(2) > 0, P21^(1) > 0, P22^(2) > 0.
Therefore the Markov chain is irreducible.
Also Pii^(2) = Pii^(4) = Pii^(6) = . . . > 0 for all i, so all the states of the chain are periodic with period 2. Since the chain is finite and irreducible, all its states are non-null persistent. Because the states are periodic, they are not ergodic.
Example 2.3.7 A gambler has Rs. 2. He bets Re. 1 at a time and wins Re. 1 with probability 1/2. He stops playing if he loses Rs. 2 or wins Rs. 4.
(a) What is the tpm of the related Markov chain?
(b) What is the probability that he has lost his money at the end of 5 plays?
(c) What is the probability that the game lasts more than 7 plays?
Solution: Let Xn represent the amount with the player at the end of the nth round of play. The state space of {Xn} is {0, 1, 2, 3, 4, 5, 6}, as the game ends if the player loses all his money (Xn = 0) or wins Rs. 4, i.e. has Rs. 6 (Xn = 6).
(a) The tpm of the Markov chain (states 0, 1, . . . , 6) is
P =
[1   0   0   0   0   0   0  ]
[1/2 0   1/2 0   0   0   0  ]
[0   1/2 0   1/2 0   0   0  ]
[0   0   1/2 0   1/2 0   0  ]
[0   0   0   1/2 0   1/2 0  ]
[0   0   0   0   1/2 0   1/2]
[0   0   0   0   0   0   1  ]
Since the player has Rs. 2 initially, the initial probability distribution of {Xn} is
P(0) = (0, 0, 1, 0, 0, 0, 0).
Successive state distributions are obtained by post-multiplying by P:
P(1) = P(0) P = (0, 1/2, 0, 1/2, 0, 0, 0)
P(2) = P(1) P = (1/4, 0, 1/2, 0, 1/4, 0, 0)
P(3) = P(2) P = (1/4, 1/4, 0, 3/8, 0, 1/8, 0)
P(4) = P(3) P = (3/8, 0, 5/16, 0, 1/4, 0, 1/16)
P(5) = P(4) P = (3/8, 5/32, 0, 9/32, 0, 1/8, 1/16)
P(6) = P(5) P = (29/64, 0, 7/32, 0, 13/64, 0, 1/8)
P(7) = P(6) P = (29/64, 7/64, 0, 27/128, 0, 13/128, 1/8)
(b) P{the man has lost his money at the end of 5 plays}
= P{X5 = 0} = the entry corresponding to state 0 in P(5) = 3/8.
(c) P{the game lasts more than 7 plays}
= P{the system is in none of the states 0 and 6 at the end of the seventh round}
= P{X7 = 1, 2, 3, 4 or 5}
= 7/64 + 0 + 27/128 + 0 + 13/128 = (14 + 27 + 13)/128 = 54/128 = 27/64.
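The same answers can be obtained by iterating the state distribution numerically; the sketch below (not from the text) builds the tpm of this gambler's-ruin chain and propagates the distribution for 7 rounds.

```python
import numpy as np

P = np.zeros((7, 7))
P[0, 0] = P[6, 6] = 1.0                  # absorbing states 0 and 6
for i in range(1, 6):
    P[i, i - 1] = P[i, i + 1] = 0.5      # lose or win Re. 1 with probability 1/2

dist = np.zeros(7)
dist[2] = 1.0                            # the player starts with Rs. 2

for n in range(1, 8):
    dist = dist @ P
    if n == 5:
        print(dist[0])                   # ~ 3/8 = 0.375 (ruined by play 5)

print(dist[1:6].sum())                   # ~ 27/64 = 0.421875 (game lasts > 7 plays)
```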
Example 2.3.8 On a given day, a retired professor, Dr. Charles Fish, amuses himself with only one of the following activities: reading (activity 1), gardening (activity 2) or working on his book about a river valley (activity 3). For 1 ≤ i ≤ 3, let Xn = i if Dr. Fish devotes day n to activity i. Suppose that {Xn : n = 1, 2, 3, . . .} is a Markov chain whose tpm, giving the probabilities of the activity chosen on the next day, is
P =
[0.30 0.25 0.45]
[0.40 0.10 0.50]
[0.25 0.40 0.35]
Find the proportion of days he devotes to each activity in the long run.
Solution: Let π1, π2 and π3 be the proportions of days Dr. Fish devotes to reading, gardening and writing respectively. Then (π1 π2 π3) P = (π1 π2 π3):
0.30π1 + 0.40π2 + 0.25π3 = π1        (1)
0.25π1 + 0.10π2 + 0.40π3 = π2        (2)
0.45π1 + 0.50π2 + 0.35π3 = π3        (3)
π1 + π2 + π3 = 1        (4)
From (4), π3 = 1 − π1 − π2        (5)
Substituting (5) in (1):
0.30π1 + 0.40π2 + 0.25(1 − π1 − π2) = π1
0.30π1 + 0.40π2 + 0.25 − 0.25π1 − 0.25π2 = π1
0.05π1 + 0.15π2 − π1 = −0.25
−0.95π1 + 0.15π2 = −0.25        (6)
Substituting (5) in (2):
0.25π1 + 0.10π2 + 0.40(1 − π1 − π2) = π2
0.25π1 + 0.10π2 + 0.40 − 0.40π1 − 0.40π2 = π2
−0.15π1 − 1.30π2 = −0.40        (7)
(6) × 26 + (7) × 3 ⇒ [−0.95 × 26 − 0.15 × 3] π1 = −0.25 × 26 − 0.40 × 3
[−24.70 − 0.45] π1 = −6.50 − 1.20
−25.15 π1 = −7.70
π1 ≈ 0.306
Substituting π1 in (6): −0.95(0.306) + 0.15π2 = −0.25 ⇒ 0.15π2 ≈ 0.041 ⇒ π2 ≈ 0.272
π3 = 1 − π1 − π2 ≈ 1 − 0.306 − 0.272 = 0.422
Therefore Dr. Charles Fish devotes approximately 31 percent of his days to reading, 27 percent to gardening and 42 percent to writing.
Example 2.3.9 The tpm of a Markov chain with three states 0, 1, 2 is
P =
[3/4 1/4 0  ]
[1/4 1/2 1/4]
[0   3/4 1/4]
and the initial state distribution of the chain is P[X0 = i] = 1/3, i = 0, 1, 2.
Find (i) P[X2 = 2], (ii) P[X3 = 1, X2 = 2, X1 = 1, X0 = 2], (iii) P[X2 = 1, X0 = 0].
Solution:
P(2) = P² =
[10/16 5/16 1/16]
[5/16  8/16 3/16]
[3/16  9/16 4/16]
(i) From the definition of conditional probability,
P[X2 = 2] = Σ_{i=0}^{2} P[X2 = 2 / X0 = i] P[X0 = i]
= P02^(2) P[X0 = 0] + P12^(2) P[X0 = 1] + P22^(2) P[X0 = 2]
= (1/3)[1/16 + 3/16 + 4/16] = 1/6.
(ii) P[X3 = 1, X2 = 2, X1 = 1, X0 = 2]
= P[X3 = 1 / X2 = 2] P[X2 = 2 / X1 = 1] P[X1 = 1 / X0 = 2] P[X0 = 2]
= P21^(1) P12^(1) P21^(1) P[X0 = 2]
= (3/4)(1/4)(3/4)(1/3) = 3/64.
(iii) From P², P[X2 = 1 / X0 = 0] = P01^(2) = 5/16, so
P[X2 = 1, X0 = 0] = P[X2 = 1 / X0 = 0] P[X0 = 0] = (5/16)(1/3) = 5/48.
Example 2.3.10 There are 2 white balls in bag A and 3 red balls in bag B. At each step of the process, a ball is selected from each bag and the 2 balls selected are interchanged. Let the state ai of the system be the number of red balls in A after i interchanges. What is the probability that there are 2 red balls in A after 3 steps? In the long run, what is the probability that there are 2 red balls in bag A?
Solution: The state space of the chain {Xn} is {0, 1, 2}, since the number of balls in bag A is always 2. Let the transition probability matrix of the chain {Xn} be
P =
[P00 P01 P02]
[P10 P11 P12]
[P20 P21 P22]
P00 = 0 (in state 0, bag B contains only red balls, so an interchange must bring a red ball into A).
P02 = P20 = 0 (a single interchange cannot increase or decrease the number of red balls in A by 2).
If Xn = 1, then A contains 1 red and 1 white ball, and B contains 1 white and 2 red balls. Then
P{Xn+1 = 0 / Xn = 1} = P10 = (1/2)(1/3) = 1/6
P{Xn+1 = 2 / Xn = 1} = P12 = (1/2)(2/3) = 1/3
Since P is a stochastic matrix, P10 + P11 + P12 = 1, so P11 = 1/2.
Similarly, from state 2 (A contains 2 red balls, B contains 1 red and 2 white), P21 = 2/3 and P22 = 1 − (P20 + P21) = 1/3.
Therefore
P =
[0   1   0  ]
[1/6 1/2 1/3]
[0   2/3 1/3]
Now P(0) = (1, 0, 0), as there is no red ball in A at the beginning.
P(1) = P(0) P = (0, 1, 0)
P(2) = P(1) P = (1/6, 1/2, 1/3)
P(3) = P(2) P = (1/12, 23/36, 5/18)
P{there are 2 red balls in bag A after 3 steps} = P{X3 = 2} = P2^(3) = 5/18.
Let the stationary probability distribution of the chain be π = (π0, π1, π2). By the property of π, πP = π and π0 + π1 + π2 = 1:
(1/6)π1 = π0
π0 + (1/2)π1 + (2/3)π2 = π1
(1/3)π1 + (1/3)π2 = π2
From the first equation, π1 = 6π0; from the third, π1 = 2π2, so π2 = 3π0.
Then π0 + 6π0 + 3π0 = 1 ⇒ 10π0 = 1.
Therefore π0 = 1/10, π1 = 6/10 and π2 = 3/10, so the steady-state probability distribution is π = (1/10, 6/10, 3/10). Thus in the long run, 30 percent of the time there will be two red balls in bag A.
2.4 Exercise
Two marks questions
1. Define Markov chain and one-step transition probability.
2. What is meant by steady-state distribution of Markov chain.
3. Define Markov process and example.
4. What is stochastic matrix? when it is said to be regular?
5. Define irreducible Markov chain and state Chapman-Kolmogorov theorem.
6. Find the invariant probabilities for the Markov chain {Xn; n ≥ 1} with state space S = {0, 1} and one-step tpm
P =
[1   0  ]
[1/2 1/2]
7. At an intersection, a working traffic light will be out of order the next day
with probability 0.07, and out of order traffic light will be working the next day
with probability 0.88. Let Xn = 1 if a day n the traffic will work; Xn =0 if on
day n the traffic light will not work. Is {Xn ; n = 0, 1, 2 . . .} a Markov chain?. If
so, write the transition probability matrix.
8. The tpm of a Markov chain with three states 0, 1, 2 is
P =
[3/4 1/4 0  ]
[1/4 1/2 1/4]
[0   3/4 1/4]
and the initial state distribution of the chain is P[X0 = i] = 1/3, i = 0, 1, 2. Find P[X3 = 1, X2 = 2, X1 = 1, X0 = 2].
9. Define recurrent state, absorbing state and transient state of a Markov chain.
10. Define regular and ergodic Markov chains.
Choose the Correct Answer
(1). All regular Markov chains are
(a)ergodic (b)Stationary (c)WSS (d)None
Ans:(a)
(2). If a Markov chain is finite and irreducible, all its states are
(a) transient (b) null persistent (c) non-null persistent (d) return states.
Ans:(c)
(3). If di =1,then state i is said to be
(a)periodic (b)return state (c)recurrent (d)aperiodic
Ans:(d)
(4). A non null persistent and aperiodic state is
(a)regular (b)ergodic (c)transient (d)mean recurrence time
Ans:(b)
(5). The sum of the elements of any row in the transition probability matrix of
a finite state Markov chain is
(a) 0 (b) 1 (c) 2 (d) 3
Ans:(b)
Chapter 3
Poisson Process
3.1 Basic Definitions
Definition 3.1.1 If X(t) represents the numbers of occurrences of certain events
in (0, t) , then the discrete random process {X(t)} is called the Poisson process,
provided the following postulates are satisfied,
(i) P [1 occurrence in (t, t + ∆t)] = λ∆t
(ii) P [0 occurrence in (t, t + ∆t)] = 1 − λ∆t
(iii) P [2 or more occurrence in (t, t + ∆t)] = 0
(iv) X(t) is independent of the number of occurrences of the event in any interval
prior and after the interval (0, t)
(v) The probability that the event occurs a specified number of times in (t0, t0 + t) depends only on t, but not on t0.
Example 3.1.2 (i) The arrival of customers at a bank.
(ii) The occurrence of lightning strikes within some prescribed area.
(iii) The failure of a component in a system.
(iv) The emission of an electron from the surface of a light-sensitive material.
Probability law for the Poisson process {X(t)}
Let λ be the mean number of occurrences of the event in unit time, and let Pn(t) = P[X(t) = n]. Then
Pn(t + ∆t) = P[X(t + ∆t) = n]
= P[(n − 1) calls in (0, t) and 1 call in (t, t + ∆t)] + P[n calls in (0, t) and no call in (t, t + ∆t)].
Therefore Pn(t + ∆t) = Pn−1(t) λ∆t + Pn(t)(1 − λ∆t)
Pn(t + ∆t) − Pn(t) = Pn−1(t) λ∆t − λPn(t)∆t = λ∆t [Pn−1(t) − Pn(t)]
Therefore [Pn(t + ∆t) − Pn(t)]/∆t = λ[Pn−1(t) − Pn(t)]
Taking the limit as ∆t → 0,
Pn′(t) = λ[Pn−1(t) − Pn(t)]        (1)
Let the solution of equation (1) be Pn(t) = (λt)^n f(t)/n!        (2)
Differentiating (2) with respect to t,
Pn′(t) = (λ^n/n!)[n t^{n−1} f(t) + t^n f′(t)]        (3)
Using (2) and (3) in (1), we get
(λ^n/n!)[n t^{n−1} f(t) + t^n f′(t)] = λ (λt)^{n−1} f(t)/(n−1)! − λ (λt)^n f(t)/n!
Therefore (λt)^n f′(t)/n! = −λ(λt)^n f(t)/n! ⇒ f′(t) = −λ f(t)
Therefore f′(t)/f(t) = −λ
Integrating, log f(t) = −λt + log k ⇒ f(t) = k e^{−λt}
From (2), P0(t) = f(t), i.e., f(0) = P0(0) = P[X(0) = 0] = P[no event occurs in (0, 0)] = 1.
But f(0) = k, therefore k = 1.
Hence f(t) = e^{−λt}, and
P[X(t) = n] = Pn(t) = e^{−λt}(λt)^n/n!, n = 0, 1, 2, . . .
This is the probability law for the Poisson process. It is to be observed that the probability distribution of X(t) is the Poisson distribution with parameter λt.
Mean of the Poisson process:
Mean = E[X(t)] = Σ_{n=0}^{∞} n Pn(t)
= Σ_{n=0}^{∞} n (λt)^n e^{−λt}/n!
= e^{−λt} Σ_{n=1}^{∞} (λt)^n/(n − 1)!
= e^{−λt} [λt + (λt)²/1! + (λt)³/2! + . . .]
= λt e^{−λt} [1 + λt/1! + (λt)²/2! + . . .]
= λt e^{−λt} e^{λt} = λt
Variance of the Poisson process:
Variance = E[X²(t)] − {E[X(t)]}²        (1)
E[X²(t)] = Σ_{n=0}^{∞} n² Pn(t) = Σ_{n=0}^{∞} n² e^{−λt}(λt)^n/n!
Now n² = n(n − 1) + n, hence
E[X²(t)] = Σ_{n=0}^{∞} [n(n − 1) + n] e^{−λt}(λt)^n/n!
= Σ_{n=0}^{∞} n(n − 1) e^{−λt}(λt)^n/n! + Σ_{n=0}^{∞} n e^{−λt}(λt)^n/n!
= e^{−λt} (λt)² Σ_{n=2}^{∞} (λt)^{n−2}/(n − 2)! + λt
= e^{−λt} [(λt)²/0! + (λt)³/1! + . . .] + λt
= e^{−λt} λ²t² e^{λt} + λt
= λ²t² + λt
Hence Var[X(t)] = λ²t² + λt − (λt)² = λt.
Auto-correlation of the Poisson process:
RXX(t, t + τ) = E[X(t)X(t + τ)], or RXX(t1, t2) = E[X(t1)X(t2)]
= E[X(t1){X(t2) − X(t1) + X(t1)}]
= E[X(t1)] E[X(t2) − X(t1)] + E[X²(t1)],
since {X(t)} is a process of independent increments. Hence, if t2 ≥ t1,
RXX(t1, t2) = λt1 · λ(t2 − t1) + λt1 + λ²t1²
⇒ RXX(t1, t2) = λ²t1t2 + λ min(t1, t2)
Auto-covariance of the Poisson process:
C(t1, t2) = R(t1, t2) − E[X(t1)] E[X(t2)]
= λ²t1t2 + λt1 − λ²t1t2 = λt1, if t2 ≥ t1.
Therefore C(t1, t2) = λ min{t1, t2}.
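These moment formulas can be checked by simulation. The sketch below (not from the text; the rate and time points are arbitrary choices) uses the independent-increment structure of the Poisson process to generate X(t1) and X(t2) and compares the sample moments with λt, λt and λ²t1t2 + λ min(t1, t2).

```python
import numpy as np

rng = np.random.default_rng(3)

lam = 2.0
t1, t2 = 1.5, 4.0
n_paths = 200_000

# X(t1) ~ Poisson(lam*t1); X(t2) = X(t1) + an independent Poisson(lam*(t2 - t1)) increment
x_t1 = rng.poisson(lam * t1, size=n_paths)
x_t2 = x_t1 + rng.poisson(lam * (t2 - t1), size=n_paths)

print(x_t1.mean(), lam * t1)                                          # mean ~ lam*t
print(x_t1.var(), lam * t1)                                           # variance ~ lam*t
print(np.mean(x_t1 * x_t2), lam**2 * t1 * t2 + lam * min(t1, t2))     # R(t1, t2)
```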
3.2 Properties of the Poisson Process
Property 1: A Poisson process is a Markov process.
Proof:
Consider P[X(t3) = n3 / X(t2) = n2, X(t1) = n1]
= P[X(t1) = n1, X(t2) = n2, X(t3) = n3] / P[X(t1) = n1, X(t2) = n2]
= e^{−λ(t3 − t2)} λ^{n3 − n2} (t3 − t2)^{n3 − n2} / (n3 − n2)!
= P[X(t3) = n3 / X(t2) = n2]
This means that the conditional probability distribution of X(t3), given all the past values X(t1) = n1, X(t2) = n2, depends only on the most recent value X(t2) = n2. Therefore the Poisson process possesses the Markov property; hence a Poisson process is a Markov process.
Property 2: The sum of two independent Poisson processes is a Poisson process.
Proof: Let X(t) = X1(t) + X2(t), where {X1(t)} and {X2(t)} are independent Poisson processes with parameters λ1 and λ2.
P[X(t) = n] = Σ_{k=0}^{n} P[X1(t) = k] P[X2(t) = n − k]
= Σ_{k=0}^{n} [e^{−λ1 t}(λ1 t)^k/k!] [e^{−λ2 t}(λ2 t)^{n−k}/(n − k)!]
= [e^{−(λ1+λ2)t}/n!] Σ_{k=0}^{n} [n!/(k!(n − k)!)] (λ1 t)^k (λ2 t)^{n−k}
= [e^{−(λ1+λ2)t}/n!] (λ1 t + λ2 t)^n, n = 0, 1, 2, . . .
Therefore X(t) = X1(t) + X2(t) is a Poisson process with parameter (λ1 + λ2)t. Hence the sum of two independent Poisson processes is also a Poisson process.
Property 3: The difference of two independent Poisson processes is not a Poisson process.
Proof:
Let X(t) = X1(t) − X2(t). Then
E[X(t)] = λ1 t − λ2 t = (λ1 − λ2)t
E[X²(t)] = E[X1²(t) − 2X1(t)X2(t) + X2²(t)]
= E[X1²(t)] − 2E[X1(t)]E[X2(t)] + E[X2²(t)], since X1(t) and X2(t) are independent
= λ1²t² + λ1 t + λ2²t² + λ2 t − 2λ1λ2 t²
= (λ1 + λ2)t + [(λ1 − λ2)t]²        (1)
For a Poisson process with parameter λt we know that E[X²(t)] = λt + λ²t²; so if X1(t) − X2(t) were a Poisson process, its second moment would be (λ1 − λ2)t + (λ1 − λ2)²t².
Expression (1) differs from this, which shows that X1(t) − X2(t) is not a Poisson process.
Property 4: The inter-arrival time of a Poisson process, i.e., the interval between two successive occurrences of a Poisson process with parameter λ, has an exponential distribution with mean 1/λ, i.e., with parameter λ.
Proof:
Let two consecutive occurrences of the event be Ei and Ei+1. Let Ei take place at time instant ti and let T be the interval between the occurrences of Ei and Ei+1. T is a continuous random variable.
P[T > t] = P{Ei+1 did not occur in (ti, ti + t)}
= P[no event occurs in an interval of length t]
= P[X(t) = 0] = e^{−λt}
Therefore the cumulative distribution function of T is given by
F(t) = P[T ≤ t] = 1 − P[T > t] = 1 − e^{−λt}
Therefore the pdf of T is f(t) = λe^{−λt}, t ≥ 0, which is the exponential distribution with parameter λ; i.e., T has an exponential distribution with mean 1/λ.
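This property can be illustrated numerically. The sketch below (not from the text) generates a Poisson process on [0, T] by placing a Poisson(λT) number of points uniformly at random, a standard equivalent construction, and then examines the gaps between successive occurrences; λ and T are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)

lam, T = 3.0, 50_000.0
n = rng.poisson(lam * T)                          # total number of occurrences in (0, T)
arrivals = np.sort(rng.uniform(0.0, T, size=n))   # occurrence instants

gaps = np.diff(arrivals)                          # inter-arrival times
print(gaps.mean())             # ~ 1/lam, the exponential mean
print(np.mean(gaps > 1.0))     # ~ exp(-lam), cf. Example 3.3.3(1)
```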
Property 5: If the number of occurrences of an event E in an interval of length t is a Poisson process {X(t)} with parameter λ, and if each occurrence of E has a constant probability p of being recorded and the recordings are independent of each other, then the number N(t) of recorded occurrences in time t is also a Poisson process, with parameter λp.
Hence P[N(t) = n] = e^{−λpt}(λpt)^n/n!, n = 0, 1, 2, . . .
3.3 Examples
Example 3.3.1 If {N1(t)} and {N2(t)} are two independent Poisson processes with parameters λ1 and λ2 respectively, show that P[N1(t) = k / N1(t) + N2(t) = n] = nCk p^k q^{n−k}, where p = λ1/(λ1 + λ2) and q = λ2/(λ1 + λ2).
Solution: By definition, P[N1(t) = r] = e^{−λ1 t}(λ1 t)^r / r!, r = 0, 1, 2, . . . and
P[N2(t) = r] = e^{−λ2 t}(λ2 t)^r / r!, r = 0, 1, 2, . . .
P[N1(t) = k / N1(t) + N2(t) = n]
= P[N1(t) = k ∩ N1(t) + N2(t) = n] / P[N1(t) + N2(t) = n]
= P[N1(t) = k ∩ N2(t) = n − k] / P[N1(t) + N2(t) = n]
= P[N1(t) = k] P[N2(t) = n − k] / P[N1(t) + N2(t) = n]
= n! e^{−λ1 t}(λ1 t)^k e^{−λ2 t}(λ2 t)^{n−k} / {k!(n − k)! e^{−(λ1+λ2)t}[(λ1 + λ2)t]^n}
= nCk λ1^k λ2^{n−k} / (λ1 + λ2)^n
= nCk (λ1/(λ1 + λ2))^k (λ2/(λ1 + λ2))^{n−k}
Taking p = λ1/(λ1 + λ2), q = 1 − p = λ2/(λ1 + λ2), we have
P[N1(t) = k / N1(t) + N2(t) = n] = nCk p^k q^{n−k}.
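A Monte-Carlo check of this binomial conditional distribution is straightforward. In the Python sketch below the values λ1 = 2, λ2 = 3, t = 1 and n = 5 are assumed purely for illustration; the conditional mean and variance of N1(t) are compared with np and npq.

    import numpy as np

    lam1, lam2, t, n = 2.0, 3.0, 1.0, 5    # assumed illustrative values
    rng = np.random.default_rng(5)
    m = 500_000

    n1 = rng.poisson(lam1 * t, m)
    n2 = rng.poisson(lam2 * t, m)
    cond = n1[n1 + n2 == n]                # samples of N1(t) given N1(t)+N2(t) = n

    p = lam1 / (lam1 + lam2)
    print(cond.mean(), n * p)              # binomial mean n*p = 2.0
    print(cond.var(), n * p * (1 - p))     # binomial variance n*p*q = 1.2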
Example 3.3.2 A radioactive source emits particles at a rate of 6 per minute in accordance with a Poisson process. Each particle emitted has a probability 1/3 of being recorded. Find the probability that at least 5 particles are recorded in a 5-minute period.
Solution: Let N(t) be the number of recorded particles. Then {N(t)} is a Poisson process with λp as parameter. Now λp = 6(1/3) = 2.
P[N(t) = n] = e^{−2t}(2t)^n / n!,  n = 0, 1, 2, . . .
Therefore P[at least 5 particles are recorded in a 5-minute period] = P[N(5) ≥ 5]
= 1 − P[N(5) < 5]
= 1 − {P[N(5) = 0] + P[N(5) = 1] + P[N(5) = 2] + P[N(5) = 3] + P[N(5) = 4]}
= 1 − e^{−10}[1 + 10 + 10²/2! + 10³/3! + 10⁴/4!]
= 0.9707.
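The numerical value can be reproduced directly from the Poisson tail; a quick Python check, using the mean 2 × 5 = 10 of the recorded count over the 5-minute window:

    import math

    mu = 2 * 5                                     # mean number recorded in 5 minutes
    p_less_than_5 = sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(5))
    print(1 - p_less_than_5)                       # ~ 0.9707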
Example 3.3.3 If customers arrive at a counter in accordance with a Poisson
process with a rate of 3 per minute, find the probability that the interval between
2 consecutive arrivals is
1. more than 1 minute
2. between 1 minute and 2 minutes
3. 4 minutes or less
Solution: By Property 4 of the Poisson process, the interval T between 2 consecutive arrivals follows an exponential distribution with parameter λ = 3.
The pdf of the exponential distribution is f(t) = λe^{−λt} = 3e^{−3t}, t ≥ 0.
1. P(T > 1) = 3 ∫_1^∞ e^{−3t} dt = [−e^{−3t}]_1^∞ = e^{−3}
2. P(1 < T < 2) = ∫_1^2 3e^{−3t} dt = [−e^{−3t}]_1^2 = e^{−3} − e^{−6}
3. P(T ≤ 4) = 3 ∫_0^4 e^{−3t} dt = [−e^{−3t}]_0^4 = 1 − e^{−12}
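The three closed-form answers evaluate numerically as follows (a quick Python check of the expressions above):

    import math

    lam = 3.0
    print(math.exp(-lam * 1))                       # P(T > 1)      ~ 0.0498
    print(math.exp(-lam * 1) - math.exp(-lam * 2))  # P(1 < T < 2)  ~ 0.0473
    print(1 - math.exp(-lam * 4))                   # P(T <= 4)     ~ 0.999994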
Example 3.3.4 A machine goes out of order whenever a component fails. The failure of this component follows a Poisson process with a mean rate of 1 per week. Find the probability that 2 weeks elapse without a failure after the last failure. If there are 5 spare parts of this component in the inventory and the next supply is not due for 10 weeks, find the probability that the machine will not be out of order in the next 10 weeks.
Solution: Here the unit time is t = 1 week.
Mean failure rate = mean number of failures per week, i.e., λ = 1.
1. P[no failure in the 2 weeks since the last failure] = P[X(2) = 0]
= e^{−2}(2)^0 / 0!
= e^{−2} = 0.135
2. There are only 5 spare parts, and the machine does not go out of order in the next 10 weeks as long as at most 5 failures occur in that period.
Therefore P[this event] = P[X(10) ≤ 5]
= Σ_{n=0}^{5} e^{−10}(10)^n / n!
= e^{−10}[1 + 10 + 10²/2! + 10³/3! + 10⁴/4! + 10⁵/5!]
= 0.067.
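Both answers are quick to verify numerically (a Python check of the two expressions above):

    import math

    print(math.exp(-2))                    # part 1: ~ 0.135
    mu = 10
    print(sum(math.exp(-mu) * mu**n / math.factorial(n) for n in range(6)))  # part 2: ~ 0.067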
Example 3.3.5 What will be the superposition of n independent Poisson processes with respective average rates λ1, λ2, . . . , λn?
Solution: The superposition of n independent Poisson processes with average rates λ1, λ2, . . . , λn is another Poisson process with average rate λ1 + λ2 + · · · + λn.
Example 3.3.6 If {X(t)} is a Poisson process, prove that
P[X(s) = r / X(t) = n] = nCr (s/t)^r (1 − s/t)^{n−r}, where s < t.
Solution:
P[X(s) = r / X(t) = n]
= P[X(s) = r ∩ X(t) = n] / P[X(t) = n]
= P[X(s) = r ∩ X(t) − X(s) = n − r] / P[X(t) = n]
= P[X(s) = r] P[X(t − s) = n − r] / P[X(t) = n],
since {X(t)} is a process of stationary independent increments.
Therefore P[X(s) = r / X(t) = n] = {e^{−λs}(λs)^r / r!}{e^{−λ(t−s)}[λ(t − s)]^{n−r} / (n − r)!} · n! / {e^{−λt}(λt)^n}
= [n! / (r!(n − r)!)] s^r (t − s)^{n−r} / t^n
= [n! / (r!(n − r)!)] (s/t)^r ((t − s)/t)^{n−r}
Hence P[X(s) = r / X(t) = n] = nCr (s/t)^r (1 − s/t)^{n−r}.
Example 3.3.7 A fisherman catches fish at a Poisson rate of 2 per hour from a large lake with lots of fish. If he starts fishing at 10:00 am, what is the probability that he catches one fish by 10:30 am and 3 fish by noon?
Solution: Let N(t) be the total number of fish caught at or before time t (in hours after 10:00 am).
Then N(0) = 0 and {N(t) : t ≥ 0} has stationary and independent increments.
Therefore {N(t) : t ≥ 0} is a Poisson process. Here λ = 2, so
P[N(t) = n] = e^{−2t}(2t)^n / n!,  n = 0, 1, 2, . . .
The required probability is
P[N(1/2) = 1 and N(2) = 3] = P[N(1/2) = 1] P[N(2) − N(1/2) = 2]
Since the increment N(2) − N(1/2) is Poisson with mean 2(3/2) = 3, this equals
{e^{−1}(1)^1 / 1!}{e^{−3}(3)^2 / 2!} = 4.5 e^{−4} = 0.082
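The value 0.082 can be confirmed from the closed form and by simulation; in the Python sketch below the Monte-Carlo part is purely illustrative and relies on the independent-increment decomposition used above.

    import math
    import numpy as np

    print(4.5 * math.exp(-4))                  # exact: ~ 0.0824

    rng = np.random.default_rng(6)
    m = 500_000
    first_half_hour = rng.poisson(2 * 0.5, m)  # N(1/2)
    rest_until_noon = rng.poisson(2 * 1.5, m)  # N(2) - N(1/2), independent increment
    hits = (first_half_hour == 1) & (rest_until_noon == 2)
    print(hits.mean())                         # ~ 0.082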
3.4
Birth and Death Process:
We now turn to a random process that has wide applications in several fields of natural phenomena, such as queueing problems, telephone exchanges, traffic maintenance, the spread of epidemics, and population growth. This process is the birth and death process.
Definition 3.4.1 If X(t) represents the number of individuals present at time t in a population [or the size of the population at time t] in which two types of events occur, one representing birth, which contributes to its increase, and the other representing death, which contributes to its decrease, then the discrete random process {X(t)} is called the birth and death process, provided the two events, namely birth and death, are governed by the following postulates:
(i) P[1 birth in (t, t + ∆t)] = λn(t) ∆t + o(∆t)
(ii) P[0 births in (t, t + ∆t)] = 1 − λn(t) ∆t + o(∆t)
(iii) P[2 or more births in (t, t + ∆t)] = o(∆t)
(iv) Births occurring in (t, t + ∆t) are independent of the time since the last birth.
(v) P[1 death in (t, t + ∆t)] = µn(t) ∆t + o(∆t)
(vi) P[0 deaths in (t, t + ∆t)] = 1 − µn(t) ∆t + o(∆t)
(vii) P[2 or more deaths in (t, t + ∆t)] = o(∆t)
(viii) Deaths occurring in (t, t + ∆t) are independent of the time since the last death.
(ix) Births and deaths occur independently of each other at any time.
Probability Distribution of X(t)
Let Pn(t) = P[X(t) = n] = probability that the size of the population is n at time t. Then Pn(t + ∆t) = P[X(t + ∆t) = n] = probability that the size of the population is n at time t + ∆t. Now the event X(t + ∆t) = n can happen in any one of the following four mutually exclusive ways:
(i) X(t) = n and no birth or death in (t, t + ∆t)
(ii) X(t) = n − 1 and 1 birth and no death in (t, t + ∆t)
(iii) X(t) = n + 1 and no birth and 1 death in (t, t + ∆t)
(iv) X(t) = n and 1 birth and 1 death in (t, t + ∆t)
Therefore
Pn(t + ∆t) = Pn(t)(1 − λn ∆t)(1 − µn ∆t) + Pn−1(t) λn−1 ∆t (1 − µn−1 ∆t)
+ Pn+1(t)(1 − λn+1 ∆t) µn+1 ∆t + Pn(t) λn ∆t µn ∆t
Omitting terms containing (∆t)², we get
Pn(t + ∆t) = Pn(t) − (λn + µn)Pn(t)∆t + λn−1 Pn−1(t)∆t + µn+1 Pn+1(t)∆t
Pn(t + ∆t) − Pn(t) = [−(λn + µn)Pn(t) + λn−1 Pn−1(t) + µn+1 Pn+1(t)]∆t
Therefore
[Pn(t + ∆t) − Pn(t)] / ∆t = λn−1 Pn−1(t) − (λn + µn)Pn(t) + µn+1 Pn+1(t)    (1)
Taking limits on both sides of (1) as ∆t → 0, we get
Pn′(t) = λn−1 Pn−1(t) − (λn + µn)Pn(t) + µn+1 Pn+1(t)    (2)
Expression (2) holds good for n ≥ 1. It is not valid when n = 0, as no death is possible in (t, t + ∆t) and X(t) = n − 1 = −1 is not possible.
Therefore P0(t + ∆t) = P0(t)(1 − λ0 ∆t) + P1(t)(1 − λ1 ∆t) µ1 ∆t
P0(t + ∆t) − P0(t) = −P0(t) λ0 ∆t + P1(t) µ1 ∆t − P1(t) λ1 µ1 (∆t)²
Omitting terms containing (∆t)²,
P0(t + ∆t) − P0(t) = −P0(t) λ0 ∆t + P1(t) µ1 ∆t
[P0(t + ∆t) − P0(t)] / ∆t = −λ0 P0(t) + µ1 P1(t)    (3)
Taking limits as ∆t → 0 in (3), we get
P0′(t) = −λ0 P0(t) + µ1 P1(t)    (4)
On solving equations (2) and (4), we get Pn(t) = P[X(t) = n], n ≥ 0, the probability distribution of X(t).
Steady State Solution:
The steady state solution of this process can be obtained by setting dPn(t)/dt = 0.
Then the above differential-difference equations (2) and (4) reduce to
(λn + µn)Pn(t) = λn−1 Pn−1(t) + µn+1 Pn+1(t) and −λ0 P0(t) + µ1 P1(t) = 0
Using Pn(t) = Pn, Pn−1(t) = Pn−1 and Pn+1(t) = Pn+1, we have
(λn + µn)Pn = λn−1 Pn−1 + µn+1 Pn+1, n ≥ 1    (5)
and λ0 P0 = µ1 P1    (6)
These equations (5) and (6) are known as the balance equations.
By re-arranging equation (5), we get
λn Pn − µn+1 Pn+1 = λn−1 Pn−1 − µn Pn = . . . = λ0 P0 − µ1 P1
But from (6), λ0 P0 = µ1 P1, so λ0 P0 − µ1 P1 = 0 and P1 = (λ0/µ1) P0.
It follows that λn−1 Pn−1 = µn Pn and hence Pn = (λn−1/µn) Pn−1    (7)
This is true for n ≥ 1. From (7), we have the following:
P2 = (λ1/µ2) P1 = (λ1/µ2)(λ0/µ1) P0 ⇒ P2 = (λ0 λ1/(µ1 µ2)) P0
P3 = (λ2/µ3) P2 ⇒ P3 = (λ0 λ1 λ2/(µ1 µ2 µ3)) P0
Proceeding, Pn = (λ0 λ1 λ2 . . . λn−1/(µ1 µ2 µ3 . . . µn)) P0 ⇒ Pn = P0 Π_{i=0}^{n−1} (λ_i/µ_{i+1}), n ≥ 1    (8)
Since Σ_{n≥0} Pn = 1, we have P0 [1 + Σ_{n=1}^{∞} Π_{i=0}^{n−1} (λ_i/µ_{i+1})] = 1.
Therefore P0 = 1 / [1 + Σ_{n=1}^{∞} Π_{i=0}^{n−1} (λ_i/µ_{i+1})]    (9)
Therefore the steady state solutions of the Birth-Death process are given by
Pn = P0 Π_{i=0}^{n−1} (λ_i/µ_{i+1}), n ≥ 1, and
P0 = 1 / [1 + Σ_{n=1}^{∞} Π_{i=0}^{n−1} (λ_i/µ_{i+1})]
Thus the limiting distribution (P0, P1, . . .) is now completely determined, and these are non-zero provided that the series Σ_{n=1}^{∞} Π_{i=0}^{n−1} (λ_i/µ_{i+1}) converges.
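The product formula is easy to evaluate numerically. The Python sketch below is illustrative only: it assumes constant rates λn = 1 and µn = 2 for all n and truncates the state space at n = 50. With these constant rates the steady-state probabilities reduce to the geometric distribution Pn = (1 − λ/µ)(λ/µ)^n, which the output should approximate.

    import numpy as np

    # Assumed illustrative rates: lambda_n = 1 and mu_n = 2 for every n
    lam = np.full(50, 1.0)        # lam[i] = lambda_i, i = 0..49
    mu = np.full(50, 2.0)         # mu[i]  = mu_{i+1}, i = 0..49

    # Unnormalised terms prod_{i=0}^{n-1} lambda_i / mu_{i+1}, with the term 1 for n = 0
    terms = np.concatenate(([1.0], np.cumprod(lam / mu)))

    p = terms / terms.sum()       # P0, P1, ..., truncated at n = 50
    print(p[:5])                  # ~ [0.5, 0.25, 0.125, 0.0625, 0.03125]
    print(p.sum())                # ~ 1.0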
3.5
Exercise
Choose the correct answer
1. The Poisson process is
(a) Stationary  (b) Markovian  (c) Continuous random process  (d) None
Ans: (b)
2. Correlation coefficient of the Poisson process is
(a) √(t1/t2)  (b) √(t2/t1)  (c) λt  (d) None
Ans: (a)
3. Consider a computer system with a Poisson job arrival stream at an average rate of 60 per hour. What is the probability that the time interval between successive job arrivals is longer than 4 minutes?
(a) 0.9997  (b) 0.1328  (c) 0.0183  (d) 0.1353
Ans: (c)
4. A birth and death process is a
(a) Ergodic  (b) finite Markovian  (c) Stationary  (d) None
Ans: (b)
5. Which one of the following is a Poisson process?
(a) The arrival of customers at a bank  (b) Random walk with reflecting barriers  (c) The time duration between consecutive cars passing a fixed location on a road
Ans: (a)
Two marks questions
1. Define the Poisson process and give an example.
2. What are the postulates of a Poisson process?
3. Prove that the Poisson process is a Markov process.
4. Prove that the difference of two independent Poisson processes is not a Poisson process.
5. Define the Birth and Death process.
6. Prove that the sum of independent Poisson processes is a Poisson process.
7. What will be the superposition of n independent Poisson processes with respective average rates λ1, λ2, . . . , λn?