M5A42 APPLIED STOCHASTIC PROCESSES
PROBLEM SHEET 1 SOLUTIONS
Term 1 2010-2011
1. Calculate the mean, variance and characteristic function of the random variables with the following probability density functions.
(a) The exponential distribution with density

    f(x) = { λ e^{−λx},   x > 0,
           { 0,           x < 0,

with λ > 0.
(b) The uniform distribution with density

    f(x) = { 1/(b − a),   a < x < b,
           { 0,           x ∉ (a, b),

with a < b.
(c) The Gamma distribution with density

    f(x) = { (λ/Γ(α)) (λx)^{α−1} e^{−λx},   x > 0,
           { 0,                             x < 0,

with λ > 0, α > 0, where Γ(α) is the Gamma function

    Γ(α) = ∫_0^{+∞} ξ^{α−1} e^{−ξ} dξ,   α > 0.
SOLUTION
(a)

    E(X) = ∫_{−∞}^{+∞} x f(x) dx = λ ∫_0^{+∞} x e^{−λx} dx = 1/λ,

    E(X²) = ∫_{−∞}^{+∞} x² f(x) dx = λ ∫_0^{+∞} x² e^{−λx} dx = 2/λ².

Consequently,

    var(X) = E(X²) − (EX)² = 1/λ².

The characteristic function is

    φ(t) = E(e^{itX}) = λ ∫_0^{+∞} e^{itx} e^{−λx} dx = λ/(λ − it).
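A quick numerical sanity check of (a) (a sketch, not part of the original solutions; λ = 2.5 and t = 0.7 are arbitrary choices), using scipy's quad for the integrals:

    import numpy as np
    from scipy.integrate import quad

    lam, t = 2.5, 0.7
    f = lambda x: lam * np.exp(-lam * x)              # exponential density
    mean = quad(lambda x: x * f(x), 0, np.inf)[0]
    second = quad(lambda x: x**2 * f(x), 0, np.inf)[0]
    # characteristic function: real and imaginary parts separately
    phi = (quad(lambda x: np.cos(t * x) * f(x), 0, np.inf)[0]
           + 1j * quad(lambda x: np.sin(t * x) * f(x), 0, np.inf)[0])
    assert np.isclose(mean, 1 / lam)                  # E(X) = 1/λ
    assert np.isclose(second - mean**2, 1 / lam**2)   # var(X) = 1/λ²
    assert np.isclose(phi, lam / (lam - 1j * t))      # φ(t) = λ/(λ − it)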
(b)

    E(X) = ∫_{−∞}^{+∞} x f(x) dx = ∫_a^b x/(b − a) dx = (a + b)/2,

    E(X²) = ∫_{−∞}^{+∞} x² f(x) dx = ∫_a^b x²/(b − a) dx = (b² + ab + a²)/3.

Consequently,

    var(X) = E(X²) − (EX)² = (b − a)²/12.

The characteristic function is

    φ(t) = E(e^{itX}) = (1/(b − a)) ∫_a^b e^{itx} dx = (e^{itb} − e^{ita})/(it(b − a)).
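The same kind of check for (b) (again a sketch with arbitrary values a = −1, b = 3, t = 0.9):

    import numpy as np
    from scipy.integrate import quad

    a, b, t = -1.0, 3.0, 0.9
    f = lambda x: 1.0 / (b - a)                       # uniform density on (a, b)
    mean = quad(lambda x: x * f(x), a, b)[0]
    second = quad(lambda x: x**2 * f(x), a, b)[0]
    phi = (quad(lambda x: np.cos(t * x) * f(x), a, b)[0]
           + 1j * quad(lambda x: np.sin(t * x) * f(x), a, b)[0])
    assert np.isclose(mean, (a + b) / 2)
    assert np.isclose(second - mean**2, (b - a)**2 / 12)
    assert np.isclose(phi, (np.exp(1j * t * b) - np.exp(1j * t * a)) / (1j * t * (b - a)))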
(c)

    E(X) = (λ^α/Γ(α)) ∫_0^{+∞} x^α e^{−λx} dx = Γ(α + 1)/(λ Γ(α)) = α/λ,

    E(X²) = (λ^α/Γ(α)) ∫_0^{+∞} x^{α+1} e^{−λx} dx = Γ(α + 2)/(λ² Γ(α)) = α(α + 1)/λ².

Consequently,

    var(X) = E(X²) − (EX)² = α/λ².

The characteristic function is

    φ(t) = E(e^{itX}) = (λ^α/Γ(α)) ∫_0^{+∞} e^{itx} x^{α−1} e^{−λx} dx
         = (λ^α/Γ(α)) (1/(λ − it)^α) ∫_0^{+∞} e^{−y} y^{α−1} dy
         = λ^α/(λ − it)^α,

where we substituted y = (λ − it)x.
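And for (c), with the arbitrary choices λ = 1.5, α = 2.3, t = 0.4 (a sketch, not part of the original solutions):

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import gamma as Gamma

    lam, alpha, t = 1.5, 2.3, 0.4
    f = lambda x: lam / Gamma(alpha) * (lam * x)**(alpha - 1) * np.exp(-lam * x)
    mean = quad(lambda x: x * f(x), 0, np.inf)[0]
    second = quad(lambda x: x**2 * f(x), 0, np.inf)[0]
    phi = (quad(lambda x: np.cos(t * x) * f(x), 0, np.inf)[0]
           + 1j * quad(lambda x: np.sin(t * x) * f(x), 0, np.inf)[0])
    assert np.isclose(mean, alpha / lam)                   # E(X) = α/λ
    assert np.isclose(second - mean**2, alpha / lam**2)    # var(X) = α/λ²
    assert np.isclose(phi, (lam / (lam - 1j * t))**alpha)  # φ(t) = (λ/(λ − it))^α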
2.
(a) Let X be a continuous random variable with characteristic function φ(t). Show that
    E(X^k) = (1/i^k) φ^{(k)}(0),

where φ^{(k)}(t) denotes the k-th derivative of φ evaluated at t.
(b) Let X be a nonnegative random variable with distribution function F(x). Show that

    E(X) = ∫_0^{+∞} (1 − F(x)) dx.
(c) Let X be a continuous random variable with probability density function f(x) and characteristic function φ(t). Find the probability density and characteristic function of the random variable Y = aX + b with a, b ∈ ℝ.
(d) Let X be a random variable with uniform distribution on [0, 2π]. Find the probability density of the random variable Y = sin(X).
SOLUTION
(a) We have

    φ(t) = E(e^{itX}) = ∫_ℝ e^{itx} f(x) dx.

Consequently, differentiating k times under the integral sign,

    φ^{(k)}(t) = ∫_ℝ (ix)^k e^{itx} f(x) dx.

Thus

    φ^{(k)}(0) = ∫_ℝ (ix)^k f(x) dx = i^k E(X^k),

and E(X^k) = (1/i^k) φ^{(k)}(0).
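A symbolic illustration of this identity (a sketch, not part of the original solutions), using the exponential characteristic function φ(t) = λ/(λ − it) from question 1 with the arbitrary choice λ = 2; the k-th moment of the exponential distribution is k!/λ^k:

    import sympy as sp

    t = sp.symbols('t', real=True)
    lam = sp.Integer(2)
    phi = lam / (lam - sp.I * t)                  # φ(t) for Exp(λ)
    for k in (1, 2, 3):
        moment = sp.diff(phi, t, k).subs(t, 0) / sp.I**k   # φ^(k)(0)/i^k
        assert sp.simplify(moment - sp.factorial(k) / lam**k) == 0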
(b) Let R > 0 and consider the truncated mean

    ∫_0^R x f(x) dx = ∫_0^R x (dF/dx) dx
                    = x F(x)|_0^R − ∫_0^R F(x) dx
                    = ∫_0^R (F(R) − F(x)) dx,

integrating by parts. Thus,

    EX = lim_{R→∞} ∫_0^R x f(x) dx = lim_{R→∞} ∫_0^R (F(R) − F(x)) dx = ∫_0^{+∞} (1 − F(x)) dx,

where the fact lim_{x→∞} F(x) = 1 was used.
Alternatively, exchanging the order of integration (Fubini's theorem):

    ∫_0^{+∞} (1 − F(x)) dx = ∫_0^{+∞} ∫_x^{+∞} f(y) dy dx
                           = ∫_0^{+∞} ∫_0^y f(y) dx dy
                           = ∫_0^{+∞} y f(y) dy = EX.
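A numerical check of the tail formula for the exponential distribution (a sketch; λ = 1.7 is arbitrary):

    import numpy as np
    from scipy.integrate import quad

    lam = 1.7
    F = lambda x: 1 - np.exp(-lam * x)                    # distribution function of Exp(λ)
    tail_integral = quad(lambda x: 1 - F(x), 0, np.inf)[0]
    assert np.isclose(tail_integral, 1 / lam)             # E(X) = 1/λ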
(c) Assume first that a > 0. We have

    P(Y ≤ y) = P(aX + b ≤ y) = P(X ≤ (y − b)/a) = ∫_{−∞}^{(y−b)/a} f(x) dx.

Consequently,

    f_Y(y) = (∂/∂y) P(Y ≤ y) = (1/a) f((y − b)/a).

For a < 0 the inequality is reversed, and in general f_Y(y) = (1/|a|) f((y − b)/a). Similarly,

    φ_Y(t) = E(e^{itY}) = E(e^{it(aX+b)}) = e^{itb} E(e^{itaX}) = e^{itb} φ(at).
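A Monte Carlo check of the relation φ_Y(t) = e^{itb} φ(at) (a sketch, not part of the original solutions), taking X standard normal so that φ(t) = e^{−t²/2}:

    import numpy as np

    rng = np.random.default_rng(0)
    a, b, t = 2.0, -1.0, 0.6
    x = rng.standard_normal(200_000)
    y = a * x + b
    phi_emp = np.mean(np.exp(1j * t * y))                     # empirical E(e^{itY})
    phi_pred = np.exp(1j * t * b) * np.exp(-(a * t)**2 / 2)   # e^{itb} φ(at)
    assert abs(phi_emp - phi_pred) < 0.01                     # Monte Carlo tolerance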
(d) The density of the random variable X is

    f_X(x) = { 1/(2π),   x ∈ [0, 2π],
             { 0,        x ∉ [0, 2π].

The distribution function is

    F_X(x) = { 0,        x < 0,
             { x/(2π),   x ∈ [0, 2π],
             { 1,        x > 2π.

The random variable Y takes values in [−1, 1]. Hence, P(Y ≤ y) = 0 for y ≤ −1 and P(Y ≤ y) = 1 for y ≥ 1. Let now y ∈ (−1, 1). We have

    F_Y(y) = P(Y ≤ y) = P(sin(X) ≤ y).

The equation sin(x) = y has two solutions in the interval [0, 2π]: x = arcsin(y) and x = π − arcsin(y) for y > 0, and x = π − arcsin(y) and x = 2π + arcsin(y) for y < 0. In both cases the set {x ∈ [0, 2π] : sin(x) ≤ y} has total length π + 2 arcsin(y). Hence,

    F_Y(y) = (π + 2 arcsin(y))/(2π),   y ∈ (−1, 1).

The distribution function of Y is therefore

    F_Y(y) = { 0,                         y ≤ −1,
             { (π + 2 arcsin(y))/(2π),    y ∈ (−1, 1),
             { 1,                         y ≥ 1.

We differentiate the above expression to obtain the probability density:

    f_Y(y) = { 1/(π √(1 − y²)),   y ∈ (−1, 1),
             { 0,                 y ∉ (−1, 1).
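An empirical check of the distribution function derived above (a sketch; the sample size and test points are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    y = np.sin(rng.uniform(0, 2 * np.pi, 500_000))        # Y = sin(X), X uniform on [0, 2π]
    for q in (-0.9, -0.3, 0.2, 0.8):
        F_emp = np.mean(y <= q)                           # empirical distribution function
        F_exact = (np.pi + 2 * np.arcsin(q)) / (2 * np.pi)
        assert abs(F_emp - F_exact) < 0.005               # Monte Carlo tolerance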
3. Let X be a discrete random variable taking values in the set of nonnegative integers with probability mass function p_k = P(X = k), where p_k ≥ 0 and Σ_{k=0}^{+∞} p_k = 1. The generating function is defined as

    g(s) = E(s^X) = Σ_{k=0}^{+∞} p_k s^k.

(a) Show that

    EX = g′(1) and EX² = g″(1) + g′(1),

where the prime denotes differentiation.
(b) Calculate the generating function of the Poisson random variable with

    p_k = P(X = k) = e^{−λ} λ^k/k!,   k = 0, 1, 2, . . . ,   λ > 0.

(c) Prove that the generating function of a sum of independent nonnegative integer valued random variables is the product of their generating functions.
SOLUTION
(a) We have

    g′(s) = Σ_{k=0}^{+∞} k p_k s^{k−1}   and   g″(s) = Σ_{k=0}^{+∞} k(k − 1) p_k s^{k−2}.

Hence

    g′(1) = Σ_{k=0}^{+∞} k p_k = EX

and

    g″(1) = Σ_{k=0}^{+∞} k² p_k − Σ_{k=0}^{+∞} k p_k = EX² − g′(1),

from which it follows that

    EX² = g″(1) + g′(1).
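A symbolic check of part (a) (a sketch, not part of the original solutions) for an arbitrary probability mass function on {0, 1, 2, 3}:

    import sympy as sp

    s = sp.symbols('s')
    p = [sp.Rational(1, 10), sp.Rational(2, 5), sp.Rational(3, 10), sp.Rational(1, 5)]
    g = sum(pk * s**k for k, pk in enumerate(p))          # generating function g(s)
    EX = sum(k * pk for k, pk in enumerate(p))
    EX2 = sum(k**2 * pk for k, pk in enumerate(p))
    g1 = sp.diff(g, s).subs(s, 1)
    g2 = sp.diff(g, s, 2).subs(s, 1)
    assert g1 == EX                                       # EX = g'(1)
    assert g2 + g1 == EX2                                 # EX² = g''(1) + g'(1)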
(b) We calculate

    g(s) = Σ_{k=0}^{+∞} (e^{−λ} λ^k/k!) s^k = e^{−λ} Σ_{k=0}^{+∞} (λs)^k/k! = e^{−λ} e^{λs} = e^{λ(s−1)}.
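The same computation can be reproduced symbolically (a sketch; sympy evaluates the exponential series directly):

    import sympy as sp

    s, lam = sp.symbols('s lambda', positive=True)
    k = sp.symbols('k', integer=True, nonnegative=True)
    g = sp.exp(-lam) * sp.summation((lam * s)**k / sp.factorial(k), (k, 0, sp.oo))
    assert sp.simplify(g - sp.exp(lam * (s - 1))) == 0    # g(s) = e^{λ(s−1)}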
(c) Consider the independent nonnegative integer valued random variables X_i, i = 1, . . . , d. Since the X_i's are independent, so are the random variables s^{X_i}, i = 1, . . . , d. Consequently,

    g_{Σ_{i=1}^d X_i}(s) = E(s^{Σ_{i=1}^d X_i}) = Π_{i=1}^d E(s^{X_i}) = Π_{i=1}^d g_{X_i}(s).
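A numerical illustration of part (c) (a sketch, not part of the original solutions): the pmf of a sum of two independent variables is the convolution of their pmfs, so the generating functions must multiply.

    import numpy as np

    p = np.array([0.2, 0.3, 0.5])                         # pmf of X on {0, 1, 2}
    q = np.array([0.1, 0.6, 0.3])                         # pmf of Y on {0, 1, 2}
    r = np.convolve(p, q)                                 # pmf of X + Y (X, Y independent)
    s = 0.7                                               # arbitrary evaluation point
    g = lambda pmf: np.polynomial.polynomial.polyval(s, pmf)   # Σ_k pmf[k] s^k
    assert np.isclose(g(r), g(p) * g(q))                  # g_{X+Y}(s) = g_X(s) g_Y(s)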
4. Let b ∈ ℝⁿ and let Σ ∈ ℝ^{n×n} be a symmetric and positive definite matrix. Let X be the multivariate Gaussian random variable with probability density function

    γ(x) = (1/((2π)^{n/2} √(det(Σ)))) exp(−(1/2) ⟨Σ^{−1}(x − b), x − b⟩).

(a) Show that

    ∫_{ℝⁿ} γ(x) dx = 1.

(b) Calculate the mean and the covariance matrix of X.
(c) Calculate the characteristic function of X.
SOLUTION
(a) From the spectral theorem for symmetric positive definite matrices, there exist a diagonal matrix Λ with positive entries λ_1, . . . , λ_n and an orthogonal matrix B such that

    Σ^{−1} = Bᵀ Λ B.

Let z = x − b and y = Bz. We have

    ⟨Σ^{−1} z, z⟩ = ⟨Bᵀ Λ B z, z⟩ = ⟨Λ B z, B z⟩ = ⟨Λ y, y⟩ = Σ_{i=1}^n λ_i y_i².

Furthermore, det(Σ^{−1}) = Π_{i=1}^n λ_i, so that det(Σ) = Π_{i=1}^n λ_i^{−1}, and the Jacobian of an orthogonal transformation satisfies |J| = |det(B)| = 1. Hence,

    ∫_{ℝⁿ} exp(−(1/2) ⟨Σ^{−1}(x − b), x − b⟩) dx = ∫_{ℝⁿ} exp(−(1/2) ⟨Σ^{−1} z, z⟩) dz
        = ∫_{ℝⁿ} exp(−(1/2) Σ_{i=1}^n λ_i y_i²) |J| dy
        = Π_{i=1}^n ∫_ℝ exp(−(1/2) λ_i y_i²) dy_i
        = (2π)^{n/2} Π_{i=1}^n λ_i^{−1/2} = (2π)^{n/2} √(det(Σ)),

from which we get that

    ∫_{ℝⁿ} γ(x) dx = 1.
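A numerical check of part (a) in dimension n = 2 (a sketch, not part of the original solutions; Σ and b are arbitrary):

    import numpy as np
    from scipy.integrate import dblquad

    Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])            # symmetric positive definite
    Sinv = np.linalg.inv(Sigma)
    b = np.array([0.3, -0.7])
    norm = (2 * np.pi) * np.sqrt(np.linalg.det(Sigma))    # (2π)^{n/2} √(det Σ), n = 2

    def gamma_pdf(x, y):
        z = np.array([x, y]) - b
        return np.exp(-0.5 * z @ Sinv @ z) / norm

    val, _ = dblquad(gamma_pdf, -np.inf, np.inf, -np.inf, np.inf)
    assert np.isclose(val, 1.0)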
(b) From the above calculation we have that

    γ(x) dx = γ(Bᵀy + b) dy = (1/((2π)^{n/2} √(det(Σ)))) Π_{i=1}^n exp(−(1/2) λ_i y_i²) dy_i.

Consequently,

    EX = ∫_{ℝⁿ} x γ(x) dx = ∫_{ℝⁿ} (Bᵀy + b) γ(Bᵀy + b) dy = b ∫_{ℝⁿ} γ(Bᵀy + b) dy = b,

since the term ∫_{ℝⁿ} Bᵀy γ(Bᵀy + b) dy vanishes by the symmetry of the Gaussian in each y_i. We note that, since Σ^{−1} = Bᵀ Λ B, we have Σ = Bᵀ Λ^{−1} B. Furthermore, z = Bᵀy. We calculate

    E((X_i − b_i)(X_j − b_j)) = ∫_{ℝⁿ} z_i z_j γ(z + b) dz
        = (1/((2π)^{n/2} √(det(Σ)))) ∫_{ℝⁿ} (Σ_k B_{ki} y_k)(Σ_m B_{mj} y_m) exp(−(1/2) Σ_ℓ λ_ℓ y_ℓ²) dy
        = (1/((2π)^{n/2} √(det(Σ)))) Σ_{k,m} B_{ki} B_{mj} ∫_{ℝⁿ} y_k y_m exp(−(1/2) Σ_ℓ λ_ℓ y_ℓ²) dy
        = Σ_{k,m} B_{ki} B_{mj} λ_k^{−1} δ_{km}
        = (Bᵀ Λ^{−1} B)_{ij} = Σ_{ij}.
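A sampling check of the mean and covariance (a sketch; it uses the Cholesky factor rather than the spectral factor, which also satisfies Σ = C Cᵀ):

    import numpy as np

    rng = np.random.default_rng(2)
    Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
    b = np.array([0.3, -0.7])
    C = np.linalg.cholesky(Sigma)                         # Σ = C Cᵀ
    Y = rng.standard_normal((500_000, 2))                 # standard Gaussian samples
    X = Y @ C.T + b                                       # X = CY + b, row-wise
    assert np.allclose(X.mean(axis=0), b, atol=0.01)      # EX = b
    assert np.allclose(np.cov(X.T), Sigma, atol=0.02)     # covariance = Σ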
(c) Let Y be a multivariate Gaussian random variable with mean 0 and covariance matrix I, and set C = Bᵀ Λ^{−1/2}, so that C Cᵀ = Bᵀ Λ^{−1} B = Σ. We have that

    X = CY + b.

To see this, we first note that X is Gaussian, since it is given through a linear transformation of a Gaussian random variable. Furthermore,

    EX = b and E((X_i − b_i)(X_j − b_j)) = (C Cᵀ)_{ij} = Σ_{ij}.

Now, using E(e^{i⟨Y,s⟩}) = e^{−|s|²/2} for the standard Gaussian Y, we have:

    φ(t) = E(e^{i⟨X,t⟩}) = e^{i⟨b,t⟩} E(e^{i⟨CY,t⟩}) = e^{i⟨b,t⟩} E(e^{i⟨Y,Cᵀt⟩})
         = e^{i⟨b,t⟩} exp(−(1/2) |Cᵀt|²)
         = e^{i⟨b,t⟩} exp(−(1/2) ⟨t, C Cᵀ t⟩)
         = e^{i⟨b,t⟩} e^{−(1/2) ⟨t, Σt⟩}.

Consequently,

    φ(t) = e^{i⟨b,t⟩ − (1/2)⟨t,Σt⟩}.
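Finally, a Monte Carlo check of the characteristic function (a sketch, with the same arbitrary Σ, b as above and an arbitrary t):

    import numpy as np

    rng = np.random.default_rng(3)
    Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
    b = np.array([0.3, -0.7])
    C = np.linalg.cholesky(Sigma)                          # Σ = C Cᵀ
    X = rng.standard_normal((500_000, 2)) @ C.T + b        # samples of X = CY + b
    t = np.array([0.4, -0.2])
    phi_emp = np.mean(np.exp(1j * (X @ t)))                # empirical E(e^{i⟨X,t⟩})
    phi_pred = np.exp(1j * (b @ t) - 0.5 * t @ Sigma @ t)  # e^{i⟨b,t⟩ − ½⟨t,Σt⟩}
    assert abs(phi_emp - phi_pred) < 0.01                  # Monte Carlo tolerance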