Probability Qualifying Exam Solution, Spring 2009

Shiu-Tang Li

December 27, 2012
1

(1)
\[
E[S_N] = E\big[E[S_N \mid N]\big]
= E\Big[\sum_{i=0}^{\infty} 1_{\{N=i\}}\, E[S_i]\Big]
= E\Big[\sum_{i=0}^{\infty} 1_{\{N=i\}}\, i\mu\Big]
= \mu\, E\Big[\sum_{i=0}^{\infty} 1_{\{N=i\}}\, i\Big] = \mu E[N].
\]

(2) Here $\sigma^2$ denotes the second moment $E[X_1^2]$, so that $E[S_i^2] = i\sigma^2 + i(i-1)\mu^2$.
\[
\begin{aligned}
\mathrm{Var}[S_N] &= E\big[E[S_N^2 \mid N]\big] - (E S_N)^2
= E\Big[\sum_{i=0}^{\infty} 1_{\{N=i\}}\, E[S_i^2]\Big] - (\mu E[N])^2 \\
&= E\Big[\sum_{i=0}^{\infty} 1_{\{N=i\}}\, \big(i\sigma^2 + i(i-1)\mu^2\big)\Big] - \mu^2 (E[N])^2 \\
&= \sigma^2 E[N] + \mu^2\big(E[N^2] - E[N]\big) - \mu^2 (E[N])^2 \\
&= (\sigma^2 - \mu^2) E[N] + \mu^2\, \mathrm{Var}[N].
\end{aligned}
\]
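Both identities can be checked by exact enumeration on small finite distributions; the laws of $X$ and $N$ below are arbitrary choices for illustration.

```python
# Exact check of E[S_N] = mu*E[N] and
# Var[S_N] = (sigma2 - mu^2)*E[N] + mu^2*Var[N], with sigma2 = E[X^2],
# for N independent of the i.i.d. X_i, both supported on small finite sets.
from itertools import product

X = {0: 0.2, 1: 0.5, 2: 0.3}          # law of each X_i (arbitrary example)
N = {1: 0.3, 2: 0.4, 3: 0.3}          # law of N (arbitrary example)

mu = sum(x * p for x, p in X.items())
sigma2 = sum(x * x * p for x, p in X.items())        # second moment E[X^2]
EN = sum(n * p for n, p in N.items())
EN2 = sum(n * n * p for n, p in N.items())
varN = EN2 - EN ** 2

# Enumerate all (N, X_1, ..., X_N) outcomes to get E[S_N] and E[S_N^2] exactly.
ES, ES2 = 0.0, 0.0
for n, pn in N.items():
    for xs in product(X, repeat=n):
        p = pn
        for x in xs:
            p *= X[x]
        s = sum(xs)
        ES += p * s
        ES2 += p * s * s

print(ES, mu * EN)                                   # E[S_N] vs mu*E[N]
print(ES2 - ES**2, (sigma2 - mu**2) * EN + mu**2 * varN)
```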
2

(a)
\[
\begin{aligned}
E\Big[\frac{1}{X+1}\Big]
&= \sum_{k=0}^{n} \binom{n}{k} p^k (1-p)^{n-k} \cdot \frac{1}{k+1} \\
&= \frac{1}{(n+1)p} \sum_{k=0}^{n} \frac{(n+1)!}{(k+1)!\,(n-k)!}\, p^{k+1} (1-p)^{n-k} \\
&= \frac{1}{(n+1)p} \Big( \sum_{j=0}^{n+1} \binom{n+1}{j} p^{j} (1-p)^{n+1-j} - (1-p)^{n+1} \Big) \\
&= \frac{1}{(n+1)p} \big(1 - (1-p)^{n+1}\big),
\end{aligned}
\]
where the third line substitutes $j = k+1$ and adds and subtracts the missing $j = 0$ term of the binomial expansion of $\big(p + (1-p)\big)^{n+1} = 1$.
(b)
\[
\begin{aligned}
E\Big[\frac{1}{X+1}\Big]
&= \sum_{k=0}^{\infty} \frac{\lambda^k e^{-\lambda}}{k!} \cdot \frac{1}{k+1}
= \frac{1}{\lambda} \sum_{k=0}^{\infty} \frac{\lambda^{k+1} e^{-\lambda}}{(k+1)!} \\
&= \frac{1}{\lambda} \Big( \sum_{k=0}^{\infty} \frac{\lambda^{k} e^{-\lambda}}{k!} - e^{-\lambda} \Big)
= \frac{1}{\lambda} \big(1 - e^{-\lambda}\big).
\end{aligned}
\]
3

(a) Given any $A > 0$,
\[
\begin{aligned}
P(|X_n| > A) &= 2 \int_A^{\infty} \frac{n}{\pi (1 + n^2 x^2)}\, dx
= \frac{2}{\pi} \tan^{-1}(nx) \Big|_A^{\infty} \\
&= \frac{2}{\pi} \Big( \frac{\pi}{2} - \tan^{-1}(nA) \Big) \to 0 \quad \text{as } n \to \infty.
\end{aligned}
\]
Thus $X_n \to 0$ in probability.

(b) $E[|X_n - 0|] = E[|X_n|] = \frac{2}{\pi} \int_0^{\infty} \frac{nx}{1 + n^2 x^2}\, dx = \frac{1}{n\pi} \ln(1 + n^2 x^2) \Big|_0^{\infty} = \infty$. Thus $X_n \not\to 0$ in $L^1$.

(c) Given $A > 0$, comparing the sum with an integral (the summand is decreasing in $n$, and the $n = 0$ term is at most $\pi/2$),
\[
\begin{aligned}
\sum_{n=1}^{\infty} P(|X_n - 0| > A)
&= \frac{2}{\pi} \sum_{n=1}^{\infty} \Big( \frac{\pi}{2} - \tan^{-1}(nA) \Big)
\ge \frac{2}{\pi} \Big( \int_0^{\infty} \Big( \frac{\pi}{2} - \tan^{-1}(Ax) \Big)\, dx - \frac{\pi}{2} \Big) \\
&= \frac{2}{\pi A} \int_0^{\infty} \Big( \frac{\pi}{2} - \tan^{-1}(x) \Big)\, dx - 1
\ge \frac{2}{\pi A} \int_0^{\pi/2} \tan u \, du - 1 \\
&= \frac{2}{\pi A} \ln(\sec u) \Big|_0^{\pi/2} - 1 = \infty,
\end{aligned}
\]
where the last inequality substitutes $x = \tan u$ and uses $(\pi/2 - u)\sec^2 u \ge \tan u$ on $[0, \pi/2)$. By the second Borel-Cantelli lemma (the $X_n$ being independent), $P(|X_n - 0| > A \text{ i.o.}) = 1$. This implies the failure of a.s. convergence.
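The tail formula and the divergence of the series can be checked numerically; the threshold and cutoffs below are arbitrary.

```python
# X_n has density n / (pi*(1 + n^2 x^2)).  Check that
# P(|X_n| > A) = (2/pi)*(pi/2 - atan(n*A)) -> 0, while the series
# sum_n P(|X_n| > A) grows without bound (it behaves like a harmonic series).
from math import atan, pi

A = 0.5                                            # arbitrary threshold

def tail(n):
    return (2 / pi) * (pi / 2 - atan(n * A))

tails = [tail(n) for n in (1, 10, 100, 1000, 10**6)]
print(tails)                                       # decreasing toward 0

partial = 0.0
for n in range(1, 10**6 + 1):
    partial += tail(n)
print(partial)   # partial sums keep growing, roughly like (2/(pi*A)) * log N
```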
4
Hard.
5

(a) $E[M_n \mid \mathcal{F}_{n-1}] = \lambda_0^{S_{n-1}}\, E[\lambda_0^{X_n} \mid \mathcal{F}_{n-1}] = \lambda_0^{S_{n-1}}\, E[\lambda_0^{X_n}] = \lambda_0^{S_{n-1}} = M_{n-1}$.

(b) Since $E[\lambda_0^{X_1}] = 1$, we let $\lambda_0 = a(\cos\theta + i\sin\theta)$ and we have
\[
pa\cos\theta + (1-p)a^{-1}\cos\theta = 1 \quad (1)
\]
and
\[
pa\sin\theta - (1-p)a^{-1}\sin\theta = 0. \quad (2)
\]
By (2), we have either $a = \sqrt{\frac{1-p}{p}}$ or $\sin\theta = 0$. If $a = \sqrt{\frac{1-p}{p}}$, then by (1) we have $\cos\theta = \frac{1}{2\sqrt{p(1-p)}} \ge 1$, and this implies $\cos\theta = 1$ and $p = \frac{1}{2} \Rightarrow a = 1$.

If we start from the assumption that $\sin\theta = 0$, by (1) we have $\cos\theta = 1$, to make $pa\cos\theta + (1-p)a^{-1}\cos\theta$ positive. Hence $pa^2 - a + (1-p) = 0$, and we have $a = 1$ or $a = \frac{1-p}{p}$. Therefore $\lambda_0$ equals $1$ or $\frac{1-p}{p}$; in either case $\lambda_0$ is real and positive, and for any $n$ we have
\[
E[|M_n|] = E\big[|\lambda_0^{X_1}|\big]^n = \big(p\lambda_0 + (1-p)\lambda_0^{-1}\big)^n = 1,
\]
which shows $\{M_n\}$ is $L^1$-bounded; the martingale convergence theorem tells us the limit exists and is finite a.s.

(c) Let $X_1$ take the values $1$ and $3$ with probability $\frac{1}{2}$ each, and $\lambda_0 = ae^{i\theta}$. We have
\[
\tfrac{1}{2}\, a\cos\theta + \tfrac{1}{2}\, a^3(4\cos^3\theta - 3\cos\theta) = 1 \quad (3)
\]
and
\[
\tfrac{1}{2}\, a\sin\theta + \tfrac{1}{2}\, a^3(3\sin\theta - 4\sin^3\theta) = 0. \quad (4)
\]
If $\sin\theta \ne 0$, (4) gives $\sin^2\theta = \frac{3}{4} + \frac{1}{4a^2} = 1 - \cos^2\theta$. Substituting $\frac{1}{4} - \frac{1}{4a^2}$ for $\cos^2\theta$ in (3), we have $\cos\theta = \frac{-1}{a^3}$. Therefore we have the equation $\big(\frac{-1}{a^3}\big)^2 = \frac{1}{4} - \frac{1}{4a^2}$, which solves to $a = \sqrt{2}$ since $a > 0$. Therefore $\cos\theta = -\frac{\sqrt{2}}{4}$, and we choose $\sin\theta = \frac{\sqrt{14}}{4}$.

We find that $E[\lambda_0^{X_1}] = 1$ and $P(|\lambda_0^{X_1}| = \sqrt{2}) = P(|\lambda_0^{X_1}| = 2\sqrt{2}) = \frac{1}{2}$. Since $X_i \ge 1$ we have $S_n \ge n$, so $|M_n| = (\sqrt{2})^{S_n} \ge (\sqrt{2})^n$ a.s., showing that the limit does not exist for a.e. $\omega$.
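The algebra in (b) and (c) can be verified with complex arithmetic; the value of $p$ below is an arbitrary test value.

```python
# Verify the martingale conditions E[lambda0^X] = 1 found in (b) and (c).
from math import sqrt

# (b): X = +1 w.p. p, X = -1 w.p. 1-p; lambda0 = (1-p)/p.
p = 0.3                                            # arbitrary test value
lam = (1 - p) / p
print(p * lam + (1 - p) / lam)                     # should equal 1

# (c): X = 1 or 3 w.p. 1/2 each; lambda0 = sqrt(2)*e^{i*theta} with
# cos(theta) = -sqrt(2)/4, sin(theta) = sqrt(14)/4.
lam0 = sqrt(2) * complex(-sqrt(2) / 4, sqrt(14) / 4)
print(0.5 * lam0 + 0.5 * lam0**3)                  # should equal 1 as a complex number
print(abs(lam0), abs(lam0**3))                     # moduli sqrt(2) and 2*sqrt(2)
```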
6

Given $i \in \mathbb{N}$, we may choose $A_i$ s.t. $P(|X| > A_i) < 1/i$, $F_X(x)$ is continuous at both $A_i$ and $-A_i$, and $A_i \to \infty$ as $i \to \infty$. Therefore,
\[
\begin{aligned}
\int_{\{x>0\}} x\, d\mu
&= \lim_{i\to\infty} \int_{\{0<x\le A_i\}} x\, d\mu \quad \text{by the monotone convergence theorem} \\
&\le \limsup_{i\to\infty} \Big| \int_{\{0<x\le A_i\}} x\, d\mu - \int_{\{0<x\le A_i\}} x\, d\mu_n \Big| + \limsup_{i\to\infty} \Big| \int_{\{0<x\le A_i\}} x\, d\mu_n \Big| \\
&\le \limsup_{i\to\infty} \Big| \int_{\{0<x\le A_i\}} x\, d\mu - \int_{\{0<x\le A_i\}} x\, d\mu_n \Big| + \big(\sup_{n\ge 1} E[X_n^2]\big)^{1/2} \\
&\le 1 + \big(\sup_{n\ge 1} E[X_n^2]\big)^{1/2},
\end{aligned}
\]
where the second inequality is Cauchy-Schwarz, and we select $n = n(i)$ above to satisfy $\big| \int_{\{0<x\le A_i\}} x\, d\mu - \int_{\{0<x\le A_i\}} x\, d\mu_n \big| \le 1$ for each $i$. This can be done because $X_n \Rightarrow X$ implies $E[f(X_n)] \to E[f(X)]$ for any bounded continuous function $f$ ($\pm A_i$ being continuity points of $F_X$). Similarly we can prove the finiteness of $\int_{\{x\le 0\}} x\, d\mu$. This proves $X \in L^1$. For the remaining part,
\[
\begin{aligned}
\Big| \int x\, d\mu_n(x) - \int x\, d\mu(x) \Big|
&\le \Big| \int_{|x|>A_i} x\, d\mu_n(x) \Big| + \Big| \int_{|x|>A_i} x\, d\mu(x) \Big|
+ \Big| \int_{|x|\le A_i} x\, d\mu_n(x) - \int_{|x|\le A_i} x\, d\mu(x) \Big| \\
&\le \Big( \frac{1}{i} \cdot \sup_{n\ge 1} E[X_n^2] \Big)^{1/2} + E\big[|X|; |X| > A_i\big]
+ \Big| \int_{|x|\le A_i} x\, d\mu_n(x) - \int_{|x|\le A_i} x\, d\mu(x) \Big|
\end{aligned}
\]
for all large $n$, by Cauchy-Schwarz and the fact that $P(|X_n| > A_i) \to P(|X| > A_i) < 1/i$ as $n \to \infty$. Now we let $n \to \infty$ to obtain
\[
\limsup_{n\to\infty} |E X_n - E X| \le \Big( \frac{1}{i} \cdot \sup_{n\ge 1} E[X_n^2] \Big)^{1/2} + E\big[|X|; |X| > A_i\big].
\]
And finally we let $i \to \infty$ to have RHS $\to 0$ and get the desired result.
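The hypothesis $\sup_n E[X_n^2] < \infty$ is what rules out mass escaping to infinity; a standard counterexample (added here for illustration, not from the original text) shows $E X_n \to E X$ can fail without it.

```python
# X_n = n with probability 1/n, else 0.  Then X_n => 0 weakly,
# but E[X_n] = 1 for every n, and the second moments E[X_n^2] = n are
# unbounded, so the conclusion EX_n -> EX fails without the hypothesis.
for n in (10, 100, 1000):
    p = 1.0 / n
    mean = n * p            # E[X_n] = 1 for all n
    second = n * n * p      # E[X_n^2] = n, unbounded in n
    mass_near_zero = 1 - p  # P(X_n = 0) -> 1, so X_n => 0
    print(n, mean, second, mass_near_zero)
```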
7

First we show that we still need an extra condition for the weak convergence to hold. We compute
\[
E\Big[\big(n^{-1/2}(X_1 + \cdots + X_n)\big)^4\Big]
= \frac{1}{n^2} \Big( \sum_{i=1}^{n} E[X_i^4] + 6 \sum_{1\le i<j\le n} E[X_i^2]\, E[X_j^2] \Big)
\le \frac{1}{n^2} \big( n + 3n(n-1) \big) \le 3,
\]
since $E[X_i^4] \le 1$ and $E[X_i^2] \le 1$ (the $X_i$ being bounded by $1$). Using similar techniques as in the previous problem, together with this uniform bound on fourth moments, we see that if $n^{-1/2}(X_1 + \cdots + X_n) \Rightarrow N(0, \sigma^2)$, then $E\big[(n^{-1/2}(X_1 + \cdots + X_n))^2\big] \to E[N(0, \sigma^2)^2] = \sigma^2$. Since $E\big[(n^{-1/2}(X_1 + \cdots + X_n))^2\big] = \frac{1}{3n} \sum_{i=1}^{n} a_i^2$, we have to impose an additional condition: $\lim_{n\to\infty} \frac{1}{3n} \sum_{i=1}^{n} a_i^2$ exists and is finite. This value is actually $\sigma^2$.

Now it is time to apply the Lindeberg-Feller theorem, which states as follows (see Durrett's book):

For each $n$, let $X_{n,m}$, $1 \le m \le n$, be independent random variables with $E[X_{n,m}] = 0$. Suppose
(i) $\sum_{m=1}^{n} E[X_{n,m}^2] \to \sigma^2 > 0$;
(ii) for all $\epsilon > 0$, $\lim_{n\to\infty} \sum_{m=1}^{n} E\big[X_{n,m}^2; |X_{n,m}| > \epsilon\big] = 0$.
Then $S_n = X_{n,1} + \cdots + X_{n,n} \Rightarrow N(0, \sigma^2)$ as $n \to \infty$.

Let $X_{n,m} = n^{-1/2} X_m$. Condition (i) is satisfied since we assume $\lim_{n\to\infty} \frac{1}{3n} \sum_{i=1}^{n} a_i^2 = \sigma^2$ exists and is finite. For (ii), just note that $P(|X_{n,m}| > \epsilon) = P(|X_m| > \epsilon n^{1/2}) = 0$ when $n > \frac{1}{\epsilon^2}$, for every $1 \le m \le n$. Thus by the Lindeberg-Feller theorem, $n^{-1/2}(X_1 + \cdots + X_n) \Rightarrow N\big(0, \lim_{n\to\infty} \frac{1}{3n} \sum_{i=1}^{n} a_i^2\big)$.
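The fourth-moment bound can be checked directly, assuming (as the variance $\frac{1}{3n}\sum a_i^2$ suggests) that each $X_i$ is uniform on $[-a_i, a_i]$ with $|a_i| \le 1$; the sequence $a_i$ below is an arbitrary choice.

```python
# For X_i ~ Uniform(-a_i, a_i): E[X_i^2] = a_i^2/3, E[X_i^4] = a_i^4/5.
# Check E[(n^{-1/2} S_n)^4]
#   = (sum_i E[X_i^4] + 6 * sum_{i<j} E[X_i^2] E[X_j^2]) / n^2  <=  3.
def fourth_moment(a):
    n = len(a)
    m2 = [ai**2 / 3 for ai in a]
    m4 = [ai**4 / 5 for ai in a]
    cross = sum(m2[i] * m2[j] for i in range(n) for j in range(i + 1, n))
    return (sum(m4) + 6 * cross) / n**2

a = [((7 * i + 3) % 10) / 10 for i in range(50)]   # arbitrary values in [0, 1)
val = fourth_moment(a)
print(val)                                         # well below the bound 3
```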
8

(a) The fact that $g \ge 0$ a.e. in $\mathbb{R}^2$ is obvious. It is left to show $\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x,y)\, dx\, dy = 1$. To see this,
\[
\begin{aligned}
\int_{-\infty}^{\infty} \int_{-\infty}^{x} 2f(x)f(y)\, dy\, dx
&= \int_{-\infty}^{\infty} \int_{-\infty}^{y} 2f(y)f(x)\, dx\, dy
= \int_{-\infty}^{\infty} \int_{x}^{\infty} 2f(x)f(y)\, dy\, dx \\
&= \frac{1}{2} \Big( \int_{-\infty}^{\infty} \int_{-\infty}^{x} 2f(x)f(y)\, dy\, dx + \int_{-\infty}^{\infty} \int_{x}^{\infty} 2f(x)f(y)\, dy\, dx \Big) \\
&= \frac{1}{2} \cdot 2 \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x)f(y)\, dy\, dx = 1,
\end{aligned}
\]
where the first equality renames the variables and the third averages the two equal expressions over the complementary regions $\{y \le x\}$ and $\{y \ge x\}$.

(b) First we choose $t \in \mathbb{R}$ so that $\int_{-\infty}^{t} f(x)\, dx > 0$ and $\int_{t}^{\infty} f(x)\, dx > 0$; write $F(t) = \int_{-\infty}^{t} f(x)\, dx$. Since $g$ vanishes when $y > x$, we have $P(X \le t,\, Y > t) = 0$, but both
\[
P(X \le t) = \int_{-\infty}^{t} \int_{-\infty}^{x} 2f(x)f(y)\, dy\, dx = \int_{-\infty}^{t} 2f(x) F(x)\, dx = F(t)^2 > 0
\]
and
\[
P(Y > t) = \int_{t}^{\infty} \Big( \int_{y}^{\infty} 2f(x)\, dx \Big) f(y)\, dy = \int_{t}^{\infty} 2f(y)\big(1 - F(y)\big)\, dy = \big(1 - F(t)\big)^2 > 0.
\]
This shows $X$ and $Y$ are not independent.
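A quick numeric sanity check with $f$ taken to be the uniform density on $[0,1]$ (a concrete choice added here for illustration), so that $g(x,y) = 2$ on $\{0 \le y \le x \le 1\}$:

```python
# With f = Uniform(0,1) density, g(x,y) = 2*f(x)*f(y)*1{y <= x}.
# Midpoint-rule check that g integrates to 1, and that for t = 0.5
# P(X <= t, Y > t) = 0 while P(X <= t) = t^2 and P(Y > t) = (1-t)^2.
N = 400
h = 1.0 / N
pts = [(i + 0.5) * h for i in range(N)]

def g(x, y):
    return 2.0 if y <= x else 0.0

total = sum(g(x, y) for x in pts for y in pts) * h * h
t = 0.5
p_joint = sum(g(x, y) for x in pts if x <= t for y in pts if y > t) * h * h
p_x = sum(g(x, y) for x in pts if x <= t for y in pts) * h * h
p_y = sum(g(x, y) for x in pts for y in pts if y > t) * h * h
print(total, p_joint, p_x, p_y)   # ~1, exactly 0, ~0.25, ~0.25
```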
9

\[
\begin{aligned}
P\big(n^{-1/\alpha} \max(X_1, \cdots, X_n) \le x\big)
&= P\big(\max(X_1, \cdots, X_n) \le n^{1/\alpha} x\big)
= \big(1 - (n^{1/\alpha} x)^{-\alpha}\big)^n \\
&= \Big(1 - \frac{x^{-\alpha}}{n}\Big)^n \to e^{-x^{-\alpha}}
\end{aligned}
\]
as $n \to \infty$. We may define $F(x) = e^{-x^{-\alpha}}$ for $x > 0$ (and $F(x) = 0$ for $x \le 0$); it is continuous and nondecreasing on $\mathbb{R}$, with $F(\infty) = 1$, $F(-\infty) = 0$. Thus it is the distribution function of some random variable $X$, and $P(X > x) = 1 - F(x) = 1 - e^{-x^{-\alpha}}$ for $x > 0$.
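The limit can be checked numerically; the values of $\alpha$ and $x$ below are arbitrary test values.

```python
# Check (1 - x^{-alpha}/n)^n -> exp(-x^{-alpha}), the Frechet-type limit above.
from math import exp

alpha, x = 2.0, 1.5                                # arbitrary test values
target = exp(-x**(-alpha))
vals = [(1 - x**(-alpha) / n)**n for n in (10, 100, 10000)]
print(vals, target)                                # approach the target as n grows
```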
10

(a)
\[
\begin{aligned}
f(t) &= \int_0^{\infty} \frac{\lambda^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\lambda x} e^{itx}\, dx
= \sum_{n=0}^{\infty} \int_0^{\infty} \frac{\lambda^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\lambda x}\, \frac{(itx)^n}{n!}\, dx \\
&= \sum_{n=0}^{\infty} \frac{(it)^n}{n!} \cdot \frac{\lambda^{\alpha}\, \Gamma(n+\alpha)}{\Gamma(\alpha)\, \lambda^{n+\alpha}} \int_0^{\infty} \frac{\lambda^{n+\alpha}}{\Gamma(n+\alpha)}\, x^{n+\alpha-1} e^{-\lambda x}\, dx \\
&= \sum_{n=0}^{\infty} \frac{\Gamma(n+\alpha)}{n!\, \Gamma(\alpha)} \Big( \frac{it}{\lambda} \Big)^n
= \Big( 1 - \frac{it}{\lambda} \Big)^{-\alpha},
\end{aligned}
\]
where the last step is the binomial series $\sum_{n=0}^{\infty} \binom{n+\alpha-1}{n} z^n = (1-z)^{-\alpha}$, valid for $|t| < \lambda$; the identity extends to all real $t$ by analytic continuation.

(b) $f_{X_1 + \cdots + X_n}(t) = f_{X_1}(t)^n = \big(1 - \frac{it}{\lambda}\big)^{-n}$. That is, $X_1 + \cdots + X_n$ is $\mathrm{Gamma}(n, \lambda)$.

(c) The event in question is $\{S_n < k,\ S_n + X_{n+1} > k\}$. Therefore, with $S_n \sim \mathrm{Gamma}(n, \lambda)$ independent of $X_{n+1}$, the probability is
\[
\begin{aligned}
\int_0^{k} \Big( \int_{k-s}^{\infty} \lambda e^{-\lambda x}\, dx \Big) \frac{\lambda^n}{(n-1)!}\, s^{n-1} e^{-\lambda s}\, ds
&= \int_0^{k} e^{-\lambda(k-s)}\, \frac{\lambda^n}{(n-1)!}\, s^{n-1} e^{-\lambda s}\, ds \\
&= \frac{\lambda^n e^{-\lambda k}}{(n-1)!} \int_0^{k} s^{n-1}\, ds
= \frac{(\lambda k)^n}{n!}\, e^{-\lambda k}.
\end{aligned}
\]
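Part (c)'s closed form can be checked by numerically integrating the expression above; $\lambda$, $k$, $n$ below are arbitrary test values.

```python
# Midpoint-rule check that
#   int_0^k exp(-lam*(k-s)) * lam^n/(n-1)! * s^{n-1} * exp(-lam*s) ds
# equals (lam*k)^n * exp(-lam*k) / n!.
from math import exp, factorial

lam, k, n = 1.0, 2.0, 3                            # arbitrary test values
N = 100000
h = k / N
integral = sum(
    exp(-lam * (k - s)) * lam**n / factorial(n - 1) * s**(n - 1) * exp(-lam * s)
    for s in ((i + 0.5) * h for i in range(N))
) * h
closed = (lam * k)**n * exp(-lam * k) / factorial(n)
print(integral, closed)
```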