Autumn 08 Stat 620
Solutions for Homework 5 Part II
4.17 (a) $P(Y = i+1) = \int_i^{i+1} e^{-x}\,dx = \left.-e^{-x}\right|_i^{i+1} = e^{-i}(1 - e^{-1})$, which is geometric with $p = 1 - e^{-1}$.
(b) Since $Y = i+1$ if and only if $i \le X < i+1$, $Y \ge 5 \Leftrightarrow X \ge 4$, so
$$P(X - 4 \le x \mid Y \ge 5) = P(X - 4 \le x \mid X \ge 4).$$
By the "memoryless" property of the exponential distribution (page 101 of Casella & Berger), $P(X > s \mid X > t) = P(X > s - t)$ for $s > t$, so
$$P(X - 4 \le x \mid X \ge 4) = P(X \le 4 + x \mid X \ge 4) = 1 - P(X > 4 + x \mid X \ge 4) = 1 - P(X > (4+x) - 4) = 1 - P(X > x) = P(X \le x) = 1 - e^{-x}, \qquad x > 0,$$
i.e., given $Y \ge 5$, $X - 4$ is again exponential(1).
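As a quick sanity check (an addition, not part of the original solution), here is a short Python Monte Carlo sketch; the sample size and seed are arbitrary:

```python
import numpy as np

# Monte Carlo sketch for 4.17: X ~ exponential(1), Y = floor(X) + 1.
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=200_000)
y = np.floor(x).astype(int) + 1

p = 1 - np.exp(-1)  # claimed geometric success probability

# (a) P(Y = k) should be close to (1 - p)^(k-1) * p for k = 1, 2, 3, ...
for k in range(1, 6):
    print(k, (y == k).mean(), (1 - p) ** (k - 1) * p)

# (b) Given Y >= 5 (i.e. X >= 4), X - 4 should again be exponential(1).
tail = x[x >= 4] - 4
print(tail.mean(), tail.var())  # both should be close to 1
```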
4.20 (a) Here $X_1, X_2$ are independent $N(0, \sigma^2)$, $Y_1 = X_1^2 + X_2^2$, and $Y_2 = X_1/\sqrt{Y_1}$. This transformation is not one-to-one because you cannot determine the sign of $X_2$ from $Y_1$ and $Y_2$. So partition the support of $(X_1, X_2)$ into $S_X^0 = \{-\infty < x_1 < \infty,\ x_2 = 0\}$, $S_X^1 = \{-\infty < x_1 < \infty,\ x_2 > 0\}$ and $S_X^2 = \{-\infty < x_1 < \infty,\ x_2 < 0\}$. The support of $(Y_1, Y_2)$ is $S_Y = \{0 < y_1 < \infty,\ -1 < y_2 < 1\}$. It is easy to see that $P\{X \in S_X^0\} = 0$. The inverse transformation from $S_Y$ to $S_X^1$ is $x_1 = \sqrt{y_1}\, y_2$ and $x_2 = \sqrt{y_1 - y_1 y_2^2}$ with Jacobian
$$J_1(y_1, y_2) = \begin{vmatrix} \dfrac{y_2}{2\sqrt{y_1}} & \sqrt{y_1} \\[1.5ex] \dfrac{\sqrt{1-y_2^2}}{2\sqrt{y_1}} & -\dfrac{\sqrt{y_1}\, y_2}{\sqrt{1-y_2^2}} \end{vmatrix} = -\frac{1}{2\sqrt{1-y_2^2}}.$$
The inverse transformation from $S_Y$ to $S_X^2$ is $x_1 = \sqrt{y_1}\, y_2$ and $x_2 = -\sqrt{y_1 - y_1 y_2^2}$ with $J_2 = -J_1$. Since $x_1^2 + x_2^2 = y_1$ under either inverse and $f_{X_1,X_2}(x_1, x_2) = \frac{1}{2\pi\sigma^2} e^{-(x_1^2 + x_2^2)/(2\sigma^2)}$, we then have
$$f_{Y_1,Y_2}(y_1, y_2) = \frac{1}{2\pi\sigma^2} e^{-y_1/(2\sigma^2)}\, |J_1| + \frac{1}{2\pi\sigma^2} e^{-y_1/(2\sigma^2)}\, |J_2| = \frac{1}{2\pi\sigma^2}\, \frac{e^{-y_1/(2\sigma^2)}}{\sqrt{1-y_2^2}}, \qquad 0 < y_1 < \infty,\ -1 < y_2 < 1.$$
(b) We can see in the above expression that the joint pdf factors into a function of y1 and a function
of y2 . So Y1 and Y2 are independent. Y1 is the square of the distance from (X1 , X2 ) to the origin. Y2
is the cosine of the angle between the positive x1 -axis and the line from (X1 , X2 ) to the origin. So
independence says that the distance from the origin is independent of the orientation (as measured
by the angle).
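A small simulation sketch (my addition): from the factored joint pdf above, $Y_1$ is exponential with mean $2\sigma^2$ and $Y_2$ has marginal density $1/(\pi\sqrt{1-y^2})$ on $(-1, 1)$. Zero sample correlation is only a necessary condition for independence, so this is just a spot check; the value of $\sigma$ and the seed are arbitrary.

```python
import numpy as np

# Monte Carlo sketch for 4.20: X1, X2 iid N(0, sigma^2).
rng = np.random.default_rng(1)
sigma = 2.0
x1 = rng.normal(0.0, sigma, size=500_000)
x2 = rng.normal(0.0, sigma, size=500_000)

y1 = x1**2 + x2**2
y2 = x1 / np.sqrt(y1)

# Marginals implied by the factored joint pdf:
#   Y1 ~ exponential with mean 2*sigma^2; Y2 has density 1/(pi*sqrt(1 - y^2)).
print(y1.mean(), 2 * sigma**2)                                # should agree
print((np.abs(y2) < 0.5).mean(), 2 * np.arcsin(0.5) / np.pi)  # P(|Y2| < 0.5) = 1/3

# A necessary (not sufficient) independence check:
print(np.corrcoef(y1, y2)[0, 1])                              # should be near 0
```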
4.23 (a) $X \sim \mathrm{beta}(\alpha, \beta)$, $Y \sim \mathrm{beta}(\alpha+\beta, \gamma)$, and $X$ and $Y$ are independent. Let $U = XY$ and $V = Y$. The inverse transformation is $y = v$, $x = u/v$ with Jacobian
$$J(u, v) = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\[1.5ex] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix} = \begin{vmatrix} \dfrac{1}{v} & -\dfrac{u}{v^2} \\[1.5ex] 0 & 1 \end{vmatrix} = \frac{1}{v},$$
so
$$f_{U,V}(u, v) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, \frac{\Gamma(\alpha+\beta+\gamma)}{\Gamma(\alpha+\beta)\Gamma(\gamma)} \left(\frac{u}{v}\right)^{\alpha-1} \left(1 - \frac{u}{v}\right)^{\beta-1} v^{\alpha+\beta-1} (1-v)^{\gamma-1}\, \frac{1}{v}, \qquad 0 < u < v < 1.$$
So
\begin{align*}
f_U(u) &= \frac{\Gamma(\alpha+\beta+\gamma)}{\Gamma(\alpha)\Gamma(\beta)\Gamma(\gamma)}\, u^{\alpha-1} \int_u^1 v^{\beta-1} (1-v)^{\gamma-1} \left(\frac{v-u}{v}\right)^{\beta-1} dv \\
&= \frac{\Gamma(\alpha+\beta+\gamma)}{\Gamma(\alpha)\Gamma(\beta)\Gamma(\gamma)}\, u^{\alpha-1} (1-u)^{\beta+\gamma-1} \int_0^1 y^{\beta-1} (1-y)^{\gamma-1}\, dy \qquad \left(y = \frac{v-u}{1-u},\ dy = \frac{dv}{1-u}\right) \\
&= \frac{\Gamma(\alpha+\beta+\gamma)}{\Gamma(\alpha)\Gamma(\beta)\Gamma(\gamma)}\, u^{\alpha-1} (1-u)^{\beta+\gamma-1}\, \frac{\Gamma(\beta)\Gamma(\gamma)}{\Gamma(\beta+\gamma)} \\
&= \frac{\Gamma(\alpha+\beta+\gamma)}{\Gamma(\alpha)\Gamma(\beta+\gamma)}\, u^{\alpha-1} (1-u)^{\beta+\gamma-1}, \qquad 0 < u < 1.
\end{align*}
Thus, $U \sim \mathrm{beta}(\alpha, \beta+\gamma)$.
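A quick moment check by simulation (my addition, with arbitrary parameter values):

```python
import numpy as np

# Monte Carlo sketch for 4.23: X ~ beta(a, b), Y ~ beta(a + b, g) independent;
# U = X * Y should be beta(a, b + g).
rng = np.random.default_rng(2)
a, b, g = 2.0, 3.0, 1.5
n = 500_000

x = rng.beta(a, b, size=n)
y = rng.beta(a + b, g, size=n)
u = x * y

mean_claim = a / (a + b + g)
var_claim = a * (b + g) / ((a + b + g) ** 2 * (a + b + g + 1))
print(u.mean(), mean_claim)   # sample vs beta(a, b+g) mean
print(u.var(), var_claim)     # sample vs beta(a, b+g) variance
```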
4.24 Here $X \sim \mathrm{gamma}(r, 1)$ and $Y \sim \mathrm{gamma}(s, 1)$ are independent. Let $z_1 = x + y$, $z_2 = \dfrac{x}{x+y}$; then $x = z_1 z_2$, $y = z_1(1 - z_2)$ and
$$|J(z_1, z_2)| = \left|\, \begin{vmatrix} \dfrac{\partial x}{\partial z_1} & \dfrac{\partial x}{\partial z_2} \\[1.5ex] \dfrac{\partial y}{\partial z_1} & \dfrac{\partial y}{\partial z_2} \end{vmatrix} \,\right| = \left|\, \begin{vmatrix} z_2 & z_1 \\ 1 - z_2 & -z_1 \end{vmatrix} \,\right| = z_1.$$
The support $S_{XY} = \{x > 0,\ y > 0\}$ is mapped onto the set $S_Z = \{z_1 > 0,\ 0 < z_2 < 1\}$. So
$$f_{Z_1,Z_2}(z_1, z_2) = \frac{1}{\Gamma(r)} (z_1 z_2)^{r-1} e^{-z_1 z_2} \cdot \frac{1}{\Gamma(s)} (z_1 - z_1 z_2)^{s-1} e^{-z_1 + z_1 z_2} \cdot z_1 = \frac{1}{\Gamma(r+s)}\, z_1^{r+s-1} e^{-z_1} \cdot \frac{\Gamma(r+s)}{\Gamma(r)\Gamma(s)}\, z_2^{r-1} (1 - z_2)^{s-1},$$
for $0 < z_1 < \infty$, $0 < z_2 < 1$.
$Z_1$ and $Z_2$ are independent because $f_{Z_1,Z_2}(z_1, z_2)$ can be factored into two terms, one depending only on $z_1$ and the other depending only on $z_2$. Inspection of these kernels shows $Z_1 \sim \mathrm{gamma}(r+s, 1)$ and $Z_2 \sim \mathrm{beta}(r, s)$.
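A simulation spot check (my addition, arbitrary $r$, $s$, seed):

```python
import numpy as np

# Monte Carlo sketch for 4.24: X ~ gamma(r, 1), Y ~ gamma(s, 1) independent.
rng = np.random.default_rng(3)
r, s, n = 2.0, 3.5, 500_000

x = rng.gamma(shape=r, scale=1.0, size=n)
y = rng.gamma(shape=s, scale=1.0, size=n)
z1, z2 = x + y, x / (x + y)

print(z1.mean(), z1.var(), r + s)                       # gamma(r+s, 1): mean = var = r+s
print(z2.mean(), r / (r + s))                           # beta(r, s) mean
print(z2.var(), r * s / ((r + s) ** 2 * (r + s + 1)))   # beta(r, s) variance
print(np.corrcoef(z1, z2)[0, 1])                        # near 0 (necessary for independence)
```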
4.26 (a) The only way to attack this problem is to find the joint cdf of $(Z, W)$. Now $W = 0$ or $1$. For $0 \le w < 1$,
\begin{align*}
P(Z \le z, W \le w) &= P(Z \le z, W = 0) \\
&= P(\min(X, Y) \le z,\ Y \le X) = P(Y \le z,\ Y \le X) \\
&= \int_0^z \int_y^\infty \frac{1}{\lambda} e^{-x/\lambda}\, \frac{1}{\mu} e^{-y/\mu}\, dx\, dy \\
&= \frac{\lambda}{\lambda+\mu} \left[1 - \exp\left\{-\left(\frac{1}{\mu} + \frac{1}{\lambda}\right) z\right\}\right],
\end{align*}
and
$$P(Z \le z, W = 1) = P(\min(X, Y) \le z,\ X \le Y) = P(X \le z,\ X \le Y) = \frac{\mu}{\lambda+\mu} \left[1 - \exp\left\{-\left(\frac{1}{\mu} + \frac{1}{\lambda}\right) z\right\}\right]$$
can be used to give the joint cdf.
(b)
\begin{align*}
P(W = 0) &= P(Y \le X) = \int_0^\infty \int_y^\infty \frac{1}{\lambda} e^{-x/\lambda}\, \frac{1}{\mu} e^{-y/\mu}\, dx\, dy = \frac{\lambda}{\mu+\lambda}, \\
P(W = 1) &= 1 - P(W = 0) = \frac{\mu}{\mu+\lambda}, \\
P(Z \le z) &= P(Z \le z, W = 0) + P(Z \le z, W = 1) = 1 - \exp\left\{-\left(\frac{1}{\mu} + \frac{1}{\lambda}\right) z\right\}.
\end{align*}
Therefore, $P(Z \le z, W = i) = P(Z \le z)\, P(W = i)$ for $i = 0, 1$ and $z > 0$, which implies that $Z$ and $W$ are independent.
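A simulation spot check (my addition): here the exponentials are parametrized by their means, matching the densities $\frac{1}{\lambda}e^{-x/\lambda}$ and $\frac{1}{\mu}e^{-y/\mu}$ used above, and $W$ is coded as $1$ when $X \le Y$ and $0$ otherwise, as in the solution; parameter values and the test point are arbitrary.

```python
import numpy as np

# Monte Carlo sketch for 4.26: X ~ exponential with mean lam, Y ~ exponential with mean mu.
rng = np.random.default_rng(4)
lam, mu, n = 2.0, 0.5, 500_000

x = rng.exponential(scale=lam, size=n)
y = rng.exponential(scale=mu, size=n)
z = np.minimum(x, y)
w = (x <= y).astype(int)                        # W = 1 if X <= Y, W = 0 if Y < X

print((w == 0).mean(), lam / (lam + mu))        # P(W = 0)
print(z.mean(), 1.0 / (1.0 / lam + 1.0 / mu))   # Z is exponential with this mean

# Independence check at one point: P(Z <= t, W = 0) vs P(Z <= t) P(W = 0)
t = 0.3
print(((z <= t) & (w == 0)).mean(), (z <= t).mean() * (w == 0).mean())
```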
4.36 (a)
$$EY = \sum_{i=1}^n EX_i = \sum_{i=1}^n E\big(E(X_i \mid P_i)\big) = \sum_{i=1}^n E(P_i) = \sum_{i=1}^n \frac{\alpha}{\alpha+\beta} = \frac{n\alpha}{\alpha+\beta}.$$
(b) Since the trials are independent,
\begin{align*}
VY = \sum_{i=1}^n VX_i &= \sum_{i=1}^n \big[V\big(E(X_i \mid P_i)\big) + E\big(V(X_i \mid P_i)\big)\big] = \sum_{i=1}^n \big[V(P_i) + E\big(P_i(1 - P_i)\big)\big] \\
&= \sum_{i=1}^n \big[V(P_i) - E(P_i^2) + E(P_i)\big] = \sum_{i=1}^n \big[-(E(P_i))^2 + E(P_i)\big] \\
&= \sum_{i=1}^n \left[-\left(\frac{\alpha}{\alpha+\beta}\right)^2 + \frac{\alpha}{\alpha+\beta}\right] = \sum_{i=1}^n \frac{\alpha\beta}{(\alpha+\beta)^2} = \frac{n\alpha\beta}{(\alpha+\beta)^2}.
\end{align*}
(c) We show that $X_i \sim \mathrm{Bernoulli}\!\left(\frac{\alpha}{\alpha+\beta}\right)$. Then, because the $X_i$'s are independent, $Y = \sum_{i=1}^n X_i \sim \mathrm{Binomial}\!\left(n, \frac{\alpha}{\alpha+\beta}\right)$. Let $f_{P_i}(p)$ denote the $\mathrm{beta}(\alpha, \beta)$ density of the $P_i$'s; conditioning on $P_i$ gives
\begin{align*}
P(X_i = x) &= \int_0^1 P(X_i = x \mid P_i = p)\, f_{P_i}(p)\, dp = \int_0^1 p^x (1-p)^{1-x}\, \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, p^{\alpha-1} (1-p)^{\beta-1}\, dp \\
&= \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \int_0^1 p^{\alpha+x-1} (1-p)^{\beta-x+1-1}\, dp = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, \frac{\Gamma(\alpha+x)\Gamma(\beta-x+1)}{\Gamma(\alpha+\beta+1)} = \begin{cases} \dfrac{\beta}{\alpha+\beta}, & x = 0, \\[1.5ex] \dfrac{\alpha}{\alpha+\beta}, & x = 1. \end{cases}
\end{align*}
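All three parts can be spot-checked by simulation (my addition, with arbitrary $\alpha$, $\beta$, $n$, and number of replications):

```python
import numpy as np

# Monte Carlo sketch for 4.36: P_i iid beta(alpha, beta), X_i | P_i ~ Bernoulli(P_i),
# Y = sum of the X_i over n independent trials.
rng = np.random.default_rng(5)
alpha, beta, n, reps = 2.0, 5.0, 10, 200_000

p = rng.beta(alpha, beta, size=(reps, n))
x = (rng.random(size=(reps, n)) < p).astype(int)
y = x.sum(axis=1)

print(y.mean(), n * alpha / (alpha + beta))               # part (a): EY
print(y.var(), n * alpha * beta / (alpha + beta) ** 2)    # part (b): VY
print(x.mean(), alpha / (alpha + beta))                   # part (c): P(X_i = 1)
```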
4.47 (a) Because the sample space can be decomposed into the subsets $\{XY > 0\}$ and $\{XY < 0\}$ (plus a set of $(X, Y)$-probability zero), we have for $z < 0$, by the definition of $Z$,
\begin{align*}
P(Z \le z) &= P(X \le z \text{ and } XY > 0) + P(-X \le z \text{ and } XY < 0) \\
&= P(X \le z \text{ and } Y < 0) + P(X \ge -z \text{ and } Y < 0) && \text{(because } z < 0\text{)} \\
&= P(X \le z)P(Y < 0) + P(X \ge -z)P(Y < 0) && \text{(because } X \text{ and } Y \text{ are independent)} \\
&= P(X \le z)P(Y < 0) + P(X \le z)P(Y < 0) && \text{(symmetry of the distributions of } X \text{ and } Y\text{)} \\
&= P(X \le z) && \text{(since } P(Y < 0) = 1/2\text{)}.
\end{align*}
By a similar argument, for $z > 0$ we get $P(Z > z) = P(X > z)$; hence $Z$ has the same distribution as $X$, i.e., $Z \sim N(0, 1)$.
(b) By definition of $Z$, $Z > 0 \Leftrightarrow$ either (i) $X < 0$ and $Y > 0$, or (ii) $X > 0$ and $Y > 0$. So $Z$ and $Y$ always have the same sign; hence they cannot be bivariate normal.
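Both claims can be spot-checked by simulation (my addition), using the definition $Z = X$ if $XY > 0$ and $Z = -X$ if $XY < 0$ implicit in the argument above:

```python
import numpy as np

# Monte Carlo sketch for 4.47: X, Y iid N(0, 1); Z = X if XY > 0, Z = -X if XY < 0.
rng = np.random.default_rng(6)
n = 500_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)
z = np.where(x * y > 0, x, -x)

# (a) Z should be standard normal: compare a few quantiles with X's.
for q in (0.1, 0.25, 0.5, 0.9):
    print(q, np.quantile(z, q), np.quantile(x, q))

# (b) Z and Y always share a sign, so (Z, Y) cannot be bivariate normal.
print(np.mean((z > 0) & (y < 0)))   # essentially 0
```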
4.58 These parts are exercises in the use of conditioning.
(a)
\begin{align*}
\mathrm{Cov}(X, Y) &= E(XY) - E(X)E(Y) \\
&= E[E(XY \mid X)] - E(X)\,E[E(Y \mid X)] \\
&= E[X\, E(Y \mid X)] - E(X)\,E[E(Y \mid X)] \\
&= \mathrm{Cov}(X, E(Y \mid X)).
\end{align*}
(b) $\mathrm{Cov}(X, Y - E(Y \mid X)) = \mathrm{Cov}(X, Y) - \mathrm{Cov}(X, E(Y \mid X)) = 0$ by (a). So $X$ and $Y - E(Y \mid X)$ are uncorrelated.
(c)
\begin{align*}
\mathrm{Var}(Y - E\{Y \mid X\}) &= \mathrm{Var}\big(E\{Y - E\{Y \mid X\} \mid X\}\big) + E\big\{\mathrm{Var}\big(Y - E\{Y \mid X\} \mid X\big)\big\} \\
&= \mathrm{Var}\big(E\{Y \mid X\} - E\{Y \mid X\}\big) + E\big\{\mathrm{Var}\big(Y - E\{Y \mid X\} \mid X\big)\big\} \\
&= \mathrm{Var}(0) + E\{\mathrm{Var}(Y \mid X)\} \\
&= E\{\mathrm{Var}(Y \mid X)\}.
\end{align*}
Here $E\{Y \mid X\}$ is constant with respect to the conditional distribution of $Y$ given $X$, which justifies the second and third equalities.
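These identities can also be checked numerically. The sketch below (my addition) uses one concrete model of my own choosing, not from the text: $X \sim N(0,1)$, $Y = X + X^2 + \varepsilon$ with $\varepsilon \sim N(0,1)$ independent of $X$, so that $E(Y \mid X) = X + X^2$ and $\mathrm{Var}(Y \mid X) = 1$.

```python
import numpy as np

# Numerical sketch for 4.58 under an illustrative (hypothetical) model:
# X ~ N(0, 1), Y = X + X**2 + eps, eps ~ N(0, 1) independent of X,
# so E(Y | X) = X + X**2 and Var(Y | X) = 1.
rng = np.random.default_rng(7)
n = 1_000_000
x = rng.standard_normal(n)
eps = rng.standard_normal(n)
y = x + x**2 + eps
ey_given_x = x + x**2

cov = lambda a, b: np.mean(a * b) - a.mean() * b.mean()
print(cov(x, y), cov(x, ey_given_x))   # (a) the two covariances agree (both near 1)
print(cov(x, y - ey_given_x))          # (b) near 0
print(np.var(y - ey_given_x))          # (c) near E{Var(Y|X)} = 1
```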