ECO 2400F
First Half of Fall 2007
Problem Set 6
Suggested Solutions
1. Do the exercise in Lecture 6.
Note that symmetry of a distribution implies a skewness of zero.
We have
\begin{align*}
\mu_3 &\equiv E\left[(X - \mu)^3\right] \\
&= \int_{-\infty}^{\infty} (x - \mu)^3 f_X(x)\,dx \\
&= \int_{-\infty}^{\mu} (x - \mu)^3 f_X(x)\,dx + \int_{\mu}^{\infty} (x - \mu)^3 f_X(x)\,dx.
\end{align*}
If the distribution is symmetric about $\mu$, then $f_X(2\mu - x) = f_X(x)$. Then
$$
\int_{\mu}^{\infty} (x - \mu)^3 f_X(x)\,dx = \int_{\mu}^{\infty} (x - \mu)^3 f_X(2\mu - x)\,dx.
$$
Let $y \equiv 2\mu - x$. This implies that $x = 2\mu - y$, and so $dx = -dy$. Moreover, $x = \mu$ implies $y = \mu$, while $x \to \infty$ implies $y \to -\infty$. As such,
\begin{align*}
\int_{\mu}^{\infty} (x - \mu)^3 f_X(2\mu - x)\,dx &= -\int_{\mu}^{-\infty} (\mu - y)^3 f_Y(y)\,dy \\
&= \int_{-\infty}^{\mu} (\mu - y)^3 f_Y(y)\,dy \\
&= -\int_{-\infty}^{\mu} (y - \mu)^3 f_Y(y)\,dy,
\end{align*}
which implies that
\begin{align*}
\mu_3 &= \int_{-\infty}^{\mu} (x - \mu)^3 f_X(x)\,dx + \int_{\mu}^{\infty} (x - \mu)^3 f_X(x)\,dx \\
&= \int_{-\infty}^{\mu} (x - \mu)^3 f_X(x)\,dx - \int_{-\infty}^{\mu} (y - \mu)^3 f_Y(y)\,dy. \tag{1}
\end{align*}
Here $Y \equiv 2\mu - X$, which by symmetry has the same distribution as $X$, so $f_X = f_Y$ (a.e.). Combining this with the observation that the range of integration is the same in the two integrals in (1), the two terms cancel, and hence $\mu_3 = 0$.
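As a quick numerical sanity check (an addition to the solution, assuming NumPy and SciPy are available), the third central moment of a density symmetric about $\mu$ should vanish up to quadrature error; the normal density below is just an illustrative choice.

```python
# Sketch: numerically verify mu_3 = 0 for a density symmetric about mu.
# The normal(mu, 2) density is an arbitrary symmetric example.
import numpy as np
from scipy import integrate
from scipy.stats import norm

mu = 1.5  # centre of symmetry
mu3, _ = integrate.quad(
    lambda x: (x - mu) ** 3 * norm.pdf(x, loc=mu, scale=2.0),
    -np.inf, np.inf,
)
print(f"mu_3 = {mu3:.2e}")  # ~0 up to quadrature error
```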
2. From Casella and Berger (2002):
(a) Exercise 3.3
We have $X = 1$ if a car passes in a given second, and $X = 0$ otherwise. We set $\Pr[X = 1] \equiv p$, so that $\Pr[X = 0] = 1 - p$. For the pedestrian to wait exactly 4 seconds, a car must pass at the fourth second, and this must be followed by three consecutive no-car seconds. In addition, before the fourth second there must not be 3 consecutive no-car seconds, so there can be at most 2 no-car seconds during the first 3 seconds.
Note that the probability of a car passing at the 4th second followed by three consecutive no-car seconds is $p(1 - p)^3$. In addition, the probability that during the first 3 seconds there are at most 2 no-car seconds is given by
$$
\binom{3}{1} p (1 - p)^2 + \binom{3}{2} p^2 (1 - p) + \binom{3}{3} p^3,
$$
where the three terms give the probabilities of observing one, two, and three cars in the first 3 seconds, respectively.
It follows that the probability that the pedestrian has to wait exactly 4 seconds
before starting to cross the street is equal to
\begin{align*}
&\left(3p(1 - p)^2 + 3p^2(1 - p) + p^3\right) \times p(1 - p)^3 \\
&= \left(3p + 3p^3 - 6p^2 + 3p^2 - 3p^3 + p^3\right) \times p(1 - p)^3 \\
&= \left(3p - 3p^2 + p^3\right) p (1 - p)^3 \\
&= \left(p^2 - 3p + 3\right) p^2 (1 - p)^3.
\end{align*}
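The algebra can be spot-checked by brute-force simulation (my own addition, assuming NumPy): simulate car arrivals second by second and count runs in which at least one car passes in seconds 1 to 3, a car passes at second 4, and no car passes in seconds 5 to 7.

```python
# Sketch: Monte Carlo check of the "wait exactly 4 seconds" probability.
import numpy as np

rng = np.random.default_rng(0)
p, n_sims = 0.4, 1_000_000  # p is an arbitrary illustrative value

cars = rng.random((n_sims, 7)) < p          # cars[i, t]: car in second t+1
event = (cars[:, :3].any(axis=1)            # at least one car in seconds 1-3
         & cars[:, 3]                       # a car at second 4
         & ~cars[:, 4:7].any(axis=1))       # no cars in seconds 5-7

formula = (p**2 - 3 * p + 3) * p**2 * (1 - p) ** 3
print(event.mean(), formula)  # the two should agree to Monte Carlo error
```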
(b) Exercise 3.25
The hazard function is given by
$$
h_T(t) \equiv \lim_{\delta \to 0} \frac{P(t \le T \le t + \delta \mid T \ge t)}{\delta}.
$$
Consider that
\begin{align*}
P(t \le T \le t + \delta \mid T \ge t) &= \frac{P(t \le T \le t + \delta)}{P(T \ge t)} \\
&= \frac{\int_t^{t + \delta} f_T(s)\,ds}{1 - F_T(t)} \\
&= \frac{F_T(t + \delta) - F_T(t)}{1 - F_T(t)}.
\end{align*}
As such,
\begin{align*}
h_T(t) &= \lim_{\delta \to 0} \frac{F_T(t + \delta) - F_T(t)}{\delta \left(1 - F_T(t)\right)} \\
&= \frac{1}{1 - F_T(t)} \times \lim_{\delta \to 0} \frac{F_T(t + \delta) - F_T(t)}{\delta} \\
&= \frac{1}{1 - F_T(t)} \times f_T(t) \\
&= -\frac{d}{dt} \log\left(1 - F_T(t)\right).
\end{align*}
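As an illustrative check (an addition, assuming SciPy; the exponential lifetime is my choice because its hazard is the constant rate), both expressions for $h_T$ can be evaluated numerically:

```python
# Sketch: check h_T(t) = f_T / (1 - F_T) = -d/dt log(1 - F_T)
# for an exponential(rate = 2) lifetime, whose hazard is constant at 2.
import numpy as np
from scipy.stats import expon

rate = 2.0
t = np.linspace(0.1, 3.0, 50)

hazard_ratio = expon.pdf(t, scale=1 / rate) / expon.sf(t, scale=1 / rate)
log_sf = np.log(expon.sf(t, scale=1 / rate))   # log(1 - F_T(t))
hazard_deriv = -np.gradient(log_sf, t)         # finite-difference derivative

print(np.allclose(hazard_ratio, rate), np.allclose(hazard_deriv, rate))
```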
(c) Exercise 4.4
We have
$$
f(x, y) \equiv \begin{cases} C(x + 2y), & \text{if } 0 < y < 1 \text{ and } 0 < x < 2; \\ 0, & \text{otherwise.} \end{cases}
$$
i. We solve
\begin{align*}
1 &= \int_0^1 \int_0^2 C(x + 2y)\,dx\,dy \\
&= C \int_0^1 \left( \frac{x^2}{2} + 2xy \right) \bigg|_{x=0}^{x=2}\,dy \\
&= C \int_0^1 (2 + 4y)\,dy \\
&= C \left( 2y + 2y^2 \right) \Big|_0^1 \\
&= C(2 + 2) = 4C,
\end{align*}
which implies that $C = \frac{1}{4}$.
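A quick quadrature check of the constant (my addition, assuming SciPy): integrating $\frac{1}{4}(x + 2y)$ over the support should give 1.

```python
# Sketch: check that C = 1/4 normalises f(x, y) = C(x + 2y) on (0,2) x (0,1).
from scipy import integrate

# dblquad integrates func(y, x) with y in (0, 1) for each x in (0, 2)
total, _ = integrate.dblquad(lambda y, x: 0.25 * (x + 2 * y), 0, 2, 0, 1)
print(total)  # ~1.0
```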
ii. The marginal distribution of X on the interval (0, 2) is given by
\begin{align*}
F_X(x) &= \frac{1}{4} \int_0^x \int_0^1 (s + 2y)\,dy\,ds \\
&= \frac{1}{4} \int_0^x \left( sy + y^2 \right) \Big|_{y=0}^{y=1}\,ds \\
&= \frac{1}{4} \int_0^x (s + 1)\,ds \\
&= \frac{1}{4} \left( \frac{x^2}{2} + x \right) \\
&= \frac{x^2}{8} + \frac{x}{4}.
\end{align*}
Therefore the marginal distribution function of $X$ is
$$
F_X(x) \equiv \begin{cases} 0, & x \le 0 \\ \frac{x^2}{8} + \frac{x}{4}, & 0 < x < 2 \\ 1, & x \ge 2. \end{cases}
$$
iii. For values of x and y in the supports of X and Y , respectively, the joint
distribution of X and Y is given by
\begin{align*}
P(X \le x, Y \le y) &= \int_0^y \int_0^x \frac{1}{4}(s + 2t)\,ds\,dt \\
&= \frac{1}{4} \int_0^y \left( \frac{s^2}{2} + 2st \right) \bigg|_{s=0}^{s=x}\,dt \\
&= \frac{1}{4} \int_0^y \left( \frac{x^2}{2} + 2xt \right) dt \\
&= \frac{1}{4} \left( \frac{x^2 t}{2} + x t^2 \right) \bigg|_{t=0}^{t=y} \\
&= \frac{1}{8} x^2 y + \frac{1}{4} x y^2.
\end{align*}
The joint cdf of X and Y is accordingly given by
$$
P(X \le x, Y \le y) \equiv F(x, y) = \begin{cases}
0, & \text{if } x \le 0 \text{ or } y \le 0 \\
\frac{1}{8} x^2 y + \frac{1}{4} x y^2, & \text{if } 0 < x < 2 \text{ and } 0 < y < 1 \\
\frac{x^2}{8} + \frac{x}{4}, & \text{if } 0 < x < 2 \text{ and } y \ge 1 \\
\frac{y^2 + y}{2}, & \text{if } x \ge 2 \text{ and } 0 < y < 1 \\
1, & \text{if } x \ge 2 \text{ and } y \ge 1,
\end{cases}
$$
where the two boundary cases follow from evaluating the interior expression at $y = 1$ and at $x = 2$, respectively.

iv. We have
$$
Z \equiv \frac{9}{(X + 1)^2}.
$$
Write $Z = g(X)$ with $g(x) \equiv 9/(x + 1)^2$; $g$ is strictly decreasing on $(0, 2)$, so $Z$ takes values in $(1, 9)$. Note that the density of $X$ is given by
$$
f_X(x) \equiv \begin{cases} \frac{1}{4}(x + 1), & 0 < x < 2 \\ 0, & \text{otherwise.} \end{cases}
$$
For $1 < z < 9$ we have
$$
x = g^{-1}(z) = \frac{3}{\sqrt{z}} - 1.
$$
Accordingly, for $1 < z < 9$, the density of $Z$ satisfies
\begin{align*}
f_Z(z) &= f_X\!\left(g^{-1}(z)\right) \left| \frac{dx}{dz} \right| \\
&= \frac{1}{4} \left( \frac{3}{\sqrt{z}} - 1 + 1 \right) \times \left| -\frac{3}{2} z^{-3/2} \right| \\
&= \frac{9}{8 z^2}.
\end{align*}
As such, the density of $Z$ is given by
$$
f_Z(z) \equiv \begin{cases} \frac{9}{8 z^2}, & 1 < z < 9 \\ 0, & \text{otherwise.} \end{cases}
$$
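The change of variables can be spot-checked by simulation (my addition, assuming NumPy): draw $X$ by inverting $F_X(x) = x^2/8 + x/4$, form $Z = 9/(X + 1)^2$, and compare the empirical cdf of $Z$ with $F_Z(z) = \frac{9}{8}\left(1 - \frac{1}{z}\right)$, the cdf implied by $f_Z$.

```python
# Sketch: simulate Z = 9 / (X + 1)^2 and compare its empirical cdf with
# F_Z(z) = (9/8)(1 - 1/z) on 1 < z < 9, implied by f_Z(z) = 9 / (8 z^2).
import numpy as np

rng = np.random.default_rng(1)
u = rng.random(1_000_000)
x = -1 + np.sqrt(1 + 8 * u)      # inverse of F_X(x) = x^2/8 + x/4 on (0, 2)
z = 9 / (x + 1) ** 2

for z0 in (2.0, 4.0, 8.0):
    print((z <= z0).mean(), 9 / 8 * (1 - 1 / z0))  # empirical vs theoretical
```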
(d) Exercise 4.13
i. We have
\begin{align*}
&\min_{g(\cdot)} E\left[(Y - g(X))^2\right] \\
&= \min_{g(\cdot)} E\left[(Y - E[Y \mid X] + E[Y \mid X] - g(X))^2\right] \\
&= \min_{g(\cdot)} E\left[(Y - E[Y \mid X])^2 + (E[Y \mid X] - g(X))^2 + 2(Y - E[Y \mid X])(E[Y \mid X] - g(X))\right] \\
&= \min_{g(\cdot)} \left\{ E\left[(Y - E[Y \mid X])^2\right] + E\left[(E[Y \mid X] - g(X))^2\right] + 2E\left[(Y - E[Y \mid X])(E[Y \mid X] - g(X))\right] \right\}.
\end{align*}
Let
$$
A \equiv (Y - E[Y \mid X])(E[Y \mid X] - g(X)).
$$
Naturally,
$$
E[A] = E\left[E[A \mid X]\right],
$$
which implies that
\begin{align*}
E[(Y - E[Y \mid X])(E[Y \mid X] - g(X))]
&= E\left[E\left[(Y - E[Y \mid X])(E[Y \mid X] - g(X)) \mid X\right]\right] \\
&= E\left[E\left[Y - E[Y \mid X] \mid X\right] \times E\left[E[Y \mid X] - g(X) \mid X\right]\right],
\end{align*}
where the second equality uses the fact that $E[Y \mid X] - g(X)$ is a function of $X$ alone. But
$$
E\left[Y - E[Y \mid X] \mid X\right] = E[Y \mid X] - E\left[E[Y \mid X] \mid X\right] = E[Y \mid X] - E[Y \mid X] = 0,
$$
from which it follows that
$$
\min_{g(\cdot)} E\left[(Y - g(X))^2\right] = \min_{g(\cdot)} \left\{ E\left[(Y - E[Y \mid X])^2\right] + E\left[(E[Y \mid X] - g(X))^2\right] \right\}. \tag{2}
$$
Note that $g(X)$ has no effect on the first term in (2), $E\left[(Y - E[Y \mid X])^2\right]$. The second term is nonnegative, so the minimum is attained by choosing $g(X)$ equal to $E[Y \mid X]$. This makes
$$
E\left[(E[Y \mid X] - g(X))^2\right] = 0,
$$
which implies that
$$
\min_{g(\cdot)} E\left[(Y - g(X))^2\right] = E\left[(Y - E[Y \mid X])^2\right].
$$
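To illustrate part i numerically (my addition, assuming NumPy and an arbitrary hypothetical model in which $Y = X^2 + \varepsilon$, so that $E[Y \mid X] = X^2$), the conditional mean should beat any other candidate predictor in mean squared error:

```python
# Sketch: E[Y|X] minimises E[(Y - g(X))^2]; compare candidate predictors
# under the illustrative model Y = X^2 + eps with E[Y|X] = X^2.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=1_000_000)
y = x**2 + rng.normal(size=x.size)

candidates = {
    "E[Y|X] = x^2": x**2,
    "g(x) = x": x,
    "g(x) = E[Y]": np.full_like(x, y.mean()),
}
for name, g in candidates.items():
    print(name, np.mean((y - g) ** 2))  # conditional mean has smallest MSE
```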
ii. Equation (2.2.3) states that
$$
\min_b E\left[(X - b)^2\right] = E\left[(X - E[X])^2\right].
$$
Note that
\begin{align*}
\min_b E\left[(X - b)^2\right]
&= \min_b E\left[(X - E[X] + E[X] - b)^2\right] \\
&= \min_b E\left[(X - E[X])^2 + (E[X] - b)^2 + 2(X - E[X])(E[X] - b)\right] \\
&= \min_b \left\{ E\left[(X - E[X])^2\right] + E\left[(E[X] - b)^2\right] + 2E\left[(X - E[X])(E[X] - b)\right] \right\},
\end{align*}
where the last term involves
\begin{align*}
E[(X - E[X])(E[X] - b)] &= (E[X] - b)\,E[X - E[X]] \\
&= (E[X] - b)(E[X] - E[X]) \\
&= 0,
\end{align*}
since $E[X] - b$ is a constant.
It follows that
$$
\min_b E\left[(X - b)^2\right] = \min_b \left\{ E\left[(X - E[X])^2\right] + E\left[(E[X] - b)^2\right] \right\}. \tag{3}
$$
From the first part of this question, we have that the constant b should be
equal to E [X] in order to minimize the second term in (3). This implies
that
$$
\min_b E\left[(X - b)^2\right] = E\left[(X - E[X])^2\right].
$$
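The scalar case can be checked the same way (my addition, assuming NumPy; the exponential distribution is an arbitrary example with $E[X] = 2$):

```python
# Sketch: min_b E[(X - b)^2] is attained at b = E[X].
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=500_000)  # illustrative distribution

bs = np.linspace(0.0, 4.0, 81)
mses = [np.mean((x - b) ** 2) for b in bs]
print(bs[int(np.argmin(mses))], x.mean())  # grid argmin sits near E[X] = 2
```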
(e) Exercise 4.26
We have $X$ and $Y$ independent, with $X \sim \mathrm{Exp}(\lambda)$ and $Y \sim \mathrm{Exp}(\mu)$. In the rate parametrization used here, the densities of $X$ and $Y$ are given by
$$
f_X(x) \equiv \lambda e^{-\lambda x}, \qquad f_Y(y) \equiv \mu e^{-\mu y},
$$
respectively. We also have the random variables
$$
Z \equiv \min\{X, Y\}
$$
and
$$
W \equiv \begin{cases} 1, & \text{if } Z = X \\ 0, & \text{if } Z = Y. \end{cases}
$$
i. We have
\begin{align*}
P(Z \le z, W = 0) &= P(Y \le z, X > Y) \\
&= \int_0^z \int_y^\infty \lambda e^{-\lambda x} \cdot \mu e^{-\mu y}\,dx\,dy \\
&= \int_0^z \mu e^{-\mu y} \left( -e^{-\lambda x} \Big|_{x=y}^{x=\infty} \right) dy \\
&= \int_0^z \mu e^{-\mu y} e^{-\lambda y}\,dy \\
&= \frac{\mu}{\mu + \lambda} \left( -e^{-(\mu + \lambda) y} \Big|_0^z \right) \\
&= \frac{\mu}{\mu + \lambda} \left( 1 - e^{-(\mu + \lambda) z} \right).
\end{align*}
In addition,
\begin{align*}
P(Z \le z, W = 1) &= P(X \le z, Y > X) \\
&= \int_0^z \int_x^\infty \lambda e^{-\lambda x} \mu e^{-\mu y}\,dy\,dx \\
&= \int_0^z \lambda e^{-\lambda x} \left( -e^{-\mu y} \Big|_{y=x}^{y=\infty} \right) dx \\
&= \int_0^z \lambda e^{-\lambda x} e^{-\mu x}\,dx \\
&= \frac{\lambda}{\mu + \lambda} \left( -e^{-(\mu + \lambda) x} \Big|_0^z \right),
\end{align*}
which implies that
$$
P(Z \le z, W = 1) = \frac{\lambda}{\mu + \lambda} \left( 1 - e^{-(\mu + \lambda) z} \right).
$$
ii. We have
\begin{align*}
P(W = 0) &= P(Z = Y) \\
&= P(X > Y) \\
&= \int_0^\infty \int_y^\infty \lambda e^{-\lambda x} \mu e^{-\mu y}\,dx\,dy \\
&= \int_0^\infty \mu e^{-\mu y} e^{-\lambda y}\,dy \\
&= -\frac{\mu}{\mu + \lambda}\,e^{-(\mu + \lambda) y} \Big|_0^\infty \\
&= \frac{\mu}{\mu + \lambda},
\end{align*}
which implies that
$$
P(W = 1) = 1 - \frac{\mu}{\mu + \lambda} = \frac{\lambda}{\mu + \lambda}.
$$
P (W = 1) = −
Now
\begin{align*}
P(Z \le z \mid W = 0) &= \frac{P(Z \le z, W = 0)}{P(W = 0)} \\
&= \frac{\frac{\mu}{\mu + \lambda} \left(1 - e^{-(\mu + \lambda) z}\right)}{\frac{\mu}{\mu + \lambda}} \\
&= 1 - e^{-(\mu + \lambda) z},
\end{align*}
and
\begin{align*}
P(Z \le z \mid W = 1) &= \frac{P(Z \le z, W = 1)}{P(W = 1)} \\
&= \frac{\frac{\lambda}{\mu + \lambda} \left(1 - e^{-(\mu + \lambda) z}\right)}{\frac{\lambda}{\mu + \lambda}} \\
&= 1 - e^{-(\mu + \lambda) z}.
\end{align*}
Therefore
$$
P(Z \le z \mid W = 0) = P(Z \le z \mid W = 1) = 1 - e^{-(\mu + \lambda) z}.
$$
But
\begin{align*}
P(Z \le z) &= P(Z \le z, W = 0) + P(Z \le z, W = 1) \\
&= \frac{\mu}{\mu + \lambda} \left(1 - e^{-(\mu + \lambda) z}\right) + \frac{\lambda}{\mu + \lambda} \left(1 - e^{-(\mu + \lambda) z}\right) \\
&= 1 - e^{-(\mu + \lambda) z}.
\end{align*}
It follows that $Z$ and $W$ are independent.
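A simulation sketch of this independence (my addition, assuming NumPy; rate parametrization matching the densities above): the conditional cdf of $Z$ should be the same whichever value of $W$ we condition on.

```python
# Sketch: Z = min(X, Y) and W = 1{Z = X} are independent when X, Y are
# independent exponentials with rates lam and mu.
import numpy as np

rng = np.random.default_rng(4)
lam, mu, n = 1.5, 0.7, 1_000_000   # illustrative rates
x = rng.exponential(1 / lam, n)
y = rng.exponential(1 / mu, n)
z, w = np.minimum(x, y), x < y     # w = True means Z = X (i.e. W = 1)

print(w.mean(), lam / (lam + mu))  # P(W = 1)
z0 = 0.8
print((z[w] <= z0).mean(), (z[~w] <= z0).mean(),
      1 - np.exp(-(lam + mu) * z0))  # both conditional cdfs match the marginal
```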
(f) Exercise 4.31
We have, conditional on $\{X = x\}$,
$$
Y \sim \mathrm{bin}(n, x), \qquad \text{where } X \sim \mathrm{Unif}(0, 1).
$$
i. The conditional density of Y given X = x is given by
$$
f(y \mid x) \equiv \begin{cases} \binom{n}{y} x^y (1 - x)^{n - y}, & 0 < x < 1 \\ 0, & \text{otherwise.} \end{cases}
$$
Therefore
$$
E[Y \mid X] = nX
$$
and
$$
E[Y] = E\left[E[Y \mid X]\right] = E[nX] = n E[X] = n \times \frac{1 - 0}{2} = \frac{n}{2}.
$$
In addition,
$$
\mathrm{Var}[Y \mid X] = nX(1 - X),
$$
and
\begin{align*}
\mathrm{Var}[Y] &= E\left[\mathrm{Var}[Y \mid X]\right] + \mathrm{Var}\left[E[Y \mid X]\right] \\
&= E[nX(1 - X)] + \mathrm{Var}[nX] \\
&= \int_0^1 n x (1 - x)\,dx + n^2\,\mathrm{Var}[X] \\
&= n \left( \frac{x^2}{2} - \frac{x^3}{3} \right) \bigg|_0^1 + n^2 \times \frac{(1 - 0)^2}{12} \\
&= n \left( \frac{1}{2} - \frac{1}{3} \right) + \frac{n^2}{12} \\
&= \frac{n}{6} + \frac{n^2}{12}.
\end{align*}
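A simulation sketch of the mixture (my addition, assuming NumPy): draw $X \sim \mathrm{Unif}(0,1)$, then $Y \mid X \sim \mathrm{bin}(n, X)$, and compare the sample moments with $n/2$ and $n/6 + n^2/12$.

```python
# Sketch: check E[Y] = n/2 and Var[Y] = n/6 + n^2/12 for the uniform-binomial
# mixture X ~ Unif(0, 1), Y | X = x ~ bin(n, x).
import numpy as np

rng = np.random.default_rng(5)
n, sims = 10, 1_000_000
x = rng.random(sims)
y = rng.binomial(n, x)

print(y.mean(), n / 2)             # ~5.0 for n = 10
print(y.var(), n / 6 + n**2 / 12)  # ~10.0 for n = 10
```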
ii. The joint distribution of X and Y is given by
\begin{align*}
F(x, y) &\equiv P(0 < X \le x, Y = y) \\
&= \int_{-\infty}^{x} f(s, y)\,ds \\
&= \int_{-\infty}^{x} f(y \mid s)\,f_X(s)\,ds \\
&= \int_0^x \binom{n}{y} s^y (1 - s)^{n - y}\,ds,
\end{align*}
where
$$
f_X(x) \equiv \begin{cases} 1, & 0 < x < 1 \\ 0, & \text{otherwise.} \end{cases}
$$
iii. The marginal distribution of Y is given by
\begin{align*}
P(Y = y) &= P(0 < X < 1, Y = y) \\
&= \int_0^1 \binom{n}{y} x^y (1 - x)^{n - y}\,dx \\
&= \binom{n}{y} B(y + 1,\, n - y + 1) \\
&= \frac{n!}{y!\,(n - y)!} \times \frac{y!\,(n - y)!}{(n + 1)!} \\
&= \frac{1}{n + 1}, \qquad y = 0, 1, \ldots, n,
\end{align*}
using the beta integral $\int_0^1 x^a (1 - x)^b\,dx = B(a + 1, b + 1) = \frac{a!\,b!}{(a + b + 1)!}$ for nonnegative integers $a$ and $b$. Hence $Y$ is uniformly distributed on $\{0, 1, \ldots, n\}$.
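The uniform marginal can be confirmed by the same simulation (my addition, assuming NumPy): each value of $Y$ should occur with frequency close to $1/(n + 1)$.

```python
# Sketch: the marginal pmf of Y is uniform on {0, ..., n}: P(Y = y) = 1/(n+1).
import numpy as np

rng = np.random.default_rng(6)
n, sims = 5, 1_000_000
y = rng.binomial(n, rng.random(sims))

freqs = np.bincount(y, minlength=n + 1) / sims
print(freqs, 1 / (n + 1))  # each frequency ~1/6 for n = 5
```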
References
Casella, G., and R. L. Berger (2002): Statistical Inference, 2nd ed. Pacific Grove, Calif.: Wadsworth.