Math 5080-2, F 2015. Assignment 1. Chapter 6, Exercises 2, 4, 9, 10, 13, 14(b),
16(a).
2. (a) For 0 < y < 1, FY(y) = P(X^{1/4} ≤ y) = P(X ≤ y^4) = FX(y^4) = y^4, so fY(y) = 4y^3.
(b) For e^{−1} < w < 1, FW(w) = P(e^{−X} ≤ w) = P(X ≥ −ln w) = 1 + ln w, so fW(w) = 1/w.
(c) For 0 < z < 1 − e^{−1}, FZ(z) = P(1 − e^{−X} ≤ z) = P(e^{−X} ≥ 1 − z) = P(X ≤ −ln(1 − z)) = −ln(1 − z), so fZ(z) = 1/(1 − z).
(d) For 0 < u < 1/4, FU(u) = P(X(1 − X) ≤ u) = P((X − 1/2)^2 ≥ 1/4 − u) = P(|X − 1/2| ≥ √(1/4 − u)) = 1 − 2√(1/4 − u), so fU(u) = (1/4 − u)^{−1/2}.
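As a quick numerical sanity check (mine, not part of the assignment), the closed form FU(u) = 1 − 2√(1/4 − u) can be compared against a simulation; the sample size and seed below are arbitrary choices.

```python
import random
import math

# With X uniform on (0, 1) and U = X(1 - X), compare the empirical CDF of U
# at a few points u against the closed form F_U(u) = 1 - 2*sqrt(1/4 - u).
random.seed(0)
n = 200_000
u_samples = [x * (1 - x) for x in (random.random() for _ in range(n))]

def F_U(u):
    # valid for 0 < u < 1/4
    return 1 - 2 * math.sqrt(0.25 - u)

for u in (0.05, 0.10, 0.20):
    empirical = sum(1 for s in u_samples if s <= u) / n
    assert abs(empirical - F_U(u)) < 0.01, (u, empirical, F_U(u))
```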
4. (a) FX(x) = 1 − e^{−(x/θ)^β} for x > 0, so FY(y) = P((X/θ)^β ≤ y) = P(X ≤ θy^{1/β}) = FX(θy^{1/β}) = 1 − e^{−y} for y > 0, hence fY(y) = e^{−y} for y > 0.
(b) FW(w) = P(ln X ≤ w) = P(X ≤ e^w) = FX(e^w) = 1 − e^{−e^{βw}/θ^β} for w real, so fW(w) = e^{−e^{βw}/θ^β} · βe^{βw}/θ^β for w real.
(c) FZ(z) = P((ln X)^2 ≤ z) = P(−√z ≤ ln X ≤ √z) = P(e^{−√z} ≤ X ≤ e^{√z}) = FX(e^{√z}) − FX(e^{−√z}) = e^{−(e^{−√z}/θ)^β} − e^{−(e^{√z}/θ)^β} for z > 0, and fZ(z) is the derivative of this.
9. (a) A = (0, 1), B = (0, 1). fY(y) = fX(y^4)|(y^4)'| = 4y^3 for y ∈ B.
(b) A = (0, 1), B = (e^{−1}, 1). fW(w) = fX(−ln w)|(−ln w)'| = 1/w for w ∈ B.
(c) A = (0, 1), B = (0, 1 − e^{−1}). fZ(z) = fX(−ln(1 − z))|(−ln(1 − z))'| = 1/(1 − z) for z ∈ B.
(d) A1 = (0, 1/2), A2 = (1/2, 1), B = (0, 1/4). Now, if u ∈ (0, 1/4), then u = x(1 − x) implies
x = (1 ± √(1 − 4u))/2,
with the minus sign if x ∈ A1 and the plus sign if x ∈ A2. So
fU(u) = fX((1 − √(1 − 4u))/2)|((1 − √(1 − 4u))/2)'| + fX((1 + √(1 − 4u))/2)|((1 + √(1 − 4u))/2)'| = 2(1 − 4u)^{−1/2} = (1/4 − u)^{−1/2}
if u ∈ B.
10. (a) For y > 0, fY(y) = fX(−y)|(−y)'| + fX(y)|(y)'| = e^{−y}, which is an exponential density.
(b) P (W = 0) = 1/2 and P (W = 1) = 1/2 by symmetry of fX . So
P (W ≤ w) = 0 for w < 0, = 1/2 for 0 ≤ w < 1, and = 1 for 1 ≤ w.
13. y = u(x) = x^2, so x = ±√y if 0 < y < 4 and x = √y if 4 < y < 16. Therefore,
fY(y) = (y/24)(1/(2√y)) · 2 = √y/24 if 0 < y < 4,
fY(y) = (y/24)(1/(2√y)) = √y/48 if 4 < y < 16.
14. (b) X = V and Y = V/U, so
J = det[[0, 1], [−v/u^2, 1/u]] = v/u^2,
so fU,V(u, v) = f(v, v/u)v/u^2 = 4(v/u^2)e^{−2v(1+1/u)} for v > 0 and v/u > 0, or equivalently, u > 0 and v > 0.
16. (a) X1 = V and X2 = U/V, so
J = det[[0, 1], [1/v, −u/v^2]] = −1/v,
so fU,V(u, v) = f(v)f(u/v)(1/v) = (1/v^2)(1/(u/v)^2)(1/v) = 1/(u^2 v) for v ≥ 1 and u/v ≥ 1, or equivalently, u ≥ v ≥ 1.
Math 5080-2, F 2015. Assignment 2. Chapter 6, Exercises 18, 25, 27, 28, 29,
31.
18. (a) X = T and Y = S − T, so
J = det[[0, 1], [1, −1]] = −1,
and fS,T(s, t) = f(t, s − t) = e^{−(s−t)}, where 0 < t < s − t < ∞, or s > 2t > 0.
(b) fT(t) = ∫_{2t}^∞ e^{−(s−t)} ds = e^t e^{−2t} = e^{−t} for t > 0.
(c) fS(s) = ∫_0^{s/2} e^{−(s−t)} dt = e^{−s}(e^{s/2} − 1) = e^{−s/2} − e^{−s} for s > 0.
25. If X is POI(λ), then MX(t) = Σ_{n=0}^∞ e^{tn} e^{−λ} λ^n/n! = e^{−λ} exp(λe^t) = e^{λ(e^t−1)}.
(a) Therefore,
e^{25(e^t−1)} = MX1(t)[e^{5(e^t−1)}]^3,
hence MX1(t) = e^{10(e^t−1)}, so X1 is POI(10).
(b) W = X1 + X2 is POI(15), as can also be shown by means of MGFs.
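The MGF factorization in (a) is easy to confirm numerically; the grid of t-values below is an arbitrary choice of mine.

```python
import math

# exp(25(e^t - 1)) should equal M_X1(t) * [exp(5(e^t - 1))]^3
# with M_X1(t) = exp(10(e^t - 1)), i.e. 25 = 10 + 3*5 in the exponent.
def poi_mgf(lam, t):
    return math.exp(lam * (math.exp(t) - 1))

for t in (-1.0, -0.3, 0.0, 0.4, 1.2):
    lhs = poi_mgf(25, t)
    rhs = poi_mgf(10, t) * poi_mgf(5, t) ** 3
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
```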
27. Misprint: note that Y1 should be Yi.
(a) ln Π_{i=1}^n Yi = Σ_{i=1}^n ln Yi is N(µ1 + ··· + µn, σ1^2 + ··· + σn^2), so Π_{i=1}^n Yi is LOGN(µ1 + ··· + µn, σ1^2 + ··· + σn^2).
(b) ln Π_{i=1}^n Yi^a = Σ_{i=1}^n ln Yi^a = a Σ_{i=1}^n ln Yi is N(a(µ1 + ··· + µn), a^2(σ1^2 + ··· + σn^2)), so Π_{i=1}^n Yi^a is LOGN(a(µ1 + ··· + µn), a^2(σ1^2 + ··· + σn^2)).
(c) ln(Y1/Y2) = ln Y1 − ln Y2 is N(µ1 − µ2, σ1^2 + σ2^2), so Y1/Y2 is LOGN(µ1 − µ2, σ1^2 + σ2^2).
(d) If Y is LOGN(µ, σ^2), then E[Y] is the MGF of N(µ, σ^2) at t = 1, hence it is e^{µ+(1/2)σ^2}. By part (a), the answer is e^{µ1+···+µn+(1/2)(σ1^2+···+σn^2)}.
28. (a) Now F(x) = x^2 for 0 ≤ x ≤ 1, so by Eqs. (6.5.5) and (6.5.6),
g1(y1) = 2[1 − y1^2]2y1 = 4y1(1 − y1^2),
g2(y2) = 2[y2^2]2y2 = 4y2^3,
for 0 < y1 < 1 and 0 < y2 < 1, resp.
(b) By Eq. (6.5.4),
g12 (y1 , y2 ) = 2 · 2y1 · 2y2 = 8y1 y2 ,
0 < y1 < y2 < 1.
(c) Let R = Y2 − Y1 and S = Y1 . Then Y1 = S and Y2 = R + S, so
fR,S (r, s) = g12 (s, r + s) = 8r(r + s),
r > 0, s > 0, r + s < 1.
Integrating out s we get fR(r) = ∫_0^{1−r} 8r(r + s) ds = 8r^2(1 − r) + 4r(1 − r)^2 = r(1 − r)[8r + 4(1 − r)] = 4r(1 − r^2), 0 < r < 1.
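A small check (my own, with an ad hoc midpoint rule): the marginal fR(r) = 4r(1 − r^2) obtained above should integrate to 1 over (0, 1), and should match the inner integral of the joint density.

```python
# f_R(r) = 4r(1 - r^2) from integrating f_{R,S}(r, s) = 8r(r + s) over 0 < s < 1 - r.
def f_R(r):
    return 4 * r * (1 - r * r)

# midpoint rule: total mass should be 1
m = 100_000
total = sum(f_R((k + 0.5) / m) / m for k in range(m))
assert abs(total - 1) < 1e-6

# closed-form inner integral: 8r^2(1-r) + 4r(1-r)^2 should equal f_R(r)
for r in (0.1, 0.5, 0.9):
    inner = 8 * r * r * (1 - r) + 4 * r * (1 - r) ** 2
    assert abs(inner - f_R(r)) < 1e-12
```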
29. (a) f(y1, . . . , yn) = n!/(y1 ··· yn)^2, 1 ≤ y1 < y2 < ··· < yn < ∞.
(b) g1(y1) = n(1/y1)^{n−1}(1/y1^2), 1 < y1 < ∞.
(c) gn(yn) = n(1 − 1/yn)^{n−1}(1/yn^2), 1 < yn < ∞.
(d) See p. 220. g12(y1, y2) = 2/(y1^2 y2^2), 1 < y1 < y2 < ∞. Let R = Yn − Y1 and S = Y1. Then Y1 = S and Yn = R + S and
fR,S(r, s) = g1n(s, r + s) = 2/(s^2(r + s)^2), r > 0, s > 1,
so the marginal density of R is
fR(r) = ∫_1^∞ 2/(s^2(r + s)^2) ds = (2/r^3)[r(2 + r)/(1 + r) − 2 log(1 + r)], r > 0.
(I used Mathematica to evaluate the integral.)
(e) From (6.5.3), gr(yr) = [n!/((r − 1)!(n − r)!)](1 − 1/yr)^{r−1}(1/yr)^{n−r}(1/yr^2), 1 < yr < ∞.
31. (a) g1(y1) = n[1 − (1 − e^{−y1})]^{n−1}e^{−y1} = ne^{−ny1}, y1 > 0.
(b) gn(yn) = n[1 − e^{−yn}]^{n−1}e^{−yn}, yn > 0.
(c) g1n(y1, yn) = n(n − 1)e^{−y1}[e^{−y1} − e^{−yn}]^{n−2}e^{−yn}, 0 < y1 < yn. Let R = Yn − Y1 and S = Y1. Then Y1 = S and Yn = R + S and
fR,S(r, s) = g1n(s, r + s) = n(n − 1)e^{−s}[e^{−s} − e^{−(r+s)}]^{n−2}e^{−(r+s)} = n(n − 1)e^{−ns}e^{−r}(1 − e^{−r})^{n−2}
for r > 0 and s > 0. Integrate out s > 0 to get fR(r) = (n − 1)e^{−r}(1 − e^{−r})^{n−2}, r > 0.
(d) The joint pdf of Y1, . . . , Yn is n!e^{−(y1+···+yn)}, 0 < y1 < ··· < yn. We must integrate over yn > ··· > yr+1 (in that order), where yr+1 > yr. We get n(n − 1) ··· (n − r + 1)e^{−(y1+···+yr)−(n−r)yr}, 0 < y1 < ··· < yr.
Assignment 3, F 2015. Chapter 7, Exercises 1, 2, 3.
1. (a) FX1:n(y) = P(X1:n ≤ y) = 1 − P(X1:n > y) = 1 − P(X1 > y)^n = 1 − (1 − F(y))^n = 1 − y^{−n} for y ≥ 1.
(b) This converges to 1 if y ≥ 1, 0 if y < 1. Therefore X1:n converges in distribution to the constant 1.
(c) Zn = X1:n^n has cdf Fn(z) = FX1:n(z^{1/n}) = 1 − z^{−1} for z ≥ 1.
2. (a) Xn:n has cdf F(x)^n = (1 + e^{−x})^{−n} → 0 as n → ∞, so Xn:n does not converge in distribution.
(b) Xn:n − ln n has cdf F(y + ln n)^n = (1 + e^{−y−ln n})^{−n} = (1 + e^{−y}/n)^{−n} → e^{−e^{−y}} for y real. This is a cdf, so the answer is yes.
3. (a) X1:n has cdf G(y) = 1 − (1 − F(y))^n = 1 − y^{−2n} → 1 for y > 1 and G(y) = 0 for y ≤ 1. Hence X1:n converges in distribution to the constant 1.
(b) Xn:n has cdf G(y) = F(y)^n = (1 − y^{−2})^n → 0 for y > 1, so Xn:n does not converge in distribution.
(c) n^{−1/2}Xn:n has cdf Fn(y) = F(n^{1/2}y)^n = (1 − (n^{1/2}y)^{−2})^n = (1 − y^{−2}/n)^n → e^{−y^{−2}} for y > 0. Since this defines a cdf, convergence in distribution is established.
Assignment 4. Chapter 7, Exercises 7, 15, 16, 17.
2
7. The WEI(1,2) √
distribution has pdf f (x) = 2xe−x for x > 0, mean
3
1
1
1
µ = Γ( 2 ) = 2 Γ( 2 ) = 2 π and variance σ 2 = Γ(2) − Γ( 23 )2 = 1 − π/4. √
(a) By the √
central limit theorem, this holds with a = µ − 1.96σ/ n and
b = µ + 1.96σ/ n, so if n = 35 we have a = 0.7328 and b = 1.0397.
(b) For n odd, X(n+1)/2:n is approximately N (x1/2 , c2 /n), where c2 = 1/(4f (x1/2 )2 ).
√
2
Now F (x1/2 ) = 12 , so since F (x) = 1 −√e−x , it follows that x1/2√= ln 2. Also,
c2 = 1/(4 ln 2). Now a = x1/2 − 1.96c/ n and b = x1/2 + 1.96c/ n, so if n = 35
we have a = 0.6336 and b = 1.0315.
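The endpoints in (a) and (b) can be reproduced in a few lines; everything below (1.96, n = 35, the WEI(1, 2) moments, x1/2 = √(ln 2)) comes from the solution above.

```python
import math

n = 35
mu = math.sqrt(math.pi) / 2            # Γ(3/2) = (1/2)√π
sigma = math.sqrt(1 - math.pi / 4)     # sqrt(Γ(2) − Γ(3/2)^2)
a = mu - 1.96 * sigma / math.sqrt(n)
b = mu + 1.96 * sigma / math.sqrt(n)
assert abs(a - 0.7328) < 5e-4 and abs(b - 1.0397) < 5e-4

x_med = math.sqrt(math.log(2))         # solves 1 - exp(-x^2) = 1/2
c = 1 / (2 * math.sqrt(math.log(2)))   # c^2 = 1/(4 ln 2)
a2 = x_med - 1.96 * c / math.sqrt(n)
b2 = x_med + 1.96 * c / math.sqrt(n)
assert abs(a2 - 0.6336) < 5e-4 and abs(b2 - 1.0315) < 5e-4
```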
15. (a) µ = ∫_0^B θB^{−θ}w^θ dw = Bθ/(θ + 1) and σ^2 = ∫_0^B θB^{−θ}w^{θ+1} dw − B^2θ^2/(θ + 1)^2 = B^2[θ/(θ + 2) − θ^2/(θ + 1)^2] = B^2θ/[(θ + 1)^2(θ + 2)]. If θ = 3 and B = 80 we have µ = 60 and σ^2 = 240. By the central limit theorem, the probability in question is approximately 1 − Φ((6025 − nµ)/√(nσ^2)), which if n = 100 gives 1 − Φ(0.1614) = 0.4359.
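A quick recomputation of the numbers in (a), with Φ evaluated via math.erf (θ = 3, B = 80, n = 100 as in the solution).

```python
import math

theta, B, n = 3, 80, 100
mu = B * theta / (theta + 1)
var = B**2 * theta / ((theta + 1)**2 * (theta + 2))
assert mu == 60 and var == 240

z = (6025 - n * mu) / math.sqrt(n * var)
Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
assert abs(z - 0.1614) < 5e-4
assert abs((1 - Phi(z)) - 0.4359) < 5e-4
```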
(b) Fn (y) = 1 − (1 − F (y))n → 1 if y > 0 and → 0 if y ≤ 0. This gives
convergence in distribution to 0, therefore convergence in probability.
(c) Fn (y) = F (y)n → 0 if y < B and → 1 if y ≥ B.
(d) (Wn:n/B)^n has cdf Fn(y) = F(By^{1/n})^n, where F(w) = (w/B)^θ, so Fn(y) → y^θ for 0 < y < 1.
(e) It is asymptotically N(x1/2, c^2/n), where c^2 = 1/(4f(x1/2)^2). Setting F(w) = 1/2 we find that x1/2 = B/2^{1/θ} and c^2 = B^2/(θ^2 2^{2/θ}).
(f) It converges to x1/2 = B/2^{1/θ}.
(g) n^{1/θ}W1:n/B has cdf Fn(y) = 1 − (1 − F(Bn^{−1/θ}y))^n = 1 − (1 − y^θ/n)^n → 1 − e^{−y^θ} for y > 0, which is WEI(1, θ).
16. (a) X → µ in probability by the weak law of large numbers. Since e^{−µ} is continuous in µ, Yn = e^{−X} → e^{−µ} in probability.
(b) By the theorem, Yn is asymptotically N(e^{−µ}, µe^{−2µ}/n), since [(e^{−µ})']^2 = [−e^{−µ}]^2 = e^{−2µ}.
(c) This is true for the same reason that part (a) holds.
17. x1/2 = µ by symmetry, and c^2 = (1/2)(1 − 1/2)/f(µ)^2 = (1/4)[(2πσ^2)^{−1/2}]^{−2} = πσ^2/2, so X̃n is asymptotically N(µ, πσ^2/(2n)).
Assignment 5, F 2015. Chapter 8, Exercises 13, 15.
13. (a) Z is N(0, 1/16) so P(Z < 1/2) = P(Z/(1/4) < 2) = Φ(2) = 0.9772.
(b) Z1 − Z2 is N(0, 2) so P(Z1 − Z2 < 2) = P((Z1 − Z2)/√2 < √2) = Φ(√2) = Φ(1.4142) ≈ 0.9213.
(c) Z1 + Z2 is N(0, 2), so the answer is the same as in (b).
(d) This is P (χ2 (16) < 32) ≈ 0.988 from page 607.
(e) This is P (χ2 (15) < 25) = 0.950 from page 607.
15. (a) N(0, 2σ^2).
(b) N(3µ, 5σ^2).
(c) (X1 − X2)/(σ√2) is N(0, 1), so dividing by Sz, the sample standard deviation of the Z sample, we get t(k − 1).
(d) χ2 (1).
(e) For the same reason as in (c), t(k − 1).
(f) χ2 (2).
(g) This is the difference between two independent GAM(2, 1/2) random
variables, which must be unknown.
(h) t(1).
(i) F (1, 1).
(j) By (h), Z1 /|Z2 | is t(1), and this is the same except for an independent
factor of ±1 with probability 1/2 each. Since t(1) is symmetric, the answer must
be t(1), which is equivalent to the Cauchy distribution.
(k) This is the ratio of two independent nonstandard normal random variables, which
must be unknown.
(l) √n(X − µ)/σ is N(0, 1), and the sum is χ^2(k), so the result is t(k).
(m) The first term is χ^2(n) and the second is χ^2(k − 1), so the result is χ^2(n + k − 1).
(n) The first term is N(µ/σ^2, 1/(nσ^2)) and the second is N(0, 1/k), so the result is N(µ/σ^2, 1/(nσ^2) + 1/k).
(o) √k Z is N(0, 1), so the answer is χ^2(1).
(p) This is (Sx^2/σ^2)/Sz^2, which is F(n − 1, k − 1).
Assignment 6. Chapter 9, Exercises 1–6.
1. (a) µ'1 = θ/(1 + θ) = X, so θ̂ = X/(1 − X).
(b) µ'1 = (θ + 1)/θ = X, so θ̂ = 1/(X − 1).
(c) X is GAM(1/θ, 2), so µ'1 = 2/θ = X, and θ̂ = 2/X.
2. (a) NB(3, p). µ'1 = 3/p = X, so p̂ = 3/X.
(b) GAM(2, κ). µ'1 = 2κ = X, so κ̂ = X/2.
(c) WEI(θ, 1/2). µ'1 = θΓ(1 + 1/(1/2)) = 2θ = X, so θ̂ = X/2.
(d) DE(θ, η). µ'1 = η = X and µ'2 = σ^2 + µ^2 = 2θ^2 + η^2 = M'2, so η̂ = X and θ̂ = √((1/2)(M'2 − X^2)) = σ̂/√2.
(e) EV(θ, η). µ'1 = η − γθ = X and µ'2 = σ^2 + µ^2 = π^2θ^2/6 + (η − γθ)^2 = M'2, so θ̂ = (√6/π)σ̂ and η̂ = X + γ(√6/π)σ̂.
(f) PAR(θ, κ). µ'1 = θ/(κ − 1) = X and µ'2 = σ^2 + µ^2 = θ^2κ/[(κ − 2)(κ − 1)^2] + θ^2/(κ − 1)^2 = M'2, so κ̂/(κ̂ − 2) = σ̂^2/X^2, and κ̂ = (2σ̂^2/X^2)/(σ̂^2/X^2 − 1) and θ̂ = X(κ̂ − 1).
3. (a) L = θ^n(x1 ··· xn)^{θ−1}, so ln L = n ln θ + (θ − 1)(ln x1 + ··· + ln xn), and (∂/∂θ) ln L = n/θ + (ln x1 + ··· + ln xn) = 0 implies θ̂ = −n/(ln X1 + ··· + ln Xn). This is justified by noticing that L → 0 as θ → 0 and as θ → ∞, since 0 < xi < 1 for i = 1, . . . , n.
(b) L = (θ + 1)^n(x1 ··· xn)^{−θ−2}, so ln L = n ln(θ + 1) − (θ + 2)(ln x1 + ··· + ln xn), and (∂/∂θ) ln L = n/(θ + 1) − (ln x1 + ··· + ln xn) = 0 implies θ̂ = n/(ln X1 + ··· + ln Xn) − 1. This is justified by noticing that L → 0 as θ → −1 and as θ → ∞. Here we need to extend the parameter space to (−1, ∞), for otherwise our MLE might not belong to the parameter space. (If we had done that in Problem 1, the mean would not necessarily exist.)
(c) L = θ^{2n}x1 ··· xn e^{−θ(x1+···+xn)}, so ln L = 2n ln θ + ln x1 + ··· + ln xn − θ(x1 + ··· + xn), and (∂/∂θ) ln L = 2n/θ − (x1 + ··· + xn) = 0 implies θ̂ = 2/X. This is justified by noticing that L → 0 as θ → 0 and as θ → ∞.
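As an informal check of 3(a) on simulated data (the true θ, sample size, and seed are arbitrary choices of mine), the closed-form MLE should dominate any grid of candidate values of the log-likelihood.

```python
import math
import random

# f(x; θ) = θ x^{θ-1} on (0,1); inverse-CDF sampling: F(x) = x^θ, so x = u^{1/θ}.
random.seed(1)
theta_true = 2.5
n = 50
xs = [random.random() ** (1 / theta_true) for _ in range(n)]

def loglik(theta):
    return n * math.log(theta) + (theta - 1) * sum(math.log(x) for x in xs)

theta_hat = -n / sum(math.log(x) for x in xs)   # the closed-form MLE above
best_grid = max(loglik(0.1 * k) for k in range(1, 100))
assert loglik(theta_hat) >= best_grid           # θ̂ is the exact maximizer
```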
4. (a) BIN(1, p). L = p^{x1+···+xn}(1 − p)^{n−(x1+···+xn)}, so ln L = (x1 + ··· + xn) ln p + (n − (x1 + ··· + xn)) ln(1 − p), and (∂/∂p) ln L = (x1 + ··· + xn)/p − (n − (x1 + ··· + xn))/(1 − p) = 0 implies p̂ = X.
(b) GEO(p). L = (1 − p)^{x1+···+xn−n}p^n, so ln L = (x1 + ··· + xn − n) ln(1 − p) + n ln p, and (∂/∂p) ln L = −(x1 + ··· + xn − n)/(1 − p) + n/p = 0 implies p̂ = 1/X.
(c) NB(3, p). L = C(x1 − 1, 2) ··· C(xn − 1, 2)p^{3n}(1 − p)^{x1+···+xn−3n}, so ln L = ln C(x1 − 1, 2) + ··· + ln C(xn − 1, 2) + 3n ln p + (x1 + ··· + xn − 3n) ln(1 − p), and (∂/∂p) ln L = 3n/p − (x1 + ··· + xn − 3n)/(1 − p) = 0 implies p̂ = 3/X.
(d) N(0, θ). L = (2πθ)^{−n/2} e^{−(x1^2+···+xn^2)/(2θ)}, so ln L = −(n/2)(ln(2π) + ln θ) − (x1^2 + ··· + xn^2)/(2θ), and (∂/∂θ) ln L = −n/(2θ) + (x1^2 + ··· + xn^2)/(2θ^2) = 0 implies θ̂ = (X1^2 + ··· + Xn^2)/n.
(e) GAM(θ, 2). L = θ^{−2n}x1 ··· xn e^{−(x1+···+xn)/θ}, so ln L = −2n ln θ + ln x1 + ··· + ln xn − (x1 + ··· + xn)/θ, and (∂/∂θ) ln L = −2n/θ + (x1 + ··· + xn)/θ^2 = 0 implies θ̂ = X/2.
(f) DE(θ, 0). L = (2θ)^{−n}e^{−(|x1|+···+|xn|)/θ}, so ln L = −n(ln 2 + ln θ) − (|x1| + ··· + |xn|)/θ, and (∂/∂θ) ln L = −n/θ + (|x1| + ··· + |xn|)/θ^2 = 0 implies θ̂ = (|X1| + ··· + |Xn|)/n.
(g) WEI(θ, 1/2). L = 2^{−n}θ^{−n/2}(x1 ··· xn)^{−1/2}e^{−(x1^{1/2}+···+xn^{1/2})/√θ}, so ln L = −n ln 2 − (n/2) ln θ − (ln x1 + ··· + ln xn)/2 − (x1^{1/2} + ··· + xn^{1/2})/√θ, and (∂/∂θ) ln L = −n/(2θ) + (x1^{1/2} + ··· + xn^{1/2})/(2θ^{3/2}) = 0 implies θ̂ = ((X1^{1/2} + ··· + Xn^{1/2})/n)^2.
(h) PAR(1, κ). L = κ^n[(1 + x1) ··· (1 + xn)]^{−κ−1}, so ln L = n ln κ − (κ + 1)(ln(1 + x1) + ··· + ln(1 + xn)), and (∂/∂κ) ln L = n/κ − (ln(1 + x1) + ··· + ln(1 + xn)) = 0 implies κ̂ = n/(ln(1 + X1) + ··· + ln(1 + Xn)).
5. f(x; θ) = 2θ^2/x^3, x ≥ θ, θ > 0. L(θ) = 2^nθ^{2n}/(x1 ··· xn)^3 if x1 ≥ θ, . . . , xn ≥ θ, that is, if x1:n ≥ θ. L(θ) is maximized at θ̂ = X1:n.
6. (a) L(θ1, θ2) = 1/(θ2 − θ1)^n if x1, . . . , xn all belong to [θ1, θ2], or equivalently if θ1 ≤ x1:n and xn:n ≤ θ2. Since L is maximized when θ1 is maximized and θ2 is minimized, we must have θ̂1 = X1:n and θ̂2 = Xn:n.
(b) L(θ, η) = θ^nη^{nθ}(x1 ··· xn)^{−θ−1} if x1, . . . , xn are all in [η, ∞), that is, if x1:n ≥ η. For each θ, L(θ, η) is maximized as a function of η at η̂ = X1:n.
Next, to find θ̂ we maximize ln L(θ, η̂) = n ln θ + nθ ln η̂ − (θ + 1) Σ_{i=1}^n ln xi. Differentiate with respect to θ to get
(∂/∂θ) ln L = n/θ + n ln η̂ − Σ_{i=1}^n ln xi = 0,
hence
θ̂ = 1/[(1/n) Σ_{i=1}^n ln(Xi/X1:n)].
Assignment 7, 5080-2, F 2015. Chapter 9, Exercises 13, 17, 21, 23, 26.
13.
L = (2πσ1^2)^{−n/2} e^{−(2σ1^2)^{−1} Σ_{i=1}^n (xi−µ)^2} (2πσ2^2)^{−m/2} e^{−(2σ2^2)^{−1} Σ_{j=1}^m (yj−µ)^2}.
Take the partials of ln L with respect to µ, σ1^2, and σ2^2, and set them equal to 0. We get
(1/σ1^2) Σ_{i=1}^n (xi − µ) + (1/σ2^2) Σ_{j=1}^m (yj − µ) = 0,
−n/(2σ1^2) + (1/(2σ1^4)) Σ_{i=1}^n (xi − µ)^2 = 0,
and
−m/(2σ2^2) + (1/(2σ2^4)) Σ_{j=1}^m (yj − µ)^2 = 0.
However, it is not clear how to solve this nonlinear system simultaneously for µ, σ1^2, and σ2^2. Therefore, we leave this one incomplete.
17. (a) E[X] = [(θ − 1) + (θ + 1)]/2 = θ.
(b) Fn:n(t) = (t − (θ − 1))^n/2^n, so fn:n(t) = n(t − θ + 1)^{n−1}/2^n and
E[Xn:n] = ∫_{θ−1}^{θ+1} t[n(t − θ + 1)^{n−1}/2^n] dt
= ∫_{θ−1}^{θ+1} [n(t − θ + 1)^n/2^n] dt + (θ − 1) ∫_{θ−1}^{θ+1} [n(t − θ + 1)^{n−1}/2^n] dt
= θ + 2n/(n + 1) − 1.
Similarly,
E[X1:n] = θ − 2n/(n + 1) + 1,
so E[(X1:n + Xn:n)/2] = θ.
21. (a) f(x; p) = p^x(1 − p)^{1−x}, hence (∂/∂p) ln f(x; p) = x/p − (1 − x)/(1 − p) = (x − p)/[p(1 − p)], hence I1(p) = Var(X)/[p(1 − p)]^2 = 1/[p(1 − p)], and CRLB = p(1 − p)/n.
(b) CRLB for p(1 − p) is (1 − 2p)^2p(1 − p)/n.
(c) UMVUE of p is p̂ = X since Var(X) = p(1 − p)/n = CRLB.
23. (a)
L(θ) = (2πθ)^{−n/2} e^{−(2θ)^{−1} Σ_{i=1}^n xi^2},
so
(∂/∂θ) ln L(θ) = −n/(2θ) + (1/(2θ^2)) Σ_{i=1}^n xi^2 = 0,
so θ̂ = (1/n) Σ_{i=1}^n Xi^2. It is clearly unbiased.
(b)
I1(θ) = E[((X1^2 − θ)/(2θ^2))^2] = Var(X1^2)/(4θ^4) = (E[X1^4] − (E[X1^2])^2)/(4θ^4) = (3θ^2 − θ^2)/(4θ^4) = 1/(2θ^2),
so CRLB = 2θ^2/n. But Var(θ̂) = (1/n)Var(X1^2) = 2θ^2/n = CRLB, so this is a UMVUE of θ.
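The moment fact E[X1^4] = 3θ^2 used above can be cross-checked by quadrature; θ = 2 and the integration grid are arbitrary choices of mine.

```python
import math

# For X ~ N(0, θ), E[X^4] = 3θ^2, hence Var(X1^2) = 2θ^2.
theta = 2.0

def pdf(x):
    return math.exp(-x * x / (2 * theta)) / math.sqrt(2 * math.pi * theta)

# midpoint rule for the integral of x^4 pdf(x) over [-L, L]
m, L = 200_000, 40.0
h = 2 * L / m
fourth = sum((-L + (k + 0.5) * h) ** 4 * pdf(-L + (k + 0.5) * h) * h
             for k in range(m))
assert abs(fourth - 3 * theta**2) < 1e-4
```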
26. (a) We’ve seen that MLE = θ̂ = Xn:n .
(b) Since µ'1 = θ/2, MME = θ̃ = 2X.
(c) E[θ̂] = [n/(n + 1)]θ, so no.
(d) E[θ̃] = θ, so yes.
(e)
MSE(θ̂) = Var(θ̂) + (nθ/(n + 1) − θ)^2 = [n/(n + 2) − (n/(n + 1))^2]θ^2 + θ^2/(n + 1)^2 = 2θ^2/[(n + 1)(n + 2)]
and
MSE(θ̃) = Var(2X) = 4Var(X) = (4/n)Var(X1) = (4/n)(θ^2/12) = θ^2/(3n).
The MSE of θ̂ is smaller unless n ≤ 2; for n = 1 and n = 2 the two MSEs are equal.
Assignment 8, 5080-2, F 2015. Chapter 9, Exercises 28, 34, 40, 44.
28. (a) Var(θ̂1) = Var(X) = Var(X1)/n = θ^2/n and Var(θ̂2) = (n/(n + 1))^2 Var(X) = nθ^2/(n + 1)^2.
(b) MSE(θ̂1) = Var(θ̂1) + b(θ̂1)^2 = Var(θ̂1) = θ^2/n and MSE(θ̂2) = Var(θ̂2) + b(θ̂2)^2 = nθ^2/(n + 1)^2 + (n/(n + 1) − 1)^2θ^2 = (n + 1)θ^2/(n + 1)^2 = θ^2/(n + 1).
(c) Var(θ̂1) = θ^2/2 > Var(θ̂2) = 2θ^2/9.
(d) MSE(θ̂1) = θ^2/2 > MSE(θ̂2) = θ^2/3.
(e) Assume squared error loss. Rθ̂1(θ) = RX(θ) = E[(X − θ)^2] = Var(X) = θ^2/n, so the Bayes risk with an EXP(2) prior is ∫_0^∞ Rθ̂1(θ)(1/2)e^{−θ/2} dθ = E[θ^2]/n = 2(2^2)/n = 8/n.
34. (a) L(p) = p^n(1 − p)^{Σ Xi}, so ln L(p) = n ln p + (Σ Xi) ln(1 − p) and (∂/∂p) ln L(p) = n/p − (Σ Xi)/(1 − p) = 0. Conclude that (1 − p̂)/p̂ = X or p̂ = (1 + X)^{−1}.
(b) θ̂ = (1 − p̂)/p̂ = X by invariance.
(c) (∂/∂p) ln f = [(1 − p)/p − X1]/(1 − p), hence I1(p) = Var(X1)/(1 − p)^2 = [(1 − p)/p^2]/(1 − p)^2 = 1/[p^2(1 − p)]. If τ(p) = (1 − p)/p, then τ'(p) = −1/p^2 and the CRLB of θ is τ'(p)^2/(nI1(p)) = (1 − p)/(np^2) = θ/(np) = θ(θ + 1)/n.
(d) The MLE is X and Var(X) = Var(X1)/n = (1 − p)/(np^2), which matches the CRLB. Since X is unbiased for θ, it is a UMVUE.
(e) MSE(X) = E[(X − θ)^2] = Var(X) = (1 − p)/(np^2) → 0. Yes.
(f) X is asymptotically normal with asymptotic mean θ and asymptotic variance θ(θ + 1)/n.
(g) For X we get E[(X − θ)^2]/[θ(θ + 1)] = Var(X)/[θ(θ + 1)] = 1/n. For θ̃ = nX/(n + 1) we get E[(nX/(n + 1) − θ)^2]/[θ(θ + 1)], which after much algebra reduces to [n + θ/(θ + 1)]/(n + 1)^2, which is less than 1/(n + 1).
40. (a) E[(X − c)^2] = Var(X) + (E[X] − c)^2 makes the result obvious.
(b) E[|X − c|] = ∫|x − c|f(x) dx = ∫_{−∞}^c (c − x)f(x) dx + ∫_c^∞ (x − c)f(x) dx. Now differentiate with respect to c, set the derivative to 0, and solve for c. It is basically a calculus problem.
44. (a) The posterior is proportional to e^{−βθ}θ^n e^{−θ Σ xi} = θ^{n+1−1}e^{−θ(β+Σ xi)}, which is the stated distribution.
(b) The mean of the posterior is (n + 1)/(β + Σ xi).
(c) The −1 moment of GAM(θ, κ) is E[X^{−1}] = ∫_0^∞ x^{κ−2}e^{−x/θ} dx/(θ^κ Γ(κ)). This is θ^{κ−1}Γ(κ − 1)/(θ^κ Γ(κ)) = 1/[(κ − 1)θ]. Apply this rule to the posterior and we get (β + Σ xi)/n.
(d) Now GAM(θ, κ)/(θ/2) = GAM(2, κ) = χ^2(2κ), so GAM(θ, κ) = (θ/2)χ^2(2κ). Here the Bayes estimator is the median of the posterior, which is (1/2)(β + Σ xi)^{−1} times the median of the χ^2(2n + 2) distribution.
(e) Here the Bayes estimator is the median of the reciprocal of the posterior
random variable, which is the reciprocal of the median. So the answer is the
reciprocal of the answer in (d).
Assignment 9. 5080-2, F 2015. Chapter 10, Exercises 5, 10, 12, 13, 14.
5. (a) S is GAM(θ, 2n), so the conditional density is
[Π_{i=1}^n θ^{−2}xi e^{−xi/θ}] / [θ^{−2n}Γ(2n)^{−1}s^{2n−1}e^{−s/θ}],
where s = x1 + ··· + xn, which does not depend on θ.
(b) Π_{i=1}^n θ^{−2}xi e^{−xi/θ} = θ^{−2n}x1x2 ··· xn e^{−s/θ}, so S = X1 + ··· + Xn is sufficient by the factorization theorem.
10. (a)
f(x1, . . . , xn; µ) = (2πσ^2)^{−n/2} e^{−(2σ^2)^{−1} Σ_{i=1}^n (xi−µ)^2}
= (2πσ^2)^{−n/2} e^{−(2σ^2)^{−1} Σ_{i=1}^n xi^2} e^{(σ^2)^{−1}µ Σ_{i=1}^n xi − (2σ^2)^{−1}nµ^2}.
Since σ^2 is a constant, Σ_{i=1}^n Xi is sufficient.
(b) f(x1, . . . , xn; µ) = (2πσ^2)^{−n/2} e^{−(2σ^2)^{−1} Σ_{i=1}^n (xi−µ)^2}. By the factorization theorem, Σ_{i=1}^n (Xi − µ)^2 is sufficient, and it is a statistic since µ is known.
12.
f(x1, . . . , xn; θ, η) = (1/θ)^n e^{−Σ(xi−η)/θ} Π_{i=1}^n 1{xi > η} = (1/θ)^n e^{−(Σ xi − nη)/θ} 1{x1:n > η},
so X and X1:n are jointly sufficient.
13.
f(x1, . . . , xn; θ1, θ2) = [Γ(θ1 + θ2)/(Γ(θ1)Γ(θ2))]^n (x1 ··· xn)^{θ1−1}[(1 − x1) ··· (1 − xn)]^{θ2−1},
so Π_1^n Xi and Π_1^n (1 − Xi) are jointly sufficient.
14.
f(x1, . . . , xn; θ) = (1/θ)^n Π_{i=1}^n 1{θ < xi < 2θ} = (1/θ)^n 1{θ < x1:n and xn:n < 2θ},
so X1:n and Xn:n are jointly sufficient, but there is no single sufficient statistic for θ.
Assignment 10. Math 5080-2, F 2015. Chapter 10, Exercises 17–22.
17. (a) X1:n has density ne^{−n(y−η)}, y > η, so
E[g(X1:n)] = ∫_η^∞ g(y)ne^{−n(y−η)} dy = e^{nη}n ∫_η^∞ g(y)e^{−ny} dy.
If this is 0, then the integral is 0, and so is its derivative, hence g(η)e^{−nη} = 0 a.e., hence g = 0 a.e.
(b)
E[X1:n] = ∫_η^∞ yne^{−n(y−η)} dy = n ∫_0^∞ (z + η)e^{−nz} dz = ∫_0^∞ (y/n + η)e^{−y} dy = 1/n + η,
which suffices.
(c) 1 − p = 1 − F(xp) = e^{−(xp−η)}, so xp = η − ln(1 − p). Hence X1:n − 1/n − ln(1 − p) is an unbiased estimator of xp and a function of X1:n.
18. (a) f(x; θ) = (2πθ)^{−1/2}e^{−x^2/(2θ)}, so X^2 is complete and sufficient by the REC theorem. Alternatively, we know X^2/θ is χ^2(1) or GAM(2, 1/2), hence X^2 is GAM(2θ, 1/2). Let u be a function on the positive reals such that E[u(X^2)] = 0 for all θ > 0, or ∫_0^∞ u(y)y^{−1/2}e^{−y/(2θ)} dy = 0 for all θ > 0. This implies that u(y)y^{−1/2} has zero Laplace transform, hence is the zero function a.e.
(b) X is N (0, θ) and E[X] = 0 for all θ > 0, hence completeness fails, because we
have a nontrivial unbiased estimator of 0.
19. The N(µ, µ^2) distribution has density f(x; µ) = (2πµ^2)^{−1/2} exp{−(2µ^2)^{−1}(x − µ)^2} = (2πµ^2)^{−1/2} exp{−(2µ^2)^{−1}x^2 + µ^{−1}x − 1/2}. The sum in the exponential has two terms but µ is a scalar, not two-dimensional.
20. (a) f(x; p) = p^x(1 − p)^{1−x} = (1 − p) exp{x ln(p/(1 − p))}, so Σ_1^n Xi is complete and sufficient.
(b) f(x; µ) = µ^x e^{−µ}/x! = (e^{−µ}/x!)e^{x ln µ}, so Σ_1^n Xi is complete and sufficient.
(c) f(x; p) = C(x − 1, r − 1)p^r q^{x−r} = C(x − 1, r − 1)(p/q)^r e^{x ln q}, so Σ_1^n Xi is complete and sufficient.
(d) f(x; µ, σ^2) = (2πσ^2)^{−1/2} exp{−(x − µ)^2/(2σ^2)} = (2πσ^2)^{−1/2} exp{−x^2/(2σ^2) + µx/σ^2 − µ^2/(2σ^2)}, so (Σ_1^n Xi, Σ_1^n Xi^2) is complete and sufficient.
(e) f(x; θ) = (1/θ)e^{−x/θ}, so Σ_1^n Xi is complete and sufficient.
(f) f(x; θ, κ) = {θ^κ Γ(κ)}^{−1}x^{κ−1}e^{−x/θ} = {θ^κ Γ(κ)}^{−1}x^{−1} exp{κ ln x − x/θ}, so (Σ_1^n Xi, Σ_1^n ln Xi) is complete and sufficient.
(g) f(x; θ1, θ2) = C(θ1, θ2)x^{θ1−1}(1 − x)^{θ2−1} = C(θ1, θ2)x^{−1}(1 − x)^{−1} exp{θ1 ln x + θ2 ln(1 − x)}, so (Σ_1^n ln Xi, Σ_1^n ln(1 − Xi)) is complete and sufficient.
(h) f(x; θ) = (β/θ^β)x^{β−1}e^{−x^β/θ^β}, so Σ_1^n Xi^β is complete and sufficient.
21. By 20(a), S = Σ_1^n Xi is complete and sufficient for p, so any function of this statistic that is unbiased for p(1 − p) or p^2 will work. Now E[S] = np, Var(S) = np(1 − p), and E[S^2] = np(1 − p) + (np)^2.
(a) (S^2 − S)/[n(n − 1)] works because its expectation is (−np^2 + n^2p^2)/[n(n − 1)] = p^2.
(b) S/n minus the answer to (a) works, and
S/n − (S^2 − S)/[n(n − 1)] = (nS − S^2)/[n(n − 1)].
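Unbiasedness of both estimators can be checked exactly by summing against the binomial pmf; the particular n and p below are arbitrary.

```python
from math import comb

# E[g(S)] for S ~ BIN(n, p), computed by summing over the pmf
def expect(n, p, g):
    return sum(g(k) * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

n, p = 7, 0.3
# (S^2 - S)/[n(n-1)] is unbiased for p^2
est_p2 = expect(n, p, lambda k: (k * k - k) / (n * (n - 1)))
assert abs(est_p2 - p * p) < 1e-12

# (nS - S^2)/[n(n-1)] is unbiased for p(1 - p)
est_pq = expect(n, p, lambda k: (n * k - k * k) / (n * (n - 1)))
assert abs(est_pq - p * (1 - p)) < 1e-12
```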
22. By 33(g) of Chapter 9, [(n − 1)/n]^{Σ Xi} is unbiased for e^{−µ}. It is also a function of a complete sufficient statistic. Sufficiency of Σ_{i=1}^n Xi follows by the factorization theorem. Completeness follows by the REC property, or we can verify it directly. Notice that Σ_{i=1}^n Xi is Poisson(nµ). So we must show that if u is a function on the nonnegative integers such that E[u(Σ Xi)] = 0 for every µ, then u is the zero function. We're given that Σ_{j=0}^∞ u(j)e^{−nµ}(nµ)^j/j! = 0 for all µ > 0. But if a power series is 0 throughout its region of convergence, then all its coefficients are 0, hence u = 0 identically.
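The unbiasedness claim can be checked directly from the Poisson series E[t^S] = e^{λ(t−1)} with t = (n − 1)/n and λ = nµ; the (n, µ) pairs below are arbitrary.

```python
import math

# E[t^S] for S ~ POI(lam), summed term by term without large factorials:
# term_j = e^{-lam} (t*lam)^j / j!, updated by the ratio t*lam/j.
def expect_t_pow_S(t, lam, terms=200):
    term = math.exp(-lam)   # j = 0 term
    total = term
    for j in range(1, terms):
        term *= t * lam / j
        total += term
    return total

for n, mu in ((3, 0.7), (10, 1.5)):
    val = expect_t_pow_S((n - 1) / n, n * mu)
    assert math.isclose(val, math.exp(-mu), rel_tol=1e-9)
```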
Assignment 11. Math 5080-2, F 2015. Chapter 11, Exercises 1, 5, 7, 13.
1. (a) 19.3 ± 1.645 · 3/√16, or (18.1, 20.5).
(b) lower: 19.3 − 1.282 · 3/√16 = 18.34; upper: 19.3 + 1.282 · 3/√16 = 20.26.
(c) For x ± z_{1−α/2}σ/√n we have λ = 2z_{1−α/2}σ/√n, hence n = 4z_{1−α/2}^2 σ^2/λ^2. For the second question, this formula gives 4 · 1.645^2 · 9/2^2 = 24.35, so take n = 25.
(d) 19.3 ± 1.753√10.24/√16, or (17.9, 20.7).
(e) Use (11.3.6): (15(10.24)/32.8, 15(10.24)/4.6) = (4.68, 33.39).
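All the numbers in 1(a)–(e) can be reproduced in a few lines; the z-, t-, and chi-square values (1.645, 1.282, 1.753, 32.8, 4.6) are the table entries quoted above.

```python
import math

xbar, sigma, n = 19.3, 3.0, 16
half = 1.645 * sigma / math.sqrt(n)
a_lo, a_hi = xbar - half, xbar + half                      # (a)
assert abs(a_lo - 18.1) < 0.05 and abs(a_hi - 20.5) < 0.05
b_lo = xbar - 1.282 * sigma / math.sqrt(n)                 # (b)
b_hi = xbar + 1.282 * sigma / math.sqrt(n)
assert abs(b_lo - 18.34) < 0.005 and abs(b_hi - 20.26) < 0.005
n_req = 4 * 1.645**2 * 9 / 2**2                            # (c)
assert abs(n_req - 24.35) < 0.005 and math.ceil(n_req) == 25
s2 = 10.24
half_t = 1.753 * math.sqrt(s2) / math.sqrt(n)              # (d)
assert abs(xbar - half_t - 17.9) < 0.05 and abs(xbar + half_t - 20.7) < 0.05
e_lo, e_hi = 15 * s2 / 32.8, 15 * s2 / 4.6                 # (e)
assert abs(e_lo - 4.68) < 0.005 and abs(e_hi - 33.39) < 0.005
```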
5. (a) X1 has density e^{−(x−η)}, x > η; so X1:n has density ne^{−n(x−η)}, x > η; hence Q has density ne^{−nx}, x > 0. Thus, its distribution is EXP(1/n).
(b) P(x(1−γ)/2 < X1:n − η < x(1+γ)/2) = γ, so the confidence interval is (X1:n − x(1+γ)/2, X1:n − x(1−γ)/2). Now EXP(1/n) has cdf Fn(x) = 1 − e^{−nx}, and Fn(xα) = α implies e^{−nxα} = 1 − α, hence xα = −(1/n) ln(1 − α). Thus, the confidence interval becomes (X1:n + (1/n) ln((1 − γ)/2), X1:n + (1/n) ln((1 + γ)/2)).
(c) We assume the data Xi are from EXP(850, η). We can apply the results of (b) to
the observations Xi /850, which are EXP(1, η/850). We get a confidence interval for η/850,
and multiplying by 850, we have a confidence interval for η:
(850[162/850 + (1/19) ln((1 − 0.9)/2)], 850[162/850 + (1/19) ln((1 + 0.9)/2)]) = (27.98, 159.7).
The answer in the book seems to be wrong.
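Recomputing the interval in (c) with n = 19, X1:n = 162, scale 850, and γ = 0.9, all taken from the solution above.

```python
import math

n, x1n, scale, gamma = 19, 162.0, 850.0, 0.9
lo = scale * (x1n / scale + (1 / n) * math.log((1 - gamma) / 2))
hi = scale * (x1n / scale + (1 / n) * math.log((1 + gamma) / 2))
assert abs(lo - 27.98) < 0.05 and abs(hi - 159.7) < 0.05
```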
7. (a) If X is WEI(θ, 2), then X^2 is EXP(θ^2) by the transformation-of-densities theorem. Therefore Σ Xi^2 is GAM(θ^2, n), so Q = (2/θ^2) Σ Xi^2 is χ^2(2n).
(b) Get a confidence interval for θ^2, then take square roots:
(√(2 Σ Xi^2/χ^2_{(1+γ)/2}(2n)), √(2 Σ Xi^2/χ^2_{(1−γ)/2}(2n))).
(c) A lower confidence limit for θ is
√(2 Σ Xi^2/χ^2_γ(2n)),   (1)
so since e^{−(t/θ)^2} is an increasing function of θ, it suffices to substitute (1) into e^{−(t/θ)^2} for θ.
(d) We solve P(X > tp) = 1 − p for tp. From e^{−(tp/θ)^2} = 1 − p we get (tp/θ)^2 = −ln(1 − p) = |ln(1 − p)|, so tp = θ√|ln(1 − p)|. It suffices to substitute (1), with γ replaced by 1 − γ, for θ in the formula for tp.
13. (a) f(x; θ) = θ^{−κ}Γ(κ)^{−1}e^{(κ−1) ln x − x/θ}, so Σ Xi is sufficient for θ with κ known. Now Σ Xi is GAM(θ, nκ), hence (2/θ) Σ Xi is GAM(2, nκ) or χ^2(2nκ) if 2nκ is an integer. Therefore,
P(χ^2_{α/2}(2nκ) < (2/θ) Σ Xi < χ^2_{1−α/2}(2nκ)) = 1 − α,
or
P(2 Σ Xi/χ^2_{1−α/2}(2nκ) < θ < 2 Σ Xi/χ^2_{α/2}(2nκ)) = 1 − α.
(b) If X1 is GAM(1, κ), then 2X1 is GAM(2, κ) or χ^2(2κ) if 2κ is an integer. So
P(χ^2_{0.05}(2κ) < 2X1 < χ^2_{0.95}(2κ)) = 0.90.
Now if x1 = 10, we must solve the equations χ^2_{0.05}(2κ) = 20 and χ^2_{0.95}(2κ) = 20 by interpolation. Get 2κ from the table on page 604 and divide by 2 to get κ. The book gives (5.62, 15.94).
Assignment 12. Math 5080-2, F 2015. Chapter 11, Exercises 8, 10, 11, 15,
19.
8. (a) P(Xn:n < θ < 2Xn:n) = P(θ < 2Xn:n) = P(Xn:n > θ/2) = 1 − P(Xn:n ≤ θ/2) = 1 − P(X1 ≤ θ/2, . . . , Xn ≤ θ/2) = 1 − P(X1 ≤ θ/2)^n = 1 − (1/2)^n.
(b) 1 − α = P(Xn:n < θ < cXn:n) = P(θ < cXn:n) = P(Xn:n > θ/c) = 1 − P(Xn:n ≤ θ/c) = 1 − P(X1 ≤ θ/c, . . . , Xn ≤ θ/c) = 1 − P(X1 ≤ θ/c)^n = 1 − (1/c)^n, so α = (1/c)^n or c = α^{−1/n}.
10. (a) Use p̂ − z_{0.95}√(p̂(1 − p̂)/50) = 0.8 − 1.645√(0.8(1 − 0.8)/50) = 0.707.
(b) e^{−5/θ} > 0.707 is equivalent to θ > −5/ln(0.707) = 14.4. The answer is coincidentally the same as in Exercise 3.
11. Use p̂ ± z_{0.95}√(p̂(1 − p̂)/40) = 1/8 ± 1.645√((1/8)(7/8)/40) = 0.125 ± 0.086, which is [0.039, 0.211].
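The numbers in Exercises 10 and 11 check out numerically, with z0.95 = 1.645 as above.

```python
import math

# Exercise 10
p_hat, n = 0.8, 50
lower = p_hat - 1.645 * math.sqrt(p_hat * (1 - p_hat) / n)
assert abs(lower - 0.707) < 0.001                  # 10(a)
assert abs(-5 / math.log(lower) - 14.4) < 0.05     # 10(b): θ > −5/ln(0.707)

# Exercise 11
p_hat, n = 1 / 8, 40
half = 1.645 * math.sqrt(p_hat * (1 - p_hat) / n)
assert abs(half - 0.086) < 0.001
assert abs(p_hat - half - 0.039) < 0.001 and abs(p_hat + half - 0.211) < 0.001
```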
15. (a) To use this method, we need to know the distribution function of S1 = X1:n. By (6.5.5), S1 has CDF G(y) = 1 − (1 − F(y))^n, where F is the CDF of WEI(θ, β). Now F(x) = 1 − e^{−(x/θ)^β}, x > 0. Hence G(y) = 1 − e^{−n(y/θ)^β}. Now we need to find h1 and h2 such that G(h1(θ)) = α/2 and G(h2(θ)) = 1 − α/2. Solving, we get
h1(θ) = θ(−ln(1 − α/2)/n)^{1/β},   h2(θ) = θ(−ln(α/2)/n)^{1/β}.
If S1 lies between these two limits with probability 1 − α, then θ belongs to
(S1(−ln(α/2)/n)^{−1/β}, S1(−ln(1 − α/2)/n)^{−1/β})
with probability 1 − α.
(b) Y = X^β has density fY(y) = fX(y^{1/β})(1/β)y^{1/β−1} = (1/θ^β)e^{−y/θ^β}, y > 0, which is EXP(θ^β). Therefore, S2 is GAM(θ^β, n). Since we don't know an explicit form for the CDF of the gamma distribution, it is better to use the pivotal quantity method here. 2S2/θ^β is GAM(2, n) or χ^2(2n). Thus,
P(χ^2_{α/2}(2n) < 2S2/θ^β < χ^2_{1−α/2}(2n)) = 1 − α,
so
P([2S2/χ^2_{1−α/2}(2n)]^{1/β} < θ < [2S2/χ^2_{α/2}(2n)]^{1/β}) = 1 − α.
19. We know that Σ_{i=1}^{n1}(Xi − µ1)^2/σ1^2 and Σ_{j=1}^{n2}(Yj − µ2)^2/σ2^2 are independent χ^2(n1) and χ^2(n2), and their numerators are sufficient statistics. Their ratio (first over second) is a pivotal quantity for σ2^2/σ1^2. And if we adjust it by dividing by n1/n2, the pivotal quantity then has an F(n1, n2) distribution. Thus,
P(F_{α/2}(n1, n2) < [(1/n1) Σ_{i=1}^{n1}(Xi − µ1)^2]/[(1/n2) Σ_{j=1}^{n2}(Yj − µ2)^2] · σ2^2/σ1^2 < F_{1−α/2}(n1, n2)) = 1 − α,
implying that our confidence interval for σ2^2/σ1^2 is
(F_{α/2}(n1, n2) · [(1/n2) Σ_{j=1}^{n2}(Yj − µ2)^2]/[(1/n1) Σ_{i=1}^{n1}(Xi − µ1)^2], F_{1−α/2}(n1, n2) · [(1/n2) Σ_{j=1}^{n2}(Yj − µ2)^2]/[(1/n1) Σ_{i=1}^{n1}(Xi − µ1)^2]).