Partial solutions to assignment 7
Math 5080-1, Spring 2011
p. 328, #33. (a) Here τ(µ) = µ; therefore, τ′(µ) = 1. Also, because I{X1 = 0, 1, . . .} = 1 with probability one, it follows that
\[
f(X_1, \mu) = \frac{e^{-\mu} \mu^{X_1}}{X_1!}.
\]
Therefore,
\[
\ln f(X_1, \mu) = -\mu + X_1 \ln \mu - \ln(X_1!),
\]
and hence
\[
\frac{d}{d\mu} \ln f(X_1, \mu) = -1 + \frac{X_1}{\mu}.
\]
As is always the case, the expectation of the preceding is zero. Therefore,
\[
E\left[\left(\frac{d}{d\mu} \ln f(X_1, \mu)\right)^{\!2}\right]
= \mathrm{Var}\!\left(\frac{d}{d\mu} \ln f(X_1, \mu)\right)
= \mathrm{Var}\!\left(-1 + \frac{X_1}{\mu}\right)
= \mathrm{Var}\!\left(\frac{X_1}{\mu}\right)
= \frac{\mathrm{Var}(X_1)}{\mu^2}
= \frac{1}{\mu}.
\]
Therefore, the CRLB gives the following: If T is unbiased for µ, then
\[
\mathrm{Var}(T) \ge \frac{1}{n/\mu} = \frac{\mu}{n}.
\]
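As a sanity check (not part of the assigned solution), the variance of the score can be approximated by simulation; the following is a minimal Python sketch, with µ and the number of replications chosen arbitrarily for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    mu = 3.0  # arbitrary illustrative value of the Poisson mean

    # Simulate many copies of X1 ~ Poisson(mu) and evaluate the score
    # d/dmu ln f(X1, mu) = -1 + X1/mu at the true parameter.
    x = rng.poisson(mu, size=1_000_000)
    score = -1.0 + x / mu

    print(score.mean())  # close to 0, as claimed above
    print(score.var())   # close to 1/mu, so the information is n/mu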
(b) Now τ(µ) = θ = e^{−µ}. Therefore, the numerator in the CRLB changes from one [in part (a)] to the square of τ′(µ) = −e^{−µ}, which is e^{−2µ}. Therefore, the CRLB gives the following: If T is unbiased for e^{−µ}, then
\[
\mathrm{Var}(T) \ge \frac{e^{-2\mu}}{n/\mu} = \frac{\mu e^{-2\mu}}{n}.
\]
(c) Var(X̄) = µ/n, which is the CRLB. Therefore, a UMVUE for µ is X̄.
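As an illustration only, here is a quick Monte Carlo check that the sample mean attains the bound from (a); the values of µ and n are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)
    mu, n = 3.0, 25  # arbitrary illustrative values

    # Variance of X-bar across many simulated samples, versus the CRLB mu/n.
    xbar = rng.poisson(mu, size=(200_000, n)).mean(axis=1)
    print(xbar.var())  # Monte Carlo estimate of Var(X-bar)
    print(mu / n)      # the CRLB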
(d) By the invariance property, θ̂ = e^{−µ̂}, where µ̂ is the MLE for µ. Therefore, we compute µ̂ next. The likelihood function is
\[
L(\mu) = \frac{e^{-n\mu} \mu^{\sum_{i=1}^n X_i}}{\prod_{j=1}^n (X_j!)}
= \frac{e^{-n\mu} \mu^{n\bar{X}}}{\prod_{j=1}^n (X_j!)};
\]
and therefore,
\[
\ln L(\mu) = -n\mu + n\bar{X} \ln \mu - \ln\!\left(\prod_{j=1}^n (X_j!)\right).
\]
We differentiate to find that
\[
\frac{d}{d\mu} \ln L(\mu) = -n + \frac{n\bar{X}}{\mu}.
\]
Set this equal to zero to find that µ̂ = X̄ is the MLE for µ. Therefore, θ̂ = e^{−X̄} is the MLE for θ = e^{−µ}.
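To see this computation in action, one can maximize the Poisson log-likelihood numerically and compare against X̄; a sketch under the assumption that SciPy is available, with an arbitrary simulated sample.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(2)
    x = rng.poisson(3.0, size=50)  # one illustrative Poisson sample

    # Negative log-likelihood, dropping the additive constant ln(prod Xj!),
    # which does not depend on mu.
    def neg_log_lik(mu):
        return len(x) * mu - x.sum() * np.log(mu)

    res = minimize_scalar(neg_log_lik, bounds=(1e-8, 20.0), method="bounded")
    print(res.x)     # numerical MLE
    print(x.mean())  # X-bar; the two agree to optimizer tolerance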
(e) Let S := X1 + · · · + Xn and note that θ̂ = e^{−S/n}. Because S ∼ Poisson(nµ), we can deduce that
\[
E(\hat\theta) = E\left(e^{-S/n}\right) = M_S(-1/n),
\]
where M_S(t) = E[e^{tS}] is the MGF for S. The table in your text tells you that
\[
M_S(t) = e^{n\mu[e^t - 1]},
\]
and hence
\[
E(\hat\theta) = e^{n\mu[e^{-1/n} - 1]}.
\]
Therefore,
\[
\mathrm{Bias}(\hat\theta) = e^{n\mu[e^{-1/n} - 1]} - e^{-\mu}
= e^{-\mu}\left[e^{n\mu\left[e^{-1/n} - 1 + \frac{1}{n}\right]} - 1\right].
\]
Therefore, θ̂ is biased.
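This exact bias formula is easy to confirm by simulation; a minimal sketch, again with µ and n chosen arbitrarily.

    import numpy as np

    rng = np.random.default_rng(3)
    mu, n = 3.0, 10  # arbitrary illustrative values

    # Monte Carlo estimate of E[exp(-S/n)] with S ~ Poisson(n*mu),
    # versus the exact value M_S(-1/n) derived above.
    s = rng.poisson(n * mu, size=1_000_000)
    print(np.exp(-s / n).mean())                  # Monte Carlo mean
    print(np.exp(n * mu * (np.exp(-1 / n) - 1)))  # exact E(theta-hat)
    print(np.exp(-mu))                            # target e^{-mu}, which is smaller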
(f) By Taylor's expansion,
\[
e^{-1/n} = 1 - \frac{1}{n} + \frac{1}{2n^2} \pm \cdots.
\]
Therefore,
\[
e^{-1/n} - 1 + \frac{1}{n} \approx \frac{1}{2n^2}
\]
as n → ∞.
Plug this into the bias formula to find that
\[
\mathrm{Bias}(\hat\theta) \approx e^{-\mu}\left[e^{\mu/(2n)} - 1\right].
\]
Another round of Taylor expansion yields
\[
e^{\mu/(2n)} \approx 1 + \frac{\mu}{2n}.
\]
Therefore,
\[
\mathrm{Bias}(\hat\theta) \approx \frac{\mu e^{-\mu}}{2n} \to 0
\]
as n → ∞.
This shows that θ̂ is asymptotically unbiased.
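The quality of this approximation can be checked numerically against the exact bias from part (e); a short sketch with an arbitrary µ.

    import numpy as np

    mu = 3.0  # arbitrary illustrative value
    for n in [10, 100, 1000]:
        exact = np.exp(n * mu * (np.exp(-1 / n) - 1)) - np.exp(-mu)
        approx = mu * np.exp(-mu) / (2 * n)
        print(n, exact, approx)  # the two columns converge as n grows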
(g) Let S := X1 + · · · + Xn and α := (n − 1)/n. Because
\[
\tilde\theta = \alpha^S = e^{S \ln \alpha},
\]
it follows that
\[
E(\tilde\theta) = M_S(\ln \alpha) = e^{n\mu[e^{\ln\alpha} - 1]} = e^{n\mu[\alpha - 1]} = e^{-\mu}.
\]
Therefore, θ̃ is unbiased for θ = e^{−µ}.
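A simulation check of this unbiasedness claim, with µ and n again chosen arbitrarily:

    import numpy as np

    rng = np.random.default_rng(4)
    mu, n = 3.0, 10  # arbitrary illustrative values
    alpha = (n - 1) / n

    # Monte Carlo estimate of E[alpha^S] with S ~ Poisson(n*mu).
    s = rng.poisson(n * mu, size=1_000_000)
    print((alpha ** s).mean())  # Monte Carlo mean of theta-tilde
    print(np.exp(-mu))          # target e^{-mu}; the two agree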
(h) First of all,
\[
E\left([\tilde\theta]^2\right) = E\left(\alpha^{2S}\right) = E\left(e^{2S\ln\alpha}\right) = M_S(2\ln\alpha) = e^{n\mu[e^{2\ln\alpha} - 1]}.
\]
But
\[
e^{2\ln\alpha} = \alpha^2 = \left(\frac{n-1}{n}\right)^{\!2} = \frac{(n-1)^2}{n^2} = \frac{n^2 - 2n + 1}{n^2} = 1 - \frac{2}{n} + \frac{1}{n^2}.
\]
Therefore,
\[
E\left([\tilde\theta]^2\right) = e^{-2\mu + (\mu/n)}.
\]
This and the computation of the expectation in (g) together tell us that
\[
\mathrm{Var}(\tilde\theta) = e^{-2\mu + (\mu/n)} - e^{-2\mu} = e^{-2\mu}\left[e^{\mu/n} - 1\right] \approx \frac{\mu e^{-2\mu}}{n}
\]
as n → ∞, after another round of Taylor expansion. This and the answer to (b) together show that θ̃ asymptotically [as n → ∞] achieves the minimum possible variance among all unbiased estimators. Therefore, in a sense, θ̃ is “asymptotically UMVUE” for θ.
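To illustrate, one can compare the exact variance e^{-2µ}[e^{µ/n} − 1] with the CRLB from part (b); a final sketch with an arbitrary µ.

    import numpy as np

    mu = 3.0  # arbitrary illustrative value
    for n in [10, 100, 1000]:
        exact_var = np.exp(-2 * mu) * (np.exp(mu / n) - 1)
        crlb = mu * np.exp(-2 * mu) / n
        print(n, exact_var, crlb)  # the ratio tends to 1 as n grows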