Problem Sheet VII
1. Let X be the random variable that describes the lifetime of a lamp. It is known that the distribution of X has density
\[ p(x; \theta) = \frac{1}{\theta^2}\, x e^{-x/\theta}, \qquad x > 0, \ \theta > 0. \]
X_1, X_2, ..., X_n are the lifetimes of n lamps.
(a) Calculate the maximum likelihood estimator of the parameter θ.
(b) In a random sample of 10 lamps, the following measurements are obtained:
1.2, 5.7, 6.01, 2.3, 8.4, 3.22, 2.8, 1.51, 7.2, 4.1
Estimate the expected value of the lifetime.
(c) Using the data from (b) and the given distribution, estimate P(X > 10).
2. Let X_1, ..., X_n be i.i.d. Poisson(λ), that is,
\[ P(X_i = k) = \frac{e^{-\lambda}\lambda^k}{k!}, \qquad k = 0, 1, 2, \ldots, \quad i = 1, \ldots, n. \]
Find the maximum likelihood estimator λ̂ of λ. Is it unbiased? Is it efficient? Is it consistent?
3. Let X_1, ..., X_n be i.i.d. with p.d.f.
\[ f(x; \theta) = \frac{1}{2}(1 + \theta x), \qquad x \in (-1, 1), \ \theta \in (-1, 1). \]
Find a consistent estimator of θ and show it is consistent.
4. Let X_1, ..., X_n be i.i.d. Bernoulli random variables with P(X_1 = 1) = p. Find the maximum likelihood estimator p̂_MLE of p. Show that it is consistent, efficient and sufficient.
5. (Optional) Let X_1, ..., X_n be i.i.d. samples drawn from a distribution with density
\[ f(x; \alpha) = \begin{cases} \alpha x^{\alpha - 1}, & 0 \le x \le 1; \\ 0, & \text{otherwise}. \end{cases} \]
Find the maximum likelihood estimator α̂ of α. Is it sufficient?
6. (Optional) Let X_1, ..., X_n be i.i.d. samples drawn from a distribution with density
\[ f(x; m, \alpha) = \begin{cases} \frac{m}{\alpha}\, x^{m-1} e^{-x^m/\alpha}, & x > 0; \\ 0, & \text{otherwise}. \end{cases} \]
Suppose that m is known. Find the maximum likelihood estimator for α and show it is sufficient
for α.
Solutions VII
1. (a) The likelihood and log-likelihood are
\[ L(\theta; x_1, \ldots, x_n) = \frac{1}{\theta^{2n}}\, e^{-\frac{1}{\theta}\sum_{i=1}^n x_i} \prod_{i=1}^n x_i, \]
\[ l(\theta) = -2n \ln\theta - \frac{1}{\theta}\sum_{i=1}^n x_i + \ln \prod_{i=1}^n x_i, \]
\[ \frac{d}{d\theta}\, l(\theta) = -\frac{2n}{\theta} + \frac{1}{\theta^2}\sum_{i=1}^n x_i = 0 \implies \theta = \frac{\bar{x}}{2}. \]
Also, at θ = x̄/2,
\[ \frac{d^2}{d\theta^2}\, l(\theta) = -\frac{8n}{\bar{x}^2} < 0, \]
so this critical point is a maximum. Hence the MLE of θ is θ̂ = x̄/2.
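As a numerical sanity check, here is a minimal sketch (assuming NumPy and SciPy are available) that maximises this log-likelihood over θ for the sample in part (b); the optimiser should land on x̄/2:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sample from part (b)
x = np.array([1.2, 5.7, 6.01, 2.3, 8.4, 3.22, 2.8, 1.51, 7.2, 4.1])
n = len(x)

def neg_log_lik(theta):
    # l(theta) = -2n ln(theta) - (1/theta) * sum(x_i) + ln(prod(x_i))
    return -(-2 * n * np.log(theta) - x.sum() / theta + np.log(x).sum())

res = minimize_scalar(neg_log_lik, bounds=(0.01, 20), method="bounded")
print(res.x, x.mean() / 2)  # both approximately 2.122
```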
(b)
\[ E(X) = \int_0^\infty x \cdot \frac{1}{\theta^2}\, x e^{-x/\theta}\, dx = \frac{1}{\theta^2} \int_0^\infty x^2 e^{-x/\theta}\, dx \overset{y = x/\theta}{=} \theta \int_0^\infty y^{3-1} e^{-y}\, dy = \theta\, \Gamma(3) = 2\theta, \]
because Γ(3) = 2! = 2. So the MLE of E(X) is 2θ̂ = 2 · (x̄/2) = x̄ = 4.244.
(c)
\[ P(X > 10) = \int_{10}^\infty \frac{1}{\theta^2}\, x e^{-x/\theta}\, dx \overset{y = x/\theta}{=} \int_{10/\theta}^\infty y e^{-y}\, dy = \Big(\frac{10}{\theta} + 1\Big) e^{-10/\theta}. \]
For θ̂ = x̄/2 = 2.122:
\[ \hat{P}(X > 10) = \Big(\frac{10}{\hat\theta} + 1\Big) e^{-10/\hat\theta} = 0.051. \]
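The numbers in (b) and (c) are easy to reproduce; a minimal sketch (assuming NumPy and SciPy) evaluating the closed forms and cross-checking the tail probability by numerical integration:

```python
import numpy as np
from scipy.integrate import quad

x = np.array([1.2, 5.7, 6.01, 2.3, 8.4, 3.22, 2.8, 1.51, 7.2, 4.1])
theta_hat = x.mean() / 2            # MLE of theta: 2.122
mean_hat = 2 * theta_hat            # MLE of E(X): 4.244

# Closed form: (10/theta + 1) * exp(-10/theta)
p_tail = (10 / theta_hat + 1) * np.exp(-10 / theta_hat)

# Cross-check by integrating the density directly
density = lambda t: t * np.exp(-t / theta_hat) / theta_hat**2
p_quad, _ = quad(density, 10, np.inf)

print(mean_hat, p_tail, p_quad)     # 4.244, ~0.051, ~0.051
```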
2. We compute
\[ L(\lambda; x_1, \ldots, x_n) = e^{-n\lambda} \lambda^{n\bar{x}} / (x_1! \cdots x_n!), \]
\[ l(\lambda; x_1, \ldots, x_n) = -n\lambda + n\bar{x} \log(\lambda) + \text{constants}, \]
\[ \frac{\partial}{\partial\lambda}\, l(\lambda; x_1, \ldots, x_n) = -n + \frac{n\bar{x}}{\lambda}; \qquad \frac{\partial^2}{\partial\lambda^2}\, l(\lambda; x_1, \ldots, x_n) = -\frac{n\bar{x}}{\lambda^2}. \]
Equating the first derivative to 0 we find
\[ \hat\lambda = \bar{x}. \]
This is of course unbiased, since E X_i = λ, and its variance is
\[ \mathrm{var}(\hat\lambda) = \frac{\mathrm{var}(X_1)}{n} = \frac{\lambda}{n}, \]
since the variance of a Poisson(λ) random variable is also λ.
To see why the variance is indeed λ:
\[ E(X^2) = \sum_{k=0}^\infty k^2 \frac{\lambda^k e^{-\lambda}}{k!} = \sum_{k=1}^\infty k\, \frac{\lambda^k e^{-\lambda}}{(k-1)!} = \sum_{k=1}^\infty (k - 1 + 1) \frac{\lambda^k e^{-\lambda}}{(k-1)!} \]
\[ = \sum_{k=2}^\infty (k-1) \frac{\lambda^k e^{-\lambda}}{(k-1)!} + \sum_{k=1}^\infty \frac{\lambda^k e^{-\lambda}}{(k-1)!} = \sum_{k=2}^\infty \frac{\lambda^k e^{-\lambda}}{(k-2)!} + \sum_{k=0}^\infty k\, \frac{\lambda^k e^{-\lambda}}{k!} \]
\[ = \lambda^2 \sum_{k=0}^\infty \frac{\lambda^k e^{-\lambda}}{k!} + E X = \lambda^2 + \lambda, \]
so
\[ \mathrm{var}(X) = \lambda^2 + \lambda - \lambda^2 = \lambda. \]
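A quick numeric check of the identity E(X²) = λ² + λ (a sketch assuming NumPy and SciPy; λ = 3 is an arbitrary test value):

```python
import numpy as np
from scipy.stats import poisson

lam = 3.0                           # arbitrary test value
ks = np.arange(0, 200)
pmf = poisson.pmf(ks, lam)          # P(X = k) for k = 0, ..., 199
second_moment = (ks**2 * pmf).sum()
print(second_moment, lam**2 + lam)  # both 12.0
```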
To find the Fisher information we use the second-derivative formula:
\[ I_n(\lambda) = -n\, E\left[\frac{\partial^2}{\partial\lambda^2}\, l(\lambda; X_1)\right] = n\, E\left[\frac{X_1}{\lambda^2}\right] = \frac{n}{\lambda}, \]
where l(λ; X_1) is the log-likelihood of a single observation. Since var(λ̂) = λ/n = 1/I_n(λ), the estimator attains the Cramér–Rao bound and is therefore efficient.
Consistency follows easily from the law of large numbers, since λ̂ = X̄ and E X_1 = λ.
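A small Monte Carlo sketch (assuming NumPy; λ = 3, n = 50 and the replication count are arbitrary choices) illustrating that λ̂ = X̄ is unbiased with variance close to the Cramér–Rao bound λ/n:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 3.0, 50, 100_000

# reps independent samples of size n; the MLE is the sample mean
samples = rng.poisson(lam, size=(reps, n))
lam_hat = samples.mean(axis=1)

print(lam_hat.mean())          # ~3.0 (unbiased)
print(lam_hat.var(), lam / n)  # both ~0.06 (attains the CRLB lambda/n)
```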
3. Here maximum likelihood can run into problems. However, a quick calculation shows that
\[ E X = \frac{1}{2} \int_{-1}^1 x(1 + \theta x)\, dx = \frac{1}{2} \int_{-1}^1 \theta x^2\, dx = \frac{\theta}{2} \left[\frac{x^3}{3}\right]_{-1}^1 = \frac{\theta}{2} \cdot \frac{2}{3} = \frac{\theta}{3}. \]
Therefore θ̂ = 3X̄ is consistent, since X̄ is consistent for θ/3 by the law of large numbers.
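A simulation sketch of this consistency (assuming NumPy; θ = 0.5 and the proposal count are arbitrary, and rejection sampling from f(x; θ) = (1 + θx)/2 is one convenient choice, not prescribed by the problem):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n_prop = 0.5, 300_000

# Rejection sampling from f(x; theta) = (1 + theta*x)/2 on (-1, 1):
# propose x ~ Uniform(-1, 1), accept with probability (1 + theta*x)/(1 + |theta|).
x = rng.uniform(-1, 1, size=n_prop)
u = rng.uniform(0, 1, size=n_prop)
sample = x[u <= (1 + theta * x) / (1 + abs(theta))]

theta_hat = 3 * sample.mean()   # the estimator from the solution
print(len(sample), theta_hat)   # ~200000 accepted, theta_hat ~ 0.5
```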
4. The likelihood is
\[ L(p; x_1, \ldots, x_n) = \prod_{i=1}^n p^{x_i} (1-p)^{1-x_i}, \]
\[ l(p; x_1, \ldots, x_n) = \sum_{i=1}^n x_i \log p + (1 - x_i) \log(1-p); \]
\[ \frac{\partial}{\partial p}\, l(p; x_1, \ldots, x_n) = \sum_{i=1}^n \left( \frac{x_i}{p} - \frac{1 - x_i}{1-p} \right); \]
\[ \frac{\partial^2}{\partial p^2}\, l(p; x_1, \ldots, x_n) = -\sum_{i=1}^n \left( \frac{x_i}{p^2} + \frac{1 - x_i}{(1-p)^2} \right). \]
To find the MLE we set
\[ \frac{\partial}{\partial p}\, l(p; x_1, \ldots, x_n) = \sum_{i=1}^n \left( \frac{x_i}{p} - \frac{1 - x_i}{1-p} \right) = 0, \]
from which we find that
\[ \frac{n\bar{x}}{p} = \frac{n - n\bar{x}}{1-p}, \]
which is solved by p̂ = x̄.
This is of course consistent for p by the LLN, since
\[ \bar{X} = \frac{X_1 + \cdots + X_n}{n} \]
and E[X_1] = p.
To see it is efficient:
\[ I_n(p) = -n\, E\left[\frac{\partial^2}{\partial p^2}\, l(p; X_1)\right] = n\, E\left[\frac{X_1}{p^2} + \frac{1 - X_1}{(1-p)^2}\right] = n \left( \frac{E X_1}{p^2} + \frac{1 - E X_1}{(1-p)^2} \right) \]
\[ = n \left( \frac{1}{p} + \frac{1}{1-p} \right) = n\, \frac{1 - p + p}{p(1-p)} = \frac{n}{p(1-p)}. \]
On the other hand the variance is
\[ \mathrm{var}(\bar{X}) = \frac{\mathrm{var}(X_1)}{n} = \frac{p(1-p)}{n} = \frac{1}{I_n(p)}, \]
so it is efficient. We know efficient estimators are sufficient, so we are done.
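A Monte Carlo sketch (assuming NumPy; p = 0.3, n = 100 are arbitrary) checking that var(p̂) matches 1/I_n(p) = p(1−p)/n:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, reps = 0.3, 100, 100_000

samples = rng.binomial(1, p, size=(reps, n))  # Bernoulli(p) draws
p_hat = samples.mean(axis=1)                  # MLE for each replication

print(p_hat.mean())                  # ~0.3 (consistent/unbiased)
print(p_hat.var(), p * (1 - p) / n)  # both ~0.0021 (attains 1/I_n(p))
```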
5. We compute
\[ L(\alpha; x_1, \ldots, x_n) = \alpha^n (x_1 \cdots x_n)^{\alpha - 1}, \]
\[ l(\alpha; x_1, \ldots, x_n) = n \log(\alpha) + (\alpha - 1) \sum_{i=1}^n \log(x_i), \]
\[ \frac{\partial}{\partial\alpha}\, l(\alpha; x_1, \ldots, x_n) = \frac{n}{\alpha} + \sum_{i=1}^n \log(x_i); \qquad \frac{\partial^2}{\partial\alpha^2}\, l(\alpha; x_1, \ldots, x_n) = -\frac{n}{\alpha^2} < 0. \]
Equating the first derivative to 0 we find
\[ \hat\alpha = -\frac{n}{\sum_{i=1}^n \log(x_i)}, \]
which is a maximum since the second derivative is always negative.
To show sufficiency, notice first that
\[ \exp\left(-\frac{n}{\hat\alpha}\right) = \exp\left(-\frac{n}{-n/\log(x_1 \cdots x_n)}\right) = \exp(\log(x_1 \cdots x_n)) = x_1 \cdots x_n. \]
Therefore
\[ f(x_1, \ldots, x_n; \alpha) = \alpha^n \exp\left(-\frac{n}{\hat\alpha}\right)^{\alpha - 1}, \]
which is a function of the parameter and the statistic α̂ alone, so by the factorisation theorem α̂ is sufficient.
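A quick sketch (assuming NumPy; α = 2.5 is an arbitrary test value) sampling from this density by inversion, X = U^{1/α} since F(x) = x^α on [0, 1], and checking that α̂ recovers α:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, n = 2.5, 100_000

u = rng.uniform(0, 1, size=n)
x = u ** (1 / alpha)              # inverse-CDF sampling: F(x) = x**alpha

alpha_hat = -n / np.log(x).sum()  # MLE from the derivation above
print(alpha_hat)                  # ~2.5
```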
6. We compute
\[ L(\alpha; x_1, \ldots, x_n) = \frac{1}{\alpha^n}\, m^n (x_1 \cdots x_n)^{m-1}\, e^{-\sum_i x_i^m / \alpha}, \]
\[ l(\alpha; x_1, \ldots, x_n) = -n \log(\alpha) - \frac{1}{\alpha} \sum_{i=1}^n x_i^m + \text{constants}; \]
\[ \frac{\partial}{\partial\alpha}\, l(\alpha; x_1, \ldots, x_n) = -\frac{n}{\alpha} + \frac{1}{\alpha^2} \sum_{i=1}^n x_i^m. \]
Equating the derivative to 0 we find that
\[ \hat\alpha = \frac{1}{n} \sum_{i=1}^n x_i^m. \]
To see it is sufficient,
\[ f(x_1, \ldots, x_n; \alpha) = \alpha^{-n} e^{-n\hat\alpha/\alpha} \cdot m^n (x_1 \cdots x_n)^{m-1}, \]
which is the desired factorisation: the first factor depends on the data only through α̂, and the second does not involve α.
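A final sketch (assuming NumPy; m = 2, α = 1.5 are arbitrary test values) using the fact that under this density X^m is exponential with mean α, so sampling is easy and α̂ is just the sample mean of the x_i^m:

```python
import numpy as np

rng = np.random.default_rng(4)
m, alpha, n = 2.0, 1.5, 100_000

# If X has density (m/alpha) x^(m-1) exp(-x^m / alpha), then X^m ~ Exp(mean alpha),
# so we can sample X as an exponential draw raised to the power 1/m.
x = rng.exponential(scale=alpha, size=n) ** (1 / m)

alpha_hat = (x ** m).mean()  # MLE from the derivation above
print(alpha_hat)             # ~1.5
```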