A final note on Bayes and Minimax Estimators

We have learned that minimax and Bayes principles are two ways of finding estimators based on
risk functions. There are also some interesting connections between the two types of estimators.
As an example, the following result shows how to obtain a minimax estimator (which is generally hard to find directly) from
a Bayes estimator (which can often be found more easily).
Theorem: For a given loss function L(t, θ), if T∗ is a Bayes estimator with respect
to some prior and the risk of T∗ is constant (i.e., R_{T∗}(θ) = c for all θ ∈ Θ), then
T∗ is a minimax estimator under the same loss function.¹
Example: For X1, . . . , Xn iid Bernoulli(θ), θ ∈ Θ = (0, 1), find the minimax
estimator of θ under the loss function L(t, θ) = (θ − t)²/{θ(1 − θ)}.
Solution: For this loss function, the Bayes estimator of θ with respect to the
uniform(0,1) prior on Θ is given by T0 = X̄n (see Homework 3; a sketch is given
after this example). Also note that, for any θ ∈ (0, 1),
R_{T0}(θ) = E_θ[ (θ − X̄n)² / {θ(1 − θ)} ] = MSE_θ(X̄n) / {θ(1 − θ)}
          = Var_θ(X̄n) / {θ(1 − θ)}      since X̄n is unbiased
          = 1/n                          since Var_θ(X1) = θ(1 − θ) = n Var_θ(X̄n)
Hence, T0 has constant risk so that, by the theorem, T0 = X̄n is also a minimax
estimator of θ.
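
Aside: here is one way to see the Homework 3 fact; this is a sketch reconstructed here, not a quotation of the homework. For a weighted squared-error loss L(t, θ) = w(θ)(θ − t)², minimizing the posterior expected loss ∫ w(θ)(θ − t)² π(θ | X) dθ over t (set the derivative in t to zero) gives the weighted posterior mean

T∗ = E[w(θ) θ | X] / E[w(θ) | X].

Under the uniform(0,1) prior, the posterior density is proportional to θ^s (1 − θ)^{n−s} with s = X1 + · · · + Xn. Taking w(θ) = 1/{θ(1 − θ)}, both expectations reduce to Beta integrals with the same proportionality constant,

E[w(θ) θ | X] ∝ B(s + 1, n − s)   and   E[w(θ) | X] ∝ B(s, n − s),

so that, provided 0 < s < n so the integrals converge,

T∗ = B(s + 1, n − s) / B(s, n − s) = s/n = X̄n.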
¹ Proof: Suppose that T∗ is a Bayes estimator with respect to a prior pdf π(θ) on Θ. Then the Bayes risk of
T∗ is BR_{T∗} = ∫_Θ R_{T∗}(θ) π(θ) dθ = c ∫_Θ π(θ) dθ = c, using that R_{T∗}(θ) = c is constant and ∫_Θ π(θ) dθ = 1. Since
T∗ is the Bayes estimator, it has minimal Bayes risk by definition so that, for any other estimator T, we have

c = BR_{T∗} ≤ BR_T = ∫_Θ R_T(θ) π(θ) dθ ≤ [max_{θ∈Θ} R_T(θ)] ∫_Θ π(θ) dθ = max_{θ∈Θ} R_T(θ),

or c ≤ max_{θ∈Θ} R_T(θ) for any estimator T. Since the constant-risk assumption gives max_{θ∈Θ} R_{T∗}(θ) = c, the worst-case risk of T∗ is no larger than that of any other estimator. So it must be that

max_{θ∈Θ} R_{T∗}(θ) = c = min_T max_{θ∈Θ} R_T(θ),

implying that T∗ is minimax.
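
As a quick numerical sanity check on the example (not part of the original notes), the Python sketch below estimates R_{T0}(θ) by Monte Carlo at a few values of θ; the sample size n = 20, the θ grid, and the replication count are arbitrary choices. Each estimate should come out close to 1/n = 0.05.

    import numpy as np

    rng = np.random.default_rng(0)
    n, reps = 20, 200_000  # arbitrary sample size and number of Monte Carlo replications

    for theta in (0.1, 0.3, 0.5, 0.7, 0.9):
        # n * X-bar_n ~ Binomial(n, theta), so draw the sample mean directly
        x_bar = rng.binomial(n, theta, size=reps) / n
        # weighted squared-error loss L(t, theta) = (theta - t)^2 / {theta(1 - theta)}
        loss = (theta - x_bar) ** 2 / (theta * (1 - theta))
        print(f"theta = {theta:.1f}: Monte Carlo risk = {loss.mean():.4f}  (1/n = {1 / n:.4f})")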