Rao-Blackwell Theorem & Completeness Definition
Rao-Blackwell Theorem. Let f(x|θ) = f(x1, ..., xn|θ) be the joint pdf/pmf of (X1, ..., Xn) and S = (S1, S2, ..., Sk) be sufficient for θ = (θ1, ..., θp) ∈ Θ ⊂ R^p. Also let T be any UE of a real-valued γ(θ) and T* = E(T|S) (which does not depend on θ since S is sufficient). Then,
1. T* is a function of S and an UE of γ(θ).
2. Var_θ(T*) ≤ Var_θ(T), for all θ ∈ Θ.
3. If, for some θ0 ∈ Θ, it holds that Var_θ0(T*) = Var_θ0(T), then P_θ0(T = T*) = 1.
Example: Suppose X1, X2 are iid Exponential(θ).
(i) Note T = X1 is an UE of θ and Var_θ(T) = Var_θ(X1) = θ².
(ii) Also note that S = X1 + X2 is sufficient by the factorization theorem, and S is Gamma(2, θ)-distributed.
Verify that T* = E(T|S) is a function of S, doesn't depend on θ, and is unbiased.
Compare Var_θ(T) and Var_θ(T*).
Solution: Given S = s > 0, first find the conditional pdf of X1|S = s as

f(x1|S = s) = f_{X1,S}(x1, s|θ) / f_S(s|θ)
            = f_{X1,X2}(x1, s − x1|θ) / f_S(s|θ)
            = (θ^{-2} e^{-x1/θ} e^{-(s−x1)/θ}) / (θ^{-2} s e^{-s/θ})
            = 1/s,  if 0 < x1 < s,  and 0 otherwise.
So, given S = s > 0, the distribution of X1 is Uniform(0, s).
Hence, the conditional expectation is E(X1|S = s) = s/2.
Now, treating S as a random variable, we have T* = E(X1|S) = S/2 = (X1 + X2)/2, which is a function of S, does not depend on θ, and is unbiased since E_θ(T*) = E_θ(S)/2 = (2θ)/2 = θ. Moreover, Var_θ(T*) = Var_θ(S)/4 = 2θ²/4 = θ²/2 < θ² = Var_θ(T).
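This comparison can also be checked numerically. Below is a minimal Monte Carlo sketch (not part of the original notes; it assumes NumPy and illustrative values for θ and the number of replications): it simulates pairs (X1, X2), forms T = X1 and T* = S/2, and should show both estimators averaging near θ while Var(T*) ≈ θ²/2 is about half of Var(T) ≈ θ².

```python
import numpy as np

# Monte Carlo sketch (illustrative only): X1, X2 iid Exponential(theta),
# crude UE T = X1 versus Rao-Blackwellized T* = E(X1 | S) = S/2, S = X1 + X2.
rng = np.random.default_rng(seed=0)
theta = 2.0          # true Exponential mean (assumed value for the demo)
n_rep = 200_000      # number of simulated samples of size 2

x = rng.exponential(scale=theta, size=(n_rep, 2))
T = x[:, 0]                    # crude unbiased estimator T = X1
T_star = x.sum(axis=1) / 2.0   # Rao-Blackwellized estimator T* = S/2

print("mean of T  :", T.mean(), "  var of T  :", T.var())          # approx theta, theta^2
print("mean of T* :", T_star.mean(), "  var of T* :", T_star.var())  # approx theta, theta^2/2
```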
Notes
• Given an UE T of γ(θ), the theorem shows how to obtain an UE T* that is at least as good as T in terms of variance (in fact, better than T unless T = T* with probability 1 for all θ). That is, you can "Rao-Blackwellize" an UE T by conditioning on a sufficient statistic S; see the worked illustration after these notes.
• For finding an UMVUE of γ(θ) we may restrict attention to the class of estimators that are functions of a sufficient statistic.
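As a further illustration of Rao-Blackwellization, continuing the exponential example above (this worked case is not in the original notes): to estimate γ(θ) = P_θ(X1 > t) = e^{−t/θ} for a fixed t > 0, the indicator T = 1{X1 > t} is an UE of γ(θ). Conditioning on the sufficient statistic S = X1 + X2 and using that X1|S = s is Uniform(0, s),

T* = E(1{X1 > t} | S) = P(X1 > t | S) = (S − t)/S  if S > t,  and 0 otherwise,

which is again an UE of e^{−t/θ} and, by the theorem, has variance no larger than that of the indicator.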
Proof of Theorem: For any two r.v.s X, Y, it holds that E[E(X|Y)] = E(X) and Var(X) = E[Var(X|Y)] + Var[E(X|Y)]. Hence,

(i) E_θ(T*) = E_θ[E(T|S)] = E_θ(T) = γ(θ) for all θ.
(ii) Since S is sufficient, the conditional means and variances do not depend on θ in the following (hence no subscript θ on the conditional expectations):

Var_θ(T) − Var_θ(T*) = Var_θ(E(T|S)) + E_θ(Var(T|S)) − Var_θ(T*) = E_θ(Var(T|S)) ≥ 0

for all θ, since E(T|S) = T* and so the first and third terms cancel.
(iii) Suppose for some θ0 ∈ Θ, Var_θ0(T*) = Var_θ0(T) holds. Then by part (ii) above, it holds that E_θ0[Var(T|S)] = 0 ⇒ Var(T|S) = 0 with probability 1 under θ0 ⇒ T = E(T|S) with probability 1 under θ0 (i.e., conditioned on S, there is no variability in T, so T must equal its conditional mean).
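As a concrete check of the decomposition used in part (ii), applied to the exponential example above (this computation is not in the original notes): there T = X1, S = X1 + X2, and X1|S = s is Uniform(0, s), so Var(T|S) = S²/12. Hence

E_θ(Var(T|S)) = E_θ(S²)/12 = (Var_θ(S) + (E_θ S)²)/12 = (2θ² + 4θ²)/12 = θ²/2,
Var_θ(E(T|S)) = Var_θ(S/2) = θ²/2,

and indeed θ²/2 + θ²/2 = θ² = Var_θ(T).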
"Completeness" Definition: Let f(x|θ) = f(x1, ..., xn|θ), θ ∈ Θ ⊂ R^p, be the joint pdf/pmf of (X1, ..., Xn) and let f_T(t|θ) denote the pdf/pmf of a vector of statistics T. Then, T (and/or the family of pdfs/pmfs F_T ≡ {f_T(t|θ) : θ ∈ Θ}) is called complete if for any real-valued function u(T),

(1)    E_θ[u(T)] = 0 for all θ  ⇒  P_θ(u(T) = 0) = 1 for all θ.

T is called boundedly complete if (1) holds for all bounded functions u(·).
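A standard example of how the definition is used (not part of the notes above): let X1, ..., Xn be iid Bernoulli(θ), θ ∈ (0, 1), and T = X1 + ... + Xn, so T ~ Binomial(n, θ). If

E_θ[u(T)] = Σ_{t=0}^{n} u(t) (n choose t) θ^t (1 − θ)^{n−t} = 0  for all θ ∈ (0, 1),

then dividing by (1 − θ)^n gives a polynomial in ρ = θ/(1 − θ) that vanishes for all ρ > 0, so every coefficient u(t) (n choose t) must be 0. Hence u(t) = 0 for t = 0, 1, ..., n, i.e., P_θ(u(T) = 0) = 1 for all θ, and T is complete.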