TUTORIAL 2
________________________________________________________________________
THEORY IN SUMMARY
Sufficiency

The data themselves always form a sufficient statistic.

Let $T$ denote a sufficient statistic for $\theta$. Then $\phi(T)$ is a sufficient statistic for $\theta$, and $T$ is a sufficient statistic for $g(\theta)$, where the functions $\phi$ and $g$ are one-to-one and onto.
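For example, if $X_1, \ldots, X_n$ are i.i.d. Poisson($\theta$), the factorization criterion yields $T = \sum_{i=1}^{n} X_i$ as a sufficient statistic, and since $t \mapsto t/n$ is one-to-one, $\bar{X} = T/n$ is sufficient as well:
\[
f(x_1, \ldots, x_n \mid \theta)
  = \prod_{i=1}^{n} \frac{e^{-\theta}\theta^{x_i}}{x_i!}
  = \underbrace{\Big(\prod_{i=1}^{n} x_i!\Big)^{-1}}_{h(x)}
    \cdot
    \underbrace{e^{-n\theta}\,\theta^{\sum_i x_i}}_{g\left(\sum_i x_i,\ \theta\right)} .
\]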
Unbiasedness
Let $X = (X_1, X_2, \ldots, X_n)$ denote a random sample of size $n$ from a distribution with probability density function (pdf) $f(x \mid \theta)$, $\theta \in \Theta$. Then the statistic $U(X) = U$ is called an unbiased estimator of $\theta$ if $E_\theta[U] = \theta$ for all $\theta \in \Theta$ (assuming of course that $E_\theta[U] < \infty$). Referring to the definition of completeness, the family $F$ is complete if the only unbiased estimator of zero is the statistic that is identically zero. If $E_\theta[U] \neq \theta$, then $U$ is called a biased estimator of $\theta$ and the bias is defined as $b(U(X)) = E_\theta[U] - \theta$. The definition can be generalized to a function $g(\theta)$ of $\theta$.
Example: The statistics $\bar{X}$ and $S_1^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2$ are unbiased estimators of $\mu$ and $\sigma^2$ respectively, whereas $S_2^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2$ is not.
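Indeed, writing $\sum_{i=1}^{n}(X_i - \bar{X})^2 = \sum_{i=1}^{n} X_i^2 - n\bar{X}^2$ and using $E[X_i^2] = \sigma^2 + \mu^2$ and $E[\bar{X}^2] = \sigma^2/n + \mu^2$,
\[
E\Big[\sum_{i=1}^{n}(X_i - \bar{X})^2\Big]
 = n(\sigma^2 + \mu^2) - n\Big(\frac{\sigma^2}{n} + \mu^2\Big)
 = (n-1)\sigma^2 ,
\]
so $E[S_1^2] = \sigma^2$ while $E[S_2^2] = \frac{n-1}{n}\sigma^2$, i.e. $b(S_2^2) = -\sigma^2/n$.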
Completeness
Examples: The binomial, Gamma and Poisson families are complete families. The normal family is also complete, except for the case where $\mu$ is known and $\theta = \sigma^2 > 0$ (take $g(x) = x - \mu$). The $U(-\theta, \theta)$ family is not complete (take $g(x) = x$). Finally, the family of distributions of $X_{(n)}$ from $U(0, \theta)$ is complete.
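To verify the hint $g(x) = x$ for the $U(-\theta, \theta)$ family:
\[
E_\theta[g(X)] = \int_{-\theta}^{\theta} \frac{x}{2\theta}\, dx = 0 \quad \text{for every } \theta > 0,
\]
yet $g$ is not zero with probability one, so a non-trivial unbiased estimator of zero exists and the family is not complete.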
Rao-Blackwell Theorem:
Let $X = (X_1, X_2, \ldots, X_n)$ denote a random sample from a distribution with probability density function (pdf) $f(x \mid \theta)$, $\theta \in \Theta$, and let $T(X)$ denote a sufficient statistic for $\theta$. Furthermore, let $U(X) = U$ denote an unbiased estimator of $\theta$. Then the statistic $\phi(t) = E[U \mid T = t]$ is an unbiased estimator of $\theta$ and $V(\phi(T)) \le V(U)$ for all $\theta \in \Theta$, assuming that $V(U) < \infty$.
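A standard illustration: let $X_1, \ldots, X_n$ be i.i.d. Bernoulli($\theta$), take the crude unbiased estimator $U = X_1$ and the sufficient statistic $T = \sum_{i=1}^{n} X_i$. By symmetry,
\[
\phi(t) = E[X_1 \mid T = t] = \frac{t}{n},
\qquad
V(\bar{X}) = \frac{\theta(1-\theta)}{n} \le \theta(1-\theta) = V(X_1),
\]
and since the binomial family is complete, $\bar{X}$ is in fact the UMVU estimator of $\theta$ (see the comments below).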
Comments
•	This theorem provides us with a way to reduce the variance of an unbiased estimator while keeping the property of unbiasedness.
•	According to the Lehmann–Scheffé theorem, if $T$ is also complete, then the above estimator $\phi(t)$ is moreover unique.
•	These estimators are called Uniform Minimum Variance Unbiased Estimators (UMVUE or UMVU estimators).
EXERCISES
1.	Let $X = (X_1, X_2, \ldots, X_n)$ denote a random sample of size $n$ from a $U(\theta, 2\theta)$, $\theta > 0$, distribution. Prove that the statistics $U_1(X) = \frac{n+1}{2n+1}\, X_{(n)}$ and $U_2(X) = \frac{n+1}{5n+4}\,(2X_{(n)} + X_{(1)})$ are unbiased estimators of $\theta$ and compare their variances (a numerical sanity check is sketched after the exercise list).
2.	(Continuation of Tutorial 1, Ex. 2.) After proving completeness and sufficiency, find a UMVU estimator of $\theta$.
3.	Let a random sample of size $n$ be taken from a distribution of the discrete type with pdf $f(x \mid \theta) = \frac{1}{\theta}$, $x = 1, 2, \ldots, \theta$, zero elsewhere, where $\theta$ is a positive integer. Show that the largest item $Y = X_{(n)}$ is a complete and sufficient statistic for $\theta$ and that the estimator $\left[Y^{n+1} - (Y-1)^{n+1}\right] / \left[Y^{n} - (Y-1)^{n}\right]$ is UMVU.
4.	Roussas, Statistical Inference (in Greek), Volume I, Exercise 2.12 (page 108).
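The following simulation sketch (in Python, with arbitrary choices of $\theta$, $n$ and the number of replications) can serve as a numerical sanity check for Exercise 1; it is not a substitute for the requested proof.

import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 3.0, 10, 200_000

# Draw `reps` independent samples of size n from U(theta, 2*theta).
x = rng.uniform(theta, 2 * theta, size=(reps, n))
x_min, x_max = x.min(axis=1), x.max(axis=1)

# The two candidate estimators of theta from Exercise 1.
u1 = (n + 1) / (2 * n + 1) * x_max
u2 = (n + 1) / (5 * n + 4) * (2 * x_max + x_min)

# Both empirical means should be close to theta; the empirical
# variances indicate which estimator is more precise for this n.
print("mean U1 =", u1.mean(), " var U1 =", u1.var())
print("mean U2 =", u2.mean(), " var U2 =", u2.var())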