TUTORIAL 1
________________________________________________________________________
THEORY IN SUMMARY
Sufficiency
Let $\underset{\sim}{X} = (X_1, X_2, \ldots, X_n)$ denote a random sample of size $n$ from a distribution with probability density function (pdf) $f(\underset{\sim}{x} \mid \theta)$, $\theta \in \Theta$, and let $\mathcal{F}$ denote the family of pdf's defined by $\mathcal{F} = \{ f(\underset{\sim}{x} \mid \theta) : \theta \in \Theta \}$. Then the statistic $T(\underset{\sim}{X}) = T(X_1, X_2, \ldots, X_n)$ is called a sufficient statistic for $\theta$, or for the family $\mathcal{F}$, if the conditional distribution of $\underset{\sim}{X}$ given $T(\underset{\sim}{X}) = t$ is independent of $\theta$ for all values of $t$ for which the conditional probability is defined.
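As a small illustration of the definition (not one of the exercises below; the Bernoulli model is assumed here only for illustration): let $X_1, \ldots, X_n$ be a random sample from a Bernoulli($\theta$) distribution and $T(\underset{\sim}{X}) = \sum_{i=1}^{n} X_i$. Then, for any $\underset{\sim}{x}$ with $\sum_{i=1}^{n} x_i = t$,
\[
P_{\theta}\big[\underset{\sim}{X} = \underset{\sim}{x} \mid T(\underset{\sim}{X}) = t\big]
= \frac{\theta^{t}(1-\theta)^{n-t}}{\binom{n}{t}\,\theta^{t}(1-\theta)^{n-t}}
= \frac{1}{\binom{n}{t}},
\]
which does not depend on $\theta$; hence $T(\underset{\sim}{X})$ is sufficient for $\theta$.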
Comments:
• To prove sufficiency we often make use of the Neyman–Fisher factorization theorem: $T(\underset{\sim}{X})$ is a sufficient statistic if and only if $f(\underset{\sim}{x} \mid \theta) = g[T(\underset{\sim}{x}) \mid \theta]\, h(\underset{\sim}{x})$ (see the worked example after this list).
• The definition of sufficiency is mainly used to prove that a statistic is NOT sufficient.
• Intuitively, a sufficient statistic incorporates all the necessary sample information about the unknown parameters.
• The data vector is always a sufficient statistic.
• If $\theta^{*} = \varphi(\theta)$ is a one-to-one and onto function of $\theta$ and $T$ is a sufficient statistic for $\theta$, then $T$ is a sufficient statistic for $\theta^{*}$ as well.
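A minimal worked example of the factorization criterion (again only an illustration, with the exponential model assumed for convenience): for a random sample from $f(x \mid \theta) = \theta e^{-\theta x}$, $x > 0$, $\theta > 0$, the joint pdf factors as
\[
f(\underset{\sim}{x} \mid \theta) = \prod_{i=1}^{n} \theta e^{-\theta x_i}
= \underbrace{\theta^{n} e^{-\theta \sum_{i=1}^{n} x_i}}_{g[T(\underset{\sim}{x}) \mid \theta]} \cdot \underbrace{1}_{h(\underset{\sim}{x})},
\qquad x_i > 0,
\]
so by the factorization theorem $T(\underset{\sim}{X}) = \sum_{i=1}^{n} X_i$ is sufficient for $\theta$.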
Completeness
Using the above notation and defining a function $g : \mathbb{R}^{n} \to \mathbb{R}$, the family $\mathcal{F}$ is complete if, for every such function $g$, the relationship $E_{\theta}[g(\underset{\sim}{X})] = 0$ for all $\theta \in \Theta$ implies $g(\underset{\sim}{x}) = 0$ for every $\underset{\sim}{x} \in \mathbb{R}^{n}$ at which the probability $P(\underset{\sim}{x})$ is not zero.
Comments:
• This definition implies that the only unbiased estimator of 0 is 0 itself.
• For an intuitive explanation of completeness, see Papaioannou–Ferentinos: Mathematical Statistics (in Greek, p. 30).
• It is possible that the function $g(\underset{\sim}{x})$ is non-zero at values where the probability $P(\underset{\sim}{x})$ is zero.
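A standard worked illustration of completeness (stated for the family of distributions of a statistic, which is how the definition is usually applied; the binomial model is assumed here only as an example): let $T \sim \mathrm{Bin}(n, \theta)$, $\theta \in (0, 1)$. If $E_{\theta}[g(T)] = 0$ for all $\theta \in (0, 1)$, then
\[
\sum_{t=0}^{n} g(t) \binom{n}{t} \theta^{t} (1-\theta)^{n-t}
= (1-\theta)^{n} \sum_{t=0}^{n} g(t) \binom{n}{t} \left(\frac{\theta}{1-\theta}\right)^{t} = 0
\quad \text{for all } \theta \in (0, 1),
\]
and a polynomial in $\theta/(1-\theta)$ that vanishes on an interval has all coefficients equal to zero, so $g(t) = 0$ for $t = 0, 1, \ldots, n$; hence the binomial family is complete.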
Order statistics
Let $\underset{\sim}{X} = (X_1, X_2, \ldots, X_n)$ denote a random sample of size $n$ from a continuous distribution with probability density function (pdf) $f_X(x \mid \theta)$, $\theta \in \Theta$, and cumulative distribution function (cdf) $F_X$. Then the cdf and pdf of the $n$-th order statistic (i.e., the maximum) $Y = X_{(n)}$ are given by
\[
F_Y(y) = [F_X(y)]^{n} \quad \text{and} \quad f_Y(y) = n\,[F_X(y)]^{n-1} f_X(y).
\]
Similarly, for the cdf and pdf of the first order statistic (i.e., the minimum) $Y = X_{(1)}$:
\[
F_Y(y) = 1 - [1 - F_X(y)]^{n} \quad \text{and} \quad f_Y(y) = n\,[1 - F_X(y)]^{n-1} f_X(y),
\]
and finally the pdf of the $k$-th order statistic $Y = X_{(k)}$ is
\[
f_Y(y) = \frac{n!}{(k-1)!\,(n-k)!}\,[F_X(y)]^{k-1}\,[1 - F_X(y)]^{n-k} f_X(y).
\]
Sketch of the proof:
For $Y = X_{(n)}$ we obtain $F_Y(y) = P[X_{(n)} \le y] = P[X_1 \le y, \ldots, X_n \le y] = [F_X(y)]^{n}$ by independence, and then we take the derivative. A similar argument works for the minimum order statistic.
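A quick numerical sanity check of the formula for the maximum (a minimal sketch in Python, not part of the tutorial; the sample size, the number of replications, and the choice of the $U(0,1)$ distribution are arbitrary illustrative assumptions): for $X_i \sim U(0,1)$ the cdf of the maximum is $F_Y(y) = y^{n}$, which the simulation below compares against the empirical cdf.

    import numpy as np

    rng = np.random.default_rng(0)
    n, reps = 5, 100_000  # sample size and number of simulated samples (arbitrary choices)

    # Simulate the maximum of n independent U(0,1) draws, reps times.
    maxima = rng.uniform(0.0, 1.0, size=(reps, n)).max(axis=1)

    # Compare the empirical cdf of the maximum with the theoretical cdf F_Y(y) = y**n.
    for y in (0.5, 0.8, 0.95):
        print(f"y={y}: empirical={np.mean(maxima <= y):.4f}, theoretical={y**n:.4f}")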

EXERCISES

1. Let $\underset{\sim}{X} = (X_1, X_2, \ldots, X_n)$ denote a random sample of size $n$ from a $U(-\theta, \theta)$, $\theta > 0$ distribution. Prove that the statistic $T(\underset{\sim}{X}) = \max\{-X_{(1)}, X_{(n)}\}$ is a sufficient statistic for $\theta$. Find other sufficient statistics for $\theta$.
2. Show that the first order statistic of a random sample of size $n$ from the distribution having pdf $f(x \mid \theta) = \exp[-(x - \theta)]$, $\theta < x < \infty$, $-\infty < \theta < \infty$, zero elsewhere, is a complete and sufficient statistic for $\theta$.
3. Roussas: Statistical Inference (in Greek), Volume I, page 69, Exercise 1.7 (i), (iii).
4. Let $\underset{\sim}{X} = (X_1, X_2, \ldots, X_n)$ denote a random sample of size $n$ from a $U(0, \theta)$, $\theta > 0$ distribution. Find sufficient statistics for $\theta$.