February 27th
Today : Cramer-Rao lower bound
Unbiased estimators of $\theta$, say $\hat\theta_1, \hat\theta_2, \hat\theta_3, \dots$
It could be really difficult for us to compare $\operatorname{Var}(\hat\theta_i)$ when there are many of them.
Theorem. Cramer-Rao Lower Bound
Let $Y_1, Y_2, \dots, Y_n$ be a random sample from a population with p.d.f. $f(y;\theta)$. Let $\hat\theta = h(Y_1, Y_2, \dots, Y_n)$ be an unbiased estimator of $\theta$. Given some regularity conditions (continuous differentiability, etc.), and provided the domain of $f(y;\theta)$ does not depend on $\theta$, we have
$$\operatorname{Var}(\hat\theta) \;\ge\; \frac{1}{n\,E\!\left[\left(\dfrac{\partial \ln f(Y;\theta)}{\partial\theta}\right)^{2}\right]} \;=\; \frac{1}{-\,n\,E\!\left[\dfrac{\partial^{2} \ln f(Y;\theta)}{\partial\theta^{2}}\right]}$$
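The two expectations in the denominator are both the Fisher information, so either form of the bound can be used. As an illustration not taken from the notes (the value $p = 0.3$ is arbitrary), here is a check for a Bernoulli($p$) density, where the expectations can be computed exactly by summing over $y \in \{0, 1\}$:

```python
p = 0.3  # illustrative parameter value, chosen arbitrarily

# d/dp ln f(y;p) for the Bernoulli density f(y;p) = p^y (1-p)^(1-y)
def score(y, p):
    return y / p - (1 - y) / (1 - p)

# d^2/dp^2 ln f(y;p)
def second_deriv(y, p):
    return -y / p**2 - (1 - y) / (1 - p)**2

pmf = {0: 1 - p, 1: p}
e_score_sq = sum(pmf[y] * score(y, p) ** 2 for y in (0, 1))
e_neg_second = sum(pmf[y] * -second_deriv(y, p) for y in (0, 1))

# both come out equal to 1/(p(1-p)), so the two denominators agree
print(e_score_sq, e_neg_second, 1 / (p * (1 - p)))
```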
Theorem. Properties of the MLE
Let $Y_i \overset{i.i.d.}{\sim} f(y;\theta)$, $i = 1, 2, \dots, n$, and let $\hat\theta$ be the MLE of $\theta$. Then, as $n \to \infty$,
$$\hat\theta \;\overset{\cdot}{\sim}\; N\!\left(\theta,\; \frac{1}{n\,E\!\left[\left(\dfrac{\partial \ln f(Y;\theta)}{\partial\theta}\right)^{2}\right]}\right)$$
The MLE is asymptotically unbiased, and its asymptotic variance is the C-R lower bound.
Example 1. Let $Y_1, Y_2, \dots, Y_n \overset{i.i.d.}{\sim} \text{Bernoulli}(p)$.
1. What is the MLE of $p$?
2. What are the mean and variance of the MLE of $p$?
3. What is the Cramer-Rao lower bound for an unbiased estimator of $p$?
Solution. $P(Y = y) = f(y; p) = p^{y}(1-p)^{1-y}$, $y = 0, 1$.
1.
$$L = \prod_{i=1}^{n} f(y_i; p) = \prod_{i=1}^{n} p^{y_i}(1-p)^{1-y_i} = p^{\sum y_i}(1-p)^{\,n - \sum y_i}$$
$$l = \ln L = \left(\textstyle\sum y_i\right)\ln p + \left(n - \textstyle\sum y_i\right)\ln(1-p)$$
$$\frac{\partial l}{\partial p} = \frac{\sum y_i}{p} - \frac{n - \sum y_i}{1-p} = 0$$
$$\hat p = \frac{1}{n}\sum_{i=1}^{n} Y_i = \bar Y$$
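As a quick sanity check of the closed form $\hat p = \sum y_i / n$ (not part of the notes; the sample below is made up for illustration), one can maximize the Bernoulli log-likelihood numerically over a grid and confirm the maximizer matches the sample mean:

```python
import math

ys = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]  # hypothetical Bernoulli sample

# l(p) = (sum y_i) ln p + (n - sum y_i) ln(1 - p)
def loglik(p):
    s = sum(ys)
    return s * math.log(p) + (len(ys) - s) * math.log(1 - p)

grid = [i / 1000 for i in range(1, 1000)]   # p on a grid inside (0, 1)
p_hat = max(grid, key=loglik)

print(p_hat, sum(ys) / len(ys))   # grid maximizer vs closed-form MLE ybar
```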
2. $E(\hat p) = p$, so the MLE is unbiased, and $\operatorname{var}(\hat p) = \dfrac{p(1-p)}{n}$, since $\hat p = \bar Y$ is a sample mean.
3.
$$\ln f(y; p) = y \ln p + (1-y)\ln(1-p)$$
$$\frac{\partial \ln f(y;p)}{\partial p} = \frac{y}{p} - \frac{1-y}{1-p}$$
$$\frac{\partial^{2} \ln f(y;p)}{\partial p^{2}} = -\frac{y}{p^{2}} - \frac{1-y}{(1-p)^{2}}$$
$$E\!\left[-\frac{\partial^{2}\ln f(Y;p)}{\partial p^{2}}\right] = E\!\left[\frac{Y}{p^{2}} + \frac{1-Y}{(1-p)^{2}}\right] = \frac{p}{p^{2}} + \frac{1-p}{(1-p)^{2}} = \frac{1}{p} + \frac{1}{1-p} = \frac{1}{p(1-p)}$$
C-R lower bound:
$$\operatorname{var}(\hat p) \;\ge\; \frac{1}{n\,E\!\left[-\dfrac{\partial^{2}\ln f(Y;p)}{\partial p^{2}}\right]} = \frac{p(1-p)}{n}$$
The MLE of p is unbiased and its variance = C-R lower bound.
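A quick simulation (illustrative only; the values of $p$, $n$, and the replication count are arbitrary) confirms that the sampling variance of $\hat p$ matches $p(1-p)/n$:

```python
import random
import statistics

random.seed(0)
p, n, reps = 0.3, 200, 5000

# each replication: draw a Bernoulli(p) sample of size n, record p_hat = ybar
mles = [sum(1 if random.random() < p else 0 for _ in range(n)) / n
        for _ in range(reps)]

emp_var = statistics.variance(mles)
crlb = p * (1 - p) / n   # C-R lower bound, attained by the MLE here
print(emp_var, crlb)
```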
Definition. Efficient Estimator
If $\hat\theta$ is an unbiased estimator of $\theta$ and its variance = C-R lower bound, then $\hat\theta$ is an efficient estimator of $\theta$.
Definition. Best Estimator
If $\hat\theta$ is an unbiased estimator of $\theta$ and $\operatorname{var}(\hat\theta) \le \operatorname{var}(\theta^{*})$ for every unbiased estimator $\theta^{*}$, then $\hat\theta$ is a best estimator for $\theta$.
Efficient Estimator $\Rightarrow$ Best Estimator (O); Best Estimator $\Rightarrow$ Efficient Estimator (X): a best estimator need not attain the C-R bound.
Example 2. (Ex 5.5.2, p396) If $Y_1, Y_2, \dots, Y_n$ is a random sample from $f_Y(y;\theta) = \dfrac{2y}{\theta^{2}}$, $0 \le y \le \theta$, then $\hat\theta = \dfrac{3}{2}\bar Y$ is an unbiased estimator for $\theta$.
Compute 1. $\operatorname{Var}(\hat\theta)$ and 2. the C-R lower bound for $f_Y(y;\theta)$.
Solution.
1.
$$\operatorname{Var}(\hat\theta) = \operatorname{Var}\!\left(\frac{3}{2}\bar Y\right) = \frac{9}{4}\cdot\frac{1}{n^{2}}\sum_{i=1}^{n}\operatorname{Var}(Y_i)$$
$$E(Y_i) = \int_{0}^{\theta} y\cdot\frac{2y}{\theta^{2}}\,dy = \frac{2\theta}{3}, \qquad E(Y_i^{2}) = \int_{0}^{\theta} y^{2}\cdot\frac{2y}{\theta^{2}}\,dy = \frac{\theta^{2}}{2}$$
$$\operatorname{Var}(Y_i) = E(Y_i^{2}) - \left[E(Y_i)\right]^{2} = \frac{\theta^{2}}{2} - \frac{4\theta^{2}}{9} = \frac{\theta^{2}}{18}$$
Therefore,
$$\operatorname{Var}(\hat\theta) = \frac{9}{4n^{2}}\cdot n\cdot\frac{\theta^{2}}{18} = \frac{\theta^{2}}{8n}$$
2. C-R lower bound:
$$\ln f_Y(y;\theta) = \ln\!\left(\frac{2y}{\theta^{2}}\right) = \ln 2y - 2\ln\theta$$
$$\frac{\partial \ln f_Y(y;\theta)}{\partial\theta} = -\frac{2}{\theta}$$
$$E\!\left[\left(\frac{\partial \ln f_Y(Y;\theta)}{\partial\theta}\right)^{2}\right] = E\!\left[\frac{4}{\theta^{2}}\right] = \int_{0}^{\theta}\frac{4}{\theta^{2}}\cdot\frac{2y}{\theta^{2}}\,dy = \frac{4}{\theta^{2}}$$
$$\frac{1}{n\,E\!\left[\left(\dfrac{\partial \ln f_Y(Y;\theta)}{\partial\theta}\right)^{2}\right]} = \frac{\theta^{2}}{4n}$$
The value in 1 ($\theta^{2}/8n$) is less than the value in 2 ($\theta^{2}/4n$): the variance falls below the C-R lower bound. But this is NOT a contradiction of the theorem, because the domain $0 \le y \le \theta$ depends on $\theta$; the regularity conditions fail, so the C-R bound doesn't hold for this kind of density.
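A simulation makes this concrete (illustrative only; $\theta$, $n$, and the replication count are arbitrary). Since $F(y) = y^{2}/\theta^{2}$ on $[0, \theta]$, inverse-CDF sampling gives $Y = \theta\sqrt{U}$ for $U \sim \text{Uniform}(0,1)$:

```python
import random
import statistics

random.seed(1)
theta, n, reps = 2.0, 50, 5000

ests = []
for _ in range(reps):
    ys = [theta * random.random() ** 0.5 for _ in range(n)]  # Y = theta*sqrt(U)
    ests.append(1.5 * sum(ys) / n)                           # theta_hat = (3/2) ybar

emp_var = statistics.variance(ests)
print(emp_var)                 # close to theta^2/(8n)
print(theta**2 / (8 * n))      # derived variance of theta_hat
print(theta**2 / (4 * n))      # nominal C-R bound, which theta_hat beats
```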
Chapter 6. Confidence Interval
Example. Let $Y_1, Y_2, \dots, Y_n$ be a random sample from a normal population $N(\mu, \sigma^{2})$.
1. Find the MLE of $\mu$.
2. Please construct a 95% confidence interval for $\mu$.
Solution.
1. $\hat\mu = \bar Y$
2. We want $a$ and $b$ such that $P(a \le \mu \le b) = 0.95$.
How to construct a CI for $\mu$:
1. Start with the point estimator for $\mu$: $\bar Y \sim N\!\left(\mu, \dfrac{\sigma^{2}}{n}\right)$
2. Based on $\bar Y$, we'll construct a new R.V. that is called a pivotal quantity for $\mu$.
Definition. Pivotal Quantity
It is a function of the sample and the parameter of interest ($\mu$). Furthermore, we know its distribution entirely.
$$Z = \frac{\bar Y - \mu}{\sigma/\sqrt{n}} \sim N(0, 1)$$
Scenario 1: If $\sigma^{2}$ is known (e.g. $\sigma^{2} = 3.7$), then $Z$ is a P.Q. for $\mu$.
Scenario 2: If $\sigma^{2}$ is unknown, then $Z$ is NOT a P.Q. for $\mu$.
3. Scenario 1: Draw the pdf of your P.Q.
$$P(-z_{0.025} \le Z \le z_{0.025}) = 0.95$$
$$P\!\left(-z_{0.025} \le \frac{\bar Y - \mu}{\sigma/\sqrt{n}} \le z_{0.025}\right) = 0.95$$
$$P\!\left(\bar Y - z_{0.025}\frac{\sigma}{\sqrt{n}} \le \mu \le \bar Y + z_{0.025}\frac{\sigma}{\sqrt{n}}\right) = 0.95$$
$\Rightarrow$ the 95% confidence interval for $\mu$ (when $\sigma^{2}$ is known) is
$$\left(\bar Y - z_{0.025}\frac{\sigma}{\sqrt{n}},\; \bar Y + z_{0.025}\frac{\sigma}{\sqrt{n}}\right)$$
($\alpha$ is 0.05 in the above equations.)
Example. $\sigma^{2} = 3.7$, $\bar y = 0.73$, $n = 100$.
Solution.
$$\left(0.73 - 1.96\sqrt{\frac{3.7}{100}},\; 0.73 + 1.96\sqrt{\frac{3.7}{100}}\right)$$
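Evaluating the interval numerically (a minimal sketch, assuming $\sigma^{2} = 3.7$ as in Scenario 1):

```python
import math

ybar, var, n = 0.73, 3.7, 100   # sample mean, known variance, sample size
z = 1.96                        # z_{0.025}
half = z * math.sqrt(var / n)   # half-width: z * sigma / sqrt(n)
lo, hi = ybar - half, ybar + half
print(f"95% CI for mu: ({lo:.4f}, {hi:.4f})")
```

This prints an interval of roughly $(0.353,\ 1.107)$.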