Methods of Point Estimation

Definition
Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with pmf or
pdf $f(x)$. For $k = 1, 2, 3, \ldots$, the kth population moment, or kth moment
of the distribution $f(x)$, is $E(X^k)$. The kth sample moment is
$\frac{1}{n}\sum_{i=1}^{n} X_i^k$.
Definition
Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with pmf or
pdf $f(x; \theta_1, \ldots, \theta_m)$, where $\theta_1, \ldots, \theta_m$ are parameters whose values are
unknown. Then the moment estimators $\hat{\theta}_1, \ldots, \hat{\theta}_m$ are obtained by
equating the first $m$ sample moments to the corresponding first $m$
population moments and solving for $\theta_1, \ldots, \theta_m$.
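
As a small illustration, the sketch below applies the method of moments to a $\mathrm{N}(\mu, \sigma^2)$ sample: since $E(X) = \mu$ and $E(X^2) = \sigma^2 + \mu^2$, equating the first two sample moments to these population moments gives $\hat{\mu} = \bar{x}$ and $\hat{\sigma}^2 = \frac{1}{n}\sum x_i^2 - \bar{x}^2$. The data below are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=1000)  # hypothetical sample

# First two sample moments: (1/n) sum x_i and (1/n) sum x_i^2
m1 = np.mean(x)
m2 = np.mean(x**2)

# Population moments: E(X) = mu, E(X^2) = sigma^2 + mu^2.
# Equating and solving yields the moment estimators:
mu_hat = m1
sigma2_hat = m2 - m1**2

print(f"mu_hat = {mu_hat:.3f}, sigma2_hat = {sigma2_hat:.3f}")
```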
Example
Suppose that a coin is biased, and it is known that the long-run proportion
of heads is one of the three values $p = .2$, $.3$, or $.8$. An experiment consists
of tossing the coin twice and observing the number of heads. This can be
modeled as a random sample $X_1, X_2$ of size $n = 2$ from a Bernoulli
distribution, $X_i \sim \mathrm{Ber}(p)$, where the parameter is one of $.2$, $.3$, $.8$.
Consider the joint pmf of the random sample
$$f(x_1, x_2; p) = p^{x_1 + x_2} (1 - p)^{2 - x_1 - x_2}$$
for $x_i = 0$ or $1$. The values of $f(x_1, x_2; p)$ for each observed pair $(x_1, x_2)$ are as follows:

  p    (0,0)   (0,1)   (1,0)   (1,1)
 .2     .64     .16     .16     .04
 .3     .49     .21     .21     .09
 .8     .04     .16     .16     .64
The estimate that maximizes the “likelihood” for an observed pair $(x_1, x_2)$
is
$$\hat{p} = \begin{cases} .2 & \text{if } (x_1, x_2) = (0, 0) \\ .3 & \text{if } (x_1, x_2) = (0, 1) \text{ or } (1, 0) \\ .8 & \text{if } (x_1, x_2) = (1, 1) \end{cases}$$
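
The table and the maximizing estimate can be reproduced with a few lines of Python; this is a sketch of the brute-force search over the three candidate values, not part of the original slides.

```python
from itertools import product

candidates = [0.2, 0.3, 0.8]

def joint_pmf(x1, x2, p):
    # f(x1, x2; p) = p^(x1 + x2) * (1 - p)^(2 - x1 - x2)
    return p ** (x1 + x2) * (1 - p) ** (2 - x1 - x2)

# For each observed pair, keep the candidate p with the largest pmf value.
for x1, x2 in product([0, 1], repeat=2):
    p_hat = max(candidates, key=lambda p: joint_pmf(x1, x2, p))
    print(f"(x1, x2) = ({x1}, {x2}): p_hat = {p_hat}")
```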
Definition
Let $X_1, X_2, \ldots, X_n$ have joint pmf or pdf $f(x_1, x_2, \ldots, x_n; \theta)$, where the
parameter $\theta$ is unknown. When $x_1, \ldots, x_n$ are the observed sample values
and the above function $f$ is regarded as a function of $\theta$, it is called the
likelihood function and is often denoted by $L(\theta)$. The maximum
likelihood estimate (mle) $\hat{\theta}$ is the value of $\theta$ that maximizes the likelihood
function, so that
$$f(x_1, x_2, \ldots, x_n; \hat{\theta}) \geq f(x_1, x_2, \ldots, x_n; \theta) \quad \text{for all } \theta.$$
When the $X_i$s are substituted in place of the $x_i$s, the maximum likelihood
estimator results.
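
In practice the mle is often found by maximizing the log-likelihood numerically. The sketch below does this for an Exponential($\lambda$) sample, where the closed-form mle is $\hat{\lambda} = 1/\bar{x}$; the data, true rate, and optimizer bounds are illustrative assumptions, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = rng.exponential(scale=1 / 0.5, size=500)  # hypothetical data, true lambda = 0.5

def neg_log_likelihood(lam):
    # -log L(lambda) = -(n log(lambda) - lambda * sum x_i)
    return -(len(x) * np.log(lam) - lam * x.sum())

# Minimizing the negative log-likelihood maximizes the likelihood.
res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10), method="bounded")
print(f"numerical mle: {res.x:.4f}, closed form 1/xbar: {1 / x.mean():.4f}")
```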
The Invariance Principle
Let $\hat{\theta}$ be the mle of the parameter $\theta$. Then the mle of any function $h(\theta)$ of
this parameter is the function $h(\hat{\theta})$.
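
For example, if $\hat{\sigma}^2$ is the mle of the normal variance $\sigma^2$, then by invariance the mle of the standard deviation $\sigma = \sqrt{\sigma^2}$ is $\sqrt{\hat{\sigma}^2}$. A minimal sketch, with simulated data for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=0.0, scale=3.0, size=1000)

sigma2_hat = np.mean((x - x.mean()) ** 2)  # mle of sigma^2 (divides by n, not n - 1)
sigma_hat = np.sqrt(sigma2_hat)            # mle of sigma, by the invariance principle

print(f"sigma2_hat = {sigma2_hat:.3f}, sigma_hat = {sigma_hat:.3f}")
```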
Proposition
Under very general conditions on the joint distribution of the sample, when
the sample size $n$ is large, the maximum likelihood estimator of any
parameter $\theta$ is approximately unbiased $[E(\hat{\theta}) \approx \theta]$ and has variance that is
nearly as small as can be achieved by any estimator. Stated another way,
the mle $\hat{\theta}$ is approximately the MVUE (minimum variance unbiased
estimator) of $\theta$.
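
This large-sample behavior can be checked by simulation. The rough sketch below estimates the mean of the exponential-rate mle $\hat{\lambda} = 1/\bar{X}$ over many replicated samples; the replication count, sample sizes, and true rate are arbitrary choices, and the bias should visibly shrink as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(3)
true_lam = 2.0

for n in (10, 100, 1000):
    # Draw 5000 samples of size n and record the mle from each one.
    mles = [1 / rng.exponential(scale=1 / true_lam, size=n).mean()
            for _ in range(5000)]
    print(f"n = {n:4d}: mean of mle = {np.mean(mles):.4f} (true lambda = {true_lam})")
```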