II. Maximum Likelihood Estimation

Definition: Let f(x1, ..., xn | θ) be the joint pdf/pmf of (X1, ..., Xn). Then

    L(θ) = f(x1, ..., xn | θ),   θ ∈ Θ,

[viewed as a function of θ, for given observations (x1, ..., xn)] is called the likelihood function.
Note:
1. If X1, ..., Xn are iid with common pdf/pmf f(x|θ), then L(θ) = ∏ᵢ₌₁ⁿ f(xi | θ).
2. If X1, ..., Xn are discrete r.v.'s, then L(θ) = P(X1 = x1, ..., Xn = xn | θ), i.e., the probability of observing the data, viewed as a function of θ.
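Note 1 can be illustrated numerically. The sketch below (assuming, purely for illustration, an iid Exponential(θ) model and made-up data, neither of which is from the notes) evaluates the likelihood as a product of densities:

```python
import math

# Hypothetical model (an assumption for illustration): iid Exponential(theta)
# with pdf f(x | theta) = theta * exp(-theta * x), x > 0.
def likelihood(theta, xs):
    # L(theta) = product over i of f(x_i | theta)
    L = 1.0
    for x in xs:
        L *= theta * math.exp(-theta * x)
    return L

sample = [0.5, 1.2, 0.3, 2.0]  # made-up observations
print(likelihood(2.0, sample))
```

For an iid sample, the likelihood is simply the joint density evaluated at the data, which factors into this product.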
Definition: Let (X1, ..., Xn) have joint pdf/pmf f(x1, ..., xn | θ), θ ∈ Θ.
Then, for a given set of observations (x1, ..., xn), the maximum likelihood
estimate (MLE) of θ is a point θ̂ in Θ, say θ̂ = h(x1, ..., xn), such that

    f(x1, ..., xn | θ̂) = max over θ ∈ Θ of f(x1, ..., xn | θ),

and the maximum likelihood estimator (also abbreviated MLE) of θ is defined as θ̂ = h(X1, ..., Xn).
Example/Discussion:
Finding MLEs
If the likelihood function L(θ) = f(x1, ..., xn | θ) is differentiable, it can often
be maximized over Θ using calculus.
Assume Θ ⊂ R is open and that L(θ) is twice differentiable on Θ. Then θ̂ is a local maximizer of L(θ) if

    dL(θ)/dθ |θ=θ̂ = 0   and   d²L(θ)/dθ² |θ=θ̂ < 0.

(The first-order condition is necessary for an interior maximum; together with the second-order condition it guarantees a local maximum, which is the MLE when the stationary point is unique.)
Since log(·) is an increasing function, θ̂ maximizes L(θ) ⇐⇒ θ̂ maximizes
log L(θ). Hence,
θ̂ is an MLE if

    d log L(θ)/dθ |θ=θ̂ = 0   and   d² log L(θ)/dθ² |θ=θ̂ < 0.
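As a quick numerical sanity check of the log trick (a sketch assuming an Exponential(θ) model and made-up data, not from the notes): maximizing L(θ) and log L(θ) over the same grid picks out the same θ̂, which also matches the calculus answer 1/x̄ for this model.

```python
import math

# Made-up observations under an assumed iid Exponential(theta) model.
data = [0.8, 1.5, 0.4, 2.1, 1.0]

def L(theta):
    # Likelihood: product of densities theta * exp(-theta * x)
    out = 1.0
    for x in data:
        out *= theta * math.exp(-theta * x)
    return out

def logL(theta):
    # Log-likelihood: n*log(theta) - theta * sum(x_i)
    return len(data) * math.log(theta) - theta * sum(data)

grid = [i / 1000 for i in range(1, 5000)]
argmax_L = max(grid, key=L)        # maximizer of L on the grid
argmax_logL = max(grid, key=logL)  # maximizer of log L on the grid
print(argmax_L, argmax_logL)
```

Because log is strictly increasing, the two grid searches agree, and both sit near n / Σxᵢ = 1/x̄, the stationary point of the log-likelihood for this model.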
Example: Let X1, ..., Xn be a random sample from a Geometric(p) distribution, 0 < p < 1. Find the MLE of p.
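One way to check this example numerically (a sketch assuming the support {1, 2, ...} with pmf f(x|p) = (1−p)^(x−1) p, and made-up data; the notes do not state which convention is used): here log L(p) = n log p + (Σxᵢ − n) log(1−p), and setting the derivative to zero gives p̂ = n/Σxᵢ = 1/x̄. The closed form can be compared against a grid search:

```python
import math

# Made-up Geometric(p) observations on the support {1, 2, ...} (an assumption).
data = [1, 3, 2, 5, 1, 2, 4, 1, 2, 3]

def log_likelihood(p, xs):
    # log L(p) = n*log(p) + (sum(x_i) - n) * log(1 - p)
    n = len(xs)
    return n * math.log(p) + (sum(xs) - n) * math.log(1.0 - p)

# Closed form from d log L / dp = 0: p_hat = n / sum(x_i) = 1 / x_bar
p_closed = len(data) / sum(data)

# Numerical check: grid search over (0, 1)
grid = [i / 10000 for i in range(1, 10000)]
p_grid = max(grid, key=lambda p: log_likelihood(p, data))

print(p_closed, p_grid)
```

Under the other common convention (support {0, 1, 2, ...}), the same calculation would instead give p̂ = 1/(1 + x̄).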