Outline: Recap · MLE · Evaluation · Cramer-Rao · Summary
Biostatistics 602 - Statistical Inference
Lecture 11
Evaluation of Point Estimators
Hyun Min Kang
February 14th, 2013
Some News
• Homework 3 is posted.
• It is due Tuesday, February 26th.
• Next Thursday (Feb 21) is the midterm day.
  • We will start sharply at 1:10pm.
  • Solving Homework 3 yourself is good preparation.
  • The exam is closed book, covering all material from Lectures 1 to 12.
  • Last year's midterm is posted on the web page.
Last Lecture
1. What is a maximum likelihood estimator (MLE)?
2. How can you find an MLE?
3. Does an ML estimate always fall into a valid parameter space?
4. If you know the MLE of θ, can you also know the MLE of τ(θ)?
Recap - Maximum Likelihood Estimator

Definition
• For a given sample point x = (x₁, · · · , xₙ),
• let θ̂(x) be the value at which L(θ|x) attains its maximum.
• More formally, L(θ̂(x)|x) ≥ L(θ|x) for all θ ∈ Ω, where θ̂(x) ∈ Ω.
• θ̂(x) is called the maximum likelihood estimate of θ based on the data x,
• and θ̂(X) is the maximum likelihood estimator (MLE) of θ.
Recap - Invariance Property of MLE

Question
If θ̂ is the MLE of θ, what is the MLE of τ(θ)?

Example
X₁, · · · , Xₙ are i.i.d. Bernoulli(p), where 0 < p < 1.
1. What is the MLE of p?
2. What is the MLE of the odds, defined by η = p/(1 − p)?
Getting the MLE of η = p/(1 − p) from p̂

L*(η|x) = η^(Σᵢ xᵢ) / (1 + η)ⁿ

• From the MLE p̂ of p, we know L*(η|x) is maximized when p = η/(1 + η) = p̂.
• Equivalently, L*(η|x) is maximized when η = p̂/(1 − p̂) = τ(p̂), because τ is a one-to-one function.
• Therefore η̂ = τ(p̂).
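To make this concrete, here is a small numerical sketch (the sample size, seed, and true p = 0.3 are my own illustrative choices, not from the lecture). It computes η̂ both as τ(p̂) and by directly maximizing the induced log-likelihood log L*(η|x) = (Σxᵢ) log η − n log(1 + η) over a grid, and the two answers agree:

```python
import numpy as np

# Illustrative check of the invariance property for the Bernoulli odds:
# the MLE of eta = p/(1-p) should equal tau(p-hat) = p-hat/(1 - p-hat).
rng = np.random.default_rng(0)
x = rng.binomial(1, 0.3, size=1000)   # i.i.d. Bernoulli(0.3) sample

p_hat = x.mean()                       # MLE of p is the sample mean
eta_hat_invariance = p_hat / (1 - p_hat)

# Maximize the induced log-likelihood over a fine grid of eta values
eta_grid = np.linspace(1e-4, 2.0, 200001)
loglik = x.sum() * np.log(eta_grid) - len(x) * np.log1p(eta_grid)
eta_hat_direct = eta_grid[np.argmax(loglik)]

print(round(eta_hat_invariance, 3), round(eta_hat_direct, 3))
```

The grid search is only for illustration; setting the derivative of log L* to zero gives η = (Σxᵢ)/(n − Σxᵢ) = p̂/(1 − p̂) exactly.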
Invariance Property of MLE

Fact
Denote the MLE of θ by θ̂. If τ(θ) is a one-to-one function of θ, then the MLE of τ(θ) is τ(θ̂).

Proof
The likelihood function in terms of τ(θ) = η is

L*(τ(θ)|x) = ∏ᵢ₌₁ⁿ fX(xᵢ|θ) = ∏ᵢ₌₁ⁿ f(xᵢ|τ⁻¹(η)) = L(τ⁻¹(η)|x)

We know this function is maximized when τ⁻¹(η) = θ̂, or equivalently, when η = τ(θ̂). Therefore, the MLE of η = τ(θ) is τ(θ̂).
Induced Likelihood Function

Definition
• Let L(θ|x) be the likelihood function for given data x₁, · · · , xₙ,
• and let η = τ(θ) be a (possibly not one-to-one) function of θ.
• We define the induced likelihood function L* by

  L*(η|x) = sup_{θ ∈ τ⁻¹(η)} L(θ|x)

  where τ⁻¹(η) = {θ : τ(θ) = η, θ ∈ Ω}.
• The value of η that maximizes L*(η|x) is called the MLE of η = τ(θ).
Invariance Property of MLE

Theorem 7.2.10
If θ̂ is the MLE of θ, then the MLE of η = τ(θ) is τ(θ̂), where τ(θ) is any function of θ.

Proof - Using the Induced Likelihood Function

L*(η̂|x) = sup_η L*(η|x) = sup_η sup_{θ ∈ τ⁻¹(η)} L(θ|x) = sup_θ L(θ|x) = L(θ̂|x)

L(θ̂|x) = sup_{θ ∈ τ⁻¹(τ(θ̂))} L(θ|x) = L*[τ(θ̂)|x]

Hence L*(η̂|x) = L*[τ(θ̂)|x], and τ(θ̂) is the MLE of τ(θ).
Properties of MLE
1. Optimal in some sense: we will study this later.
2. By definition, the MLE always falls into the range of the parameter space.
3. Not always easy to obtain; it may be hard to find the global maximum.
4. Heavily depends on the underlying distributional assumptions (i.e., not robust).
Method of Evaluating Estimators

Definition: Unbiasedness
Suppose θ̂ is an estimator for θ. The bias of θ̂ is defined as

Bias(θ̂) = E(θ̂) − θ

If the bias is equal to 0, then θ̂ is an unbiased estimator for θ.

Example
X₁, · · · , Xₙ are i.i.d. samples from a distribution with mean µ. Let X̄ = (1/n) Σᵢ₌₁ⁿ Xᵢ be an estimator of µ. The bias is

Bias(X̄) = E(X̄) − µ = E[(1/n) Σᵢ₌₁ⁿ Xᵢ] − µ = (1/n) Σᵢ₌₁ⁿ E(Xᵢ) − µ = µ − µ = 0

Therefore X̄ is an unbiased estimator for µ.
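The algebra above can also be seen by simulation. This is a minimal Monte Carlo sketch (the distribution, sample size, and seed are my own choices, not from the lecture): averaging X̄ over many replicate samples gives an estimate of E(X̄) − µ, which should be near zero.

```python
import numpy as np

# Monte Carlo sketch: estimate the bias of the sample mean X-bar.
rng = np.random.default_rng(1)
mu, n, reps = 2.5, 20, 200_000

# reps independent samples of size n from N(mu, 1)
samples = rng.normal(loc=mu, scale=1.0, size=(reps, n))
xbar = samples.mean(axis=1)           # one X-bar per replicate sample
bias_hat = xbar.mean() - mu           # Monte Carlo estimate of E(X-bar) - mu

print(bias_hat)
```

The Monte Carlo standard error here is about 1/√(n · reps) ≈ 0.0005, so the printed bias estimate sits very close to zero.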
How important is unbiasedness?
• θ̂₁ (blue) is unbiased but has a chance to be very far away from the true θ = 0.
• θ̂₂ (red) is biased but is more likely to be close to the true θ than θ̂₁.
Mean Squared Error

Definition
The Mean Squared Error (MSE) of an estimator θ̂ is defined as

MSE(θ̂) = E[(θ̂ − θ)²]

Property of MSE

MSE(θ̂) = E[(θ̂ − Eθ̂ + Eθ̂ − θ)²]
        = E[(θ̂ − Eθ̂)²] + E[(Eθ̂ − θ)²] + 2E[(θ̂ − Eθ̂)](Eθ̂ − θ)
        = E[(θ̂ − Eθ̂)²] + (Eθ̂ − θ)² + 2(Eθ̂ − Eθ̂)(Eθ̂ − θ)
        = Var(θ̂) + Bias²(θ̂)

where the cross term vanishes because Eθ̂ − θ is a constant and E[θ̂ − Eθ̂] = Eθ̂ − Eθ̂ = 0.
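The decomposition MSE = Var + Bias² can be verified numerically. A sketch (the deliberately biased estimator θ̂ = 0.8·X̄ and all parameter values are my own illustrative choices, not from the lecture):

```python
import numpy as np

# Verify MSE(theta-hat) = Var(theta-hat) + Bias^2 by Monte Carlo,
# using the biased estimator theta-hat = 0.8 * X-bar for mu of N(mu, 1).
rng = np.random.default_rng(2)
mu, n, reps = 3.0, 10, 400_000

# one estimate per replicate sample of size n
est = 0.8 * rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)

mse = np.mean((est - mu) ** 2)         # Monte Carlo MSE
var = np.var(est)                      # Monte Carlo variance (ddof=0)
bias_sq = (np.mean(est) - mu) ** 2     # squared Monte Carlo bias

print(mse, var + bias_sq)
```

With these choices the theory gives Bias = 0.8µ − µ = −0.6 and Var = 0.64/n = 0.064, so MSE ≈ 0.424; the two printed numbers agree (in fact the sample-moment identity makes them equal up to floating-point error).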
Example
• X₁, · · · , Xₙ are i.i.d. N(µ, 1).
• µ̂₁ = 1, µ̂₂ = X̄.

MSE(µ̂₁) = E(µ̂₁ − µ)² = (1 − µ)²
MSE(µ̂₂) = E(X̄ − µ)² = Var(X̄) = 1/n

• Suppose the true µ = 1. Then MSE(µ̂₁) = 0 < MSE(µ̂₂), and no estimator can beat µ̂₁ in terms of MSE when the true µ = 1.
• Therefore, we cannot find an estimator that is uniformly best in terms of MSE across all θ ∈ Ω among all estimators.
• Instead, restrict the class of estimators, and find the "best" estimator within the smaller class.
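The "no uniform winner" point is easy to see by tabulating the two MSE formulas above; this sketch (the grid of true µ values and n = 25 are my own choices) shows the constant estimator winning near µ = 1 and losing elsewhere:

```python
import numpy as np

# Compare MSE(mu1-hat = 1) = (1 - mu)^2 against MSE(mu2-hat = X-bar) = 1/n
# across several true values of mu: neither estimator dominates uniformly.
n = 25
for mu in [0.0, 0.9, 1.0, 1.1, 2.0]:
    mse1 = (1 - mu) ** 2               # constant estimator, ignores the data
    mse2 = 1 / n                       # sample mean
    better = "mu1-hat" if mse1 < mse2 else "mu2-hat"
    print(f"mu={mu:.1f}  mse1={mse1:.4f}  mse2={mse2:.4f}  better={better}")
```

Near µ = 1 the constant estimator has smaller MSE (it is exactly right there), while for µ far from 1 the sample mean wins; this is why the comparison is restricted to a smaller class such as unbiased estimators.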
Uniformly Minimum Variance Unbiased Estimator

Definition
W*(X) is the best unbiased estimator, or uniformly minimum variance unbiased estimator (UMVUE), of τ(θ) if
1. E[W*(X)|θ] = τ(θ) for all θ (unbiased), and
2. Var[W*(X)|θ] ≤ Var[W(X)|θ] for all θ, where W is any other unbiased estimator of τ(θ) (minimum variance).

How to Find the Best Unbiased Estimator
• Find a lower bound, say B(θ), on the variance of any unbiased estimator of τ(θ).
• If W* is an unbiased estimator of τ(θ) and satisfies Var[W*(X)|θ] = B(θ), then W* is the best unbiased estimator.
Cramer-Rao Inequality

Theorem 7.3.9: Cramer-Rao Theorem
Let X₁, · · · , Xₙ be a sample with joint pdf/pmf fX(x|θ). Suppose W(X) is an estimator satisfying
1. E[W(X)|θ] = τ(θ), ∀θ ∈ Ω.
2. Var[W(X)|θ] < ∞.
If, for h(x) = 1 and h(x) = W(x), the differentiation and integration are interchangeable, i.e.

(d/dθ) E[h(X)|θ] = (d/dθ) ∫_{x∈X} h(x) fX(x|θ) dx = ∫_{x∈X} h(x) (∂/∂θ) fX(x|θ) dx

then a lower bound on Var[W(X)|θ] is

Var[W(X)|θ] ≥ [τ′(θ)]² / E[{(∂/∂θ) log fX(X|θ)}²]
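As a concrete check (my own worked example, not from the lecture), take X₁, · · · , Xₙ i.i.d. Bernoulli(p) and τ(p) = p. The score is (∂/∂p) log f = S/p − (n − S)/(1 − p) with S = Σxᵢ, and E[score²] = n/(p(1 − p)), so the Cramer-Rao bound is p(1 − p)/n, which is exactly Var(X̄). The sketch below verifies both facts by simulation:

```python
import numpy as np

# For i.i.d. Bernoulli(p): Monte Carlo estimate of the Fisher information
# E[score^2] and of Var(X-bar), compared with their closed forms.
rng = np.random.default_rng(3)
p, n, reps = 0.3, 50, 100_000

x = rng.binomial(1, p, size=(reps, n))
s = x.sum(axis=1)
score = s / p - (n - s) / (1 - p)        # d/dp log f evaluated at true p

info_mc = np.mean(score ** 2)            # Monte Carlo Fisher information
info_exact = n / (p * (1 - p))           # closed form: n / (p(1-p))
crlb = 1 / info_exact                    # Cramer-Rao bound for tau(p) = p
var_xbar = np.var(x.mean(axis=1))        # Monte Carlo Var(X-bar)

print(info_mc, info_exact, crlb, var_xbar)
```

Since X̄ is unbiased for p and attains the bound, X̄ is the best unbiased estimator of p in this model.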
Proving Cramer-Rao Theorem (1/4)
By the Cauchy-Schwarz inequality,

[Cov(X, Y)]² ≤ Var(X) Var(Y)

Replacing X and Y,

[Cov{W(X), (∂/∂θ) log fX(X|θ)}]² ≤ Var[W(X)] · Var[(∂/∂θ) log fX(X|θ)]

Var[W(X)] ≥ [Cov{W(X), (∂/∂θ) log fX(X|θ)}]² / Var[(∂/∂θ) log fX(X|θ)]

Using Var(X) = EX² − (EX)²,

Var[(∂/∂θ) log fX(X|θ)] = E[{(∂/∂θ) log fX(X|θ)}²] − (E[(∂/∂θ) log fX(X|θ)])²
Proving Cramer-Rao Theorem (2/4)

E[(∂/∂θ) log fX(X|θ)] = ∫_{x∈X} [(∂/∂θ) log fX(x|θ)] fX(x|θ) dx
= ∫_{x∈X} [(∂fX(x|θ)/∂θ) / fX(x|θ)] fX(x|θ) dx
= ∫_{x∈X} (∂/∂θ) fX(x|θ) dx
= (d/dθ) ∫_{x∈X} fX(x|θ) dx   (by assumption)
= (d/dθ) 1 = 0

Therefore,

Var[(∂/∂θ) log fX(X|θ)] = E[{(∂/∂θ) log fX(X|θ)}²]
Recap
.....
MLE
....
Evaluation
.....
Cramer-Rao
.................
Summary
.
Proving Cramer-Rao Theorem (3/4)
[
=
=
=
=
]
∂
Cov W(X),
log fX (X|θ)
∂θ
[
]
[
]
∂
∂
E W(X) ·
log fX (X|θ) − E [W(X)] E
log fX (X|θ)
∂θ
∂θ
[
] ∫
∂
∂
E W(X) ·
log fX (X|θ) =
W(x) log fX (x|θ)f(x|θ)dx
∂θ
∂θ
x∈X
∫
∫
∂
f (x|θ)
∂
W(x) ∂θ X
W(x) fX (x|θ)
f(x|θ)dx =
f(x|θ)
∂θ
x∈X
x∈X
∫
d
W(x)fX (x|θ)
(by assumption)
dθ x∈X
.
Hyun Min Kang
Biostatistics 602 - Lecture 11
.
.
.
.
February 14th, 2013
.
19 / 33
Recap
.....
MLE
....
Evaluation
.....
Cramer-Rao
.................
Summary
.
Proving Cramer-Rao Theorem (3/4)
[
=
=
=
=
=
]
∂
Cov W(X),
log fX (X|θ)
∂θ
[
]
[
]
∂
∂
E W(X) ·
log fX (X|θ) − E [W(X)] E
log fX (X|θ)
∂θ
∂θ
[
] ∫
∂
∂
E W(X) ·
log fX (X|θ) =
W(x) log fX (x|θ)f(x|θ)dx
∂θ
∂θ
x∈X
∫
∫
∂
f (x|θ)
∂
W(x) ∂θ X
W(x) fX (x|θ)
f(x|θ)dx =
f(x|θ)
∂θ
x∈X
x∈X
∫
d
W(x)fX (x|θ)
(by assumption)
dθ x∈X
d
d
E [W(X)] =
τ (θ) = τ ′ (θ)
dθ
dθ
.
Hyun Min Kang
Biostatistics 602 - Lecture 11
.
.
.
.
February 14th, 2013
.
19 / 33
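The identity Cov[W(X), ∂/∂θ log f(X|θ)] = τ′(θ) can likewise be checked by simulation. A minimal sketch, assuming NumPy and taking W as the sample mean of Poisson(λ) data, so that τ(λ) = λ and τ′(λ) = 1:

```python
import numpy as np

# Estimate Cov[W(X), d/dlam log f(X|lam)] over many replicate datasets;
# with W = sample mean and tau(lam) = lam, it should be tau'(lam) = 1.
rng = np.random.default_rng(1)
lam, n, reps = 2.0, 5, 200_000
x = rng.poisson(lam, size=(reps, n))
w = x.mean(axis=1)                   # estimator W(X)
score = (x / lam - 1.0).sum(axis=1)  # score of the whole sample
cov_ws = np.cov(w, score)[0, 1]
print(cov_ws)  # near 1
```

Each row of `x` plays the role of one observed sample, so the covariance is taken across replicate datasets, exactly as in the derivation.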
Proving Cramer-Rao Theorem (4/4)

From the previous results,
\[
\mathrm{Var}\left[\frac{\partial}{\partial\theta}\log f_X(X|\theta)\right]
= E\left[\left\{\frac{\partial}{\partial\theta}\log f_X(X|\theta)\right\}^2\right],
\qquad
\mathrm{Cov}\left[W(X), \frac{\partial}{\partial\theta}\log f_X(X|\theta)\right] = \tau'(\theta)
\]
Therefore, applying the Cauchy-Schwarz inequality $\mathrm{Cov}^2(Y,Z) \le \mathrm{Var}(Y)\,\mathrm{Var}(Z)$, the Cramer-Rao lower bound is
\[
\mathrm{Var}\left[W(X)\right] \ge
\frac{\left[\mathrm{Cov}\left\{W(X), \frac{\partial}{\partial\theta}\log f_X(X|\theta)\right\}\right]^2}
     {\mathrm{Var}\left[\frac{\partial}{\partial\theta}\log f_X(X|\theta)\right]}
= \frac{[\tau'(\theta)]^2}{E\left[\left\{\frac{\partial}{\partial\theta}\log f_X(X|\theta)\right\}^2\right]}
\]
Cramer-Rao bound in iid case

Corollary 7.3.10
If $X_1, \cdots, X_n$ are iid samples from pdf/pmf $f_X(x|\theta)$, and the assumptions of the Cramer-Rao theorem above hold, then the lower bound of $\mathrm{Var}[W(\mathbf{X})|\theta]$ becomes
\[
\mathrm{Var}\left[W(\mathbf{X})\right] \ge
\frac{[\tau'(\theta)]^2}{n\,E\left[\left\{\frac{\partial}{\partial\theta}\log f_X(X|\theta)\right\}^2\right]}
\]

Proof
We need to show that
\[
E\left[\left\{\frac{\partial}{\partial\theta}\log f_{\mathbf{X}}(\mathbf{X}|\theta)\right\}^2\right]
= n\,E\left[\left\{\frac{\partial}{\partial\theta}\log f_X(X|\theta)\right\}^2\right]
\]
Proving Corollary 7.3.10

\[
\begin{aligned}
E\left[\left\{\frac{\partial}{\partial\theta}\log f_{\mathbf{X}}(\mathbf{X}|\theta)\right\}^2\right]
&= E\left[\left\{\frac{\partial}{\partial\theta}\log \prod_{i=1}^{n} f_X(X_i|\theta)\right\}^2\right] \\
&= E\left[\left\{\sum_{i=1}^{n} \frac{\partial}{\partial\theta}\log f_X(X_i|\theta)\right\}^2\right] \\
&= E\left[\sum_{i=1}^{n}\left\{\frac{\partial}{\partial\theta}\log f_X(X_i|\theta)\right\}^2\right]
 + E\left[\sum_{i\ne j} \frac{\partial}{\partial\theta}\log f_X(X_i|\theta)\,
          \frac{\partial}{\partial\theta}\log f_X(X_j|\theta)\right]
\end{aligned}
\]
Proving Corollary 7.3.10 (cont'd)

Because $X_1, \cdots, X_n$ are independent,
\[
E\left[\sum_{i\ne j} \frac{\partial}{\partial\theta}\log f_X(X_i|\theta)\,
       \frac{\partial}{\partial\theta}\log f_X(X_j|\theta)\right]
= \sum_{i\ne j} E\left[\frac{\partial}{\partial\theta}\log f_X(X_i|\theta)\right]
                E\left[\frac{\partial}{\partial\theta}\log f_X(X_j|\theta)\right] = 0
\]
Therefore,
\[
E\left[\left\{\frac{\partial}{\partial\theta}\log f_{\mathbf{X}}(\mathbf{X}|\theta)\right\}^2\right]
= E\left[\sum_{i=1}^{n}\left\{\frac{\partial}{\partial\theta}\log f_X(X_i|\theta)\right\}^2\right]
= \sum_{i=1}^{n} E\left[\left\{\frac{\partial}{\partial\theta}\log f_X(X_i|\theta)\right\}^2\right]
= n\,E\left[\left\{\frac{\partial}{\partial\theta}\log f_X(X|\theta)\right\}^2\right]
\]
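The key identity of this proof — the expected squared score of an iid sample is n times that of a single observation — can also be verified numerically. A sketch under an assumed Poisson(λ) model, using NumPy:

```python
import numpy as np

# Compare E[{sum_i d/dlam log f(X_i|lam)}^2] with n * E[{d/dlam log f(X|lam)}^2];
# both should be near n/lam for Poisson(lam).
rng = np.random.default_rng(2)
lam, n, reps = 2.0, 10, 200_000
x = rng.poisson(lam, size=(reps, n))
lhs = ((x / lam - 1.0).sum(axis=1) ** 2).mean()              # sample score, squared
rhs = n * ((rng.poisson(lam, size=reps) / lam - 1.0) ** 2).mean()  # n * single-score
print(lhs, rhs)  # both near n/lam = 5
```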
Remark from Corollary 7.3.10

In the iid case, the Cramer-Rao lower bound for an unbiased estimator of $\theta$ is
\[
\mathrm{Var}\left[W(\mathbf{X})\right] \ge
\frac{1}{n\,E\left[\left\{\frac{\partial}{\partial\theta}\log f_X(X|\theta)\right\}^2\right]}
\]
because $\tau(\theta) = \theta$ and $\tau'(\theta) = 1$.
Score Function

Definition: Score or Score Function
For $X_1, \cdots, X_n \overset{\text{i.i.d.}}{\sim} f_X(x|\theta)$, the score of a single observation is
\[
S(X|\theta) = \frac{\partial}{\partial\theta}\log f_X(X|\theta),
\qquad E\left[S(X|\theta)\right] = 0,
\]
and the score of the whole sample is
\[
S_n(\mathbf{X}|\theta) = \frac{\partial}{\partial\theta}\log f_{\mathbf{X}}(\mathbf{X}|\theta)
\]
Fisher Information Number

Definition: Fisher Information Number
\[
I(\theta) = E\left[\left\{\frac{\partial}{\partial\theta}\log f_X(X|\theta)\right\}^2\right]
= E\left[S^2(X|\theta)\right]
\]
\[
I_n(\theta) = E\left[\left\{\frac{\partial}{\partial\theta}\log f_{\mathbf{X}}(\mathbf{X}|\theta)\right\}^2\right]
= n\,E\left[\left\{\frac{\partial}{\partial\theta}\log f_X(X|\theta)\right\}^2\right]
= n\,I(\theta)
\]
The bigger the information number, the more information we have about $\theta$, and the smaller the bound on the variance of unbiased estimators.
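As an illustration (an assumed example, not from the slides), the Fisher information number can be estimated by Monte Carlo as the mean squared score. For Poisson(λ) the closed form is I(λ) = 1/λ:

```python
import numpy as np

# Estimate I(lam) = E[S^2(X|lam)] for Poisson(lam) and compare with 1/lam.
rng = np.random.default_rng(3)
lam = 4.0
x = rng.poisson(lam, size=1_000_000)
i_hat = ((x / lam - 1.0) ** 2).mean()  # mean squared score
print(i_hat, 1 / lam)  # both near 0.25
```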
Simplified Fisher Information

Lemma 7.3.11
If $f_X(x|\theta)$ satisfies the two interchangeability conditions
\[
\frac{d}{d\theta}\int_{x\in\mathcal{X}} f_X(x|\theta)\,dx
= \int_{x\in\mathcal{X}} \frac{\partial}{\partial\theta} f_X(x|\theta)\,dx
\]
\[
\frac{d}{d\theta}\int_{x\in\mathcal{X}} \frac{\partial}{\partial\theta} f_X(x|\theta)\,dx
= \int_{x\in\mathcal{X}} \frac{\partial^2}{\partial\theta^2} f_X(x|\theta)\,dx
\]
which are true for the exponential family, then
\[
I(\theta) = E\left[\left\{\frac{\partial}{\partial\theta}\log f_X(X|\theta)\right\}^2\right]
= -E\left[\frac{\partial^2}{\partial\theta^2}\log f_X(X|\theta)\right]
\]
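A hedged numerical check of the lemma, again for the Poisson(λ) pmf (so ∂²/∂λ² log f = −X/λ²), comparing both expressions for the information:

```python
import numpy as np

# E[{d/dlam log f}^2] and -E[d^2/dlam^2 log f] should agree (both equal 1/lam).
rng = np.random.default_rng(4)
lam = 3.0
x = rng.poisson(lam, size=1_000_000)
lhs = ((x / lam - 1.0) ** 2).mean()  # E[S^2]
rhs = (x / lam**2).mean()            # -E[-X/lam^2]
print(lhs, rhs)  # both near 1/3
```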
Example - Poisson Distribution

• $X_1, \cdots, X_n \overset{\text{i.i.d.}}{\sim} \mathrm{Poisson}(\lambda)$
• $\hat\lambda_1 = \bar{X}$
• $\hat\lambda_2 = s_X^2$
• $E[\hat\lambda_1] = E(\bar{X}) = \lambda$.

The Cramer-Rao lower bound is $I_n^{-1}(\lambda) = [nI(\lambda)]^{-1}$.
\[
\begin{aligned}
I(\lambda) &= E\left[\left\{\frac{\partial}{\partial\lambda}\log f_X(X|\lambda)\right\}^2\right]
= -E\left[\frac{\partial^2}{\partial\lambda^2}\log f_X(X|\lambda)\right] \\
&= -E\left[\frac{\partial^2}{\partial\lambda^2}\log\frac{e^{-\lambda}\lambda^X}{X!}\right]
= -E\left[\frac{\partial^2}{\partial\lambda^2}\left(-\lambda + X\log\lambda - \log X!\right)\right] \\
&= E\left[\frac{X}{\lambda^2}\right] = \frac{1}{\lambda^2}E(X) = \frac{1}{\lambda}
\end{aligned}
\]
Example - Poisson Distribution (cont'd)

Therefore, the Cramer-Rao lower bound is
\[
\mathrm{Var}\left[W(\mathbf{X})\right] \ge \frac{1}{nI(\lambda)} = \frac{\lambda}{n}
\]
where $W$ is any unbiased estimator of $\lambda$.
\[
\mathrm{Var}(\hat\lambda_1) = \mathrm{Var}(\bar{X}) = \frac{\mathrm{Var}(X)}{n} = \frac{\lambda}{n}
\]
Therefore, $\hat\lambda_1 = \bar{X}$ attains the bound and is the best unbiased estimator of $\lambda$. In contrast, $\mathrm{Var}(\hat\lambda_2) > \lambda/n$ (details omitted), so $\hat\lambda_2$ is not the best unbiased estimator.
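A simulation sketch of this comparison (assumed setup: NumPy, λ = 3, n = 20): the variance of the sample mean sits at the bound λ/n, while the variance of the sample variance exceeds it:

```python
import numpy as np

# Compare Var(lambda_hat_1) and Var(lambda_hat_2) with the bound lam/n.
rng = np.random.default_rng(5)
lam, n, reps = 3.0, 20, 100_000
x = rng.poisson(lam, size=(reps, n))
var_mean = x.mean(axis=1).var()       # Var of lambda_hat_1 = X-bar
var_s2 = x.var(axis=1, ddof=1).var()  # Var of lambda_hat_2 = s^2
print(lam / n, var_mean, var_s2)      # var_s2 clearly exceeds the bound
```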
With and without Lemma 7.3.11

With Lemma 7.3.11
\[
I(\lambda) = -E\left[\frac{\partial^2}{\partial\lambda^2}\log f_X(X|\lambda)\right]
= -E\left[\frac{\partial^2}{\partial\lambda^2}\left(-\lambda + X\log\lambda - \log X!\right)\right]
= \frac{1}{\lambda}
\]

Without Lemma 7.3.11
\[
\begin{aligned}
I(\lambda) &= E\left[\left\{\frac{\partial}{\partial\lambda}\log f_X(X|\lambda)\right\}^2\right]
= E\left[\left\{\frac{\partial}{\partial\lambda}\left(-\lambda + X\log\lambda - \log X!\right)\right\}^2\right] \\
&= E\left[\left\{-1 + \frac{X}{\lambda}\right\}^2\right]
= E\left[1 - 2\frac{X}{\lambda} + \frac{X^2}{\lambda^2}\right]
= 1 - 2\,\frac{E(X)}{\lambda} + \frac{E(X^2)}{\lambda^2} \\
&= 1 - 2\,\frac{\lambda}{\lambda} + \frac{\mathrm{Var}(X) + [E(X)]^2}{\lambda^2}
= 1 - 2 + \frac{\lambda + \lambda^2}{\lambda^2}
= \frac{1}{\lambda}
\end{aligned}
\]
Example - Normal Distribution

• $X_1, \cdots, X_n \overset{\text{i.i.d.}}{\sim} \mathcal{N}(\mu, \sigma^2)$, where $\sigma^2$ is known.
• The Cramer-Rao bound for $\mu$ is $[nI(\mu)]^{-1}$.
\[
\begin{aligned}
I(\mu) &= -E\left[\frac{\partial^2}{\partial\mu^2}\log f_X(X|\mu)\right] \\
&= -E\left[\frac{\partial^2}{\partial\mu^2}\log\left\{\frac{1}{\sqrt{2\pi\sigma^2}}
   \exp\left(-\frac{(X-\mu)^2}{2\sigma^2}\right)\right\}\right] \\
&= -E\left[\frac{\partial^2}{\partial\mu^2}\left\{-\frac{1}{2}\log(2\pi\sigma^2)
   - \frac{(X-\mu)^2}{2\sigma^2}\right\}\right] \\
&= -E\left[\frac{\partial}{\partial\mu}\,\frac{2(X-\mu)}{2\sigma^2}\right]
= -E\left[-\frac{1}{\sigma^2}\right] = \frac{1}{\sigma^2}
\end{aligned}
\]
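The result I(μ) = 1/σ² can also be checked by Monte Carlo. A sketch with assumed values μ = 1, σ = 2, using the score in μ, (X − μ)/σ²:

```python
import numpy as np

# Estimate I(mu) = E[{(X - mu)/sigma^2}^2] for N(mu, sigma^2); equals 1/sigma^2.
rng = np.random.default_rng(6)
mu, sigma = 1.0, 2.0
x = rng.normal(mu, sigma, size=1_000_000)
i_hat = (((x - mu) / sigma**2) ** 2).mean()
print(i_hat, 1 / sigma**2)  # both near 0.25
```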
Applying Lemma 7.3.11

Question
When can we interchange the order of differentiation and integration?

Answer
• For the exponential family, always.
• For non-exponential families, not always; the individual case must be checked.

Example
For $X_1, \cdots, X_n \overset{\text{i.i.d.}}{\sim} \mathrm{Uniform}(0, \theta)$, the support depends on $\theta$, and in general
\[
\frac{d}{d\theta}\int_0^\theta h(x)\, f_X(x|\theta)\,dx
\ne \int_0^\theta h(x)\,\frac{\partial}{\partial\theta} f_X(x|\theta)\,dx
\]
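The failure of the interchange is easy to see with the assumed choice h(x) = 1 and f(x|θ) = 1/θ on (0, θ): the left side is d/dθ of the constant 1, i.e. 0, while the right side integrates ∂/∂θ (1/θ) = −1/θ² over (0, θ), giving −1/θ. A sketch:

```python
# Uniform(0, theta) with h(x) = 1: the two sides of the interchange differ.
theta, eps = 2.0, 1e-6

def integral(t):
    # ∫_0^t h(x) f(x|t) dx with h(x) = 1 and f(x|t) = 1/t, i.e. always 1
    return t * (1.0 / t)

lhs = (integral(theta + eps) - integral(theta - eps)) / (2 * eps)  # d/dtheta of 1
rhs = theta * (-1.0 / theta**2)  # ∫_0^theta (-1/theta^2) dx = -1/theta
print(lhs, rhs)  # approximately 0 vs -0.5
```

The discrepancy comes from the θ-dependent support, which contributes a boundary term that the naive interchange ignores.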
Summary

Today
• Invariance Property
• Mean Squared Error
• Unbiased Estimator
• Cramer-Rao inequality

Next Lecture
• More on Cramer-Rao inequality