Best Linear Unbiased Estimation in Constrained Models

© 2012 Dan Nettleton (Iowa State University), Statistics 611
Suppose the GMM holds and suppose β ∈ ℝᵖ satisfies H′β = h for some
known p × q matrix H of rank q and some known q × 1 vector h.
The purpose of these notes is to show that if β̃ is the first component
of a solution to the restricted normal equations (RNE)

\[
\begin{bmatrix} X'X & H \\ H' & 0 \end{bmatrix}
\begin{bmatrix} b \\ \lambda \end{bmatrix}
=
\begin{bmatrix} X'y \\ h \end{bmatrix},
\]

then c′β̃ is the BLUE of estimable c′β under the constrained GMM
(CGMM).
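For concreteness, here is a minimal numerical sketch (not part of the
original notes) that assembles and solves the RNE with numpy; the design
matrix, constraint, and all names are invented for illustration.

    import numpy as np

    # Hypothetical small instance of the constrained GMM:
    # y = X beta + eps, subject to H' beta = h.
    rng = np.random.default_rng(0)
    n, p, q = 12, 3, 1
    X = rng.normal(size=(n, p))
    beta = np.array([1.0, 2.0, -1.0])
    H = np.array([[1.0], [1.0], [0.0]])   # constraint: beta_1 + beta_2 = 3
    h = H.T @ beta                        # beta satisfies H' beta = h
    y = X @ beta + rng.normal(size=n)

    # Assemble the RNE coefficient matrix [[X'X, H], [H', 0]] and RHS.
    A = np.block([[X.T @ X, H],
                  [H.T, np.zeros((q, q))]])
    rhs = np.concatenate([X.T @ y, h])

    # lstsq returns a solution whenever the system is consistent,
    # even if A is singular (e.g., X not of full column rank).
    sol = np.linalg.lstsq(A, rhs, rcond=None)[0]
    beta_tilde, lam = sol[:p], sol[p:]
    print("H' beta_tilde - h:", H.T @ beta_tilde - h)   # ~ 0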
Lemma 4.2:

If c′β is estimable under the constrained model, then the equations

\[
\begin{bmatrix} X'X & H \\ H' & 0 \end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \end{bmatrix}
=
\begin{bmatrix} c \\ 0 \end{bmatrix}
\]

have a solution.
Proof of Lemma 4.2:

By Result 3.7, c′β estimable under the constrained model
⇒ ∃ a and l ∋ c = X′a + Hl.   (Fact 1)

By Result 3.8, the RNE are consistent. Thus, ∃ a solution to

\[
\begin{bmatrix} X'X & H \\ H' & 0 \end{bmatrix}
\begin{bmatrix} b \\ \lambda \end{bmatrix}
=
\begin{bmatrix} X'y \\ h \end{bmatrix}
\]

∀ y and ∀ h ∋ H′b = h is consistent.   (Fact 2)
By Fact 2 (taking y = a and h = 0, which is consistent since b = 0
satisfies H′b = 0), ∃ b∗ and λ∗ ∋

\[
\begin{bmatrix} X'X & H \\ H' & 0 \end{bmatrix}
\begin{bmatrix} b_* \\ \lambda_* \end{bmatrix}
=
\begin{bmatrix} X'a \\ 0 \end{bmatrix}.
\]

Thus, X′Xb∗ + Hλ∗ = X′a and H′b∗ = 0.

Using this with Fact 1 ⇒

c = X′Xb∗ + Hλ∗ + Hl
  = X′Xb∗ + H(λ∗ + l)   for some l,

and H′b∗ = 0.
"
∴
X0 X H
H0
0
#"
b∗
λ∗ + l
#
=
" #
c
0
.
"
∴ There exists a solution to
X0 X H
H0
0
#" #
v1
v2
=
" #
c
0
.
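Lemma 4.2 can also be checked numerically: build an estimable c in the
Result 3.7 form c = X′a + Hl and confirm the system is consistent. A
minimal sketch, with all inputs invented:

    import numpy as np

    rng = np.random.default_rng(1)
    n, p, q = 12, 3, 1
    X = rng.normal(size=(n, p))
    H = np.array([[1.0], [1.0], [0.0]])

    # Any estimable c has the form c = X'a + Hl (Fact 1).
    a = rng.normal(size=n)
    l = rng.normal(size=q)
    c = X.T @ a + H @ l

    # Check that [[X'X, H], [H', 0]] v = [c, 0] has a solution.
    A = np.block([[X.T @ X, H],
                  [H.T, np.zeros((q, q))]])
    rhs = np.concatenate([c, np.zeros(q)])
    v = np.linalg.lstsq(A, rhs, rcond=None)[0]
    print("residual:", np.linalg.norm(A @ v - rhs))   # ~ 0: consistent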
Lemma 4.3:

If β̃ is the first part of a solution to the RNE and if c′β is estimable, then
c′β̃ is an unbiased linear estimator of c′β under the restricted model.
Proof of Lemma 4.3:

By Lemma 4.2, ∃ v₁, v₂ ∋

\[
\begin{bmatrix} X'X & H \\ H' & 0 \end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \end{bmatrix}
=
\begin{bmatrix} c \\ 0 \end{bmatrix}
\]

⇐⇒ X′Xv₁ + Hv₂ = c   (1)
    and H′v₁ = 0.   (2)

Thus,

c′β̃ = (v′₁X′X + v′₂H′)β̃ = v′₁X′Xβ̃ + v′₂H′β̃.   (3)
Now β̃ is a leading subvector of a solution to the RNE

⇐⇒
\[
\begin{bmatrix} X'X & H \\ H' & 0 \end{bmatrix}
\begin{bmatrix} \tilde{\beta} \\ \lambda \end{bmatrix}
=
\begin{bmatrix} X'y \\ h \end{bmatrix}
\quad \text{for some } \lambda
\]

⇐⇒ X′Xβ̃ + Hλ = X′y for some λ   (4)
    and H′β̃ = h.   (5)
It follows that

c′β̃ = v′₁(X′y − Hλ) + v′₂h   (by (3), (4), (5))
    = v′₁X′y − v′₁Hλ + v′₂h
    = v′₁X′y + v′₂h.   (by (2))
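The identity c′β̃ = v′₁X′y + v′₂h lends itself to a quick numerical
check; the sketch below reuses the invented example from earlier and is
not part of the original notes:

    import numpy as np

    rng = np.random.default_rng(2)
    n, p, q = 12, 3, 1
    X = rng.normal(size=(n, p))
    H = np.array([[1.0], [1.0], [0.0]])
    beta = np.array([1.0, 2.0, -1.0])
    h = H.T @ beta
    y = X @ beta + rng.normal(size=n)
    c = X.T @ rng.normal(size=n) + H @ rng.normal(size=q)  # estimable c

    A = np.block([[X.T @ X, H], [H.T, np.zeros((q, q))]])

    # beta_tilde from the RNE; (v1, v2) from Lemma 4.2.
    sol = np.linalg.lstsq(A, np.concatenate([X.T @ y, h]), rcond=None)[0]
    v = np.linalg.lstsq(A, np.concatenate([c, np.zeros(q)]), rcond=None)[0]
    beta_tilde, v1, v2 = sol[:p], v[:p], v[p:]
    print(c @ beta_tilde, v1 @ (X.T @ y) + v2 @ h)   # the two agree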
E(c′β̃) = v′₁X′E(y) + v′₂h
      = v′₁X′Xβ + v′₂h
      = (c′ − v′₂H′)β + v′₂h   (by (1))
      = c′β − v′₂H′β + v′₂h
      = c′β   ∀ β ∋ H′β = h.
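A Monte Carlo sketch of this unbiasedness claim (an illustration under
invented inputs, not part of the notes): the average of c′β̃ over many
simulated responses should be close to c′β.

    import numpy as np

    rng = np.random.default_rng(3)
    n, p, q = 12, 3, 1
    X = rng.normal(size=(n, p))
    H = np.array([[1.0], [1.0], [0.0]])
    beta = np.array([1.0, 2.0, -1.0])    # satisfies H' beta = h below
    h = H.T @ beta
    c = np.array([1.0, 0.0, 1.0])        # estimable: X has full column rank
    A = np.block([[X.T @ X, H], [H.T, np.zeros((q, q))]])

    # Average c' beta_tilde over many simulated responses.
    reps = 20000
    est = np.empty(reps)
    for i in range(reps):
        y = X @ beta + rng.normal(size=n)
        sol = np.linalg.solve(A, np.concatenate([X.T @ y, h]))
        est[i] = c @ sol[:p]
    print(est.mean(), c @ beta)          # the two should be close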
Result 4.5:

Suppose the CGMM holds, β̃ is the leading subvector of a solution to
the RNE, and c′β is estimable under the constrained model. Then c′β̃
is the BLUE of c′β under the constrained model.
Proof of Result 4.5:

By Result 3.7, any linear estimator unbiased for c′β under the
constrained model has the form

l′h + a′y,   where c = X′a + Hl.   (6)
Var(l′h + a′y) = Var(l′h + a′y − c′β̃ + c′β̃)
             = Var(l′h + a′y − c′β̃) + Var(c′β̃)
               + 2 Cov(l′h + a′y − c′β̃, c′β̃).

Substituting v′₁X′y + v′₂h for c′β̃ (as derived in the proof of
Lemma 4.3) leads to the following expression for the covariance:
Cov(a′y − v′₁X′y, v′₁X′y) = Cov((a′ − v′₁X′)y, v′₁X′y)
                        = σ²(a′ − v′₁X′)Xv₁
                        = σ²(a′X − v′₁X′X)v₁
                        = σ²((c′ − l′H′) − (c′ − v′₂H′))v₁   (by (6), (1))
                        = σ²(v′₂ − l′)H′v₁
                        = σ²(v₂ − l)′0 = 0.   (by (2))
Thus, Var(l′h + a′y) ≥ Var(c′β̃), with equality iff

Var(l′h + a′y − c′β̃) = 0
⇐⇒ Var(a′y − c′β̃) = 0
⇐⇒ Var(a′y − v′₁X′y − v′₂h) = 0
⇐⇒ Var(a′y − v′₁X′y) = 0
⇐⇒ Var((a − Xv₁)′y) = 0
⇐⇒ σ²(a − Xv₁)′(a − Xv₁) = 0
⇐⇒ a = Xv₁,

and then Hv₂ = Hl because . . .
c = X′a + Hl = X′Xv₁ + Hv₂

by (1) and (6), so that a = Xv₁ ⇒ Hl = Hv₂.

Recall that H has full column rank. Thus

Hl = Hv₂ ⇐⇒ H(l − v₂) = 0 ⇐⇒ l − v₂ = 0 ⇐⇒ l = v₂.
It follows that

Var(l′h + a′y) ≥ Var(c′β̃),

with equality iff

l′h + a′y = v′₁X′y + v′₂h = c′β̃.

∴ the constrained BLUE is unique.
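One consequence worth illustrating: the unconstrained least-squares
estimator c′β̂ is also linear and unbiased under the CGMM (when X has
full column rank), so Result 4.5 implies Var(c′β̃) ≤ Var(c′β̂). A
simulation sketch under the same invented inputs:

    import numpy as np

    rng = np.random.default_rng(4)
    n, p, q = 12, 3, 1
    X = rng.normal(size=(n, p))
    H = np.array([[1.0], [1.0], [0.0]])
    beta = np.array([1.0, 2.0, -1.0])
    h = H.T @ beta
    c = np.array([1.0, 0.0, 1.0])
    A = np.block([[X.T @ X, H], [H.T, np.zeros((q, q))]])

    reps = 20000
    con = np.empty(reps)   # constrained: c' beta_tilde from the RNE
    ols = np.empty(reps)   # unconstrained: c' beta_hat from ordinary LS
    for i in range(reps):
        y = X @ beta + rng.normal(size=n)
        con[i] = c @ np.linalg.solve(A, np.concatenate([X.T @ y, h]))[:p]
        ols[i] = c @ np.linalg.lstsq(X, y, rcond=None)[0]
    print(con.var(), ols.var())   # constrained variance is no larger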
Note that we can handle BLUE in the constrained version of the AM
(Aitken model) by transforming to the GMM. The constraint is unaffected
by the transformation:

\[
V^{-1/2} y = V^{-1/2} X \beta + V^{-1/2} \varepsilon = U \beta + \delta.
\]
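A sketch of that transformation (invented example, not from the notes;
V^{-1/2} is computed via the eigendecomposition of a known positive
definite V):

    import numpy as np

    rng = np.random.default_rng(5)
    n, p, q = 12, 3, 1
    X = rng.normal(size=(n, p))
    H = np.array([[1.0], [1.0], [0.0]])
    beta = np.array([1.0, 2.0, -1.0])
    h = H.T @ beta

    # Known positive definite V for the Aitken model:
    # y = X beta + eps, Var(eps) = sigma^2 V.
    M = rng.normal(size=(n, n))
    V = M @ M.T + n * np.eye(n)

    # Symmetric inverse square root of V from its eigendecomposition.
    w, Q = np.linalg.eigh(V)
    V_inv_half = Q @ np.diag(w ** -0.5) @ Q.T

    y = X @ beta + np.linalg.cholesky(V) @ rng.normal(size=n)
    U, z = V_inv_half @ X, V_inv_half @ y   # transformed GMM: z = U beta + delta

    # Solve the RNE in the transformed model; H' beta = h is unchanged.
    A = np.block([[U.T @ U, H], [H.T, np.zeros((q, q))]])
    sol = np.linalg.solve(A, np.concatenate([U.T @ z, h]))
    print(sol[:p], H.T @ sol[:p] - h)       # constraint still satisfied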