THE AITKEN MODEL

y = Xβ + ε,   ε ∼ (0, σ²V)

Identical to the Gauss-Markov linear model except that Var(ε) = σ²V instead of σ²I.
V is assumed to be a known nonsingular variance matrix.
σ² is an unknown positive variance parameter.
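As a quick illustration of the model (a minimal sketch; the design matrix, β, σ², and the AR(1)-style V below are made-up values, not from these notes), one draw of y with Var(ε) = σ²V can be simulated as follows:

```python
# Simulate one draw from an Aitken model y = X beta + e, Var(e) = sigma^2 V.
# All specific values (X, beta, sigma^2, rho, n, p) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # design matrix
beta = np.array([1.0, 2.0, -0.5])                               # true coefficients
sigma2 = 4.0                                                    # true variance parameter

# Known, nonsingular V (here an AR(1) correlation structure as an example)
rho = 0.6
idx = np.arange(n)
V = rho ** np.abs(idx[:, None] - idx[None, :])

# e ~ (0, sigma^2 V): draw with the required covariance, then form y
e = rng.multivariate_normal(np.zeros(n), sigma2 * V)
y = X @ beta + e
```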
A Transformation of the Model
By the Spectral Decomposition Theorem, there exists a nonsingular symmetric matrix V^{1/2} such that V^{1/2}V^{1/2} = V.
Using V^{-1/2} to denote (V^{1/2})^{-1}, we have
V^{-1/2}y = V^{-1/2}Xβ + V^{-1/2}ε.
With z = V^{-1/2}y, W = V^{-1/2}X, and δ = V^{-1/2}ε, we have
z = Wβ + δ,   δ ∼ (0, σ²I), because
Var(δ) = Var(V^{-1/2}ε) = V^{-1/2}(σ²V)V^{-1/2} = σ²V^{-1/2}V^{1/2}V^{1/2}V^{-1/2} = σ²I.
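A minimal numerical sketch of this transformation (the X, V, and y below are the same kind of illustrative values used above, not from these notes): V^{1/2} is built from the spectral decomposition of V and then used to form z and W.

```python
# Symmetric square root of V via its spectral decomposition V = Q diag(lam) Q',
# then the transformed data z = V^{-1/2} y and W = V^{-1/2} X.
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
rho = 0.6
idx = np.arange(n)
V = rho ** np.abs(idx[:, None] - idx[None, :])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.multivariate_normal(np.zeros(n), 4.0 * V)

lam, Q = np.linalg.eigh(V)                         # V symmetric positive definite
V_half = Q @ np.diag(np.sqrt(lam)) @ Q.T           # symmetric V^{1/2}
V_neg_half = Q @ np.diag(1 / np.sqrt(lam)) @ Q.T   # (V^{1/2})^{-1}

z = V_neg_half @ y   # transformed response
W = V_neg_half @ X   # transformed design matrix

assert np.allclose(V_half @ V_half, V)   # check: V^{1/2} V^{1/2} = V
```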
Thus, after transformation, we are back to the Gauss-Markov model we are familiar with.
We can apply all the results we have established previously to the Gauss-Markov model
z = Wβ + δ,   δ ∼ (0, σ²I).
Estimation of E(y) under the Aitken Model
Note that
E(y) = E(V^{1/2}V^{-1/2}y) = V^{1/2}E(V^{-1/2}y) = V^{1/2}E(z).
Because the Gauss-Markov model holds for z, we already know that the best estimate of E(z) is
ẑ = P_W z = W(W′W)⁻W′z
  = V^{-1/2}X((V^{-1/2}X)′V^{-1/2}X)⁻(V^{-1/2}X)′V^{-1/2}y
  = V^{-1/2}X(X′V^{-1/2}V^{-1/2}X)⁻X′V^{-1/2}V^{-1/2}y
  = V^{-1/2}X(X′V^{-1}X)⁻X′V^{-1}y.
Thus, to estimate E(y) = V^{1/2}E(z), we should use
V^{1/2}ẑ = V^{1/2}V^{-1/2}X(X′V^{-1}X)⁻X′V^{-1}y = X(X′V^{-1}X)⁻X′V^{-1}y.
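A sketch checking this identity numerically on illustrative data (np.linalg.pinv stands in for a generalized inverse): the direct formula X(X′V^{-1}X)⁻X′V^{-1}y matches V^{1/2}P_W z computed from the transformed model.

```python
# Estimate of E(y) under the Aitken model, computed two ways.
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
rho = 0.6
idx = np.arange(n)
V = rho ** np.abs(idx[:, None] - idx[None, :])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.multivariate_normal(np.zeros(n), 4.0 * V)

V_inv = np.linalg.inv(V)

# Direct formula: X (X' V^{-1} X)^- X' V^{-1} y
fitted_direct = X @ np.linalg.pinv(X.T @ V_inv @ X) @ X.T @ V_inv @ y

# Back-transformed OLS fit on (z, W): V^{1/2} P_W z
lam, Q = np.linalg.eigh(V)
V_half = Q @ np.diag(np.sqrt(lam)) @ Q.T
V_neg_half = Q @ np.diag(1 / np.sqrt(lam)) @ Q.T
z, W = V_neg_half @ y, V_neg_half @ X
z_hat = W @ np.linalg.pinv(W.T @ W) @ W.T @ z
fitted_transformed = V_half @ z_hat

print(np.allclose(fitted_direct, fitted_transformed))  # True
```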
Likewise, if Cβ is estimable, we know the BLUE of Cβ is the ordinary least squares (OLS) estimator based on the transformed model:
C(W′W)⁻W′z = C(X′V^{-1/2}V^{-1/2}X)⁻X′V^{-1/2}V^{-1/2}y = C(X′V^{-1}X)⁻X′V^{-1}y.
C(X′V^{-1}X)⁻X′V^{-1}y = Cβ̂_V is called a Generalized Least Squares (GLS) estimator.
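A minimal sketch of computing Cβ̂_V (the data and the contrast matrix C are illustrative assumptions; with the full-column-rank X used here every Cβ is estimable):

```python
# GLS estimator beta_V = (X' V^{-1} X)^- X' V^{-1} y, and C beta_V for a
# hypothetical estimable C (illustrative values throughout).
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
rho = 0.6
idx = np.arange(n)
V = rho ** np.abs(idx[:, None] - idx[None, :])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.multivariate_normal(np.zeros(n), 4.0 * V)

V_inv = np.linalg.inv(V)
beta_V = np.linalg.pinv(X.T @ V_inv @ X) @ X.T @ V_inv @ y  # GLS estimator

C = np.array([[0.0, 1.0, -1.0]])   # hypothetical contrast of the two slope coefficients
print(C @ beta_V)                  # GLS estimate of C beta
```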
β̂_V = (X′V^{-1}X)⁻X′V^{-1}y is a solution to the Aitken Equations
X′V^{-1}Xb = X′V^{-1}y,
which follow from the Normal Equations:
W′Wb = W′z  ⟺  X′V^{-1/2}V^{-1/2}Xb = X′V^{-1/2}V^{-1/2}y  ⟺  X′V^{-1}Xb = X′V^{-1}y.
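A sketch of solving the Aitken Equations numerically (illustrative full-rank setup, so np.linalg.solve applies; with a rank-deficient X a generalized inverse such as np.linalg.pinv or np.linalg.lstsq would be used instead):

```python
# Solve X' V^{-1} X b = X' V^{-1} y and compare with the explicit GLS formula.
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
rho = 0.6
idx = np.arange(n)
V = rho ** np.abs(idx[:, None] - idx[None, :])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.multivariate_normal(np.zeros(n), 4.0 * V)

V_inv = np.linalg.inv(V)
A = X.T @ V_inv @ X        # left-hand side of the Aitken Equations
b_rhs = X.T @ V_inv @ y    # right-hand side

beta_V = np.linalg.solve(A, b_rhs)           # solve the Aitken Equations
beta_V_formula = np.linalg.pinv(A) @ b_rhs   # (X' V^{-1} X)^- X' V^{-1} y
print(np.allclose(beta_V, beta_V_formula))   # True
```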
Recall that solving the Normal Equations is equivalent to minimizing
(z − Wb)′(z − Wb) over b ∈ ℝ^p.
Note that
(z − Wb)′(z − Wb) = (V^{-1/2}y − V^{-1/2}Xb)′(V^{-1/2}y − V^{-1/2}Xb)
                  = ‖V^{-1/2}y − V^{-1/2}Xb‖²
                  = ‖V^{-1/2}(y − Xb)‖²
                  = (y − Xb)′V^{-1}(y − Xb).
Thus, β̂_V = (X′V^{-1}X)⁻X′V^{-1}y is a solution to this generalized least squares problem.
When V is diagonal, the term "Weighted Least Squares" (WLS) is often used instead of GLS.
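A minimal sketch of the WLS special case with an assumed diagonal V = diag(v₁, …, vₙ): the GLS criterion becomes Σᵢ (yᵢ − xᵢ′b)²/vᵢ, i.e. OLS with weights 1/vᵢ (equivalently, OLS after rescaling each row by 1/√vᵢ).

```python
# Weighted least squares when V is diagonal (all values illustrative).
import numpy as np

rng = np.random.default_rng(1)
n = 15
X = np.column_stack([np.ones(n), rng.normal(size=n)])
v = rng.uniform(0.5, 3.0, size=n)                       # known diagonal entries of V
y = X @ np.array([1.0, 2.0]) + rng.normal(0.0, np.sqrt(2.0 * v))

# WLS / GLS estimate: (X' V^{-1} X)^{-1} X' V^{-1} y with V^{-1} = diag(1/v)
w = 1.0 / v
beta_wls = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

# Equivalent: OLS after rescaling each row by 1/sqrt(v_i), i.e. multiplying by V^{-1/2}
Xs, ys = X / np.sqrt(v)[:, None], y / np.sqrt(v)
beta_ols_scaled = np.linalg.lstsq(Xs, ys, rcond=None)[0]

print(np.allclose(beta_wls, beta_ols_scaled))   # True
```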
An unbiased estimator of σ² is

z′(I − P_W)z / (n − rank(W)) = ‖(I − P_W)z‖² / (n − rank(W))
  = ‖(I − W(W′W)⁻W′)z‖² / (n − rank(W))
  = ‖(I − V^{-1/2}X(X′V^{-1}X)⁻X′V^{-1/2})V^{-1/2}y‖² / (n − rank(V^{-1/2}X))
  = ‖V^{-1/2}y − V^{-1/2}X(X′V^{-1}X)⁻X′V^{-1}y‖² / (n − rank(X))
  = ‖V^{-1/2}(y − X(X′V^{-1}X)⁻X′V^{-1}y)‖² / (n − r)
  = ‖V^{-1/2}(y − Xβ̂_V)‖² / (n − r)
  = (y − Xβ̂_V)′V^{-1}(y − Xβ̂_V) / (n − r)
  = σ̂²_V.
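A sketch computing σ̂²_V on illustrative data (the setup values are the same kind of made-up example used above):

```python
# Unbiased variance estimator: (y - X beta_V)' V^{-1} (y - X beta_V) / (n - r).
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
rho = 0.6
idx = np.arange(n)
V = rho ** np.abs(idx[:, None] - idx[None, :])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.multivariate_normal(np.zeros(n), 4.0 * V)

V_inv = np.linalg.inv(V)
beta_V = np.linalg.pinv(X.T @ V_inv @ X) @ X.T @ V_inv @ y

r = np.linalg.matrix_rank(X)
resid = y - X @ beta_V
sigma2_hat_V = resid @ V_inv @ resid / (n - r)
print(sigma2_hat_V)   # unbiased for sigma^2 (here 4.0); any single draw varies around it
```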
Inference Under the Normal Theory Aitken Model
The Normal Theory Aitken Model:
y = Xβ + ε,   ε ∼ N(0, σ²V).
Under the Normal Theory Aitken Model, we can back-transform the known formulas in terms of z and W into formulas in terms of y and X, allowing inference about estimable Cβ.
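As one example of such a back-transformed formula (a standard normal-theory result, not spelled out on these slides): for estimable c′β, a t-based confidence interval uses c′β̂_V, the variance estimate σ̂²_V c′(X′V^{-1}X)⁻c, and n − r degrees of freedom. A sketch with illustrative data and a hypothetical c:

```python
# t-based confidence interval for an estimable c'beta under the
# Normal Theory Aitken Model (all data values and c are illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
rho = 0.6
idx = np.arange(n)
V = rho ** np.abs(idx[:, None] - idx[None, :])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.multivariate_normal(np.zeros(n), 4.0 * V)

V_inv = np.linalg.inv(V)
XtVinvX_ginv = np.linalg.pinv(X.T @ V_inv @ X)
beta_V = XtVinvX_ginv @ X.T @ V_inv @ y
r = np.linalg.matrix_rank(X)
resid = y - X @ beta_V
sigma2_hat_V = resid @ V_inv @ resid / (n - r)

c = np.array([0.0, 1.0, 0.0])                          # hypothetical estimable c'beta
est = c @ beta_V
se = np.sqrt(sigma2_hat_V * (c @ XtVinvX_ginv @ c))
t_crit = stats.t.ppf(0.975, df=n - r)
print(est - t_crit * se, est + t_crit * se)            # 95% confidence interval
```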