1 Best Linear Unbiased Prediction and Related Inference in the Mixed Model

In the usual mixed model
$$Y = X\beta + Zu + \epsilon$$
with
$$\mathrm{E}\begin{pmatrix} u \\ \epsilon \end{pmatrix} = 0
\quad \text{and} \quad
\mathrm{Var}\begin{pmatrix} u \\ \epsilon \end{pmatrix}
= \begin{pmatrix} G & 0 \\ 0 & R \end{pmatrix}$$
and using the notation
$$V = ZGZ' + R$$
we consider problems of prediction related to the random effects contained in $u$. Under the MVN assumption,
$$\mathrm{E}\left[u \mid Y\right] = GZ'V^{-1}\left(Y - X\beta\right) \tag{1}$$
which, obviously, depends on the fixed effect vector $\beta$. (For what it is worth,
$$\mathrm{Var}\begin{pmatrix} u \\ Y \end{pmatrix}
= \begin{pmatrix} G & GZ' \\ ZG & ZGZ' + R \end{pmatrix}$$
and the $GZ'$ appearing in (1) is the covariance between $u$ and $Y$.) Something that is close to the conditional mean (1), but that does not depend on the fixed effects, is
$$\hat{u} = GZ'V^{-1}\left(Y - \hat{Y}^{*}\right) \tag{2}$$
where $\hat{Y}^{*}$ is the generalized least squares (best linear unbiased) estimate of the mean of $Y$,
$$\hat{Y}^{*} = X\left(X'V^{-1}X\right)^{-}X'V^{-1}Y$$
The predictor (2) turns out to be the Best Linear Unbiased Predictor of $u$, and if we temporarily abbreviate
$$B = X\left(X'V^{-1}X\right)^{-}X'V^{-1} \quad \text{and} \quad P = V^{-1}\left(I - B\right) \tag{3}$$
this can be written as
$$\hat{u} = GZ'V^{-1}\left(Y - BY\right) = GZ'V^{-1}\left(I - B\right)Y = GZ'PY \tag{4}$$
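Purely as an illustration of (3) and (4), here is a small Python/NumPy sketch that builds $B$, $P$, and $\hat{u}$ for an invented one-way random effects layout. The dimensions, variance values, and simulated data are assumptions introduced only for this sketch; they are not part of the notes.

```python
import numpy as np

rng = np.random.default_rng(0)

# An invented one-way random effects layout: 3 groups, 4 observations per group.
n_groups, n_per = 3, 4
n = n_groups * n_per
X = np.ones((n, 1))                                   # intercept-only fixed effects
Z = np.kron(np.eye(n_groups), np.ones((n_per, 1)))    # group-membership indicators
G = 2.0 * np.eye(n_groups)                            # Var(u), treated as known here
R = 1.0 * np.eye(n)                                   # Var(eps), treated as known here
V = Z @ G @ Z.T + R                                   # V = ZGZ' + R

# Simulate a response from Y = X beta + Z u + eps
beta = np.array([5.0])
u = rng.multivariate_normal(np.zeros(n_groups), G)
eps = rng.multivariate_normal(np.zeros(n), R)
Y = X @ beta + Z @ u + eps

# The matrices of display (3)
Vinv = np.linalg.inv(V)
B = X @ np.linalg.pinv(X.T @ Vinv @ X) @ X.T @ Vinv   # B = X (X'V^{-1}X)^- X'V^{-1}
P = Vinv @ (np.eye(n) - B)                            # P = V^{-1} (I - B)

# BLUP of u as in (4): u_hat = G Z' P Y
u_hat = G @ Z.T @ P @ Y
print("u    :", u)
print("u_hat:", u_hat)
```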
We consider here predictions based on $\hat{u}$ and the problems of quoting appropriate “precision” measures for them.
To begin with the $u$ vector itself, $\hat{u}$ is an obvious approximation, and a precision of prediction should be related to the variability in the difference
$$u - \hat{u} = u - GZ'PY = u - GZ'P\left(X\beta + Zu + \epsilon\right) = u - GZ'PZu - GZ'P\epsilon$$
(the $X\beta$ term drops out because $PX = 0$). This random vector has mean $0$ and covariance matrix
$$\mathrm{Var}\left(u - \hat{u}\right) = \left(I - GZ'PZ\right)G\left(I - GZ'PZ\right)' + GZ'PRP'ZG = G - GZ'PZG \tag{5}$$
(This last equality is not obvious to me, but is what McCulloch and Searle promise on page 170 of their book.)
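As a quick numerical sanity check of the equality claimed in (5), the following sketch (using the same sort of invented one-way layout as above; none of it is from the notes or from McCulloch and Searle) compares the two matrix expressions:

```python
import numpy as np

# Same sort of invented one-way layout as in the earlier sketch
n_groups, n_per = 3, 4
n = n_groups * n_per
X = np.ones((n, 1))
Z = np.kron(np.eye(n_groups), np.ones((n_per, 1)))
G, R = 2.0 * np.eye(n_groups), 1.0 * np.eye(n)
V = Z @ G @ Z.T + R

Vinv = np.linalg.inv(V)
B = X @ np.linalg.pinv(X.T @ Vinv @ X) @ X.T @ Vinv
P = Vinv @ (np.eye(n) - B)

# Two expressions for Var(u - u_hat) appearing in (5)
A = G @ Z.T @ P @ Z                                   # GZ'PZ
I_g = np.eye(n_groups)
sandwich = (I_g - A) @ G @ (I_g - A).T + G @ Z.T @ P @ R @ P.T @ Z @ G
short_form = G - G @ Z.T @ P @ Z @ G
print(np.allclose(sandwich, short_form))              # should print True
```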
Now $\hat{u}$ in (4) is not available unless one knows the covariance matrices $G$ and $V$. If one has estimated variance components and hence has estimates of $G$ and $V$ (and, for that matter, of $R$, $B$, and $P$), the approximate BLUP
$$\hat{\hat{u}} = \hat{G}Z'\hat{P}Y$$
may be used. A way of making a crude approximation to a measure of precision of the approximate BLUP (as a predictor of $u$) is to plug estimates into the relationship (5) to produce
$$\widehat{\mathrm{Var}}\left(u - \hat{\hat{u}}\right) = \hat{G} - \hat{G}Z'\hat{P}Z\hat{G}$$
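A sketch of this plug-in step is below. The numbers standing in for estimated variance components are simply invented here; nothing is implied about how they would actually be obtained (REML, ANOVA, etc.).

```python
import numpy as np

rng = np.random.default_rng(2)
n_groups, n_per = 3, 4
n = n_groups * n_per
X = np.ones((n, 1))
Z = np.kron(np.eye(n_groups), np.ones((n_per, 1)))
Y = 5.0 + rng.normal(size=n)                          # stand-in data

# Invented values standing in for estimated variance components
sigma2_u_hat, sigma2_e_hat = 1.7, 0.9
G_hat = sigma2_u_hat * np.eye(n_groups)
R_hat = sigma2_e_hat * np.eye(n)
V_hat = Z @ G_hat @ Z.T + R_hat

Vinv_hat = np.linalg.inv(V_hat)
B_hat = X @ np.linalg.pinv(X.T @ Vinv_hat @ X) @ X.T @ Vinv_hat
P_hat = Vinv_hat @ (np.eye(n) - B_hat)

u_hat_hat = G_hat @ Z.T @ P_hat @ Y                            # approximate BLUP of u
var_plugin = G_hat - G_hat @ Z.T @ P_hat @ Z @ G_hat           # plug-in version of (5)
print(u_hat_hat)
print(np.diag(var_plugin))                                     # crude prediction "variances"
```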
Consider now the prediction of a quantity
$$l = c'\beta + s'u$$
for an estimable $c'\beta$ (estimability has nothing to do with the covariance structure of $Y$, and so “estimability” means here what it always does). As it turns out, if $c' = a'X$, the BLUP of $l$ is
$$\hat{l} = a'\hat{Y}^{*} + s'\hat{u} = a'BY + s'GZ'PY = \left(a'B + s'GZ'P\right)Y \tag{6}$$
To quantify the precision of this as a predictor of $l$, we must consider the random variable
$$\hat{l} - l$$
The variance of this is the unpleasant (but not impossible) quantity
$$\mathrm{Var}\left(\hat{l} - l\right) = a'X\left(X'V^{-1}X\right)^{-}X'a + s'\left(G - GZ'PZG\right)s - 2a'BZGs \tag{7}$$
(This variance is from page 256 of McCulloch and Searle; note that with $a = 0$ it reduces to the $s'\left(G - GZ'PZG\right)s$ implied by (5).)
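A sketch of (6) and (7) for a hypothetical target $l = c'\beta + s'u$ follows; the particular choices of $a$ and $s$, and the toy layout, are again assumptions introduced only for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_groups, n_per = 3, 4
n = n_groups * n_per
X = np.ones((n, 1))
Z = np.kron(np.eye(n_groups), np.ones((n_per, 1)))
G, R = 2.0 * np.eye(n_groups), 1.0 * np.eye(n)
V = Z @ G @ Z.T + R
Vinv = np.linalg.inv(V)
XtVinvX_ginv = np.linalg.pinv(X.T @ Vinv @ X)
B = X @ XtVinvX_ginv @ X.T @ Vinv
P = Vinv @ (np.eye(n) - B)

Y = 5.0 + rng.normal(size=n)                  # stand-in data

# Target l = c'beta + s'u with c' = a'X; a and s are arbitrary illustrative choices.
a = np.ones(n) / n                            # so c'beta is the overall mean
s = np.array([1.0, 0.0, 0.0])                 # s'u picks off the first random effect

# BLUP of l as in (6)
l_hat = (a @ B + s @ G @ Z.T @ P) @ Y

# Prediction variance as in (7)
var_l = (a @ X @ XtVinvX_ginv @ X.T @ a
         + s @ (G - G @ Z.T @ P @ Z @ G) @ s
         - 2 * a @ B @ Z @ G @ s)
print(l_hat, var_l)
```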
Now $\hat{l}$ of (6) is not available unless one knows the covariance matrices. But with estimates of variance components and corresponding matrices, what is available as an approximation to $\hat{l}$ is
$$\hat{\hat{l}} = \left(a'\hat{B} + s'\hat{G}Z'\hat{P}\right)Y$$
A way of making a crude approximation to a measure of precision of the approximate BLUP (as a predictor of $l$) is to plug estimates into the relationship (7) to produce
$$\widehat{\mathrm{Var}}\left(\hat{\hat{l}} - l\right) = a'X\left(X'\hat{V}^{-1}X\right)^{-}X'a + s'\left(\hat{G} - \hat{G}Z'\hat{P}Z\hat{G}\right)s - 2a'\hat{B}Z\hat{G}s$$
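A matching sketch of this plug-in version (again with invented stand-ins for the estimated variance components, and the same illustrative $a$ and $s$ as above):

```python
import numpy as np

rng = np.random.default_rng(4)
n_groups, n_per = 3, 4
n = n_groups * n_per
X = np.ones((n, 1))
Z = np.kron(np.eye(n_groups), np.ones((n_per, 1)))
Y = 5.0 + rng.normal(size=n)                  # stand-in data

sigma2_u_hat, sigma2_e_hat = 1.7, 0.9         # invented variance component estimates
G_hat = sigma2_u_hat * np.eye(n_groups)
V_hat = Z @ G_hat @ Z.T + sigma2_e_hat * np.eye(n)
Vinv_hat = np.linalg.inv(V_hat)
XtVinvX_ginv = np.linalg.pinv(X.T @ Vinv_hat @ X)
B_hat = X @ XtVinvX_ginv @ X.T @ Vinv_hat
P_hat = Vinv_hat @ (np.eye(n) - B_hat)

a = np.ones(n) / n
s = np.array([1.0, 0.0, 0.0])

l_hat_hat = (a @ B_hat + s @ G_hat @ Z.T @ P_hat) @ Y          # approximate BLUP of l
var_hat = (a @ X @ XtVinvX_ginv @ X.T @ a
           + s @ (G_hat - G_hat @ Z.T @ P_hat @ Z @ G_hat) @ s
           - 2 * a @ B_hat @ Z @ G_hat @ s)                    # plug-in version of (7)
print(l_hat_hat, var_hat)
```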