
Stat 511 HW#2 Spring 2003
1. (Koehler) Consider the matrices

       A = [ 4      4.001 ]    and    B = [ 4      4.001    ]
           [ 4.001  4.002 ]               [ 4.001  4.002001 ]

Obviously, these matrices are nearly identical. Use R and compute the determinants and
inverses of these matrices. (Note that A⁻¹ ≈ −3B⁻¹ even though the original two matrices
are nearly the same. This shows that small changes in the elements of nearly singular
matrices can have big effects on some matrix operations.)
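
A minimal R sketch of these computations (the entries are exactly those displayed above):

   A <- matrix(c(4, 4.001, 4.001, 4.002), nrow = 2, byrow = TRUE)
   B <- matrix(c(4, 4.001, 4.001, 4.002001), nrow = 2, byrow = TRUE)
   det(A); det(B)            # determinants of the two matrices
   solve(A); solve(B)        # inverses
   solve(A) + 3 * solve(B)   # nearly the zero matrix, i.e. solve(A) is about -3*solve(B)
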
2. (Koehler) Use the eigen() function in R to compute the eigenvalues and
eigenvectors of

       V = [  3  −1   1 ]
           [ −1   5  −1 ]
           [  1  −1   3 ]

Then use R to find an "inverse square root" of this matrix. That is, find a symmetric
matrix W such that WW = V⁻¹. See slides 87-89 and 91 of Koehler's notes for help with this.
Koehler's Result 1.12 is Christensen's Theorem B.19.
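
One way this might look in R is a sketch based on the spectral decomposition: with V = UDU′
(D diagonal and U orthonormal from eigen()), the matrix W = U D^(−1/2) U′ is symmetric and
satisfies WW = V⁻¹.

   V  <- matrix(c( 3, -1,  1,
                  -1,  5, -1,
                   1, -1,  3), nrow = 3, byrow = TRUE)
   eV <- eigen(V)                  # eigenvalues (eV$values) and eigenvectors (eV$vectors)
   U  <- eV$vectors
   W  <- U %*% diag(1 / sqrt(eV$values)) %*% t(U)   # symmetric "inverse square root"
   W %*% W                         # should agree with solve(V), i.e. the inverse of V
   solve(V)
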
3. In the context of Christensen's Example 1.0.2 and the "data vector" used in Problem 6
of HW1, use R and compute all of Ŷ, Y − Ŷ, Ŷ′(Y − Ŷ), Y′Y, Ŷ′Ŷ, and (Y − Ŷ)′(Y − Ŷ).
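
A possible sketch of the computations follows. Two assumptions: the matrix X below is a
6 × 4 one-way ANOVA design matrix of the kind used in Example 1.0.2 (substitute the exact
matrix from Christensen if it differs), and y is only a placeholder for the HW1 Problem 6
data vector, which is not reproduced here.

   library(MASS)                             # for ginv(), a generalized inverse
   X <- matrix(c(1, 1, 0, 0,
                 1, 1, 0, 0,
                 1, 1, 0, 0,
                 1, 0, 1, 0,
                 1, 0, 0, 1,
                 1, 0, 0, 1), nrow = 6, byrow = TRUE)
   y <- c( ... )                             # placeholder: enter the HW1 Problem 6 data vector
   PX   <- X %*% ginv(t(X) %*% X) %*% t(X)   # projection onto C(X)
   yhat <- PX %*% y                          # Y-hat
   e    <- y - yhat                          # Y - Y-hat
   t(yhat) %*% e                             # Y-hat'(Y - Y-hat), essentially zero
   t(y) %*% y; t(yhat) %*% yhat; t(e) %*% e  # note Y'Y = Y-hat'Y-hat + (Y - Y-hat)'(Y - Y-hat)
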
4. In class Vardeman argued that a linear combination of model parameters c′β is
something that can be sensibly estimated (is the same for all β vectors leading to a given
EY ∈ C(X)) precisely when c′ is a linear combination of rows of X, or equivalently
when c ∈ C(X′). But c ∈ C(X′) exactly when c = PX′ c, that is, when c is its own
projection onto C(X′). Presumably, the same kind of formula given in class for PX can
be given for PX′, namely PX′ = X′(XX′)⁻X. Use R and find this matrix for the
situation of Christensen's Example 1.0.2. Then use this matrix and R to decide which of
the following linear combinations of parameters are "estimable" in this example:
µ, α1, µ − α1, µ + α1, µ + α1 + α2, 2µ + α1 + α2, and α1 − α2
For those that are estimable, find the 1 × 6 row vector c′(X′X)⁻X′ that when multiplied
by Y produces the ordinary least squares estimate of c′β, and say why (in retrospect) the
results are perfectly sensible (at least under the Gauss-Markov model).
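
A possible continuation of the Problem 3 sketch, reusing the X entered there and writing
β = (µ, α1, α2, α3)′ (an assumed ordering of the parameters; adjust it to match the example):

   library(MASS)
   PXp <- t(X) %*% ginv(X %*% t(X)) %*% X      # the proposed projection onto C(X')
   cc  <- c(1, 1, 0, 0)                        # e.g. the c for the combination mu + alpha1
   PXp %*% cc                                  # estimable exactly when this reproduces cc
   t(cc) %*% ginv(t(X) %*% X) %*% t(X)         # the 1 x 6 row vector for an estimable c'beta
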
5. Twice now you've been asked to compute projection matrices in R. It seems like it
would be helpful to "automate" this. Have a look at Chapter 10 of An Introduction to R.
Write a function (call it, say, project) that for an input matrix produces a matrix that
projects vectors onto the column space of the input matrix. Test your function by running
it on both X and X ′ for Christensen's Example 1.0.2.
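
A sketch of one way such a function might look (using ginv() from MASS so that
rank-deficient input matrices are handled):

   library(MASS)
   project <- function(M) {
     # matrix of the orthogonal projection onto the column space of M
     M %*% ginv(t(M) %*% M) %*% t(M)
   }
   # e.g. project(X) and project(t(X)) for the Example 1.0.2 design matrix X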