Solutions to the Midterm Exam

ECON 837
Prof. Simon Woodcock, Spring 2014
1. (a) We know
\[
E[\hat{\mu}] = E\left[ \sum_{i=1}^{n} c_i X_i \right] = \sum_{i=1}^{n} c_i E[X_i] = \sum_{i=1}^{n} c_i \, i\mu = \mu \sum_{i=1}^{n} c_i i ,
\]
so $\hat{\mu}$ is unbiased provided $\sum_{i=1}^{n} c_i i = 1$.
(b) The sampling variance is
\[
Var[\hat{\mu}] = Var\left[ \sum_{i=1}^{n} c_i X_i \right] = \sum_{i=1}^{n} c_i^2 Var[X_i] + 2 \sum_{i=1}^{n} \sum_{j \neq i} c_i c_j Cov[X_i, X_j] = \sum_{i=1}^{n} c_i^2 Var[X_i] = \sigma^2 \sum_{i=1}^{n} c_i^2 ,
\]
because the $X_i$ are uncorrelated, so the covariance terms vanish.
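A quick Monte Carlo sketch of (a) and (b), assuming for illustration that the $X_i$ are independent normals with $E[X_i] = i\mu$ and $Var[X_i] = \sigma^2$ (the specific distribution and the values of $n$, $\mu$, $\sigma$ below are illustrative, not taken from the exam):

```python
import numpy as np

# Monte Carlo sketch of parts (a)-(b): assumes a particular DGP
# (independent X_i with E[X_i] = i*mu and Var[X_i] = sigma^2, here normal).
rng = np.random.default_rng(0)
n, mu, sigma, reps = 5, 2.0, 1.5, 200_000

i = np.arange(1, n + 1)
c = rng.uniform(0.1, 1.0, size=n)
c /= c @ i                      # rescale so that sum_i c_i * i = 1 (unbiasedness condition)

X = mu * i + sigma * rng.standard_normal((reps, n))   # X_i has mean i*mu, variance sigma^2
mu_hat = X @ c

print("mean of mu_hat:", mu_hat.mean(), "(should be close to", mu, ")")
print("var  of mu_hat:", mu_hat.var(), "(formula gives", sigma**2 * (c**2).sum(), ")")
```

Any weight vector rescaled so that $\sum_{i} c_i i = 1$ reproduces $\mu$ on average, with variance $\sigma^2 \sum_{i} c_i^2$.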
(c) The minimum variance unbiased estimator solves
\[
\min_{c_1, \dots, c_n} \ \sigma^2 \sum_{i=1}^{n} c_i^2 \quad \text{s.t.} \quad \sum_{i=1}^{n} c_i i = 1 ,
\]
which implies the Lagrangean
\[
L = \sigma^2 \sum_{i=1}^{n} c_i^2 + \lambda \left[ 1 - \sum_{i=1}^{n} c_i i \right]
\]
with FOCs:
\[
2 \sigma^2 c_i - \lambda i = 0 \quad \text{for } i = 1, 2, \dots, n
\]
\[
1 - \sum_{i=1}^{n} c_i i = 0 .
\]
We need to eliminate $\lambda$. From the first $n$ FOCs we see that for each $i$,
\[
\lambda i = 2 \sigma^2 c_i \ \Rightarrow\ \lambda i^2 = 2 \sigma^2 c_i i \ \Rightarrow\ \lambda \sum_{i} i^2 = 2 \sigma^2 \sum_{i} c_i i = 2 \sigma^2 \ \Rightarrow\ \lambda = \frac{2 \sigma^2}{\sum_{i} i^2}
\]
upon substituting in the constraint $\sum_{i} c_i i = 1$. Therefore, for each $i$ we have
\[
c_i = \frac{\lambda i}{2 \sigma^2} \ \Rightarrow\ c_i = \frac{i}{\sum_{i} i^2} = \frac{6 i}{n (n+1) (2n+1)} .
\]
Of course we need to check the second order conditions as well to be sure we're at a minimum. You can verify that the (bordered) Hessian of the Lagrangean is
\[
H = \begin{bmatrix} 2 \sigma^2 I_n & -\mathbf{n} \\ -\mathbf{n}' & 0 \end{bmatrix}
\]
where $\mathbf{n}' = [1\ 2\ 3\ \cdots\ n]$; the upper-left block $2 \sigma^2 I_n$ is positive definite, so the second order conditions for a constrained minimum are satisfied.
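As a sanity check, the closed-form weights can be compared with a direct numerical solution of the constrained minimization via its KKT system (the choices of $n$ and $\sigma^2$ below are just an illustration):

```python
import numpy as np

# Sketch: solve  min_c sigma^2 * c'c  s.t.  n_vec'c = 1  via the KKT system
# and compare with the closed-form weights c_i = 6i / (n(n+1)(2n+1)).
n, sigma2 = 7, 2.0
i = np.arange(1, n + 1, dtype=float)

# KKT system: [2*sigma2*I, -n_vec; -n_vec', 0] [c; lam] = [0; -1]
KKT = np.block([[2 * sigma2 * np.eye(n), -i[:, None]],
                [-i[None, :],            np.zeros((1, 1))]])
rhs = np.concatenate([np.zeros(n), [-1.0]])
c_num = np.linalg.solve(KKT, rhs)[:n]

c_closed = 6 * i / (n * (n + 1) * (2 * n + 1))
print(np.allclose(c_num, c_closed))   # True: both give the same weights
```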
2. (a) We know that
\[
\hat{\beta} = (X'X)^{-1} X'y
\]
\[
\hat{\gamma} = (Z'Z)^{-1} Z'y .
\]
Since $Z = XG$ and $G$ is non-singular, we have
\[
\hat{\gamma} = \left[ (XG)'(XG) \right]^{-1} (XG)'y = \left[ G'X'XG \right]^{-1} G'X'y = G^{-1} (X'X)^{-1} (G')^{-1} G'X'y = G^{-1} (X'X)^{-1} X'y = G^{-1} \hat{\beta} .
\]
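A brief numerical sketch of this result, where the design matrix, $G$, and coefficients are arbitrary illustrative choices and $\hat{\gamma}$ denotes the coefficient vector from regressing $y$ on $Z$:

```python
import numpy as np

# Numerical sketch of 2(a): with Z = XG for non-singular G, the OLS coefficients
# satisfy gamma_hat = G^{-1} beta_hat.  X, G, y below are arbitrary illustrative draws.
rng = np.random.default_rng(1)
n, k = 200, 3
X = rng.standard_normal((n, k))
G = rng.standard_normal((k, k)) + 5 * np.eye(k)   # keep G comfortably non-singular
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(n)

Z = X @ G
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
gamma_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)

print(np.allclose(gamma_hat, np.linalg.solve(G, beta_hat)))   # True
```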
(b)
\[
Var[\hat{\gamma}|X] = E\left[ \left( \hat{\gamma} - E[\hat{\gamma}|X] \right) \left( \hat{\gamma} - E[\hat{\gamma}|X] \right)' \Big| X \right] = E\left[ \left( G^{-1}\hat{\beta} - E[G^{-1}\hat{\beta}|X] \right) \left( G^{-1}\hat{\beta} - E[G^{-1}\hat{\beta}|X] \right)' \Big| X \right]
\]
\[
= E\left[ G^{-1} \left( \hat{\beta} - E[\hat{\beta}|X] \right) \left( \hat{\beta} - E[\hat{\beta}|X] \right)' (G^{-1})' \Big| X \right] = G^{-1} E\left[ \left( \hat{\beta} - E[\hat{\beta}|X] \right) \left( \hat{\beta} - E[\hat{\beta}|X] \right)' \Big| X \right] (G^{-1})'
\]
\[
= G^{-1} Var[\hat{\beta}|X] (G^{-1})' .
\]
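Numerically, under homoskedastic errors the usual formulas $Var[\hat{\beta}|X] = \sigma^2 (X'X)^{-1}$ and $Var[\hat{\gamma}|X] = \sigma^2 (Z'Z)^{-1}$ should be related exactly as above; the design and $\sigma^2$ below are illustrative:

```python
import numpy as np

# Sketch of 2(b): check Var[gamma_hat|X] = G^{-1} Var[beta_hat|X] (G^{-1})'
# using sigma^2 (X'X)^{-1} and sigma^2 (Z'Z)^{-1}.  sigma2, X and G are illustrative.
rng = np.random.default_rng(2)
n, k, sigma2 = 150, 4, 2.5
X = rng.standard_normal((n, k))
G = rng.standard_normal((k, k)) + 4 * np.eye(k)
Z = X @ G

V_beta = sigma2 * np.linalg.inv(X.T @ X)
V_gamma = sigma2 * np.linalg.inv(Z.T @ Z)
G_inv = np.linalg.inv(G)

print(np.allclose(V_gamma, G_inv @ V_beta @ G_inv.T))   # True
```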
(c) The residual vector in the regression of $y$ on $X$ is $e_X = M_X y = \left( I - X(X'X)^{-1}X' \right) y$. The residual vector in the regression of $y$ on $Z$ is
\[
e_Z = M_Z y = \left( I - Z(Z'Z)^{-1}Z' \right) y = \left( I - XG \left( G'X'XG \right)^{-1} G'X' \right) y = \left( I - XGG^{-1}(X'X)^{-1}(G')^{-1}G'X' \right) y = \left( I - X(X'X)^{-1}X' \right) y = e_X
\]
so they are the same.
(d) We know that $R^2 = 1 - e'e / y'Jy$. Since we have the same dependent variable in both regressions ($y$) and the same residual vector ($e = e_X = e_Z$), $R^2$ is the same in both regressions.
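A short check of (c) and (d) on simulated data, again with an arbitrary illustrative design (an intercept is included so the usual centered $R^2$ applies):

```python
import numpy as np

# Sketch of 2(c)-(d): residuals (and hence R^2) are identical in the regressions of
# y on X and of y on Z = XG.  Data below are arbitrary illustrative draws.
rng = np.random.default_rng(3)
n, k = 120, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, k - 1))])
G = rng.standard_normal((k, k)) + 3 * np.eye(k)
y = X @ np.array([0.5, 1.0, -1.0]) + rng.standard_normal(n)
Z = X @ G

def residuals(W, y):
    b, *_ = np.linalg.lstsq(W, y, rcond=None)
    return y - W @ b

e_X, e_Z = residuals(X, y), residuals(Z, y)
r2 = lambda e: 1 - e @ e / ((y - y.mean()) @ (y - y.mean()))

print(np.allclose(e_X, e_Z))          # True: same residual vector
print(r2(e_X), r2(e_Z))               # identical R^2
```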
(a) Notice that $E[\varepsilon_i|X] = (-1)(2/3) + (2)(1/3) = 0$. Hence our linearity assumption is satisfied, because $E[Y_i|X_i] = \alpha + \beta X_i + E[\varepsilon_i|X_i] = \alpha + \beta X_i$, and it follows that the OLS estimator of $\alpha$ and $\beta$ is unbiased. Our spherical errors assumption is also satisfied because the variance of $\varepsilon_i$ doesn't depend on $X_i$ and we are told that $E[\varepsilon_i \varepsilon_j|X] = 0$ for $i \neq j$. Thus the assumptions of the GMT are satisfied, and we know that OLS is BLUE.
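A two-line simulation of the error distribution (value $-1$ with probability $2/3$, value $2$ with probability $1/3$, drawn independently of $X_i$) confirms the zero mean and constant variance:

```python
import numpy as np

# Sketch for part (a): the error takes -1 with probability 2/3 and +2 with
# probability 1/3, independently of X_i, so E[eps|X] = 0 and Var[eps|X] is constant.
rng = np.random.default_rng(4)
eps = rng.choice([-1.0, 2.0], p=[2/3, 1/3], size=1_000_000)

print(eps.mean())                       # close to (-1)*(2/3) + 2*(1/3) = 0
print(eps.var(), 1 * 2/3 + 4 * 1/3)     # close to E[eps^2] = 2, independent of X
```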
(b) Notice that given what we are told about the distribution of $X_i$ and $\varepsilon_i$, only 4 values of $Y_i$ are possible. When $X_i = 0$, $Y_i = \alpha - 1$ with $\Pr = 2/3$ and $Y_i = \alpha + 2$ with $\Pr = 1/3$; and when $X_i = 1$, $Y_i = \alpha + \beta - 1$ with $\Pr = 2/3$ and $Y_i = \alpha + \beta + 2$ with $\Pr = 1/3$. With this knowledge, we can recover the exact values of $\alpha$ and $\beta$ from the data. These are better "estimates" than given by OLS because they are the true values, and hence have zero variance. For example, we could find the largest value of $Y_i$ in the sample that has $X_i = 0$ (call it $Y^{\max}_{X=0}$) and recover $\alpha = Y^{\max}_{X=0} - 2$. Then find the largest value of $Y_i$ in the sample that has $X_i = 1$ (call it $Y^{\max}_{X=1}$) and recover $\beta = Y^{\max}_{X=1} - 2 - \alpha = Y^{\max}_{X=1} - Y^{\max}_{X=0}$. Other approaches will also work.
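A small simulation sketch of this recovery argument; the Bernoulli design for $X_i$ and the true values $\alpha = 1$, $\beta = 2$ are illustrative assumptions, not taken from the exam:

```python
import numpy as np

# Sketch of part (b): recover alpha and beta exactly from the sample maxima,
# assuming X_i ~ Bernoulli and the two-point error above; alpha = 1, beta = 2
# are illustrative true values.
rng = np.random.default_rng(5)
alpha, beta, n = 1.0, 2.0, 500

X = rng.integers(0, 2, size=n).astype(float)
eps = rng.choice([-1.0, 2.0], p=[2/3, 1/3], size=n)
Y = alpha + beta * X + eps

alpha_rec = Y[X == 0].max() - 2          # Y_max at X=0 equals alpha + 2
beta_rec = Y[X == 1].max() - Y[X == 0].max()

print(alpha_rec, beta_rec)               # exactly 1.0 and 2.0
```

With any reasonable sample size, both support points of the error appear in each group, so the recovered values are exact rather than approximate.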
(c) Although OLS will give us unbiased estimates, the estimates may depart from the true values because they are only right "on average." In contrast, we get the true values via the approach in part (b). And while the OLS estimator is BLUE, we can do "better" in this case by exploiting what we know about the error structure and using a nonlinear estimator (e.g., the approach in part (b)). The key here is that the errors (and $Y_i$) only take values from a finite set, and we can use this to solve for the true values exactly.
3. The constrained least squares estimator is
\[
b^* = \hat{\beta} + (X'X)^{-1} R' \left[ R (X'X)^{-1} R' \right]^{-1} \left( r - R\hat{\beta} \right)
\]
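A direct implementation of this formula; the design, the restriction $R\beta = r$, and all numerical values below are illustrative:

```python
import numpy as np

# Sketch: compute the constrained least squares estimator
#   b* = beta_hat + (X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} (r - R beta_hat)
# for an illustrative restriction (here: beta_1 + beta_2 = 1).
rng = np.random.default_rng(6)
n, k = 100, 3
X = rng.standard_normal((n, k))
y = X @ np.array([0.6, 0.4, -1.0]) + rng.standard_normal(n)

R = np.array([[1.0, 1.0, 0.0]])   # q x k restriction matrix (q = 1 here)
r = np.array([1.0])

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
A = R @ XtX_inv @ R.T
b_star = beta_hat + XtX_inv @ R.T @ np.linalg.solve(A, r - R @ beta_hat)

print(R @ b_star)                  # equals r: the restriction holds exactly
```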
where $\hat{\beta}$ is the unconstrained estimator. Denoting the residuals from the unconstrained model by $e$ and from the constrained model by $e^*$, we have
\[
e^{*\prime} e^* = \left( y - Xb^* \right)' \left( y - Xb^* \right)
\]
\[
= \left[ y - X\hat{\beta} - X(X'X)^{-1}R' \left[ R(X'X)^{-1}R' \right]^{-1} \left( r - R\hat{\beta} \right) \right]' \left[ y - X\hat{\beta} - X(X'X)^{-1}R' \left[ R(X'X)^{-1}R' \right]^{-1} \left( r - R\hat{\beta} \right) \right]
\]
\[
= e'e + \left( r - R\hat{\beta} \right)' \left[ R(X'X)^{-1}R' \right]^{-1} R(X'X)^{-1} X'X (X'X)^{-1} R' \left[ R(X'X)^{-1}R' \right]^{-1} \left( r - R\hat{\beta} \right)
\]
since $X'e = e'X = 0$. Continuing,
\[
e^{*\prime} e^* = e'e + \left( r - R\hat{\beta} \right)' \left[ R(X'X)^{-1}R' \right]^{-1} R(X'X)^{-1} R' \left[ R(X'X)^{-1}R' \right]^{-1} \left( r - R\hat{\beta} \right)
\]
\[
= e'e + \left( r - R\hat{\beta} \right)' \left[ R(X'X)^{-1}R' \right]^{-1} \left( r - R\hat{\beta} \right) .
\]
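A numerical check of this decomposition, using the same illustrative design and restriction as in the previous snippet:

```python
import numpy as np

# Sketch: verify numerically that
#   e*'e* = e'e + (r - R b_hat)' [R (X'X)^{-1} R']^{-1} (r - R b_hat).
rng = np.random.default_rng(6)
n, k = 100, 3
X = rng.standard_normal((n, k))
y = X @ np.array([0.6, 0.4, -1.0]) + rng.standard_normal(n)
R, r = np.array([[1.0, 1.0, 0.0]]), np.array([1.0])

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
A = R @ XtX_inv @ R.T
b_star = beta_hat + XtX_inv @ R.T @ np.linalg.solve(A, r - R @ beta_hat)

e, e_star = y - X @ beta_hat, y - X @ b_star
lhs = e_star @ e_star
rhs = e @ e + (r - R @ beta_hat) @ np.linalg.solve(A, r - R @ beta_hat)
print(np.isclose(lhs, rhs))   # True
```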
We need to take the expectation of this. We know $E[e'e] = (n - k)\sigma^2$, but what about the second term? Since $r = R\beta$, we have $r - R\hat{\beta} = -R(X'X)^{-1}X'\varepsilon$, so
\[
E\left[ \left( r - R\hat{\beta} \right)' \left[ R(X'X)^{-1}R' \right]^{-1} \left( r - R\hat{\beta} \right) \right] = E\left[ \varepsilon' X(X'X)^{-1}R' \left[ R(X'X)^{-1}R' \right]^{-1} R(X'X)^{-1}X'\varepsilon \right]
\]
\[
= E\left[ tr\left( \varepsilon' X(X'X)^{-1}R' \left[ R(X'X)^{-1}R' \right]^{-1} R(X'X)^{-1}X'\varepsilon \right) \right] \qquad \text{since a scalar equals its trace}
\]
\[
= E\left[ tr\left( \left[ R(X'X)^{-1}R' \right]^{-1} R(X'X)^{-1}X' \varepsilon\varepsilon' X(X'X)^{-1}R' \right) \right] \qquad \text{cyclic permutation}
\]
\[
= tr\left( E\left[ \left[ R(X'X)^{-1}R' \right]^{-1} R(X'X)^{-1}X' \varepsilon\varepsilon' X(X'X)^{-1}R' \right] \right) \qquad \text{the trace is a linear operator}
\]
\[
= tr\left( \left[ R(X'X)^{-1}R' \right]^{-1} R(X'X)^{-1}X' E\left[ \varepsilon\varepsilon' \right] X(X'X)^{-1}R' \right) = \sigma^2 \, tr\left( \left[ R(X'X)^{-1}R' \right]^{-1} R(X'X)^{-1} X'X (X'X)^{-1} R' \right)
\]
\[
= \sigma^2 \, tr\left( \left[ R(X'X)^{-1}R' \right]^{-1} R(X'X)^{-1} R' \right) = \sigma^2 \, tr\left( I_q \right) = \sigma^2 q
\]
where $q$ is the number of restrictions (that is, $R$ is $q \times k$). So,
\[
E\left[ e^{*\prime} e^* \right] = (n - k)\sigma^2 + \sigma^2 q = (n - k + q)\sigma^2
\]
and an unbiased estimator of $\sigma^2$ is
\[
s^{*2} = \frac{e^{*\prime} e^*}{n - k + q} = \frac{\left( y - Xb^* \right)' \left( y - Xb^* \right)}{n - k + q} .
\]
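Finally, a Monte Carlo sketch of the unbiasedness of $s^{*2}$ when the restriction holds in the data-generating process; the design, restriction, and parameter values are illustrative:

```python
import numpy as np

# Monte Carlo sketch: with r = R*beta true in the DGP, the average of
# e*'e*/(n - k + q) should be close to sigma^2.  All numerical values are illustrative.
rng = np.random.default_rng(7)
n, k, sigma = 60, 3, 1.3
beta = np.array([0.6, 0.4, -1.0])
R, r = np.array([[1.0, 1.0, 0.0]]), np.array([1.0])   # beta_1 + beta_2 = 1 holds in the DGP
q = R.shape[0]

X = rng.standard_normal((n, k))
XtX_inv = np.linalg.inv(X.T @ X)
A = R @ XtX_inv @ R.T

s2_star = []
for _ in range(20_000):
    y = X @ beta + sigma * rng.standard_normal(n)
    beta_hat = XtX_inv @ X.T @ y
    b_star = beta_hat + XtX_inv @ R.T @ np.linalg.solve(A, r - R @ beta_hat)
    e_star = y - X @ b_star
    s2_star.append(e_star @ e_star / (n - k + q))

print(np.mean(s2_star), sigma**2)   # the two should be close
```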