ECON 837 - Econometrics
Answer Keys for Assignment #1
1 Theoretical exercises
Exercise 1:
(a) (i) To get β̂, we regress y on X and Z jointly. The joint OLS estimator is:
\[
\begin{pmatrix} \hat{\alpha} \\ \hat{\beta} \end{pmatrix}
= \left[ (X\; Z)'(X\; Z) \right]^{-1} (X\; Z)'y
= \begin{pmatrix} X'X & X'Z \\ Z'X & Z'Z \end{pmatrix}^{-1}
\begin{pmatrix} X'y \\ Z'y \end{pmatrix}
\]
To extract β̂, we use the formula for the partitioned matrix inverse:
\[
\hat{\beta} = \left( Z'Z - Z'X(X'X)^{-1}X'Z \right)^{-1} \left[ -Z'X(X'X)^{-1}X'y + Z'y \right]
= (Z'M_X Z)^{-1} (Z'M_X y)
\]
By assumption, both X and Z are fixed regressors (so no need to condition!):
\[
E(\hat{\beta}) = [Z'M_X Z]^{-1} [Z'M_X] E(y)
= [Z'M_X Z]^{-1} [Z'M_X] (X\alpha + Z\beta)
= \beta
\]
since \(M_X X = 0\), and
\[
Var(\hat{\beta}) = [Z'M_X Z]^{-1} Z'M_X \, Var(y) \, M_X Z [Z'M_X Z]^{-1}
= \sigma^2 [Z'M_X Z]^{-1}
\]
since \(Var(y) = \sigma^2 I_n\).
(ii) To get β̂∗, regress y on Z only. The OLS estimator and its first two moments are:
\[
\hat{\beta}^* = (Z'Z)^{-1} Z'y
\]
\[
E(\hat{\beta}^*) = (Z'Z)^{-1} Z'E(y) = (Z'Z)^{-1} Z'X\alpha + \beta
\]
\[
Var(\hat{\beta}^*) = (Z'Z)^{-1} Z'\, Var(y)\, Z (Z'Z)^{-1} = \sigma^2 (Z'Z)^{-1}
\]
(iii) To get β̂∗∗, first regress y on X to get α̂ = (X′X)⁻¹X′y; second, regress the
associated residuals, (y − Xα̂), on Z to get
\[
\hat{\beta}^{**} = (Z'Z)^{-1} Z'\left( y - X(X'X)^{-1}X'y \right) = (Z'Z)^{-1} Z'M_X y
\]
Then, we have:
\[
E(\hat{\beta}^{**}) = (Z'Z)^{-1} Z'M_X E(y) = (Z'Z)^{-1} (Z'M_X Z)\beta
\]
\[
Var(\hat{\beta}^{**}) = (Z'Z)^{-1} Z'M_X \, Var(y) \, M_X Z (Z'Z)^{-1}
= \sigma^2 (Z'Z)^{-1} (Z'M_X Z)(Z'Z)^{-1}
\]
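For concreteness, here is a minimal simulation sketch of the three estimators. This is a hedged illustration: the sample size, parameter values, and variable names below are our own choices, not part of the assignment.

```python
import numpy as np

# Simulated data for the model y = X*alpha + Z*beta + u, with k1 = k2 = 1.
rng = np.random.default_rng(0)
n, alpha, beta, sigma = 500, 2.0, 1.5, 1.0
x = rng.normal(size=(n, 1))
z = 0.5 * x + rng.normal(size=(n, 1))        # z correlated with x
y = alpha * x + beta * z + sigma * rng.normal(size=(n, 1))

# Annihilator matrix Mx = I - X(X'X)^{-1}X'
Mx = np.eye(n) - x @ np.linalg.solve(x.T @ x, x.T)

# (i)   joint OLS:          beta_hat   = (Z'Mx Z)^{-1} Z'Mx y
beta_hat = np.linalg.solve(z.T @ Mx @ z, z.T @ Mx @ y).item()
# (ii)  y on Z only:        beta_star  = (Z'Z)^{-1} Z'y
beta_star = np.linalg.solve(z.T @ z, z.T @ y).item()
# (iii) residuals Mx y on Z: beta_2step = (Z'Z)^{-1} Z'Mx y
beta_2step = np.linalg.solve(z.T @ z, z.T @ Mx @ y).item()

print(beta_hat, beta_star, beta_2step)   # only (i) is centered at beta
```

Across repeated simulations, only β̂ stays centered at β; β̂∗ and β̂∗∗ drift away from it exactly as the expectations derived above predict.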
(b) From (a), given that k1 = k2 = 1, we have:
\[
E(\hat{\beta}^*) = \frac{\sum_{i=1}^n z_i x_i}{\sum_{i=1}^n z_i^2} \, \alpha + \beta
\]
β and E(β̂∗) do not necessarily share the same sign, because E(β̂∗) is affected by the
value of α (which can be positive or negative) as well as by the ratio
\(\sum_{i=1}^n z_i x_i / \sum_{i=1}^n z_i^2\) (which can also be positive or negative,
due to its numerator).
\[
E(\hat{\beta}^{**}) = \frac{\sum_{i=1}^n \tilde{z}_i^2}{\sum_{i=1}^n z_i^2} \, \beta,
\quad \text{with } \tilde{Z} = M_X Z
\]
β and E(β̂∗∗) always have the same sign, since the ratio
\(\sum_{i=1}^n \tilde{z}_i^2 / \sum_{i=1}^n z_i^2\)
is always positive.
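To make the possible sign reversal concrete, here is a small made-up numeric example (the values are ours, not from the assignment): take β = 1, α = −3, and suppose the data give \(\sum_i z_i x_i / \sum_i z_i^2 = 1/2\). Then
\[
E(\hat{\beta}^*) = \tfrac{1}{2} \times (-3) + 1 = -\tfrac{1}{2} < 0,
\]
so E(β̂∗) has the opposite sign of β > 0.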
(c) Recall that M_X is an orthogonal projection matrix. Hence,
\[
\|M_X Z\|^2 \le \|Z\|^2
\;\Leftrightarrow\; Z'M_X Z \le Z'Z
\;\Leftrightarrow\; \frac{\sigma^2}{Z'M_X Z} \ge \frac{\sigma^2}{Z'Z}
\;\Leftrightarrow\; Var(\hat{\beta}) \ge Var(\hat{\beta}^*)
\]
In addition, we also have:
\[
0 < (Z'M_X Z)(Z'Z)^{-1} \le 1
\;\Leftrightarrow\; \sigma^2 (Z'Z)^{-1}(Z'M_X Z)(Z'Z)^{-1} \le \sigma^2 (Z'Z)^{-1}
\;\Leftrightarrow\; Var(\hat{\beta}^{**}) \le Var(\hat{\beta}^*)
\]
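As a hedged numerical sanity check of this variance ranking in the scalar case (again with made-up data of our own choosing):

```python
import numpy as np

# Check Var(beta_2step) <= Var(beta_star) <= Var(beta_hat) for k1 = k2 = 1.
rng = np.random.default_rng(1)
n, sigma = 200, 1.0
x = rng.normal(size=(n, 1))
z = 0.5 * x + rng.normal(size=(n, 1))
Mx = np.eye(n) - x @ np.linalg.solve(x.T @ x, x.T)

zMz, zz = (z.T @ Mx @ z).item(), (z.T @ z).item()
var_hat = sigma**2 / zMz            # sigma^2 (Z'Mx Z)^{-1}
var_star = sigma**2 / zz            # sigma^2 (Z'Z)^{-1}
var_2step = sigma**2 * zMz / zz**2  # sigma^2 (Z'Z)^{-1}(Z'Mx Z)(Z'Z)^{-1}
assert var_2step <= var_star <= var_hat
```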
(d) In this exercise the true (correct) model has 2 regressors, X and Z, and we are
comparing the bias and variance properties of 3 estimators of the coefficient on Z
only: to some extent, we do not care about the effect of X on y. The results are
as follows:
- (i) the standard OLS estimator β̂ in the full (correct) model is unbiased, but
has a large variance;
- (ii) the OLS estimator β̂∗ in the small (incorrect/misspecified) model is (grossly)
biased, but has a smaller variance than β̂. The intuition is clear: we have omitted X
from the model, so whenever it is correlated with Z it creates a bias; however,
because the small model contains fewer explanatory variables, the estimator is more
precise;
- (iii) the hybrid estimator β̂∗∗ is biased (but not as badly as β̂∗) and has an
even smaller variance. Note that this is not quite the Frisch-Waugh (FW) two-step
method, because Z was not regressed on X: to some extent this estimator fails to
account for the correlation between X and Z, and this is why it is biased in general.
Standard efficiency results do not apply to compare our 3 estimators, because the
Gauss-Markov theorem (BLUE), or its stronger version under normality of the errors,
only compares (linear) estimators that are unbiased.
Exercise 2:
Consider the following model: yi ∼ N(α + βxi, σ²), independently across i.
(a) By assumption, yi ∼ N(α + βxi, σ²) and the yi's are independent. We can then write
a linear model,
\[
y_i = \alpha + \beta x_i + u_i
\]
where \(u_i \sim iid\; N(0, \sigma^2)\) and the x_i are fixed regressors. To find the
OLS estimator of α under the restriction that β = 1, we solve the following
constrained optimization problem:
\[
\min_{\alpha} \sum_{i=1}^n (y_i - \alpha - \beta x_i)^2 \;\; s.t.\;\; \beta = 1
\quad \Leftrightarrow \quad
\min_{\alpha} \sum_{i=1}^n (y_i - \alpha - x_i)^2
\]
Here, you can either write the FOC and solve to get α̂, or apply directly the formula
for the OLS estimator to (yi − xi) regressed on a constant only, to get:
\[
\hat{\alpha} = \frac{1}{n} \sum_{i=1}^n (y_i - x_i)
\]
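A quick check of this formula in code (a sketch with made-up data; the names and values are our own illustrative choices):

```python
import numpy as np

# Restricted OLS of Exercise 2(a): with beta fixed at 1, the estimator of
# alpha is just the sample mean of (y_i - x_i).
rng = np.random.default_rng(2)
n, alpha, sigma = 100, 0.5, 1.0
x = rng.normal(size=n)
y = alpha + 1.0 * x + sigma * rng.normal(size=n)   # restriction true: beta = 1

alpha_hat = np.mean(y - x)   # OLS of (y - x) on a constant only
print(alpha_hat)             # close to alpha = 0.5
```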
(b) - When the restriction is true, β = 1:
\[
E(\hat{\alpha}) = E\left[ \frac{1}{n} \sum_i (y_i - x_i) \right]
= \frac{1}{n} \sum_i E(\alpha + x_i + u_i - x_i) = \alpha
\]
\[
Var(\hat{\alpha}) = \frac{1}{n^2} \sum_i Var(u_i) = \frac{\sigma^2}{n}
\]
- When the restriction is false, β ≠ 1:
\[
E(\hat{\alpha}) = E\left[ \frac{1}{n} \sum_i (y_i - x_i) \right]
= \frac{1}{n} \sum_i E(\alpha + \beta x_i + u_i - x_i)
= \alpha + (\beta - 1) \frac{1}{n} \sum_i x_i
\]
\[
Var(\hat{\alpha}) = \frac{1}{n^2} \sum_i Var(u_i) = \frac{\sigma^2}{n}
\]
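These moments are easy to verify by Monte Carlo. A brief hedged sketch (the replication count and parameter values are our own choices):

```python
import numpy as np

# E(alpha_hat) = alpha + (beta - 1) * mean(x): unbiased iff beta = 1
# (or mean(x) = 0); Var(alpha_hat) = sigma^2 / n in both cases.
rng = np.random.default_rng(3)
n, alpha, sigma, reps = 50, 0.5, 1.0, 20000
x = rng.normal(size=n) + 1.0                 # fixed regressors, nonzero mean

for beta in (1.0, 2.0):                      # restriction true, then false
    u = sigma * rng.normal(size=(reps, n))
    y = alpha + beta * x + u                 # broadcast over replications
    a_hats = (y - x).mean(axis=1)
    print(beta, a_hats.mean(), alpha + (beta - 1) * x.mean(), a_hats.var())
```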
(c) The constrained estimator is biased when the (imposed) restriction is false, and
unbiased otherwise. The unconstrained estimator is always unbiased. The variance of the
constrained estimator is not affected by whether the restriction holds: it is σ²/n in
both cases.
Exercise 3:
- In model 1: yi = βxi + ϵi, and the OLS estimator of β is
\[
\hat{\beta} = \frac{\sum_i x_i y_i}{\sum_i x_i^2}
\]
- In model 2: yi = α + β∗xi + ui, and the OLS estimator of β∗ is
\[
\hat{\beta}^* = \frac{\sum_i x_i y_i - \frac{1}{n} \sum_i x_i \sum_i y_i}
{\sum_i x_i^2 - \frac{1}{n} \left[ \sum_i x_i \right]^2}
\]
It is easy to see that the 2 estimators are equal to each other whenever
\(\sum_i x_i = 0\).
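A quick numerical confirmation (hedged sketch; the data-generating choices are ours): demeaning x enforces \(\sum_i x_i = 0\), and the two slope estimators then coincide.

```python
import numpy as np

# With sum(x_i) = 0, the no-intercept slope equals the with-intercept slope.
rng = np.random.default_rng(4)
n = 100
x = rng.normal(size=n)
x -= x.mean()                                # enforce sum(x) = 0
y = 1.0 + 2.0 * x + rng.normal(size=n)

beta_hat = (x @ y) / (x @ x)                                            # model 1
beta_star = (x @ y - x.sum() * y.sum() / n) / (x @ x - x.sum()**2 / n)  # model 2
assert np.isclose(beta_hat, beta_star)
```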