OLS Asymptotics
YSS2211 Introductory Econometrics
Lecture 10
Ran Song
Yale-NUS College, Singapore
Semester 1, 2022/23
Asymptotics - Introduction
In the last few sections we covered finite-sample (a.k.a. small-sample, or exact) properties of the OLS estimators.
Now we are interested in what happens as n → ∞.
When we added the assumption that the error terms were normally distributed (MLR.6), we were able to derive exact sampling distributions (for finite n) of the OLS estimators.
If the error is not normally distributed, we can no longer
say that the distribution of the t statistic is exactly t or that
the F statistic has an exact F distribution for any sample
size.
Consistency
In estimation, we are typically constrained by the amount of information (in the form of a random sample) we have to compute β̂: we make a sample estimate of an unknown parameter β.
When looking at consistency, we are interested in the thought experiment: if we are not constrained by the availability of data (n → ∞), what happens to the sampling distribution of our estimator?
Consistency - plim
Definition: An estimator θ̂n = f(Y1, . . . , Yn) is a consistent estimator of θ if, for every ϵ > 0,

lim_{n→∞} P{|θ̂n − θ| > ϵ} = 0
If θ̂n is not consistent for θ, then we say it is inconsistent.
When θ̂n is consistent, we also say that θ is the probability
limit of θ̂n , written as
plim(θ̂n ) = θ.
Unlike unbiasedness, which is a feature of an estimator for a given sample size, consistency involves the behavior of the sampling distribution of the estimator as the sample size n gets large.
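A minimal Monte Carlo sketch of this definition (the Exponential population with mean θ = 2, ϵ = 0.1, the replication count, and the sample sizes are illustrative assumptions, not from the slides): take θ̂n to be the sample mean and estimate P{|θ̂n − θ| > ϵ} at several n; it should shrink toward 0.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0         # true mean of an Exponential population with scale 2 (illustrative)
eps = 0.1           # the epsilon in the consistency definition
reps = 2_000        # Monte Carlo replications

for n in (10, 100, 1_000, 10_000):
    # draw `reps` samples of size n and take the sample mean of each one
    theta_hat = rng.exponential(scale=theta, size=(reps, n)).mean(axis=1)
    # Monte Carlo estimate of P{|theta_hat_n - theta| > eps}
    tail_prob = np.mean(np.abs(theta_hat - theta) > eps)
    print(f"n = {n:>6}:  P(|theta_hat - theta| > {eps}) ≈ {tail_prob:.3f}")
```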
Consistency - plim (continued)
We want lim_{n→∞} Var(θ̂n) = 0 and lim_{n→∞} Bias(θ̂n) = 0.
But Var(θ̂n) does not always exist.
We want the sampling distribution to become more
concentrated on the true value θ as n increases.
Consistency - plim (continued)
The earliest and simplest demonstration of consistency is
the Weak Law of Large Numbers.
Let Y1, Y2, . . . , Yn be independent, identically distributed random variables with mean µ and Var(Yi) = σ² < ∞. Then,

plim Ȳ = µ
Notice that in the case of the sample mean,

Bias(Ȳ) = E Ȳ − µ = 0,   Var(Ȳ) = σ²/n

Hence it is easy to show that lim_{n→∞} Var(Ȳ) = 0 and lim_{n→∞} Bias(Ȳ) = 0.
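A quick numerical check of these two facts (a sketch; the Uniform(0, 1) population, replication count, and sample sizes are illustrative assumptions): the simulated bias of Ȳ stays near zero and its simulated variance tracks σ²/n.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2 = 0.5, 1 / 12      # mean and variance of a Uniform(0, 1) population
reps = 20_000                 # Monte Carlo replications

for n in (5, 50, 500):
    ybar = rng.uniform(0.0, 1.0, size=(reps, n)).mean(axis=1)   # sample means
    print(f"n = {n:>3}:  bias of Ybar ≈ {ybar.mean() - mu:+.5f},  "
          f"Var(Ybar) ≈ {ybar.var():.6f}  vs  sigma^2/n = {sigma2 / n:.6f}")
```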
Consistency - OLS Estimator
Theorem 5.1
If MLR.1 to MLR.4 hold, β̂j is a consistent estimator of βj for all j = 1, 2, . . . , k.
Consistency is a minimum quality/property we expect of
our estimators.
“If obtaining more and more data does not generally get us
closer to the parameter value of interest, then we are using
a poor estimation procedure.” - page 169, Wooldridge.
Recall the case of omitted variable bias - we can show that

β̂1 = β1 + [(1/n) ∑_{i=1}^{n} (x1i − x̄1) ui] / [(1/n) ∑_{i=1}^{n} (x1i − x̄1)²]

so plim(β̂1) = β1 + Cov(x1i, ui)/Var(x1i). That is, OLS is inconsistent when a relevant regressor is omitted from the regression model: the omitted variable becomes part of the error term, so Cov(x1i, ui) ≠ 0 and the extra term does not vanish as n grows.
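A simulation sketch of this contrast (the data-generating process, coefficient values, and sample sizes below are illustrative assumptions, not from the slides): when x2 is omitted and correlated with x1, the OLS slope on x1 settles near β1 + β2·Cov(x1, x2)/Var(x1) rather than β1, no matter how large n gets, while including x2 recovers β1.

```python
import numpy as np

rng = np.random.default_rng(2)
beta0, beta1, beta2 = 1.0, 2.0, 3.0     # illustrative true parameters

def slope_on_x1(n, include_x2):
    """OLS coefficient on x1 from a sample of size n, with or without x2 in the model."""
    x1 = rng.normal(size=n)
    x2 = 0.5 * x1 + rng.normal(size=n)  # x2 is correlated with x1: Cov(x1, x2) = 0.5
    u = rng.normal(size=n)
    y = beta0 + beta1 * x1 + beta2 * x2 + u
    cols = [np.ones(n), x1, x2] if include_x2 else [np.ones(n), x1]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return coef[1]

for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9}:  with x2 ≈ {slope_on_x1(n, True):.3f}  (beta1 = 2),  "
          f"x2 omitted ≈ {slope_on_x1(n, False):.3f}  (beta1 + 3*0.5 = 3.5)")
```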
Some properties of plim
Suppose plim(Wn) = θ, and g(·) is some continuous function.
1. plim g(Wn ) = g(plim(Wn )).
2. If plim(Tn) = α and plim(Un) = β, where β ≠ 0, then
a) plim(Tn/Un) = α/β
b) plim(Tn + Un) = α + β
c) plim(Tn · Un) = αβ
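A small numerical illustration of rules 2(a)-(c), as a sketch (the two populations with means α = 3 and β = 2 are illustrative assumptions, not from the slides): with Tn and Un taken to be independent sample means, the ratio, sum, and product settle near α/β = 1.5, α + β = 5, and αβ = 6 as n grows.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta = 3.0, 2.0       # population means, so plim T_n = alpha and plim U_n = beta

for n in (100, 10_000, 1_000_000):
    T_n = rng.normal(loc=alpha, scale=2.0, size=n).mean()   # sample mean with plim alpha
    U_n = rng.normal(loc=beta, scale=1.0, size=n).mean()    # sample mean with plim beta
    print(f"n = {n:>9}:  T/U ≈ {T_n / U_n:.4f}  (α/β = 1.5),  "
          f"T+U ≈ {T_n + U_n:.4f}  (α+β = 5),  T·U ≈ {T_n * U_n:.4f}  (αβ = 6)")
```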
Applying plim rules
Examples
Consider the sample variance estimator s²n = (1/(n−1)) ∑_{i=1}^{n} (Yi − Ȳ)². We know that plim s²n = σ² and E s²n = σ².
Suppose we are interested in an estimator for σ. We know that

E √(s²n) ≠ √(E s²n) = σ

But we know from Rule 1 of plim that

plim √(s²n) = √(plim s²n) = √(σ²) = σ
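A Monte Carlo sketch of this distinction (the normal population with σ = 2, the replication count, and the sample sizes are illustrative assumptions): the average of √(s²n) across replications falls short of σ for small n (the bias), yet √(s²n) still concentrates on σ as n grows (consistency, via Rule 1).

```python
import numpy as np

rng = np.random.default_rng(4)
sigma = 2.0                  # true standard deviation (illustrative)
reps = 20_000                # Monte Carlo replications

for n in (5, 30, 1_000):
    samples = rng.normal(loc=0.0, scale=sigma, size=(reps, n))
    s = samples.std(axis=1, ddof=1)      # sqrt(s_n^2), the usual sample standard deviation
    print(f"n = {n:>5}:  average of sqrt(s_n^2) ≈ {s.mean():.4f}  (sigma = {sigma}),  "
          f"typical |sqrt(s_n^2) - sigma| ≈ {np.abs(s - sigma).mean():.4f}")
```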
Applying plim rules (continued)
Examples
Consider Ȳ as the sample proportion of successes in n trials. It is an estimator of θ, the probability of success in each trial. Suppose we are interested in the odds ratio θ/(1 − θ); a natural estimator is γ̂ = Ȳ/(1 − Ȳ).
This is a biased estimator, since E γ̂ ≠ θ/(1 − θ). But it is a consistent estimator:

plim γ̂ = plim Ȳ / plim(1 − Ȳ) = θ/(1 − θ)
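A simulation sketch of the same point (θ = 0.3, the sample sizes, and the tolerance 0.05 are illustrative assumptions, not from the slides): the Monte Carlo mean of γ̂ overstates θ/(1 − θ) for small n, but γ̂ concentrates on the true odds as n grows.

```python
import numpy as np

rng = np.random.default_rng(5)
theta = 0.3                            # true success probability (illustrative)
true_odds = theta / (1 - theta)        # ≈ 0.4286
reps = 100_000                         # Monte Carlo replications

for n in (10, 100, 10_000):
    ybar = rng.binomial(n, theta, size=reps) / n   # sample proportions across replications
    ok = ybar < 1                                  # gamma_hat is undefined when Ybar = 1
    gamma_hat = ybar[ok] / (1 - ybar[ok])          # odds estimator Ybar / (1 - Ybar)
    print(f"n = {n:>6}:  mean of gamma_hat ≈ {gamma_hat.mean():.4f}  "
          f"(true odds = {true_odds:.4f}),  "
          f"P(|gamma_hat - odds| > 0.05) ≈ {np.mean(np.abs(gamma_hat - true_odds) > 0.05):.3f}")
```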
Asymptotic Normality of the OLS Estimator
Even when the errors u are not normally distributed (MLR.6 does not hold), we can show that the OLS estimators satisfy asymptotic normality.
Theorem
Under the Gauss-Markov Assumptions MLR.1 - MLR.5,

(β̂j − βj) / s.e.(β̂j) ∼ N(0, 1) asymptotically,

where j = 1, 2, . . . , k and s.e.(β̂j) is the usual OLS standard error.
The asymptotic normality theorem says that even if we do not know the distribution of the error terms, β̂j is still approximately normally distributed in large samples, and that the standardized β̂j converges to a standard normal distribution in large samples.
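A simulation sketch of this theorem (the simple regression design, the exponential errors, β1 = 1, and n = 500 are illustrative assumptions, not the slides' example): even though MLR.6 fails, the standardized slope estimate behaves like a standard normal draw, with roughly 5% of the simulated t statistics beyond ±1.96.

```python
import numpy as np

rng = np.random.default_rng(6)
n, reps = 500, 5_000
beta0, beta1 = 0.5, 1.0                           # illustrative true parameters

t_stats = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    u = rng.exponential(scale=1.0, size=n) - 1.0  # skewed, mean-zero errors: MLR.6 fails
    y = beta0 + beta1 * x + u
    X = np.column_stack([np.ones(n), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS fit
    resid = y - X @ coef
    sigma2_hat = resid @ resid / (n - 2)          # usual OLS error-variance estimate
    se_b1 = np.sqrt(sigma2_hat * np.linalg.inv(X.T @ X)[1, 1])   # usual OLS s.e. of the slope
    t_stats[r] = (coef[1] - beta1) / se_b1        # standardized slope estimate

print(f"mean ≈ {t_stats.mean():.3f}, sd ≈ {t_stats.std():.3f}  (standard normal: 0 and 1)")
print(f"P(|t| > 1.96) ≈ {np.mean(np.abs(t_stats) > 1.96):.3f}  (standard normal: 0.05)")
```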