Autocorrelation

Walter Sosa-Escudero
Econ 471. Econometric Analysis. Spring 2009
April 23, 2009
Time-Series Observations
Consider the following model
Yt = β1 + β2 X2t + · · · + βK XKt + ut ,    t = 1, 2, . . . , T
Here t denotes the period, 1, . . . , T . This is a model for time-series observations.
Example: consumption and income for a given country, in several
periods.
In time-series analysis the ordering of the observations is very important.
The no-serial correlation assumption is
Cov(ut , us ) = 0,    t ≠ s,
meaning that the error terms of two different periods must be
linearly unrelated. In the time-series context this assumption is
known as no autocorrelation.
Remember that serial correlation invalidates the Gauss-Markov
Theorem: OLS is still unbiased (why?) but is not the best linear
unbiased estimator.
A simple model for autocorrelation
Consider, WLOG, the simple model for time series:
Yt = β1 + β2 Xt + ut ,    t = 1, . . . , T
and let the error term be specified by
ut = φut−1 + εt
This structure is known as the linear model with first-order autoregressive serial correlation. We will assume |φ| < 1 and that E(εt ) = 0, V (εt ) = σε² and Cov(εt , εs ) = 0, t ≠ s.
A closer look at ut = φut−1 + εt
It is easy to see that the error term ut is linked over time, implying the presence of serial correlation.
ut is explicitly linked to its immediate past when φ ≠ 0.
No autocorrelation in this setup: φ = 0.
ut = φut−1 + εt is known as a first-order autoregressive (AR(1)) process.
Intuition: suppose φ > 0, then ut tends to be ‘close’ to its previous value ut−1 . Example: weather?
The condition |φ| < 1 is a requirement for stability. What would happen if, say, φ > 1?
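To make the intuition concrete, here is a minimal simulation sketch (not part of the lecture); the values of φ, σε and T below are arbitrary choices for illustration.

```python
import numpy as np

# Illustrative values only: phi, sigma_eps and T are arbitrary, not from the lecture.
rng = np.random.default_rng(0)
T, phi, sigma_eps = 200, 0.8, 1.0

eps = rng.normal(0.0, sigma_eps, size=T)   # epsilon_t: white-noise innovations
u = np.zeros(T)
for t in range(1, T):
    u[t] = phi * u[t - 1] + eps[t]         # u_t = phi * u_{t-1} + eps_t

# With phi > 0, consecutive errors tend to stay close to each other:
print(np.corrcoef(u[1:], u[:-1])[0, 1])    # sample Cor(u_t, u_{t-1}), roughly phi
```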
Testing for AR(1) autocorrelation
The Durbin-Watson test
It is a test for H0 : φ = 0 (no first order autocorrelation) vs.
HA : φ ≠ 0.
The Durbin-Watson test is based on the following statistic:
DW = Σ_{t=2}^{T} (et − et−1)² / Σ_{t=1}^{T} et²
where et are the OLS residuals.
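As a quick illustration (not from the lecture), the statistic is straightforward to compute from the residuals; the data below are made up and the function name is my own.

```python
import numpy as np

def durbin_watson(e):
    """DW = sum_{t=2}^T (e_t - e_{t-1})^2 / sum_{t=1}^T e_t^2, where e are OLS residuals."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Made-up data: regress Y on a constant and X by OLS, then compute DW on the residuals.
rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 1.0 + 2.0 * x + rng.normal(size=100)
Z = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(Z, y, rcond=None)[0]
e = y - Z @ beta
print(durbin_watson(e))   # close to 2 here, since these errors are serially uncorrelated
```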
Intuition behind DW
DW = Σ_{t=2}^{T} (et − et−1)² / Σ_{t=1}^{T} et²

DW = Σ_{t=2}^{T} et² / Σ_{t=1}^{T} et² + Σ_{t=2}^{T} (et−1)² / Σ_{t=1}^{T} et² − 2 Σ_{t=2}^{T} et et−1 / Σ_{t=1}^{T} et²
Suppose that the number of observations is large.
Then the first term should be very close to one.
Second term: same thing
Third term: it can be shown that in the AR(1) structure
Cor(ut , ut−1 ) = Cov(ut , ut−1 ) / V (ut ) = φ
Then the third term is nothing but −2 times an estimate φ̂ of φ, obtained by replacing the ut 's by the et 's.
Then, the following approximation for the Durbin-Watson statistic holds:
DW ≈ 1 + 1 − 2φ̂ = 2(1 − φ̂)
Consider testing H0 : φ = 0 vs HA : φ > 0
When H0 : φ = 0 is true (no autocorrelation), DW ≈ 2
When φ > 0, DW < 2
Then we should accept H0 (no autocorrelation) when DW ≈ 2
and reject (positive serial correlation) when DW is significantly
smaller than 2
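A small numerical check of the approximation (my own sketch, not from the lecture): compute DW and 2(1 − φ̂) from the same residuals and compare, where φ̂ is the ratio Σ et et−1 / Σ et² from the previous slide.

```python
import numpy as np

def dw_and_approximation(e):
    """Return (DW, 2 * (1 - phi_hat)), with phi_hat = sum e_t e_{t-1} / sum e_t^2.
    For a large number of observations the two numbers should be close."""
    e = np.asarray(e, dtype=float)
    dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
    phi_hat = np.sum(e[1:] * e[:-1]) / np.sum(e ** 2)
    return dw, 2.0 * (1.0 - phi_hat)
```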
A problem with DW
Consider H0 : φ = 0 vs HA : φ > 0 (a test for positive
autocorrelation).
According to the previous intuition, we should reject if DW is
significantly smaller than 2.
If we proceed as usual, we would reject if DW < dc where dc is a critical value from the distribution of DW.
Problem: the distribution of DW depends on the data used, so it cannot be tabulated in general: we cannot obtain dc .
Solution: even though we cannot get dc , we can get bounds
dl and du such that
dl ≤ dc ≤ du
Then the procedure for a test for positive serial correlation is as
follows
If DW > du , then DW > dc : accept H0 .
If DW < dl , then DW < dc : reject H0 .
If dl < DW < du we cannot tell if DW < dc or DW > dc :
the test is inconclusive
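The bounds rule can be written as a small helper (an illustrative sketch; dl and du must still be looked up in a Durbin-Watson table for the given T and number of regressors).

```python
def dw_decision(dw, d_l, d_u):
    """Bounds test for H0: phi = 0 vs HA: phi > 0.
    d_l and d_u come from a Durbin-Watson table for the given T and number of regressors."""
    if dw < d_l:
        return "reject H0: evidence of positive serial correlation"
    if dw > d_u:
        return "do not reject H0 (no autocorrelation)"
    return "inconclusive"
```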
Comments on the DW test
1. It is a rather old-fashioned test. It requires a very special table.
2. We need to assume normal errors.
3. The model must include an intercept.
4. It is crucial that the regressors are non-stochastic.
5. The Durbin-Watson procedure is a test for a particular form of serial correlation, the AR(1) process. It is not informative about more general patterns of autocorrelation.
More general autocorrelation: the Breusch-Pagan test
Now consider the following model:
Yt = β1 + β2 Xt + ut ,    t = 1, . . . , T
ut = φ1 ut−1 + φ2 ut−2 + · · · + φp ut−p + εt
where εt satisfies the same assumptions as before.
This is the two-variable model with autocorrelation of order p
(AR(p))
No autocorrelation H0 : φ1 = φ2 = · · · = φp = 0.
The alternative hypothesis is
HA : φ1 ≠ 0 ∨ φ2 ≠ 0 ∨ · · · ∨ φp ≠ 0.
The Breusch-Pagan test for AR(p) autocorrelation
It is very similar to our test for heteroskedasticity
1. Estimate the model by OLS and save the residuals et .
2. Regress et on et−1 , . . . , et−p and Xt , and get the R2 of this auxiliary regression.
3. Test statistic: (T − p)R2 . Under H0 , asymptotically, it has a χ2 distribution with p degrees of freedom.
Intuition: like regressing ut on ut−1 , . . . , ut−p , but replacing the
ut ’s by et ’s. Under H0 the R2 of this auxiliary regression should be
zero, and different from zero under the alternative.
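A minimal sketch of these three steps (my own illustration; the function name and the use of numpy/scipy are assumptions, and the auxiliary regression here drops the first p observations).

```python
import numpy as np
from scipy import stats

def bp_autocorrelation_test(y, x, p):
    """(T - p) * R^2 test for AR(p) serial correlation, following the three steps above.
    y, x: 1-D arrays; p: number of lags. Returns the statistic and its chi2(p) p-value."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    T = len(y)

    # Step 1: OLS of y on a constant and x, save the residuals e_t.
    Z = np.column_stack([np.ones(T), x])
    e = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]

    # Step 2: auxiliary regression of e_t on e_{t-1},...,e_{t-p} and X_t (t = p+1,...,T).
    lags = np.column_stack([e[p - j:T - j] for j in range(1, p + 1)])
    W = np.column_stack([np.ones(T - p), x[p:], lags])
    resid = e[p:] - W @ np.linalg.lstsq(W, e[p:], rcond=None)[0]
    dep = e[p:] - e[p:].mean()
    r2 = 1.0 - resid @ resid / (dep @ dep)

    # Step 3: under H0, (T - p) * R^2 is asymptotically chi-squared with p degrees of freedom.
    stat = (T - p) * r2
    return stat, stats.chi2.sf(stat, df=p)
```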
Comments:
The Breusch-Pagan test does not require X to be non-random
It explores a more general pattern of serial correlation than the DW test, which only covers the AR(1) case.
The choice of p is problematic: intuition suggests going for a large p. But we lose one observation for each lag, so a large p reduces the number of usable observations and the power of the test.
Estimation under Autocorrelation: a modern view
We will see in the next lecture that the presence of
autocorrelation can be handled by using a dynamic regression
model.
Nevertheless, we will explore one possible strategy.
Consider the simple linear model with AR(1) autocorrelation
Yt = β1 + β2 Xt + ut ,    t = 1, . . . , T
ut = φut−1 + εt ,    |φ| < 1
Since the model is valid for every period, the two following
statements hold:
Yt = β1 + β2 Xt + ut
Yt−1 = β1 + β2 Xt−1 + ut−1
Yt = β1 + β2 Xt + ut
Yt−1 = β1 + β2 Xt−1 + ut−1
Now multiply the second equation by φ, subtract it from the first, and use ut − φut−1 = εt to get:
Yt = β1 − φβ1 + φYt−1 + β2 Xt − β2 φXt−1 + ut − φut−1
= β1 − φβ1 + φYt−1 + β2 Xt − β2 φXt−1 + εt
This is a non-linear (in parameters) regression model with no
autocorrelation: we have been able to get rid of serial correlation,
but now we need to estimate a non-linear model.
We will not cover the technical details, but estimation of such a model can be easily handled in any standard computer package.
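For concreteness, one possible way to estimate it (a sketch, not necessarily the routine a given package uses) is to apply general-purpose nonlinear least squares to the quasi-differenced equation; the function name and starting values below are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_ar1_regression(y, x):
    """Nonlinear least squares on the transformed equation
       Y_t = beta1*(1 - phi) + phi*Y_{t-1} + beta2*X_t - beta2*phi*X_{t-1} + eps_t.
    One of several possible approaches; starting values are arbitrary."""
    y, x = np.asarray(y, float), np.asarray(x, float)

    def residuals(theta):
        b1, b2, phi = theta
        return (y[1:] - phi * y[:-1]) - b1 * (1.0 - phi) - b2 * (x[1:] - phi * x[:-1])

    fit = least_squares(residuals, x0=np.zeros(3))   # illustrative starting values
    return dict(zip(["beta1", "beta2", "phi"], fit.x))
```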