Autocorrelation I

Autocorrelation: Nature and Detection
Aims and Learning Objectives
By the end of this session students should be able to:
• Explain the nature of autocorrelation
• Understand the causes and consequences of autocorrelation
• Perform tests to determine whether a regression model has autocorrelated disturbances
Nature of Autocorrelation
Autocorrelation is a systematic pattern in the errors, which can be either attracting (positive autocorrelation) or repelling (negative autocorrelation).
For efficiency (accurate estimation and prediction), all systematic information needs to be incorporated into the regression model.
Regression Model
$$Y_t = \beta_1 + \beta_2 X_{2t} + \beta_3 X_{3t} + U_t$$

No autocorrelation: $\mathrm{Cov}(U_i, U_j) = 0$, or $E(U_i U_j) = 0$, for $i \neq j$

Autocorrelation: $\mathrm{Cov}(U_i, U_j) \neq 0$, or $E(U_i U_j) \neq 0$, for some $i \neq j$

In general: $E(U_t U_{t-s}) \neq 0$ for $s \neq 0$
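As a minimal numerical sketch (not from the slides; the function name and the use of residuals as stand-ins for the unobservable $U_t$ are illustrative assumptions), the sample analogue of $E(U_t U_{t-s})$ can be computed with NumPy; a value far from zero is informal evidence of autocorrelation:

```python
# A minimal sketch: sample analogue of E(U_t U_{t-s}), computed from
# residuals e (a NumPy array), which proxy for the unobservable U_t.
import numpy as np

def sample_autocovariance(e, s=1):
    """Average of e_t * e_{t-s}; values far from zero suggest autocorrelation."""
    e = e - e.mean()
    return np.mean(e[s:] * e[:-s])
```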
[Figure: three plots of the disturbances $U_t$ against time $t$ — "Attracting" (positive autocorrelation), "Random" (no autocorrelation), and "Repelling" (negative autocorrelation).]
Order of Autocorrelation
$$Y_t = \beta_1 + \beta_2 X_{2t} + \beta_3 X_{3t} + U_t$$

1st order: $U_t = \rho U_{t-1} + \varepsilon_t$
2nd order: $U_t = \rho_1 U_{t-1} + \rho_2 U_{t-2} + \varepsilon_t$
3rd order: $U_t = \rho_1 U_{t-1} + \rho_2 U_{t-2} + \rho_3 U_{t-3} + \varepsilon_t$

where $-1 < \rho < +1$.

We will assume first-order autocorrelation, AR(1): $U_t = \rho U_{t-1} + \varepsilon_t$
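To make the AR(1) definition concrete, here is a hedged simulation sketch (the function name and parameter values such as $\rho = \pm 0.8$ are illustrative assumptions, not from the slides). Disturbances generated with $\rho > 0$ display the attracting pattern of the figure above; $\rho < 0$ gives the repelling pattern:

```python
# A sketch: simulating AR(1) disturbances U_t = rho * U_{t-1} + eps_t.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(rho, n=100):
    """U_t = rho * U_{t-1} + eps_t, with eps_t ~ N(0, 1) and |rho| < 1."""
    eps = rng.normal(size=n)
    u = np.zeros(n)
    for t in range(1, n):
        u[t] = rho * u[t - 1] + eps[t]
    return u

u_attract = simulate_ar1(0.8)   # positive autocorrelation: long runs of one sign
u_random = simulate_ar1(0.0)    # no autocorrelation
u_repel = simulate_ar1(-0.8)    # negative autocorrelation: sign tends to alternate
```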
Causes of Autocorrelation
Direct:
• Inertia or persistence
• Spatial correlation
• Cyclical influences

Indirect:
• Omitted variables
• Functional form
• Seasonality
Consequences of Autocorrelation
1. Ordinary least squares estimators are still linear and unbiased.
2. Ordinary least squares is no longer efficient.
3. The usual formulas give incorrect standard errors for least squares.
4. Confidence intervals and hypothesis tests based on the usual standard errors are wrong.
$$Y_t = \hat{\beta}_1 + \hat{\beta}_2 X_t + e_t$$

Autocorrelated disturbances: $E(e_t e_{t-s}) \neq 0$

Formula for the ordinary least squares variance (no autocorrelation in disturbances):

$$\mathrm{Var}(\hat{\beta}_2) = \frac{\sigma^2}{\sum x_t^2}$$

Formula for the ordinary least squares variance (autocorrelated disturbances):

$$\mathrm{Var}(\hat{\beta}_2) = \frac{\sigma^2}{\sum x_t^2}\left[1 + \frac{2}{\sum x_t^2}\sum_{i<j} \rho^{k}\, x_i x_j\right], \qquad k = j - i$$

Therefore, when errors are autocorrelated, ordinary least squares estimators are inefficient (i.e. not "best").
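A small Monte Carlo sketch can illustrate consequences 2-4 above (all specifics here, such as $\rho = 0.8$, $n = 50$, and the trending regressor, are illustrative assumptions): with positively autocorrelated errors, the usual formula $\sigma^2/\sum x_t^2$ typically understates the true sampling variability of $\hat{\beta}_2$.

```python
# A Monte Carlo sketch: compare the true sampling s.d. of beta2_hat
# with the "usual" OLS standard error under AR(1) errors (rho = 0.8).
import numpy as np

rng = np.random.default_rng(1)
n, rho, reps = 50, 0.8, 2000
x = np.linspace(0.0, 10.0, n)            # trending regressor (assumption)
X = np.column_stack([np.ones(n), x])

beta2_hats, usual_ses = [], []
for _ in range(reps):
    eps = rng.normal(size=n)
    u = np.zeros(n)
    for t in range(1, n):                # AR(1) disturbances
        u[t] = rho * u[t - 1] + eps[t]
    y = 1.0 + 2.0 * x + u
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    s2 = e @ e / (n - 2)                 # usual sigma^2 estimate
    xd = x - x.mean()
    usual_ses.append(np.sqrt(s2 / (xd @ xd)))  # usual sigma^2 / sum x_t^2 formula
    beta2_hats.append(b[1])

print("true sd of beta2_hat:", np.std(beta2_hats))
print("mean 'usual' s.e.   :", np.mean(usual_ses))
# With rho > 0 the usual s.e. is typically far below the true sd,
# so confidence intervals are too narrow and t-tests too sharp.
```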
Detecting Autocorrelation
$$Y_t = \hat{\beta}_1 + \hat{\beta}_2 X_{2t} + \hat{\beta}_3 X_{3t} + e_t$$

The residuals $e_t$ provide proxies for the unobservable disturbances $U_t$.
Preliminary Analysis (Informal Tests)
• Data: autocorrelation occurs most often in time-series data (exceptions: spatial correlation, panel data)
• Graphical examination of residuals: plot $e_t$ against time or against $e_{t-1}$ to see if there is a relation, as in the sketch below
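The graphical check can be sketched as follows (assuming matplotlib is available; the function is an illustrative assumption). The left panel reproduces the $e_t$-against-time plots of the earlier figure, and the right panel plots $e_t$ against $e_{t-1}$, where an upward-sloping cloud suggests attracting errors:

```python
# A plotting sketch for the informal residual checks.
import matplotlib.pyplot as plt

def residual_plots(e):
    """Plot e_t against time, and e_t against e_{t-1}."""
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(e, marker="o")
    ax1.axhline(0.0, linewidth=0.8)
    ax1.set_xlabel("t")
    ax1.set_ylabel("e_t")
    ax1.set_title("Residuals over time")
    ax2.scatter(e[:-1], e[1:])
    ax2.set_xlabel("e_{t-1}")
    ax2.set_ylabel("e_t")
    ax2.set_title("e_t against e_{t-1}")
    plt.tight_layout()
    plt.show()
```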
Formal Tests for Autocorrelation
Runs test: analyse the runs of the residuals, i.e. the uninterrupted sequences of residuals with the same sign
Durbin-Watson (DW) d test: the ratio of the sum of squared differences in successive residuals to the residual sum of squares
Breusch-Godfrey LM test: a more general test which does not assume the disturbances are AR(1)
Durbin-Watson d Test
$H_0: \rho = 0$
vs. $H_1: \rho \neq 0$, $\rho > 0$, or $\rho < 0$

The Durbin-Watson test statistic, $d$, is:

$$d = \frac{\sum_{t=2}^{n} (e_t - e_{t-1})^2}{\sum_{t=1}^{n} e_t^2}$$

i.e. the ratio of the sum of squared differences in successive residuals to the residual sum of squares.
The test statistic $d$ is approximately related to $\hat{\rho}$ as:

$$d \approx 2(1 - \hat{\rho})$$

When $\hat{\rho} = 0$, the Durbin-Watson statistic is $d \approx 2$.
When $\hat{\rho} = 1$, the Durbin-Watson statistic is $d \approx 0$.
When $\hat{\rho} = -1$, the Durbin-Watson statistic is $d \approx 4$.
DW d Test
4 Steps
Step 1: Estimate $\hat{Y}_i = \hat{\beta}_1 + \hat{\beta}_2 X_{2i} + \hat{\beta}_3 X_{3i}$ and obtain the residuals
Step 2: Compute the DW d test statistic
Step 3: Obtain $d_L$ and $d_U$, the lower and upper critical points, from the Durbin-Watson tables
Step 4: Implement the following decision rule:

Value of d relative to d_L and d_U | Decision
d < d_L                            | Reject null of no positive autocorrelation
d_L ≤ d ≤ d_U                      | No decision
d_U < d < 4 − d_U                  | Do not reject null of no positive or negative autocorrelation
4 − d_U ≤ d ≤ 4 − d_L              | No decision
d > 4 − d_L                        | Reject null of no negative autocorrelation
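The decision rule can be written as a small function; a sketch assuming $d_L$ and $d_U$ have already been looked up in the Durbin-Watson tables for the relevant sample size and number of regressors:

```python
# A sketch of the Step 4 decision rule; d_L, d_U come from the DW tables.
def dw_decision(d, d_L, d_U):
    if d < d_L:
        return "reject H0 of no positive autocorrelation"
    if d <= d_U:
        return "no decision"
    if d < 4 - d_U:
        return "do not reject H0 of no positive or negative autocorrelation"
    if d <= 4 - d_L:
        return "no decision"
    return "reject H0 of no negative autocorrelation"  # d > 4 - d_L
```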
Restrictive Assumptions:
• There is an intercept in the model
• X values are non-stochastic
• Disturbances are AR(1)
• Model does not include a lagged dependent variable as an explanatory variable, e.g.
$$Y_t = \beta_1 + \beta_2 X_{2t} + \beta_3 X_{3t} + \beta_4 Y_{t-1} + U_t$$
Breusch-Godfrey LM Test
This test is valid with lagged dependent variables and can be used to test for higher-order autocorrelation.

Suppose, for example, that we estimate:

$$Y_t = \beta_1 + \beta_2 X_{2t} + \beta_3 X_{3t} + \beta_4 Y_{t-1} + U_t$$

and wish to test for autocorrelation of the form:

$$U_t = \rho_1 U_{t-1} + \rho_2 U_{t-2} + \rho_3 U_{t-3} + v_t$$
Breusch-Godfrey LM Test
4 steps
Step 1: Estimate
$$Y_t = \beta_1 + \beta_2 X_{2t} + \beta_3 X_{3t} + \beta_4 Y_{t-1} + U_t$$
and obtain the residuals ($e_t$)

Step 2: Estimate the following auxiliary regression model:
$$e_t = b_1 + b_2 X_{2t} + b_3 X_{3t} + b_4 Y_{t-1} + c_1 e_{t-1} + c_2 e_{t-2} + c_3 e_{t-3} + w_t$$
Step 3: For large sample sizes, the test statistic is:

$$(n - p)\, R^2 \sim \chi^2_p$$

Step 4: If the test statistic exceeds the critical chi-square value, we can reject the null hypothesis of no serial correlation in any of the $\rho$ terms.
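A hand-rolled sketch of the four steps (statsmodels also provides a ready-made acorr_breusch_godfrey; the implementation below, including its function name, zero-padding of pre-sample lags, and use of SciPy for the p-value, is an illustrative assumption):

```python
# A sketch of the Breusch-Godfrey LM test for AR(p) serial correlation.
import numpy as np
from scipy import stats

def breusch_godfrey(y, X, p=3):
    """X is the n-by-k regressor matrix, including the intercept column
    and any lagged dependent variable."""
    n = len(y)
    # Step 1: original regression; keep the residuals e_t
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    # Step 2: auxiliary regression of e_t on X and p lags of e_t
    # (missing pre-sample lags padded with zeros)
    lags = np.column_stack([np.r_[np.zeros(j), e[:-j]] for j in range(1, p + 1)])
    Z = np.column_stack([X, lags])
    g = np.linalg.lstsq(Z, e, rcond=None)[0]
    w = e - Z @ g
    r2 = 1.0 - (w @ w) / ((e - e.mean()) @ (e - e.mean()))
    # Steps 3-4: (n - p) * R^2 is asymptotically chi-square with p d.f.
    lm = (n - p) * r2
    return lm, stats.chi2.sf(lm, p)
```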
Summary
In this lecture we have:
1. Analysed the theoretical causes and
consequences of autocorrelation
2. Described a number of methods for detecting
the presence of autocorrelation