Autocorrelation

Applied Econometrics
Second edition
Dimitrios Asteriou and Stephen G. Hall
Applied Econometrics: A Modern Approach using Eviews and Microfit © Dr D Asteriou

AUTOCORRELATION
1. What is Autocorrelation
2. What Causes Autocorrelation
3. First and Higher Orders
4. Consequences of Autocorrelation
5. Detecting Autocorrelation
6. Resolving Autocorrelation
Learning Objectives
1. Understand the meaning of autocorrelation in the CLRM.
2. Find out what causes autocorrelation.
3. Distinguish between first and higher orders of autocorrelation.
4. Understand the consequences of autocorrelation on OLS estimates.
5. Detect autocorrelation through graph inspection.
6. Detect autocorrelation through formal econometric tests.
7. Distinguish among the wide range of available tests for detecting
autocorrelation.
8. Perform autocorrelation tests using econometric software.
9. Resolve autocorrelation using econometric software.
What is Autocorrelation
Assumption 6 of the CLRM states that the
covariances and correlations between
different disturbances are all zero:
cov(ut, us)=0 for all t≠s
This assumption states that the disturbances ut
and us are independently distributed, which
is called serial independence.
What is Autocorrelation
If this assumption is no longer valid, then the
disturbances are not pairwise independent, but
pairwise autocorrelated (or Serially Correlated).
This means that an error occurring at period t may be
carried over to the next period t+1.
Autocorrelation is most likely to occur in time series
data.
In cross-sectional data we can change the arrangement of
the observations without altering the results.
What Causes Autocorrelation
One factor that can cause autocorrelation is omitted
variables.
Suppose Yt is related to X2t and X3t, but we wrongly
omit X3t from our model.
The effect of X3t will be captured by the disturbances
ut.
If X3t, like many economic series, exhibits a trend over
time, then X3t depends on X3t-1, X3t-2 and so on.
Then ut will similarly depend on ut-1, ut-2 and so on.
What Causes Autocorrelation
Another possible reason is misspecification.
Suppose Yt is related to X2t through a quadratic
relationship:
Yt=β1+β2X2t²+ut
but we wrongly assume and estimate a
straight line:
Yt=β1+β2X2t+ut
Then the error term obtained from the straight
line will depend on X2t².
What Causes Autocorrelation
A third reason is systematic errors in measurement.
Suppose a company updates its inventory at a given
period in time.
If a systematic error occurred then the cumulative
inventory stock will exhibit accumulated
measurement errors.
These errors will show up as an autocorrelated
process.
First-Order Autocorrelation
The simplest and most commonly observed case is first-order
autocorrelation.
Consider the multiple regression model:
Yt=β1+β2X2t+β3X3t+β4X4t+…+βkXkt+ut
in which the current observation of the error term
ut is a function of the previous (lagged)
observation of the error term:
ut=ρut-1+et
First-Order Autocorrelation
The coefficient ρ is called the first-order
autocorrelation coefficient and takes values from
-1 to +1.
It is obvious that the size of ρ will determine the
strength of serial correlation.
We can have three different cases.
First-Order Autocorrelation
(a) If ρ is zero, then we have no autocorrelation.
(b) If ρ approaches unity, the value of the previous
observation of the error becomes more important in
determining the value of the current error, and therefore
a high degree of autocorrelation exists. In this case we
have positive autocorrelation.
(c) If ρ approaches -1, we have a high degree of negative
autocorrelation.
First-Order Autocorrelation
[Figure slides; plots not recoverable]
Higher-Order Autocorrelation
Second-order when:
ut=ρ1ut-1+ ρ2ut-2+et
Third-order when
ut=ρ1ut-1+ ρ2ut-2+ρ3ut-3 +et
p-th order when:
ut=ρ1ut-1+ ρ2ut-2+ρ3ut-3 +…+ ρput-p +et
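To make these error processes concrete, the short Python sketch below simulates a first-order autocorrelated series ut=ρut-1+et; the choice of ρ = 0.8, the sample size and the seed are illustrative values, not taken from the text.

import numpy as np

# Simulate u_t = rho*u_{t-1} + e_t with e_t white noise.
# rho, n and the seed are illustrative choices, not from the text.
rng = np.random.default_rng(0)
n, rho = 200, 0.8
e = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + e[t]

# The sample correlation between u_t and u_{t-1} should be close to rho.
print(np.corrcoef(u[1:], u[:-1])[0, 1])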
Consequences of Autocorrelation
1. The OLS estimators are still unbiased and consistent. This
is because both unbiasedness and consistency do not
depend on assumption 6, which is the assumption violated here.
2. The OLS estimators will be inefficient and therefore no
longer BLUE.
3. The estimated variances of the regression coefficients will
be biased and inconsistent, so hypothesis testing
is no longer valid. In most cases the R2 will be
overestimated and the t-statistics will tend to appear higher than they really are.
Detecting Autocorrelation
There are two ways in general.
The first is the informal way which is done through graphs
and therefore we call it the graphical method.
The second is through formal tests for autocorrelation, like
the following ones:
1. The Durbin-Watson test
2. The Breusch-Godfrey test
3. Durbin's h test (for models that include lagged
dependent variables)
4. Engle's ARCH test
Detecting Autocorrelation
We have the following series (quarterly data from 1985q1 to
1994q2):
lcons = the consumer’s expenditure on food
ldisp = disposable income
lprice = the relative price index of food
Typing the following command in Eviews:
ls lcons c ldisp lprice
we get the regression results.
Detecting Autocorrelation
Then we can store the residuals of this regression in a
vector by typing the command:
genr res01=resid
And a plot of the residuals can be obtained by:
plot res01
While a scatter of the residuals against their lagged
terms can be obtained by:
scat res01(-1) res01
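For readers working outside Eviews, the following Python sketch performs roughly the same steps; the file name food.csv and the use of pandas/statsmodels are assumptions for illustration, not part of the original example.

import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt

# Assumed data file with columns lcons, ldisp and lprice.
data = pd.read_csv("food.csv")
X = sm.add_constant(data[["ldisp", "lprice"]])
model = sm.OLS(data["lcons"], X).fit()   # cf. ls lcons c ldisp lprice

res01 = model.resid                      # cf. genr res01=resid
res01.plot(title="RES01")                # time plot, cf. plot res01
plt.figure()
plt.scatter(res01.shift(1), res01)       # cf. scat res01(-1) res01
plt.xlabel("RES01(-1)")
plt.ylabel("RES01")
plt.show()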
Detecting Autocorrelation
[Figure: time plot of the residuals RES01, 1985-1993]
Detecting Autocorrelation
[Figure: scatter plot of RES01 against RES01(-1)]
The Durbin Watson Test
The following assumptions should be satisfied:
1. The regression model includes a constant
2. Autocorrelation is assumed to be of first order only
3. The equation does not include a lagged
dependent variable as an explanatory variable
The Durbin Watson Test
Step 1: Estimate the model by OLS and obtain
the residuals
Step 2: Calculate the DW statistic (a computational sketch follows these steps)
Step 3: Construct the table with the calculated
DW statistic and the dU, dL, 4-dU and 4-dL
critical values.
Step 4: Conclude
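A minimal sketch of Step 2 in Python, computing the DW statistic directly from the OLS residuals; it reuses the model object fitted in the earlier sketch, and the manual formula and the statsmodels helper give the same value.

import numpy as np
from statsmodels.stats.stattools import durbin_watson

# DW = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2, computed on the OLS residuals.
e = np.asarray(model.resid)
dw_manual = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
dw_builtin = durbin_watson(e)
print(dw_manual, dw_builtin)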
The Durbin Watson Test
The DW statistic lies between 0 and 4 (a value of 2 indicates no autocorrelation), and the decision zones are:
0 to dL: positive autocorrelation
dL to dU: zone of indecision
dU to 4-dU: no autocorrelation
4-dU to 4-dL: zone of indecision
4-dL to 4: negative autocorrelation
The Durbin Watson Test
Drawbacks of the DW test
1. It may give inconclusive results
2. It is not applicable when a lagged dependent variable is
used
3. It cannot take into account higher orders of
autocorrelation
The Breusch-Godfrey Test
It is a Lagrange Multiplier Test that resolves
the drawbacks of the DW test.
Consider the model:
Yt=β1+β2X2t+β3X3t+β4X4t+…+βkXkt+ut
where:
ut=ρ1ut-1+ ρ2ut-2+ρ3ut-3 +…+ ρput-p +et
The Breusch-Godfrey Test
Combining those two we get:
Yt=β1+β2X2t+β3X3t+β4X4t+…+βkXkt+ρ1ut-1+ρ2ut-2+ρ3ut-3+…+ρput-p+et
The null and the alternative hypotheses are:
H0: ρ1= ρ2=…= ρp=0 no autocorrelation
Ha: at least one of the ρ’s is not zero, thus,
autocorrelation
The Breusch-Godfrey Test
Step 1: Estimate the model and obtain the
residuals
Step 2: Run the full LM model with the number
of lags used being determined by the assumed
order of autocorrelation.
Step 3: Compute the LM statistic = (n-p)R2 from
the LM model and compare it with the chi-square critical value.
Step 4: Conclude (a computational sketch follows these steps)
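A minimal sketch of the test in Python, again reusing the model object fitted earlier; statsmodels provides the test directly, and the lag order of 4 is an illustrative choice for quarterly data.

from statsmodels.stats.diagnostic import acorr_breusch_godfrey

# LM test for autocorrelation up to order p = 4 in the residuals of model.
lm_stat, lm_pvalue, f_stat, f_pvalue = acorr_breusch_godfrey(model, nlags=4)
print(lm_stat, lm_pvalue)
# A small p-value (e.g. below 0.05) leads to rejecting H0 of no autocorrelation.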
The Durbin's h Test
When there are lagged dependent variables (e.g. Yt-1) among the regressors, the DW
test is not applicable.
Durbin developed an alternative test statistic, named the h-statistic, which is calculated by:
h = (1 - DW/2) sqrt( n / (1 - n*var(γ̂)) )
where var(γ̂) is the variance of the estimated
coefficient of the lagged dependent variable.
This statistic follows the standard normal distribution.
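A minimal sketch of the h-statistic calculation in Python, assuming a fitted statsmodels OLS result named model_lag whose regressors include the lagged dependent variable under the (hypothetical) name lcons_lag1.

import numpy as np
from statsmodels.stats.stattools import durbin_watson

# h = (1 - DW/2) * sqrt(n / (1 - n * var(gamma_hat)))
# model_lag and the regressor name "lcons_lag1" are assumptions for illustration.
n = int(model_lag.nobs)
dw = durbin_watson(model_lag.resid)
var_gamma = model_lag.bse["lcons_lag1"] ** 2   # variance = squared std. error of the coefficient
h = (1 - dw / 2) * np.sqrt(n / (1 - n * var_gamma))
print(h)  # compare with +/-1.96 for a 5% two-sided test against N(0,1)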
The Durbin’s h Test
Dependent Variable: LOG(CONS)
Included observations: 37 after adjustments
Variable             Coefficient   Std. Error   t-Statistic   Prob.
C                      0.834242      0.626564      1.331456    0.1922
LOG(INC)               0.227634      0.188911      1.204981    0.2368
LOG(CPI)              -0.259918      0.110072     -2.361344    0.0243
LOG(CONS(-1))          0.854041      0.089494      9.542982    0.0000

R-squared            0.940878    Mean dependent var      4.582683
Adjusted R-squared   0.935503    S.D. dependent var      0.110256
S.E. of regression   0.028001    Akaike info criterion  -4.211360
Sum squared resid    0.025874    Schwarz criterion      -4.037207
Log likelihood       81.91016    F-statistic             175.0558
Durbin-Watson stat   1.658128    Prob(F-statistic)       0.000000
The Durbin’s h Test
h = (1 - 1.658/2) sqrt( 37 / (1 - 37*0.089²) ) = 1.2971
Since h is smaller than the 5% critical value of 1.96 from the standard normal distribution, we cannot reject the null hypothesis of no autocorrelation.
Resolving Autocorrelation
We have two different cases:
(a) When ρ is known
(b) When ρ is unknown
Resolving Autocorrelation
(when ρ is known)
Consider the model
Yt=β1+β2X2t+β3X3t+β4X4t+…+βkXkt+ut
where
ut=ρut-1+et
Resolving Autocorrelation
(when ρ is known)
Write the model for period t-1:
Yt-1=β1+β2X2t-1+β3X3t-1+β4X4t-1+…+βkXkt-1+ut-1
Multiply both sides by ρ to get
ρYt-1= ρβ1+ ρβ2X2t-1+ ρβ3X3t-1+ ρβ4X4t-1
+…+ ρ βkXkt-1+ ρut-1
Resolving Autocorrelation
(when ρ is known)
Subtract the second equation from the first:
Yt-ρYt-1= (1-ρ)β1+ β2(X2t-ρX2t-1)+ β3(X3t-ρX3t-1)+…+ βk(Xkt-ρXkt-1)+(ut-ρut-1)
or
Y*t= β*1+ β*2X*2t+ β*3X*3t+…+ β*kX*kt+et
Where now the problem of autocorrelation is resolved
because et is no longer autocorrelated.
Resolving Autocorrelation
(when ρ is known)
Note that the transformation loses the first
observation. To avoid that loss we
generate Y*1 and X*i1 as follows:
Y*1=Y1 sqrt(1-ρ²)
X*i1=Xi1 sqrt(1-ρ²)
This transformation is known as quasi-differencing or generalised differencing.
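A minimal sketch of this generalised differencing for a known ρ; the helper name quasi_difference is hypothetical, and y and x are assumed to be numpy arrays holding the original series.

import numpy as np

def quasi_difference(z, rho):
    # z*_t = z_t - rho*z_{t-1}; the first observation is rescaled by
    # sqrt(1 - rho^2) instead of being dropped.
    z_star = z - rho * np.roll(z, 1)
    z_star[0] = z[0] * np.sqrt(1 - rho ** 2)
    return z_star

# Example usage with an assumed rho:
# y_star = quasi_difference(y, rho=0.5)
# x_star = quasi_difference(x, rho=0.5)
# Regress y_star on a constant and x_star; note that the constant of the
# transformed regression estimates (1-rho)*beta1, as in the equations above.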
Resolving Autocorrelation
(when ρ is unknown)
The Cochrane-Orcutt iterative procedure (a sketch follows these steps).
Step 1: Estimate the regression and obtain the residuals.
Step 2: Estimate ρ by regressing the residuals on their lagged values.
Step 3: Transform the original variables into starred variables using the ρ̂ obtained from Step 2.
Step 4: Run the regression again with the transformed variables and obtain new residuals.
Step 5 and on: Continue repeating Steps 2 to 4 until (stopping rule) the estimates of ρ from two successive iterations differ by no more than some preselected small value, such as 0.001.
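A minimal sketch of the Cochrane-Orcutt iterations in Python, reusing the hypothetical quasi_difference helper from the previous sketch and treating y and x as numpy arrays with a single regressor; statsmodels' GLSAR class with its iterative_fit method offers a ready-made implementation of essentially the same idea.

import numpy as np
import statsmodels.api as sm

def cochrane_orcutt(y, x, tol=0.001, max_iter=50):
    # Alternate between estimating rho from the residuals (Step 2) and
    # re-estimating the coefficients on quasi-differenced data (Steps 3-4).
    X = sm.add_constant(x)
    beta = sm.OLS(y, X).fit().params                    # Step 1: initial OLS estimates
    rho_old = 0.0
    for _ in range(max_iter):
        u = y - X @ beta                                # residuals on the original data
        rho = sm.OLS(u[1:], u[:-1]).fit().params[0]     # Step 2: regress u_t on u_{t-1}
        y_star = quasi_difference(y, rho)               # Step 3: transform the variables
        x_star = quasi_difference(x, rho)
        fit = sm.OLS(y_star, sm.add_constant(x_star)).fit()   # Step 4: re-estimate
        beta = np.array([fit.params[0] / (1 - rho), fit.params[1]])  # undo the (1-rho) scaling of the constant
        if abs(rho - rho_old) < tol:                    # stopping rule
            break
        rho_old = rho
    return rho, beta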