Autocorrelation, Box-Jenkins (ARIMA) Forecasting

Autocorrelation and the Durbin-Watson Test
An autocorrelation is a correlation of the values of a variable with values of the same variable
lagged one or more periods back. Consequences of autocorrelation include inaccurate estimates
of variances and inaccurate predictions.
Lagged Residuals

  i     e_i    e_{i-1}  e_{i-2}  e_{i-3}  e_{i-4}
  1     1.0      *        *        *        *
  2     0.0     1.0       *        *        *
  3    -1.0     0.0      1.0       *        *
  4     2.0    -1.0      0.0      1.0       *
  5     3.0     2.0     -1.0      0.0      1.0
  6    -2.0     3.0      2.0     -1.0      0.0
  7     1.0    -2.0      3.0      2.0     -1.0
  8     1.5     1.0     -2.0      3.0      2.0
  9     1.0     1.5      1.0     -2.0      3.0
 10    -2.5     1.0      1.5      1.0     -2.0
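As a quick check of the lag structure in the table above, here is a minimal numpy sketch (an added illustration, not part of the original notes) that pairs each residual with its value one period back and computes the first-order (lag-1) sample autocorrelation of the example residuals.

```python
import numpy as np

# Residuals e_i from the lagged-residuals table above.
e = np.array([1.0, 0.0, -1.0, 2.0, 3.0, -2.0, 1.0, 1.5, 1.0, -2.5])

# Pair each residual with the one from the previous period (lag 1);
# the first observation has no predecessor, so it is dropped.
e_current = e[1:]
e_lagged = e[:-1]

# Sample correlation between e_i and e_{i-1}: the first-order autocorrelation.
r1 = np.corrcoef(e_current, e_lagged)[0, 1]
print(f"lag-1 autocorrelation of the residuals: {r1:.3f}")
```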
The Durbin-Watson test (first-order autocorrelation):

H0: ρ1 = 0
H1: ρ1 ≠ 0

The Durbin-Watson test statistic:

d = \frac{\sum_{i=2}^{n} (e_i - e_{i-1})^2}{\sum_{i=1}^{n} e_i^2}
DW d Test: 4 Steps

Step 1: Estimate the regression

\hat{Y}_i = \hat{\beta}_1 + \hat{\beta}_2 X_{2i} + \hat{\beta}_3 X_{3i}

and obtain the residuals.
Step 2: Compute the DW d test statistic.
Step 3: Obtain dL and dU, the lower and upper critical points, from the Durbin-Watson tables.
Step 4: Apply the following decision rule:
Value of d relative to dL and dU     Decision
d < dL                               Reject null of no positive autocorrelation
dL ≤ d ≤ dU                          No decision
dU < d < 4 - dU                      Do not reject null of no positive or negative autocorrelation
4 - dU ≤ d ≤ 4 - dL                  No decision
d > 4 - dL                           Reject null of no negative autocorrelation
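The decision rule is easy to mechanize. A small helper function, written here for illustration; the example call uses the values that appear in the Banner Rocker example later (d = 0.8522, dL = 1.20, dU = 1.41 for n = 20, k = 1, α = 0.05):

```python
def dw_decision(d: float, d_l: float, d_u: float) -> str:
    """Apply the Durbin-Watson decision rule from the table above."""
    if d < d_l:
        return "Reject null of no positive autocorrelation"
    if d <= d_u:
        return "No decision"
    if d < 4 - d_u:
        return "Do not reject null of no positive or negative autocorrelation"
    if d <= 4 - d_l:
        return "No decision"
    return "Reject null of no negative autocorrelation"

# Values from the Banner Rocker example below (n = 20, k = 1, alpha = 0.05).
print(dw_decision(0.8522, d_l=1.20, d_u=1.41))
```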
Critical Points of the Durbin-Watson Statistic (α = 0.05; n = sample size, k = number of independent variables)

           k=1            k=2            k=3            k=4            k=5
  n     dL    dU       dL    dU       dL    dU       dL    dU       dL    dU
 15    1.08  1.36     0.95  1.54     0.82  1.75     0.69  1.97     0.56  2.21
 16    1.10  1.37     0.98  1.54     0.86  1.73     0.74  1.93     0.62  2.15
 17    1.13  1.38     1.02  1.54     0.90  1.71     0.78  1.90     0.67  2.10
 18    1.16  1.39     1.05  1.53     0.93  1.69     0.82  1.87     0.71  2.06
  .      .     .        .     .        .     .        .     .        .     .
 65    1.57  1.63     1.54  1.66     1.50  1.70     1.47  1.73     1.44  1.77
 70    1.58  1.64     1.55  1.67     1.52  1.70     1.49  1.74     1.46  1.77
 75    1.60  1.65     1.57  1.68     1.54  1.71     1.51  1.74     1.49  1.77
 80    1.61  1.66     1.59  1.69     1.56  1.72     1.53  1.74     1.51  1.77
 85    1.62  1.67     1.60  1.70     1.57  1.72     1.55  1.75     1.52  1.77
 90    1.63  1.68     1.61  1.70     1.59  1.73     1.57  1.75     1.54  1.78
 95    1.64  1.69     1.62  1.71     1.60  1.73     1.58  1.75     1.56  1.78
100    1.65  1.69     1.63  1.72     1.61  1.74     1.59  1.76     1.57  1.78
Durbin-Watson Test for Autocorrelation: An Example

The Banner Rock Company manufactures and markets its own rocking chair. The company developed a special rocker for senior citizens, which it advertises extensively on TV. Banner's market for the special chair is the Carolinas, Florida, and Arizona, areas where there are many senior citizens and retired people. The president of Banner Rocker is studying the association between advertising expense (X) and the number of rockers sold over the last 20 months (Y). He collected the data below. He would like to use the model to forecast sales based on the amount spent on advertising, but is concerned that, because the data were gathered over consecutive months, there might be problems of autocorrelation.
Month   Sales (000)   Ad ($ millions)
  1        153             5.5
  2        156             5.5
  3        153             5.3
  4        147             5.5
  5        159             5.4
  6        160             5.3
  7        147             5.5
  8        147             5.7
  9        152             5.9
 10        160             6.2
 11        169             6.3
 12        176             5.9
 13        176             6.1
 14        179             6.2
 15        184             6.2
 16        181             6.5
 17        192             6.7
 18        205             6.9
 19        215             6.5
 20        209             6.4
Step 1: Generate the regression equation.

• The resulting equation is: Ŷ = -43.802 + 35.95X
• The correlation coefficient (r) is 0.828
• The coefficient of determination (r²) is 68.5%
• There is a strong, positive association between sales and advertising
• Is there a potential problem with autocorrelation?
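Step 1 can be reproduced with any regression routine. For instance, a minimal sketch using statsmodels (the library choice is an illustration, not part of the original example):

```python
import numpy as np
import statsmodels.api as sm

# Banner Rocker data from the table above.
sales = np.array([153, 156, 153, 147, 159, 160, 147, 147, 152, 160,
                  169, 176, 176, 179, 184, 181, 192, 205, 215, 209], dtype=float)
ads = np.array([5.5, 5.5, 5.3, 5.5, 5.4, 5.3, 5.5, 5.7, 5.9, 6.2,
                6.3, 5.9, 6.1, 6.2, 6.2, 6.5, 6.7, 6.9, 6.5, 6.4])

# Ordinary least squares: sales regressed on advertising expense.
results = sm.OLS(sales, sm.add_constant(ads)).fit()

print(results.params)    # intercept and slope: approximately -43.80 and 35.95
print(results.rsquared)  # approximately 0.685
```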
Step 2: Compute the residuals and the DW d statistic. In the spreadsheet this is built up column by column:
• Fitted value: Ŷ = -43.802 + 35.95 × Ad (e.g. =-43.802+35.95*C3)
• Residual: e = Y - Ŷ (e.g. =B3-D3)
• Lagged residual: the previous row's residual (e.g. =E3 entered one row below)
• Squared difference: (e_i - e_{i-1})^2 (e.g. =(E4-F4)^2)
• Squared residual: e_i^2 (e.g. =E4^2)
The column totals give Σ(e_i - e_{i-1})² and Σ(e_i)².
Steps 3 and 4: Compare d with the critical values and decide.

Hypothesis test:
H0: No residual correlation (ρ = 0)
H1: Positive residual correlation (ρ > 0)

Critical values for d given α = 0.05, n = 20, k = 1: dL = 1.20, dU = 1.41.

d = \frac{\sum_{t=2}^{n} (e_t - e_{t-1})^2}{\sum_{t=1}^{n} e_t^2} = \frac{2338.5829}{2744.2685} = 0.8522

Decision regions: d < dL = 1.20 → reject H0 (positive autocorrelation); dL ≤ d ≤ dU → inconclusive; d > dU = 1.41 → fail to reject H0 (no autocorrelation).

Since d = 0.8522 < dL = 1.20, reject H0: the residuals show positive autocorrelation.
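The hand computation above can be checked in a few lines of numpy, mirroring the spreadsheet columns (an added illustration; the fitted equation is the one reported in the example):

```python
import numpy as np

# Banner Rocker data and the fitted line reported in the example.
sales = np.array([153, 156, 153, 147, 159, 160, 147, 147, 152, 160,
                  169, 176, 176, 179, 184, 181, 192, 205, 215, 209], dtype=float)
ads = np.array([5.5, 5.5, 5.3, 5.5, 5.4, 5.3, 5.5, 5.7, 5.9, 6.2,
                6.3, 5.9, 6.1, 6.2, 6.2, 6.5, 6.7, 6.9, 6.5, 6.4])

fitted = -43.802 + 35.95 * ads      # spreadsheet column: =-43.802+35.95*C3
e = sales - fitted                  # residuals: =B3-D3

num = np.sum(np.diff(e) ** 2)       # sum of (e_t - e_{t-1})^2, about 2338.58
den = np.sum(e ** 2)                # sum of e_t^2, about 2744.27
d = num / den                       # about 0.8522

print(f"d = {d:.4f}")
print("Reject H0: positive autocorrelation" if d < 1.20 else "d is not below dL")
```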
Autoregressive Models

Box-Jenkins (ARIMA) Forecasting
• All stationary time series can be modeled as AR, MA, or ARMA models.
• A stationary time series is one with constant mean (μ) and constant variance.
• Stationary time series are often called mean-reverting series: in the long run the mean does not change (cycles will always die out). This is illustrated in the sketch below.
• If a time series is not stationary, it is often possible to make it stationary using fairly simple transformations.
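To illustrate mean reversion, here is a small simulation (not from the notes) of a stationary AR(1); the parameters φ = 0.6 and long-run mean 10 are made up for the example. The first and second halves of the series have roughly the same mean and variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a stationary AR(1): y_t = mu + phi * (y_{t-1} - mu) + eps_t, with |phi| < 1,
# so the series keeps reverting toward its long-run mean mu.
n, phi, mu = 500, 0.6, 10.0
y = np.empty(n)
y[0] = mu
for t in range(1, n):
    y[t] = mu + phi * (y[t - 1] - mu) + rng.normal(scale=1.0)

# "Constant mean and constant variance" in practice: the two halves look alike.
print(y[:250].mean(), y[250:].mean())
print(y[:250].var(), y[250:].var())
```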
Nonstationary Time Series
• Linear trend
• Nonlinear trend
• Multiplicative seasonality

How to Make Them Stationary
• Linear trend
  – Take the non-seasonal (first) difference. What is left over will be a stationary AR, MA, or ARMA process (see the sketch after this list).
• Nonlinear trend
  – Exponential growth: take logs (this makes the trend linear), then take the non-seasonal difference.
  – Non-exponential growth? Take logs.
• Multiplicative seasonality often occurs when growth is exponential.
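A short sketch of these transformations on hypothetical series (the trend and growth parameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(200)

# Hypothetical series with a linear trend: the first (non-seasonal) difference removes it.
linear_trend = 5.0 + 0.3 * t + rng.normal(scale=1.0, size=t.size)
diff1 = np.diff(linear_trend)

# Hypothetical series with exponential growth: take logs first, then difference.
exponential = 100.0 * np.exp(0.02 * t + rng.normal(scale=0.05, size=t.size))
log_diff = np.diff(np.log(exponential))

# Both transformed series now fluctuate around a constant level.
print(diff1.mean(), log_diff.mean())
```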
Identification
• What does it take to make the time series stationary?
• Is the stationary model AR, MA, or ARMA?
  – If AR(p), how big is p?
  – If MA(q), how big is q?
  – If ARMA(p,q), what are p and q? (See the ACF/PACF sketch below.)
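In practice these questions are usually answered by inspecting the sample ACF and PACF of the stationary series. A sketch using statsmodels on a simulated AR(2); the series and its coefficients are hypothetical, chosen only to show the pattern:

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(2)

# Hypothetical stationary AR(2): y_t = 0.5*y_{t-1} + 0.3*y_{t-2} + eps_t.
n = 500
y = np.zeros(n)
eps = rng.normal(size=n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + eps[t]

# For an AR(p) process the PACF cuts off after lag p while the ACF decays slowly;
# for an MA(q) process the pattern is reversed (ACF cuts off after lag q).
print("ACF :", np.round(acf(y, nlags=5), 2))
print("PACF:", np.round(pacf(y, nlags=5), 2))
```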
ARMA Models
• If you can't easily tell whether the model is an AR or an MA, assume it is an ARMA model.
Box-Jenkins Method
First, the analyst identifies a tentative model by examining the nature of the past data. This tentative model and the data are entered into the computer, and the Box-Jenkins program estimates the parameters included in the model. A diagnostic check is then conducted to find out whether the model gives an adequate description of the data. If the model satisfies the analyst in this respect, it is used to make the forecast.
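As an illustration of this identify-estimate-check-forecast cycle, here is a sketch using the ARIMA class in statsmodels on a made-up trending series; the order (1, 1, 0) is an assumption chosen to match the differencing discussion above, not a prescription.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)

# Made-up monthly series: a linear trend plus AR(1) noise.
n = 120
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(scale=2.0)
y = pd.Series(50 + 0.8 * np.arange(n) + noise)

# Tentative model: ARIMA(1, 1, 0) -- one non-seasonal difference to remove the
# trend, then an AR(1) for what is left over.
results = ARIMA(y, order=(1, 1, 0)).fit()

# Diagnostic check: residuals should behave like uncorrelated noise.
print(results.summary())

# If the diagnostics are acceptable, use the model to forecast the next 6 periods.
print(results.forecast(steps=6))
```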