ECON 4551
Econometrics II
Memorial University of Newfoundland
Time series with stationary variables: Dynamic Models, Autocorrelation and Forecasting
Adapted from Vera Tabakova’s notes

9.1 Introduction

9.2 Lags in the Error Term: Autocorrelation

9.3 Estimating an AR(1) Error Model

9.4 Testing for Autocorrelation

9.5 An Introduction to Forecasting: Autoregressive Models

9.6 Finite Distributed Lags

9.7 Autoregressive Distributed Lag Models
Figure 9.1



The model so far assumes that the observations are not correlated with one another. This is believable if one has drawn a random sample, but less likely if one has drawn observations sequentially in time.

Time series observations, which are drawn at regular intervals, usually embody a structure where time is an important component.

If we cannot completely model this structure in the regression function itself, then the remainder spills over into the unobserved component of the statistical model (its error), and this causes the errors to be correlated with one another.




In general, there are many situations where inertia affects a time series, or there are lagged effects of explanatory variables, or autocorrelation is introduced by “massaging” of the original data (interpolation, smoothing when going from monthly to quarterly, etc.), or nonstationarity.

But, as with heteroskedasticity, we should first suspect that there is a specification issue: a wrong functional form or missing variables.
Three ways to view the dynamics:

$y_t = f(x_t, x_{t-1}, x_{t-2}, \ldots)$  (9.1)

$y_t = f(y_{t-1}, x_t)$  (9.2)

$y_t = f(x_t) + e_t, \quad e_t = f(e_{t-1})$  (9.3)
Assume stationarity, for now.

Figure 9.2(a) Time Series of a Stationary Variable
Figure 9.2(b) Time Series of a Nonstationary Variable that is ‘Slow Turning’ or ‘Wandering’
Figure 9.2(c) Time Series of a Nonstationary Variable that ‘Trends’
9.2.1 Area Response Model for Sugar Cane (bangla.gdt)

$\ln(A) = \beta_1 + \beta_2 \ln(P)$

$\ln(A_t) = \beta_1 + \beta_2 \ln(P_t) + e_t$  (9.4)

$y_t = \beta_1 + \beta_2 x_t + e_t$  (9.5)

$e_t = \rho e_{t-1} + v_t$  (9.6)
$y_t = \beta_1 + \beta_2 x_t + e_t$  (9.7)

$e_t = \rho e_{t-1} + v_t$  (9.8)

But assume $v_t$ is well behaved:

$E(v_t) = 0, \quad \mathrm{var}(v_t) = \sigma_v^2, \quad \mathrm{cov}(v_t, v_s) = 0 \text{ for } t \neq s$  (9.9)

Assume also

$-1 < \rho < 1$  (9.10)

(so the error series is stationary).
$E(e_t) = 0$  (9.11)   [OK]

$\mathrm{var}(e_t) = \sigma_e^2 = \dfrac{\sigma_v^2}{1-\rho^2}$  (9.12)   [OK, constant]

$\mathrm{cov}(e_t, e_{t-k}) = \sigma_e^2 \rho^k, \quad k > 0$  (9.13)   [not zero!]

...but since we stick to stationary series, this covariance is still constant over time: it depends on the lag $k$, not on $t$.
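For completeness, a short derivation sketch of (9.12): stationarity implies $\mathrm{var}(e_t) = \mathrm{var}(e_{t-1}) = \sigma_e^2$, and $e_{t-1}$ is uncorrelated with $v_t$, so

$\mathrm{var}(e_t) = \mathrm{var}(\rho e_{t-1} + v_t) = \rho^2\,\mathrm{var}(e_{t-1}) + \mathrm{var}(v_t) = \rho^2\sigma_e^2 + \sigma_v^2$
$\Rightarrow\ \sigma_e^2(1-\rho^2) = \sigma_v^2 \ \Rightarrow\ \sigma_e^2 = \dfrac{\sigma_v^2}{1-\rho^2}$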
$\mathrm{corr}(e_t, e_{t-k}) = \dfrac{\mathrm{cov}(e_t, e_{t-k})}{\sqrt{\mathrm{var}(e_t)\,\mathrm{var}(e_{t-k})}} = \dfrac{\mathrm{cov}(e_t, e_{t-k})}{\mathrm{var}(e_t)} = \dfrac{\sigma_e^2 \rho^k}{\sigma_e^2} = \rho^k$  (9.14)

Focusing on $k = 1$:

$\mathrm{corr}(e_t, e_{t-1}) = \rho$  (9.15)

so $\rho$ is also known as the autocorrelation coefficient.

Running simple OLS on the sugar cane example:

$\hat{y}_t = 3.893 + .776\, x_t$  (9.16)
(se)   (.061)  (.277)
We see “runs” of positive and negative residuals. What if we saw “bouncing” instead?

Figure 9.3 Least Squares Residuals Plotted Against Time
General sample correlation formula:

$r_{xy} = \dfrac{\widehat{\mathrm{cov}}(x_t, y_t)}{\sqrt{\widehat{\mathrm{var}}(x_t)\,\widehat{\mathrm{var}}(y_t)}} = \dfrac{\sum_{t=1}^{T}(x_t - \bar{x})(y_t - \bar{y})}{\sqrt{\sum_{t=1}^{T}(x_t - \bar{x})^2 \sum_{t=1}^{T}(y_t - \bar{y})^2}}$  (9.17)

For residuals we can impose a constant variance and a null expectation of x and y (since they are errors), so the first-order sample autocorrelation is

$r_1 = \dfrac{\widehat{\mathrm{cov}}(e_t, e_{t-1})}{\widehat{\mathrm{var}}(e_t)} = \dfrac{\sum_{t=2}^{T} \hat{e}_t \hat{e}_{t-1}}{\sum_{t=1}^{T} \hat{e}_t^2}$  (9.18)
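As a quick illustration, a minimal Stata sketch of (9.18) for the sugar cane regression, assuming the data are tsset on a time index t and using the variable names la and lp from the code later in these notes:

* hypothetical sketch: first-order sample autocorrelation r1 of OLS residuals
regress la lp
predict ehat, residual
generate num = ehat * L.ehat      // ehat_t * ehat_{t-1}, missing for t = 1
generate den = ehat^2
quietly summarize num
scalar top = r(sum)               // sum over t = 2,...,T
quietly summarize den
scalar bottom = r(sum)            // sum over t = 1,...,T
display "r1 = " top/bottom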
The existence of AR(1) errors implies:

The least squares estimator is still a linear and unbiased estimator, but it is no longer best. There is another estimator with a smaller variance.

The standard errors usually computed for the least squares estimator are incorrect. Confidence intervals and hypothesis tests that use these standard errors may be misleading.
As with heteroskedastic errors, you can salvage OLS when your data are autocorrelated.

Now you can use an estimator of standard errors that is robust to both heteroskedasticity and autocorrelation, proposed by Newey and West.

This estimator is sometimes called HAC, which stands for heteroskedasticity and autocorrelation consistent.
HAC is not as automatic as the heteroskedasticity consistent (HC) estimator.

With autocorrelation you have to specify how far away in time the autocorrelation is likely to be significant.

The autocorrelated errors over the chosen time window are averaged in the computation of the HAC standard errors; you have to specify how many periods to average over and how much weight to assign each residual in that average.

That weighted average is called a kernel and the number of errors to average is called the bandwidth.
Usually, you choose a method of averaging (Bartlett kernel or Parzen kernel) and a bandwidth (nw1, nw2 or some integer). Often your software (for example GRETL) defaults to the Bartlett kernel and a bandwidth computed based on the sample size, N.

Trade-off: larger bandwidths reduce bias (good) as well as precision (bad). Smaller bandwidths exclude more relevant autocorrelations (and hence have more bias) but use more observations to increase precision (smaller variance).

Choose a bandwidth large enough to contain the largest autocorrelations. The choice will ultimately depend on the frequency of observation and the length of time it takes for your system to adjust to shocks.
Once GRETL recognizes that your data are a time series, the robust command will automatically apply the HAC estimator of standard errors with the default values of the kernel and bandwidth (or with your customized choices).

Again, remember that there are several choices about how to run these corrections (so different software packages can yield slightly different results).

In a way, this correction is just an extension of White’s correction for heteroskedasticity.

It is based on a formula that has the OLS formula as a special case.
Sugar cane example. The two sets of standard errors, along with the estimated equation, are:

$\hat{y}_t = 3.893 + .776\, x_t$
(‘incorrect’ se’s)  (.061)  (.277)
(‘correct’ se’s)    (.062)  (.378)

The 95% confidence intervals for $\beta_2$ are:
(.211, 1.340)   (incorrect)
(.006, 1.546)   (correct)
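For reference, a minimal Stata sketch of how HAC (Newey-West) standard errors could be obtained for this regression; the variable names la and lp come from the code later in these notes, while the time variable t and the bandwidth of 3 lags are assumptions chosen only for illustration:

* hypothetical sketch: OLS versus Newey-West (HAC) standard errors
tsset t                 // declare the time variable
regress la lp           // OLS: 'incorrect' standard errors under AR(1) errors
newey la lp, lag(3)     // same point estimates, HAC standard errors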
However, as in the case of robust estimation under heteroskedasticity, the HAC correction corrects or accounts for autocorrelation but does not exploit it.

We can get a more precise estimator if we exploit the idea that some observations are different not because of the effect of the regressor values, but because of the effect of the adjacent errors.
$y_t = \beta_1 + \beta_2 x_t + e_t$  (9.19)

$e_t = \rho e_{t-1} + v_t$  (9.20)

$y_t = \beta_1 + \beta_2 x_t + \rho e_{t-1} + v_t$  (9.21)

$e_{t-1} = y_{t-1} - \beta_1 - \beta_2 x_{t-1}$  (9.22)
et 1  yt 1  1  2 xt 1
(9.23)
yt  1 (1  )  2 xt  yt 1  2 xt 1  vt
(9.24)
ln( At )  3.899  .888ln( Pt )
(se)
(.092) (.259)
et  .422et 1  vt
(.166)
(9.25)
Now this transformed nonlinear model has errors that are uncorrelated over time
Principles of Econometrics, 3rd Edition
Slide 9-27
Solving this nonlinear regression is still based on minimizing the sum of squared errors, but it can no longer be done with closed-form formulas (because of the nonlinearity in the parameters), so we call it Generalised Least Squares again.

It can be shown that nonlinear least squares estimation of (9.24) is equivalent to using an iterative generalized least squares estimator called the Cochrane-Orcutt procedure. Details are provided in Appendix 9A.
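A minimal Stata sketch of both routes, assuming tsset data with the variables la and lp used elsewhere in these notes (prais with the corc option is the iterative Cochrane-Orcutt estimator; nl fits the transformed model (9.24) directly):

* hypothetical sketch: AR(1) error model for the sugar cane regression
prais la lp, corc                  // iterative Cochrane-Orcutt (feasible GLS)
generate la_1 = L.la               // lagged variables for the transformed model
generate lp_1 = L.lp
* nonlinear least squares on the transformed model (9.24)
nl (la = {b1}*(1-{rho}) + {b2}*lp + {rho}*la_1 - {rho}*{b2}*lp_1)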




The nonlinear least squares estimator only requires that the errors be stable (not necessarily stationary).

Other methods commonly used make stronger demands on the data, namely that the errors be covariance stationary.

Furthermore, the nonlinear least squares estimator gives you an unconditional estimate of the autocorrelation parameter and yields a simple t-test of the hypothesis of no serial correlation.

Monte Carlo studies show that it performs well in small samples as well.

But nonlinear least squares requires more computational power than linear estimation, though this is not much of a constraint these days.

Nonlinear least squares (and other nonlinear estimators) use numerical methods rather than analytical ones to find the minimum of your sum of squared errors objective function. The routines that do this are iterative.
$y_t = \beta_1(1-\rho) + \beta_2 x_t - \rho\beta_2 x_{t-1} + \rho y_{t-1} + v_t$  (9.26)

$y_t = \delta + \delta_0 x_t + \delta_1 x_{t-1} + \theta_1 y_{t-1} + v_t$  (9.27)

where $\delta = \beta_1(1-\rho)$, $\delta_0 = \beta_2$, $\delta_1 = -\rho\beta_2$, $\theta_1 = \rho$.

We should then test the above restrictions (testing nonlinear restrictions is a possibility):

$\hat{y}_t = 2.366 + .777\, x_t - .611\, x_{t-1} + .404\, y_{t-1}$  (9.28)
(se)   (.656)  (.280)  (.297)  (.167)
$y_t = \beta_1(1-\rho) + \beta_2 x_t - \rho\beta_2 x_{t-1} + \rho y_{t-1} + v_t$  (9.26)

$y_t = \delta + \delta_0 x_t + \delta_1 x_{t-1} + \theta_1 y_{t-1} + v_t$  (9.27)

The unrestricted model (9.27) can be estimated with OLS as long as $v_t$ is well behaved, because there are no nonlinearities in the parameters.

Check that our intuition of having dynamics given by lagged effects of errors boils down to having lagged effects of the dependent and independent variables!
9.4.1 Residual Correlogram

$H_0: \rho = 0 \quad H_1: \rho \neq 0$  (9.29)

$z = \sqrt{T}\, r_1 \sim N(0,1)$  (9.30)

$z = \sqrt{34} \times .404 = 2.36 > 1.96$, so we reject the null of no autocorrelation.
9.4.1 Residual Correlogram (more practically…)

An autocorrelation is significantly different from zero at the 5% level when

$r_1 > \dfrac{1.96}{\sqrt{T}}$ or $r_1 < -\dfrac{1.96}{\sqrt{T}}$, and more generally $r_k > \dfrac{1.96}{\sqrt{T}}$ or $r_k < -\dfrac{1.96}{\sqrt{T}}$  (9.31)

where the population autocorrelations are

$\rho_k = \dfrac{\mathrm{cov}(e_t, e_{t-k})}{\mathrm{var}(e_t)} = \dfrac{E(e_t e_{t-k})}{E(e_t^2)}$  (9.32)
Figure 9.4 Correlogram for Least Squares Residuals from Sugar Cane Example
(In GRETL: residual ACF and PACF plots with ±1.96/√T significance bands, lags 1 to 7.)
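For reference, a minimal Stata sketch that produces the analogous residual correlogram (variable names match the code later in these notes; the number of lags shown is an arbitrary choice):

* hypothetical sketch: residual correlogram with approximate 1.96/sqrt(T) bands
regress la lp
predict ehat2, residual
corrgram ehat2, lags(7)     // table of autocorrelations and Q statistics
ac ehat2, lags(7)           // ACF plot with confidence bands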
$y_t = \beta_1 + \beta_2 x_t + e_t$

$y_t = \beta_1(1-\rho) + \beta_2 x_t + \rho y_{t-1} - \rho\beta_2 x_{t-1} + v_t$

For this nonlinear model, then, the residuals should be uncorrelated.
Figure 9.5 Correlogram for Nonlinear Least Squares Residuals
from Sugar Cane Example with “whitened” error
Another way to determine whether or not your residuals
are autocorrelated is to use an LM (Lagrange multiplier) test.
For autocorrelation, this test is based on an auxiliary
regression where lagged OLS residuals are added
to the original regression equation. If the coefficient on the
lagged residual is significant then you conclude that the
model is autocorrelated.
$y_t = \beta_1 + \beta_2 x_t + \rho \hat{e}_{t-1} + v_t$  (9.33)

t = 2.439,  F = 5.949,  p-value = .021

We can derive the auxiliary regression for the LM test. Since the OLS fit gives $y_t = b_1 + b_2 x_t + \hat{e}_t$, substitute it into

$y_t = \beta_1 + \beta_2 x_t + \rho \hat{e}_{t-1} + \hat{v}_t$  (9.34)

to get $b_1 + b_2 x_t + \hat{e}_t = \beta_1 + \beta_2 x_t + \rho \hat{e}_{t-1} + \hat{v}_t$.

(This is the first result reported by GRETL, but it comes from the rearranged regression with the residual as the dependent variable, contrary to what the book suggests!)
$\hat{e}_t = (\beta_1 - b_1) + (\beta_2 - b_2) x_t + \rho \hat{e}_{t-1} + \hat{v}_t = \gamma_1 + \gamma_2 x_t + \rho \hat{e}_{t-1} + \hat{v}_t$  (9.35)

$LM = T \times R^2 = 34 \times .16101 = 5.474$

The residual is centered around zero, so the power of this test comes from the lagged error, which is what we care about. (This is the second result reported by GRETL.)
$\hat{e}_t = (\beta_1 - b_1) + (\beta_2 - b_2) x_t + \rho \hat{e}_{t-1} + \hat{v}_t = \gamma_1 + \gamma_2 x_t + \rho \hat{e}_{t-1} + \hat{v}_t$  (9.35)

$LM = T \times R^2 = 34 \times .16101 = 5.474$

This was the Breusch-Godfrey LM test for autocorrelation, and it is distributed chi-squared.

In STATA:
regress la lp
estat bgodfrey
There are other variants of this test. Note that it is only valid in large samples (theoretically infinitely big ones).

GRETL also reports the Ljung-Box Q’ test, which is useful even if you do not have a regression to work with and just want to analyze any given time series, but it is less powerful than the Breusch-Godfrey test when the null hypothesis of no autocorrelation is false.

The number of lags gives you the df for the chi-squared distribution and the smaller df for the F distributions you need.

See the appendix for the popular (if infamous!) Durbin-Watson test and its variants.
* --------------------------------------------------
* LM test
* --------------------------------------------------
regress la lp
estat bgodfrey
* --------------------------------------------------
* Note: we set ehat(1) = 0 in order to match the result in the text.
* In practice this is unnecessary.
* --------------------------------------------------
predict ehat, residual
gen ehat_1 = L.ehat
replace ehat_1 = 0 in 1
regress ehat lp ehat_1
di (e(N))*e(r2)

You should be able to work it out from these notes and by comparing to the STATA code!
In general, p should be large enough so that $v_t$ is white noise.

$y_t = \delta + \theta_1 y_{t-1} + \theta_2 y_{t-2} + \cdots + \theta_p y_{t-p} + v_t$  (9.36)

Inflation example:

$y_t = [\ln(CPI_t) - \ln(CPI_{t-1})] \times 100 \approx \dfrac{CPI_t - CPI_{t-1}}{CPI_{t-1}} \times 100$

Adding lagged values of y can serve to eliminate the error autocorrelation:

$\widehat{INFLN}_t = .1883 + .3733\, INFLN_{t-1} - .2179\, INFLN_{t-2} + .1013\, INFLN_{t-3}$  (9.37)
(se)      (.0253)  (.0615)  (.0645)  (.0613)
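A minimal Stata sketch of fitting this AR(3) by least squares, assuming a tsset series named infln (the variable name is an assumption):

* hypothetical sketch: AR(3) model for inflation estimated by OLS
regress infln L.infln L2.infln L3.infln
predict vhat, residual
corrgram vhat, lags(12)    // residuals should look like white noise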
Figure 9.6 Correlogram for Least Squares Residuals from AR(3) Model for Inflation
(The correlogram helps decide how many lags to include in the model.)
$y_t = \delta + \theta_1 y_{t-1} + \theta_2 y_{t-2} + \theta_3 y_{t-3} + v_t$  (9.38)

$y_{T+1} = \delta + \theta_1 y_T + \theta_2 y_{T-1} + \theta_3 y_{T-2} + v_{T+1}$

$\hat{y}_{T+1} = \hat{\delta} + \hat{\theta}_1 y_T + \hat{\theta}_2 y_{T-1} + \hat{\theta}_3 y_{T-2}$
$\qquad = .1883 + .3733 \times .4468 - .2179 \times .5988 + .1013 \times .3510 = .2602$
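As an illustration, a minimal Stata sketch of this one-step-ahead forecast, assuming the AR(3) regression of infln from the earlier sketch and that the data end at the last observed period:

* hypothetical sketch: one-step-ahead forecast from the estimated AR(3)
regress infln L.infln L2.infln L3.infln
scalar yhat1 = _b[_cons] + _b[L.infln]*infln[_N] ///
             + _b[L2.infln]*infln[_N-1] + _b[L3.infln]*infln[_N-2]
display "one-step-ahead forecast = " yhat1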
$\hat{y}_{T+2} = \hat{\delta} + \hat{\theta}_1 \hat{y}_{T+1} + \hat{\theta}_2 y_T + \hat{\theta}_3 y_{T-1}$
$\qquad = .1883 + .3733 \times .2602 - .2179 \times .4468 + .1013 \times .5988 = .2487$  (9.39)

The one-step forecast error is

$u_1 = y_{T+1} - \hat{y}_{T+1} = (\delta - \hat{\delta}) + (\theta_1 - \hat{\theta}_1) y_T + (\theta_2 - \hat{\theta}_2) y_{T-1} + (\theta_3 - \hat{\theta}_3) y_{T-2} + v_{T+1}$
Ignoring the error from estimating the coefficients:

$u_1 = v_{T+1}$  (9.40)

$u_2 = \theta_1 (y_{T+1} - \hat{y}_{T+1}) + v_{T+2} = \theta_1 u_1 + v_{T+2} = \theta_1 v_{T+1} + v_{T+2}$  (9.41)

$u_3 = \theta_1 u_2 + \theta_2 u_1 + v_{T+3} = (\theta_1^2 + \theta_2) v_{T+1} + \theta_1 v_{T+2} + v_{T+3}$  (9.42)
12  var(u1 )  v2
22  var(u2 )  v2 (1  12 )
32  var(u3 )  v2 [(12  2 )2  12  1]
 yˆ
T j
 1.96 ˆ j , yˆT  j  1.96 ˆ j 
Principles of Econometrics, 3rd Edition
(9.43)
Slide 9-52
This model is just a generalization of the ones previously discussed. In this model you include lags of the dependent variable (autoregressive) and the contemporaneous and lagged values of independent variables as regressors (distributed lags).

The acronym is ARDL(p,q), where p is the maximum autoregressive lag (on y) and q is the maximum distributed lag (on x), as in (9.45) below.
$y_t = \alpha + \beta_0 x_t + \beta_1 x_{t-1} + \beta_2 x_{t-2} + \cdots + \beta_q x_{t-q} + v_t, \quad t = q+1, \ldots, T$  (9.44)

$\beta_s = \dfrac{\partial E(y_t)}{\partial x_{t-s}}$

$x_t = [\ln(WAGE_t) - \ln(WAGE_{t-1})] \times 100 \approx \dfrac{WAGE_t - WAGE_{t-1}}{WAGE_{t-1}} \times 100$
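A minimal Stata sketch of a finite distributed lag regression like (9.44) with q = 3, assuming tsset series named infln and pcwage (the variable names are assumptions):

* hypothetical sketch: finite distributed lag of order 3
regress infln L(0/3).pcwage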
$y_t = \delta + \delta_0 x_t + \delta_1 x_{t-1} + \cdots + \delta_q x_{t-q} + \theta_1 y_{t-1} + \cdots + \theta_p y_{t-p} + v_t$  (9.45)

$y_t = \alpha + \beta_0 x_t + \beta_1 x_{t-1} + \beta_2 x_{t-2} + \beta_3 x_{t-3} + \cdots + e_t = \alpha + \sum_{s=0}^{\infty} \beta_s x_{t-s} + e_t$  (9.46)
Figure 9.7 Correlogram for Least Squares Residuals from
Finite Distributed Lag Model
$\widehat{INFLN}_t = .0989 + .1149\, PCWAGE_t + .0377\, PCWAGE_{t-1} + .0593\, PCWAGE_{t-2}$
$\qquad\qquad + .2361\, PCWAGE_{t-3} + .3536\, INFLN_{t-1} - .1976\, INFLN_{t-2}$  (9.47)
(se)  (.0288)  (.0761)  (.0812)  (.0812)  (.0829)  (.0604)  (.0604)
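A minimal Stata sketch of estimating an ARDL like (9.47), using the same assumed variable names infln and pcwage as in the sketch above:

* hypothetical sketch: ARDL with lags 0-3 of pcwage and lags 1-2 of infln
regress infln L(0/3).pcwage L(1/2).infln
predict vhat2, residual
corrgram vhat2, lags(12)    // residuals should now be close to white noise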
Figure 9.8 Correlogram for Least Squares Residuals from
Autoregressive Distributed Lag Model
$y_t = \delta + \delta_0 x_t + \delta_1 x_{t-1} + \delta_2 x_{t-2} + \delta_3 x_{t-3} + \theta_1 y_{t-1} + \theta_2 y_{t-2} + v_t$

$\hat{\beta}_0 = \hat{\delta}_0 = .1149$
$\hat{\beta}_1 = \hat{\theta}_1\hat{\beta}_0 + \hat{\delta}_1 = .3536 \times .1149 + .0377 = .0784$
$\hat{\beta}_2 = \hat{\theta}_1\hat{\beta}_1 + \hat{\theta}_2\hat{\beta}_0 + \hat{\delta}_2 = .0643$
$\hat{\beta}_3 = \hat{\theta}_1\hat{\beta}_2 + \hat{\theta}_2\hat{\beta}_1 + \hat{\delta}_3 = .2434$
$\hat{\beta}_4 = \hat{\theta}_1\hat{\beta}_3 + \hat{\theta}_2\hat{\beta}_2 = .0734$
Figure 9.9 Distributed Lag Weights for Autoregressive
Distributed Lag Model
Keywords:
autocorrelation
autoregressive distributed lag models
autoregressive error
autoregressive model
correlogram
delay multiplier
distributed lag weight
dynamic models
finite distributed lag
forecast error
forecasting
HAC standard errors
impact multiplier
infinite distributed lag
interim multiplier
lag length
lagged dependent variable
LM test
nonlinear least squares
sample autocorrelation function
standard error of forecast error
total multiplier
T × R² form of LM test
$y_t = \beta_1 + \beta_2 x_t + e_t$
$e_t = \rho e_{t-1} + v_t$

$y_t = \beta_1 + \beta_2 x_t + \rho(y_{t-1} - \beta_1 - \beta_2 x_{t-1}) + v_t$  (9A.1)

$y_t - \rho y_{t-1} = \beta_1(1-\rho) + \beta_2(x_t - \rho x_{t-1}) + v_t$  (9A.2)

Define the transformed variables $y_t^* = y_t - \rho y_{t-1}$, $x_{t2}^* = x_t - \rho x_{t-1}$, $x_{t1}^* = 1 - \rho$, so that

$y_t^* = x_{t1}^* \beta_1 + x_{t2}^* \beta_2 + v_t$  (9A.3)

$y_t - \beta_1 - \beta_2 x_t = \rho(y_{t-1} - \beta_1 - \beta_2 x_{t-1}) + v_t$  (9A.4)
$y_1 = \beta_1 + x_1 \beta_2 + e_1$

Multiplying through by $\sqrt{1-\rho^2}$:

$\sqrt{1-\rho^2}\, y_1 = \sqrt{1-\rho^2}\, \beta_1 + \sqrt{1-\rho^2}\, x_1 \beta_2 + \sqrt{1-\rho^2}\, e_1$  (9A.5)

$y_1^* = x_{11}^* \beta_1 + x_{12}^* \beta_2 + e_1^*$  (9A.6)

where $y_1^* = \sqrt{1-\rho^2}\, y_1$, $x_{11}^* = \sqrt{1-\rho^2}$, $x_{12}^* = \sqrt{1-\rho^2}\, x_1$, and $e_1^* = \sqrt{1-\rho^2}\, e_1$.
$\mathrm{var}(e_1^*) = (1-\rho^2)\,\mathrm{var}(e_1) = (1-\rho^2)\,\dfrac{\sigma_v^2}{1-\rho^2} = \sigma_v^2$
$H_0: \rho = 0 \qquad H_1: \rho > 0$

$d = \dfrac{\sum_{t=2}^{T} (\hat{e}_t - \hat{e}_{t-1})^2}{\sum_{t=1}^{T} \hat{e}_t^2}$  (9B.1)
$d = \dfrac{\sum_{t=2}^{T} \hat{e}_t^2 + \sum_{t=2}^{T} \hat{e}_{t-1}^2 - 2\sum_{t=2}^{T} \hat{e}_t \hat{e}_{t-1}}{\sum_{t=1}^{T} \hat{e}_t^2} = \dfrac{\sum_{t=2}^{T} \hat{e}_t^2}{\sum_{t=1}^{T} \hat{e}_t^2} + \dfrac{\sum_{t=2}^{T} \hat{e}_{t-1}^2}{\sum_{t=1}^{T} \hat{e}_t^2} - \dfrac{2\sum_{t=2}^{T} \hat{e}_t \hat{e}_{t-1}}{\sum_{t=1}^{T} \hat{e}_t^2} \approx 1 + 1 - 2 r_1$  (9B.2)
$d \approx 2(1 - r_1)$  (9B.3)

Reject $H_0$ if $d \le d_c$, where $d_c$ is the appropriate critical value.
Figure 9A.1:
The Durbin-Watson test:

used to be the standard and is exact (it works in small samples);
comes out of almost every software package’s default output;
it is easy to calculate;
but the critical values depend on the values of the explanatory variables, so unless you have a computer to look up the exact distribution you are stuck;
Durbin and Watson did provide upper and lower bounds for the critical values, but then the test may end up being inconclusive.
Figure 9A.2:
The Durbin-Watson bounds test:

if $d < d_{Lc}$, reject $H_0: \rho = 0$ and accept $H_1: \rho > 0$;
if $d > d_{Uc}$, do not reject $H_0: \rho = 0$;
if $d_{Lc} < d < d_{Uc}$, the test is inconclusive.

The size of the inconclusive region shrinks with the sample size.
The Durbin-Watson test used to be the standard, but:

it cannot handle lagged values of the dependent variable (it is biased in such models, so that autocorrelation is underestimated);
for large samples, however, one can compute the unbiased, normally distributed h-statistic or Durbin’s alternative test statistic;
but then the Breusch-Godfrey test is much more powerful instead.
The Durbin-Watson test used to be the standard, but:

it assumes normality of the errors;
it cannot handle more than one lag (it detects only AR(1) autocorrelation).
Also remember that the Durbin-Watson test assumes:

that there is an intercept in the model;
that the X regressors are nonstochastic;
that there are no missing values in the series.
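For reference, a minimal Stata sketch of these tests after an OLS regression, again assuming tsset data and the variables la and lp from the code above:

* hypothetical sketch: Durbin-Watson and related tests after OLS
regress la lp
estat dwatson            // Durbin-Watson d statistic
estat durbinalt          // Durbin's alternative test (valid with lagged y)
estat bgodfrey, lags(1)  // Breusch-Godfrey LM test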
The infinite distributed lag model

$y_t = \alpha + \beta_0 x_t + \beta_1 x_{t-1} + \beta_2 x_{t-2} + \beta_3 x_{t-3} + \cdots + e_t = \alpha + \sum_{s=0}^{\infty} \beta_s x_{t-s} + e_t$

and the ARDL model

$y_t = \delta + \delta_0 x_t + \delta_1 x_{t-1} + \cdots + \delta_q x_{t-q} + \theta_1 y_{t-1} + \cdots + \theta_p y_{t-p} + v_t$
$y_t = \delta + \delta_0 x_t + \theta_1 y_{t-1} + v_t$  (9C.1)

$y_{t-1} = \delta + \delta_0 x_{t-1} + \theta_1 y_{t-2}$  (9C.2)

Substituting (ignoring the error terms for simplicity):

$y_t = \delta + \delta_0 x_t + \theta_1 y_{t-1} = \delta + \delta_0 x_t + \theta_1(\delta + \delta_0 x_{t-1} + \theta_1 y_{t-2})$
$\quad = \delta + \theta_1\delta + \delta_0 x_t + \theta_1\delta_0 x_{t-1} + \theta_1^2 y_{t-2}$
$y_t = \delta + \theta_1\delta + \delta_0 x_t + \theta_1\delta_0 x_{t-1} + \theta_1^2(\delta + \delta_0 x_{t-2} + \theta_1 y_{t-3})$
$\quad = \delta + \theta_1\delta + \theta_1^2\delta + \delta_0 x_t + \theta_1\delta_0 x_{t-1} + \theta_1^2\delta_0 x_{t-2} + \theta_1^3 y_{t-3}$

Repeating the substitution $j$ times:

$y_t = \delta(1 + \theta_1 + \theta_1^2 + \cdots + \theta_1^j) + \delta_0 \sum_{s=0}^{j} \theta_1^s x_{t-s} + \theta_1^{j+1} y_{t-(j+1)}$  (9C.3)

Letting $j \to \infty$ (assuming $|\theta_1| < 1$):

$y_t = \alpha + \delta_0 \sum_{s=0}^{\infty} \theta_1^s x_{t-s}$  (9C.4)

where

$\alpha = \delta(1 + \theta_1 + \theta_1^2 + \cdots) = \dfrac{\delta}{1 - \theta_1}$

Comparing with the infinite distributed lag form

$y_t = \alpha + \sum_{s=0}^{\infty} \beta_s x_{t-s} + e_t$
$\beta_s = \delta_0 \theta_1^s$

$\sum_{s=0}^{\infty} \beta_s = \delta_0 (1 + \theta_1 + \theta_1^2 + \cdots) = \dfrac{\delta_0}{1 - \theta_1}$
$y_t = \delta + \delta_0 x_t + \delta_1 x_{t-1} + \delta_2 x_{t-2} + \delta_3 x_{t-3} + \theta_1 y_{t-1} + \theta_2 y_{t-2} + v_t$  (9C.5)

$\beta_0 = \delta_0$
$\beta_1 = \theta_1 \beta_0 + \delta_1$
$\beta_2 = \theta_1 \beta_1 + \theta_2 \beta_0 + \delta_2$  (9C.6)
$\beta_3 = \theta_1 \beta_2 + \theta_2 \beta_1 + \delta_3$
$\beta_4 = \theta_1 \beta_3 + \theta_2 \beta_2$
$\beta_s = \theta_1 \beta_{s-1} + \theta_2 \beta_{s-2} \quad \text{for } s \ge 4$
A simple forecast is the average of the most recent values:

$\hat{y}_{T+1} = \dfrac{y_T + y_{T-1} + y_{T-2}}{3}$  (9D.1)

Exponential smoothing instead weights recent observations more heavily:

$\hat{y}_{T+1} = \alpha y_T + \alpha(1-\alpha) y_{T-1} + \alpha(1-\alpha)^2 y_{T-2} + \cdots$  (9D.2)

which can be computed recursively as

$\hat{y}_{T+1} = \alpha y_T + (1-\alpha)\hat{y}_T$
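A minimal Stata sketch of exponential smoothing for a tsset series y, with two illustrative values of the smoothing parameter (the variable name and the alpha values are assumptions, chosen only to mirror the figure below):

* hypothetical sketch: exponential smoothing with two alternative alphas
tssmooth exponential sm1 = y, parms(0.38)
tssmooth exponential sm2 = y, parms(0.80)
tsline y sm1 sm2        // compare the smoothed series to the original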
Figure 9A.3: Exponential Smoothing Forecasts for two alternative values of α