Econ 240C
Lecture 16
May 29, 2007

I. Autoregressive Conditional Heteroskedastic (ARCH) Models
Many economic time series are nonstationary in mean and
variance. Other features that some economic time series
exhibit are episodes of unusually high variance which may
persist for awhile. One way of modeling these features is to
model the variance as well as the series.
In forecasting an economic time series, we have seen
the importance of using conditional forecasts, for example,
one period ahead forecasts conditional on all current and
past knowledge. In the same way, if the variance is not
constant, conditional forecasts of the variance can be
important to the forecaster, especially in situations where
risk is important. An example is portfolio analysis where
forecasts of the mean return for the holding period as well
as the variance for the holding period are critical to the
decision maker.
A. Autoregressive Error Variance Model
Suppose, for example, that the time series is an AR(1):
y(t) = a0 + a1 y(t-1) + e(t)
where the error has mean zero and,
ê2(t) = ê2(t-1) + ê2(t-2) + ... + WN(t).
If the parameters etc. are zero then the expected
estimated variance is constant or homoskedastic:
Et-1[ê2(t)] = 
Engle Multiplicative ARCH Model
Suppose the error process, e(t) has a multiplicative
structure:
Econ 240C
Lecture 16
2
May 29, 2007
e(t) = WN(t)√[e2(t-1)]
where the mean of the white noise series is zero and its
variance is one, the white noise and lagged error, e(t-1),
are independent, and  is greater than zero and  lies
between zero and one. The mean of the error process, e(t),
conditional or unconditional will be zero. The error process
will not be serially correlated, and its unconditional
variance will be constant. However, the conditional variance
of the error process will be autoregressive of order one,
i.e. ARCH(1).
Since WN(t) and e(t-1) are independent, their joint
density, f{WN(t), e(t-1)}, will be the product of the
marginal densities:
f{WN(t),e(t-1)} = g{WN(t)}h{e(t-1)}.
1. The unconditional expectation of e(t) is:
E[e(t)] = ∫∫ WN(t)√[α0 + α1 e2(t-1)] f{WN(t),e(t-1)} dWN de
= ∫ WN(t)g{WN(t)} dWN ∫ √[α0 + α1 e2(t-1)]h{e(t-1)} de
= E[WN(t)] E{√[α0 + α1 e2(t-1)]}
= 0
since white noise has mean zero.
2.
The covariance of e(t) and e(t-1) will be zero since
E[e(t)e(t-1)] = E{WN(t)√[α0 + α1 e2(t-1)] WN(t-1)√[α0 + α1 e2(t-2)]}
and using independence of WN(t) and e(t-1),
= E{WN(t)WN(t-1)} E{√[α0 + α1 e2(t-1)]√[α0 + α1 e2(t-2)]}
=0
since E{WN(t)WN(t-1)} is zero.
3.
The unconditional variance of the error is:
E[e(t)]2 = E{[WN(t)]2[α0 + α1 e2(t-1)]}
and by independence,
E[e(t)]2 = E[WN(t)]2 E[α0 + α1 e2(t-1)]
= 1 · [α0 + α1 E e2(t-1)],
and since for the unconditional variance it is true that:
E[e(t)]2 = E[e(t-1)]2 ,
the unconditional variance is constant:
E[e(t)]2 = α0/(1 - α1)
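As a check, this unconditional variance can be verified by simulation. The sketch below uses α0 = 1 and α1 = 0.7, the values used in the simulation of Section II; note that convergence of the sample variance is slow here because α1 = 0.7 implies an infinite fourth moment for e(t).

```python
import math
import random

# Simulate a long ARCH(1) error series e(t) = WN(t)*sqrt(a0 + a1*e(t-1)**2)
# and compare its sample variance with the unconditional value a0/(1 - a1).
random.seed(0)
a0, a1 = 1.0, 0.7
T = 200_000
e_prev, sum_sq = 0.0, 0.0
for _ in range(T):
    wn = random.gauss(0.0, 1.0)               # white noise: mean 0, variance 1
    e = wn * math.sqrt(a0 + a1 * e_prev ** 2)
    sum_sq += e * e
    e_prev = e

sample_var = sum_sq / T
# theory: unconditional variance = a0/(1 - a1) = 3.33...
print(sample_var)
```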
4.
The conditional mean of the error is, using
independence of WN(t) and e(t-1):
Et-1[e(t)] = Et-1[WN(t)] Et-1{√[α0 + α1 e2(t-1)]}
= 0
since the conditional expected value of white noise is zero.
5.
The conditional variance of the error is
Et-1[e(t)]2 = Et-1{[WN(t)]2[α0 + α1 e2(t-1)]}
and by independence:
= Et-1[WN(t)]2 [α0 + α1 e2(t-1)]
= 1 · [α0 + α1 e2(t-1)],
so the one period ahead forecast of the variance is
autoregressive of order one and will be persistent. A shock
to the system, e(t-1), will increase the one period ahead
forecast of the variance and this larger variance will tend
to persist since the variance is autoregressive. The closer
α1 is to one, the longer the episode of high variance will
tend to persist.
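This persistence can be made concrete by iterating the ARCH(1) recursion to get the k-period-ahead variance forecast, Et[e2(t+k)] = α0(1 + α1 + ... + α1^(k-1)) + α1^k e2(t). A small sketch (the parameter values and the size of the shock are illustrative choices, not from the text):

```python
# k-step-ahead conditional variance forecast for an ARCH(1) error:
#   E_t[e2(t+k)] = a0*(1 + a1 + ... + a1**(k-1)) + a1**k * e2(t)
# A shock decays geometrically at rate a1 toward a0/(1 - a1).
def arch1_variance_forecast(a0, a1, e2_t, k):
    geom = sum(a1 ** j for j in range(k))     # 1 + a1 + ... + a1**(k-1)
    return a0 * geom + a1 ** k * e2_t

a0, shock = 1.0, 25.0                          # a large squared shock e2(t)
for a1 in (0.5, 0.95):
    path = [arch1_variance_forecast(a0, a1, shock, k) for k in (1, 5, 10)]
    print(a1, [round(v, 2) for v in path])     # decay is much slower for a1 = 0.95
```

With a1 = 0.5 the forecast is nearly back to the unconditional variance a0/(1 - a1) = 2 by ten periods, while with a1 = 0.95 it is still far above it.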
II. Simulation of an ARCH(1) Time Series
Many economic time series are autoregressive of the
first order. Suppose we have an autoregressive time series:
y(t) = 0.9 y(t-1) + e(t)
where the conditional variance of e(t) is also
autoregressive
e(t) = WN(t)√[1 + 0.7e2(t-1)] .
A sample of 100 observations of white noise can be computer
generated and used to generate 100 observations of the
error, e(t), where e(0) is set to zero to initiate the error
series, and y(0) is set to zero to initiate the ARCH(1) time
series.
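A minimal Python sketch of this simulation (Python standing in for the software used in the lecture; the seed is an arbitrary choice):

```python
import math
import random

# Simulate the Section II design: y(t) = 0.9*y(t-1) + e(t), where
# e(t) = WN(t)*sqrt(1 + 0.7*e(t-1)**2), with e(0) = y(0) = 0.
random.seed(1)                     # arbitrary seed; any run shows the same features
T = 100
e, y = [0.0], [0.0]
for t in range(1, T + 1):
    wn = random.gauss(0.0, 1.0)
    e.append(wn * math.sqrt(1.0 + 0.7 * e[-1] ** 2))
    y.append(0.9 * y[-1] + e[-1])

# episodes of high variance show up as bursts in |e(t)|
print(max(abs(v) for v in e))
```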
Plots of the white noise, the error, the autocorrelation and
partial autocorrelation of the error squared, and the
ARCH(1) series follow.
__________________________________________________________
[Figure: One Hundred Observations of Simulated White Noise (WN)]
[Figure: One Hundred Simulated Observations of an Error Series with an
Autoregressive Variance (ERROR)]
IDENT ERRORSQ
SMPL range: 1 - 100
Number of observations: 100
____________________________________________________________
Lag   Autocorrelations (ac)   Partial Autocorrelations (pac)
____________________________________________________________
 1          0.629                      0.629
 2          0.280                     -0.190
 3          0.106                      0.027
 4         -0.018                     -0.096
 5         -0.048                      0.039
____________________________________________________________
Q-Statistic (5 lags): 48.755
S.E. of Correlations: 0.100
____________________________________________________________
[Figure: One Hundred Observations of a Simulated ARCH(1) Time Series:
y(t) = 0.9 y(t-1) + error (Y)]
_____________________________________________________
Note from the figures above that although we start with
white noise, the error has a conditional variance which is
heteroskedastic and autoregressive of the first order and
the autoregressive time series itself has some episodes of
high variance, especially relative to the first thirty
observations.
It is especially important to note that the error looks
like white noise, as illustrated by the following
autocorrelation and partial autocorrelation functions. It is
the error squared that is autoregressive.
IDENT ERROR
SMPL range: 1 - 100
Number of observations: 100
____________________________________________________________
Lag   Autocorrelations (ac)   Partial Autocorrelations (pac)
____________________________________________________________
 1         -0.204                     -0.204
 2          0.040                     -0.002
 3         -0.087                     -0.083
 4          0.097                      0.066
 5         -0.002                      0.034
____________________________________________________________
Q-Statistic (5 lags): 6.003
S.E. of Correlations: 0.100
____________________________________________________________
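This contrast between the two correlograms can be reproduced in a few lines: the sample autocorrelation of e(t) is near zero while that of e2(t) is clearly positive. A stdlib-only sketch; it uses α1 = 0.5 here (my choice, so that fourth moments exist and the sample autocorrelations are well behaved) and a long sample so the contrast is not obscured by sampling noise:

```python
import math
import random

def acf(x, k):
    """Sample autocorrelation of x at lag k."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x) / n
    ck = sum((x[t] - m) * (x[t - k] - m) for t in range(k, n)) / n
    return ck / c0

# simulate an ARCH(1) error: e(t) = WN(t)*sqrt(1 + 0.5*e(t-1)**2)
random.seed(2)
e = [0.0]
for _ in range(5000):
    wn = random.gauss(0.0, 1.0)
    e.append(wn * math.sqrt(1.0 + 0.5 * e[-1] ** 2))

e2 = [v * v for v in e]
# e looks white; e squared is autocorrelated (theory: corr(e2_t, e2_{t-1}) = a1)
print(round(acf(e, 1), 3), round(acf(e2, 1), 3))
```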
III. Generalized Autoregressive Conditional Heteroskedastic
(GARCH) Models
Bollerslev generalized these models to allow for an ARMA
structure in the error variance, capturing the benefits of
parsimony in the number of parameters needed to fit the
error structure. This was especially important because of
the persistence in the error variance of time series such as
the inflation rate, requiring a distributed lag of past
error variances.
In the Bollerslev model:
e(t) = WN(t)√h(t)
where
h(t) = α0 + Σ(i=1,q) αi e2(t-i) + Σ(i=1,p) βi h(t-i)
A. The Unconditional Mean
Presuming independence between WN(t) and e(t-i), i≥1
then
E[e(t)] = E[WN(t)] E[√h(t)] = 0
since white noise has mean function equal to zero.
B. The Conditional Mean
Once again assuming independence,
Et-1[e(t)] = Et-1[WN(t)] Et-1[√h(t)] = 0
since the one period ahead forecast for white noise is zero.
C. The Conditional Variance
Once again assuming independence
Et-1[e(t)]2 = Et-1[WN(t)]2 Et-1[h(t)]
= Et-1[h(t)] = h(t)
Econ 240C
Lecture 16
9
May 29, 2007
= ie2(t-i) + ih(t-i)
since the one period ahead forecast for the variance of
white noise is one, and the period ahead forecast of h(t)
depends on information known at time t-1.
The empirical procedure for GARCH models is (1) to
estimate the appropriate model for the economic time series,
(2) identify the residuals to make sure they are white, and
(3) identify the square of the residuals to see if there is
a GARCH error structure.
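The h(t) recursion at the heart of the model is easy to write out. A sketch of a GARCH(1,1), h(t) = α0 + α1 e2(t-1) + β1 h(t-1); the parameter values below are illustrative choices, not from the lecture, with α1 + β1 < 1 so the unconditional variance is finite:

```python
import math
import random

# GARCH(1,1) conditional variance recursion:
#   h(t) = a0 + a1*e(t-1)**2 + b1*h(t-1)
# With a0 = 0.1, a1 = 0.15, b1 = 0.8 (illustrative), the unconditional
# variance is a0/(1 - a1 - b1) = 2.0.
random.seed(3)
a0, a1, b1 = 0.1, 0.15, 0.8
h = a0 / (1.0 - a1 - b1)        # start the recursion at the unconditional variance
e = 0.0
sum_sq, T = 0.0, 20_000
for _ in range(T):
    h = a0 + a1 * e ** 2 + b1 * h
    e = random.gauss(0.0, 1.0) * math.sqrt(h)
    sum_sq += e * e

print(sum_sq / T)               # sample variance, close to 2.0
```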
IV. Maximum Likelihood Estimation of ARCH Models
It is possible to use EViews/TSP to estimate an ARMA
model and if the error is ARCH, estimate an AR model for the
error variance. The estimated ARMA model can be used to
forecast the time series and the estimated AR model of the
error squared can be used to forecast the error variance for
construction of confidence intervals. This procedure
estimates the ARMA model separately from the AR model for
the error square. Consequently, more efficient estimators
can be obtained by using maximum likelihood methods to
estimate these relationships jointly.
A. Homoskedastic Variance
Suppose
(1) y(t) = b y(t-1) + e(t)
(2) e(t) = WN(t)√a0
In this case, the errors are normal, the unconditional mean
of e(t) is zero and the unconditional variance is a0. Since
the errors are independent from one another, i.e. they are
Econ 240C
Lecture 16
10
May 29, 2007
orthogonal, the density function for the sample of errors,
i.e. the unconditional likelihood function for the sample,
L, is the product of the normal densities for the errors:
L = 1,T (2)-1/2(a0)-1/2 exp[-1/2{(e(i) - 0)/√a0}2
L = (2)-T/2(a0)-T/2 exp[-(1/2)(1/a0)1,T[e(i)]2
and the logarithm of the likelihood function is:
ln L= -(T/2)ln(2)-(T/2)ln(a0)-(1/2)(1/a0)1,T[y(i)-b y(i-1)]2
The derivative with respect to b is:
∂(ln L)/∂b = -(1/2)(1/a0)1,T[y(i)-b y(i-1)][-y(i-1)] = 0
and the estimator for b can be calculated by formula:
b = {1,T[y(i)y(i-1)}/{1,T[y(i-1)y(i-1)}
The derivative with respect to a0 is:
∂(ln L)/∂ a0 = -(T/2) (a0)-1 +(1/2)( (a0)-2 1,T[e(i)]2
= 0
and the estimator for a0 can be calculated by formula as
well:
â0 = {1,T[e(i)]2}/T
These are the usual regression estimators.
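These closed-form estimators can be checked on simulated data. The sketch below (b = 0.8 and a0 = 2 are illustrative values) recovers the parameters with the two formulas:

```python
import random

# Closed-form ML estimators from the homoskedastic case:
#   b_hat  = sum(y(i)*y(i-1)) / sum(y(i-1)**2)
#   a0_hat = sum(e_hat(i)**2) / T
random.seed(4)
b_true, a0_true = 0.8, 2.0
y = [0.0]
for _ in range(20_000):
    y.append(b_true * y[-1] + random.gauss(0.0, a0_true ** 0.5))

num = sum(y[i] * y[i - 1] for i in range(1, len(y)))
den = sum(y[i - 1] ** 2 for i in range(1, len(y)))
b_hat = num / den
resid2 = [(y[i] - b_hat * y[i - 1]) ** 2 for i in range(1, len(y))]
a0_hat = sum(resid2) / len(resid2)
print(round(b_hat, 2), round(a0_hat, 2))   # close to (0.8, 2.0)
```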
B. Heteroskedastic Variance
Suppose
(1) y(t) = b y(t-1) + e(t)
(2) e(t) = WN(t)√h(t)
(3) h(t) = [a0a1e2(t-1)]
For initial values of the parameters, b*, a0*, a1*, the
errors, e(t), for the sample can be calculated from (1).
Conditional on information at time t-1, the scale factor
√[a0* + a1* e2(t-1)] is a known quantity, and for any given
t, a constant times white noise. Hence these conditional
errors are normal and the conditional likelihood function,
CL, for the sample is:
CL = Π(t=2,T) (2π)-1/2 h(t)-1/2 exp[-(1/2){(e(t) - 0)/√h(t)}2]
= Π(t=2,T) (2π)-1/2 (a0* + a1* e2(t-1))-1/2 exp[-(1/2) e2(t)/(a0* + a1* e2(t-1))]
and the logarithm of the conditional likelihood function is:
ln CL = -((T-1)/2)ln(2π) - (1/2) Σ(t=2,T) ln(a0* + a1* e2(t-1))
- (1/2) Σ(t=2,T) [y(t) - b* y(t-1)]2/(a0* + a1* e2(t-1))
For the initial values of the parameters, b*, a0*, a1*
we will obtain a value for the log conditional likelihood
function and then will have to search this three dimensional
parameter space for second iteration values, b** a0**, a1**
that will raise the conditional likelihood, and continue to
iterate until we find the maximum. There are algorithms that
perform these calculations. One is available on Regression
Analysis of Time Series (RATS), version 3.1 and later.
One hundred observations of the simulated variable y
with ARCH(1) errors was written to an ASCII file and copied
to the RATS subdirectory of my Model 80. The 386 version
RATS 3.10 for DOS was opened using the command RATS386. I
had typed the ASCII batch file ARCH.COD, which was in the
subdirectory as well.
> RATS386
<ALT> F      these keys struck simultaneously open the file menu
source       is selected from the file menu
ARCH.COD     is typed in the file name box
OK (button)  runs the batch file
The batch program and RATS output follow:
...........................................................
allocate 0 100
open data simarc.dat
data(org=obs) / y
smpl 2 100
set u = 0.0
set v = 0.0
nonlin b0 b1 a0 a1
frml regresid = y(t)-b0-b1*y(t-1)
frml archvar = a0+a1*u(t-1)**2
frml archlogl = -0.5*(log(v(t)=archvar(t))+(u(t)=regresid(t))**2/v(t))
eval b0 = 0.0; eval b1 = 0.9
eval a0 = 1; eval a1 = 0.7
maximize(method=bhhh,recursive,iterations=80) archlogl
.......................................................................................................................................
NOTE: the smpl statement is critical since [e(t-1)]2 is required and will not be available
if the sample starts with observation # 1.
** ALGORITHM DID NOT CONVERGE IN 80 STEPS **
ON THE LAST ITERATION THE CRITERION WAS 0.1249819E-01
NON-LINEAR MAXIMIZATION - ALGORITHM BHHH
TOTAL OBSERVATIONS  99     SKIPPED/MISSING     0
USABLE OBSERVATIONS 99     DEGREES OF FREEDOM 95
FINAL FUNCTION VALUE -0.20000000E+51
NO.  LABEL  VAR  LAG  COEFFICIENT    STAND. ERROR   T-STATISTIC
***  *****  ***  ***  ************   ************   ************
1    B0     1    0    0.8221595E-02  0.8467668E-01  0.9709397E-01
2    B1     2    0    0.8997189      0.2401051E-01  37.47188
3    A0     3    0    1.029126       0.1932176      5.326254
4    A1     4    0    0.7332557      0.1714210      4.277515
...............................................................................................................................................
Note: the coefficients are close to the values used in the simulation. Good initial values
are important, otherwise many iterations may be required.
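For readers without RATS, the objective being maximized can be written out directly. The sketch below restates the batch file's archlogl formula in Python (dropping the constant -0.5 ln(2π) per observation, as the batch file does) and checks that the likelihood is higher at the true parameter values than at a badly mis-specified point; a numerical optimizer would then climb this surface, much as RATS's BHHH algorithm does:

```python
import math
import random

# Conditional log likelihood of an AR(1) model with ARCH(1) errors:
#   u(t) = y(t) - b0 - b1*y(t-1),  h(t) = a0 + a1*u(t-1)**2
#   ln CL = sum_t -0.5*( log(h(t)) + u(t)**2/h(t) )
# (u(0) is initialized at 0, as in the RATS batch file's "set u = 0.0")
def arch1_loglik(y, b0, b1, a0, a1):
    u_prev, ll = 0.0, 0.0
    for t in range(1, len(y)):
        u = y[t] - b0 - b1 * y[t - 1]
        h = a0 + a1 * u_prev ** 2
        ll += -0.5 * (math.log(h) + u * u / h)
        u_prev = u
    return ll

# simulate data from the lecture's design, then compare the likelihood at
# the true parameters with a mis-specified point (b1 = 0, a1 = 0)
random.seed(5)
e, y = 0.0, [0.0]
for _ in range(500):
    e = random.gauss(0.0, 1.0) * math.sqrt(1.0 + 0.7 * e * e)
    y.append(0.9 * y[-1] + e)

print(arch1_loglik(y, 0.0, 0.9, 1.0, 0.7) > arch1_loglik(y, 0.0, 0.0, 1.0, 0.0))
```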
V. ARCH-M Models
The ARCH-M model relates return to risk. As in the
capital asset model, the net return to an asset varies with
an expected risk premium and an asset specific shock:
y(t) = u(t) + e(t)
where, taking conditional expectations:
Et-1 y(t) = u(t) + 0
and
the expected risk premium varies with the conditional
variance of the shock:
u(t) = h(t)
and the shock to the asset is ARCH, h(t) = e2(t-1) + ..
i.e. the error, e(t) has the properties:
e(t) = WN(t) √h(t)
Et-1[e(t)] = Et-1[WN(t)] Et-1[√h(t)] = 0
Et-1[e(t)]2 = Et-1[WN(t)]2 Et-1[√h(t)]2 =
h(t)
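A small simulation sketch of the ARCH-M mechanism (the values of the risk-premium coefficient and the ARCH parameters below are illustrative choices, not from the lecture): since the expected return rises with the conditional variance, periods of high conditional variance should show higher average returns.

```python
import math
import random

# ARCH-M sketch: y(t) = delta*h(t) + e(t), with h(t) = a0 + a1*e(t-1)**2
# and e(t) = WN(t)*sqrt(h(t)).  delta, a0, a1 are illustrative.
random.seed(6)
delta, a0, a1 = 0.5, 1.0, 0.6
e, rows = 0.0, []
for _ in range(20_000):
    h = a0 + a1 * e * e
    e = random.gauss(0.0, 1.0) * math.sqrt(h)
    rows.append((h, delta * h + e))           # (conditional variance, return)

# average return should be higher in high-variance periods, since
# the conditional mean of the return is delta*h(t)
hi = [r for h, r in rows if h > 2.0]
lo = [r for h, r in rows if h <= 2.0]
print(sum(hi) / len(hi) > sum(lo) / len(lo))
```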