Lecture 8 – Linear Processes

(Reference – 6.1, Hayashi)
In this part of the course we will study a class of linear models, called "linear time series models," that are designed specifically for modeling the dynamic behavior of time series. These include moving-average (MA), autoregressive (AR), and autoregressive-moving-average (ARMA) models. We will focus on univariate models, but they have very natural and very important vector generalizations (VMA, VAR, and VARMA models), which we consider in Econ 674.
These time series models are useful for a variety of reasons in applied econometrics:
• modeling the serial correlation in the disturbances of a regression model
• out-of-sample forecasting (univariate; multivariate)
• providing information about the dynamic properties of a time series variable
• the vector versions of these models have become much more prominent than traditional simultaneous equation systems for studying the structural relationships in macroeconomic systems
The basic building block in time series modeling is the white noise process, {ε_t}:
E(ε_t) = 0 for all t
E(ε_t²) = σ² > 0 for all t
E(ε_t ε_{t−s}) = 0 for all t and all s ≠ 0
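A minimal simulation sketch in Python/NumPy (Gaussian draws are just one convenient example of white noise; the definition does not require normality):

import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
T = 100_000

# i.i.d. draws satisfy all three conditions above: zero mean, constant
# variance sigma^2, and zero autocovariance at every nonzero lag.
eps = rng.normal(loc=0.0, scale=sigma, size=T)

print(eps.mean())                   # approximately 0
print(eps.var())                    # approximately sigma^2 = 4
print(np.mean(eps[1:] * eps[:-1]))  # approximately 0 (lag-1 autocovariance)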
Definition – MA(q) process
The stochastic process {y_t} is called a q-th order moving average process (i.e., an MA(q) process) if:
y_t = μ + θ_0 ε_t + θ_1 ε_{t−1} + … + θ_q ε_{t−q}, where θ_0 = 1
Note that an MA(q) process is a covariance stationary process.
E(y_t) = E[μ + θ_0 ε_t + θ_1 ε_{t−1} + … + θ_q ε_{t−q}]
= μ + θ_0 E(ε_t) + θ_1 E(ε_{t−1}) + … + θ_q E(ε_{t−q})
= μ
Var(y_t) = E[(y_t − μ)²] = E[(θ_0 ε_t + θ_1 ε_{t−1} + … + θ_q ε_{t−q})²]
= (1 + θ_1² + … + θ_q²) σ²,
since E(ε_i ε_j) = 0 for i ≠ j
γ_1 = Cov(y_t, y_{t−1}) = Cov(y_t, y_{t+1})
= E[(θ_0 ε_t + θ_1 ε_{t−1} + … + θ_q ε_{t−q})(θ_0 ε_{t−1} + θ_1 ε_{t−2} + … + θ_q ε_{t−q−1})]
= (θ_1 θ_0 + θ_2 θ_1 + … + θ_q θ_{q−1}) σ²
More generally –
γ_j = Cov(y_t, y_{t−j}) = Cov(y_t, y_{t+j}) = σ² Σ_{k=0}^{q−j} θ_{j+k} θ_k for j = 0, 1, …, q
= 0 for j > q
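A sketch that verifies these formulas by simulation (the parameter values below are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(1)
sigma = 1.5
theta = np.array([1.0, 0.6, -0.3])   # theta_0 = 1, so q = 2 here
q = len(theta) - 1
T = 500_000

eps = rng.normal(0.0, sigma, T + q)
# y_t = theta_0 eps_t + theta_1 eps_{t-1} + ... + theta_q eps_{t-q}  (mu = 0)
y = sum(theta[j] * eps[q - j : T + q - j] for j in range(q + 1))

def gamma(j):
    # gamma_j = sigma^2 * sum_{k=0}^{q-j} theta_{j+k} theta_k for j <= q, else 0
    if j > q:
        return 0.0
    return sigma**2 * sum(theta[j + k] * theta[k] for k in range(q - j + 1))

for j in range(q + 2):
    sample = np.mean((y[j:] - y.mean()) * (y[:T - j] - y.mean()))
    print(j, round(gamma(j), 4), round(sample, 4))

The sample autocovariances match the theoretical values and, in particular, drop to (approximately) zero for j > q.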
The sequence of autocovariances of a covariance stationary process, {γ_j} for j = 0, 1, …, is called the autocovariance function or the covariagram of the process. The corresponding sequence of autocorrelations, ρ_j = γ_j/γ_0, is called the autocorrelation function or the correlogram of the process.
So the covariagram and correlogram of the MA(q): i) are completely determined by the q+1 parameters θ_1, …, θ_q and σ², and ii) are equal to zero for all j > q.
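For example, for an MA(1) process y_t = μ + ε_t + θ_1 ε_{t−1}, the formulas above give
γ_0 = (1 + θ_1²) σ² and γ_1 = θ_1 σ²,
so ρ_1 = θ_1/(1 + θ_1²) and ρ_j = 0 for all j > 1.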
A natural generalization of the MA(q) process is the MA(∞) process
y_t = μ + Σ_{j=0}^{∞} ψ_j ε_{t−j}
where {ε_t} is a white noise process.
In order for this process to be well-defined, the ψ_j's have to die off as j increases at a sufficiently rapid rate. Thus we impose the condition that {ψ_j} is an absolutely summable sequence, i.e.,
Σ_{j=0}^{∞} |ψ_j| = lim_{T→∞} Σ_{j=0}^{T} |ψ_j| < ∞
Note that a necessary condition for the ψ_j's to be absolutely summable is that lim_{j→∞} ψ_j = 0.
A weaker condition that is enough for the MA(∞) process to be well defined, but is not enough for certain additional results that we want, is square summability:
Σ_{j=0}^{∞} ψ_j² < ∞.
Fact: the MA(∞) process, y_t, is covariance stationary with:
E(y_t) = μ
Var(y_t) = σ² Σ_{j=0}^{∞} ψ_j²
and,
γ_j = Cov(y_t, y_{t−j}) = σ² Σ_{k=0}^{∞} ψ_{j+k} ψ_k , for j = 0, 1, 2, …
Fact: the autocovariances (and autocorrelations) form an absolutely summable sequence.
Fact: if the ε's are i.i.d. then y_t is strictly stationary and ergodic.
The Wold Decomposition Theorem
Let {y_t} be a zero-mean (and nondeterministic), covariance stationary time series, for t = 0, ±1, ±2, … Then y_t has an MA(∞) representation with square summable coefficients.
That is, loosely speaking, if a time series is covariance stationary then it has a one-sided, infinite or finite order MA representation.
{Nondeterministic? Suppose we can decompose y_t into y_{1t} + y_{2t}, where var(y_{2t} − α_1 y_{2,t−1} − α_2 y_{2,t−2} − …) = 0 for all t for some α_1, α_2, … That is, y_t has a component that is perfectly predictable in the mean-square sense as a linear function of its own past. Then y_t has a deterministic component. The theorem says that y_t does not have a deterministic component or, if it does, it has been removed from y_t.}
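A standard illustration: the zero-mean AR(1) process y_t = φ y_{t−1} + ε_t with |φ| < 1 is covariance stationary and nondeterministic, and repeated back-substitution delivers its Wold (MA(∞)) representation
y_t = ε_t + φ ε_{t−1} + φ² ε_{t−2} + … = Σ_{j=0}^{∞} φ^j ε_{t−j},
whose coefficients ψ_j = φ^j are square (indeed absolutely) summable.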
The MA(q) and MA(∞) are special cases of linear processes.
Definition: Linear Process
A stochastic process {y_t} is a linear process if
y_t = μ + Σ_{j=−∞}^{∞} ψ_j ε_{t−j}
where {ε_t} is a white-noise process and the sequence {ψ_j}, j = 0, ±1, ±2, …, is absolutely summable.
Thus, a general linear process is a two-sided infinite order MA process. The MA(q) and MA(∞) are one-sided special cases.
The linear process was constructed by taking a weighted average of successive elements of the {ε_t} sequence, which is an operation called "(linear) filtering." Filtering the white noise process generates a covariance stationary process with a more interesting covariagram than the white noise process it was built from.
A more general filtering result that is useful in time series analysis –
Let {x_t} be a covariance stationary process and let {h_j} be an absolutely summable sequence of real numbers. Then, the sequence {y_t}, defined according to
y_t = Σ_{j=0}^{∞} h_j x_{t−j}
is a covariance stationary process. If the autocovariances of the x_t process are absolutely summable then the autocovariances of the y_t process will be too.
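A sketch of this filtering operation under assumed inputs (x_t taken to be an MA(1) and filter weights h_j = (0.5)^j truncated at lag J; both choices are purely illustrative):

import numpy as np

rng = np.random.default_rng(3)
T, J = 300_000, 100

# x_t: a covariance stationary input -- here an MA(1) built from white noise
eps = rng.normal(0.0, 1.0, T + 1)
x = eps[1:] + 0.4 * eps[:-1]

# h_j = 0.5^j: an absolutely summable filter (its absolute sum is 2)
h = 0.5 ** np.arange(J + 1)
y = np.convolve(x, h, mode="valid")   # y_t = sum_j h_j x_{t-j}

# Covariance stationarity of the output: the mean, variance, and lag-1
# autocovariance are stable across the two halves of the sample.
for z in (y[: len(y) // 2], y[len(y) // 2 :]):
    g1 = np.mean((z[1:] - z.mean()) * (z[:-1] - z.mean()))
    print(round(z.mean(), 4), round(z.var(), 4), round(g1, 4))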