Bristol MSc Time Series
Econometrics, Spring 2015
Univariate time series processes,
moments, stationarity
Overview
• Moving average processes
• Autoregressive processes
• MA representation of autoregressive processes
• Computing first and second moments: means, variances, autocovariances, autocorrelations
• Stationarity, strong and weak, ergodicity
• The lag operator, lag polynomials, invertibility
• Mostly from Hamilton (1994), but see also Cochrane's monograph on time series.
Aim
• These univariate concepts are needed in multivariate analysis
• Moment calculation is a building block in forming the likelihood of time series data, and therefore in estimation
• Foundational tools in descriptive time series modelling.
Two notions of the mean in time series
E[Y_t] = \operatorname{plim}_{I \to \infty} \frac{1}{I} \sum_{i=1}^{I} Y_{i,t}

E[Y_T] = \operatorname{plim}_{T \to \infty} \frac{1}{T} \sum_{s=1}^{T} Y_s
1. Imagine many computers simulating a series in parallel. If at date t we took an average across all of them, what would that average converge to as we made the number I of these computers large?
2. Suppose we used one computer to simulate a time series process. What would the average of all its observations converge to as T got very large?
Variance, autocovariance
\gamma_0 = E[(Y_t - \mu)^2]
The variance.

\gamma_j = E[(Y_t - \mu)(Y_{t-j} - \mu)]
General autocovariance, which nests the variance.

\operatorname{cov}(x, y) = E[(x - \mu_x)(y - \mu_y)]
Related to covariance in general (i.e. not just time series) multivariate analysis.
Autocorrelation, correlation
\rho_j = \frac{\gamma_j}{\gamma_0} = \operatorname{corr}(Y_t, Y_{t-j})

The autocorrelation of order j is the autocovariance of order j divided by the variance.

\operatorname{corr}(Y_t, Y_{t-j}) = \frac{\operatorname{cov}(Y_t, Y_{t-j})}{\sqrt{\operatorname{var}(Y_t)\operatorname{var}(Y_{t-j})}} = \frac{\gamma_j}{\sqrt{\gamma_0 \gamma_0}} = \frac{\gamma_j}{\gamma_0}

Autocorrelation comes from the definition and computation of the general notion of correlation in multivariate, not necessarily time-series, analysis.
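A minimal Matlab sketch (added here as an illustration, not taken from the slides) of how these sample moments might be computed; the simulated series y and the number of lags are arbitrary choices:
%compute sample autocovariances gamma_j and autocorrelations rho_j
T=1000;
y=randn(T,1);                    %placeholder data; any observed series works
mu=mean(y);
maxlag=5;
gam=zeros(maxlag+1,1);
for j=0:maxlag
    gam(j+1)=mean((y(1+j:T)-mu).*(y(1:T-j)-mu));   %gamma_j
end
rho=gam./gam(1);                 %rho_j = gamma_j / gamma_0
disp([(0:maxlag)' gam rho])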
Moving average processes
Y_t = \mu + e_t + \theta e_{t-1}

First order MA process, 'MA(1)'; mu and theta are parameters, e is a white noise shock.

E[Y_t] = E[\mu + e_t + \theta e_{t-1}] = \mu + E[e_t] + \theta E[e_{t-1}] = \mu

E[(Y_t - \mu)^2] = E[(e_t + \theta e_{t-1})^2] = \sigma^2 + \theta^2 \sigma^2 = (1 + \theta^2)\sigma^2

Computing the mean and variance of an MA(1).
White noise
E[e_t] = 0
E[e_t^2] = \sigma^2
E[e_t e_j] = 0, \quad j \neq t
e_t \sim N(0, \sigma^2)

Cross-section average of shocks is zero. Variance is some constant. No 'correlation' across different units. Gaussian white noise if, in addition, normally distributed.
1st autocovariance of an MA(1)
\gamma_1 = E[(Y_t - \mu)(Y_{t-1} - \mu)] = E[(e_t + \theta e_{t-1})(e_{t-1} + \theta e_{t-2})]
= \theta E[e_{t-1}^2] + E[e_t e_{t-1}] + \theta E[e_t e_{t-2}] + \theta^2 E[e_{t-1} e_{t-2}]
= \theta\sigma^2 + 0 + 0 + 0 = \theta\sigma^2
Higher order autocovariances of an MA(1) are 0. It’s an exercise to explain
why this is.
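A quick simulation check (an added illustration, not from the slides): for an MA(1) with illustrative values mu = 2, theta = 0.7, sigma = 1, the sample mean, variance and first autocovariance should be close to mu, (1+theta^2)*sigma^2 and theta*sigma^2.
%simulate a long MA(1) and compare sample moments with the formulas above
T=1e6; mu=2; theta=0.7; sigma=1;
e=sigma*randn(T,1);
y=mu+e(2:T)+theta*e(1:T-1);      %MA(1) built directly from the shocks
[mean(y) var(y) mean((y(2:end)-mu).*(y(1:end-1)-mu))]
[mu (1+theta^2)*sigma^2 theta*sigma^2]   %theoretical counterparts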
Higher order MA processes
Y_t = \mu + e_t + \theta_1 e_{t-1} + \theta_2 e_{t-2}     MA(2)

Y_t = \mu + e_t + \theta_1 e_{t-1} + \theta_2 e_{t-2} + ... + \theta_n e_{t-n}     MA(n)

And we can have infinite order MA processes, referred to as MA(\infty).
Why ‘moving average’ process?
• The RHS is an ‘average’ [actually a weighted
sum]
• And it is a sum whose coverage or window
‘moves’ as the time indicator grows.
Stationarity, ergodicity
E[Y_t] = \mu, \quad \forall t
E[(Y_t - \mu)(Y_{t-j} - \mu)] = \gamma_j, \quad \forall t, j

Weak or covariance stationarity: mean and autocovariances are independent of t.

(Y_t, Y_{t-j_1}, Y_{t-j_2}, ..., Y_{t-j_n})

Strong stationarity: the joint density of these elements in the sequence depends not on t, just on the gaps between the different elements.

Ergodicity: convergence of the 'time-series' average to the 'cross-section' average.
Cross-sectional and time series stationarity
[Figure: simulations of y_t=rho*y_t-1+sd*e_t with rho=0.8, sd=1. Top panel, "Cross-sectional variances, rho=0.8": variance of outturns ACROSS simulations at each date. Bottom panel, "time-series variances, rho=0.8": rolling variance OVER TIME for 1 simulation.]
Cross-sectional and time-series nonstationarity
[Figure: simulations of y_t=rho*y_t-1+sd*e_t with rho=1.002, sd=1. Top panel, "Cross-sectional variances, rho=1.002": coefficient just over unity, but cross-sectional variance exploding. Bottom panel, "time-series variances, rho=1.002": and the rolling time series variance is not constant either.]
Matlab code to simulate ARs, compute
and plot cs and ts variances
%script to demonstrate non-stationarity in AR(1) and time series / cross
%sectional notion of variance.
clear all;        %ensures memory doesn't carry forward errors from runs of old versions
tsample=1000;     %define length of time series to simulate
mcsample=50;      %number of time series in our monte carlo
rho=1.002;        %autoregressive parameter
sd=1;             %standard deviation of shocks
shocks=randn(tsample,mcsample);
y=zeros(tsample,mcsample);       %store our simulated data here
csvar=zeros(tsample-1,1);        %store cross sectional variances here
tsvar=zeros(tsample-1,1);        %store rolling time series variances here
%simulate the AR(1) processes
for i=1:mcsample
    for j=2:tsample
        y(j,i)=rho*y(j-1,i)+sd*shocks(j,i);
    end
end
%calculate cross sectional variances
for i=2:tsample
    csvar(i-1)=var(y(i,:));
end
%calculate rolling ts variances
for j=2:tsample
    tsvar(j-1)=var(y(1:j,1));
end
%chart results
figure
subplot(2,1,1)
plot(csvar)
title('Cross-sectional variances,rho=1.002')
subplot(2,1,2)
plot(tsvar)
title('time-series variances,rho=1.002')
AR(1), MA(1)
[Figure: simulated paths of an MA(1) ("ma1") and an AR(1) ("ar1") over 100 periods. Initial shock=0, theta=0.7.]
Matlab code to simulate MA(1), AR(1)
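The code itself did not survive the export; a minimal sketch of what such a script might look like (theta = 0.7 matches the figure, while phi = 0.7 and the 100-period horizon are assumptions made for this illustration):
%simulate an MA(1) and an AR(1) driven by the same shocks and plot them
T=100; theta=0.7; phi=0.7;
e=randn(T,1);
ma1=zeros(T,1); ar1=zeros(T,1);
for t=2:T
    ma1(t)=e(t)+theta*e(t-1);
    ar1(t)=phi*ar1(t-1)+e(t);
end
figure
plot(1:T,ma1,1:T,ar1)
legend('ma1','ar1')
title('AR(1), MA(1)')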
AR and ARMA processes
Yt c Yt1 e t
AR(1)
Yt c 1 Yt1 2 Yt2 e t
AR(2)
Yt c 1 Yt1 e t e t1
ARMA(1,1)
Which process you use will depend on whether you have economics/theory to
guide you, or statistical criteria.
MA representation of an AR(1)
Yt c  Yt1 e t
Yt c  
c Yt2 e t1 e t
2 Yt2 
c e t1 
c e t 
Yt 3 Yt3 2 
c e t2 
c e t1 
c e t 
Yt n Ytn n1 
c e tn1
..

c e t1 
c e t 
Derive the MA rep by repeatedly substituting out for lag Y using the AR(1) form..
MA(inf) representation of AR(1)
Y_t = \sum_{s=0}^{\infty} \phi^s (c + e_{t-s})

Exists provided |\phi| < 1.

Shows that for a stationary AR(1), we can view today's Y as the sum of the infinite sequence of past shocks.
Note the imprint of past shocks on today is smaller, the further back in time they happened.
And that's true because of the dampening implied by |\phi| < 1.
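A minimal numerical check (an added illustration, not from the slides) that the truncated MA(\infty) sum reproduces the recursively simulated AR(1); c, phi and T are illustrative values:
%AR(1) recursion versus the truncated MA(inf) representation
T=200; c=1; phi=0.9;
e=randn(T,1);
y=zeros(T,1); y(1)=c/(1-phi);    %start at the unconditional mean
for t=2:T
    y(t)=c+phi*y(t-1)+e(t);      %AR(1) recursion
end
yma=zeros(T,1); yma(1)=y(1);
for t=2:T
    acc=phi^(t-1)*y(1);          %contribution of the initial condition
    for s=0:t-2
        acc=acc+phi^s*(c+e(t-s)); %sum of phi^s * (c + e_{t-s})
    end
    yma(t)=acc;
end
max(abs(y-yma))                  %should be tiny (rounding error only)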
Impulse response function for an AR(1)
Yt0 e t0
Start from zero. Effect of a shock today is the
shock itself.
Yt1 Yt0 e t0
Effect of that shock tomorrow, in period t+1
Yt2 Yt1 2 Yt0 2 e t0
And the propagated out another period….
Yth Yth1 h Yt0 h e t0
IRF asks: what is the effect of a shock (an impulse) at a particular horizon in the
future? Note relationship with MA(inf) rep of an AR(1).
IRF for AR(1) an example
Y0 1
Suppose phi=0.8, c=0, e_0=1
Y1 0. 8  1 0. 8
Y2 0. 8  0. 8 0. 64
Y2 0. 8 3 0. 512
Yn 0. 8 n
e_0=1 is what we would often take as
a standardised shock size to illustrate
the shape of the IRF for an estimated
time series process.
Y 0. 8  0
Or we might take a shock size=1
standard deviation.
Note how the IRF for the stationary AR(1) is monotonic [always
falling] and dies out.
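A minimal sketch (an added illustration, not from the slides) of computing this IRF; phi = 0.8 is the example's value and the 20-period horizon is an arbitrary choice:
%IRF of an AR(1) to a unit shock at time 0: the response at horizon h is phi^h
phi=0.8; H=20;
irf=phi.^(0:H);
stem(0:H,irf)
xlabel('horizon h'); ylabel('response to a unit shock')
title('IRF of an AR(1), phi=0.8')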
The forecast in an AR(1)
E_0[Y_1] = E_0[\phi Y_0 + e_1]
The ('time series') expectation, given information at 0, of Y_1.

E_0[Y_1] = E_0[\phi Y_0 + e_1] = \phi Y_0 + 0 = \phi Y_0

E_0[Y_h] = \phi^h Y_0
Forecast at some horizon h.

FE_h = \phi^h Y_0 - Y_h
The forecast error we will record when period h comes along and we observe the data we forecast.
Forecast errors in an AR(1)
Y_h = \phi Y_{h-1} + e_h
= \phi(\phi Y_{h-2} + e_{h-1}) + e_h = \phi^2 Y_{h-2} + \phi e_{h-1} + e_h
= \phi^h Y_0 + \sum_{s=0}^{h-1} \phi^s e_{h-s}

Partially construct the MA rep of an outturn for Y at horizon h in the future.

FE_h = \phi^h Y_0 - Y_h = \phi^h Y_0 - \left(\phi^h Y_0 + \sum_{s=0}^{h-1} \phi^s e_{h-s}\right) = -\sum_{s=0}^{h-1} \phi^s e_{h-s}

We see that the forecast error at horizon h is a moving average of the shocks that hit between now and h.
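A small Monte Carlo sketch (an added illustration, not from the slides) checking that the h-step forecast error is unbiased and has variance sigma^2*(1 + phi^2 + ... + phi^(2(h-1))); phi, sigma, h, Y0 and the number of replications are illustrative:
%Monte Carlo on the h-step forecast error of an AR(1) with no constant
phi=0.9; sigma=1; h=5; Y0=2; nrep=1e5;
fe=zeros(nrep,1);
for r=1:nrep
    y=Y0;
    for t=1:h
        y=phi*y+sigma*randn;     %roll the AR(1) forward h periods
    end
    fe(r)=phi^h*Y0-y;            %forecast minus outturn
end
[mean(fe) var(fe)]
[0 sigma^2*sum(phi.^(2*(0:h-1)))]   %theory: zero mean, MA variance of the shocks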
Forecast error analysis
• Armed with our analytical time series forecast
errors….
• We can compute their expectation.
• We can compare the expectation to the outturn.
[Are they biased?]
• We can compute the expected autocorrelation of
the errors. Their variance….
• Connection with the empirical literature on
rational expectations and survey/professional
forecaster measures of inflation expectations.
VAR(1) representation of AR(2)
Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + e_t     AR(2)

\begin{bmatrix} Y_t \\ Y_{t-1} \end{bmatrix} = \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} Y_{t-1} \\ Y_{t-2} \end{bmatrix} + \begin{bmatrix} e_t \\ 0 \end{bmatrix}

VAR(1) representation of the AR(2). First line has the 'meat'. Second line is just an identity.

\mathbf{Y}_t = \boldsymbol{\Phi} \mathbf{Y}_{t-1} + \mathbf{E}_t

Bold type is sometimes used to denote matrices.
Why do this? Certain formulae for IRFs, or standard errors, forecast error
variances, are easily derivable with first order models. So get the higher
order model into first order form and then proceed….
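A minimal sketch (an added illustration, not from the slides) of putting an AR(2) into companion form and reading the IRF off powers of the companion matrix; phi_1 = 0.5, phi_2 = 0.3 and the horizon are illustrative:
%companion (VAR(1)) form of an AR(2) and its IRF via matrix powers
phi1=0.5; phi2=0.3; H=10;
Phi=[phi1 phi2; 1 0];            %companion matrix
irf=zeros(H+1,1);
for h=0:H
    M=Phi^h;
    irf(h+1)=M(1,1);             %response of Y_{t+h} to a unit e_t
end
disp(irf')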
The lag operator
L Y_t = Y_{t-1}
L^2 Y_t = L(Y_{t-1}) = Y_{t-2}
L^{-1} Y_t = Y_{t+1}

The lag operator shifts the time subscript backwards, or, if we write its inverse, forwards.

Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \phi_3 Y_{t-3} + \phi_4 Y_{t-4} + ... + \phi_p Y_{t-p} + e_t
Y_t = \phi_1 L Y_t + \phi_2 L^2 Y_t + \phi_3 L^3 Y_t + \phi_4 L^4 Y_t + ... + \phi_p L^p Y_t + e_t
(1 - \phi_1 L - \phi_2 L^2 - \phi_3 L^3 - \phi_4 L^4 - ... - \phi_p L^p) Y_t = e_t

We can use it to express time series processes like AR models differently.

L(\phi Y_t) = \phi L Y_t = \phi Y_{t-1}

The lag operator is commutative with multiplication.
Rediscovering the MA(inf)
representation of an AR(1) with the lag
operator
Y_t = \phi Y_{t-1} + w_t
Y_t - \phi L Y_t = w_t
(1 - \phi L) Y_t = w_t

Operate on both sides of this with this:

(1 + \phi L + \phi^2 L^2 + \phi^3 L^3 + ... + \phi^t L^t)

(1 + \phi L + \phi^2 L^2 + ... + \phi^t L^t)(1 - \phi L) Y_t = (1 + \phi L + \phi^2 L^2 + ... + \phi^t L^t) w_t

Here we expand the compound operator on the LHS of the equation above, and this is what you get:

(1 + \phi L + \phi^2 L^2 + ... + \phi^t L^t)(1 - \phi L)
= 1 + \phi L + \phi^2 L^2 + ... + \phi^t L^t - \phi L - \phi^2 L^2 - ... - \phi^{t+1} L^{t+1}
= 1 - \phi^{t+1} L^{t+1}
Rediscovering….ctd
(1 - \phi^{t+1} L^{t+1}) Y_t = (1 + \phi L + \phi^2 L^2 + \phi^3 L^3 + ... + \phi^t L^t) w_t

Y_t - \phi^{t+1} Y_{t-(t+1)} = Y_t - \phi^{t+1} Y_{-1}

LHS of the above written explicitly, without lag operators. Note that as t goes to infinity, we are left with Y_t.

Y_t = (1 + \phi L + \phi^2 L^2 + \phi^3 L^3 + ... + \phi^t L^t) w_t, \quad t \to \infty
So with the aid of the lag operator, we have rediscovered the MA(inf)
representation of the AR(1).
Lag operators and invertibility of AR(1)
This is what we have established:

(1 + \phi L + \phi^2 L^2 + \phi^3 L^3 + ... + \phi^t L^t)(1 - \phi L) Y_t \approx Y_t

(1 + \phi L + \phi^2 L^2 + \phi^3 L^3 + ... + \phi^t L^t) \approx (1 - \phi L)^{-1}

Implying that these operators are approximately inverses of one another.

(1 - \phi L)^{-1}(1 - \phi L) = 1, \quad 1 \cdot Y_t = Y_t

Note this property of (any) inverse operator. '1' here is the 'identity operator'.
Invertibility, ctd…
(1 - \phi L) Y_t = w_t

Provided |\phi| < 1, we can operate on both sides of this with the inverse of the operator on the LHS to get this:

Y_t = (1 - \phi L)^{-1} w_t = w_t + \phi w_{t-1} + \phi^2 w_{t-2} + \phi^3 w_{t-3} + ...

This is what is referred to as the 'invertibility' property of an AR(1) process. Analogous properties are deduced for multivariate vector autoregressive (VAR) processes too.
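A minimal numerical illustration (added here, not from the slides): Matlab's filter function applies a ratio of lag polynomials, so filtering the shocks with 1/(1 - phi L) reproduces the AR(1) recursion, which is exactly the inversion described above; phi and T are illustrative:
%applying (1 - phi*L)^(-1) to the shocks via filter() matches the AR(1) recursion
phi=0.8; T=500;
w=randn(T,1);
y_inv=filter(1,[1 -phi],w);      %y = (1 - phi*L)^(-1) w, zero initial conditions
y_rec=zeros(T,1); y_rec(1)=w(1);
for t=2:T
    y_rec(t)=phi*y_rec(t-1)+w(t);
end
max(abs(y_inv-y_rec))            %should be zero up to rounding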
Computing mean and variance of
AR(2)
• More involved than for the AR(1)
• Introduces likelihood computation for more
complex processes
• Introduces recursive nature of
autocovariances and its usefulness
• NB: it will simply be an exercise to do this for
an AR(1) process.
Mean of an AR(2)
Yt c 1 Yt1 2 Yt2 e t
E
Yt  c 1 E
Yt1 2 E
Yt2 E
et 
  c 1  2  0
c
 
1 1 2
Here is an AR(2) process.
Start with calculating the mean.
To get the mean, simply take
expectations of both sides.
Variance of an AR(2)
Yt c 1 Yt1 2 Yt2 e t
Rewrite our AR(2) using this substitution
for the constant term c.
c 
1 1 2 
Yt  
1 1 2 1 Yt1 2 Yt2 e t
 Yt  1 
Yt1 2 
Yt2 e t
This is what we get after
making the substitution for
c.
Variance of an AR(2)
E[(Y_t - \mu)(Y_t - \mu)] = E[\phi_1 (Y_{t-1} - \mu)(Y_t - \mu) + \phi_2 (Y_{t-2} - \mu)(Y_t - \mu) + e_t (Y_t - \mu)]

\gamma_0 = \phi_1 \gamma_1 + \phi_2 \gamma_2 + \sigma^2

Multiply by Y_t - \mu, take expectations, and we get this. The above is a recursive equation in autocovariances, which we can denote like this.

E[e_t (Y_t - \mu)] = E[e_t (\phi_1 (Y_{t-1} - \mu) + \phi_2 (Y_{t-2} - \mu) + e_t)] = 0 + 0 + \sigma^2

This is where the \sigma^2 term above comes from.
Variance of an AR(2)
E[(Y_t - \mu)(Y_{t-j} - \mu)] = E[\phi_1 (Y_{t-1} - \mu)(Y_{t-j} - \mu) + \phi_2 (Y_{t-2} - \mu)(Y_{t-j} - \mu) + e_t (Y_{t-j} - \mu)]

\gamma_j = \phi_1 \gamma_{j-1} + \phi_2 \gamma_{j-2}

General form of the recursive autocovariance equation, formed by multiplying not by Y_t - \mu, but by Y_{t-j} - \mu, then taking expectations.
Variance of an AR(2)
j1
j2
j




1 
2 
0
0
0
 j  1 j1 2 j2
1  1 0 2 1
 1 1 1 2 1
 1 1 /
1 2 
2 1 1 2
21 /
1 2 2
Divide both sides by the
variance, or the 0th order
autocovariance to get an
equation in autocorrelations
Set j=1, to get this, noting that
rho_0=1, and rho_1=rho_-1
Set j=2 and the recursive
equation in autocorrelations
implies this… which we can
rewrite substituting in for
expression for rho_1
Variance of an AR(2)
\gamma_0 = \phi_1 \gamma_1 + \phi_2 \gamma_2 + \sigma^2
\gamma_0 = \phi_1 \rho_1 \gamma_0 + \phi_2 \rho_2 \gamma_0 + \sigma^2

Rewrite the autocovariances on the RHS in terms of autocorrelations.

\gamma_0 = \phi_1 \frac{\phi_1}{1 - \phi_2} \gamma_0 + \phi_2 \left( \frac{\phi_1^2}{1 - \phi_2} + \phi_2 \right) \gamma_0 + \sigma^2

Then substitute in the autocorrelations which we found on the last slide….

\gamma_0 \left( 1 - \frac{\phi_1^2}{1 - \phi_2} - \frac{\phi_2 \phi_1^2}{1 - \phi_2} - \phi_2^2 \right) = \sigma^2

And rearrange as an equation in \gamma_0, the variance, which is what we were trying to solve for. Done!
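A minimal simulation check (an added illustration, not from the slides) that this expression for \gamma_0 matches the sample variance of a long simulated AR(2); phi_1 = 0.5, phi_2 = 0.3, sigma = 1 are illustrative values:
%check the AR(2) variance formula against a long simulation
phi1=0.5; phi2=0.3; sigma=1; T=2e5;
e=sigma*randn(T,1);
y=zeros(T,1);
for t=3:T
    y(t)=phi1*y(t-1)+phi2*y(t-2)+e(t);
end
gamma0=sigma^2/(1-phi1^2/(1-phi2)-phi2*phi1^2/(1-phi2)-phi2^2);
[var(y(1001:end)) gamma0]        %sample variance after a burn-in vs the formula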
Recap
• Moving average processes
• Autoregressive processes
• ARMA processes
• Methods for computing first and second moments of these
• Impulse response
• Forecast, forecast errors
• MA(infinity) representation of an AR(1)
• Lag operators, polynomials in the lag operator