Lecture 9: Multivariate Time Series Analysis
• The following topics will be covered:
• Modeling Mean
– Cross-correlation Matrices of Returns
– VAR
– VMA
– VARMA
– Cointegration
• Modeling Volatility
– VGARCH models
L9: Vector Time Series
1
Lag-0 Cross-correlation Matrix
Let r_t = (r_1t, r_2t, ..., r_kt)′ be the log returns of k assets at time t, with mean vector

  μ = E(r_t)

and lag-0 cross-covariance matrix

  Γ_0 = E[(r_t − μ)(r_t − μ)′].

Let D be a k×k diagonal matrix consisting of the standard deviations of the r_it; in other words, D = diag{√Γ_11(0), ..., √Γ_kk(0)}.
The concurrent, or lag-zero, cross-correlation matrix of r_t is

  ρ_0 = [ρ_ij(0)] = D^(−1) Γ_0 D^(−1) -- a symmetric matrix with unit diagonal elements,

where

  ρ_ij(0) = Γ_ij(0) / √(Γ_ii(0) Γ_jj(0)) = Cov(r_it, r_jt) / (std(r_it) std(r_jt)).
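As a quick numerical sketch (pure Python, bivariate case; the covariance numbers are made up for illustration), ρ_0 is obtained from Γ_0 by scaling with D^(−1) on both sides:

```python
import math

# Hypothetical lag-0 cross-covariance matrix Gamma_0 for k = 2 assets
gamma0 = [[4.0, 1.0],
          [1.0, 9.0]]

# D^{-1}: reciprocal standard deviations taken from the diagonal of Gamma_0
d_inv = [1.0 / math.sqrt(gamma0[i][i]) for i in range(2)]

# rho_0 = D^{-1} Gamma_0 D^{-1}; element-wise: rho_ij = gamma_ij / (sd_i * sd_j)
rho0 = [[gamma0[i][j] * d_inv[i] * d_inv[j] for j in range(2)] for i in range(2)]

print(rho0)  # unit diagonal; off-diagonal = 1 / (2 * 3)
```

The result is symmetric with unit diagonal, as the slide states.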
Lag-l Cross-correlation Matrix
The lag-l cross-correlation matrix of r_t is

  ρ_l = [ρ_ij(l)] = D^(−1) Γ_l D^(−1),  where Γ_l = E[(r_t − μ)(r_{t−l} − μ)′].

Thus

  ρ_ij(l) = Γ_ij(l) / √(Γ_ii(0) Γ_jj(0)) = Cov(r_it, r_{j,t−l}) / (std(r_it) std(r_jt))

is the correlation coefficient between r_it and r_{j,t−l}.
If ρ_ij(l) ≠ 0 and l > 0, we say the series r_jt leads the series r_it at lag l.
Note that Γ_ij(l) ≠ Γ_ji(l) in general, because Cov(r_it, r_{j,t−l}) ≠ Cov(r_jt, r_{i,t−l}). Instead, by stationarity,

  Γ_ij(l) = Γ_ji(−l),  so that Γ_{−l} = Γ_l′ and ρ_{−l} = ρ_l′.

ρ_l has the following properties:
1. The diagonal elements ρ_ii(l) are the autocorrelation function of r_it.
2. The off-diagonal elements ρ_ij(0) measure the concurrent relationship between r_it and r_jt.
3. For l > 0, the off-diagonal element ρ_ij(l) measures the linear dependence of r_it on the past value r_{j,t−l}.
Putting the lag-0 and lag-l matrices together therefore summarizes both the contemporaneous and the lead-lag linear dependence in r_t.
Linear Dependence
1. r_it and r_jt have no linear relationship if ρ_ij(l) = ρ_ji(l) = 0 for all l ≥ 0.
2. r_it and r_jt are concurrently correlated if ρ_ij(0) ≠ 0.
3. r_it and r_jt have no lead-lag relationship if ρ_ij(l) = 0 and ρ_ji(l) = 0 for all l > 0. In this case, we say the two series are uncoupled.
4. There is a unidirectional relationship from r_it to r_jt if ρ_ij(l) = 0 for all l > 0, but ρ_ji(v) ≠ 0 for some v > 0. In this case, r_it does not depend on any past value of r_jt, but r_jt depends on some past values of r_it.
5. There is a feedback relationship between r_it and r_jt if ρ_ij(l) ≠ 0 for some l > 0 and ρ_ji(v) ≠ 0 for some v > 0.
Sample Cross-Correlation Matrices (CCM)
Given the data {r_t}, the cross-covariance matrix Γ_l can be estimated by

  Γ̂_l = (1/T) Σ_{t=l+1}^{T} (r_t − r̄)(r_{t−l} − r̄)′,  where r̄ = (1/T) Σ_{t=1}^{T} r_t

and T is the sample size. The cross-correlation matrix ρ_l is estimated by

  ρ̂_l = D̂^(−1) Γ̂_l D̂^(−1),

where D̂ is the k×k diagonal matrix of the sample standard deviations of the component series. ρ̂_l is consistent but biased in a finite sample.
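The estimator can be sketched directly in pure Python (bivariate toy data, made up for illustration):

```python
import math

def sample_ccm(series, lag):
    """Sample lag-l cross-covariance and cross-correlation matrices.

    series: list of k-tuples of observations.
    Uses Gamma_hat_l = (1/T) * sum_{t=l+1}^{T} (r_t - rbar)(r_{t-l} - rbar)'.
    """
    T, k = len(series), len(series[0])
    rbar = [sum(row[i] for row in series) / T for i in range(k)]
    gamma = [[sum((series[t][i] - rbar[i]) * (series[t - lag][j] - rbar[j])
                  for t in range(lag, T)) / T
              for j in range(k)] for i in range(k)]
    # sample standard deviations come from the lag-0 matrix
    gamma0 = gamma if lag == 0 else sample_ccm(series, 0)[0]
    sd = [math.sqrt(gamma0[i][i]) for i in range(k)]
    rho = [[gamma[i][j] / (sd[i] * sd[j]) for j in range(k)] for i in range(k)]
    return gamma, rho

# Toy bivariate return data (made up)
data = [(1.0, 2.0), (2.0, 1.0), (3.0, 4.0), (4.0, 3.0), (5.0, 6.0), (6.0, 5.0)]
gamma1, rho1 = sample_ccm(data, 1)
```

At lag 0 the estimated correlation matrix has unit diagonal by construction.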
Multivariate Portmanteau Test
• For a multivariate series, the null hypothesis is H0: ρ_1 = ... = ρ_m = 0 and the alternative hypothesis is Ha: ρ_i ≠ 0 for some i in {1, ..., m}. The statistic tests that there are no auto- and cross-correlations in the vector series r_t. The portmanteau test statistic is given on page 308, where T is the sample size and k is the dimension of r_t.
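A sketch of the statistic in the standard multivariate Ljung-Box form, Q_k(m) = T² Σ_{l=1}^{m} tr(Γ̂_l′ Γ̂_0^(−1) Γ̂_l Γ̂_0^(−1)) / (T − l), which should be checked against the book's page 308 before relying on it; restricted here to the bivariate case so the 2×2 inverse stays simple:

```python
def portmanteau_q(gammas, T):
    """Multivariate Ljung-Box Q_k(m) from cross-covariance matrices (k = 2).

    gammas: [Gamma_0, Gamma_1, ..., Gamma_m] as 2x2 nested lists.
    """
    def inv2(m):                      # inverse of a 2x2 matrix
        det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
        return [[m[1][1] / det, -m[0][1] / det],
                [-m[1][0] / det, m[0][0] / det]]

    def mul(a, b):                    # 2x2 matrix product
        return [[sum(a[i][h] * b[h][j] for h in range(2)) for j in range(2)]
                for i in range(2)]

    g0_inv = inv2(gammas[0])
    q = 0.0
    for l in range(1, len(gammas)):
        gl = gammas[l]
        glt = [[gl[j][i] for j in range(2)] for i in range(2)]  # transpose
        prod = mul(mul(mul(glt, g0_inv), gl), g0_inv)
        q += (prod[0][0] + prod[1][1]) / (T - l)                # trace term
    return T * T * q

# If all lagged cross-covariances are zero, Q is exactly zero
q = portmanteau_q([[[1.0, 0.2], [0.2, 1.0]], [[0.0, 0.0], [0.0, 0.0]]], T=100)
print(q)  # 0.0
```

Under the null, Q_k(m) is asymptotically chi-squared with k²m degrees of freedom.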
VAR(1)
VAR(1): r_t = φ_0 + Φ_1 r_{t−1} + a_t
where φ_0 is a k-dimensional vector, Φ_1 is a k×k matrix, and a_t is a sequence of serially uncorrelated random vectors with mean zero and covariance matrix Σ. The covariance matrix Σ is required to be positive definite (see page 350); otherwise, the dimension of r_t can be reduced.
Consider a bivariate case:

  r_1t = φ_10 + Φ_11 r_1,t−1 + Φ_12 r_2,t−1 + a_1t
  r_2t = φ_20 + Φ_21 r_1,t−1 + Φ_22 r_2,t−1 + a_2t

If Φ_12 = 0 and Φ_21 ≠ 0, there is a unidirectional relationship from r_1t to r_2t. If Φ_12 = 0 and Φ_21 = 0, then r_1t and r_2t are uncoupled. If Φ_12 ≠ 0 and Φ_21 ≠ 0, there is a feedback relationship between the two series.
VAR(1) is a reduced-form model since it does not explicitly show the concurrent dependence between the component series; the form that does is known as the structural form (see page 310 and the example on page 311).
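A minimal simulation sketch of the bivariate case (all coefficient values are made up; Φ_12 = 0 and Φ_21 ≠ 0, so the dependence runs only from r_1t to r_2t):

```python
import random

random.seed(42)

phi0 = (0.1, 0.2)                      # hypothetical intercepts
phi1 = [[0.5, 0.0],                    # Phi_12 = 0: r1 ignores past r2
        [0.3, 0.4]]                    # Phi_21 = 0.3: r2 reacts to past r1

r = [(0.0, 0.0)]
for _ in range(500):
    r1_prev, r2_prev = r[-1]
    a1, a2 = random.gauss(0, 1), random.gauss(0, 1)   # uncorrelated shocks
    r1 = phi0[0] + phi1[0][0] * r1_prev + phi1[0][1] * r2_prev + a1
    r2 = phi0[1] + phi1[1][0] * r1_prev + phi1[1][1] * r2_prev + a2
    r.append((r1, r2))

# With Phi_12 = 0, r1 is a univariate AR(1), so its stationary mean is
mu1 = 0.1 / (1 - 0.5)                  # phi0[0] / (1 - Phi_11)
print(mu1)
```

Because Φ_12 = 0, the first equation can be solved on its own; the second inherits dynamics from r_1t.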
VAR(1): Reduced-Form System
VAR(1) is a reduced-form model since it does not explicitly show the concurrent dependence between the component series. The latter is known as the structural-form system (see page 310 and the example on page 311).
An example of a structural-form system is:

  y_t = b_10 + b_12 z_t + γ_11 y_{t−1} + γ_12 z_{t−1} + ε_yt
  z_t = b_20 + b_21 y_t + γ_21 y_{t−1} + γ_22 z_{t−1} + ε_zt

The concurrent terms b_12 z_t and b_21 y_t are what a reduced-form system absorbs into its shocks.
Stationarity Condition of VAR(1)

  μ = E(r_t) = (I − Φ_1)^(−1) φ_0,

where I is a k×k identity matrix.
Let r̃_t = r_t − μ; then we have r̃_t = Φ_1 r̃_{t−1} + a_t as the expression for VAR(1).
We have Γ_l = Φ_1^l Γ_0.
By repeated substitution, we have

  r̃_t = a_t + Φ_1 a_{t−1} + Φ_1² a_{t−2} + Φ_1³ a_{t−3} + ... -- an MA(∞) representation.

Several characteristics of the VAR(1) process:
(1) Cov(a_t, r_{t−l}) = 0 for l > 0: a_t is uncorrelated with past returns. Thus a_t is referred to as the shock or innovation of the series at time t.
(2) Postmultiplying the expression by a_t′, taking expectations, and using the fact of no serial correlations in the a_t process, we obtain Cov(a_t, r_t) = Σ, the covariance matrix of a_t.
(3) Stationarity requires that all eigenvalues of Φ_1 be less than 1 in modulus.
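A quick sketch for the bivariate case (coefficient values made up): the closed-form eigenvalues of a 2×2 Φ_1 and the implied mean μ = (I − Φ_1)^(−1) φ_0:

```python
import math

phi1 = [[0.5, 0.1],
        [0.2, 0.4]]                    # hypothetical 2x2 coefficient matrix
phi0 = (0.1, 0.2)

# Eigenvalues of a 2x2 matrix via the characteristic polynomial
trace = phi1[0][0] + phi1[1][1]
det = phi1[0][0] * phi1[1][1] - phi1[0][1] * phi1[1][0]
disc = trace * trace - 4 * det         # non-negative for this matrix
eig = [(trace + math.sqrt(disc)) / 2, (trace - math.sqrt(disc)) / 2]
stationary = all(abs(e) < 1 for e in eig)

# mu = (I - Phi_1)^{-1} phi_0 via the explicit 2x2 inverse
m = [[1 - phi1[0][0], -phi1[0][1]],
     [-phi1[1][0], 1 - phi1[1][1]]]
d = m[0][0] * m[1][1] - m[0][1] * m[1][0]
mu = ((m[1][1] * phi0[0] - m[0][1] * phi0[1]) / d,
      (-m[1][0] * phi0[0] + m[0][0] * phi0[1]) / d)

print(stationary, mu)
```

As a sanity check, μ satisfies μ = φ_0 + Φ_1 μ.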
VAR(p) Models

  r_t = φ_0 + Φ_1 r_{t−1} + ... + Φ_p r_{t−p} + a_t

(1) Cov(a_t, r_{t−l}) = 0 for l > 0.
(2) Cov(a_t, r_t) = Σ, the covariance matrix of a_t.
(3) Γ_l = Φ_1 Γ_{l−1} + ... + Φ_p Γ_{l−p} for l > 0 -- the moment equation of a VAR(p) model. It is a multivariate version of the Yule-Walker equation of a univariate AR(p) model.
The book shows that the VAR(1) model of the previous subsection can be used to derive properties of the VAR(p) model.
An implication of VAR is the Granger causality test: to see whether, e.g., a_21(1) = a_21(2) = a_21(3) = ... = 0, where a_21(i) denotes the (2,1) element of Φ_i, using an F-test on this restriction of a general VAR(p). See Granger (1969) and Sims (1972).
Building VAR(p) Model
Take the same steps as the univariate AR procedures (see page 314). Fit the sequence of models

  r_t = Φ_0 + Φ_1 r_{t−1} + a_t
  r_t = Φ_0 + Φ_1 r_{t−1} + Φ_2 r_{t−2} + a_t
  ...
  r_t = Φ_0 + Φ_1 r_{t−1} + ... + Φ_i r_{t−i} + a_t
  ...

For the ith equation in the above system, let Φ̂_j^(i) be the OLS estimate of Φ_j, where the superscript (i) is used to denote that the estimates are for a VAR(i) model. The residual is

  â_t^(i) = r_t − Φ̂_0^(i) − Φ̂_1^(i) r_{t−1} − ... − Φ̂_i^(i) r_{t−i}.

The residual covariance matrix is

  Σ̂_i = (1/(T − 2i − 1)) Σ_{t=i+1}^{T} â_t^(i) (â_t^(i))′,  i ≥ 0.
Building VAR(p) Model
(A) Identification: the test statistic used to specify the order, i.e., to test H0: Φ_l = 0 versus the alternative hypothesis Ha: Φ_l ≠ 0 sequentially for l = 1, 2, ..., is

  M(1) = −(T − k − 2.5) ln(|Σ̂_1| / |Σ̂_0|),

which is asymptotically a chi-squared distribution with k² degrees of freedom, and more generally

  M(i) = −(T − k − i − 1.5) ln(|Σ̂_i| / |Σ̂_{i−1}|),

also asymptotically a chi-squared distribution with k² degrees of freedom. This is to test VAR(i) versus VAR(i−1).
Note: the ML estimator of Σ_i is

  Σ̃_i = (1/T) Σ_{t=i+1}^{T} â_t^(i) (â_t^(i))′.

The AIC of a VAR(i) model is

  AIC(i) = ln(|Σ̃_i|) + 2k²i / T.

The AR order p is the one resulting in the minimum AIC.
(B) Estimation and forecasting: see pages 316-318.
(C) SAS program: proc varmax data=us_money;
id mno interval=month; model ibm sp / p=3; run; -- example 8.4.
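The order-selection step under the AIC rule above can be sketched as follows (the determinant values stand in for |Σ̃_i| from actual fits and are made up):

```python
import math

T, k = 400, 2

# Hypothetical |Sigma_tilde_i| values from fitting VAR(0), VAR(1), ..., VAR(4):
# the determinant shrinks as lags are added, but with diminishing returns.
dets = [2.00, 1.20, 1.00, 0.99, 0.985]

# AIC(i) = ln(|Sigma_tilde_i|) + 2 * k^2 * i / T
aics = [math.log(d) + 2 * k * k * i / T for i, d in enumerate(dets)]
p = min(range(len(aics)), key=aics.__getitem__)
print(p)
```

Past some order, the penalty term 2k²i/T outweighs the fit improvement, so the minimizing order is finite.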
VMA and VARMA
VMA
VMA(q): r_t = θ_0 + a_t − Θ_1 a_{t−1} − ... − Θ_q a_{t−q}, or r_t = θ_0 + Θ(B) a_t.
VMA(1): r_t = θ_0 + a_t − Θ_1 a_{t−1}, i.e., in the bivariate case,

  r_1t = θ_10 + a_1t − Θ_11 a_1,t−1 − Θ_12 a_2,t−1
  r_2t = θ_20 + a_2t − Θ_21 a_1,t−1 − Θ_22 a_2,t−1

It says the current return series r_t only depends on the current and past shocks. This is a finite-memory model, thus stationary. The elements of Θ are also known as impulse response functions; they show how the return vector reacts to shocks (a_t). The parameter Θ_12 denotes the linear dependence of r_1t on a_2,t−1 in the presence of a_1,t−1. If Θ_12 = 0, then r_1t does not depend on the lagged shock a_2,t−1, and hence not on the lagged values of r_2t.
Properties of VMA: (1) μ = θ_0. That is, the constant vector is the mean vector of r_t for a VMA model.
(2) ρ_l = 0 if l > q, where q is the order of the VMA process.
VARMA
Φ(B) r_t = φ_0 + Θ(B) a_t -- see pages 322-327.
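A minimal VMA(1) simulation sketch (all parameter values made up). Because the model has finite memory, the sample mean converges to θ_0:

```python
import random

random.seed(7)

theta0 = (0.1, 0.2)                    # hypothetical mean vector
theta1 = [[0.4, 0.3],                  # hypothetical Theta_1 matrix
          [0.2, 0.5]]

a_prev = (0.0, 0.0)
r = []
for _ in range(1000):
    a = (random.gauss(0, 1), random.gauss(0, 1))
    r1 = theta0[0] + a[0] - theta1[0][0] * a_prev[0] - theta1[0][1] * a_prev[1]
    r2 = theta0[1] + a[1] - theta1[1][0] * a_prev[0] - theta1[1][1] * a_prev[1]
    r.append((r1, r2))
    a_prev = a

mean1 = sum(x for x, _ in r) / len(r)
print(mean1)  # close to theta0[0] = 0.1
```

Each return depends only on the current and one lagged shock, so autocorrelations beyond lag 1 vanish in population.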
Unit Root Nonstationarity and Co-integration
Consider an example in which the vector (x_1t, x_2t)′ is co-integrated. First, some details on the system:

  [x_1t]   [0.5   1.0] [x_1,t−1]   [a_1t]   [ 0.2  −0.4] [a_1,t−1]
  [x_2t] = [0.25  0.5] [x_2,t−1] + [a_2t] − [−0.1   0.2] [a_2,t−1]

The above can be restated as:

  x_1t = 0.5 x_1,t−1 + x_2,t−1 + a_1t − 0.2 a_1,t−1 + 0.4 a_2,t−1
  x_2t = 0.25 x_1,t−1 + 0.5 x_2,t−1 + a_2t + 0.1 a_1,t−1 − 0.2 a_2,t−1

It can be shown that the cointegration of these two series can be represented by the unit root of a univariate series (page 320).
Definition: x_1t and x_2t are cointegrated when
(a) both of them are unit-root nonstationary, and
(b) they have a linear combination that is unit-root stationary.
The vector achieving the unit-root stationary combination is called the cointegrating vector.
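A simulation sketch of this example system: the combination w_t = 0.5 x_1t − x_2t cancels the unit-root component (0.5 times row 1 minus row 2 of the AR coefficient matrix is the zero row), so w_t depends only on the shocks. The MA terms are dropped here for brevity; the cancellation in the AR part is what matters:

```python
import random

random.seed(3)

x1, x2 = 0.0, 0.0
xs, ws = [], []
for _ in range(2000):
    a1, a2 = random.gauss(0, 1), random.gauss(0, 1)
    # simultaneous update of the AR part: x_t = Phi x_{t-1} + a_t
    x1, x2 = (0.5 * x1 + 1.0 * x2 + a1,
              0.25 * x1 + 0.5 * x2 + a2)
    xs.append(x1)
    ws.append(0.5 * x1 - x2)           # candidate cointegrating combination

# x1 wanders (unit-root nonstationary); w_t = 0.5*a1t - a2t is white noise
var_w = sum(w * w for w in ws) / len(ws) - (sum(ws) / len(ws)) ** 2
var_x = sum(x * x for x in xs) / len(xs) - (sum(xs) / len(xs)) ** 2
print(var_w < var_x)
```

The AR matrix has eigenvalues 1 and 0, so each component is unit-root nonstationary while the combination stays bounded in variance.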
Error-Correction Form
Subtracting x_{t−1} from both sides of the equation,

  [Δx_1t]   [−0.5   1.0] [x_1,t−1]   [a_1t]   [ 0.2  −0.4] [a_1,t−1]
  [Δx_2t] = [ 0.25 −0.5] [x_2,t−1] + [a_2t] − [−0.1   0.2] [a_2,t−1]

which factors as

  [Δx_1t]   [−1.0]               [x_1,t−1]   [a_1t]   [ 0.2  −0.4] [a_1,t−1]
  [Δx_2t] = [ 0.5] [0.5  −1.0]  [x_2,t−1] + [a_2t] − [−0.1   0.2] [a_2,t−1]

Notice [0.5, −1.0] is the cointegrating vector: 0.5 x_1,t−1 − 1.0 x_2,t−1 is unit-root stationary.
The standard expression for an error-correction model of two variables is:

  Δx_1t = α_1 (x_1,t−1 − β x_2,t−1) + ε_1t
  Δx_2t = α_2 (x_1,t−1 − β x_2,t−1) + ε_2t

where x_1 − β x_2 is unit-root stationary; α_i measures the short-run adjustment of x_it to deviations from the long-run relationship x_1 = β x_2.
The expression of the error-correction form for the multivariate case:

  Δx_t = αβ′ x_{t−1} + Σ_{i=1}^{p−1} Φ*_i Δx_{t−i} + a_t − Σ_{j=1}^{q} Θ_j a_{t−j}
Procedure in Cointegration Tests
Adapted from Enders (1995), pages 374-377.
Step 1: Pre-test the variables for their orders of integration.
Step 2: Estimate the long-run equilibrium relationship.
– Perform the OLS regression
  y_t = β_0 + β_1 z_t + e_t
– Apply the Dickey-Fuller and ADF tests to check whether the error term is a white-noise or stationary process, using the regression on the fitted residuals
  Δê_t = a_1 ê_{t−1} + ε_t
Step 3: Estimate the error-correction model:

  Δy_t = α_1 + α_y (y_{t−1} − β_1 z_{t−1}) + Σ_i λ_11(i) Δy_{t−i} + Σ_i λ_12(i) Δz_{t−i} + ε_yt
  Δz_t = α_2 + α_z (y_{t−1} − β_1 z_{t−1}) + Σ_i λ_21(i) Δy_{t−i} + Σ_i λ_22(i) Δz_{t−i} + ε_zt

– Using the residual of the long-run equilibrium relationship in place of the term in parentheses, the t-statistic for the coefficient of the error-correction term is valid in large samples:

  Δy_t = α_1 + α_y ê_{t−1} + Σ_i λ_11(i) Δy_{t−i} + Σ_i λ_12(i) Δz_{t−i} + ε_yt
  Δz_t = α_2 + α_z ê_{t−1} + Σ_i λ_21(i) Δy_{t−i} + Σ_i λ_22(i) Δz_{t−i} + ε_zt
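Step 2 of the cointegration-test procedure can be sketched with a closed-form simple OLS regression in pure Python (the series below are made up; in practice the residuals would then be fed to an ADF test):

```python
def ols_residuals(y, z):
    """OLS of y_t = beta0 + beta1 * z_t + e_t; returns (beta0, beta1, residuals)."""
    n = len(y)
    zbar, ybar = sum(z) / n, sum(y) / n
    beta1 = (sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y))
             / sum((zi - zbar) ** 2 for zi in z))
    beta0 = ybar - beta1 * zbar
    resid = [yi - beta0 - beta1 * zi for yi, zi in zip(y, z)]
    return beta0, beta1, resid

# An exact linear relationship gives zero residuals
z = [1.0, 2.0, 3.0, 4.0]
y = [2.0 + 0.5 * zi for zi in z]
b0, b1, e = ols_residuals(y, z)
print(b0, b1)
```

With real data the residuals e would be nonzero, and their stationarity is what the Dickey-Fuller step checks.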
Conditional Covariance Matrix
Consider a multivariate return series {r_t}:

  r_t = μ_t + a_t,  where μ_t = φ_0 + Σ_{i=1}^{p} Φ_i r_{t−i} − Σ_{i=1}^{q} Θ_i a_{t−i}

The conditional covariance matrix of a_t given F_{t−1} is a k×k positive-definite matrix Σ_t. It is referred to as the volatility model for the return series r_t.
A problem in modeling Σ_t is that there are k(k+1)/2 quantities in Σ_t for a k-dimensional return series. To solve this problem:
(A) Use of correlations:

  Σ_t = [σ_ij,t] = D_t ρ_t D_t

where ρ_t is the conditional correlation matrix of a_t, and D_t is a k×k diagonal matrix consisting of the conditional standard deviations of the elements of a_t.
Use of Correlations

  Σ_t = [σ_ij,t] = D_t ρ_t D_t

Using Σ_t we have the k(k+1)/2-dimensional vector

  Ξ_t = (σ_11,t, ..., σ_kk,t, ϱ_t′)′

where ϱ_t is the k(k−1)/2-dimensional vector obtained by stacking the columns of the correlation matrix ρ_t, but using only the elements below the main diagonal.
For k = 2, we have ϱ_t = ρ_21,t and thus Ξ_t = (σ_11,t, σ_22,t, ρ_21,t)′.
Using this vector, we obtain the conditional density function of a_t given F_{t−1} (see page 359); this is a direct way to obtain likelihood functions.
Cholesky Decomposition
Cholesky decomposition: Σ_t = L_t G_t L_t′. If Σ_t is positive definite, there exist a lower triangular matrix L_t with unit diagonal elements and a diagonal matrix G_t with positive diagonal elements such that the decomposition holds.
Let b_1t = a_1t and perform the following orthogonal transformation:

  a_2t = q_21,t b_1t + b_2t
  a_3t = q_31,t b_1t + q_32,t b_2t + b_3t
  ...

That is, b_t = L_t^(−1) a_t, or a_t = L_t b_t, where L_t^(−1) is a lower triangular matrix with unit diagonal elements. The covariance matrix of b_t is the diagonal matrix G_t of the Cholesky decomposition:

  Cov(b_t) = L_t^(−1) Σ_t (L_t^(−1))′ = G_t

The transformed parameter vector is

  Ξ_t = (g_11,t, ..., g_kk,t, q_21,t, q_31,t, q_32,t, ..., q_k,k−1,t)′

where q_ij,t is the (i,j)th element of the lower triangular matrix L_t. We have the likelihood function as specified on page 363.
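The Σ = L G L′ factorization (unit-diagonal lower-triangular L, diagonal G) can be sketched in pure Python for any positive-definite matrix given as nested lists:

```python
def ldl(sigma):
    """Decompose sigma = L * G * L' with unit-diagonal lower-triangular L and
    diagonal G (returned as a list). Assumes sigma is positive definite."""
    k = len(sigma)
    L = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(k)]
    g = [0.0] * k
    for j in range(k):
        # pivot: g_jj = sigma_jj - sum of already-explained variance
        g[j] = sigma[j][j] - sum(L[j][s] ** 2 * g[s] for s in range(j))
        for i in range(j + 1, k):
            L[i][j] = (sigma[i][j]
                       - sum(L[i][s] * L[j][s] * g[s] for s in range(j))) / g[j]
    return L, g

# Example: a 2x2 covariance matrix (values made up)
sigma = [[4.0, 2.0],
         [2.0, 5.0]]
L, g = ldl(sigma)
print(L, g)  # L = [[1, 0], [0.5, 1]], g = [4.0, 4.0]
```

Multiplying back, L G L′ reproduces Σ, and the diagonal entries of G are the variances of the orthogonalized shocks b_t.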
Bivariate GARCH
For a k-dimensional return series r_t, a multivariate GARCH model uses "exact equations" to describe the evolution of the k(k+1)/2-dimensional volatility vector over time. By exact equation, we mean that the equation does not contain any stochastic shock. However, the exact equations may become complicated. To keep the model simple, some restrictions are often imposed on the equations.
(1) Constant-correlation models: the cross-correlation is constant over time.
– see (9.16) and (9.17) on page 364
proc varmax data=all; model ibm sp / p=1 garch=(q=1); nloptions tech=qn;
output out=for lead=5 back=3; run; (all contains two sets of returns)
(2) Time-varying correlation models
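A minimal constant-correlation sketch (not the book's exact (9.16)-(9.17) specification; all parameter values are made up): each conditional variance follows its own GARCH(1,1) recursion, and the covariance is tied to them through a fixed correlation ρ:

```python
# Hypothetical GARCH(1,1) parameters, shared by both series for brevity
alpha0, alpha1, beta1 = 0.1, 0.1, 0.8
rho = 0.5                              # constant cross-correlation (assumed)

# Made-up shock pairs (a_1t, a_2t) standing in for mean-equation residuals
shocks = [(0.3, -0.5), (1.2, 0.8), (-0.4, 0.1)]

h1 = h2 = alpha0 / (1 - alpha1 - beta1)   # start at the unconditional variance
covs = []
for a1, a2 in shocks:
    h1 = alpha0 + alpha1 * a1 ** 2 + beta1 * h1   # sigma_11,t recursion
    h2 = alpha0 + alpha1 * a2 ** 2 + beta1 * h2   # sigma_22,t recursion
    sigma12 = rho * (h1 * h2) ** 0.5              # constant-correlation covariance
    covs.append(((h1, sigma12), (sigma12, h2)))

print(len(covs))
```

Because |ρ| < 1 and both variances stay positive, each Σ_t produced this way is positive definite; only the two variance equations evolve, which is exactly the simplification the constant-correlation restriction buys.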
Exercises
• Ch8, problem 2
• Replicate de Goeij and Marquering (2004), "Modeling the Conditional Covariance between Stock and Bond Returns: A Multivariate GARCH Approach," Journal of Financial Econometrics, 2(4), 531-564.