ECON 240C    Lecture 17    5-30-2002
I. Simultaneity, Systems of Equations and Vector Autoregression Models (VAR)
As we have seen, in examining the dynamic relationships between two (or more) variables,
the causality may be mutual rather than simply unidirectional. This situation often occurs among
macroeconomic variables. Vector autoregression modeling is an attempt to deal with this situation.
We can capture the simultaneity in the following specification, referred to as the primitive VAR.
A. Primitive VAR
(1) y(t) = α1 + β1 w(t) + γ11 y(t-1) + γ12 w(t-1) + δ1 x(t) + ey(t)
(2) w(t) = α2 + β2 y(t) + γ21 y(t-1) + γ22 w(t-1) + δ2 x(t) + ew(t)
In equation one, the endogenous variable y(t) depends upon the contemporaneous value of
the other endogenous variable w(t), i.e. they are contemporaneously correlated, and depends as
well on lagged values of itself, y(t-1), and lagged values of the other dependent variable,
w(t-1). In addition, y(t) depends upon an exogenous variable, x(t), and an error term, ey(t). The
latter is presumed to be orthogonal, i.e. not autocorrelated. Similarly, in equation two, the dependent variable w(t)
depends contemporaneously on the other endogenous variable, y(t), and on lagged values of
itself and the other dependent variable. There is also dependence on an exogenous variable and
its own orthogonal error, ew(t), in this illustration. The errors for each equation ey(t) and ew(t)
are presumed to be independent of one another.
Both endogenous variables can be expressed in terms of the endogenous variables lagged one
period, by eliminating the contemporaneous w(t) in equation one and y(t) in equation two through
substitution. For example, use the expression for w(t) in equation two to substitute for w(t) in
equation one. The resulting equations are called the VAR in standard form.
B. Standard VAR
(1’) y(t) = (α1 + β1 α2)/(1 - β1 β2) + [(γ11 + β1 γ21)/(1 - β1 β2)] y(t-1) + [(γ12 + β1 γ22)/(1 - β1 β2)] w(t-1)
+ [(δ1 + β1 δ2)/(1 - β1 β2)] x(t) + (ey(t) + β1 ew(t))/(1 - β1 β2)
(2’) w(t) = (α2 + β2 α1)/(1 - β1 β2) + [(γ21 + β2 γ11)/(1 - β1 β2)] y(t-1) + [(γ22 + β2 γ12)/(1 - β1 β2)] w(t-1)
+ [(δ2 + β2 δ1)/(1 - β1 β2)] x(t) + (β2 ey(t) + ew(t))/(1 - β1 β2)
The thing to note in equation one prime is that the dependent variable y(t) depends only on
lagged values of the endogenous variables, y(t-1) and w(t-1), called predetermined variables
because they predate y(t) by one period. In this example, since there is only one lag in the
dependence, the structure is autoregressive of order one. Note that y(t) also depends
contemporaneously on the exogenous variable x(t). Note also that the error term in equation one
prime, (ey(t) + β1 ew(t))/(1 - β1 β2), is a linear combination of ey(t) and ew(t), but is still
orthogonal.
Equation two prime has a similar structure: w(t) depends on lagged y(t-1) and w(t-1), and
hence is first order autoregressive, and depends contemporaneously on the exogenous variable
x(t). The error term, (β2 ey(t) + ew(t))/(1 - β1 β2), is a linear combination of ey(t) and ew(t), is
orthogonal, but is contemporaneously correlated with the error term in equation one prime. The two
error terms for equations one prime and two prime will be contemporaneously correlated unless
both β1 and β2 are zero, which will be the case only if y(t) does not depend on w(t) in equation
one and w(t) does not depend on y(t) in equation two in the primitive system.
Note that if β1 is zero, the error for equation one prime equals ey(t), i.e. it only depends on
the shock or error in y(t) in equation one. In contrast, the error for equation two prime will equal
β2 ey(t) + ew(t), and depends on both the shock to y(t) in equation one, ey(t), and the shock to
w(t) in equation two, ew(t).
These two equations in the standard VAR can be estimated by ordinary least squares (OLS),
since the error term in each equation is independent of the regressors, y(t-1), w(t-1), and x(t),
and is not autocorrelated, i.e. is orthogonal. This is the econometric motivation for expressing
the VAR in standard form, since the VAR in primitive form cannot be estimated by OLS unless
both β1 and β2 are zero, in which case the primitive form is equivalent to the standard form, i.e.
equation one prime reduces to equation one and equation two prime reduces to equation two.
The parameter notation for the standard VAR can be simplified now that we have
demonstrated the relationship between the primitive VAR and the standard VAR.
(1’)
y(t) = a1 + b11 y(t-1) + c11 w(t-1) + d1 x(t) + e1(t)
(2’)
w(t) = a2 + b21 y(t-1) + c21 w(t-1) + d2 x(t) + e2(t)
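As a check on this reduction, the following sketch (Python with NumPy; the primitive parameter values are arbitrary illustrative assumptions, not taken from the lecture) maps a set of primitive-VAR parameters into the standard-form coefficients of equation (1’), simulates the primitive system, and verifies that OLS applied to the standard form recovers those coefficients.

import numpy as np

# Hypothetical primitive-VAR parameter values (illustrative assumptions only)
alpha1, alpha2 = 0.5, 1.0                      # intercepts
beta1, beta2 = 0.4, 0.3                        # contemporaneous coefficients
gamma11, gamma12, gamma21, gamma22 = 0.5, 0.1, 0.2, 0.4   # lag coefficients
delta1, delta2 = 0.6, -0.2                     # coefficients on the exogenous x(t)

den = 1.0 - beta1 * beta2
# Standard-form coefficients for equation (1') implied by the formulas above
a1 = (alpha1 + beta1 * alpha2) / den
b11 = (gamma11 + beta1 * gamma21) / den
c11 = (gamma12 + beta1 * gamma22) / den
d1 = (delta1 + beta1 * delta2) / den

# Simulate the primitive VAR, solving the 2x2 contemporaneous system each period
rng = np.random.default_rng(0)
T = 5000
x = rng.standard_normal(T)                     # exogenous variable
ey, ew = rng.standard_normal(T), rng.standard_normal(T)
y, w = np.zeros(T), np.zeros(T)
B0 = np.array([[1.0, -beta1], [-beta2, 1.0]])  # contemporaneous coefficient matrix
for t in range(1, T):
    rhs = np.array([alpha1 + gamma11 * y[t-1] + gamma12 * w[t-1] + delta1 * x[t] + ey[t],
                    alpha2 + gamma21 * y[t-1] + gamma22 * w[t-1] + delta2 * x[t] + ew[t]])
    y[t], w[t] = np.linalg.solve(B0, rhs)

# OLS on the standard form: regress y(t) on a constant, y(t-1), w(t-1), x(t)
X = np.column_stack([np.ones(T - 1), y[:-1], w[:-1], x[1:]])
ols = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print("implied (a1, b11, c11, d1):", np.round([a1, b11, c11, d1], 3))
print("OLS     (a1, b11, c11, d1):", np.round(ols, 3))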
C. Identification and the Choleski Decomposition of the Variance of the Estimated Residuals
There are twelve parameters in the primitive VAR: two α’s, two β’s, four γ’s, two δ’s, the
variance of ey(t), and the variance of ew(t). There are only eleven parameters to be estimated in
the standard VAR: two a’s, two b’s, two c’s, two d’s, the variance of e1(t), the variance of e2(t),
and the covariance of e1(t) and e2(t). Consequently, there is not enough information from the
estimation of the VAR in standard form to uniquely identify the parameters of the primitive
VAR. An additional restriction is necessary, such as assuming β1 is zero. This assumes w(t) does
not contemporaneously affect y(t), and implies that the estimated variance in the residuals for y(t),
E[e1(t)]2, is equal to the variance in the shocks to y(t) in the primitive VAR, i.e.
E[e1(t)]2 = E[ey(t)]2 .
The observed variance in the residuals for w(t), E[e2(t)]2, is then a combination of the variance in the
shocks to y(t) and the variance in the shocks to w(t) in the primitive VAR:
E[e2(t)]2 = (β2)2 E[ey(t)]2 + E[ew(t)]2 .
This is called the Choleski decomposition of the variance in the estimated residuals. An
alternative restriction is to assume β2 is zero. Which way to conduct the Choleski decomposition,
i.e. β1 = 0 or β2 = 0, is a judgement call.
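As a small numerical sketch of this identification step (the helper function below is illustrative, not drawn from any package), the primitive shock variances can be recovered from the standard-form residual moments under either restriction; the moments used here, unit variances and a covariance of 0.8, anticipate the numerical example later in these notes.

def choleski_primitive(var_e1, var_e2, cov_e12, restrict="beta1"):
    # Recover primitive-VAR shock variances from the standard-VAR residual moments
    # under one of the two Choleski restrictions.
    if restrict == "beta1":      # beta1 = 0: e1 = ey, e2 = beta2*ey + ew
        beta2 = cov_e12 / var_e1
        return {"beta2": beta2, "var_ey": var_e1, "var_ew": var_e2 - beta2**2 * var_e1}
    else:                        # beta2 = 0: e2 = ew, e1 = ey + beta1*ew
        beta1 = cov_e12 / var_e2
        return {"beta1": beta1, "var_ey": var_e1 - beta1**2 * var_e2, "var_ew": var_e2}

print(choleski_primitive(1.0, 1.0, 0.8, restrict="beta1"))  # beta2 = 0.8, var_ey = 1.0,  var_ew = 0.36
print(choleski_primitive(1.0, 1.0, 0.8, restrict="beta2"))  # beta1 = 0.8, var_ey = 0.36, var_ew = 1.0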
D. Estimation of the Standard VAR
As mentioned above, each equation can be estimated by OLS. There are several issues: for
example, how many lags to include, controlling for seasonality, if present, and testing for Granger
causality.
1. Number of Lags.
Suppose for example that we are using quarterly data. The usual practice would be to
include four lags:
(1a)
y(t) = a1 + b11 y(t-1) + b12 y(t-2) + b13 y(t-3) + b14 y(t-4) + c11 w(t-1) + c12 w(t-2) +
c13 w(t-3) + c14 w(t-4) + d1 x(t) + e1(t)
(2a)
w(t) = a2 + b21 y(t-1) + b22 y(t-2) + b23 y(t-3) + b24 y(t-4) + c21 w(t-1) + c22 w(t-2) +
c23 w(t-3) + c24 w(t-4) + d2 x(t) + e2(t)
The two equations are specified in a similar fashion. It is the system specification we are
interested in, not an equation by equation treatment. One could test for more lags, such as
eight or twelve, or fewer lags such as two.
(1b)
y(t) = a1 + b11 y(t-1) + b12 y(t-2) + c11 w(t-1) + c12 w(t-2) + d1 x(t) + e1(t)
(2b)
w(t) = a2 + b21 y(t-1) + b22 y(t-2) + c21 w(t-1) + c22 w(t-2) + d2 x(t) + e2(t)
In the case of two lags, we would have a second order VAR. The test for the number of lags
is based on the determinant of the estimated error variances and covariance; for example, for
four lags, DET[4] = E[e1(t)]2 * E[e2(t)]2 - E[e1(t) e2(t)] * E[e1(t) e2(t)], from system a, that is,
the product of the estimated variance of the residuals from equation (1a) times the
estimated variance of the residuals from equation (2a), minus the square of the covariance
of the residuals from equations (1a) and (2a). The determinant of the error variances and
covariance from system b, DET[2], is calculated the same way. Note that system b is constrained
relative to system a, since we set the parameters for lags three and four equal to zero, a total of
eight parameters constrained in the two equations. As a consequence of these constraints, the
error variances will be larger for system b than for system a, as will the determinant:
DET[2] > DET[4] .
The test for the number of lags is based on the logarithm of the likelihood ratio,
(T-c) ln[DET[2] / DET[4] = (T-c) {ln DET[2] - ln DET[4] }
where T is the number of observations, and should be the same for the estimates of both
systems, and c is the number of regression coefficients estimated in an equation in the
unconstrained system, in this case ten. This statistic is distributed chi square with eight
degrees of freedom , the number of parameters set to zero as we moved from system a to
system b. A large chi square statistic significant at the 5% level indicates the restrictions
increased the error variances and we would reject the null hypothesis of only two lags.
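A sketch of this likelihood ratio test in Python with NumPy and SciPy, assuming the residuals from the four-lag and two-lag systems have already been collected into T-by-2 arrays (the array names are placeholders):

import numpy as np
from scipy.stats import chi2

def var_lr_test(resid_unrestricted, resid_restricted, n_coef_per_eq, n_restrictions):
    # Likelihood ratio test comparing two VAR systems estimated on the same sample;
    # resid_* are (T, 2) arrays holding the residuals [e1, e2] from each system.
    T = resid_unrestricted.shape[0]
    det_u = np.linalg.det(np.cov(resid_unrestricted, rowvar=False, bias=True))
    det_r = np.linalg.det(np.cov(resid_restricted, rowvar=False, bias=True))
    stat = (T - n_coef_per_eq) * (np.log(det_r) - np.log(det_u))
    return stat, chi2.sf(stat, df=n_restrictions)

# e.g. four lags (10 coefficients per equation) against two lags (8 restrictions):
# stat, pval = var_lr_test(resid_4lag, resid_2lag, n_coef_per_eq=10, n_restrictions=8)
# A p-value below 0.05 rejects the null hypothesis that two lags are sufficient.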
2. Seasonality
If seasonality is present, it can be controlled for by adding seasonal dummy variables.
For example, if the data is quarterly, the variable Q1 would equal one for every first quarter
observation and zero elsewhere. Similarly for Q2 and Q3, the dummy variables indicating the
second quarter and third quarter observations respectively. The dummy variables should be
added to both equations even if only one of the dependent variables is seasonal. Using
system b as an example:
(1c)
y(t) = a1 + b11 y(t-1) + b12 y(t-2) + c11 w(t-1) + c12 w(t-2) + d1 x(t) + f11 Q1 + f12 Q2 +
f13 Q3 + e1(t)
(2c)
w(t) = a2 + b21 y(t-1) + b22 y(t-2) + c21 w(t-1) + c22 w(t-2) + d2 x(t) + f21 Q1 + f22 Q2 +
f23 Q3 + e2(t)
The standard VAR system can be estimated with the dummy variables and without the
dummy variables and a likelihood ratio test based on the determinants of the error variances
and covariance from each estimation can be conducted. The null hypothesis is that the
seasonal dummy variables are not necessary.
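For quarterly data the dummies can be generated directly from the observation index; a minimal sketch in Python, assuming the sample starts in a first quarter:

import numpy as np

T = 120                                  # e.g. 30 years of quarterly observations
quarter = (np.arange(T) % 4) + 1         # 1, 2, 3, 4, 1, 2, ... (assumes the sample starts in Q1)
Q1 = (quarter == 1).astype(float)        # one in every first-quarter observation, zero elsewhere
Q2 = (quarter == 2).astype(float)
Q3 = (quarter == 3).astype(float)        # the fourth quarter is the omitted base category
# Append Q1, Q2, Q3 to the regressor matrix of BOTH equations, then compare the systems
# with and without the dummies using the likelihood ratio test sketched above.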
3. Granger Causality
Causality can be tested, conditional on seasonality, for example in equation (1c), by
constraining c11 and c12 to zero. One way to proceed is to estimate
(1c’)
y(t) = a1 + b11 y(t-1) + b12 y(t-2) + d1 x(t) + f11 Q1 + f12 Q2 + f13 Q3 + e1(t)
using OLS and conduct an F test based on the change in the residual sum of squares (RSS)
between equations (1c) and (1c’):
F(2, T-9) = {[RSS(1c’) - RSS(1c)]/2} ÷ [RSS(1c)/(T-9)],
where 2 variables are dropped, accounting for the degrees of freedom in the numerator, and
T is the number of observations and 9 is the number of parameters estimated in equation (1c)
so RSS(1c)/(T-9) is the unexplained mean square in the denominator. A similar procedure
can be followed to test whether y(t) Granger causes w(t) in equation (2c).
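A sketch of this F test in Python, assuming the dependent variable and the unrestricted and restricted regressor matrices for equation (1c) have already been assembled (the names are placeholders):

import numpy as np
from scipy.stats import f as f_dist

def granger_f_test(y, X_unrestricted, X_restricted):
    # F test that the coefficients on the variables dropped from X_unrestricted
    # (here the lags of w) are jointly zero.
    def rss(X):
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        return resid @ resid
    rss_u, rss_r = rss(X_unrestricted), rss(X_restricted)
    q = X_unrestricted.shape[1] - X_restricted.shape[1]   # restrictions dropped (2 here)
    dof = len(y) - X_unrestricted.shape[1]                # T - 9 for equation (1c)
    F = ((rss_r - rss_u) / q) / (rss_u / dof)
    return F, f_dist.sf(F, q, dof)

# X_unrestricted columns: 1, y(t-1), y(t-2), w(t-1), w(t-2), x(t), Q1, Q2, Q3
# X_restricted: the same matrix with the w(t-1) and w(t-2) columns removed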
E. Interpretation of the Standard VAR
The foci of interest in VAR analysis are the endogenous variables and how they respond to
shocks or innovations in the error terms that uniquely affect them, namely ey(t) and ew(t). Recall
that to interpret the observed errors in terms of these unobserved errors, a Choleski
decomposition must be used.
It is possible to express the dependent variables as a distributed lag of the observed errors and
then use a Choleski decomposition to interpret this distributed lag in terms of the original shocks,
unique to each dependent variable. To derive the distributed lag of the dependent variables on the
errors recall that an autoregressive process can be expressed as an infinite moving average process.
For example, take the standard VAR equations (1’) and (2’) above, abstracting from the
constants and the exogenous variable for simplicity:
(1’’)
y(t) = b11 y(t-1) + c11 w(t-1)+ e1(t)
(2’’)
w(t) = b21 y(t-1) + c21 w(t-1) + e2(t)
One of the equations can be used to eliminate one of the dependent variables and the other
dependent variable can be expressed as distributed lags of the two observed errors.
(1’’)
y(t) - b11 Zy(t) = c11 w(t-1)+ e1(t)
(2’’)
w(t) - c21 Zw(t) = b21 y(t-1) + e2(t)
and solving for w(t) and lagging by one period:
(2’’)
w(t-1) = [1 - c21 Z]-1 [b21 y(t-2) + e2(t-1)].
Substituting for w(t-1) in equation (1’’):
(1’’)
[1 - b11 Z] y(t) = c11 [1- c21 Z]-1 [b21 y(t-2) + e2(t-1)] + e1(t)
and multiplying by [1- c21 Z]:
[1 - b11 Z] [1- c21 Z] y(t) = c11 [b21 y(t-2) + e2(t-1)] + [1- c21 Z] e1(t)
[1- b11 Z - c21 Z + b11 c21 Z2]y(t) = c11 b21 Z2 y(t) + c11 e2(t-1) + [1- c21 Z] e1(t)
and collecting all the endogenous variable terms on the left hand side:
[1- (b11 + c21 )Z + (b11 c21 - c11 b21)Z2]y(t) = c11 e2(t-1) + [1- c21 Z] e1(t)
y(t) = [1- (b11 + c21 )Z + (b11 c21 - c11 b21)Z2]-1{ c11 e2(t-1) + [1- c21 Z] e1(t)}
To solve for y(t) as a distributed lag of e1(t) and e2(t-1), we need to know the inverse:
[1 - (b11 + c21)Z + (b11 c21 - c11 b21)Z2]-1 = [1 + ψ1 Z + ψ2 Z2 + ψ3 Z3 + ...] .
Recall from the Lecture Six Notes that, to express a second order autoregressive process as an infinite
moving average process, we needed to find ψ1, ψ2, etc. This can be done since the product,
[1 - (b11 + c21)Z + (b11 c21 - c11 b21)Z2] [1 + ψ1 Z + ψ2 Z2 + ψ3 Z3 + ...] = 1,
i.e. the coefficients on Z, Z2, etc. in the product are zero:
1 + [ψ1 - (b11 + c21)]Z + [ψ2 - ψ1 (b11 + c21) + (b11 c21 - c11 b21)]Z2 + ... = 1,
and
ψ1 = (b11 + c21)
ψ2 = ψ1 (b11 + c21) - (b11 c21 - c11 b21)
   = (b11 + c21)2 - (b11 c21 - c11 b21)
etc. Thus
y(t) = {1 + (b11 + c21)Z + [(b11 + c21)2 - (b11 c21 - c11 b21)]Z2 + ...}{c11 e2(t-1) + [1 - c21 Z] e1(t)}
and using the Choleski decomposition where β2 is assumed to be zero:
e1(t) = ey(t) + β1 ew(t)
e2(t) = ew(t)
and thus
y(t) = {1 + (b11 + c21)Z + [(b11 + c21)2 - (b11 c21 - c11 b21)]Z2 + ...}
{c11 ew(t-1) + [1 - c21 Z][ey(t) + β1 ew(t)]}
i.e.
y(t) = ey(t) + b11 ey(t-1) +[( b11)2 + c11 b21 ] ey(t-2) + ... +



ew(t) + [b11 + c11 ]ew(t-1) +[( b11)2 + c11 b21 ] + c11 (b11 + c21 )}ew(t-2) + ...
1. Numerical Example
Enders uses a numerical example:
(1’’)
y(t) = b11 y(t-1) + c11 w(t-1)+ e1(t) = 0.7 y(t-1) + 0.2 w(t-1)+ e1(t)
(2’’)
w(t) = b21 y(t-1) + c21 w(t-1) + e2(t) = 0.2 y(t-1) + 0.7 w(t-1) + e2(t)
where the variance of e1(t) equals the variance of e2(t), and the correlation between e1(t) and
e2(t) equals 0.8. Using a Choleski decomposition where β1 = 0.8 and β2 = 0,
e1(t) = (ey(t) + β1 ew(t))/(1 - β1 β2) = ey(t) + 0.8 ew(t)
e2(t) = (β2 ey(t) + ew(t))/(1 - β1 β2) = ew(t)
Note: COV[e1(t) e2(t)] = 0.8 E[ew(t)]2 = 0.8 E[e2(t)]2 = 0.8 E[e1(t)]2 = 0.8,
and the correlation is COR[e1(t) e2(t)] = COV[e1(t) e2(t)]/{√E[e1(t)]2}{√E[e2(t)]2} = 0.8.
Using these values for b11, c11, b21, c21, and β1, we have the impulse response
function for y(t) on ey(t) and ew(t):
y(t) = ey(t) + 0.7 ey(t-1) + 0.53 ey(t-2) + ... + 0.8 ew(t) + 0.76 ew(t-1) + 0.70 ew(t-2) + 0.64 ew(t-3) + ...
Of course this is only the impulse response function for y(t). There is also an impulse
response function for w(t), and these two impulse response functions are only for the Choleski
decomposition where β2 is zero. We can also calculate the impulse response functions where β1 is
zero. It may be easier to illustrate this numerical example using simulation.
II. Simulation of Impulse Response Functions
Open Eviews
Objects/New/Workfile: varsim.wf1, undated, 1000 observations
Workfile Menu: GENR: innovy=0.6*nrnd
Workfile Menu: GENR: innovw=nrnd
Workfile Menu: GENR: innovone=innovy+0.8*innovw
Workfile Menu: GENR: innovtwo=innovw
Note: the nrnd’s have standard deviations of one, so consequently do
innovw and innovtwo. Innovy has a standard deviation of 0.6. The standard
deviations of innovone and innovtwo are equal, as required, and their
correlation is 0.8.
Objects/New/Model: SIMVARMOD
y=0.7*y(-1)+0.2*w(-1)+innovone
w=0.2*y(-1)+0.7*w(-1)+innovtwo
Model Window: SOLVE
Objects/New/VAR: Unrestricted VAR, lag interval 1 1, sample 1 1000, endogenous
variables y w, no intercept
VAR Window: VIEW: estimation output
The estimated VAR is:
y=0.67*y(-1)+0.25*w(-1)
w=0.19*y(-1)+0.73*w(-1)
VAR Window: VIEW: residual covariance matrix
The estimated variance of innovone is 1.01, of innovtwo is 0.95, and their
estimated covariance is 0.79.
VAR Window: VIEW: Impulse-VAR Decomposition, impulse responses,
innovations to: y w, cause responses by: y w, use the ordering: w y, Table, 10 periods.
Note: Since innovy has a standard deviation of 0.6, and Eviews calculates the response of y and w
for a one standard deviation change in innovy, the figures in the first column need to be divided by
0.6 to be comparable to the coefficients on ey(t), ey(t-1), etc. in the impulse response equation for
y(t) in the numerical example above.
The distributed lag (impulse response) of y(t) to a shock, i.e. to ey(t), is compared using the
calculated coefficients from the impulse response equation in the numerical example above and the
simulated figures in the left column, top panel, divided by 0.6, as explained in the note above.
[Figure: Impact of a Shock in y on the Variable y: Impulse Response Function. Calculated vs. simulated impact multipliers, periods 0 to 10.]
Note that the calculated values compare closely to the simulated ones, and it is easier to simulate.
As we saw from the impulse response equation for y(t) in the numerical example above, the endogenous variable y(t) depends most on the
current shock, and the influence fades with lag. Equivalently, in interpretation, a current shock,
ey(t), has the biggest impact on the current y(t) and then less and less influence as time passes.
In the figure that follows, the impact of a shock to the endogenous variable w, ew(t), on the
variable y(t) is compared for the calculated values from the impulse response equation above and the
simulated values in the right column of the top panel. Once again, the calculated and simulated
values are similar and the ease of using the simulation is welcome.
[Figure: Impact of a Shock in w on the Variable y: Impulse Response Function. Calculated vs. simulated impact multipliers, periods 0 to 9.]
[Figure: Impact of a Shock in w on the Variables y and w. Simulated impact multipliers: response of y and response of w, periods 0 to 9.]
In the figure above, the simulated impulse response functions from the table, with the left hand
column adjusted, are used to reproduce the top panel of Figure 5.6 in Enders. A shock in w, ew(t),
immediately affects the variable w(t) and the variable y(t), the first more than the second. Thus the
effect of a shock in w, ew(t), has an immediate impact on both w and y and then the impact fades
with time, becoming equal after three or four periods.
[Figure: The Impact of a Shock in y on the Variables y and w. Simulated impact multipliers: response of y and response of w, periods 0 to 9.]
In the figure above, a shock in y, ey(t), immediately affects the variable y(t), but not the variable
w(t), since β2 is presumed zero. As can be seen from the primitive VAR, current y(t) does not affect
w(t) if β2 is zero. But after a period, a shock in y will affect w through y(t-1). Thus the effect of a
shock in y, ey(t), has an immediate impact on y and then fades, while there is no immediate impact
on w, but the impact builds with time, maxes out in four periods, and then decays.
III. Decomposition of the Forecast Errors
In addition to using the impulse response functions to interpret the vector autoregressions,
one can also decompose one period ahead, two period ahead, etc. forecast errors. For example, using
the moving average representation for the endogenous variable y derived in section E above, which
incorporated the Choleski decomposition where β2 is assumed zero, to express y(t) as a moving
average of the error processes ey(t) and ew(t):
y(t) = ey(t) + b11 ey(t-1) + [(b11)2 + c11 b21] ey(t-2) + ...
+ β1 ew(t) + [b11 β1 + c11] ew(t-1) + {[(b11)2 + c11 b21] β1 + c11 (b11 + c21)} ew(t-2) + ...
The one period ahead forecast is:
Et-1 y(t) = b11 ey(t-1) + [(b11)2 + c11 b21] ey(t-2) + ...
+ [b11 β1 + c11] ew(t-1) + {[(b11)2 + c11 b21] β1 + c11 (b11 + c21)} ew(t-2) + ...
and the one period ahead forecast error is:
y(t) - Et-1 y(t) = ey(t) + β1 ew(t),
with variance:
E[y(t) - Et-1 y(t)]2 = E[ey(t)]2 + (β1)2 E[ew(t)]2,
and the fraction attributable to the shock to y, ey(t), is:
E[ey(t)]2/{E[ey(t)]2 + (β1)2 E[ew(t)]2}
The two period ahead forecast is:
Et-2 y(t) = [(b11)2 + c11 b21] ey(t-2) + ... + {[(b11)2 + c11 b21] β1 + c11 (b11 + c21)} ew(t-2) + ...
and the two period ahead forecast error is:
y(t) - Et-2 y(t) = ey(t) + b11 ey(t-1) + β1 ew(t) + [b11 β1 + c11] ew(t-1),
with variance:
E[y(t) - Et-2 y(t)]2 = E[ey(t)]2 + (b11)2 E[ey(t-1)]2 + (β1)2 E[ew(t)]2 + [b11 β1 + c11]2 E[ew(t-1)]2,
and the fraction attributable to the shock to y, ey(t), is:
{E[ey(t)]2 + (b11)2 E[ey(t-1)]2}/{E[ey(t)]2 + (b11)2 E[ey(t-1)]2 + (β1)2 E[ew(t)]2 + [b11 β1 + c11]2 E[ew(t-1)]2}.
1. Numerical Example
Using the numerical values from above, b11 = 0.7, c11 = 0.2, b21 = 0.2, c21 = 0.7, β1 = 0.8, the
variance of ey(t) is 0.36 and the variance of ew(t) is 1.0. In the one period ahead forecast the fraction
attributable to the shock to y, ey(t), is 0.36. In the two period ahead forecast the fraction attributable
to ey(t), is 0.306.
2. Simulation Example
The simulation can be used to approximate these calculated results.
VAR Window: VIEW: Impulse-VAR Decomposition, variance decomposition,
innovations to: y w, cause responses by: y w, use the ordering: w y, Table:
As can be seen from the left column in the top panel, the fraction of the variance in the endogenous
variable y attributable to the shock in y is 34.7% simulated, compared to 36% calculated, for one
period ahead. For two periods ahead, the simulated fraction is 28.3%, compared to 30.6% calculated.
Appendix
VAR models can be expressed in matrix notation. For example, for a primitive VAR of the
first order:
A. Primitive VAR
[y(t)]   [α1]   [ 0   β1] [y(t)]   [γ11  γ12] [Z y(t)]   [δ1]         [ey(t)]
[w(t)] = [α2] + [β2    0] [w(t)] + [γ21  γ22] [Z w(t)] + [δ2] x(t) + [ew(t)]
or
v(t) = α + β v(t) + γ Z v(t) + δ x(t) + e(t)
where
v(t) is the vector of the observed endogenous variables,
e(t) is the vector of unobserved shocks to the endogenous variables,
x(t) is the scalar exogenous variable,
Z is the scalar lag operator,
and α, β, γ, and δ are the vectors and matrices of coefficients shown above.
B. Standard VAR in Autoregressive (AR) Form
The primitive VAR can be solved to obtain the VAR in standard form:
I v(t) - β v(t) = [I - β] v(t) = α + γ Z v(t) + δ x(t) + e(t),
where I is the identity matrix. Multiplying by the inverse of [I - β], i.e. by [I - β]-1:
[I - β]-1 [I - β] v(t) = [I - β]-1 [α + γ Z v(t) + δ x(t) + e(t)],
where
[I - β]-1 α = a , of dimensions 2 by 1,
[I - β]-1 γ = b , of dimensions 2 by 2,
[I - β]-1 δ = d , of dimensions 2 by 1,
[I - β]-1 e(t) = e*(t) , of dimensions 2 by 1, so:
v(t) = a + b Z v(t) + d x(t) + e*(t)
C. The Choleski Decomposition
The expression for e*(t) = [e1(t); e2(t)] can be expanded, where
[I - β]-1 = (1/(1 - β1 β2)) [1  β1; β2  1],
so:
e1(t) = [ey(t) + β1 ew(t)] / (1 - β1 β2)
e2(t) = [β2 ey(t) + ew(t)] / (1 - β1 β2)
In general, the shock to both y(t) and w(t) in the primitive VAR will affect the shocks to y(t)
and w(t) in the standard VAR. However, if we presume that w(t) does not affect y(t)
contemporaneously in the primitive VAR, i.e. that β1 is zero, then only the shock to y(t) in the
primitive VAR affects the shock to y(t) in the standard VAR, i.e. e1 (t) = ey (t). Consequently,
the estimated shock ê1(t) from the estimated standard VAR can be interpreted as a pure shock to
y(t) under this assumption. Alternatively, if y(t) is assumed to not affect w(t) contemporaneously
in the primitive VAR, i.e. β2 is assumed to be zero, then e2(t) = ew(t), and the estimated shock
to w(t) from the estimated standard VAR can be interpreted as a pure shock to w(t). These
alternative Choleski decompositions are alternative “what if” scenarios that enable an
unambiguous (by assumption) interpretation of the impulse response functions, or distributed
lags, of the endogenous variables on their innovations, or shocks, ey(t) and ew(t), from the
unobserved primitive VAR, where these shocks are independent of one another. Without these
assumptions, there is no way to tell an unambiguous story since the observed shocks, e1 (t) and
e2 (t), each depend on both of the unobserved shocks, ey (t) and ew (t). This is a limit to the
interpretative value of VAR analysis. But of course one could argue that all economic story
telling depends on assumptions.
D. Standard VAR in Moving Average (MA) Form
As usual, a first (or higher) order AR,
v(t) = a + b Z v(t) + d x(t) + e*(t),
can be expressed as an infinite moving average of the error:
[ I - Z b] v(t) = a + d x(t) + e*(t),
and multiplying by the inverse of [ I - Z b],
v(t) = [ I - Z b]-1 [a + d x(t) + e*(t)],
where
[ I - Z b]-1 = [1 + bZ + b2 Z2 + b3 Z3 + ... +],
and
v(t) = [1 + bZ + b2 Z2 + b3 Z3 + ... +] [a + d x(t) + e*(t)],
v(t) = [I - b]-1 a + d x(t) + b d x(t-1) + b2 d x(t-2) + ...+ e*(t) + b e*(t-1) + ..
Using the Choleski decomposition where β2 is presumed zero, then e*(t) equals:
e1(t) = ey(t) + β1 ew(t)
e2(t) = ew(t),
and using the notation for b from the standard VAR (equations (1’’) and (2’’) above),
b = [b11  c11; b21  c21],
and the dependence of v(t) on the error structure is:
v(t) = ... + e*(t) + b e*(t-1) + b2 e*(t-2) + ...
or
y(t) = ... + ey(t) + β1 ew(t) + b11 [ey(t-1) + β1 ew(t-1)] + c11 ew(t-1) + ...
w(t) = ... + ew(t) + b21 [ey(t-1) + β1 ew(t-1)] + c21 ew(t-1) + ...
These last two equations can be used to calculate the impulse response functions between the
endogenous variables and their pure shocks, under the Choleski assumption, and the first
equation corresponds to the moving average equation for y(t) in section I.E of the main text, right above the numerical example.
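The matrix algebra in this appendix can be checked numerically; a short Python sketch using the coefficient values from the numerical example and the Choleski case where β2 is zero:

import numpy as np

beta1, beta2 = 0.8, 0.0                       # Choleski case: beta2 = 0
beta = np.array([[0.0, beta1], [beta2, 0.0]])
b = np.array([[0.7, 0.2], [0.2, 0.7]])        # b = [b11 c11; b21 c21] from the standard VAR

inv_IB = np.linalg.inv(np.eye(2) - beta)      # [I - beta]^(-1)

def e_star(ey, ew):
    # Map the primitive shocks e(t) = [ey, ew] into the standard-form errors e*(t)
    return inv_IB @ np.array([ey, ew])        # here: [ey + 0.8*ew, ew]

# The moving-average weight on e*(t-k) is b to the power k, so the weights of y(t) and w(t)
# on a unit ew shock dated t-1 and t-2 are:
print(b @ e_star(0.0, 1.0))                             # [b11*beta1 + c11, b21*beta1 + c21] = [0.76, 0.86]
print(np.linalg.matrix_power(b, 2) @ e_star(0.0, 1.0))  # weights on ew(t-2): approximately [0.70, 0.75]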