Lecture 10 – Autoregressive Processes
(Reference – Section 6.2, pp. 375–380, Hayashi)

The stochastic process $\{y_t\}$ is a first-order autoregressive process (i.e., AR(1) process) if
$$y_t = a + \rho y_{t-1} + \varepsilon_t,$$
where $\varepsilon_t$ is a white noise process.

Fact – If $|\rho| \neq 1$, then $\{y_t\}$ is a covariance stationary process. This follows because, under this condition, $\{y_t\}$ also has an MA(∞) form with absolutely summable coefficients.
1. Suppose $|\rho| < 1$. Recall:
$$y_t = \mu + \sum_{s=0}^{\infty} \rho^s \varepsilon_{t-s} = \mu + (1-\rho L)^{-1}\varepsilon_t$$
and so
$$(1-\rho L)y_t = (1-\rho L)\mu + \varepsilon_t,$$
or
$$y_t = a + \rho y_{t-1} + \varepsilon_t, \quad \text{where } a = (1-\rho)\mu$$
(since $Lc = c$ if $c$ is a constant).
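As a quick numerical check of this equivalence (an illustration, not from Hayashi), the Python sketch below simulates the AR(1) recursion and rebuilds the same value of $y_t$ from a truncated version of the MA(∞) sum; the values $a = 1$, $\rho = 0.7$, and the truncation length are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
a, rho, T = 1.0, 0.7, 600
mu = a / (1 - rho)                     # since a = (1 - rho) * mu

eps = rng.standard_normal(T)           # white noise draws
y = np.empty(T)
y[0] = mu                              # start the recursion at the mean
for t in range(1, T):                  # y_t = a + rho * y_{t-1} + eps_t
    y[t] = a + rho * y[t - 1] + eps[t]

# Truncated MA(inf) form: y_t ~ mu + sum_{s=0}^{S} rho^s * eps_{t-s}
S, t0 = 100, T - 1
ma_approx = mu + sum(rho**s * eps[t0 - s] for s in range(S + 1))
print(y[t0], ma_approx)                # agree up to a tiny truncation error
```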
2. Now suppose $|\rho| > 1$, which means that $|1/\rho| < 1$. Consider the process $\{y_t\}$ defined according to
$$y_t = \mu - \sum_{s=1}^{\infty} \rho^{-s}\varepsilon_{t+s} = \mu - \left(\sum_{s=1}^{\infty}\rho^{-s}L^{-s}\right)\varepsilon_t,$$
which is a one-sided MA(∞) in terms of future $\varepsilon$'s. This is a covariance stationary process because the sequence of coefficients $\{\rho^{-s}\}$ is absolutely summable. Next, note that
$$-\sum_{s=1}^{\infty}\rho^{-s}L^{-s} = (1-\rho L)^{-1}$$
since $(1-\rho L)(-\rho^{-1}L^{-1} - \rho^{-2}L^{-2} - \cdots) = 1$. So, in this case,
$$y_t = \mu + (1-\rho L)^{-1}\varepsilon_t, \quad \text{where } (1-\rho L)^{-1} = -\sum_{s=1}^{\infty}\rho^{-s}L^{-s},$$
and, as above, if we multiply both sides by $(1-\rho L)$ and rearrange, we obtain
$$y_t = a + \rho y_{t-1} + \varepsilon_t, \quad \text{where } a = (1-\rho)\mu.$$
So, an AR(1) process $y_t$ is a covariance stationary process in terms of current and past values of the white noise process $\varepsilon_t$ if $|\rho| < 1$. It is a covariance stationary process in terms of future values of the white noise process $\varepsilon_t$ if $|\rho| > 1$. In economics, we are generally only interested in AR(1) processes that evolve forward from the past, so we usually rule out backward-evolving AR(1) processes with $|\rho| > 1$. We refer to the condition $|\rho| < 1$ as the stationarity condition.
3. What if $\rho = 1$ (or $-1$)? In this special case, which is called the unit root case, $y_t$ is a nonstationary process – it does not have a moving average form with absolutely summable coefficients. Repeated substitution gives
$$y_t = a + y_{t-1} + \varepsilon_t = a + (a + y_{t-2} + \varepsilon_{t-1}) + \varepsilon_t = \cdots = ja + y_{t-j} + (\varepsilon_t + \varepsilon_{t-1} + \cdots + \varepsilon_{t-j+1})$$
and so on. The effects of past $\varepsilon$'s on current $y$ do not decrease: $\partial y_t/\partial\varepsilon_{t-s} = 1$ for all $s$! The unit root case is very important in applied and theoretical time series econometrics. For example, when we first difference a time series, like log(real GDP), to make it stationary, we are assuming that the time series is a unit root process. (More on this in Econ 674.)
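As a simple illustration of the unit root case (not from the text), the sketch below simulates many driftless random-walk paths; the variance of the level grows linearly in $t$, while the first difference is just the stationary white noise:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, T = 5000, 400
eps = rng.standard_normal((n_paths, T))
y = np.cumsum(eps, axis=1)              # random walk: y_t = y_{t-1} + eps_t, y_0 = 0

# Var(y_t) = t * sigma^2 grows without bound (nonstationary) ...
print(y[:, 49].var(), y[:, 199].var(), y[:, 399].var())   # roughly 50, 200, 400

# ... while first differencing recovers the stationary white noise
dy = np.diff(y, axis=1)
print(dy.var())                          # roughly 1 at every date
```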
Consider an AR(1) that satisfies the stationarity condition. It is covariance stationary, and its mean, variance, and autocovariance function can be determined from its MA(∞) form,
$$y_t = \mu + \sum_{s=0}^{\infty}\rho^s\varepsilon_{t-s}:$$
$$E(y_t) = \mu \quad \text{and} \quad \mathrm{Var}(y_t) = \sigma^2/(1-\rho^2), \ \text{where } \sigma^2 = \mathrm{Var}(\varepsilon_t),$$
and so on.

Another approach – start from $y_t = a + \rho y_{t-1} + \varepsilon_t$. Then
$$E(y_t) = a + \rho E(y_{t-1}) \quad \text{since } E(a) = a \text{ and } E(\varepsilon_t) = 0,$$
$$E(y_t) = a + \rho E(y_t) \quad \text{since } y_t \text{ is stationary},$$
$$E(y_t) = a/(1-\rho) = \mu.$$
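A long simulated path (an illustration, not from the text) confirms both moments numerically; the values $a = 2$, $\rho = 0.5$, $\sigma^2 = 1$ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
a, rho, T = 2.0, 0.5, 200_000
y = np.empty(T)
y[0] = a / (1 - rho)                    # start at the mean
for t in range(1, T):
    y[t] = a + rho * y[t - 1] + rng.standard_normal()

print(y.mean(), a / (1 - rho))          # sample mean vs. mu = a/(1 - rho) = 4
print(y.var(), 1.0 / (1 - rho**2))      # sample variance vs. sigma^2/(1 - rho^2) = 4/3
```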
WLOG, assume $a = 0$ so that $E(y_t) = 0$. (This won't affect the solutions for the variances or autocovariances, and it simplifies their derivations.)
$$\gamma_0 = E(y_t^2) = E[(\rho y_{t-1} + \varepsilon_t)^2] = \rho^2 E(y_{t-1}^2) + E(\varepsilon_t^2) + 2\rho E(y_{t-1}\varepsilon_t) = \rho^2 E(y_t^2) + \sigma^2 + 0 \;\Rightarrow\; \gamma_0 = \sigma^2/(1-\rho^2)$$
$$\gamma_1 = E(y_t y_{t-1}) = E[(\rho y_{t-1} + \varepsilon_t)y_{t-1}] = \rho\gamma_0$$
$$\gamma_2 = E(y_t y_{t-2}) = E[(\rho y_{t-1} + \varepsilon_t)y_{t-2}] = \rho\gamma_1 = \rho^2\gamma_0$$
and so on. So, for the stationary AR(1) process,
$$\gamma_0 = \sigma^2/(1-\rho^2) \quad \text{and} \quad \gamma_j = \rho^j\gamma_0, \ j = 0, 1, 2, \ldots$$
$$\rho_j = \gamma_j/\gamma_0 = \rho^j, \ j = 0, 1, 2, \ldots$$
What is the shape of the autocorrelogram for the AR(1)? How does it differ from the shape of, say, the MA(1) or MA(2)?
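To make the comparison concrete (in brief: the AR(1) autocorrelations decay geometrically and never cut off, whereas an MA(q) autocorrelogram is exactly zero beyond lag q), here is a small sketch tabulating the two; $\rho = 0.8$ and the MA(1) coefficient $\theta = 0.8$ are illustrative values:

```python
import numpy as np

rho, theta, J = 0.8, 0.8, 8
ar1_acf = rho ** np.arange(J + 1)        # AR(1): rho_j = rho^j, geometric decay
ma1_acf = np.zeros(J + 1)
ma1_acf[0] = 1.0
ma1_acf[1] = theta / (1 + theta**2)      # MA(1): rho_1 = theta/(1+theta^2), zero beyond lag 1
for j in range(J + 1):
    print(f"lag {j}: AR(1) {ar1_acf[j]:.3f}   MA(1) {ma1_acf[j]:.3f}")
```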
Forecasting with the AR(1) process – Suppose $y_t = a + \rho y_{t-1} + \varepsilon_t$. Then
$$E(y_t \mid y_{t-1}, y_{t-2}, \ldots, \varepsilon_{t-1}, \varepsilon_{t-2}, \ldots) = E(y_t \mid \varepsilon_{t-1}, \varepsilon_{t-2}, \ldots) = E(y_t \mid y_{t-1}, y_{t-2}, \ldots) = a + \rho y_{t-1}.$$
Note that $\varepsilon_t = y_t - E(y_t \mid y_{t-1}, y_{t-2}, \ldots)$ and so is called the innovation in $y_t$. Similarly,
$$E(y_{t+1} \mid y_{t-1}, y_{t-2}, \ldots, \varepsilon_{t-1}, \varepsilon_{t-2}, \ldots) = E(y_{t+1} \mid \varepsilon_{t-1}, \varepsilon_{t-2}, \ldots) = E(y_{t+1} \mid y_{t-1}, y_{t-2}, \ldots) = (1+\rho)a + \rho^2 y_{t-1},$$
which follows from $y_{t+1} = a + \rho y_t + \varepsilon_{t+1}$:
$$E(y_{t+1} \mid y_{t-1}, y_{t-2}, \ldots) = a + \rho E(y_t \mid y_{t-1}, y_{t-2}, \ldots) + E(\varepsilon_{t+1} \mid y_{t-1}, y_{t-2}, \ldots) = a + \rho(a + \rho y_{t-1}) + 0.$$
More generally –
$$E(y_{t+s} \mid y_t, y_{t-1}, \ldots) = y_{t+s} \quad \text{for } s < 0$$
$$E(y_{t+s} \mid y_t, y_{t-1}, \ldots) = (1 + \rho + \cdots + \rho^{s-1})a + \rho^s y_t \quad \text{for } s > 0$$
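This closed-form s-step forecast translates directly into code; a minimal sketch with illustrative values:

```python
def ar1_forecast(a: float, rho: float, y_t: float, s: int) -> float:
    """E(y_{t+s} | y_t, y_{t-1}, ...) = (1 + rho + ... + rho^{s-1}) * a + rho^s * y_t."""
    return sum(rho**i for i in range(s)) * a + rho**s * y_t

a, rho, y_t = 1.0, 0.7, 5.0
for s in (1, 2, 5, 20):
    # forecasts revert toward mu = a/(1 - rho), about 3.33, as the horizon s grows
    print(s, round(ar1_forecast(a, rho, y_t, s), 4))
```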
The AR(p) Model – The stochastic process $\{y_t\}$ is a pth-order autoregressive process (i.e., AR(p) process) if
$$y_t = a + \varphi_1 y_{t-1} + \cdots + \varphi_p y_{t-p} + \varepsilon_t,$$
where $\varepsilon_t$ is a white noise process, or, writing it in terms of the lag operator,
$$\varphi(L)y_t = a + \varepsilon_t, \quad \text{where } \varphi(L) = 1 - \varphi_1 L - \cdots - \varphi_p L^p.$$
The stationarity condition for the AR(p) model is that the roots of the polynomial $\varphi(z)$ all lie outside the unit circle (or, equivalently, the roots of $z^p - \varphi_1 z^{p-1} - \cdots - \varphi_p$ all lie inside the unit circle).
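The stationarity condition is easy to check numerically: find the roots of $z^p - \varphi_1 z^{p-1} - \cdots - \varphi_p$ and verify that they all lie inside the unit circle. A minimal sketch (the coefficient values are illustrative):

```python
import numpy as np

def is_stationary(phi) -> bool:
    """AR(p) stationarity check: the roots of z^p - phi_1 z^{p-1} - ... - phi_p
    must all lie strictly inside the unit circle."""
    phi = np.asarray(phi, dtype=float)
    roots = np.roots(np.concatenate(([1.0], -phi)))
    return bool(np.all(np.abs(roots) < 1.0))

print(is_stationary([0.5]))         # AR(1) with rho = 0.5           -> True
print(is_stationary([1.0]))         # unit root                      -> False
print(is_stationary([0.6, 0.3]))    # AR(2), roots ~ 0.92 and -0.32  -> True
```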
If the stationarity condition holds, then $\{y_t\}$ is a covariance stationary process with a one-sided MA(∞) representation –
$$y_t = \mu + \sum_{s=0}^{\infty}\psi_s\varepsilon_{t-s},$$
where
$$\mu = \varphi(1)^{-1}a = a/(1-\varphi_1-\cdots-\varphi_p), \quad \psi(L) = \varphi(L)^{-1},$$
and the coefficients of $\psi(L)$ are absolutely summable. Note that the mean of the stationary AR(p) process is $\mu$. The variance of the process is $\sigma^2\sum_{s=0}^{\infty}\psi_s^2$.
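The $\psi$ coefficients can be computed recursively from the identity $\varphi(L)\psi(L) = 1$: matching coefficients gives $\psi_0 = 1$ and $\psi_s = \varphi_1\psi_{s-1} + \cdots + \varphi_p\psi_{s-p}$ for $s \geq 1$ (with $\psi_j = 0$ for $j < 0$). A minimal sketch, using an illustrative AR(2) and $\sigma^2 = 1$:

```python
import numpy as np

def psi_weights(phi, n: int):
    """First n+1 MA(inf) coefficients of a stationary AR(p):
    psi_0 = 1, psi_s = phi_1*psi_{s-1} + ... + phi_p*psi_{s-p}."""
    phi = np.asarray(phi, dtype=float)
    psi = np.zeros(n + 1)
    psi[0] = 1.0
    for s in range(1, n + 1):
        for k in range(1, min(s, len(phi)) + 1):
            psi[s] += phi[k - 1] * psi[s - k]
    return psi

psi = psi_weights([0.6, 0.3], 200)       # illustrative AR(2)
print(psi[:4])                           # 1.0, 0.6, 0.66, 0.576
print(np.sum(psi**2))                    # Var(y_t)/sigma^2 = sum of psi_s^2 (truncated)
```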
The autocovariance function (and, therefore, the autocorrelation function) can be derived from the Yule–Walker equations (which are described for the AR(1) case in Exercise 5, pp. 433–444, Hayashi). The details of computing and characterizing the autocovariance function for the general AR(p) model are not of great importance for our purposes. We note, however, that
· the autocovariance function can be calculated from the AR coefficients (via, e.g., the Yule–Walker equations; see the sketch after this list)
· the autocovariance function for an AR(p) is an infinite sequence $\{\gamma_j\}$
· the sequence of autocovariances is absolutely summable
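As one way to make the first point concrete, the sketch below solves the Yule–Walker equations $\gamma_j = \sum_{k=1}^{p}\varphi_k\gamma_{|j-k|} + \sigma^2\,\mathbf{1}\{j=0\}$, $j = 0, \ldots, p$, as a small linear system and then extends the autocovariances by recursion; the coefficient values are illustrative:

```python
import numpy as np

def ar_autocov(phi, sigma2, n_lags):
    """Autocovariances gamma_0, ..., gamma_{n_lags} of a stationary AR(p),
    computed from the Yule-Walker equations."""
    phi = np.asarray(phi, dtype=float)
    p = len(phi)
    # Linear system for gamma_0, ..., gamma_p:
    #   gamma_j - sum_k phi_k * gamma_{|j-k|} = sigma2 * (j == 0)
    A = np.eye(p + 1)
    for j in range(p + 1):
        for k in range(1, p + 1):
            A[j, abs(j - k)] -= phi[k - 1]
    b = np.zeros(p + 1)
    b[0] = sigma2
    gamma = list(np.linalg.solve(A, b))
    for j in range(p + 1, n_lags + 1):   # gamma_j = sum_k phi_k * gamma_{j-k} for j > p
        gamma.append(sum(phi[k - 1] * gamma[j - k] for k in range(1, p + 1)))
    return np.array(gamma)

print(ar_autocov([0.5], 1.0, 4))   # AR(1) check: gamma_0 = 4/3, gamma_j = 0.5^j * gamma_0
```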
Forecasting using the AR(p) model – Suppose that $y_t$ is an AR(p) process,
$$y_t = a + \varphi_1 y_{t-1} + \cdots + \varphi_p y_{t-p} + \varepsilon_t.$$
Then
$$\hat{y}_{t|t-1} = E(y_t \mid y_{t-1}, y_{t-2}, \ldots, \varepsilon_{t-1}, \varepsilon_{t-2}, \ldots) = E(y_t \mid \varepsilon_{t-1}, \varepsilon_{t-2}, \ldots) = E(y_t \mid y_{t-1}, y_{t-2}, \ldots) = a + \varphi_1 y_{t-1} + \cdots + \varphi_p y_{t-p}.$$
Thus, $\varepsilon_t = y_t - \hat{y}_{t|t-1}$ = the innovation in $y_t$. Similarly –
$$\hat{y}_{t+1|t-1} = a + \varphi_1\hat{y}_{t|t-1} + \varphi_2 y_{t-1} + \cdots + \varphi_p y_{t-p+1}$$
$$\hat{y}_{t+2|t-1} = a + \varphi_1\hat{y}_{t+1|t-1} + \varphi_2\hat{y}_{t|t-1} + \cdots + \varphi_p y_{t-p+2}$$
and so on.
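These recursions are the standard scheme of plugging each forecast in for the corresponding unknown future value; a minimal sketch (the AR(2) values are illustrative):

```python
import numpy as np

def ar_forecasts(a, phi, history, n_ahead):
    """Recursive AR(p) point forecasts: future values on the right-hand side are
    replaced by their own forecasts (innovations have conditional mean zero)."""
    phi = np.asarray(phi, dtype=float)
    p = len(phi)
    path = list(history)                    # observed data, oldest first
    forecasts = []
    for _ in range(n_ahead):
        recent = path[-p:][::-1]            # y_t, y_{t-1}, ..., y_{t-p+1}
        yhat = a + float(phi @ np.asarray(recent))
        forecasts.append(yhat)
        path.append(yhat)
    return forecasts

# Illustrative AR(2): forecasts approach mu = a/(1 - phi_1 - phi_2) = 10
print(ar_forecasts(a=1.0, phi=[0.6, 0.3], history=[4.0, 5.0], n_ahead=8))
```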
In econometrics, if we have a time series {yt} that we believe is a stationary time series, we are typically willing to assume that it has an AR(p) form, for sufficiently large p (and, therefore, an MA(∞) form as well). From the point of view of the econometric theorist, it turns out that the MA(∞) form of the model is often the more useful representation of the process. However, from the point of view of the applied econometrician, the AR(p) form is generally more useful, because it is essentially just a linear regression model with serially uncorrelated and homoskedastic disturbances. The practical (and related) problems are 1) how to select p, 2) how to estimate the parameters of the AR(p) model for a given p, and 3) how to test restrictions on the model's coefficients. We'll come back to these issues shortly.
A Note on ARMA(p,q) Processes
In addition to the pure AR and pure MA processes, we can define the ARMA(p,q) process according to –
$$y_t = a + \varphi_1 y_{t-1} + \cdots + \varphi_p y_{t-p} + \theta_0\varepsilon_t + \theta_1\varepsilon_{t-1} + \cdots + \theta_q\varepsilon_{t-q}.$$
Sometimes in practice it turns out that neither a low-order AR(p) model nor a low-order MA(q) can account for all of the serial correlation in $y_t$. In these cases, however, an ARMA(p,q) with small p+q may work well and will have the advantage of explaining the serial correlation pattern in terms of a relatively small number of unknown parameters.
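As an aside on practice (not from the notes): a low-order ARMA is easy to fit with standard software. For example, statsmodels' ARIMA class with order=(p, 0, q) estimates an ARMA(p, q) with a constant; the sketch below fits an ARMA(1,1) to data simulated from one (all parameter values are illustrative):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA  # assumes statsmodels is installed

rng = np.random.default_rng(3)
T = 500
eps = rng.standard_normal(T + 1)
y = np.empty(T)
prev = 0.0
for t in range(T):                      # ARMA(1,1): y_t = 0.5*y_{t-1} + eps_t + 0.4*eps_{t-1}
    y[t] = 0.5 * prev + eps[t + 1] + 0.4 * eps[t]
    prev = y[t]

res = ARIMA(y, order=(1, 0, 1)).fit()   # ARMA(1,1) with a constant term
print(res.params)                       # const, AR(1), MA(1), sigma^2 estimates
```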