Linear Time Series Models

A (discrete) time series, i.e., a (discrete) stochastic process, is a sequence of random variables (or vectors) indexed by the integers:

  {y_t}_{t=0}^∞ :  y_0, y_1, ...
  {y_t}_{t=1}^∞ :  y_1, y_2, ...
  {y_t}_{t=-∞}^∞ :  ..., y_{-1}, y_0, y_1, ...

The objective of time series analysis is to infer the characteristics of the stochastic process from a data sample (i.e., a partial realization of the process) and any additional information we have about the process.

The characteristics of a sequence of random variables?
– Finite-dimensional joint distributions (fidi's)
– Moments of the fidi's (means, variances, covariances, ...)

In order to be able to use data to draw inferences about the stochastic process that generated these data, some characteristics of the stochastic process have to remain stable over time: e.g., the mean, the variance, ...

A covariance stationary stochastic process is a stochastic process with the following properties:
– E(y_t) = μ for all t
– Var(y_t) = σ² for all t
– Cov(y_t, y_{t-s}) = γ_s for all t, s

A strictly stationary stochastic process is a stochastic process whose fidi's are time invariant:

  Prob(y_{t_1} ≤ α_1, ..., y_{t_n} ≤ α_n) = Prob(y_{t_1+m} ≤ α_1, ..., y_{t_n+m} ≤ α_n)

for all integers m, n, t_1, ..., t_n and real numbers α_1, ..., α_n.

Remarks
– Although covariance stationarity and strict stationarity are not precisely the same condition and neither one implies the other, for most practical purposes we can think of them interchangeably and, in particular, as implying that the sequence of r.v.'s has a common mean, a common variance, and a stable covariance structure.
– In theoretical time series work, strict stationarity turns out to be the more useful concept; in applied work, covariance stationarity tends to be more useful.
– In evaluating whether a data sample is drawn from a stationary process, we tend to look at whether the mean and variance appear to be fixed over time and whether the covariance structure appears stable over time. The most obvious failure of stationarity is a time trend in the mean.

Although most time series models assume stationarity, whereas many (most?) economic time series appear to be nonstationary, there are often simple transformations of these series that can plausibly be assumed to be stationary: log transformations and, possibly, linear detrending or first-differencing.

If y_t is a stationary process then, according to the Wold Decomposition Theorem, it has a moving average (MA(∞)) representation:

  y_t = Σ_{i=0}^∞ c_i ε_{t-i}

where
– the c_i's are square summable,
– the ε_t's are white noise,
– the ε_t's are the innovations in the y's.

If y_t has an MA(∞) form and that MA is invertible, then y_t has an autoregressive representation, which in applied work is approximated by a finite-order autoregression (AR(p)):

  y_t = a_1 y_{t-1} + ... + a_p y_{t-p} + ε_t

where
– the ε's are the innovations in the y's,
– the parameters a_1, ..., a_p satisfy the stationarity condition, i.e., the roots of the characteristic equation 1 − a_1 z − ... − a_p z^p = 0 are strictly greater than one in modulus.

Linear autoregressions (or, linear stochastic difference equations) are straightforward to estimate and form a very useful model for univariate forecasting; see the sketch below.
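To make the stationarity condition and the estimation step concrete, here is a minimal Python sketch (not part of the original notes). It checks the roots of the characteristic equation for an illustrative AR(2), simulates the process, and fits it with statsmodels' AutoReg; the coefficient values, sample size, and seed are assumptions chosen purely for illustration.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Illustrative AR(2) coefficients (an assumption, not from the notes).
a1, a2 = 0.6, 0.2

# Stationarity condition: the roots of 1 - a1*z - a2*z^2 = 0 must exceed
# one in modulus.  np.roots() takes coefficients in descending powers of z.
roots = np.roots([-a2, -a1, 1.0])
print("roots:", roots, "| stationary:", all(abs(r) > 1 for r in roots))

# Simulate y_t = a1*y_{t-1} + a2*y_{t-2} + eps_t with white-noise innovations.
rng = np.random.default_rng(0)
T = 500
eps = rng.standard_normal(T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + eps[t]

# Fit an AR(2) (no constant, matching the simulated model) and forecast.
res = AutoReg(y, lags=2, trend="n").fit()
print("estimated coefficients:", res.params)
print("5-step-ahead forecast:", res.forecast(5))
```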
While the univariate autoregression clearly accounts for the persistence in the (conditional mean of the) y's, it does not account for the interdependence between the y's and other time series. A simple and natural extension of the univariate autoregressive model is the vector autoregressive (VAR) model:

  Y_t = A_1 Y_{t-1} + ... + A_p Y_{t-p} + ε_t

where
– Y_t is an n-dimensional jointly stationary process,
– A_1, ..., A_p are n×n matrices satisfying the stationarity condition; that is, the solutions to the determinantal equation det(I − A_1 z − ... − A_p z^p) = 0 exceed one in modulus,
– ε_t is n-dimensional white noise with covariance matrix Σ_εε and is the innovation in Y_t.
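In the same spirit, a minimal sketch of the VAR case, assuming a bivariate VAR(1) with an illustrative coefficient matrix. With one lag, the determinantal-equation condition is equivalent to all eigenvalues of A_1 lying strictly inside the unit circle, which is what the check below uses; the system is then fit by equation-by-equation least squares with statsmodels' VAR.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# Illustrative coefficient matrix for Y_t = A1 @ Y_{t-1} + eps_t
# (an assumption, not from the notes).
A1 = np.array([[0.5, 0.1],
               [0.2, 0.4]])

# Stationarity: the solutions of det(I - A1*z) = 0 exceed one in modulus
# iff every eigenvalue of A1 lies inside the unit circle.
print("eigenvalues of A1:", np.linalg.eigvals(A1))

# Simulate the bivariate system with white-noise innovations.
rng = np.random.default_rng(1)
T = 500
Y = np.zeros((T, 2))
eps = rng.standard_normal((T, 2))
for t in range(1, T):
    Y[t] = A1 @ Y[t - 1] + eps[t]

# Fit a VAR(1) (no constant, matching the simulated model) and forecast.
res = VAR(Y).fit(maxlags=1, trend="n")
print("estimated A1:\n", res.coefs[0])
print("5-step-ahead forecast:\n", res.forecast(Y[-1:], steps=5))
```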