Lecture 24 – Unit Root Tests, II

The initial DF unit root tests assumed that under the unit root null hypothesis, the first differences of the series are serially uncorrelated. Since the first differences of most macroeconomic time series are serially correlated, these tests were of limited value in empirical macroeconomics. This problem was addressed in the development of the Augmented Dickey-Fuller test (ADF test) and the Phillips-Perron test (PP test).

Suppose that yt has the AR(p) form

    yt = a1yt-1 + … + apyt-p + εt,   εt ~ iid (0, σ2).

We assume that either

    i) all of the zeroes of a(z) = 1 - a1z - … - apzp are greater than one in modulus (and, therefore, yt ~ I(0)), or
    ii) a(1) = 0 and all of the other zeroes of a(z) are greater than one in modulus (and, therefore, yt ~ I(1)).

We can rewrite this model as

    yt = ρyt-1 + δ1Δyt-1 + … + δp-1Δyt-p+1 + εt

where

    ρ = a1 + … + ap
    δ1 = -(a2 + … + ap)
    δ2 = -(a3 + … + ap)
    …
    δp-1 = -ap

Then the unit root null hypothesis is

    H0: ρ = 1.

The ADF Test – Regress yt on yt-1, Δyt-1, …, Δyt-p+1 and compute the t-statistic

    τ = (ρ̂ - 1)/se(ρ̂).

(Or, regress Δyt on yt-1, Δyt-1, …, Δyt-p+1 and compute the usual t-statistic on the yt-1 coefficient, which is ρ - 1; the two regressions give the same τ.) Use the same DF distribution for τ that you would use for the AR(1) case.

We can allow for a non-zero mean under the alternative by adding an intercept to the model, regressing yt (or Δyt) on 1, yt-1, Δyt-1, …, Δyt-p+1, computing the t-statistic (ρ̂ - 1)/se(ρ̂) (or the t-statistic on yt-1 from the Δyt regression), and comparing its value to the percentiles of the DF τμ distribution.

Similarly, we can allow for a trend under the null and alternative by adding an intercept and a trend to the model, regressing yt (or Δyt) on 1, t, yt-1, Δyt-1, …, Δyt-p+1, computing the t-statistic (ρ̂ - 1)/se(ρ̂) (or the t-statistic on yt-1 from the Δyt regression), and comparing its value to the percentiles of the DF τt distribution.

Problem – How to select p?

Solutions –
    model selection criteria (AIC, SIC)
    formal testing (e.g., sequential t-tests: the δi-hats are asymptotically normal under the null)
    diagnostic checking for residual "whiteness" (L-B Q test, …)

Application – Nelson and Plosser (JME, 1982)

The ADF test relies on a parametric transformation of the model that eliminates the serial correlation in the error term without affecting the asymptotic distributions of the various τ statistics. Phillips and Perron (Biometrika, 1988) instead proposed nonparametric transformations of the τ statistics from the original DF regressions such that, under the unit root null, the transformed statistics (the "z" statistics) have DF distributions.

So, for example, suppose our model is

    yt = ρyt-1 + εt,   εt ~ I(0) with mean 0.

The PP procedure:
    Regress yt on yt-1
    Compute τ
    "Modify" τ to get z
    Under H0, z's asymptotic distribution is the DF distribution for τ.

If our model is

    yt = α + ρyt-1 + εt,   εt ~ I(0) with mean 0

or

    yt = α + βt + ρyt-1 + εt,   εt ~ I(0) with mean 0,

the PP procedure is basically the same as above, i.e.,
    Regress yt on 1, (t), yt-1
    Compute τμ (τt)
    Modify τμ to get zμ (modify τt to get zt)
    Under H0, zμ's asymptotic distribution is the DF distribution for τμ (and zt's asymptotic distribution is the DF distribution for τt).
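Before comparing the two tests, here is a small illustration (my own sketch, not part of the original notes) of the ADF τμ regression described above: regress Δyt on an intercept, yt-1, and p-1 lagged differences, then form the usual t-ratio on the yt-1 coefficient. It assumes numpy and statsmodels are available; the helper name adf_tau_mu and the simulated random-walk data are made up for the example.

import numpy as np
import statsmodels.api as sm

def adf_tau_mu(y, p):
    """ADF tau_mu statistic using p-1 lagged differences (illustrative helper)."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)                  # Delta y_t
    k = p - 1                        # number of lagged differences in the regression
    lhs = dy[k:]                     # Delta y_t for t = k+1, ..., T
    ylag = y[k:-1]                   # y_{t-1}
    lagged = [dy[k - j:len(dy) - j] for j in range(1, k + 1)]   # Delta y_{t-j}, j = 1, ..., k
    X = sm.add_constant(np.column_stack([ylag] + lagged))
    res = sm.OLS(lhs, X).fit()
    # The coefficient on y_{t-1} is rho - 1; its usual t-ratio is tau_mu.
    return res.params[1] / res.bse[1]

# Example on a simulated random walk (so the unit root null is true):
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(500))
print(adf_tau_mu(y, p=4))

The statistic is compared with the DF τμ critical values (roughly -2.86 at the 5% level in large samples), not with the usual normal or t tables.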
ADF vs. PP

The PP test does not require us to specify the form of the serial correlation of Δyt under the null. In addition, the PP test does not require that the ε's be conditionally homoskedastic (an implicit assumption in the ADF test). If we apply the ADF test and have underspecified p, the AR order, the test will be missized. If we apply the ADF test and overspecify p, the test's power will suffer. These problems are avoided in the PP test, but if we can correctly specify p, the PP test will be less powerful than the ADF test. Also, the PP test requires the selection of a "bandwidth" parameter (as part of the construction of the Newey-West covariance estimator) that creates finite-sample problems analogous to those associated with the lag length selection issue in applying the ADF test.
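As a footnote to the bandwidth point (again my own sketch, not part of the original notes): the PP correction requires an estimate of the long-run variance of the errors, and the Bartlett-kernel (Newey-West) estimator below shows exactly where the bandwidth parameter enters. The function name newey_west_lrv and the rule of thumb mentioned in the final comment are illustrative assumptions.

import numpy as np

def newey_west_lrv(u, bandwidth):
    """Bartlett-kernel (Newey-West) estimate of the long-run variance of u."""
    u = np.asarray(u, dtype=float)
    u = u - u.mean()
    T = len(u)
    lrv = u @ u / T                              # gamma_0, the sample variance
    for j in range(1, bandwidth + 1):
        gamma_j = u[j:] @ u[:-j] / T             # j-th sample autocovariance
        lrv += 2.0 * (1.0 - j / (bandwidth + 1.0)) * gamma_j
    return lrv

# In the PP test, u would be the residuals from the DF regression of y_t on
# (1, t,) y_{t-1}; a common rule of thumb sets the bandwidth near 4*(T/100)**(2/9).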