STAT 497
LECTURE NOTE 9
DIAGNOSTIC CHECKS
• After identifying and estimating a time series
model, the goodness-of-fit of the model and the
validity of its assumptions should be checked.
If the model fits the data well, then we can
construct the ARIMA forecasts.
1. NORMALITY OF ERRORS
• Check the histogram of the standardized
residuals, $\hat{a}_t / \hat{\sigma}_a$.
• Draw a normal Q-Q plot of the standardized
residuals (the points should fall along the 45° line).
• Look at Tukey's simple five-number summary, plus
the skewness (should be 0 for normal) and the kurtosis
(should be 3 for normal), or the excess kurtosis
(should be 0 for normal).
• Jarque-Bera Normality Test: skewness and
kurtosis are used to construct this test
statistic. Jarque and Bera (1981) test whether the
coefficients of skewness and excess kurtosis
are jointly 0.
$$\lambda_1 = \text{skewness} = \frac{E\left(a_t^3\right)}{\left[E\left(a_t^2\right)\right]^{3/2}}, \qquad
\lambda_2 = \text{kurtosis} = \frac{E\left(a_t^4\right)}{\left[E\left(a_t^2\right)\right]^{2}}$$
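• As a quick illustration, these moments can be estimated directly from a residual series in R. A minimal sketch, assuming res holds the residuals of a fitted model (the variable name and the stand-in data are ours, not from the notes):

# Sample skewness and (excess) kurtosis of a residual series.
res <- rnorm(200)                # stand-in data; replace with model residuals
d  <- res - mean(res)
m2 <- mean(d^2)                  # second central moment
m3 <- mean(d^3)                  # third central moment
m4 <- mean(d^4)                  # fourth central moment
skewness <- m3 / m2^(3/2)        # lambda_1: near 0 under normality
kurtosis <- m4 / m2^2            # lambda_2: near 3 under normality
c(skewness = skewness, excess.kurtosis = kurtosis - 3)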
• The JB test statistic:

$$JB = \frac{n}{6}\left(\hat\lambda_1^2 + \frac{\left(\hat\lambda_2 - 3\right)^2}{4}\right) \overset{\text{approx.}}{\sim} \chi^2_2 \quad \text{under } H_0,$$

where

$$\hat\lambda_1 = \frac{\frac{1}{n}\sum_{t=1}^{n}\left(Y_t - \bar Y\right)^3}{\left[\frac{1}{n}\sum_{t=1}^{n}\left(Y_t - \bar Y\right)^2\right]^{3/2}}, \qquad
\hat\lambda_2 = \frac{\frac{1}{n}\sum_{t=1}^{n}\left(Y_t - \bar Y\right)^4}{\left[\frac{1}{n}\sum_{t=1}^{n}\left(Y_t - \bar Y\right)^2\right]^{2}}.$$
• If $JB > \chi^2_{\alpha,2}$, reject the null hypothesis that the
residuals are normally distributed.
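• A minimal sketch of this computation in R (variable names and stand-in data ours); the jarque.bera.test() function from the tseries package, used in the beer example below, implements the same statistic:

# Jarque-Bera statistic computed from its definition.
res <- rnorm(200)                          # stand-in residuals
n   <- length(res)
d   <- res - mean(res)
lambda1 <- mean(d^3) / mean(d^2)^(3/2)     # sample skewness
lambda2 <- mean(d^4) / mean(d^2)^2         # sample kurtosis
JB <- (n / 6) * (lambda1^2 + (lambda2 - 3)^2 / 4)
c(JB = JB, p.value = 1 - pchisq(JB, df = 2))   # compare with chi-square(2)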
• The chi-square approximation, however, is overly
sensitive for small samples, rejecting the null
hypothesis often when it is in fact true. Furthermore,
the distribution of p-values departs from a uniform
distribution and becomes a right-skewed unimodal
distribution, especially for small p-values. This leads to
a large Type I error rate. The table below shows some
p-values approximated by a chi-square distribution that
differ from their true alpha levels for very small
samples.
• You can also use the Shapiro-Wilk test.
Calculated p-value equivalents to true alpha levels
at given sample sizes:

True α level   n = 20    n = 30    n = 50    n = 70    n = 100
0.1            0.307     0.252     0.201     0.183     0.1560
0.05           0.1461    0.109     0.079     0.067     0.062
0.025          0.051     0.0303    0.020     0.016     0.0168
0.01           0.0064    0.0033    0.0015    0.0012    0.002
2. DETECTION OF SERIAL CORRELATION
• In OLS regression with time series data, the residuals
are often found to be serially correlated with their own
lagged values.
• Serial correlation means
– OLS is no longer an efficient linear estimator.
– Standard errors are incorrect, and typically
understated when the serial correlation is positive.
– OLS estimates are biased and inconsistent if a
lagged dependent variable is used as a regressor.
• The Durbin-Watson test is designed for ordinary
regression with independent variables. It is not
appropriate for time series models with
lagged dependent variables, and it only tests for
AR(1) errors. There should be a constant term
and deterministic independent variables in
the model.
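• For an ordinary regression of this kind, the test is available as dwtest() in the lmtest package. A minimal sketch with simulated data (all names and numbers ours):

# Durbin-Watson test for AR(1) errors in a regression with a
# constant and a deterministic regressor.
library(lmtest)                  # provides dwtest()
set.seed(1)
x <- 1:100                       # deterministic regressor
e <- as.numeric(arima.sim(list(ar = 0.6), n = 100))  # AR(1) errors
y <- 2 + 0.5 * x + e
dwtest(y ~ x)                    # DW near 2 suggests no first-order correlation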
• The Serial Correlation Lagrange Multiplier
(Breusch-Godfrey) Test is valid in the presence
of lagged dependent variables. It tests for
AR(r) errors. Let $\hat{a}_t$ denote the residual at time t,
and fit the auxiliary linear model

$$\hat{a}_t = \rho_1 \hat{a}_{t-1} + \cdots + \rho_r \hat{a}_{t-r} + u_t, \qquad u_t \sim N\left(0, \sigma_u^2\right).$$
• The test hypotheses:

$$H_0: \rho_1 = \rho_2 = \cdots = \rho_r = 0 \quad \text{(no serial autocorrelation up to the } r\text{-th order)}$$
$$H_1: \text{at least one } \rho_i \neq 0.$$

• Test statistic:

$$(n - r)\, R^2 \sim \chi^2_r,$$

where $R^2$ is obtained from the auxiliary regression.
• Determination of r: no obvious answer exists.
In empirical studies,
– for AR or ARMA models: r = p + 1 lags;
– for seasonal models: r = s.
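• A minimal sketch of the auxiliary regression and the (n − r)R² statistic, assuming res holds the residuals of the fitted model (names and stand-in data ours):

# Breusch-Godfrey-type LM test computed by hand from model residuals.
set.seed(1)
res <- as.numeric(arima.sim(list(ar = 0.5), n = 200))  # stand-in residuals
r   <- 4                       # lag order to test
n   <- length(res)
lags <- sapply(1:r, function(k) c(rep(NA, k), res[1:(n - k)]))
aux  <- lm(res ~ lags)         # auxiliary regression on own lags
LM   <- (n - r) * summary(aux)$r.squared
c(LM = LM, p.value = 1 - pchisq(LM, df = r))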
• Ljung-Box (Modified Box-Pierce) or
Portmanteau Lack-of-Fit Test: Box and Pierce
(1970) developed a test to check the
autocorrelation structure of the residuals;
it was later modified by Ljung and Box.
• The null hypothesis to be tested:

$$H_0: \rho_1 = \rho_2 = \cdots = \rho_K = 0.$$
• The test statistic:

$$Q = n(n+2) \sum_{k=1}^{K} \frac{\hat\rho_k^2}{n-k}$$

where K = the maximum lag length,
n = the number of observations, and
$\hat\rho_k$ = the sample ACF of the residuals at lag k.
• If the correct model is estimated,

$$Q \overset{\text{approx.}}{\sim} \chi^2_{K-m}, \qquad m = p + q.$$

• If $Q > \chi^2_{\alpha, K-m}$, reject H0. This means that
autocorrelation remains in the residuals and the
assumption is violated; check the model again. It may
be better to add another lag to the AR or MA part of
the model.
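• In R, Box.test() implements both the Box-Pierce and Ljung-Box versions; its fitdf argument subtracts m = p + q degrees of freedom so that Q is referred to a chi-square with K − m degrees of freedom. A minimal sketch (data and model ours):

# Ljung-Box test on the residuals of a fitted ARMA model.
set.seed(1)
y   <- as.numeric(arima.sim(list(ar = 0.5), n = 200))
fit <- arima(y, order = c(1, 0, 0))              # here p + q = 1
Box.test(resid(fit), lag = 15, type = "Ljung-Box", fitdf = 1)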
3. DETECTING HETEROSCEDASTICITY
• Heteroscedasticity is a violation of the
constant error variance assumption. It occurs when
the variance of the error changes over time:

$$Var(a_t) = \sigma_t^2.$$
• ACF-PACF PLOT OF SQUARED RESIDUALS:
Since {a_t} is a zero-mean process, the variance
of a_t is defined by the expected value of the
squared a_t's. So, if the a_t's are homoscedastic, the
variance will be constant (not change over
time), and the ACF and PACF plots of the squared
residuals should stay within the 95% WN limits.
If not, this is a sign of heteroscedasticity.
• Let r_t be the log return of an asset at time t. We are going to study
volatility: such a series is either serially uncorrelated or has only
minor lower-order serial correlations, but it is a dependent series.
• Examine the ACF of the residuals and the squared residuals for the
calamari catch data. The catch data had a definite seasonality, which
was removed. The remaining series was then modelled with an AR(5)
model, and the residuals of this model were obtained.
• There are various definitions of what constitutes weak dependence of
a time series. However, the operational definition of independence
here will be that both the autocorrelation function of the series and
that of the squared series show no autocorrelation. If there is no serial
correlation in the series but there is in the squared series, then we
will say there is weak dependence. This will lead us to examine the
volatility of the series, since that is exemplified by the squared terms.
[Figure 1: Autocorrelation function for RESI1, the residuals after AR(5)
fitted to the deseasoned calamari data (with 5% significance limits).]
[Figure 2: Autocorrelation function for the squared residuals
(with 5% significance limits).]
[Figure 3: Autocorrelation function for the log returns of the Intel
series (with 5% significance limits).]
[Figure 4: ACF of the squared returns (with 5% significance limits).]
[Figure 5: PACF of the squared returns (with 5% significance limits).]
Combining these three plots, it appears that this series is serially
uncorrelated but dependent. Volatility models attempt to capture
such dependence in the return series.
• If we ignore heteroscedasticity:
– The OLS estimator is unbiased but not efficient;
the GLS or WLS estimator is the Gauss-Markov estimator.
– The estimate of the variance of the OLS estimator
is a biased estimator of the true variance, so the
classical testing procedures are invalidated.
• Now the question is: how can we detect
heteroscedasticity?
• White's General Test for Heteroscedasticity:

$$H_0: Var(a_t) = E\left(a_t^2 \mid Y_{t-1}, Y_{t-2}, \ldots\right) = \sigma_a^2 = \text{constant}$$

• After the identified model is estimated, we obtain
the residuals, $\hat{a}_t$. Then $\hat{a}_t^2$ can be written as

$$\hat{a}_t^2 = \alpha_0 + \alpha_1 Y_{t-1} + \alpha_2 Y_{t-2} + \cdots + \beta_1 Y_{t-1}^2 + \beta_2 Y_{t-2}^2 + \cdots + \gamma_1 Y_{t-1} Y_{t-2} + \cdots$$
• Then, construct the following artificial
regression:

$$\hat{a}_t^2 = \alpha_0 + \alpha_1 Y_{t-1} + \alpha_2 Y_{t-2} + \cdots + \beta_1 Y_{t-1}^2 + \beta_2 Y_{t-2}^2 + \cdots + \gamma_1 Y_{t-1} Y_{t-2} + \cdots + u_t$$

• The homoscedastic case implies that $\alpha_1 = \alpha_2 = \cdots = \beta_1 = \beta_2 = \cdots = \gamma_1 = \gamma_2 = \cdots = 0$, therefore

$$E\left(a_t^2 \mid Y_{t-1}, Y_{t-2}, \ldots\right) = \alpha_0.$$
• Then, the test statistic is given by

$$n R^2_{\hat{a}_t^2} \sim \chi^2_m$$

under the null hypothesis of homoscedasticity,
where m is the number of variables in the artificial
regression, excluding the constant term.
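• A minimal sketch of White's artificial regression using a single lag of the series, so the regressors are the lagged level and its square (with two or more lags, cross-products such as Y_{t-1}Y_{t-2} would be added); data and names ours:

# White-type test: regress squared residuals on lagged levels and squares.
set.seed(1)
y    <- as.numeric(arima.sim(list(ar = 0.5), n = 200))
fit  <- arima(y, order = c(1, 0, 0))
a2   <- resid(fit)[-1]^2          # squared residuals, t = 2, ..., n
ylag <- y[-length(y)]             # Y_{t-1}
aux  <- lm(a2 ~ ylag + I(ylag^2)) # artificial regression
W    <- length(a2) * summary(aux)$r.squared
c(W = W, p.value = 1 - pchisq(W, df = 2))   # m = 2 regressors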
• The Breusch-Pagan Test: a Lagrange Multiplier
test for heteroscedasticity. Consider the inverted
form of a time series, and assume that we can
write our model as an AR(m):

$$\left(1 - \phi_1 B - \cdots - \phi_m B^m\right) Y_t = \theta_0 + a_t.$$

• Then, consider testing

$$H_0: Var(a_t) = E\left(a_t^2 \mid Y_{t-1}, Y_{t-2}, \ldots\right) = \sigma_a^2 = \text{constant}.$$
• Note that we need to evaluate the conditional
(on the independent variables) expectation of
the squared error term:

$$a_t^2 = \alpha_0 + \alpha_1 Y_{t-1} + \cdots + \alpha_m Y_{t-m} + u_t$$

• The homoscedastic case implies that $\alpha_1 = \alpha_2 = \cdots = \alpha_m = 0$.
• The problem, however, is that we do not observe
the error term $a_t^2$, but it can be replaced by an
estimate, $\hat{a}_t^2$. A simple approach is to run the
regression

$$\hat{a}_t^2 = \alpha_0 + \alpha_1 Y_{t-1} + \cdots + \alpha_m Y_{t-m} + u_t$$

and test whether the slope coefficients are all equal
to zero.
• The test statistic:

$$n R^2_{\hat{a}_t^2} \sim \chi^2_m$$

under the null hypothesis of homoscedasticity,
where m is the number of variables in the artificial
regression, excluding the constant term.
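• The same nR² recipe applies here, with the m lagged values of the series as the only regressors. A minimal sketch with m = 2 (data and names ours):

# Breusch-Pagan-type regression of squared residuals on lagged series values.
set.seed(1)
y   <- as.numeric(arima.sim(list(ar = 0.5), n = 200))
fit <- arima(y, order = c(1, 0, 0))
n   <- length(y)
a2  <- resid(fit)[3:n]^2          # squared residuals, t = 3, ..., n
y1  <- y[2:(n - 1)]               # Y_{t-1}
y2  <- y[1:(n - 2)]               # Y_{t-2}
aux <- lm(a2 ~ y1 + y2)           # artificial regression with m = 2
BP  <- length(a2) * summary(aux)$r.squared
c(BP = BP, p.value = 1 - pchisq(BP, df = 2))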
• If we reject the null hypothesis, the error
variance is not constant; it is changing over time.
Therefore, we need to model the volatility.
ARCH (Autoregressive Conditional
Heteroskedasticity) or GARCH (Generalized
Autoregressive Conditional Heteroskedasticity)
models help us model the error variance.
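• As a pointer forward, such models can be fitted with garch() from the tseries package; a minimal sketch (the GARCH(1,1) choice and the stand-in data are ours):

# Fit a GARCH(1,1) to a residual series; order = c(0, q) gives ARCH(q).
library(tseries)
set.seed(1)
res <- rnorm(500)                 # stand-in; use the model residuals in practice
g   <- garch(res, order = c(1, 1))
summary(g)                        # reports Jarque-Bera and Box-Ljung diagnostics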
EXAMPLE (BEER)
> library(TSA)
> fit=arima(beer,order=c(3,1,0),seasonal=list(order=c(3,0,0), period=4))
> par(mfrow=c(1,3))
> plot(window(rstandard(fit),start=c(1975,1)), ylab='Standardized Residuals',type='o')
> abline(h=0)
> acf(as.vector(window(rstandard(fit),start=c(1975,1))), lag.max=36)
> pacf(as.vector(window(rstandard(fit),start=c(1975,1))), lag.max=36)
> fit2=arima(beer,order=c(2,1,1),seasonal=list(order=c(3,0,1), period=4))
> fit2
Call:
arima(x = beer, order = c(2, 1, 1), seasonal = list(order = c(3, 0, 1), period = 4))
Coefficients:
          ar1      ar2      ma1    sar1    sar2     sar3     sma1
      -0.2567  -0.4255  -0.4990  1.1333  0.2656  -0.3991  -0.9721
s.e.   0.1426   0.1280   0.1501  0.1329  0.1656   0.1248   0.1160
sigma^2 estimated as 1.564: log likelihood = -157.47, aic = 328.95
> plot(window(rstandard(fit2),start=c(1975,1)), ylab='Standardized Residuals',type='o')
> abline(h=0)
> acf(as.vector(window(rstandard(fit2),start=c(1975,1))), lag.max=36)
> pacf(as.vector(window(rstandard(fit2),start=c(1975,1))), lag.max=36)
> hist(rstandard(fit2), xlab='Standardized Residuals')
> qqnorm(rstandard(fit2))
> qqline(rstandard(fit2))
> shapiro.test(window(rstandard(fit2),start=c(1975,1)))
Shapiro-Wilk normality test
data: window(rstandard(fit2), start = c(1975, 1))
W = 0.9857, p-value = 0.4181
> library(tseries)   # jarque.bera.test() comes from the tseries package
> jarque.bera.test(resid(fit2))
Jarque Bera Test
data: resid(fit2)
X-squared = 1.0508, df = 2, p-value = 0.5913
> tsdiag(fit2)
> Box.test(resid(fit2),lag=15,type = c("Ljung-Box"))
Box-Ljung test
data: resid(fit2)
X-squared = 24.2371, df = 15, p-value = 0.06118
> Box.test(resid(fit2),lag=15,type = c("Box-Pierce"))
Box-Pierce test
data: resid(fit2)
X-squared = 21.4548, df = 15, p-value = 0.1229
> rr=resid(fit2)^2
> par(mfrow=c(1,2))
> acf(rr)
> pacf(rr)
> par(mfrow=c(1,1))
> result=plot(fit2,n.ahead=12,ylab='Series & Forecasts',col=NULL,pch=19)
> abline(h=coef(fit2))
> forecast=result$pred
> cbind(beer,forecast)
> plot(fit2,n1=1975,n.ahead=12,ylab='Series, Forecasts, Actuals & Limits', pch=19)