Journal of Statistical and Econometric Methods, vol.4, no.3, 2015, 93-99
ISSN: 1792-6602 (print), 1792-6939 (online)
Scienpress Ltd, 2015
A note on the distribution of residual
autocorrelations in VARMA(p,q) models
Huong Nguyen Thu¹
Abstract
This paper generalizes the distribution of the residual autocovariance
matrices in VARMA(p,q) models obtained previously by Hosking (1980).
A new, simplified version of the multivariate relation between the sample
correlation matrices of the errors and those of the residuals is also
established. The modifications are effective tools for identifying and
dealing with the curse of dimensionality in multivariate time series.
Mathematics Subject Classification: 62M10; 62H10; 60E05
Keywords: Asymptotic distribution; Covariance matrix; Correlation matrix;
Multivariate time series; VARMA(p,q) models
¹ Department of Business Administration, Technology and Social Sciences, Luleå University of Technology, 971 87 Luleå, Sweden. E-mail: huong.nguyen.thu@ltu.se. Department of Mathematics, Foreign Trade University, Hanoi, Vietnam.

Article Info: Received: May 12, 2015. Revised: June 10, 2015. Published online: September 1, 2015.

1 Introduction

Consider a causal and invertible $m$-variate autoregressive moving average VARMA(p,q) process
$$\Phi(B)(X_t - \mu) = \Theta(B)\varepsilon_t, \tag{1}$$
where $B$ is the backward shift operator, $BX_t = X_{t-1}$, $\mu$ is the $m \times 1$ mean vector, and $\{\varepsilon_t : t \in \mathbb{Z}\}$ is a zero-mean white noise sequence $WN(0, \Sigma)$, where $\Sigma$ is an $m \times m$ positive definite matrix. Additionally, $\Phi(z) = I_m - \Phi_1 z - \cdots - \Phi_p z^p$ and $\Theta(z) = I_m + \Theta_1 z + \cdots + \Theta_q z^q$ are matrix polynomials, where $I_m$ is the $m \times m$ identity matrix and $\Phi_1, \dots, \Phi_p, \Theta_1, \dots, \Theta_q$ are $m \times m$ real matrices such that the roots of the determinantal equations $|\Phi(z)| = 0$ and $|\Theta(z)| = 0$ all lie outside the unit circle. We also assume that both $\Phi_p$ and $\Theta_q$ are non-null matrices, and that the identifiability condition of [1], $r(\Phi_p, \Theta_q) = m$, holds.
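For concreteness, the following minimal Python sketch simulates a bivariate process of the form (1). The coefficient matrices are hypothetical choices, not taken from this paper; they are triangular so that the root conditions above are easy to verify by eye.

```python
import numpy as np

# Minimal sketch: simulate a bivariate VARMA(1,1) process as in (1).
# Phi1, Theta1, Sigma are illustrative values (not from the paper);
# their eigenvalues (0.5, 0.3) and (0.2, 0.4) keep all roots of
# |Phi(z)| = 0 and |Theta(z)| = 0 outside the unit circle.
rng = np.random.default_rng(0)
m, n = 2, 500
Phi1 = np.array([[0.5, 0.1],
                 [0.0, 0.3]])
Theta1 = np.array([[0.2, 0.0],
                   [0.1, 0.4]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])   # positive definite innovation covariance
mu = np.zeros(m)

eps = rng.multivariate_normal(np.zeros(m), Sigma, size=n + 1)
X = np.zeros((n + 1, m))
for t in range(1, n + 1):
    # (X_t - mu) = Phi1 (X_{t-1} - mu) + eps_t + Theta1 eps_{t-1}
    X[t] = mu + Phi1 @ (X[t - 1] - mu) + eps[t] + Theta1 @ eps[t - 1]
X = X[1:]   # n observations of the process
```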
Let $P = \max(p, q)$ and define the $m \times mp$ matrix $\Phi = (\Phi_1, \dots, \Phi_p)$, the $m \times mq$ matrix $\Theta = (\Theta_1, \dots, \Theta_q)$, and the $m^2(p+q) \times 1$ vector of parameters $\Lambda = \mathrm{vec}(\Phi, \Theta)$. The residual vectors $\hat{\varepsilon}_t$, $t = 1, \dots, n$, are defined recursively in the form
$$\hat{\varepsilon}_t = (X_t - \bar{X}_n) - \sum_{i=1}^{p} \hat{\Phi}_i (X_{t-i} - \bar{X}_n) - \sum_{j=1}^{q} \hat{\Theta}_j \hat{\varepsilon}_{t-j}, \quad t = 1, \dots, n, \tag{2}$$
with the usual conventions $X_t - \bar{X}_n \equiv 0 \equiv \hat{\varepsilon}_t$ for $t \leq 0$. In practice, only residual vectors for $t > P = \max(p, q)$ are considered. Define the $m \times m$ sample error covariance matrix at lag $k$ by $C_k = (1/n) \sum_{t=1}^{n-k} \varepsilon_t \varepsilon'_{t+k}$, $0 \leq k \leq n-1$. Similarly, the $m \times m$ $k$th residual covariance matrix is given by $\hat{C}_k = (1/n) \sum_{t>P}^{n-k} \hat{\varepsilon}_t \hat{\varepsilon}'_{t+k}$, $0 \leq k \leq n-(P+1)$. Following [2], let $R_k = C'_k C_0^{-1}$ be the $k$th sample correlation matrix of the errors $\varepsilon_t$. Its residual analogue is given by $\hat{R}_k = \hat{C}'_k \hat{C}_0^{-1}$.
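As a computational companion to these definitions, the sketch below evaluates $\hat{C}_k$ and $\hat{R}_k$ from an array of residuals; `eps_hat` is a hypothetical $(n, m)$ array whose row $t-1$ holds $\hat{\varepsilon}_t$ from (2).

```python
import numpy as np

# Minimal sketch of the sample quantities above. `eps_hat` is assumed to
# be an (n, m) array of residuals from (2); only rows with t > P enter.
def residual_autocov(eps_hat, k, P):
    """C_hat_k = (1/n) sum_{t > P}^{n-k} eps_hat_t eps_hat_{t+k}'  (m x m)."""
    n = eps_hat.shape[0]
    terms = (np.outer(eps_hat[t], eps_hat[t + k]) for t in range(P, n - k))
    return sum(terms) / n

def residual_autocorr(eps_hat, k, P):
    """R_hat_k = C_hat_k' C_hat_0^{-1}, the k-th residual correlation matrix."""
    C0 = residual_autocov(eps_hat, 0, P)
    Ck = residual_autocov(eps_hat, k, P)
    return Ck.T @ np.linalg.inv(C0)
```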
Properties of the residual covariance and correlation matrices and their practical use in multivariate portmanteau statistics have been considered by many authors; see, for example, [2], [3], [4], [5]. The linear relation between the residual and error covariance matrices in [3] is high-dimensional. For convenience in applications, a new representation is suggested in Section 2. To this end, this paper uses the family of statistics proposed in [6]. The orthogonal projection matrix in [3] is also generalized. The advantage of the new tool lies in standard properties of the trace operator. The associated correlation matrix of the residuals is useful in identification and diagnostic checking. Section 3 examines the large-sample distribution of the transformed multivariate residual autocorrelations. Section 4 concludes.
2 Distribution of residual autocovariance
This section examines the asymptotic properties of the residual autocovariances. We start by considering the $m \times m$ coefficients of the series expansions $\Phi^{-1}(z)\Theta(z) = \sum_{j=0}^{\infty} \Omega_j z^j$ and $\Theta^{-1}(z) = \sum_{j=0}^{\infty} L_j z^j$, where $\Omega_0 = L_0 = I_m$. Define also the collection of matrices $G_k = \sum_{j=0}^{k} (\Sigma \Omega'_j \otimes L_{k-j})$ and $F_k = \Sigma \otimes L_k$, $k \geq 0$. With the convention $G_k = F_k = 0$ for $k < 0$, it follows that
$$\hat{C}'_k = C'_k - \sum_{i=1}^{p} \sum_{r=0}^{k-i} L_{k-i-r}(\hat{\Phi}_i - \Phi_i)\Omega_r \Sigma - \sum_{j=1}^{q} L_{k-j}(\hat{\Theta}_j - \Theta_j)\Sigma + O_P\!\left(\tfrac{1}{n}\right). \tag{3}$$
This construction is due to [3, p.603].
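The coefficient sequences satisfy simple matrix recursions implied by their defining expansions: $\Omega_j = \Theta_j + \sum_{i=1}^{\min(j,p)} \Phi_i \Omega_{j-i}$ (with $\Theta_j = 0$ for $j > q$) and $L_j = -\sum_{i=1}^{\min(j,q)} \Theta_i L_{j-i}$. A minimal sketch, assuming lists `Phis`, `Thetas` of coefficient matrices and the covariance `Sigma` are given:

```python
import numpy as np

def omega_coeffs(Phis, Thetas, m, K):
    """Omega_j from Phi(z)^{-1} Theta(z): Omega_0 = I_m and, for j >= 1,
    Omega_j = Theta_j + sum_{i=1}^{min(j,p)} Phi_i Omega_{j-i}."""
    Omega = [np.eye(m)]
    for j in range(1, K + 1):
        Oj = Thetas[j - 1].copy() if j <= len(Thetas) else np.zeros((m, m))
        for i in range(1, min(j, len(Phis)) + 1):
            Oj += Phis[i - 1] @ Omega[j - i]
        Omega.append(Oj)
    return Omega

def l_coeffs(Thetas, m, K):
    """L_j from Theta(z)^{-1}: L_0 = I_m and L_j = -sum Theta_i L_{j-i}."""
    L = [np.eye(m)]
    for j in range(1, K + 1):
        Lj = np.zeros((m, m))
        for i in range(1, min(j, len(Thetas)) + 1):
            Lj -= Thetas[i - 1] @ L[j - i]
        L.append(Lj)
    return L

def G_F(Omega, L, Sigma, k):
    """G_k = sum_{j=0}^k (Sigma Omega_j' kron L_{k-j}); F_k = Sigma kron L_k."""
    Gk = sum(np.kron(Sigma @ Omega[j].T, L[k - j]) for j in range(k + 1))
    return Gk, np.kron(Sigma, L[k])
```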
Set the sequence of $m^2 M \times m^2(p+q)$ matrices $Z_M = (X_M, Y_M)$, $M \geq 1$, where
$$X_M = \begin{pmatrix} G_0 & 0 & 0 & \cdots & 0 \\ G_1 & G_0 & 0 & \cdots & 0 \\ G_2 & G_1 & G_0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ G_{M-1} & G_{M-2} & G_{M-3} & \cdots & G_{M-p} \end{pmatrix},$$
and
$$Y_M = \begin{pmatrix} F_0 & 0 & 0 & \cdots & 0 \\ F_1 & F_0 & 0 & \cdots & 0 \\ F_2 & F_1 & F_0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ F_{M-1} & F_{M-2} & F_{M-3} & \cdots & F_{M-q} \end{pmatrix}.$$
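Assembling $X_M$ and $Y_M$ is a routine block-Toeplitz construction. A minimal sketch, assuming lists `G` and `F` of $m^2 \times m^2$ blocks from the previous sketch (blocks with negative index are zero, per the convention above):

```python
import numpy as np

def block_toeplitz(blocks, M, ncols, d):
    """Place blocks[i - j] (d x d) at block position (i, j); zero for i < j."""
    Z = np.zeros((M * d, ncols * d))
    for i in range(M):
        for j in range(ncols):
            if 0 <= i - j < len(blocks):
                Z[i * d:(i + 1) * d, j * d:(j + 1) * d] = blocks[i - j]
    return Z

# X_M = block_toeplitz(G, M, p, m * m)   # m^2 M x m^2 p
# Y_M = block_toeplitz(F, M, q, m * m)   # m^2 M x m^2 q
# Z_M = np.hstack([X_M, Y_M])            # m^2 M x m^2 (p + q)
```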
Put the $Mm^2 \times Mm^2$ block diagonal matrix $W^{(M)} = \mathrm{diag}(C_0 \otimes C_0, \cdots, C_0 \otimes C_0) = I_M \otimes C_0 \otimes C_0$. The residual counterpart is $\hat{W} = I_M \otimes \hat{\Sigma} \otimes \hat{\Sigma}$, with $\hat{\Sigma} = \hat{C}_0$. For convenience, consider the $Mm^2 \times 1$ random vectors $\hat{H}_M = [\mathrm{vec}(\hat{C}'_1), \dots, \mathrm{vec}(\hat{C}'_M)]'$ and $H_M = [\mathrm{vec}(C'_1), \dots, \mathrm{vec}(C'_M)]'$, and put $W = I_M \otimes \Sigma \otimes \Sigma$. Taking vecs on both sides of (3), it follows that, for each $M \geq 1$,
$$\hat{H}_M = H_M - Z_M \,\mathrm{vec}[(\hat{\Phi}, \hat{\Theta}) - (\Phi, \Theta)] + O_P\!\left(\tfrac{1}{n}\right). \tag{4}$$
According to [3], the following orthogonality condition holds:
$$Z'_M W^{-1} \hat{H}_M = O_P\!\left(\tfrac{1}{n}\right). \tag{5}$$
Combining (4) with (5), Hosking [3] concluded that
$$\hat{W}^{-1/2} \hat{H}_M = (I_{Mm^2} - P_H) W^{-1/2} H_M + O_P\!\left(\tfrac{1}{n}\right), \tag{6}$$
where $P_H = W^{-1/2} Z_M (Z'_M W^{-1} Z_M)^{-1} Z'_M W^{-1/2}$ is the $Mm^2 \times Mm^2$ orthogonal projection matrix onto the subspace spanned by the columns of $W^{-1/2} Z_M$.

In practice, the relation (6) is difficult to deal with when $m \geq 5$ because of the curse of dimensionality. As a solution, we define the $Mm^2 \times Mm^2$ matrix $Q_M = (I_M \otimes a)(I_M \otimes a')$, where $a = \mathrm{vec}(I_m)/\sqrt{m}$. Put also the matrix
$$P_M = (I_M \otimes a') W^{-1/2} Z_M \left(Z'_M W^{-1/2} Q_M W^{-1/2} Z_M\right)^{-1} Z'_M W^{-1/2} (I_M \otimes a).$$
Note that $P_M$ is the orthogonal projection matrix onto the subspace spanned by the columns of $(I_M \otimes a') W^{-1/2} Z_M$.
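A minimal sketch of $a$, $Q_M$, and the projection built from the columns of $(I_M \otimes a') W^{-1/2} Z_M$, assuming the arrays `W_isqrt` ($W^{-1/2}$) and `Z` ($Z_M$) are already available; invertibility of the middle factor requires $M$ large enough:

```python
import numpy as np

def qm_pm(m, M, W_isqrt, Z):
    """Build Q_M = (I_M kron a)(I_M kron a') and the projection P_M
    determined by the columns of (I_M kron a') W^{-1/2} Z_M."""
    # a = vec(I_m)/sqrt(m); row- and column-stacking agree for the identity.
    a = np.eye(m).reshape(-1, 1) / np.sqrt(m)
    A = np.kron(np.eye(M), a)                 # I_M kron a, (M m^2) x M
    Q = A @ A.T                               # M m^2 x M m^2, idempotent
    B = A.T @ W_isqrt @ Z                     # (I_M kron a') W^{-1/2} Z_M
    P = B @ np.linalg.solve(B.T @ B, B.T)     # orthogonal projection matrix
    return Q, P
```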
Remark 2.1. The matrices $P_H$ and $P_M$ are idempotent.
Lemma 2.2 gives the multivariate linear relation between the residual and error covariance matrices.
Lemma 2.2. Suppose that the error vectors $\{\varepsilon_t\}$ are i.i.d. with $E[\varepsilon_t] = 0$, $\mathrm{Var}[\varepsilon_t] = \Sigma > 0$, and finite fourth-order moments $E[\|\varepsilon_t\|^4] < +\infty$. Then, as $n \to \infty$,
$$Q_M \hat{W}^{-1/2} \hat{H}_M = (I_{Mm^2} - P_M) W^{-1/2} H_M + O_P\!\left(\tfrac{1}{n}\right). \tag{7}$$

Proof. We begin by recalling the result in [6] that
$$\sqrt{n}\, H_M \xrightarrow{D} W^{1/2} [V_1, \dots, V_M]', \quad M \geq 1, \tag{8}$$
where $V_j$, $j = 1, \dots, M$, are i.i.d. $N_{m^2}(0, I_{m^2})$. See more details in [6, Lemma 3.1].

From (8), under the assumptions of Lemma 2.2, we obtain
$$\sqrt{n}\, Q_M H_M \xrightarrow{D} Q_M W^{1/2} [V_1, \dots, V_M]', \quad M \geq 1. \tag{9}$$
Notice first that
$$Q_M \hat{W}^{-1/2} \hat{H}_M = Q_M W^{-1/2} \hat{H}_M + Q_M (\hat{W}^{-1/2} - W^{-1/2}) \hat{H}_M. \tag{10}$$
As a consequence of (3), (4) and (9), the second summand of (10) is $O_P(1/n)$. We can rewrite (5) as
$$Q_M Z'_M W^{-1} \hat{H}_M = O_P\!\left(\tfrac{1}{n}\right). \tag{11}$$
Taking into account that $\hat{C}_0$ is consistent for the matrix $\Sigma$, we have $\hat{W}^{-1/2} - W^{-1/2} = O_P(1/\sqrt{n})$. These results, combined with (4) and (9), lead to (7), and Lemma 2.2 follows.

The next section presents a simplified multivariate linear relation between the residual autocorrelation matrices and their error counterparts.
3 Distribution of residual autocorrelation
Define the $M \times 1$ random vectors
$$\hat{T}_M = [\mathrm{tr}(\hat{R}_1), \dots, \mathrm{tr}(\hat{R}_M)]' \tag{12}$$
and
$$T_M = [\mathrm{tr}(R_1), \dots, \mathrm{tr}(R_M)]'. \tag{13}$$
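Computationally, (12) is immediate once the residual correlation matrices are available; a minimal sketch, reusing the hypothetical `residual_autocorr` helper from the Section 1 sketch:

```python
import numpy as np

def trace_vector(eps_hat, M, P):
    """T_hat_M = [tr(R_hat_1), ..., tr(R_hat_M)]' as in (12).
    Relies on residual_autocorr from the earlier sketch."""
    return np.array([np.trace(residual_autocorr(eps_hat, k, P))
                     for k in range(1, M + 1)])
```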
Theorem 3.1 provides a solution to the high-dimensional relation (7).
Theorem 3.1. Under the assumptions of Lemma 2.2, as $n \to \infty$,
$$\frac{1}{\sqrt{m}}\, \hat{T}_M = (I_M - P_M)\, \frac{1}{\sqrt{m}}\, T_M + O_P\!\left(\tfrac{1}{n}\right). \tag{14}$$
Proof. Following [6], we introduce a linear relation for dimension-reduction purposes:
$$\frac{1}{\sqrt{m}}\, T_M = (I_M \otimes a') W^{-1/2} H_M, \quad M \geq 1. \tag{15}$$
Its residual version is given by
$$\frac{1}{\sqrt{m}}\, \hat{T}_M = (I_M \otimes a') \hat{W}^{-1/2} \hat{H}_M, \quad M \geq 1. \tag{16}$$
Combining (7), (15) and (16) finishes the proof of (14).
The objective now is to establish the large-sample distribution of the random vector $\hat{H}_M$.
Theorem 3.2. Under the assumptions of Lemma 2.2, as $n \to \infty$,
$$\sqrt{n}\, Q_M \hat{W}^{-1/2} \hat{H}_M \overset{D}{\sim} (I_{Mm^2} - P_M)\, N_{Mm^2}(0, I_{Mm^2}). \tag{17}$$
Proof. The result follows by combining (7) and (9).
Let us mention an important consequence of Theorem 3.2: it provides a practical tool for constructing a goodness-of-fit process for selecting an appropriate time series model.
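For instance, once $P_M$ has been formed, the limiting law in (17) can be sampled by Monte Carlo, which in turn can be used to approximate critical values for statistics based on $\hat{H}_M$. The sketch below is our illustration of that idea, not a procedure spelled out in the paper:

```python
import numpy as np

def limit_draws(P, nrep=10_000, seed=1):
    """Draws of (I - P_M) Z with Z ~ N(0, I), the limiting law in (17).
    P is the (precomputed) projection matrix; each returned row is one draw."""
    rng = np.random.default_rng(seed)
    d = P.shape[0]
    Z = rng.standard_normal((nrep, d))
    return Z @ (np.eye(d) - P).T
```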
4 Conclusions
The main results of this paper were presented in Lemma 2.2 and Theorems 3.1 and 3.2. Goodness-of-fit methods make these properties attractive in practice, especially for high-dimensional time series datasets.
ACKNOWLEDGEMENTS. This work was carried out while the author was a PhD student at the Department of Statistics, Universidad Carlos III de Madrid. I am very grateful to Prof. Santiago Velilla for his sharing, encouragement and guidance as my adviser.
References
[1] Hannan, E. J., The identification of vector mixed autoregressive moving average systems, Biometrika, 56(1), (1969), 223-225.

[2] Chitturi, R. V., Distribution of residual autocorrelations in multiple autoregressive schemes, Journal of the American Statistical Association, 69(348), (1974), 928-934.

[3] Hosking, J. R. M., The multivariate portmanteau statistic, Journal of the American Statistical Association, 75(371), (1980), 602-608.

[4] Li, W. K. and McLeod, A. I., Distribution of the residual autocorrelations in multivariate ARMA time series models, Journal of the Royal Statistical Society, Series B (Methodological), 43(2), (1981), 231-239.

[5] Li, W. K. and Hui, Y. V., Robust multiple time series modelling, Biometrika, 76(2), (1989), 309-315.

[6] Velilla, S. and Nguyen, H., A basic goodness-of-fit process for VARMA(p,q) models, Statistics and Econometrics Series, Carlos III University of Madrid, 09, (2011).

[7] Velilla, S., A goodness-of-fit test for autoregressive moving-average models based on the standardized sample spectral distribution of the residuals, Journal of Time Series Analysis, 15, (1994), 637-647.