Original Article
First version received August 2010
Published online in Wiley Online Library: 27 September 2010
(wileyonlinelibrary.com) DOI: 10.1111/j.1467-9892.2010.00689.x

Asymptotic results for Fourier-PARMA time series

Yonas Gebeyehu Tesfaye^a, Paul L. Anderson^b and Mark M. Meerschaert^c,*,†
Periodically stationary time series are useful to model physical systems whose mean behavior and covariance structure vary with the season. The Periodic Auto-Regressive Moving Average (PARMA) process provides a powerful tool for modelling periodically stationary series. Since the process is non-stationary, the innovations
algorithm is useful to obtain parameter estimates. Fitting a PARMA model to high-resolution data, such as weekly
or daily time series, is problematic because of the large number of parameters. To obtain a more parsimonious
model, the discrete Fourier transform (DFT) can be used to represent the model parameters. This article proves
asymptotic results for the DFT coefficients, which allow identification of the statistically significant frequencies to
be included in the PARMA model.
Keywords: Discrete Fourier transform, periodic auto-regressive moving average, parameter estimation, innovations
algorithm, asymptotic distribution.
1. INTRODUCTION
A stochastic process $X_t$ is called periodically stationary (in the wide sense) if $\mu_t = EX_t$ and $\gamma_t(h) = \mathrm{Cov}(X_t, X_{t+h})$ for $h = 0, \pm 1, \pm 2, \ldots$ are all periodic functions of time $t$ with the same period $m \geq 1$. If $m = 1$, then the process is stationary. Periodically stationary processes
manifest themselves in such fields as economics, hydrology and geophysics, where the observed time series are characterized by
seasonal variations in both the mean and covariance structure. An important class of stochastic models for describing such time
series are the periodic Auto-Regressive Moving Average (ARMA) models, which allow the model parameters in the classical ARMA model to vary with the season. A periodic ARMA process $\{\tilde{X}_t\}$ with period $m$ [denoted by PARMA$_m(p,q)$] has representation

$$X_t - \sum_{j=1}^{p} \phi_t(j)\, X_{t-j} = \varepsilon_t - \sum_{j=1}^{q} \theta_t(j)\, \varepsilon_{t-j}, \tag{1}$$

where $X_t = \tilde{X}_t - \mu_t$ and $\{\varepsilon_t\}$ is a sequence of random variables with mean zero and scale $\sigma_t$ such that $\{\delta_t = \sigma_t^{-1}\varepsilon_t\}$ is i.i.d. The notation in eqn (1) is consistent with that of Box and Jenkins (1976). The autoregressive parameters $\phi_t(j)$, the moving average parameters $\theta_t(j)$ and the residual standard deviations $\sigma_t$ are all periodic functions of $t$ with the same period $m \geq 1$. Periodic time series
models and their practical applications are discussed in Adams and Goodwin (1995), Anderson and Vecchia (1993), Anderson and
Meerschaert (1997, 1998), Anderson et al. (1999), Basawa et al. (2004), Boshnakov (1996), Gautier (2006), Jones and Brelsford (1967),
Lund and Basawa (1999, 2000), Lund (2006), Nowicka-Zagrajek and Wyłomańska (2006), Pagano (1978), Roy and Saidi (2008), Salas
et al. (1982, 1985), Shao and Lund (2004), Tesfaye et al. (2005), Tjøstheim and Paulsen (1982), Troutman (1979), Vecchia (1985a,
1985b), Vecchia and Ballerini (1991), Ula (1990, 1993), Ula and Smadi (1997, 2003) and Wyłomańska (2008). See also the recent book
of Franses and Paap (2004) as well as Hipel and McLeod (1994).
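To make the model in eqn (1) concrete, the following is a minimal simulation sketch (not from the original article) of a PARMA$_m(1,1)$ process, i.e. eqn (1) with $p = q = 1$, using Gaussian noise for $\delta_t$ so that the finite fourth moment case applies; the function name and interface are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_parma11(phi, theta, sigma, n_years):
    """Simulate X_t = phi_t X_{t-1} + eps_t - theta_t eps_{t-1},
    a PARMA_m(1,1) process, with eps_t = sigma_t * delta_t and
    delta_t i.i.d. standard normal.  phi, theta, sigma are length-m
    sequences of the periodic parameters; returns n_years*m values."""
    m = len(phi)
    n = n_years * m
    eps = np.array([sigma[t % m] for t in range(n)]) * rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(1, n):
        s = t % m  # season of index t
        x[t] = phi[s] * x[t - 1] + eps[t] - theta[s] * eps[t - 1]
    return x
```

Replacing the Gaussian `delta_t` by a heavy-tailed RV($\alpha$) variate with $2 < \alpha < 4$ would give the infinite fourth moment case treated below.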
In this article, we will assume:

(i) Finite variance: $E\varepsilon_t^2 < \infty$.
(ii) Either $E\varepsilon_t^4 < \infty$ (finite fourth moment case), or the i.i.d. sequence $\delta_t = \sigma_t^{-1}\varepsilon_t$ is RV($\alpha$) for some $2 < \alpha < 4$ (infinite fourth moment case), meaning that $P[|\delta_t| > x]$ varies regularly with index $-\alpha$ and $P[\delta_t > x]/P[|\delta_t| > x] \to p$ for some $p \in [0,1]$.
(iii) The model admits a causal representation
$$X_t = \sum_{j=0}^{\infty} \psi_t(j)\, \varepsilon_{t-j}, \tag{2}$$
where $\psi_t(0) = 1$ and $\sum_{j=0}^{\infty} |\psi_t(j)| < \infty$ for all $t$. Note that $\psi_t(j) = \psi_{t+km}(j)$ for all $j$.
(iv) The model also satisfies an invertibility condition
$$\varepsilon_t = \sum_{j=0}^{\infty} \pi_t(j)\, X_{t-j}, \tag{3}$$
where $\pi_t(0) = 1$ and $\sum_{j=0}^{\infty} |\pi_t(j)| < \infty$ for all $t$. Again, $\pi_t(j) = \pi_{t+km}(j)$ for all $j$.

^a GEI Consultants, Inc.
^b Albion College
^c Michigan State University
*Correspondence to: M. M. Meerschaert, Department of Statistics and Probability, Michigan State University, East Lansing, MI, USA.
†E-mail: mcubed@stt.msu.edu

© 2010 Blackwell Publishing Ltd. J. Time Ser. Anal. 2011, 32 157–174
In the infinite fourth moment case, the RV($\alpha$) assumption implies that $E|\delta_t|^p < \infty$ if $0 < p < \alpha$, and in particular the variance of $\varepsilon_t$ exists, while $E|\delta_t|^p = \infty$ for $p > \alpha$, so that $E\varepsilon_t^4 = \infty$. Anderson and Meerschaert (1997) show that, in this case, the sample
autocovariance is a consistent estimator of the autocovariance, and asymptotically stable with tail index a/2. Stable laws and
processes, and the theory of regular variation, are comprehensively treated in Feller (1971) and Meerschaert and Scheffler (2001), see
also Samorodnitsky and Taqqu (1994). Time series with infinite fourth moments are often seen in natural river flows, see, for example,
Anderson and Meerschaert (1998).
The main results of this article, and their relation to previous published results, are as follows. The innovations algorithm can be
used to estimate parameters of a non-stationary time series model, based on the infinite order moving average representation (2).
Anderson et al. (1999) proved consistency of the innovations algorithm estimates for the infinite order moving average parameters $\psi_t(j)$. In the finite fourth moment case, Anderson and Meerschaert (2005) developed the asymptotics necessary to determine which of
these parameter estimates are statistically different from zero. Anderson et al. (2008) extended those results to the infinite fourth
moment case. Theorem 1 in this article reformulates those results in a manner suitable for the application to discrete Fourier transform (DFT) asymptotics. To obtain estimates of the auto-regressive parameters $\phi_t(j)$ and moving average parameters $\theta_t(j)$ of the PARMA process in eqn (1), it is typically necessary to solve a system of difference equations that relate these PARMA parameters back to the infinite order moving average representation. Section 3 discusses the general form of those difference equations, and develops some useful examples. The PARMA$_m(1,1)$ model (eqn 1) with $p = q = 1$ is an important example, relatively simple to analyse, yet sufficiently flexible to handle many practical applications (e.g. see Anderson and Meerschaert, 1998; Anderson et al., 2007). Theorem 2 provides asymptotics for the Yule–Walker estimates, and Theorem 3 gives the asymptotics of the innovations estimates, for the PARMA$_m(1,1)$ model. These asymptotic results can be used for model identification, to determine which seasons have non-zero PARMA coefficients in the model. For high-resolution data (e.g. weekly or daily data), even a first-order PARMA model can involve numerous parameters, which can lead to over-fitting. This can be overcome using the DFT. Theorem 5 gives DFT asymptotics for the infinite order moving average parameters $\psi_t(j)$. For the PARMA$_m(1,1)$ model, Theorems 7 and 8 provide DFT asymptotics for the autoregressive parameters and the moving average parameters, respectively. These results can be used to determine which Fourier frequencies need to be retained in a DFT model for the PARMA$_m(1,1)$ parameters. Section 6 briefly reviews results from Anderson et al. (2007), where results of this article were used to work out several practical applications. Theorems 1, 7 and 8 were stated in Anderson et al. (2007) without proof.
2. THE INNOVATIONS ALGORITHM
The innovations algorithm (Brockwell and Davis, 1991, Propn 5.2.2) was adapted by Anderson et al. (1999) to yield parameter
estimates for PARMA models. Since the parameters are seasonally dependent, there is a notational difference between the
innovations algorithm for PARMA processes and that for ARMA processes (compare Brockwell and Davis, 1991). We introduce this
difference through the 'season' $i$. For monthly data, we have $m = 12$ seasons and our convention is to let $i = 0$ represent the first month, $i = 1$ represent the second, …, and $i = m - 1 = 11$ represent the last.
Let $\hat{X}_{i+k}^{(i)} = P_{H_{k,i}} X_{i+k}$ denote the one-step predictors, where $H_{k,i} = \mathrm{sp}\{X_i, \ldots, X_{i+k-1}\}$, the span of the data vector starting at season $0 \leq i \leq m-1$, $k \geq 1$, and $P_{H_{k,i}}$ is the orthogonal projection onto this space, which minimizes the mean-squared error
$$v_{k,i} = \|X_{i+k} - \hat{X}_{i+k}^{(i)}\|^2 = E(X_{i+k} - \hat{X}_{i+k}^{(i)})^2.$$
Then,
$$\hat{X}_{i+k}^{(i)} = \phi_{k,1}^{(i)} X_{i+k-1} + \cdots + \phi_{k,k}^{(i)} X_i, \quad k \geq 1, \tag{4}$$
where the vector of coefficients $\phi_k^{(i)} = (\phi_{k,1}^{(i)}, \ldots, \phi_{k,k}^{(i)})'$ solves the prediction equations
$$\Gamma_{k,i}\, \phi_k^{(i)} = \gamma_k^{(i)} \tag{5}$$
with $\gamma_k^{(i)} = (\gamma_{i+k-1}(1), \gamma_{i+k-2}(2), \ldots, \gamma_i(k))'$ and
$$\Gamma_{k,i} = [\gamma_{i+k-\ell}(\ell - m)]_{\ell,m=1,\ldots,k} \tag{6}$$
is the covariance matrix of $(X_{i+k-1}, \ldots, X_i)'$ for each $i = 0, \ldots, m-1$. Let
$$\hat{\gamma}_i(\ell) = N^{-1} \sum_{j=0}^{N-1} X_{jm+i}\, X_{jm+i+\ell} \tag{7}$$
denote the (uncentered) sample autocovariance, where $X_t = \tilde{X}_t - \mu_t$. If we replace the autocovariances in the prediction eqn (5) with their corresponding sample autocovariances, we obtain the estimator $\hat{\phi}_{k,j}^{(i)}$ of $\phi_{k,j}^{(i)}$.
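As an illustration, the seasonal sample autocovariance in eqn (7) can be transcribed directly; this is a minimal sketch (the function name and interface are ours, not from the article):

```python
import numpy as np

def seasonal_acvf(x, m, ell, i):
    """Uncentered sample autocovariance gamma_hat_i(ell) of eqn (7):
    the average of X_{jm+i} * X_{jm+i+ell} over the N complete periods,
    where x is the mean-corrected series and m is the period."""
    x = np.asarray(x, dtype=float)
    N = len(x) // m  # number of complete periods (e.g. years)
    s = 0.0
    for j in range(N):
        t = j * m + i
        if t + ell < len(x):  # guard the last incomplete lag products
            s += x[t] * x[t + ell]
    return s / N
```

Replacing the autocovariances in eqn (5) with these estimates yields the Yule–Walker estimators discussed below.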
As the scalar-valued process $X_t$ is non-stationary, the Durbin–Levinson algorithm (Brockwell and Davis, 1991, Propn 5.2.1) for computing $\hat{\phi}_{k,j}^{(i)}$ does not apply. However, the innovations algorithm still applies to a non-stationary process. Writing
$$\hat{X}_{i+k}^{(i)} = \sum_{j=1}^{k} \theta_{k,j}^{(i)} \big( X_{i+k-j} - \hat{X}_{i+k-j}^{(i)} \big) \tag{8}$$
yields the one-step predictors in terms of the innovations $X_{i+k-j} - \hat{X}_{i+k-j}^{(i)}$. Lund and Basawa (1999, Propn 4) show that if $\sigma_i^2 > 0$ for $i = 0, \ldots, m-1$, then for a causal PARMA$_m(p,q)$ process, the covariance matrix $\Gamma_{k,i}$ is non-singular for every $k \geq 1$ and each $i$. Anderson et al. (1999) show that if $EX_t = 0$ and $\Gamma_{k,i}$ is non-singular for each $k \geq 1$, then the one-step predictors $\hat{X}_{i+k}^{(i)}$ starting at season $i$ and their mean-square errors $v_{k,i}$ are given by
$$\begin{aligned} v_{0,i} &= \gamma_i(0), \\ \theta_{k,k-\ell}^{(i)} &= (v_{\ell,i})^{-1} \Big[ \gamma_{i+\ell}(k-\ell) - \sum_{j=0}^{\ell-1} \theta_{\ell,\ell-j}^{(i)}\, \theta_{k,k-j}^{(i)}\, v_{j,i} \Big], \\ v_{k,i} &= \gamma_{i+k}(0) - \sum_{j=0}^{k-1} \big(\theta_{k,k-j}^{(i)}\big)^2 v_{j,i}, \end{aligned} \tag{9}$$
where eqn (9) is solved in the order $v_{0,i}$, $\theta_{1,1}^{(i)}$, $v_{1,i}$, $\theta_{2,2}^{(i)}$, $\theta_{2,1}^{(i)}$, $v_{2,i}$, $\theta_{3,3}^{(i)}$, $\theta_{3,2}^{(i)}$, $\theta_{3,1}^{(i)}$, $v_{3,i}$, and so forth. The results in Anderson et al. (1999) show that
$$\theta_{k,j}^{(\langle i-k\rangle)} \to \psi_i(j), \qquad v_{k,\langle i-k\rangle} \to \sigma_i^2, \qquad \phi_{k,j}^{(\langle i-k\rangle)} \to \pi_i(j), \tag{10}$$
for all $i,j$, where $\langle t\rangle$ is the season corresponding to index $t$, so that $\langle jm + i\rangle = i$.
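The recursion in eqn (9) is straightforward to implement. The sketch below (names and interface ours) is parameterized by an autocovariance function, so it can run on either the true autocovariances or the sample autocovariances of eqn (7):

```python
def innovations_parma(gamma, i, kmax):
    """Innovations recursion of eqn (9), starting at season i.
    gamma(t, h) must return the periodic autocovariance gamma_t(h)
    (or its sample estimate).  Returns theta[k][j] ~ theta^{(i)}_{k,j}
    and v[k] ~ v_{k,i}, solved in the order stated after eqn (9)."""
    theta = [[0.0] * (kmax + 1) for _ in range(kmax + 1)]
    v = [0.0] * (kmax + 1)
    v[0] = gamma(i, 0)
    for k in range(1, kmax + 1):
        for ell in range(k):  # yields theta_{k,k}, theta_{k,k-1}, ..., theta_{k,1}
            s = sum(theta[ell][ell - j] * theta[k][k - j] * v[j] for j in range(ell))
            theta[k][k - ell] = (gamma(i + ell, k - ell) - s) / v[ell]
        v[k] = gamma(i + k, 0) - sum(theta[k][k - j] ** 2 * v[j] for j in range(k))
    return theta, v
```

In the stationary special case ($m = 1$) this reduces to the classical innovations algorithm; for example, for an MA(1) autocovariance the estimates $\theta_{k,1}$ converge to the moving average coefficient and $v_k$ to the innovation variance, illustrating the convergence in eqn (10).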
If we replace the autocovariances in eqn (9) with the corresponding sample autocovariances in eqn (7), we obtain the innovations estimates $\hat{\theta}_{k,\ell}^{(i)}$ and $\hat{v}_{k,i}$. Similarly, replacing the autocovariances in eqn (5) with the corresponding sample autocovariances yields the Yule–Walker estimators $\hat{\phi}_{k,\ell}^{(i)}$. The consistency of these estimators was also established in Anderson et al. (1999).
Suppose that the PARMA process given by eqn (1) satisfies assumptions (i) through (iv) and that:

(v) The spectral density matrix $f(\lambda)$ of the equivalent vector ARMA process (Anderson and Meerschaert, 1997, p. 778) is such that for some $0 < m \leq M < \infty$, we have
$$m\, z'z \leq z' f(\lambda)\, z \leq M\, z'z, \qquad -\pi \leq \lambda \leq \pi,$$
for all $z$ in $\mathbb{R}^m$;
(vi) In the finite fourth moment case $E\varepsilon_t^4 < \infty$, we choose $k$ as a function of the sample size $N$ so that $k^2/N \to 0$ as $N \to \infty$ and $k \to \infty$. In the infinite fourth moment case, where the i.i.d. noise sequence $\delta_t = \sigma_t^{-1}\varepsilon_t$ is RV($\alpha$) for some $2 < \alpha < 4$, define
$$a_N = \inf\{x : P(|\delta_t| > x) < 1/N\}, \tag{11}$$
a regularly varying sequence with index $1/\alpha$ (see, e.g., Propn 6.1.37 in Meerschaert and Scheffler, 2001). Here, we choose $k$ as a function of the sample size $N$ so that $k^{5/2} a_N^2/N \to 0$ as $N \to \infty$ and $k \to \infty$.

Then, the results in Anderson et al. (1999) show that
$$\hat{\theta}_{k,j}^{(\langle i-k\rangle)} \xrightarrow{P} \psi_i(j), \qquad \hat{v}_{k,\langle i-k\rangle} \xrightarrow{P} \sigma_i^2, \qquad \hat{\phi}_{k,j}^{(\langle i-k\rangle)} \xrightarrow{P} \pi_i(j), \tag{12}$$
for all $i,j$, where "$\xrightarrow{P}$" denotes convergence in probability.
Suppose that the PARMA process given by eqn (1) satisfies assumptions (i) through (v) and that:

(vii) In the finite fourth moment case, we suppose that $k = k(N) \to \infty$ as $N \to \infty$ with $k^3/N \to 0$. In the infinite fourth moment case, we suppose $k^3 a_N^2/N \to 0$, where $a_N$ is defined by eqn (11).

Then, results in Anderson and Meerschaert (2005) and Anderson et al. (2008) show that for any fixed positive integer $D$, we have
$$N^{1/2} \big( \hat{\theta}_{k,u}^{(\langle i-k\rangle)} - \psi_i(u) : u = 1, \ldots, D;\ i = 0, \ldots, m-1 \big) \Rightarrow \mathcal{N}(0, W), \tag{13}$$
where
$$W = A\, \mathrm{diag}\big(\sigma_0^2 D^{(0)}, \ldots, \sigma_{m-1}^2 D^{(m-1)}\big)\, A', \tag{14}$$
$$A = \sum_{n=0}^{D-1} E_n P^{-n(D+1)},$$
$$D^{(i)} = \mathrm{diag}\big(\sigma_{i-1}^{-2}, \sigma_{i-2}^{-2}, \ldots, \sigma_{i-D}^{-2}\big),$$
$$E_n = \mathrm{diag}\big\{ \underbrace{0, \ldots, 0}_{n}, \underbrace{\psi_0(n), \ldots, \psi_0(n)}_{D-n}, \ldots, \underbrace{0, \ldots, 0}_{n}, \underbrace{\psi_{m-1}(n), \ldots, \psi_{m-1}(n)}_{D-n} \big\},$$
and $P$ an orthogonal $Dm \times Dm$ cyclic permutation matrix defined as
$$P = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 1 & 0 & 0 & \cdots & 0 \end{pmatrix}. \tag{15}$$
Note that $P^{-q} = P^{m-q}$, $P^0 = P^m = I$, and $P' = P^{m-1} = P^{-1}$.
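The identities just noted for the cyclic permutation matrix can be checked numerically; a small sketch (the helper name is ours):

```python
import numpy as np

def cyclic_P(m):
    """m x m cyclic permutation matrix of eqn (15): P[i, j] = 1 iff j = i+1 mod m,
    so right-multiplication by P shifts the columns of a matrix cyclically."""
    P = np.zeros((m, m))
    P[np.arange(m), (np.arange(m) + 1) % m] = 1.0
    return P
```

With this construction, $P^m = I$ and $P' = P^{m-1} = P^{-1}$, and $[MP]_{ij} = M_{i,j-1}$ (indices mod $m$), the shifting property used repeatedly in the proofs below.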
For the purposes of this article, it is useful to express the joint asymptotics of the innovations estimates as follows.

THEOREM 1. Let $X_t = \tilde{X}_t - \mu_t$, where $X_t$ is the periodic moving average process (eqn 2) and $\mu_t$ is a periodic mean function with period $m$. Suppose that (i) through (v) and (vii) hold. Letting $\hat{\psi}_i(\ell) = \hat{\theta}_{k,\ell}^{(\langle i-k\rangle)}$, for any non-negative integers $j$ and $h$ with $j \neq h$ we have
$$N^{1/2} \begin{pmatrix} \hat{\psi}(j) - \psi(j) \\ \hat{\psi}(h) - \psi(h) \end{pmatrix} \Rightarrow \mathcal{N}\left( 0, \begin{pmatrix} V_{jj} & V_{jh} \\ V_{hj} & V_{hh} \end{pmatrix} \right), \tag{16}$$
where $\hat{\psi}(\ell) = [\hat{\psi}_0(\ell), \hat{\psi}_1(\ell), \ldots, \hat{\psi}_{m-1}(\ell)]'$, $\psi(\ell) = [\psi_0(\ell), \psi_1(\ell), \ldots, \psi_{m-1}(\ell)]'$,
$$V_{jh} = \sum_{n=1}^{x} F_{j-n} P^{-(j-n)} B_n \big( F_{h-n} P^{-(h-n)} \big)', \tag{17}$$
with $x = \min(h,j)$, and
$$F_n = \mathrm{diag}\{\psi_0(n), \psi_1(n), \ldots, \psi_{m-1}(n)\}, \qquad B_n = \mathrm{diag}\{\sigma_0^2 \sigma_{-n}^{-2}, \sigma_1^2 \sigma_{1-n}^{-2}, \ldots, \sigma_{m-1}^2 \sigma_{m-1-n}^{-2}\}, \tag{18}$$
where $P$ is the orthogonal $m \times m$ cyclic permutation matrix (eqn 15).
PROOF. For a $p \times q$ matrix $M$, we will write $M_{ij}$ for its $ij$ entry, and we will write $M_i$ for the $ii$ entry of a diagonal matrix. We will also use modulo arithmetic to compute subscripts, so that $M_{i+p,j} = M_{i,j+q} = M_{ij}$. Since $P_{ij} = 1_{\{i=j-1\}}$, we have for any matrix $M$ that
$$[MP]_{ij} = \sum_k M_{ik} P_{kj} = \sum_k M_{ik}\, 1_{\{k=j-1\}} = M_{i,j-1}$$
and hence we also have $[MP^s]_{ij} = M_{i,j-s}$. Since $[P^{-1}]_{ij} = 1_{\{j=i-1\}}$, we also have
$$[P^{-1} M P]_{ij} = \sum_k 1_{\{k=i-1\}} M_{k,j-1} = M_{i-1,j-1},$$
so that $[P^{-t} M P^t]_{ij} = M_{i-t,j-t}$.

Equation (13) shows that the $Dm$-dimensional column vector $\hat{\psi}_S$ with entries grouped by season, $[\hat{\psi}_S]_{iD+\ell} = \hat{\psi}_i(\ell)$ for $0 \leq i \leq m-1$, $1 \leq \ell \leq D$, has asymptotic covariance matrix $W = A D_0 A'$, where $[D_0]_{iD+\ell} = \sigma_i^2 \sigma_{i-\ell}^{-2}$ and
$$A = \sum_{m=0}^{D-1} E_m P^{-m(D+1)},$$
where $[E_m]_{iD+\ell} = \psi_i(m)\, 1_{\{\ell>m\}}$. Define $D_m = P^{-m(D+1)} D_0 P^{m(D+1)}$ and $E_{\ell,t} = P^{-t(D+1)} E_\ell P^{t(D+1)}$. Then, $D_m$ and $E_{\ell,t}$ are diagonal matrices formed by a permutation of coordinates, and $D_m P^{-m(D+1)} = P^{-m(D+1)} D_0$, $E_{\ell,t} P^{-t(D+1)} = P^{-t(D+1)} E_\ell$. Since $(P^{-t})' = P^t$, we can write
$$\begin{aligned} W = A D_0 A' &= \sum_{m=0}^{D-1} E_m P^{-m(D+1)} D_0 \sum_{\ell=0}^{D-1} P^{\ell(D+1)} E_\ell \\ &= \sum_{m=0}^{D-1} \sum_{\ell=0}^{D-1} E_m D_m P^{-m(D+1)} P^{\ell(D+1)} E_\ell \\ &= \sum_{m=0}^{D-1} \sum_{\ell=0}^{D-1} E_m D_m P^{-(m-\ell)(D+1)} E_\ell \\ &= \sum_{m=0}^{D-1} \sum_{\ell=0}^{D-1} E_m D_m E_{\ell,m-\ell} P^{-(m-\ell)(D+1)}. \end{aligned} \tag{19}$$
Substitute $p = m - \ell$ to obtain
$$W = \sum_{p=1-D}^{D-1} G_p P^{-p(D+1)}, \qquad \text{where } G_p = \sum_m E_m D_m E_{m-p,p} \tag{20}$$
is a diagonal matrix and $m$ ranges over the set of $0 \leq m \leq D-1$ such that $0 \leq m-p \leq D-1$.

Note that $[G_p P^{-p(D+1)}]_{rs} = [G_p]_{r,s+p(D+1)}$ in eqn (20). Since $G_p$ is a diagonal matrix, it follows that $[G_p P^{-p(D+1)}]_{rs}$ can be non-zero only if $s = r - p(D+1)$. Hence, the only non-zero entries of the matrix $W$ are
$$W_{iD+j,\; iD+j-p(D+1)} = [G_p]_{iD+j}, \tag{21}$$
for integers $1-D \leq p \leq D-1$. Then, we can compute
$$\begin{aligned} [E_m]_{iD+j} &= \psi_i(m)\, 1_{\{j>m\}}, \\ [D_m]_{iD+j} &= [P^{-m(D+1)} D_0 P^{m(D+1)}]_{iD+j} = [D_0]_{iD+j-m(D+1)} = [D_0]_{(i-m)D+(j-m)} = \sigma_{i-m}^2 \sigma_{i-j}^{-2}, \\ [E_{m-p,p}]_{iD+j} &= [P^{-p(D+1)} E_{m-p} P^{p(D+1)}]_{iD+j} = [E_{m-p}]_{iD+j-p(D+1)} = \psi_{i-p}(m-p)\, 1_{\{j-p>m-p\}} = \psi_{i-p}(m-p)\, 1_{\{j>m\}}, \end{aligned} \tag{22}$$
so that the diagonal matrix $G_p$ has entries
$$[G_p]_{iD+j} = \sum_m [E_m D_m E_{m-p,p}]_{iD+j} = \sum_m \sigma_{i-m}^2 \sigma_{i-j}^{-2}\, \psi_i(m)\, \psi_{i-p}(m-p)\, 1_{\{j>m\}}, \tag{23}$$
where $m$ ranges over the set of $0 \leq m \leq D-1$ such that $0 \leq m-p \leq D-1$.

Since $j \leq D$, the condition $m < j$, equivalent to $m \leq j-1$, is stronger than $m \leq D-1$. Hence, $m$ ranges over the set of $0 \leq m-p \leq D-1$ such that $0 \leq m \leq j-1$.

Since $W$ is symmetric, it suffices to consider $0 \leq j \leq h$. Substitute $p = j - h$ to see that
$$[G_{j-h}]_{iD+j} = \sum_m \sigma_{i-m}^2 \sigma_{i-j}^{-2}\, \psi_i(m)\, \psi_{i-j+h}(m-j+h),$$
where $m$ ranges over the set of $0 \leq m \leq j-1$ such that $0 \leq m-j+h \leq D-1$. Since $D$ is arbitrary, we may take $D = h$. Then, the condition $0 \leq m-j+h \leq D-1$, equivalent to $j-D \leq m \leq D-1+j-D$, reduces to $j-D \leq m \leq j-1$. Since $j-D \leq 0$, this together with the remaining condition $0 \leq m \leq j-1$ shows that the only non-zero entries $W_{k\ell}$ with $\ell \geq k$ are of the form
$$[G_{j-h}]_{iD+j} = \sum_{m=0}^{j-1} \sigma_{i-m}^2 \sigma_{i-j}^{-2}\, \psi_i(m)\, \psi_{i-j+h}(m-j+h).$$
Substitute $n = j - m$ and use eqn (21) to arrive at
$$W_{iD+j,\; (i-j+h)D+h} = [G_{j-h}]_{iD+j} = \sum_{n=1}^{j} \sigma_{i-j+n}^2 \sigma_{i-j}^{-2}\, \psi_i(j-n)\, \psi_{i-j+h}(h-n), \tag{24}$$
for $0 \leq j \leq h$.
Now define a $Dm$-dimensional column vector $\hat{\psi}_L$ with entries grouped by lag: $[\hat{\psi}_L]_{(\ell-1)m+i+1} = \hat{\psi}_i(\ell)$. We want to characterize the variance–covariance matrix $V = C W C'$ of $\hat{\psi}_L$, where $C$ is the transition matrix such that $\hat{\psi}_L = C \hat{\psi}_S$. Then, $C_{r,\,iD+\ell} = 1_{\{r=(\ell-1)m+i+1\}}$. Write $C$ as the block matrix
$$C = \begin{pmatrix} C_{11} & C_{12} & \cdots & C_{1m} \\ C_{21} & C_{22} & \cdots & C_{2m} \\ \vdots & \vdots & & \vdots \\ C_{D1} & C_{D2} & \cdots & C_{Dm} \end{pmatrix},$$
where the submatrix $C_{jr}$ is $m \times D$. Since $[C_{jr}]_{hk} = [C]_{(j-1)m+h,\,(r-1)D+k}$, we see that $[C_{j,i+1}]_{h\ell} = [C]_{(j-1)m+h,\,iD+\ell}$ equals zero unless $j = \ell$ and $h = i+1$. This shows that $C_{jr} = J_{rj}$, the $m \times D$ indicator matrix with $[J_{rj}]_{st} = 1_{\{s=r,\,t=j\}}$. Write
$$W = \begin{pmatrix} W_{11} & W_{12} & \cdots & W_{1m} \\ W_{21} & W_{22} & \cdots & W_{2m} \\ \vdots & \vdots & & \vdots \\ W_{m1} & W_{m2} & \cdots & W_{mm} \end{pmatrix} \qquad \text{and} \qquad V = \begin{pmatrix} V_{11} & V_{12} & \cdots & V_{1D} \\ V_{21} & V_{22} & \cdots & V_{2D} \\ \vdots & \vdots & & \vdots \\ V_{D1} & V_{D2} & \cdots & V_{DD} \end{pmatrix},$$
where $W_{rs}$ is a $D \times D$ matrix and $V_{rs}$ is an $m \times m$ matrix. Block-matrix multiplication yields
$$V_{jh} = (CWC')_{jh} = \sum_{s=1}^{m} \sum_{r=1}^{m} C_{jr} W_{rs} C'_{hs}.$$
It is easy to check that indicator matrices have the property $J_{ij} A J'_{rs} = a_{jr} J_{is}$ for any matrix $A$ with $[A]_{ij} = a_{ij}$. Then, $C_{jr} W_{rs} C'_{hs} = J_{rj} W_{rs} J'_{sh} = [W_{rs}]_{jh} J_{rs}$. In other words, the $rs$ entry of $V_{jh}$ equals the $jh$ entry of $W_{rs}$. Then, from eqn (24) we obtain
$$[V_{jh}]_{i+1,\; i-j+h+1} = [W_{i+1,\; i-j+h+1}]_{jh} = \sum_{n=1}^{j} \sigma_{i-j+n}^2 \sigma_{i-j}^{-2}\, \psi_i(j-n)\, \psi_{i-j+h}(h-n), \tag{25}$$
for $0 \leq j \leq h$. Furthermore, $[V_{jh}]_{i+1,s+1} = [W_{i+1,s+1}]_{jh} = W_{iD+j,\; sD+h} = 0$ for all $s \neq i-j+h$, i.e. there is only one non-zero entry in each row of $V_{jh}$. Since $V$ is symmetric, this determines every entry of $V$.

Finally, we want to establish eqn (17). The diagonal matrices in eqn (18) are such that $[F_n]_{i+1} = \psi_i(n)$ and $[B_n]_{i+1} = \sigma_i^2 \sigma_{i-n}^{-2}$. Then,
$$\begin{aligned} [F_{j-n} P^{-(j-n)}]_{i+1,s} &= [F_{j-n}]_{i+1,\; s+j-n} = \psi_i(j-n)\, 1_{\{i+1=s+j-n\}}, \\ [B_n]_{st} &= \sigma_{s-1}^2 \sigma_{s-1-n}^{-2}\, 1_{\{t=s\}}, \\ [(F_{h-n} P^{-(h-n)})']_{tv} &= [F_{h-n} P^{-(h-n)}]_{vt} = \psi_{v-1}(h-n)\, 1_{\{v=t+h-n\}}, \end{aligned}$$
so that
$$[F_{j-n} P^{-(j-n)} B_n]_{i+1,t} = \sum_{s=1}^{m} \psi_i(j-n)\, 1_{\{i+1=s+j-n\}}\, \sigma_{s-1}^2 \sigma_{s-1-n}^{-2}\, 1_{\{t=s\}} = \sigma_{t-1}^2 \sigma_{t-1-n}^{-2}\, \psi_i(j-n)\, 1_{\{i+1=t+j-n\}}$$
and
$$[F_{j-n} P^{-(j-n)} B_n (F_{h-n} P^{-(h-n)})']_{i+1,v} = \sum_{t=1}^{m} \sigma_{t-1}^2 \sigma_{t-1-n}^{-2}\, \psi_i(j-n)\, 1_{\{i+1=t+j-n\}}\, \psi_{v-1}(h-n)\, 1_{\{v=t+h-n\}} = \sigma_{i-j+n}^2 \sigma_{i-j}^{-2}\, \psi_i(j-n)\, \psi_{i-j+h}(h-n)\, 1_{\{v=i-j+h+1\}}.$$
Then, a comparison with eqn (25) shows that eqn (17) holds. This completes the proof. □
COROLLARY 1. Under the assumptions of Theorem 1, we have
$$N^{1/2} \begin{pmatrix} \hat{\psi}_i(j) - \psi_i(j) \\ \hat{\psi}_k(\ell) - \psi_k(\ell) \end{pmatrix} \Rightarrow \mathcal{N}\left( 0, \begin{pmatrix} v_{ijij} & v_{ijk\ell} \\ v_{ijk\ell} & v_{k\ell k\ell} \end{pmatrix} \right), \tag{26}$$
for all $0 \leq i \leq m-1$ and $0 \leq j \leq \ell$, where
$$v_{ijk\ell} = \sum_{n=1}^{j} \sigma_{i-j+n}^2 \sigma_{i-j}^{-2}\, \psi_i(j-n)\, \psi_{i-j+\ell}(\ell-n),$$
if $k = i + \ell - j \bmod m$, and $v_{ijk\ell} = 0$ otherwise.
For a second-order stationary process, where the period $m = 1$, we have $\sigma_i^2 = \sigma^2$, and then substituting $m = j - n$ in Corollary 1 yields
$$N^{1/2}\big(\hat{\psi}(j) - \psi(j)\big) \Rightarrow \mathcal{N}\left( 0, \sum_{m=0}^{j-1} \psi(m)^2 \right), \tag{27}$$
which agrees with Thm 2.1 in Brockwell and Davis (1988).
3. DIFFERENCE EQUATIONS
For a PARMA$_m(p,q)$ model given by eqn (1), we develop a vector difference equation for the $\psi$-weights of the PARMA process so that we can determine feasible values of $p$ and $q$. For fixed $p$ and $q$ with $p + q = m$, assuming $m$ statistically significant values of $\psi_t(j)$, to determine the parameters $\phi_t(\ell)$ and $\theta_t(\ell)$, eqn (2) can be substituted into eqn (1) to obtain
$$\sum_{j=0}^{\infty} \psi_t(j)\, \varepsilon_{t-j} - \sum_{\ell=1}^{p} \phi_t(\ell) \sum_{j=0}^{\infty} \psi_{t-\ell}(j)\, \varepsilon_{t-\ell-j} = \varepsilon_t - \sum_{\ell=1}^{q} \theta_t(\ell)\, \varepsilon_{t-\ell} \tag{28}$$
and then the coefficients on both sides can be equated so as to calculate $\phi_t(\ell)$ and $\theta_t(\ell)$:
$$\begin{aligned} \psi_t(0) &= 1, \\ \psi_t(1) - \phi_t(1)\psi_{t-1}(0) &= -\theta_t(1), \\ \psi_t(2) - \phi_t(1)\psi_{t-1}(1) - \phi_t(2)\psi_{t-2}(0) &= -\theta_t(2), \\ \psi_t(3) - \phi_t(1)\psi_{t-1}(2) - \phi_t(2)\psi_{t-2}(1) - \phi_t(3)\psi_{t-3}(0) &= -\theta_t(3), \\ &\;\;\vdots \end{aligned} \tag{29}$$
where we take $\phi_t(\ell) = 0$ for $\ell > p$, and $\theta_t(\ell) = 0$ for $\ell > q$. The $\psi$-weights satisfy the homogeneous difference equations
$$\begin{cases} \psi_t(j) - \displaystyle\sum_{k=1}^{p} \phi_t(k)\, \psi_{t-k}(j-k) = 0, & j \geq \max(p, q+1), \\[2ex] \psi_t(j) - \displaystyle\sum_{k=1}^{j} \phi_t(k)\, \psi_{t-k}(j-k) = -\theta_t(j), & 0 \leq j < \max(p, q+1), \end{cases} \tag{30}$$
for $0 \leq t \leq m-1$. Defining
$$\begin{aligned} A_\ell &= \mathrm{diag}\{\phi_0(\ell), \phi_1(\ell), \ldots, \phi_{m-1}(\ell)\}, \\ \psi(j) &= (\psi_0(j), \psi_1(j), \ldots, \psi_{m-1}(j))', \\ \theta(j) &= (\theta_0(j), \theta_1(j), \ldots, \theta_{m-1}(j))', \\ \psi_{-k}(j-k) &= (\psi_{-k}(j-k), \psi_{1-k}(j-k), \ldots, \psi_{m-1-k}(j-k))', \end{aligned}$$
then eqn (30) leads to the vector difference equations
$$\begin{cases} \psi(j) - \displaystyle\sum_{k=1}^{p} A_k\, \psi_{-k}(j-k) = 0, & j \geq \max(p, q+1), \\[2ex] \psi(j) - \displaystyle\sum_{k=1}^{j} A_k\, \psi_{-k}(j-k) = -\theta(j), & 0 \leq j < \max(p, q+1), \end{cases} \tag{31}$$
where $\psi_t(0) = 1$.
Since $\psi_{-k}(j-k) = P^{-k}\psi(j-k)$, where $P$ is the orthogonal $m \times m$ cyclic permutation matrix given by eqn (15), it follows from eqn (31) that
$$\begin{cases} \psi(j) - \displaystyle\sum_{k=1}^{p} A_k P^{-k}\, \psi(j-k) = 0, & j \geq \max(p, q+1), \\[2ex] \psi(j) - \displaystyle\sum_{k=1}^{j} A_k P^{-k}\, \psi(j-k) = -\theta(j), & 0 \leq j < \max(p, q+1). \end{cases} \tag{32}$$
The vector difference eqn (32) can be helpful for the analysis of higher-order PARMA models using matrix algebra. The following
are special cases of eqn (32).
3.1. Periodic moving average
The periodic moving average process, denoted by PMA$_m(q)$, is obtained by setting $p = 0$ in eqn (1). The vector difference equation for this process, from eqn (32), is
$$\psi(j) = -\theta(j), \quad 0 \leq j \leq q; \qquad \psi(j) = 0, \quad j > q. \tag{33}$$
Then, Theorem 1 can be directly applied to identify the order of the PMA process via
$$N^{1/2}\big(\hat{\psi}(j) + \theta(j)\big) \Rightarrow \mathcal{N}(0, V_{jj}), \tag{34}$$
where $V_{jj}$ is obtained from eqn (17).
3.2. Periodic autoregressive processes
The periodic autoregressive process, denoted by PAR$_m(p)$, is obtained by setting $q = 0$ in eqn (1). The vector difference equation for this process given by eqn (32) is
$$\psi(j) = \sum_{k=1}^{p} A_k P^{-k}\, \psi(j-k), \quad j \geq p. \tag{35}$$
For the PAR$_m(1)$ with $X_t = \phi_t X_{t-1} + \varepsilon_t$, we have $\psi(1) = A_1 P^{-1} \psi(0) = \phi = (\phi_0, \phi_1, \ldots, \phi_{m-1})'$.
3.3. First order PARMA process
For higher-order PAR or PARMA models, it is difficult to obtain explicit solutions for $\phi(\ell)$ and $\theta(\ell)$; hence model identification is a complicated problem. However, for the PARMA$_m(1,1)$ model
$$X_t = \phi_t X_{t-1} + \varepsilon_t - \theta_t \varepsilon_{t-1}, \tag{36}$$
it is possible to solve directly in eqn (29) to obtain $\psi_t(0) = 1$ and
$$\theta_t = \phi_t - \psi_t(1), \tag{37}$$
$$\psi_t(2) = \phi_t\, \psi_{t-1}(1). \tag{38}$$
4. ASYMPTOTICS FOR PARMA PARAMETER ESTIMATES
Here, we will apply Theorem 1 to derive the asymptotic distribution of the autoregressive and moving average parameters in the PARMA$_m(1,1)$ model (eqn 36).

THEOREM 2. Under the assumptions of Theorem 1, we have
$$N^{1/2}(\hat{\phi} - \phi) \Rightarrow \mathcal{N}(0, Q), \tag{39}$$
where $\hat{\phi} = [\hat{\phi}_0, \hat{\phi}_1, \ldots, \hat{\phi}_{m-1}]'$, $\phi = [\phi_0, \phi_1, \ldots, \phi_{m-1}]'$ and the $m \times m$ matrix $Q$ is defined by
$$Q = \sum_{k,\ell=1}^{2} H_\ell V_{\ell k} H'_k, \tag{40}$$
where $V_{\ell k}$ is given by eqn (17), $H_1 = -F_2 P^{-1} F_1^{-2}$ and $H_2 = P^{-1} F_1^{-1} P$, with $P$ the $m \times m$ permutation matrix (eqn 15) and $F_n$ from eqn (18).
PROOF. We will use a continuous mapping argument (Brockwell and Davis, 1991, Propn 6.4.3): we say that a sequence of random vectors $X_n$ is AN$(\mu_n, c_n^2 \Sigma)$ if $c_n^{-1}(X_n - \mu_n) \Rightarrow \mathcal{N}(0, \Sigma)$, where $\Sigma$ is a symmetric non-negative definite matrix and $c_n \to 0$ as $n \to \infty$. If $X_n$ is AN$(\mu, c_n^2 \Sigma)$ and $g(x) = (g_1(x), \ldots, g_m(x))'$ is a mapping from $\mathbb{R}^k$ into $\mathbb{R}^m$ such that each $g_i(\cdot)$ is continuously differentiable in a neighborhood of $\mu$, and if $D\Sigma D'$ has all of its diagonal elements non-zero, where $D$ is the $m \times k$ matrix $[(\partial g_i/\partial x_j)(\mu)]$, then $g(X_n)$ is AN$(g(\mu), c_n^2 D\Sigma D')$.
Theorem 1 with $j = 1$ and $h = 2$ yields that $X_n = (\hat{\psi}(1), \hat{\psi}(2))'$ is AN$(\mu, N^{-1}V)$ with $\mu = (\psi(1), \psi(2))'$ and
$$V = \begin{pmatrix} V_{11} & V_{12} \\ V_{21} & V_{22} \end{pmatrix},$$
where $V_{\ell k}$ is given by eqn (17). Apply the continuous mapping $g(\mu) = \phi$ to see that eqn (39) holds with $Q = HVH'$, where $H$ is the $m \times 2m$ matrix of partial derivatives
$$H = (H_1, H_2) = \left( \left[ \frac{\partial \phi_{\ell-1}}{\partial \psi_{m-1}(1)} \right], \left[ \frac{\partial \phi_{\ell-1}}{\partial \psi_{m-1}(2)} \right] \right)_{\ell,m=1,\ldots,m}, \tag{41}$$
which we now compute. Use $\phi_\ell = \psi_\ell(2)/\psi_{\ell-1}(1)$ from eqn (38) to compute
$$[H_1]_{\ell m} = \frac{\partial}{\partial \psi_{m-1}(1)} \frac{\psi_{\ell-1}(2)}{\psi_{\ell-2}(1)} = -\psi_m(2)\, \psi_{m-1}(1)^{-2}\, 1_{\{m=\ell-1\}}$$
and recall from eqn (18) that $[F_n]_{ij} = \psi_{i-1}(n)\, 1_{\{j=i\}}$. Since $[P^{-1}]_{ij} = 1_{\{j=i-1\}}$, we have
$$[P^{-1} F_1^{-2}]_{im} = \sum_{k=1}^{m} 1_{\{k=i-1\}}\, \psi_{k-1}(1)^{-2}\, 1_{\{k=m\}} = \psi_{m-1}(1)^{-2}\, 1_{\{m=i-1\}}$$
and then
$$[F_2 P^{-1} F_1^{-2}]_{\ell m} = \sum_{i=1}^{m} \psi_{\ell-1}(2)\, 1_{\{\ell=i\}}\, \psi_{m-1}(1)^{-2}\, 1_{\{m=i-1\}} = \psi_m(2)\, \psi_{m-1}(1)^{-2}\, 1_{\{m=\ell-1\}},$$
which shows that $H_1 = -F_2 P^{-1} F_1^{-2}$. Since
$$[H_2]_{\ell m} = \frac{\partial}{\partial \psi_{m-1}(2)} \frac{\psi_{\ell-1}(2)}{\psi_{\ell-2}(1)} = \psi_{m-2}(1)^{-1}\, 1_{\{m=\ell\}},$$
and recalling that $[P^{-1}MP]_{ij} = M_{i-1,j-1}$, we also have $H_2 = P^{-1} F_1^{-1} P$. □

COROLLARY 2.
Regarding Theorem 2, in particular, we have that
$$N^{1/2}(\hat{\phi}_i - \phi_i) \Rightarrow \mathcal{N}(0, w_{\phi_i}^2), \tag{42}$$
for $0 \leq i \leq m-1$, where
$$w_{\phi_i}^2 = \psi_{i-1}(1)^{-4}\, \sigma_{i-2}^{-2} \left\{ \psi_i(2)^2 \sigma_{i-1}^2 - 2\psi_i(2)\psi_i(1)\psi_{i-1}(1)\sigma_{i-1}^2 + \psi_{i-1}(1)^2 \sum_{n=0}^{1} \sigma_{i-n}^2\, \psi_i(n)^2 \right\}. \tag{43}$$
PROOF. We need to compute the matrix $Q$ in eqn (40). From eqns (17) and (18), we obtain $V_{11} = B_1$ and $[B_n]_{mj} = \sigma_{m-1}^2 \sigma_{m-1-n}^{-2}\, 1_{\{j=m\}}$. Then,
$$[H_1 V_{11}]_{\ell j} = -\sum_{m=1}^{m} \frac{\psi_m(2)}{\psi_{m-1}(1)^2}\, 1_{\{m=\ell-1\}}\, \frac{\sigma_{m-1}^2}{\sigma_{m-2}^2}\, 1_{\{j=m\}} = -\frac{\psi_j(2)\, \sigma_{j-1}^2}{\psi_{j-1}(1)^2\, \sigma_{j-2}^2}\, 1_{\{j=\ell-1\}}$$
and
$$[H_1 V_{11} H'_1]_{\ell k} = \sum_{j=1}^{m} \frac{\psi_j(2)\, \sigma_{j-1}^2}{\psi_{j-1}(1)^2\, \sigma_{j-2}^2}\, 1_{\{j=\ell-1\}}\, \frac{\psi_j(2)}{\psi_{j-1}(1)^2}\, 1_{\{j=k-1\}} = \frac{\psi_{k-1}(2)^2\, \sigma_{k-2}^2}{\psi_{k-2}(1)^4\, \sigma_{k-3}^2}\, 1_{\{k=\ell\}},$$
so that $H_1 V_{11} H'_1$ is a diagonal matrix. Next, note that $[B_1 P]_{ij} = [B_1]_{i,j-1} = \sigma_{i-1}^2 \sigma_{i-2}^{-2}\, 1_{\{j-1=i\}}$ so that in view of eqn (17), we have
$$[V_{12}]_{ij} = [B_1 P F_1]_{ij} = \sum_{k=1}^{m} \frac{\sigma_{i-1}^2}{\sigma_{i-2}^2}\, 1_{\{k-1=i\}}\, \psi_{k-1}(1)\, 1_{\{k=j\}} = \frac{\sigma_{i-1}^2\, \psi_{j-1}(1)}{\sigma_{i-2}^2}\, 1_{\{j-1=i\}},$$
so that
$$[H_1 V_{12}]_{\ell j} = -\sum_{i=1}^{m} \frac{\psi_i(2)}{\psi_{i-1}(1)^2}\, 1_{\{i=\ell-1\}}\, \frac{\sigma_{i-1}^2\, \psi_{j-1}(1)}{\sigma_{i-2}^2}\, 1_{\{j-1=i\}} = -\frac{\psi_{\ell-1}(2)\, \psi_{\ell-1}(1)\, \sigma_{\ell-2}^2}{\psi_{\ell-2}(1)^2\, \sigma_{\ell-3}^2}\, 1_{\{j=\ell\}}$$
and finally,
$$[H_1 V_{12} H'_2]_{\ell k} = -\sum_{j=1}^{m} \frac{\psi_{\ell-1}(2)\, \psi_{\ell-1}(1)\, \sigma_{\ell-2}^2}{\psi_{\ell-2}(1)^2\, \sigma_{\ell-3}^2}\, 1_{\{j=\ell\}}\, \psi_{j-2}(1)^{-1}\, 1_{\{k=j\}} = -\frac{\psi_{\ell-1}(2)\, \psi_{\ell-1}(1)\, \sigma_{\ell-2}^2}{\psi_{\ell-2}(1)^3\, \sigma_{\ell-3}^2}\, 1_{\{k=\ell\}},$$
so that $H_1 V_{12} H'_2$ is another diagonal matrix. Then, $H_2 V_{21} H'_1 = (H_1 V_{12} H'_2)'$ is the same matrix. Lastly, note that since $[P^{-1} B_1 P]_{ij} = [B_1]_{i-1,j-1} = \sigma_{i-2}^2 \sigma_{i-3}^{-2}\, 1_{\{j=i\}}$, and $B_2$, $F_1$ and $H_2$ are all diagonal matrices, it follows from the definition (17) that
$$[V_{22}]_{\ell k} = [B_2 + F_1 P^{-1} B_1 P F_1]_{\ell k} = \left( \frac{\sigma_{\ell-1}^2}{\sigma_{\ell-3}^2} + \frac{\sigma_{\ell-2}^2\, \psi_{\ell-1}(1)^2}{\sigma_{\ell-3}^2} \right) 1_{\{k=\ell\}}$$
and
$$[H_2 V_{22} H'_2]_{\ell k} = \frac{1}{\psi_{\ell-2}(1)^2} \left( \frac{\sigma_{\ell-1}^2}{\sigma_{\ell-3}^2} + \frac{\sigma_{\ell-2}^2\, \psi_{\ell-1}(1)^2}{\sigma_{\ell-3}^2} \right) 1_{\{k=\ell\}}$$
is also diagonal. Then, it follows from eqn (40) that $Q$ is a diagonal matrix with entries $[Q]_{i+1} = w_{\phi_i}^2$ given by eqn (43). □

THEOREM 3.
Under the assumptions of Theorem 1, we have
$$N^{1/2}(\hat{\theta} - \theta) \Rightarrow \mathcal{N}(0, S), \tag{44}$$
where $\hat{\theta} = [\hat{\theta}_0, \hat{\theta}_1, \ldots, \hat{\theta}_{m-1}]'$, $\theta = [\theta_0, \theta_1, \ldots, \theta_{m-1}]'$, and the $m \times m$ matrix $S$ is defined by
$$S = \sum_{k,\ell=1}^{2} M_\ell V_{\ell k} M'_k, \tag{45}$$
where $V_{\ell k}$ is given in eqn (17), $M_1 = -I - F_2 P^{-1} F_1^{-2}$ and $M_2 = P^{-1} F_1^{-1} P$. Here, $I$ is the $m \times m$ identity matrix, $P$ is the $m \times m$ permutation matrix (eqn 15), and $F_n$ is from eqn (18).
PROOF. Theorem 1 with $j = 1$ and $h = 2$ yields that $X_n = (\hat{\psi}(1), \hat{\psi}(2))'$ is AN$(\mu, N^{-1}V)$ with $\mu$, $V$ as in the proof of Theorem 2. Recall from eqn (37) that $\theta_t = \phi_t - \psi_t(1)$, and apply the continuous mapping $g(\mu) = \theta$ to see that eqn (44) holds with
$$S = MVM',$$
where $M$ is a $m \times 2m$ matrix of partial derivatives
$$M = (M_1, M_2) = \left( \left[ \frac{\partial \theta_{\ell-1}}{\partial \psi_{m-1}(1)} \right], \left[ \frac{\partial \theta_{\ell-1}}{\partial \psi_{m-1}(2)} \right] \right)_{\ell,m=1,\ldots,m}, \tag{46}$$
and then it follows immediately from eqns (37) and (41) that $M_1 = H_1 - I$ and $M_2 = H_2$. □

COROLLARY 3.
Regarding Theorem 3, in particular, we have that
$$N^{1/2}(\hat{\theta}_i - \theta_i) \Rightarrow \mathcal{N}(0, w_{\theta_i}^2), \tag{47}$$
for $0 \leq i \leq m-1$, where
$$w_{\theta_i}^2 = \sigma_i^2 \sigma_{i-1}^{-2} + \psi_{i-1}(1)^{-4}\, \sigma_{i-2}^{-2} \left\{ \psi_i(2)^2 \sigma_{i-1}^2 - 2\psi_i(2)\psi_i(1)\psi_{i-1}(1)\sigma_{i-1}^2 + \psi_{i-1}(1)^2 \sum_{n=0}^{1} \sigma_{i-n}^2\, \psi_i(n)^2 \right\} = \sigma_i^2 \sigma_{i-1}^{-2} + w_{\phi_i}^2. \tag{48}$$
PROOF. Since $M_1 = H_1 - I$ and $M_2 = H_2$, it follows from block matrix multiplication that $S = MVM' = HVH' + \tilde{S} = Q + \tilde{S}$, where
$$\tilde{S} = V_{11} - H_1 V_{11} - V_{11} H'_1 - H_2 V_{21} - V_{12} H'_2, \tag{49}$$
with $[V_{11}]_{\ell\ell} = \sigma_{\ell-1}^2 \sigma_{\ell-2}^{-2}$, and the remaining matrix terms on the right-hand side of eqn (49) have zero entries along the diagonal. Then, $[S]_{i+1,i+1} = \sigma_i^2 \sigma_{i-1}^{-2} + [Q]_{i+1,i+1}$, which reduces to $w_{\theta_i}^2$ in eqn (48). □
Using Corollaries 2 and 3, we can write the $(1-\alpha)100\%$ confidence intervals for $\phi_i$ and $\theta_i$ as
$$\big( \hat{\phi}_i - z_{\alpha/2} N^{-1/2} w_{\phi_i},\ \hat{\phi}_i + z_{\alpha/2} N^{-1/2} w_{\phi_i} \big),$$
$$\big( \hat{\theta}_i - z_{\alpha/2} N^{-1/2} w_{\theta_i},\ \hat{\theta}_i + z_{\alpha/2} N^{-1/2} w_{\theta_i} \big),$$
where $P(Z > z_\alpha) = \alpha$ for $Z \sim \mathcal{N}(0,1)$.
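These intervals can be computed with a few lines of code; a sketch (names ours), where `w` is a plug-in value of the asymptotic standard deviation $w_{\phi_i}$ or $w_{\theta_i}$:

```python
from statistics import NormalDist

def param_ci(est, w, N, alpha=0.05):
    """(1 - alpha)100% confidence interval (est -/+ z_{alpha/2} N^{-1/2} w)
    for a PARMA parameter estimate with asymptotic standard deviation w."""
    z = NormalDist().inv_cdf(1 - alpha / 2)  # z_{alpha/2}, e.g. 1.96 for alpha = 0.05
    half = z * w / N ** 0.5
    return est - half, est + half
```

Seasons whose interval covers zero are candidates for dropping the corresponding coefficient, which is the model identification use described above.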
5. ASYMPTOTICS FOR DISCRETE FOURIER TRANSFORMS
The PARMA$_m(p,q)$ model (eqn 1) has $(p+q+1)m$ total parameters. For example, for a monthly series ($m = 12$) with $p = q = 1$, there are 36 parameters. For a weekly series with $p = q = 1$, there are 156 parameters, representing three periodic functions with period $m = 52$. When the period $m$ is large, the authors have found that the model parameters often vary smoothly with time, and can therefore be explained by just a few non-zero discrete Fourier coefficients. In fact, increasing $m$ often makes the periodically varying parameter functions smoother (see, e.g., Anderson et al., 2007). The statistical basis for selecting the significant harmonics in the DFT
of the periodically varying model parameters in eqns (1) and (2) depends on the asymptotic distribution of the DFT coefficients. The
PARMA model parameters in eqn (1) can be expressed in terms of the infinite order moving average parameters in eqn (2), as shown
in Section 3. Hence, we begin by computing DFT asymptotics for parameter estimates obtained from the innovations algorithm.
5.1. Moving averages
Write the moving average parameters in eqn (2) at lag $j$ in the form
$$\psi_t(j) = c_0(j) + \sum_{r=1}^{k} \left[ c_r(j) \cos\left(\frac{2\pi r t}{m}\right) + s_r(j) \sin\left(\frac{2\pi r t}{m}\right) \right], \tag{50}$$
where $c_r(j)$ and $s_r(j)$ are the Fourier coefficients, $r$ is the harmonic and $k$ is the total number of harmonics, which is equal to $m/2$ or $(m-1)/2$ depending on whether $m$ is even or odd respectively. Write the vector of Fourier coefficients at lag $j$ in the form
$$f(j) = \begin{cases} \big( c_0(j), c_1(j), s_1(j), \ldots, c_{(m-1)/2}(j), s_{(m-1)/2}(j) \big)' & (m \text{ odd}) \\ \big( c_0(j), c_1(j), s_1(j), \ldots, s_{m/2-1}(j), c_{m/2}(j) \big)' & (m \text{ even}). \end{cases} \tag{51}$$
Similarly, define $\hat{f}(j)$ to be the vector of Fourier coefficients for the innovations estimates, defined by replacing $\psi_t(j)$ by $\hat{\psi}_t(j)$, $c_r(j)$ by $\hat{c}_r(j)$ and $s_r(j)$ by $\hat{s}_r(j)$ in eqns (50) and (51). We wish to describe the asymptotic distributional properties of these Fourier coefficients to determine those that are statistically significantly different from zero. These are the coefficients that will be included in our model.
To compute the asymptotic distribution of the Fourier coefficients, it is convenient to work with the complex DFT and its inverse
$$\psi_r^*(j) = m^{-1/2} \sum_{t=0}^{m-1} \exp\left( -\frac{2i\pi r t}{m} \right) \psi_t(j), \qquad \psi_t(j) = m^{-1/2} \sum_{r=0}^{m-1} \exp\left( \frac{2i\pi r t}{m} \right) \psi_r^*(j), \tag{52}$$
and similarly $\hat{\psi}_r^*(j)$ is the complex DFT of $\hat{\psi}_t(j)$. The complex DFT can also be written in matrix form. Recall from Theorem 1 the definitions $\hat{\psi}(\ell) = [\hat{\psi}_0(\ell), \hat{\psi}_1(\ell), \ldots, \hat{\psi}_{m-1}(\ell)]'$ and $\psi(\ell) = [\psi_0(\ell), \psi_1(\ell), \ldots, \psi_{m-1}(\ell)]'$ and similarly define
$$\hat{\psi}^*(j) = [\hat{\psi}_0^*(j), \hat{\psi}_1^*(j), \ldots, \hat{\psi}_{m-1}^*(j)]', \qquad \psi^*(j) = [\psi_0^*(j), \psi_1^*(j), \ldots, \psi_{m-1}^*(j)]', \tag{53}$$
noting that these are all $m$-dimensional vectors. Define a $m \times m$ matrix $U$ with complex entries
$$U = m^{-1/2} \left( e^{-\frac{i 2\pi r t}{m}} \right)_{r,t=0,1,\ldots,m-1}, \tag{54}$$
so that $\psi^*(j) = U\psi(j)$ and $\hat{\psi}^*(j) = U\hat{\psi}(j)$. This matrix form is useful because it is easy to invert. Obviously $\psi^*(j) = U\psi(j)$ is equivalent to $\psi(j) = U^{-1}\psi^*(j)$ since the matrix $U$ is invertible. This is what guarantees that there exists a unique vector of complex DFT coefficients $\psi^*(j)$ corresponding to any vector $\psi(j)$ of moving average parameters. However, in this case, the matrix $U$ is also unitary (i.e. $U\bar{U}' = I$), which means that $U^{-1} = \bar{U}'$, and the latter is easy to compute. Here, $\bar{U}$ denotes the matrix whose entries are the complex conjugates of the respective entries in $U$. Then, we also have $\psi(j) = \bar{U}'\psi^*(j)$, which is the matrix form of the second relation in eqn (52).
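The unitary matrix in eqn (54) is easy to construct and check numerically; a sketch (the helper name is ours):

```python
import numpy as np

def dft_matrix(m):
    """Unitary DFT matrix U of eqn (54): U[r, t] = m^{-1/2} exp(-2*pi*i*r*t/m)."""
    r, t = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    return np.exp(-2j * np.pi * r * t / m) / np.sqrt(m)
```

Multiplying by $U$ agrees with the standard FFT up to the $m^{-1/2}$ normalization, and $\bar{U}' U = I$ gives the inversion $\psi(j) = \bar{U}'\psi^*(j)$ used in the text.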
Next, we convert from the complex to the real DFT, and it is advantageous to do this in a way that also involves a unitary matrix. Define

    a_r(j) = 2^{−1/2} {ψ*_r(j) + ψ*_{m−r}(j)}     (r = 1, 2, ..., [(m−1)/2]),
    a_r(j) = ψ*_r(j)                              (r = 0 or m/2),   (55)
    b_r(j) = i 2^{−1/2} {ψ*_r(j) − ψ*_{m−r}(j)}   (r = 1, 2, ..., [(m−1)/2]),

and let

    e(j) = [a_0(j), a_1(j), b_1(j), ..., a_{(m−1)/2}(j), b_{(m−1)/2}(j)]′   (m odd)
         = [a_0(j), a_1(j), b_1(j), ..., b_{m/2−1}(j), a_{m/2}(j)]′        (m even),   (56)
and likewise for the coefficients of ψ̂_t(j). These relations (eqn 55) define another m × m matrix P with complex entries such that

    e(j) = PUψ(j) = Pψ*(j),   (57)

and it is not hard to check that P is also unitary, so that ψ*(j) = P^{−1}e(j) = P̄′e(j), the latter form being most useful for computations.

[J. Time Ser. Anal. 2011, 32, 157–174. © 2010 Blackwell Publishing Ltd. wileyonlinelibrary.com/journal/jtsa]
The DFT coefficients a_r(j) and b_r(j) are not the same as the coefficients c_r(j) and s_r(j) in eqn (50), but they are closely related. Substitute the first line of eqn (52) into eqn (55) and simplify to obtain

    a_r(j) = m^{−1/2} Σ_{t=0}^{m−1} cos(2πrt/m) ψ_t(j)   (r = 0 or m/2),
    a_r(j) = √(2/m) Σ_{t=0}^{m−1} cos(2πrt/m) ψ_t(j)     (r = 1, 2, ..., [(m−1)/2]),   (58)
    b_r(j) = √(2/m) Σ_{t=0}^{m−1} sin(2πrt/m) ψ_t(j)     (r = 1, 2, ..., [(m−1)/2]).
Inverting the relations (eqn 55) or, equivalently, using the matrix equation ψ*(j) = P̄′e(j), we obtain ψ*_r(j) = a_r(j) for r = 0 or m/2, and

    ψ*_r(j) = 2^{−1/2} {a_r(j) − i b_r(j)}   and   ψ*_{m−r}(j) = conj(ψ*_r(j)) = 2^{−1/2} {a_r(j) + i b_r(j)},

for r = 1, ..., k = [(m−1)/2]. Substitute these relations into the second expression in eqn (52) and simplify to obtain
    ψ_t(j) = m^{−1/2} a_0(j) + √(2/m) Σ_{r=1}^{k} { a_r(j) cos(2πrt/m) + b_r(j) sin(2πrt/m) },

for m odd, and

    ψ_t(j) = m^{−1/2} { a_0(j) + (−1)^t a_k(j) } + √(2/m) Σ_{r=1}^{k−1} { a_r(j) cos(2πrt/m) + b_r(j) sin(2πrt/m) },

for m even, where k is the total number of harmonics, equal to m/2 or (m−1)/2 depending on whether m is even or odd, respectively. Comparison with eqn (50) reveals that
    c_r = √(2/m) a_r      (r = 1, 2, ..., [(m−1)/2]),
    c_r = m^{−1/2} a_r    (r = 0 or m/2),   (59)
    s_r = √(2/m) b_r      (r = 1, 2, ..., [(m−1)/2]).
Substituting into eqn (58) yields

    c_r(j) = m^{−1} Σ_{ν=0}^{m−1} cos(2πrν/m) ψ_ν(j)    (r = 0 or m/2),
    c_r(j) = 2m^{−1} Σ_{ν=0}^{m−1} cos(2πrν/m) ψ_ν(j)   (r = 1, 2, ..., [(m−1)/2]),   (60)
    s_r(j) = 2m^{−1} Σ_{ν=0}^{m−1} sin(2πrν/m) ψ_ν(j)   (r = 1, 2, ..., [(m−1)/2]),
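In code, the coefficients in eqn (60) are simple cosine and sine averages over one period; the following sketch (our own, with hypothetical input) also verifies the finite Fourier series reconstruction of eqn (50):

```python
import numpy as np

def fourier_coeffs(psi):
    """Real Fourier coefficients c_r, s_r of a length-m sequence, as in eqn (60)."""
    m = len(psi)
    t = np.arange(m)
    c, s = {}, {}
    for r in range(m // 2 + 1):
        scale = 1.0 / m if (r == 0 or 2 * r == m) else 2.0 / m
        c[r] = scale * np.sum(np.cos(2 * np.pi * r * t / m) * psi)
        if 0 < 2 * r < m:
            s[r] = (2.0 / m) * np.sum(np.sin(2 * np.pi * r * t / m) * psi)
    return c, s

# Round-trip check against the finite Fourier series, with m = 12.
psi = np.random.default_rng(2).normal(size=12)
c, s = fourier_coeffs(psi)
t = np.arange(12)
recon = c[0] + sum(c[r] * np.cos(2 * np.pi * r * t / 12)
                   + s.get(r, 0.0) * np.sin(2 * np.pi * r * t / 12)
                   for r in range(1, 7))
assert np.allclose(recon, psi)
```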
and likewise for the Fourier coefficients of ψ̂_t(j). Define the m × m diagonal matrix

    Λ = diag(m^{−1/2}, √(2/m), ..., √(2/m))             (m odd)
      = diag(m^{−1/2}, √(2/m), ..., √(2/m), m^{−1/2})   (m even),   (61)
so that, in view of eqn (59), we have f(j) = Λe(j) and f̂(j) = Λê(j). Substituting into eqn (57), we obtain

    f(j) = ΛPUψ(j)   and   f̂(j) = ΛPUψ̂(j).   (62)

THEOREM 4. For any positive integer j,

    N^{1/2} [f̂(j) − f(j)] ⇒ N(0, Σ_V),   (63)

where

    Σ_V = ΛPU V_{jj} Ū′P̄′Λ′.   (64)
PROOF. From Theorem 1, we have

    N^{1/2} [ψ̂(j) − ψ(j)] ⇒ N(0, V_{jj}),   (65)

where V_{jj} is given by eqn (17). Define B = ΛPU, so that f(j) = Bψ(j) and f̂(j) = Bψ̂(j) by eqn (62). Apply the continuous mapping theorem to obtain N^{1/2}[Bψ̂(j) − Bψ(j)] ⇒ N(0, B V_{jj} B′), or in other words N^{1/2}[f̂(j) − f(j)] ⇒ N(0, B V_{jj} B′). Although P and U are complex matrices, the product B = ΛPU is a real matrix, and therefore B′ = B̄′ = Ū′P̄′Λ′. Then eqns (63) and (64) follow, which finishes the proof. □
THEOREM 5. Let X_t be the periodically stationary infinite order moving average process (eqn 2). Then, under the null hypothesis that the process is stationary, with ψ_t(h) = ψ(h) and σ_t = σ, the elements of eqn (51) are asymptotically independent, with

    N^{1/2} {ĉ_ν(h) − λ_ν(h)} ⇒ N(0, m^{−1} g_V(h))    (ν = 0 or m/2),
    N^{1/2} {ĉ_ν(h) − λ_ν(h)} ⇒ N(0, 2m^{−1} g_V(h))   (ν = 1, 2, ..., [(m−1)/2]),   (66)
    N^{1/2} {ŝ_ν(h) − λ_ν(h)} ⇒ N(0, 2m^{−1} g_V(h))   (ν = 1, 2, ..., [(m−1)/2]),

for all h ≥ 1, where

    λ_ν(h) = ψ(h)   (ν = 0),
           = 0      (ν > 0),   (67)

and

    g_V(h) = Σ_{n=0}^{h−1} ψ²(n).   (68)
PROOF. Under the null hypothesis, ψ_t(h) = ψ(h) and σ_t = σ are constant in t for each h, and hence the F_n and B_n matrices in eqn (18) become, respectively, a scalar multiple of the identity matrix, F_n = ψ(n)I, and the identity matrix, B_n = I. Then, from eqn (17), using (P^t)′ = P^{−t}, we have

    V_{hh} = Σ_{n=1}^{h} ψ(h−n) P^{−(h−n)} ψ(h−n) P^{h−n} = Σ_{ν=0}^{h−1} ψ²(ν) I = g_V(h) I,

which is also a scalar multiple of the identity matrix. Hence, since scalar multiples of the identity matrix commute with any other matrix, we have from eqn (64) that PU V_{hh} Ū′P̄′ = V_{hh} PUŪ′P̄′ = V_{hh}, since P and U are unitary matrices (i.e. P̄′P = I and UŪ′ = I).
Then, in Theorem 4, we have

    N^{1/2} [f̂(h) − f(h)] ⇒ N(0, Σ_V),   (69)

where Σ_V = ΛPU V_{hh} Ū′P̄′Λ′ = V_{hh} ΛΛ′, so that

    Σ_V = g_V(h) diag(m^{−1}, 2m^{−1}, ..., 2m^{−1}, 2m^{−1})   (m odd)
        = g_V(h) diag(m^{−1}, 2m^{−1}, ..., 2m^{−1}, m^{−1})    (m even).

Under the null hypothesis, f(h) = [ψ(h), 0, ..., 0]′, and the theorem follows by considering the individual elements of the vector convergence (eqn 69). □
Theorem 5 can be used to test whether the coefficients in the infinite order moving average model (2) vary with the season. Suppose, for example, that m is odd. Then, under the null hypothesis that c_ν(h) and s_ν(h) are zero for all ν ≥ 1 and h ≠ 0, the statistics {ĉ_1(h), ŝ_1(h), ..., ĉ_{(m−1)/2}(h), ŝ_{(m−1)/2}(h)} form m − 1 independent and normally distributed random variables with mean zero and standard error (2m^{−1} ĝ_V(h)/N)^{1/2}. The Bonferroni α-level test rejects the null hypothesis that c_ν(h) and s_ν(h) are all zero if |Z_c(ν)| > z_{α′/2} or |Z_s(ν)| > z_{α′/2} for some ν = 1, ..., (m−1)/2, where

    Z_c(ν) = ĉ_ν(h) / (2m^{−1} ĝ_V(h)/N)^{1/2},   Z_s(ν) = ŝ_ν(h) / (2m^{−1} ĝ_V(h)/N)^{1/2},   (70)

ĝ_V(h) is given by eqn (68) with ψ(n) replaced by ψ̂(n), α′ = α/(m−1), and P(Z > z_α) = α for Z ~ N(0, 1). A similar formula holds when m is even, except that 2m^{−1} is replaced by m^{−1} when ν = 0 or m/2, in view of eqn (66). When α = 5% and m = 12, α′ = 0.05/11 = 0.0045, z_{α′/2} = z_{0.0023} = 2.84, and the null hypothesis is rejected when any |Z_{c,s}(ν)| > 2.84, indicating that at least one of the corresponding Fourier coefficients is statistically significantly different from zero. In that case, the moving average parameter at this lag should be represented by a periodic function in the infinite order moving average model (2).
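A sketch of this screening rule in Python (function and variable names are ours, not the article's; the normal quantile is computed by bisection to stay dependency-free):

```python
from math import erf, sqrt

def z_quantile(p):
    """Upper-tail standard normal quantile z_p (bisection, no SciPy needed)."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        tail = 0.5 * (1.0 - erf(mid / sqrt(2.0)))
        lo, hi = (mid, hi) if tail > p else (lo, mid)
    return 0.5 * (lo + hi)

def significant_harmonics(c_hat, s_hat, g_v_hat, N, m, alpha=0.05):
    """Harmonics nu whose |Z_c| or |Z_s| exceeds the Bonferroni cutoff of eqn (70)."""
    z_cut = z_quantile(alpha / (m - 1) / 2.0)
    se = sqrt(2.0 * g_v_hat / (m * N))       # standard error (2 m^-1 g_V / N)^(1/2)
    return [nu for nu in sorted(c_hat)
            if abs(c_hat[nu]) / se > z_cut or abs(s_hat.get(nu, 0.0)) / se > z_cut]

# Hypothetical estimates: only the first harmonic is really nonzero.
picked = significant_harmonics({1: 0.5, 2: 0.01}, {1: 0.0, 2: 0.0},
                               g_v_hat=1.0, N=400, m=12)
```

With these made-up inputs, only harmonic 1 survives the cutoff, so the lag-h moving average parameter would be modeled with a single periodic component.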
5.2. PARMA model parameters
For large m, it is often the case that the PARMA model parameters φ_t(ℓ), θ_t(ℓ) and σ_t vary smoothly with respect to t, and can therefore be captured by a few non-zero Fourier coefficients. For a PARMA_m(p,q) model, the Fourier representations of θ_t(ℓ), φ_t(ℓ) and σ_t can be written as
    θ_t(ℓ) = c_{a0}(ℓ) + Σ_{r=1}^{k} { c_{ar}(ℓ) cos(2πrt/m) + s_{ar}(ℓ) sin(2πrt/m) },
    φ_t(ℓ) = c_{b0}(ℓ) + Σ_{r=1}^{k} { c_{br}(ℓ) cos(2πrt/m) + s_{br}(ℓ) sin(2πrt/m) },   (71)
    σ_t    = c_{d0} + Σ_{r=1}^{k} { c_{dr} cos(2πrt/m) + s_{dr} sin(2πrt/m) }.

Here c_{ar}, c_{br}, c_{dr} and s_{ar}, s_{br}, s_{dr} are the Fourier coefficients, r is the harmonic, and k is the total number of harmonics, as in eqn (50). For instance, for a monthly series with m = 12, we have k = 6; for a weekly series with m = 52, k = 26; and for a daily series with m = 365, k = 182. In practice, a small number of harmonics, fewer than k, is used.
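For instance, with m = 12 and two harmonics, eqn (71) produces a full set of 12 seasonal parameters from only five Fourier coefficients (illustrative values, not fitted to any data):

```python
import numpy as np

m = 12
t = np.arange(m)

# Hypothetical Fourier coefficients: a mean level plus two harmonics.
c = {0: 0.6, 1: 0.2, 2: -0.05}   # c_{a0}, c_{a1}, c_{a2}
s = {1: 0.1, 2: 0.02}            # s_{a1}, s_{a2}

# theta_t = c_{a0} + sum_r [ c_{ar} cos(2 pi r t/m) + s_{ar} sin(2 pi r t/m) ]
theta_t = c[0] + sum(c[r] * np.cos(2 * np.pi * r * t / m)
                     + s[r] * np.sin(2 * np.pi * r * t / m) for r in (1, 2))

print(np.round(theta_t, 3))  # 12 smoothly varying MA parameters from 5 coefficients
```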
Fourier analysis of PARMA_m(p,q) models can be accomplished using the vector difference equations (eqn 32) to write the Fourier coefficients of the model parameters in terms of the DFT of ψ_t(j). This procedure is complicated, in general, by the need to solve the nonlinear system (eqn 32). Here we illustrate the general procedure by developing asymptotics of the DFT coefficients for a PARMA_m(1,1) model, using the relationships (37) and (38). Since first-order PARMA models are often sufficient to capture periodically varying behavior (see, e.g. Anderson and Meerschaert, 1998; Anderson et al., 2007), these results are also useful in their own right.
Consider again the PARMA_m(1,1) model given in eqn (36). To simplify notation, we will express the model parameters, along with their Fourier coefficients, in vector form. Let θ = [θ_0, θ_1, ..., θ_{m−1}]′, φ = [φ_0, φ_1, ..., φ_{m−1}]′ and σ = [σ_0, σ_1, ..., σ_{m−1}]′ be the vectors of PARMA_m(1,1) model parameters. These model parameters may be expressed in terms of their complex DFT coefficients θ*, φ* and σ* as follows:
    θ* = Uθ   and   θ = Ū′θ*,
    φ* = Uφ   and   φ = Ū′φ*,   (72)
    σ* = Uσ   and   σ = Ū′σ*,

where U is the m × m Fourier transform matrix defined in eqn (54) and

    θ* = [θ*_0, θ*_1, ..., θ*_{m−1}]′,
    φ* = [φ*_0, φ*_1, ..., φ*_{m−1}]′,
    σ* = [σ*_0, σ*_1, ..., σ*_{m−1}]′.
As in Theorem 4, let the vector forms of the transformed θ and φ be given by

    f_θ = ΛPθ* = ΛPUθ,   f_φ = ΛPφ* = ΛPUφ,   (73)

where

    f_θ = [c_{a0}, c_{a1}, s_{a1}, ..., c_{a,(m−1)/2}, s_{a,(m−1)/2}]′   (m odd)
        = [c_{a0}, c_{a1}, s_{a1}, ..., s_{a,m/2−1}, c_{a,m/2}]′        (m even),   (74)

    f_φ = [c_{b0}, c_{b1}, s_{b1}, ..., c_{b,(m−1)/2}, s_{b,(m−1)/2}]′   (m odd)
        = [c_{b0}, c_{b1}, s_{b1}, ..., s_{b,m/2−1}, c_{b,m/2}]′        (m even),   (75)
    c_{ar} = m^{−1} Σ_{ν=0}^{m−1} cos(2πrν/m) θ_ν    (r = 0 or m/2),
    c_{ar} = 2m^{−1} Σ_{ν=0}^{m−1} cos(2πrν/m) θ_ν   (r = 1, 2, ..., [(m−1)/2]),   (76)
    s_{ar} = 2m^{−1} Σ_{ν=0}^{m−1} sin(2πrν/m) θ_ν   (r = 1, 2, ..., [(m−1)/2]),

and
    c_{br} = m^{−1} Σ_{ν=0}^{m−1} cos(2πrν/m) φ_ν    (r = 0 or m/2),
    c_{br} = 2m^{−1} Σ_{ν=0}^{m−1} cos(2πrν/m) φ_ν   (r = 1, 2, ..., [(m−1)/2]),   (77)
    s_{br} = 2m^{−1} Σ_{ν=0}^{m−1} sin(2πrν/m) φ_ν   (r = 1, 2, ..., [(m−1)/2]),

and likewise for the Fourier coefficients of θ̂_ν and φ̂_ν. We wish to describe the asymptotic distributional properties of the elements of eqns (74) and (75).
THEOREM 6. Regarding the DFT of the PARMA_m(1,1) model coefficients in eqns (74) and (75), under the assumptions of Theorem 1, we have

    N^{1/2} [f̂_θ − f_θ] ⇒ N(0, Σ_S),   N^{1/2} [f̂_φ − f_φ] ⇒ N(0, Σ_Q),   (78)

where

    f_θ = ΛPθ* = ΛPUθ,   Σ_S = ΛPU S Ū′P̄′Λ′,
    f_φ = ΛPφ* = ΛPUφ,   Σ_Q = ΛPU Q Ū′P̄′Λ′,   (79)

with Q given by eqn (40) and S given by eqn (45).

PROOF. The proof is similar to that of Theorem 4. Apply the continuous mapping theorem along with Theorems 2 and 3. □
THEOREM 7. Let X_t be the mean-standardized PARMA_m(1,1) process (eqn 36), and suppose that the assumptions of Theorem 1 hold. Then, under the null hypothesis that X_t is stationary, with φ_t = φ, θ_t = θ and σ_t = σ, the elements of f̂_φ, defined by eqn (75) with c_{br} replaced by ĉ_{br} and s_{br} replaced by ŝ_{br}, are asymptotically independent, with

    N^{1/2} {ĉ_{bν} − λ_{bν}} ⇒ N(0, m^{−1} g_Q)    (ν = 0 or m/2),
    N^{1/2} {ĉ_{bν} − λ_{bν}} ⇒ N(0, 2m^{−1} g_Q)   (ν = 1, 2, ..., [(m−1)/2]),   (80)
    N^{1/2} {ŝ_{bν} − λ_{bν}} ⇒ N(0, 2m^{−1} g_Q)   (ν = 1, 2, ..., [(m−1)/2]),

where

    λ_{bν} = φ   (ν = 0),
           = 0   (ν > 0),   (81)

and

    g_Q = ψ^{−4}(1) { ψ²(2) ( 1 − 2ψ²(1)/ψ(2) ) + ψ²(1) Σ_{n=0}^{1} ψ²(n) },   (82)

with ψ(1) = φ − θ and ψ(2) = φψ(1).
PROOF. The proof follows along the same lines as that of Theorem 2, and hence we adopt the same notation. As in the proof of Theorem 5, we have B_n = I and F_n = ψ(n)I in eqn (18), and so eqn (17) implies (with x = min(h, j)):

    V_{jh} = Σ_{n=1}^{x} ψ(j−n) P^{−(j−n)} P^{h−n} ψ(h−n) = Σ_{n=1}^{x} ψ²(j−n) I   if j = h,

so

    V_{11} = ψ²(0) I = I,
    V_{22} = [ψ²(1) + ψ²(0)] I = [ψ²(1) + 1] I,
    V_{12} = ψ(0) P^{0} P^{1} ψ(1) = ψ(1) P,
    V_{21} = ψ(1) P^{−1} ψ(0) = ψ(1) P^{−1} = ψ(1) P′,
and

    Q = ( H_1  H_2 ) ( V_{11}  V_{12} ; V_{21}  V_{22} ) ( H_1  H_2 )′,

where V_{21} = V_{12}′, so that Q is symmetric. Since X_t is stationary, every ψ_ℓ(t) = ψ(t) in eqn (1), and so

    H_1 = −ψ(2) ψ^{−2}(1) P^{−1}   and   H_2 = ψ^{−1}(1) I,

so that
    Q = H_1 V_{11} H_1′ + H_2 V_{21} H_1′ + H_1 V_{12} H_2′ + H_2 V_{22} H_2′
      = ψ^{−4}(1) ψ²(2) I − ψ^{−2}(1) ψ(2) I − ψ^{−2}(1) ψ(2) I + ψ^{−2}(1) [ψ²(1) + 1] I
      = ψ^{−4}(1) { ψ²(2) − 2ψ²(1)ψ(2) + [ψ²(1) + 1] ψ²(1) } I.

So Q = g_Q I is actually a scalar multiple of the identity matrix I. Then PU Q Ū′P̄′ = Q PUŪ′P̄′ = Q, and hence Σ_Q = ΛQΛ′ = Q ΛΛ′, or in other words

    Σ_Q = g_Q diag(m^{−1}, 2m^{−1}, ..., 2m^{−1}, 2m^{−1})   (m odd)
        = g_Q diag(m^{−1}, 2m^{−1}, ..., 2m^{−1}, m^{−1})    (m even).

Under the null hypothesis, f_φ = [φ, 0, ..., 0]′, and the theorem follows by considering the individual elements of the vector convergence in the second line of eqn (78). □
THEOREM 8. Under the assumptions of Theorem 7, the elements of f̂_θ, defined by eqn (74) with c_{ar} replaced by ĉ_{ar} and s_{ar} replaced by ŝ_{ar}, are asymptotically independent, with

    N^{1/2} {ĉ_{aν} − λ_{aν}} ⇒ N(0, m^{−1} g_S)    (ν = 0 or m/2),
    N^{1/2} {ĉ_{aν} − λ_{aν}} ⇒ N(0, 2m^{−1} g_S)   (ν = 1, 2, ..., [(m−1)/2]),   (83)
    N^{1/2} {ŝ_{aν} − λ_{aν}} ⇒ N(0, 2m^{−1} g_S)   (ν = 1, 2, ..., [(m−1)/2]),

where

    λ_{aν} = θ   (ν = 0),
           = 0   (ν > 0),   (84)

and

    g_S = ψ^{−4}(1) { ψ²(2) ( 1 − 2ψ²(1)/ψ(2) ) + Σ_{j=1}^{2} ψ^{4/j}(1) Σ_{n=0}^{j−1} ψ²(n) },   (85)

with ψ(1) = φ − θ and ψ(2) = φψ(1).
PROOF. The proof follows along the same lines as that of Theorem 3, and hence we adopt the same notation. Note that S = Q + S̃, where

    S̃ = V_{11} − H_1 V_{11} − V_{11} H_1′ − H_2 V_{21} − V_{12} H_2′ = I,

so as in the proof of Theorem 7 we obtain Σ_S = ΛPU S Ū′P̄′Λ′ = S ΛΛ′, where

    S = Q + I = ψ^{−4}(1) { ψ²(2) − 2ψ²(1)ψ(2) + [ψ²(1) + 1] ψ²(1) + ψ⁴(1) } I = g_S I,

so that

    Σ_S = g_S diag(m^{−1}, 2m^{−1}, ..., 2m^{−1}, 2m^{−1})   (m odd)
        = g_S diag(m^{−1}, 2m^{−1}, ..., 2m^{−1}, m^{−1})    (m even).

Under the null hypothesis, f_θ = [θ, 0, ..., 0]′, and the theorem follows by considering the individual elements of the vector convergence in the first line of eqn (78). □
Theorems 7 and 8 can be used to test whether the coefficients in the first-order PARMA model (eqn 36) vary with the season. Under the null hypothesis that φ_t ≡ φ, the Bonferroni α-level test rejects if
    |ĉ_{bν}| / (k ĝ_Q / N)^{1/2} > z_{α′/2}   or   |ŝ_{bν}| / (k ĝ_Q / N)^{1/2} > z_{α′/2},   (86)

for some ν > 0, where α′ = α/(m−1), k = m^{−1} for ν = m/2, k = 2m^{−1} for ν = 1, 2, ..., [(m−1)/2], and

    ĝ_Q = ψ̂^{−4}(1) { ψ̂²(2) ( 1 − 2ψ̂²(1)/ψ̂(2) ) + ψ̂²(1) Σ_{n=0}^{1} ψ̂²(n) }.
Similarly, the Bonferroni α-level test rejects the null hypothesis that θ_t ≡ θ if

    |ĉ_{aν}| / (k ĝ_S / N)^{1/2} > z_{α′/2}   or   |ŝ_{aν}| / (k ĝ_S / N)^{1/2} > z_{α′/2},   (87)

for some ν > 0, where

    ĝ_S = ψ̂^{−4}(1) { ψ̂²(2) ( 1 − 2ψ̂²(1)/ψ̂(2) ) + Σ_{j=1}^{2} ψ̂^{4/j}(1) Σ_{n=0}^{j−1} ψ̂²(n) }.
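Under the stationary null hypothesis, the proof of Theorem 8 gives g_S = g_Q + 1, so both variance constants follow from the fitted parameters via ψ̂(1) = φ̂ − θ̂ and ψ̂(2) = φ̂ψ̂(1). A minimal sketch (our notation, hypothetical parameter values):

```python
def g_constants(phi, theta):
    """Variance constants g_Q (eqn 82) and g_S = g_Q + 1 (eqn 85) under the
    stationary null, where psi(1) = phi - theta and psi(2) = phi * psi(1)."""
    p1 = phi - theta          # psi(1)
    p2 = phi * p1             # psi(2)
    num = p2**2 - 2 * p1**2 * p2 + (p1**2 + 1) * p1**2
    g_q = num / p1**4
    return g_q, g_q + 1.0

# Hypothetical fitted values for a stationary ARMA(1,1).
g_q, g_s = g_constants(phi=0.7, theta=0.3)
```

Dividing each estimated Fourier coefficient by (k ĝ/N)^{1/2} then yields the test statistics of eqns (86) and (87).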
6. DISCUSSION
The results of this article were applied in Anderson et al. (2007) to a time series of average discharge for the Fraser River near Hope, British Columbia, Canada. The time series of monthly average flows was adequately fit by a PARMA12(1,1) model with 12 autoregressive and 12 moving average parameters. The DFT methods of this article were then used to identify eight statistically significant Fourier coefficients. Model diagnostics indicated a pleasing fit, and a reduction from 24 to 8 parameters. In fact, there were some indications that the original PARMA12(1,1) model suffered from 'overfitting' because of the large number of parameters. The weekly flow series at the same site was also considered. In that case, the DFT methods of this article reduced the number of autoregressive and moving average parameters in the PARMA52(1,1) model from 104 to 4. The parameters in the weekly model varied more smoothly, leading to fewer significant frequencies than for the monthly data. In fact, the weekly autoregressive parameters collapsed to a constant, as only the zero frequency was significant. In summary, the results in this article render the PARMA model a useful and practical method for modeling high-frequency time series data with significant seasonal variations in the underlying correlation structure.
Acknowledgements
Partially supported by NSF grants DMS-0803360 and EAR-0823965.
REFERENCES
Adams, G. J. and Goodwin, G. C. (1995) Parameter estimation for periodic ARMA models. Journal of Time Series Analysis 16(2), 127–45.
Anderson, P. L. and Meerschaert, M. M. (1997) Periodic moving averages of random variables with regularly varying tails. Annals of Statistics 25, 771–85.
Anderson, P. L. and Meerschaert, M. M. (1998) Modeling river flows with heavy tails. Water Resources Research 34(9), 2271–80.
Anderson, P. L. and Meerschaert, M. M. (2005) Parameter estimates for periodically stationary time series. Journal of Time Series Analysis 26, 489–518.
Anderson, P. L. and Vecchia, A. V. (1993) Asymptotic results for periodic autoregressive moving-average processes. Journal of Time Series Analysis 14,
1–18.
Anderson, P. L., Meerschaert, M. M. and Vecchia, A. V. (1999) Innovations algorithm for periodically stationary time series. Stochastic Processes and their
Applications 83, 149–69.
Anderson, P. L., Tesfaye, Y. G. and Meerschaert, M. M. (2007) Fourier-PARMA models and their application to river flows. Journal of Hydrologic
Engineering 12(7), 462–72.
Anderson, P. L., Kavalieris, L. and Meerschaert, M. M. (2008) Innovations algorithm asymptotics for periodically stationary time series with heavy tails.
Journal of Multivariate Analysis 99, 94–116.
Basawa, I. V., Lund, R. B. and Shao, Q. (2004) First-order seasonal autoregressive processes with periodically varying parameters. Statistics & Probability
Letters 67(4), 299–306.
Berk, K. (1974) Consistent autoregressive spectral estimates. Annals of Statistics 2, 489–502.
Bhansali, R. (1978) Linear prediction by autoregressive model fitting in the time domain. Annals of Statistics 6, 224–31.
Boshnakov, G. N. (1996) Recursive computation of the parameters of periodic autoregressive moving-average processes. Journal of Time Series Analysis
17(4), 333–49.
Box, G. E. and Jenkins, G. M. (1976) Time Series Analysis: Forecasting and Control, 2nd edn. San Francisco: Holden-Day.
Brockwell, P. J. and Davis, R. A. (1988) Simple consistent estimation of the coefficients of a linear filter. Stochastic Processes and their Applications 28,
47–59.
Brockwell, P. J. and Davis, R. A. (1991) Time Series: Theory and Methods, 2nd edn. New York: Springer-Verlag.
Feller, W. (1971) An Introduction to Probability Theory and its Applications, 2nd edn. New York: John Wiley & Sons.
Franses, P. H. and Paap, R. (2004) Periodic Time Series Models. Oxford: Oxford University Press.
Gautier, A. (2006) Asymptotic inefficiency of mean-correction on parameter estimation for a periodic first-order autoregressive model. Communication
in Statistics – Theory and Methods 35(11), 2083–106.
Hipel, K. W. and McLeod, A. I. (1994) Time Series Modelling of Water Resources and Environmental Systems. Amsterdam: Elsevier.
Jones, R. H. and Brelsford, W. M. (1967) Time series with periodic structure. Biometrika 54, 403–8.
Lund, R. B. (2006) A seasonal analysis of riverflow trends. Journal of Statistical Computation and Simulation 76(5), 397–405.
Lund, R. B. and Basawa, I. V. (1999) Modeling and inference for periodically correlated time series. In Asymptotics, Nonparametrics, and Time Series (ed. S. Ghosh). New York: Marcel Dekker, pp. 37–62.
Lund, R. B. and Basawa, I. V. (2000) Recursive prediction and likelihood evaluation for periodic ARMA models. Journal of Time Series Analysis 20(1),
75–93.
Meerschaert, M. M. and Scheffler, H. P. (2001) Limit Distributions for Sums of Independent Random Vectors: Heavy Tails in Theory and Practice. New York:
Wiley Interscience.
Nowicka-Zagrajek, J. and Wyłomańska, A. (2006) The dependence structure for PARMA models with α-stable innovations. Acta Physica Polonica B 37(11), 3071–81.
Pagano, M. (1978) On periodic and multiple autoregressions. Annals of Statistics 6(6), 1310–17.
Roy, R. and Saidi, A. (2008) Aggregation and systematic sampling of periodic ARMA processes. Computational Statistics and Data Analysis 52(9),
4287–304.
Salas, J. D., Delleur, J. W., Yevjevich, V. and Lane, W. L. (1980) Applied Modeling of Hydrologic Time Series. Littleton, Colorado: Water Resource
Publications.
Salas, J. D., Obeysekera, J. T. B. and Smith, R. A. (1981) Identification of streamflow stochastic models. ASCE Journal of the Hydraulics Division 107(7),
853–66.
Salas, J. D., Boes, D. C. and Smith, R. A. (1982) Estimation of ARMA models with seasonal parameters. Water Resources Research 18, 1006–10.
Salas, J. D., Tabios III, G. Q. and Bartolini, P. (1985) Approaches to multivariate modeling of water resources time series. Water Resources Bulletin 21,
683–708.
Samorodnitsky, G., and Taqqu, M. (1994) Stable non-Gaussian Random Processes. New York: Chapman & Hall.
Shao, Q. and Lund, R. B. (2004) Computation and characterization of autocorrelations and partial autocorrelations in periodic ARMA models. Journal of
Time Series Analysis 25(3), 359–72.
Tesfaye, Y. G., Meerschaert, M. M. and Anderson, P. L. (2005) Identification of PARMA models and their application to the modeling of river flows. Water
Resources Research 42(1), W01419.
Tiao, G. C. and Grupe, M. R. (1980) Hidden periodic autoregressive moving average models in time series data. Biometrika 67, 365–73.
Tjøstheim, D. and Paulsen, J. (1982) Empirical identification of multiple time series. Journal of Time Series Analysis 3, 265–82.
Thompstone, R. M., Hipel, K. W. and McLeod, A. I. (1985) Forecasting quarter-monthly riverflow. Water Resources Bulletin 25(5), 731–41.
Troutman, B. M. (1979) Some results in periodic autoregression. Biometrika 66, 219–28.
Ula, T. A. (1990) Periodic covariance stationarity of multivariate periodic autoregressive moving average processes. Water Resources Research 26(5),
855–61.
Ula, T. A. (1993) Forecasting of multivariate periodic autoregressive moving average processes. Journal of Time Series Analysis 14, 645–57.
Ula, T. A. and Smadi, A. A. (1997) Periodic stationarity conditions for periodic autoregressive moving average processes as eigenvalue problems. Water
Resources Research 33(8), 1929–34.
Ula, T. A. and Smadi, A. A. (2003) Identification of periodic moving average models. Communications in Statistics: Theory and Methods 32(12), 2465–75.
Vecchia, A. V. (1985a) Periodic autoregressive-moving average (PARMA) modelling with applications to water resources. Water Resources Bulletin 21,
721–30.
Vecchia, A. V. (1985b) Maximum likelihood estimation for periodic moving average models. Technometrics 27(4), 375–84.
Vecchia, A. V. and Ballerini, R. (1991) Testing for periodic autocorrelations in seasonal time series data. Biometrika 78(1), 53–63.
Wyłomańska, A. (2008) Spectral measures of PARMA sequences. Journal of Time Series Analysis 29(1), 1–13.