Testing for Parameter Stability in RCA(1) Time Series

Alexander Aue¹
Abstract: We utilize strong invariance principles to construct tests for the stability of
model parameters determining a random coefficient autoregressive time series of order one.
The test statistics are based on (conditional) least squares estimators for the unknown
parameters.
AMS 2000 Subject Classification: Primary 62G20, Secondary 60F15.
Keywords and Phrases: LS estimators, RCA time series, strong invariance principles.
¹ Department of Mathematics, University of Utah, 155 South 1440 East, Salt Lake City, UT 84112–0090, USA, email: aue@math.utah.edu. Partially supported by NATO grant PST.EAP.CLG 980599.
1  Introduction and review of existing results
In time series analysis, autoregressive (AR) and linear processes are widely used due
to their (relatively simple) mathematical tractability. Conditions for the existence of
stationary solutions are easily provided and estimation procedures are well established.
On the other hand, these processes often match real phenomena quite well at first glance (cf. the comprehensive book of Brockwell and Davis (1991) for a presentation of both theoretical results and a variety of applications, such as, e.g., the Wölfer sunspot numbers).
But it has been found that a variety of data samples cannot be modeled precisely by
assuming linearity. For example, financial data exhibits heteroscedasticity and biological
data suffers from random perturbations. Therefore, during the past two decades there
has been an increasing interest in non–linear time series. The introduction of ARCH
and GARCH processes by Engle (1982) and Bollerslev (1986), respectively, allows for modeling time–dependent (conditional) volatilities and, moreover, takes into account what are known as "stylized facts" (cf. Mikosch (2003)). So–called random coefficient autoregressive
(RCA) time series were introduced to study the above mentioned random perturbations of
dynamical systems (cf. Tong (1990), and Nicholls and Quinn (1982)). A striking example of their usefulness is the application of an RCA process of order two to the Canadian lynx
data, which outperforms AR time series of much higher order on the basis of mean squared
errors. Also, the model parameters involved have an evident biological interpretation (cf.
Nicholls and Quinn (1982)).
Bougerol and Picard (1992a,b) used a common starting point to derive necessary and
sufficient conditions for the existence of a unique strictly stationary solution for both
GARCH processes and sequences of random vectors satisfying a stochastic difference
equation (SDE), such as RCA time series. For the GARCH case, their work has been
substantially refined by Berkes et al. (2003).
Here, our interest is in providing asymptotic tests for the stability of the model parameters determining an RCA time series of order one, RCA(1). The tests are built on least squares estimators and rely on strong approximations for appropriately defined partial sums (cf. Sections 2 and 3 below). Hence, we focus on the one–dimensional SDE
X_n = (ϕ + b_n) X_{n−1} + e_n,    n ∈ Z,    (1.1)
where {(bn , en )}n∈Z is an iid sequence and ϕ a constant parameter. Aue et al. (2004)
restated necessary and sufficient conditions for the existence of a unique and strictly
stationary solution of (1.1) from Bougerol and Picard (1992a) and gave a slightly modified
and somewhat simpler proof. In particular, let
E log⁺|e_1| < ∞   and   E log⁺|ϕ + b_1| < ∞.    (1.2)
Here log⁺x = 0 if x ≤ 1 and log⁺x = log x if x > 1. If ϕ + b_1 is "less than one" on average in the sense that

E log|ϕ + b_1| < 0    (1.3)
(equivalently, one says the SDE (1.1) is contractive), then, for n ∈ Z,
X_n = Σ_{i=0}^∞ e_{n−i} Π_{j=0}^{i−1} (ϕ + b_{n−j})    (1.4)
converges absolutely with probability one, and {Xn }n∈Z defined by (1.4) is the unique
strictly stationary solution of the SDE (1.1). In particular, the invariant distribution of
the Markov chain associated with (1.1) is given by the distribution of the infinite sum in
(1.4). Conversely, if {Xn }n∈Z satisfies (1.1) and
P{λ_1 b_1 + λ_2 e_1 = λ_3} < 1   for all λ_1, λ_2, λ_3 ∈ R with (λ_1, λ_2) ≠ (0, 0),    (1.5)
then (1.3) holds true. (In the terminology of Bougerol and Picard (1992a), the last
inequality says that there is no invariant subspace of R² with dimension one.)
We close the discussion on the structure of RCA(1) time series with some remarks on
moments. The finiteness of moments of some order ν > 2 is a major ingredient to derive
strong approximations. Hence, it is desirable to have a criterion in terms of the noise sequences {b_n}_{n∈Z} and {e_n}_{n∈Z}. Aue et al. (2004) provided the following answer. If
E|e_1|^ν < ∞   and   E|ϕ + b_1|^ν < 1    (1.6)
for some ν ≥ 1, then necessarily also E|X_1|^ν < ∞. Note that from (1.6) we obtain 0 > log E|ϕ + b_1|^ν ≥ ν E log|ϕ + b_1| by Jensen's inequality, so a strictly stationary solution
exists. If ν is an even positive integer, the previous result is due to Feigin and Tweedie (1985), who proved a corresponding statement of Nicholls and Quinn (1982). Condition
(1.6) also follows as a special case from Ling (1999), who studied a more general random
coefficient model.
In this paper, we shall deal with the least squares estimators (LSEs) of the unknown
parameters determining the SDE (1.1) based on strong approximations for partial sums
related to these LSEs. From now on, we will hence assume that
(A) the noise sequences {b_n}_{n∈Z} and {e_n}_{n∈Z} are centered with finite second moments; in particular, we define Eb_1² = ω² ≥ 0 and Ee_1² = σ² > 0,

(B) for all n ∈ Z the random variables b_n and e_n are uncorrelated.
Under (A), ϕ² + ω² < 1 is equivalent to the existence of a unique second–order stationary solution of (1.1). Under the further assumption that {(b_n, e_n)}_{n∈Z} is iid, this solution is also strictly stationary (cf. Nicholls and Quinn (1982)). If ω² = 0, then {X_n}_{n∈Z} is simply an AR(1) time series with coefficient ϕ.
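To make the model concrete, the following sketch simulates a path of (1.1); the Gaussian noise distributions and all parameter values are our illustrative assumptions, chosen so that ϕ² + ω² < 1, and the Monte Carlo check of the contraction condition (1.3) is only a heuristic.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, omega2, sigma2 = 0.5, 0.2, 1.0   # assumed values with phi^2 + omega^2 < 1

def simulate_rca1(m, burn=500):
    """Simulate X_0, ..., X_m from X_n = (phi + b_n) X_{n-1} + e_n."""
    b = rng.normal(0.0, np.sqrt(omega2), size=m + burn + 1)
    e = rng.normal(0.0, np.sqrt(sigma2), size=m + burn + 1)
    x = np.zeros(m + burn + 1)
    for n in range(1, m + burn + 1):
        x[n] = (phi + b[n]) * x[n - 1] + e[n]
    return x[burn:]   # drop the burn-in so the path is close to stationarity

# Contraction condition (1.3), E log|phi + b_1| < 0, checked by Monte Carlo:
print(np.mean(np.log(np.abs(phi + rng.normal(0, np.sqrt(omega2), 10**6)))))

x = simulate_rca1(2000)
print(x.var(), sigma2 / (1 - phi**2 - omega2))   # empirical vs. theoretical EX_1^2
```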
The paper is organized as follows. In Section 2, we shall introduce asymptotic tests for the stability of the model parameters ϕ, ω² and σ². Our test statistic will be based
on a two–step (conditional) least squares estimation procedure, which has already been
investigated by Nicholls and Quinn (1982).
Lee et al. (2003) applied a weak invariance principle to obtain similar tests under the
null hypothesis of no change in the parameters. However, their approach requires ν = 16
moments in (1.6) and proofs are given under the assumption that the covariance matrix of
the approximating Wiener process is regular. Using strong approximations, the moment
condition can be relaxed to some κ > 8, which is minimal in our setting (cf. Theorems
2.1 and 2.2). Also, we will give sufficient conditions on b1 and e1 , which in turn imply the
invertibility of the corresponding covariance matrix. These conditions ensure, roughly, that these random variables are not concentrated on a "too small" support (cf. Lemma 4.1).
In addition, we study the behaviour of the test statistic under the alternative of a change
in some of the model parameters (cf. Theorems 2.3 and 2.4).
The proofs will rely on a strong invariance principle imposed on a partial sum vector,
whose components are related to the least squares estimators. The exact formulation can
be found in Section 3. Strong approximations have been used by Aue (2004a,b) to obtain
asymptotic tests for a change in the mean of random variables with RCA(1) innovations.
The proofs fall back on a result of Eberlein (1986), whose method applies here, too. See
Section 4 for details and all proofs.
2  Least squares estimation and testing for parameter stability
The unknown parameter vector determining the second–order behaviour of the relations (1.1) is θ = (ϕ, ω², σ²)^T, where x^T denotes the transpose of a vector x. There are
various ways to gain information on the true parameter values by statistical inference, for
instance
(i) the quasi maximum likelihood estimator (QMLE),
(ii) the minimum distance estimator (MDE) or
(iii) the least squares estimator (LSE).
Aue et al. (2004) dealt with case (i): they showed that under appropriate assumptions the QMLE is strongly consistent and satisfies the central limit theorem (CLT). Case
(ii) was investigated by Swaminathan and Naik–Nimbalkar (1997). The MDE is defined
by means of empirical characteristic functions (cf. Beran (1993)). Also, Swaminathan and
Naik–Nimbalkar (1997) gave conditions under which the MDE is strongly consistent and
satisfies the CLT.
Here, we are interested in method (iii). Assume we have observed X_0, …, X_m. The (conditional) LSE is based on a two–step procedure (cf. Nicholls and Quinn (1982)). At first,

ϕ̂_m = ( Σ_{i=1}^m X_{i−1}² )^{−1} Σ_{i=1}^m X_{i−1} X_i    (2.1)
can be calculated by minimizing the sum
Σ_{i=1}^m (X_i − ϕX_{i−1})²

with respect to ϕ. Observe that the conditional expectation of the latter sum given F_0 is zero. Define the sample mean of the squared observations by X̄_m² = m^{−1}(X_0² + … + X_{m−1}²). Note that X̄_m² is asymptotically equivalent to the sample variance. Plugging in ϕ̂_m in place of the unknown deterministic parameter ϕ, we obtain
ω̂_m² = ( Σ_{i=1}^m (X_{i−1}² − X̄_m²)² )^{−1} Σ_{i=1}^m (X_{i−1}² − X̄_m²)(X_i − ϕ̂_m X_{i−1})²,    (2.2)

σ̂_m² = m^{−1} Σ_{i=1}^m (X_i − ϕ̂_m X_{i−1})² − ω̂_m² X̄_m²    (2.3)
as minimizers (with respect to ω² and σ², respectively) of

Σ_{i=1}^m ( (X_i − ϕ̂_m X_{i−1})² − ω² X_{i−1}² − σ² )².

Recall that here E((X_i − ϕX_{i−1})² | F_{i−1}) = ω² X_{i−1}² + σ² for all i ∈ Z, and hence the conditional expectation (with respect to F_0) of the sum to be minimized vanishes at least asymptotically. Therein, F_i = σ(b_k, e_k : k ≤ i) for i ∈ N_0. Further properties of the LSE

θ̂_m = (ϕ̂_m, ω̂_m², σ̂_m²)^T,    (2.4)
such as strong consistency and the CLT are reported in Nicholls and Quinn (1982).
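As an illustration, a minimal sketch of the two–step procedure (2.1)–(2.3) might look as follows; the function name and the NumPy-based implementation are ours, not taken from the cited literature.

```python
import numpy as np

def lse_rca1(x):
    """Two-step (conditional) least squares estimates of (phi, omega^2, sigma^2)
    from observations x = (X_0, ..., X_m) of an RCA(1) series, following
    equations (2.1)-(2.3)."""
    x_past, x_now = x[:-1], x[1:]
    # Step 1: phi-hat from (2.1).
    phi_hat = np.sum(x_past * x_now) / np.sum(x_past**2)
    # Step 2: regress squared residuals on centered squared lags, cf. (2.2)-(2.3).
    resid2 = (x_now - phi_hat * x_past) ** 2
    xbar2 = np.mean(x_past**2)          # sample mean of the squared observations
    centered = x_past**2 - xbar2
    omega2_hat = np.sum(centered * resid2) / np.sum(centered**2)
    sigma2_hat = np.mean(resid2) - omega2_hat * xbar2
    return phi_hat, omega2_hat, sigma2_hat
```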
The following paragraph is devoted to testing the constancy of ϕ via the LSE ϕ̂m from
(2.1), i.e., we are interested in testing the null hypothesis of structural stability against
the alternative of a change somewhere in the observation period. The alternative will be
defined as follows: there exists a k* ∈ {1, …, m − 1}, the so–called change–point, such that {X_n}_{n∈Z} evolves according to (1.1) until k* − 1, while afterwards ϕ has to be replaced by a different parameter ϕ*, i.e.,

X_n = (ϕ + b_n) X_{n−1} + e_n,    n < k*,
X_n = (ϕ* + b_n) X_{n−1} + e_n,    n ≥ k*,
where the variance parameters ω 2 and σ 2 do not change. Similar tests of this kind have
also been proposed by Lee et al. (2003). Their asymptotic analysis, however, was based on
a weak invariance principle. Aue (2004a,b) investigated a change in the mean model with
RCA(1) time series as innovations using the strong approximation given at the beginning
of Section 3 for deriving the limit statements of the corresponding test statistic. For a
comprehensive review of applications of strong and weak approximations in the field of
change–point analysis confer Csörgő and Horváth (1997).
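For later experimentation, data under this alternative can be generated by a small variation of the simulation sketch from Section 1; again, the Gaussian noise and the concrete parameter values are merely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_change(m, k_star, phi, phi_star, omega2, sigma2, x0=0.0):
    """Path X_0, ..., X_m with coefficient phi before the change-point k_star
    and phi_star afterwards; omega^2 and sigma^2 stay fixed."""
    x = np.empty(m + 1)
    x[0] = x0
    for n in range(1, m + 1):
        coeff = phi if n < k_star else phi_star
        x[n] = (coeff + rng.normal(0, np.sqrt(omega2))) * x[n - 1] \
               + rng.normal(0, np.sqrt(sigma2))
    return x

x = simulate_change(m=1000, k_star=500, phi=0.3, phi_star=0.7,
                    omega2=0.1, sigma2=1.0)
```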
Now, ϕ̂m can be expressed in terms of partial sums, which will be shown to satisfy a
strong approximation (cf. Corollary 3.1). It holds,
ϕ̂_m − ϕ = ( Σ_{i=1}^m X_{i−1}² )^{−1} Σ_{i=1}^m X_i X_{i−1} − ( Σ_{i=1}^m X_{i−1}² )^{−1} Σ_{i=1}^m ϕ X_{i−1}²
      = ( Σ_{i=1}^m X_{i−1}² )^{−1} Σ_{i=1}^m X_{i−1}(X_i − ϕX_{i−1})
      = (m X̄_m²)^{−1} EX_1² T_{m1},

where T_{m1} is the first component of the partial sum vector T_m defined in (3.5) below.
Hence, a test statistic can be obtained by imitating the CUSUM procedure for a change
in the mean by comparing the estimators ϕ̂k and ϕ̂m or by using the functional version
U_m(t) = ⌊mt⌋ (ϕ̂_{⌊mt⌋} − ϕ̂_m),    t ∈ [0, 1],

where ⌊·⌋ denotes the integer part. We are able to prove the following theorem.
Theorem 2.1 (Asymptotic under the null hypothesis)
Let {Xn }n∈Z be an RCA(1) time series satisfying (A), (B) and let
E|e_1|^κ < ∞   and   E|ϕ + b_1|^κ < 1

for some κ > 4. If ϕ is constant in the observation period 0, 1, …, m, then

sup_{t∈[0,1]} |U_m(t)| / (√m σ_U) →_D sup_{t∈[0,1]} |B(t)|,    sup_{t∈[0,1]} U_m(t) / (√m σ_U) →_D sup_{t∈[0,1]} B(t),

as m → ∞, where →_D denotes convergence in distribution, {B(t)}_{t∈[0,1]} is a Brownian bridge and

σ_U² = (EX_1²)^{−2} (ω² EX_1⁴ + σ² EX_1²).
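In practice, the test of Theorem 2.1 might be carried out as in the following sketch; it plugs in an estimator σ̂_U for σ_U (cf. the remark below), reuses the hypothetical `lse_rca1` helper sketched above, and uses the fact that sup_{t∈[0,1]} |B(t)| follows the Kolmogorov distribution.

```python
import numpy as np
from scipy.stats import kstwobign

def cusum_phi_test(x, alpha=0.05):
    """CUSUM-type test for the constancy of phi, cf. Theorem 2.1: compares
    max_k k*|phi_hat_k - phi_hat_m| / (sqrt(m) * sigma_U_hat) with the
    (1 - alpha) quantile of sup|B(t)| (Kolmogorov distribution)."""
    m = len(x) - 1
    # Running LSE phi_hat_k for k = 1, ..., m via cumulative sums.
    num = np.cumsum(x[1:] * x[:-1])
    den = np.cumsum(x[:-1] ** 2)
    phi_hat = num / den
    k = np.arange(1, m + 1)
    U = k * (phi_hat - phi_hat[-1])          # U_m(k/m) as defined above
    # Plug-in estimate of sigma_U^2 = (EX_1^2)^{-2}(omega^2 EX_1^4 + sigma^2 EX_1^2),
    # with lse_rca1 as in the two-step sketch earlier in this section.
    _, omega2_m, sigma2_m = lse_rca1(x)
    ex2, ex4 = np.mean(x**2), np.mean(x**4)
    sigma_U = np.sqrt((omega2_m * ex4 + sigma2_m * ex2) / ex2**2)
    stat = np.max(np.abs(U)) / (np.sqrt(m) * sigma_U)
    return stat, kstwobign.ppf(1 - alpha)    # reject if stat > critical value
```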
See Section 4 for the proof. Note that the variance parameter σ_U² can be replaced by a consistent estimator σ̂_U². Also, under suitable moment conditions (e.g. those given in Theorem 3.1 below), similar results could be stated for the variance estimators ω̂_m² and σ̂_m², respectively. We shall abstain from doing so and instead turn to simultaneously testing the stability of all three model parameters, i.e., to testing the constancy of θ = (ϕ, ω², σ²)^T based on the LSE θ̂_m from (2.4). The situation under the alternative is as follows. Again, before k*, {X_n}_{n∈Z} is driven by the parameter vector θ = (ϕ, ω², σ²)^T; afterwards, the time series evolves according to a new parameter vector θ* = (ϕ*, ω*², σ*²)^T ≠ θ, where all three parameters are allowed to change.
Define the vector functional

U_m(t) = ⌊mt⌋ (θ̂_{⌊mt⌋} − θ̂_m),    t ∈ [0, 1],
and the covariance matrix

Γ = (Γ_{ij})_{i,j=1,2,3}    (2.5)

by the entries

Γ_{11} = σ_U²,

Γ_{22} = (EX_1⁴ − (EX_1²)²)^{−2} [ (Eb_1⁴ − ω⁴)(EX_1⁸ − 2EX_1² EX_1⁶ + (EX_1²)² EX_1⁴) + 4ω²σ² (EX_1⁶ − 2EX_1² EX_1⁴ + (EX_1²)³) + (Ee_1⁴ − σ⁴)(EX_1⁴ − (EX_1²)²) ],

Γ_{33} = (Eb_1⁴ − ω⁴) [ EX_1⁴ − 2EX_1² (EX_1⁶ − EX_1² EX_1⁴)(EX_1⁴ − (EX_1²)²)^{−1} ] − 4ω²σ² EX_1² + Ee_1⁴ − σ⁴ + (EX_1²)² Γ_{22},

Γ_{12} = (EX_1² EX_1⁴ − (EX_1²)³)^{−1} [ Eb_1³ EX_1⁶ − Eb_1³ EX_1² EX_1⁴ + Ee_1³ EX_1³ ],

Γ_{13} = (EX_1² EX_1⁴ − (EX_1²)³)^{−1} [ Eb_1³ (EX_1⁴)² − Eb_1³ EX_1² EX_1⁶ − Ee_1³ EX_1² EX_1³ ],

Γ_{23} = (EX_1⁴ − (EX_1²)²)^{−1} (Eb_1⁴ − ω⁴)(EX_1⁶ − EX_1² EX_1⁴) + 4ω²σ² − EX_1² Γ_{22},

and Γ_{21} = Γ_{12}, Γ_{31} = Γ_{13}, Γ_{32} = Γ_{23}. The matrix Γ is regular if

P{(ϕ + b_1) e_1 = 0} < 1    (2.6)

and

P{λ_1 b_1 + e_1 ∈ {λ_2, λ_3}} < 1   for all λ_1, λ_2, λ_3 ∈ R, λ_1 ≠ 0.    (2.7)
See Lemma 4.1 below. Aue et al. (2004) pointed out that conditions (2.6) and (2.7) are in particular satisfied if e_1 and b_1 are independent (cf. their Remarks 3.1 and 3.4, respectively). Now, the following theorem, which treats the general parameter case, holds true.
Theorem 2.2 (Asymptotic under the null hypothesis)
Let {Xn }n∈Z be an RCA(1) time series satisfying (A), (B) and let
E|e_1|^κ < ∞   and   E|ϕ + b_1|^κ < 1

for some κ > 8. If θ is constant in the observation period 0, 1, …, m, and if (2.6), (2.7) hold, then

sup_{t∈[0,1]} m^{−1/2} ‖Γ^{−1/2} U_m(t)‖ →_D sup_{t∈[0,1]} ‖B(t)‖,

as m → ∞, where {B(t)}_{t∈[0,1]} denotes a 3–dimensional standard Brownian bridge and ‖·‖ the Euclidean norm on R³.
The covariance matrix Γ can be replaced by a consistent estimator Γ̂_m. For a possible construction of such an estimator, based on LSEs for expectations of powers of the noise sequences {b_n}_{n∈Z} and {e_n}_{n∈Z}, see Lee et al. (2003).
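Once the required moments are available (estimated elsewhere, e.g. along the lines of Lee et al. (2003)), the matrix (2.5) can be assembled mechanically; a sketch, in which all moment arguments are assumed to be given.

```python
import numpy as np

def gamma_matrix(ex2, ex3, ex4, ex6, ex8, omega2, sigma2, eb3, eb4, ee3, ee4):
    """Assemble the covariance matrix Gamma of (2.5) from the moments it
    involves; the moment values themselves are assumed to be estimated
    elsewhere (exk = EX_1^k, ebk = Eb_1^k, eek = Ee_1^k)."""
    tau2 = ex4 - ex2**2                       # E(X_1^2 - EX_1^2)^2
    g = np.empty((3, 3))
    g[0, 0] = (omega2 * ex4 + sigma2 * ex2) / ex2**2     # sigma_U^2
    g[1, 1] = ((eb4 - omega2**2) * (ex8 - 2 * ex2 * ex6 + ex2**2 * ex4)
               + 4 * omega2 * sigma2 * (ex6 - 2 * ex2 * ex4 + ex2**3)
               + (ee4 - sigma2**2) * tau2) / tau2**2
    g[2, 2] = ((eb4 - omega2**2) * (ex4 - 2 * ex2 * (ex6 - ex2 * ex4) / tau2)
               - 4 * omega2 * sigma2 * ex2 + ee4 - sigma2**2 + ex2**2 * g[1, 1])
    g[0, 1] = g[1, 0] = (eb3 * ex6 - eb3 * ex2 * ex4 + ee3 * ex3) / (ex2 * tau2)
    g[0, 2] = g[2, 0] = (eb3 * ex4**2 - eb3 * ex2 * ex6
                         - ee3 * ex2 * ex3) / (ex2 * tau2)
    g[1, 2] = g[2, 1] = ((eb4 - omega2**2) * (ex6 - ex2 * ex4) / tau2
                         + 4 * omega2 * sigma2 - ex2 * g[1, 1])
    return g
```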
Next, we turn our attention to examining the test statistics based on U_m(t) and its three–dimensional counterpart under the alternative of a structural break. For motivation, we start again with the case in which only the deterministic parameter ϕ in (1.1) is subject to change. We assume the specific form

k* = ⌊θm⌋,   θ ∈ (0, 1) fixed,    (2.8)
for the change–point. Then, we get the following theorem.
Theorem 2.3 (Asymptotic under the alternative)
Let {Xn }n∈Z be a time series satisfying (A), (B) and let
E|e_1|^κ < ∞   and   E|ϕ + b_1|^κ < 1, E|ϕ* + b_1|^κ < 1

for some κ > 4. If (2.8) holds and ϕ changes to ϕ* in the observation period 0, 1, …, m, then

sup_{t∈[0,1]} |U_m(t)| / (√m σ_U) →_P ∞
as m → ∞.
The final theorem gives the corresponding result for the general parameter case, which can be treated similarly at the cost of greater notational effort. Assuming higher moments and the regularity of the covariance matrix Γ, the result of Theorem 2.3 carries over if k* satisfies (2.8).
Theorem 2.4 (Asymptotic under the alternative)
Let {X_n}_{n∈Z} be a time series satisfying (A), (B) (with possible changes in the variance parameters) and let

E|e_1|^κ < ∞   and   E|ϕ + b_1|^κ < 1, E|ϕ* + b_1|^κ < 1

for some κ > 8. If θ changes to θ* in the observation period 0, 1, …, m, and if (2.6)–(2.8) hold, then

sup_{t∈[0,1]} m^{−1/2} ‖Γ^{−1/2} U_m(t)‖ →_P ∞

as m → ∞, where ‖·‖ denotes the Euclidean norm.
It is interesting to study the behaviour of the time series {X_n}_{n∈Z} in terms of first and second moments. Clearly, the random variables evolve according to the RCA(1) equation (1.1) until time–point k* − 1. In particular, they are strictly stationary under the assumptions made, and

EX_n = 0,   EX_n² = σ² / (1 − ϕ² − ω²),   n < k*.
Then, a perturbation changes one of the model parameters, but this does not affect the expectation of {X_n}_{n∈Z}, since, for n ∈ N, we get EX_{k*+n} = ϕ* EX_{k*+n−1} = 0 by iteration. Also, EX_{k*} = 0. However, the second moment switches. Using the recursions, we obtain

EX_{k*}² = σ²(ϕ*² + ω*²) / (1 − ϕ² − ω²) + σ*²,    (2.9)
and, for n ∈ N,

EX_{k*+n}² = (ϕ*² + ω*²)ⁿ EX_{k*}² + σ*² Σ_{j=1}^n (ϕ*² + ω*²)^{j−1} −→ σ*² / (1 − ϕ*² − ω*²)

as n → ∞, since the assumptions imply E(ϕ* + b_1)² < 1. In other words, {X_{k*+n}}_{n∈N_0}
is a Markov chain with initial distribution given by that of X_{k*}. Obviously, the chain is not started in its stationary distribution, which is determined by the parameter vector θ*. But, under suitable assumptions, it converges to it according to the ergodic theorem, which can be seen, for instance, by a coupling argument (cf. Brémaud (1999), Chapter 4).
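A quick numerical illustration of this variance switch, with all parameter values chosen purely for illustration:

```python
# Second-moment switch after the change-point: iterate the recursion
# E X_{k*+n}^2 = (phi*^2 + omega*^2) E X_{k*+n-1}^2 + sigma*^2, cf. (2.9).
phi, omega2, sigma2 = 0.5, 0.2, 1.0          # regime before k* (assumed)
phi_s, omega2_s, sigma2_s = 0.7, 0.2, 1.0    # regime after k* (assumed)

ex2 = sigma2 / (1 - phi**2 - omega2)         # stationary EX_1^2 before the change
for n in range(50):
    ex2 = (phi_s**2 + omega2_s) * ex2 + sigma2_s
print(ex2, sigma2_s / (1 - phi_s**2 - omega2_s))   # both approx. 3.226
```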
In the situation of Theorem 2.3, equation (2.9) can be rewritten as

EX_{k*}² = EX_1² (1 + Δ),   Δ = ϕ*² − ϕ².

3  Strong approximations for partial sums related to RCA(1) time series
Strong invariance principles are frequently used in statistics to derive asymptotic tests
in terms of the approximating Wiener process. The fundamental results for partial sums
of iid random variables are due to Komlós, Major and Tusnády (1975, 1976). Generalizations to multivariate partial sums coming from iid random vectors were obtained by Einmahl (1989). Since then, these results have been extended to cover a variety of dependence concepts in the strong approximation scheme, such as linear processes (Horváth (1986)), mixing sequences (Kuelbs and Philipp (1980)) and martingale differences (Eberlein (1986)), among others. For applications of strong invariance principles in the context of change–point analysis confer the comprehensive book by Csörgő and Horváth (1997). Developing effective tests for detecting structural changes of some underlying process requires a refined asymptotic analysis. One powerful method of tackling such problems is the method of strong invariance, which proceeds through a direct approximation of the paths of the considered process by paths of a Wiener process.
A strong invariance principle for the partial sums {X_1 + … + X_n}_{n∈N_0} (as usual, Σ_{i=1}^0 = 0) of a strictly stationary RCA(1) time series is due to Aue (2004a,b). The method of proof is to determine the order of the fluctuations of both conditional expectation and variance of the partial sums around their unconditioned counterparts, viz. the investigation of the covariance structure of the sequence {X_n}_{n∈Z}. Here and in the following, conditioning is meant to be with respect to the filtration {F_n}_{n∈N_0} generated by the noise sequences {b_n}_{n∈Z} and {e_n}_{n∈Z}, i.e.,

F_n = σ(b_k, e_k : k ≤ n),   n ∈ N_0.    (3.1)
Then, the strong approximation given in Theorem 1 of Eberlein (1986) provides the existence of a Wiener process {W_X(t)}_{t≥0} such that

X_1 + … + X_{⌊t⌋} − σ_X W_X(t) = O(t^{1/ν})   a.s.

as t → ∞, where ν > 2,

σ_X² = σ² / (1 − ϕ² − ω²) · (1 + ϕ) / (1 − ϕ)

and ⌊·⌋ denotes the integer part. The first fraction of σ_X² is plainly the variance of X_1, while the second reflects the dependence on the past, which clearly is the stronger the closer ϕ gets to the right boundary value 1. Recall that in the case of strictly stationary solutions of (1.1) with finite second moments this boundary value is excluded.
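The form of σ_X² can be made plausible by simulation; a sketch reusing the hypothetical `simulate_rca1` helper from Section 1 and comparing the sample variance of the normalized partial sums with the closed form.

```python
import numpy as np

# Monte Carlo check of the long-run variance sigma_X^2 of the partial sums:
# compare Var((X_1 + ... + X_n)/sqrt(n)) with the closed form above.
phi, omega2, sigma2 = 0.5, 0.2, 1.0    # assumed parameter values
n, reps = 2000, 500
sums = np.array([simulate_rca1(n)[1:].sum() for _ in range(reps)])
emp = sums.var() / n
theory = sigma2 / (1 - phi**2 - omega2) * (1 + phi) / (1 - phi)
print(emp, theory)                      # both should be close to 5.45 here
```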
The same method will be applied to obtain a further strong invariance principle. Thus,
for n ∈ N, introduce the three–dimensional random vectors Y n with components
Y_{n1} = (EX_1²)^{−1} X_{n−1}(X_n − ϕX_{n−1}),    (3.2)
Y_{n2} = (E(X_1² − EX_1²)²)^{−1} (X_{n−1}² − EX_1²) [ (X_n − ϕX_{n−1})² − (ω²X_{n−1}² + σ²) ],    (3.3)
Y_{n3} = (X_n − ϕX_{n−1})² − (ω²X_{n−1}² + σ²) − EX_1² Y_{n2},    (3.4)
and, for n ∈ N, m ∈ N_0, the corresponding partial sums

T_n(m) = Y_{m+1} + … + Y_{m+n},    (3.5)

where we shall abbreviate T_n = T_n(0). We will see in Section 4 how the latter partial sums are related to the least squares estimators of ϕ, ω² and σ². But first, we state the strong invariance principle needed for the asymptotic statistical examinations of the previous section.
Theorem 3.1 (Strong invariance)
Let {Xn }n∈Z be an RCA(1) time series satisfying (A), (B) and let {Y n }n∈N be given by
(3.2)–(3.4). If
E|e_1|^κ < ∞   and   E|ϕ + b_1|^κ < 1

with some κ > 8, then there exists a Wiener process {W_T(t)}_{t≥0} such that

‖T_{⌊t⌋} − Γ^{1/2} W_T(t)‖ = O(t^{1/ν})   a.s.

as t → ∞ for some ν > 2, where ‖·‖ denotes the Euclidean norm and Γ is defined in (2.5).
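For numerical experiments with this approximation, the components (3.2)–(3.4) can be computed with the population moments replaced by sample analogues; a sketch under that plug-in assumption.

```python
import numpy as np

def partial_sum_components(x, phi, omega2, sigma2):
    """Plug-in version of the vectors Y_n of (3.2)-(3.4) for n = 1, ..., m,
    with EX_1^2 and E(X_1^2 - EX_1^2)^2 replaced by sample moments."""
    x_past, x_now = x[:-1], x[1:]
    ex2 = np.mean(x_past**2)
    tau2 = np.mean((x_past**2 - ex2) ** 2)
    w = (x_now - phi * x_past) ** 2 - (omega2 * x_past**2 + sigma2)
    y1 = x_past * (x_now - phi * x_past) / ex2
    y2 = (x_past**2 - ex2) * w / tau2
    y3 = w - ex2 * y2
    return np.column_stack([y1, y2, y3])   # row n-1 holds Y_n; T_m = rows sum
```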
Obviously, the joint approximation of Theorem 3.1 can also be read componentwise. If
we are, for instance, solely interested in the stability of the deterministic parameter ϕ,
it is enough to utilize the following corollary, which suffices to prove the corresponding Theorem 2.1. The corollary is stated separately here because the moment condition can be weakened to some κ > 4. The greater value κ > 8 is only needed to include the variances ω² and σ² in the test procedure.
Corollary 3.1
Let {X_n}_{n∈Z} be an RCA(1) time series satisfying (A), (B) and let {Y_{n1}}_{n∈N} be as in (3.2). If

E|e_1|^κ < ∞   and   E|ϕ + b_1|^κ < 1

with some κ > 4, then there exists a Wiener process {W_T(t)}_{t≥0} such that

T_{⌊t⌋,1} − σ_T W_T(t) = O(t^{1/ν})   a.s.

as t → ∞ for some ν > 2, where

σ_T² = (EX_1²)^{−2} (ω² EX_1⁴ + σ² EX_1²) = σ_U².
Corollary 3.1 follows readily from the proof of Theorem 3.1. Moreover, it has already
been established in Aue (2004a). Finally, it is worthwhile mentioning that all variance
and covariance terms can be expressed in terms of moments of {bn }n∈Z and {en }n∈Z only
(cf. Proposition 4.1). The proof of Theorem 3.1 is given in the following section.
It is known from Feigin and Tweedie (1985) that RCA time series are geometrically ergodic if, in addition,

the innovations {e_n}_{n∈Z} have a positive density w.r.t. Lebesgue measure.    (3.6)

But geometric ergodicity implies strong mixing, so that the results in Kuelbs and Philipp (1980) could be used to give another proof of Theorem 3.1. However, the assumption (3.6) is not always satisfied in practice and can be avoided using the approach above.
4  Proofs
The proof section is divided into two parts. The first one contains the proofs of the
statistical theorems of Section 2, the second one the proofs of the strong approximations
given in Section 3.
We start with some remarks on moments and on conditional expectations of powers obtained from a strictly stationary RCA(1) time series, summarized in the following proposition, which is stated without proof. Recall that throughout the paper {(b_n, e_n)}_{n∈Z} constitutes an iid sequence of random vectors.
Proposition 4.1 Let {X_n}_{n∈Z} be an RCA(1) time series satisfying (A) and (B). If E|e_1|^l < ∞ and E|ϕ + b_1|^l < 1, then it holds

a) for the l–th moment of X_m,

EX_m^l = Σ_{j=0}^l a_{jl} EX_1^j = (1 − a_{ll})^{−1} Σ_{j=0}^{l−2} a_{jl} EX_1^j,

b) for the conditional expectation of X_{m+i−1}^l,

E(X_{m+i−1}^l | F_m) = Σ_{j=0}^l a_{jl} E(X_{m+i−2}^j | F_m) = Σ_{j=0}^{l−2} a_{jl} Σ_{k=1}^{i−1} a_{ll}^{k−1} E(X_{m+i−k−1}^j | F_m) + a_{ll}^{i−1} X_m^l,

for all m ∈ N_0, where

a_{jl} = (l choose j) E(ϕ + b_1)^j Ee_1^{l−j}   (j = 0, 1, …, l).
Proposition 4.1 yields in particular a_{l−1,l} = 0, since Ee_1 = 0 by assumption (A), and

a_{0l} Σ_{k=1}^{i−1} a_{ll}^{k−1} = a_{0l} (1 − a_{ll}^{i−1}) / (1 − a_{ll}).
For instance, we obtain moreover EX_1 = 0,

EX_1² = σ² / (1 − ϕ² − ω²),
EX_1³ = Ee_1³ / (1 − E(ϕ + b_1)³),
EX_1⁴ = (Ee_1⁴ + 6σ²(ω² + ϕ²) EX_1²) / (1 − E(ϕ + b_1)⁴),

provided these moments exist. This clarifies the final remark of Section 3.
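These closed forms are easy to cross-check numerically; a sketch with Gaussian noise (so that Ee_1³ = 0 and Ee_1⁴ = 3σ⁴), reusing the hypothetical simulation helper from Section 1.

```python
import numpy as np

# Numerical cross-check of the closed-form moments for Gaussian b_1, e_1.
phi, omega2, sigma2 = 0.5, 0.2, 1.0
x = simulate_rca1(10**6)              # helper from the sketch in Section 1
ex2 = sigma2 / (1 - phi**2 - omega2)
# E(phi + b_1)^4 = phi^4 + 6 phi^2 omega^2 + 3 omega^4 for Gaussian b_1.
epb4 = phi**4 + 6 * phi**2 * omega2 + 3 * omega2**2
ex4 = (3 * sigma2**2 + 6 * sigma2 * (omega2 + phi**2) * ex2) / (1 - epb4)
print(np.mean(x**2), ex2)
print(np.mean(x**4), ex4)
```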
4.1  Proofs of Section 2
We start this subsection by studying the functional U_m(t), which is used to test for the stability of the parameter ϕ, the average value of the random coefficients of an RCA(1) time series.
Proof of Theorem 2.1. By Corollary 3.1, there exists a Wiener process {W_T(t)}_{t≥0} such that

T_{⌊t⌋,1} − σ_T W_T(t) = O(t^{1/ν})   a.s.

as t → ∞. Observe that

sup_{t∈[0,1]} (1/√m) | U_m(t)/σ_U − (W_T(⌊mt⌋) − t W_T(m)) |
  ≤ sup_{t∈[0,1]} (1/√m) | U_m(t)/σ_U − (1/σ_T)(T_{⌊mt⌋,1} − (⌊mt⌋/m) T_{m1}) |
  + sup_{t∈[0,1]} (1/√m) | (1/σ_T)(T_{⌊mt⌋,1} − (⌊mt⌋/m) T_{m1}) − (W_T(⌊mt⌋) − t W_T(m)) |
  = K_1 + K_2.

Noting that κ > 4, from Lemma 4.2 we get that {T_{n1}}_{n∈N} is a square integrable martingale with respect to the filtration in (3.1). Moreover, by the ergodic theorem,

X̄_m² = (1/m) Σ_{i=1}^m X_{i−1}² −→ EX_1²   a.s.    (4.1)

as m → ∞. Hence,

K_1 = sup_{t∈[0,1]} (1/(√m σ_U)) | U_m(t) − (T_{⌊mt⌋,1} − (⌊mt⌋/m) T_{m1}) |
  ≤ sup_{t∈[0,1]} (1/(√m σ_U)) | EX_1² (X̄_{⌊mt⌋}²)^{−1} − 1 | |T_{⌊mt⌋,1}|
  + sup_{t∈[0,1]} (1/(√m σ_U)) (⌊mt⌋/m) | EX_1² (X̄_m²)^{−1} − 1 | |T_{m1}|
  = o_P(1)

as m → ∞. Recall that σ_T² = σ_U². From Corollary 3.1,

K_2 = sup_{t∈[0,1]} (1/√m) | (1/σ_T)(T_{⌊mt⌋,1} − (⌊mt⌋/m) T_{m1}) − (W_T(⌊mt⌋) − t W_T(m)) |
  = O( sup_{t∈[0,1]} (⌊mt⌋^{1/ν} + t m^{1/ν}) / √m ) = O(m^{1/ν−1/2}) = o(1)   a.s.

as m → ∞, completing the proof. □
Next, we prove the regularity of the covariance matrix Γ defined in (2.5).
Lemma 4.1 Let Γ be as in (2.5) and let (A), (B), (2.6) and (2.7) be satisfied. Provided
all occurring moments exist, Γ is regular.
Proof. On observing that Γ is non–negative definite and finite (the corresponding moments exist), it is enough to prove that Γ is non–singular. So let us assume it is singular. Then, there are real constants c_1, c_2 and c_3, not all zero, such that a.s.

0 = (c_1/EX_1²) X_1(X_1 − ϕX_0) + [ c_2/E(X_1² − EX_1²)² + c_3 (1 − EX_1²(X_1² − EX_1²)) ] ( [X_1 − ϕX_0]² − [ω²X_0² + σ²] ).    (4.2)

Now, consider (4.2) as a quadratic equation in X_1 − ϕX_0. Our first goal is hence to show that

c_2/E(X_1² − EX_1²)² + c_3 (1 − EX_1²(X_1² − EX_1²)) ≠ 0   a.s.    (4.3)

(i) Let c_3 ≠ 0. If (4.3) did not hold, we would have

X_1² = EX_1² − (1/c_3) · c_2/E(X_1² − EX_1²)² − 1/EX_1²   a.s.,

i.e., X_1² is a.s. constant, which is a contradiction to condition (2.6), as is pointed out in the proof of Lemma 4.2 in Aue et al. (2004).

(ii) Let c_3 = 0. Then c_2 ≠ 0. Else X_1(X_1 − ϕX_0) = 0 a.s., yielding

0 = E(X_1(X_1 − ϕX_0) | F_0) = ω²X_0² + σ²   a.s.,

i.e., X_0² = −σ²ω^{−2} a.s. if ω² > 0, or σ² = 0 if ω² = 0, which is impossible. Hence c_1 = c_2 = c_3 = 0, a contradiction to the assumption that Γ is singular.

Observe that X_1 − ϕX_0 = b_1X_0 + e_1. By solving the quadratic equation (4.2), we see that

P{b_1X_0 + e_1 ∈ {C_1, C_2}} = 1,

where C_1 = C_1(X_0), C_2 = C_2(X_0). Similarly as in the proof of Lemma 4.3 in Aue et al. (2004), this last relation results in

P{b_1x + e_1 ∈ {C_1(x), C_2(x)}} = 1   for P_{X_0}–almost all x,

contradicting (2.7). Hence, the lemma is proved. □
Now obviously, the first component U_{m,1}(t) of the vector functional coincides with U_m(t) for all m ∈ N and t ∈ [0, 1]. The idea of the proof of
Theorem 2.2 is therefore to transfer the previous steps to the 3–dimensional case replacing
the strong approximation of Corollary 3.1 with the more general joint approximation of
Theorem 3.1, which will be proved in the following subsection.
Proof of Theorem 2.2. We repeat the arguments of the proof of Theorem 2.1, using the Wiener process {W_T(t)}_{t≥0} from Theorem 3.1. Firstly,

sup_{t∈[0,1]} (1/√m) ‖Γ^{−1/2} U_m(t) − (W_T(⌊mt⌋) − t W_T(m))‖
  ≤ sup_{t∈[0,1]} (1/√m) ‖Γ^{−1/2} ( U_m(t) − (T_{⌊mt⌋} − (⌊mt⌋/m) T_m) )‖
  + sup_{t∈[0,1]} (1/√m) ‖Γ^{−1/2} (T_{⌊mt⌋} − (⌊mt⌋/m) T_m) − (W_T(⌊mt⌋) − t W_T(m))‖
  = K_1 + K_2,

where

K_2 = O( sup_{t∈[0,1]} (⌊mt⌋^{1/ν} + t m^{1/ν}) / √m ) = O(m^{1/ν−1/2}) = o(1)   a.s.

as m → ∞ by Theorem 3.1. Next,

K_1 = sup_{t∈[0,1]} (1/√m) ‖Γ^{−1/2} ( U_m(t) − (T_{⌊mt⌋} − (⌊mt⌋/m) T_m) )‖
  ≤ sup_{t∈[0,1]} (1/√m) ‖Γ^{−1/2} A^{(⌊mt⌋)} T_{⌊mt⌋}‖ + sup_{t∈[0,1]} (1/√m) (⌊mt⌋/m) ‖Γ^{−1/2} A^{(m)} T_m‖
  + sup_{t∈[0,1]} (1/√m) ‖Γ^{−1/2} (R_{⌊mt⌋} − R_m)‖,

where A^{(m)} = (A_{ij}^{(m)})_{i,j=1,2,3} is a diagonal matrix with entries

A_{11}^{(m)} = EX_1² (X̄_m²)^{−1} − 1,
A_{22}^{(m)} = E(X_1² − EX_1²)² (X̃_m²)^{−1} − 1,
A_{33}^{(m)} = 0,

where X̃_m² is defined in (4.4), and R_m is a 3–dimensional random vector with components

R_{m1} = 0,
R_{m2} = (X̃_m²)^{−1} Σ_{i=1}^m [ (X_{i−1}² − X̄_m²)(X_i − ϕ̂_m X_{i−1})² − E(X_1² − EX_1²)² Y_{i2} ],
R_{m3} = Σ_{i=1}^m [ (X_i − ϕ̂_m X_{i−1})² − (X_i − ϕX_{i−1})² + (ω²X_{i−1}² + σ²) + EX_1² Y_{i2} ] − m ω̂_m² X̄_m².

Observe that, by the ergodic theorem,

X̃_m² = (1/m) Σ_{i=1}^m (X_{i−1}² − X̄_m²)² −→ E(X_1² − EX_1²)²   a.s.    (4.4)

as m → ∞. Hence, on using again (4.1), A_{ii}^{(m)} → 0 a.s. as m → ∞ for i = 1, 2. Furthermore,

sup_{t∈[0,1]} (1/√m) ‖Γ^{−1/2} (R_{⌊mt⌋} − R_m)‖ = o_P(1)

as m → ∞, following from Lee et al. (2003), yielding K_1 = o_P(1) as m → ∞, thus completing the proof. □
Proof of Theorem 2.3. Fix t = θ. Then, we obtain the decomposition

(k*/√m) (ϕ̂_{k*} − ϕ̂_m) = (k*/√m) [ϕ̂_{k*} − ϕ] + (k*/√m) (X̄_m²)^{−1} [ (X̄_m² − (k*/m) X̄_{k*}²) ϕ − ((m−k*)/m) X̄*²_{m−k*} ϕ* ]
  + (k*/√m) ((m−k*) X̄*²_{m−k*} / (m X̄_m²)) [ ϕ* − ((m−k*) X̄*²_{m−k*})^{−1} Σ_{i=k*+1}^m X_i X_{i−1} ]
  − (k*/√m) (k* X̄_{k*}² / (m X̄_m²)) [ (k* X̄_{k*}²)^{−1} Σ_{i=1}^{k*} X_i X_{i−1} − ϕ ],

where, for k = 1, …, m − 1,

X̄*²_{m−k} = (m − k)^{−1} Σ_{i=k+1}^m X_{i−1}².

Now, according to the discussion before Theorem 2.1,

(k*/√m) [ϕ̂_{k*} − ϕ] = O(1)   a.s.    (4.5)

as m → ∞. The strong approximation of Corollary 3.1 yields in particular uniform weak invariance, so

(k*/√m) ((m−k*) X̄*²_{m−k*} / (m X̄_m²)) [ ϕ* − ((m−k*) X̄*²_{m−k*})^{−1} Σ_{i=k*+1}^m X_i X_{i−1} ] = O_P(1)    (4.6)

as m → ∞. Furthermore,

(k*/√m) (k* X̄_{k*}² / (m X̄_m²)) [ (k* X̄_{k*}²)^{−1} Σ_{i=1}^{k*} X_i X_{i−1} − ϕ ] = O(1)   a.s.,    (4.7)

similarly to (4.5). But finally, since ϕ − ϕ* ≠ 0,

(k*/√m) (X̄_m²)^{−1} [ (X̄_m² − (k*/m) X̄_{k*}²) ϕ − ((m−k*)/m) X̄*²_{m−k*} ϕ* ] = (k*/√m) ((m−k*) X̄*²_{m−k*} / (m X̄_m²)) [ϕ − ϕ*] ∼ √m [ϕ − ϕ*]    (4.8)

a.s. as m → ∞, where a_m ∼ b_m if a_m b_m^{−1} converges to a non-zero constant c as m → ∞. On combining (4.5)–(4.8), we see that

sup_{t∈[0,1]} |U_m(t)| / (√m σ_U) ≥ (k*/(√m σ_U)) |ϕ̂_{k*} − ϕ̂_m| = (k*/(√m σ_U)) | ((m−k*) X̄*²_{m−k*} / (m X̄_m²)) (ϕ − ϕ*) | + O_P(1) →_P ∞

as m → ∞, completing the proof. □
Proof of Theorem 2.4. For m ∈ N, define the matrices B^{(m)} = A^{(m)} + I_3, where I_3 denotes the identity matrix in R^{3×3}. Fix t = θ. Then, we can use the following decomposition, which is formally the same as the one obtained in the previous proof. It holds that

(k*/√m) (θ̂_{k*} − θ̂_m) = (k*/√m) [θ̂_{k*} − θ] + (k*/√m) (B^{(m)})^{−1} [ (B^{(m)} − B^{(k*)}) θ − B*^{(m−k*)} θ* ]
  + (k*/√m) [ (B^{(m)})^{−1} B*^{(m)} θ* − (B^{(m)})^{−1} Σ_{i=k*+1}^m Y_i ]
  − (k*/√m) [ (B^{(m)})^{−1} Σ_{i=1}^{k*} Y_i − (B^{(m)})^{−1} B^{(k*)} θ ].

Therefore,

sup_{t∈[0,1]} (1/√m) ‖Γ^{−1/2} U_m(t)‖ ≥ (k*/√m) ‖Γ^{−1/2} (θ̂_{k*} − θ̂_m)‖
  = (k*/√m) ‖Γ^{−1/2} (B^{(m)})^{−1} (B^{(m)} − B^{(k*)}) (θ − θ*)‖ + O_P(1) →_P ∞,

which is the assertion. □
4.2  Proofs of Section 3
To provide the proofs of the strong approximations in Theorem 3.1 and Corollary 3.1, it is sufficient to verify the following assumptions made in Theorem 1 of Eberlein (1986). Let {Z_n}_{n∈N} be a sequence of R^d–valued random variables, denote by

S_n(m) = Z_{m+1} + … + Z_{m+n},   m ∈ N_0, n ∈ N,

the corresponding partial sums and let {G_m}_{m∈N_0} be a filtration such that

(i) EZ_n = 0 for all n ∈ N,
(ii) ‖E(S_{ni}(m) | G_m)‖_1 = O(n^{1/2−θ}) uniformly in m ∈ N_0 as n → ∞ for some θ ∈ (0, 1/2) and for all i = 1, …, d,
(iii) there exists a covariance matrix Σ = (Σ_{ij})_{i,j=1,…,d} such that, uniformly in m ∈ N_0,

|n^{−1} E S_{ni}(m) S_{nj}(m) − Σ_{ij}| = O(n^{−ρ})

as n → ∞ for some ρ > 0 and all i, j = 1, …, d,
(iv) there exists a γ > 0 such that, uniformly in m ∈ N_0,

‖E(S_{ni}(m) S_{nj}(m) | G_m) − E S_{ni}(m) S_{nj}(m)‖_1 = O(n^{1−γ})

as n → ∞ for all i, j = 1, …, d,
(v) there exist M < ∞ and κ > 2 such that E‖Z_n‖^κ ≤ M for all n ∈ N,

where ‖·‖_1 denotes the L_1–norm and ‖·‖ the Euclidean norm on R^d. In our case d = 3. Clearly, condition (i) is satisfied for the random vectors {Y_n}_{n∈N} defined by (3.2)–(3.4). Recalling the moment properties given in Section 1, and provided the random vectors of interest form a strictly stationary sequence, condition (v) is fulfilled under appropriate assumptions on the noise sequences {b_n}_{n∈Z} and {e_n}_{n∈Z}. The remaining conditions (ii)–(iv) will be proved in a series of lemmas, starting with determining the order of the conditional expectation of the partial sums T_n(m) from (3.5). Here and in the sequel, we shall make use of the following property: if C_1 ⊂ C_2 are σ–fields, then E(X|C_1) = E(E(X|C_2)|C_1) for arbitrary integrable random variables X. Set τ_1 = EX_1² and τ_2 = E(X_1² − EX_1²)².
Lemma 4.2 Let {X_n}_{n∈Z} be an RCA(1) time series satisfying (A), (B) and let the partial sums {T_n(m)}_{n∈N} for all m ∈ N_0 be as in (3.5). Then,

‖E(T_{ni}(m) | F_m)‖_1 = 0

for all m ∈ N_0, n ∈ N and i = 1, 2, 3.
Proof. Since

E(X_{n−1}(X_n − ϕX_{n−1}) | F_{n−1}) = X_{n−1} E(X_n | F_{n−1}) − ϕX_{n−1}² = 0   a.s.

for all n ∈ Z, {Y_{n1}}_{n∈N} is a sequence of martingale differences. Next, consider the conditional expectation of Y_{n2} given the past. Then,

τ_2 E(Y_{n2} | F_{n−1}) = E( [X_{n−1}² − EX_1²] ([X_n − ϕX_{n−1}]² − [ω²X_{n−1}² + σ²]) | F_{n−1} )
  = [X_{n−1}² − EX_1²] ( E([X_n − ϕX_{n−1}]² | F_{n−1}) − [ω²X_{n−1}² + σ²] ) = 0   a.s.

for all n ∈ Z. Hence {Y_{n2}}_{n∈N} is also a sequence of martingale differences. A similar calculation holds true for the third component {Y_{n3}}_{n∈N}, and the assertion follows. □
Next, we compute the covariance terms of T_n(m) and hence verify condition (iii) from above. Recall the definition of the matrix Γ from (2.5).

Lemma 4.3 Let {X_n}_{n∈Z} be an RCA(1) time series satisfying (A), (B) and let the partial sums {T_n(m)}_{n∈N} for all m ∈ N_0 be as in (3.5). If Ee_1⁸ < ∞ and E(ϕ + b_1)⁸ < 1, then

n^{−1} E T_{ni}(m) T_{nj}(m) = Γ_{ij}

for all m ∈ N_0, n ∈ N and i, j = 1, 2, 3.
Proof. It follows from the assumptions that EX_1⁸ < ∞ (cf. the corresponding paragraph of Section 1). Moreover, from the strict stationarity of {X_n}_{n∈Z}, we immediately obtain the same property for {Y_{ni}}_{n∈Z}, i = 1, 2, 3.

(i) Fix i = j and m ∈ N_0, n ∈ N. Then,

n^{−1} E T_{ni}²(m) = (1/n) Σ_{k=m+1}^{m+n} E Y_{ki}² + (2/n) Σ_{k=m+1}^{m+n} Σ_{l=k+1}^{m+n} E Y_{ki} Y_{li}.

Now, using the defining SDE (1.1), the independence of (b_k, e_k) from X_{k−1} and (B), the covariance part vanishes, i.e.,

E Y_{ki} Y_{li} = 0

for all k < l and i = 1, 2, 3. Similar arguments applied to the variance terms yield, in case i = j = 1,

τ_1² E Y_{k1}² = E(X_{k−1}² b_k + X_{k−1} e_k)² = ω² EX_1⁴ + σ² EX_1² = τ_1² Γ_{11},

where we have also used the strict stationarity of {X_n}_{n∈Z}. If i = j = 2,

τ_2² E Y_{k2}² = (Eb_1⁴ − ω⁴)(EX_1⁸ − 2EX_1² EX_1⁶ + (EX_1²)² EX_1⁴) + 4ω²σ²(EX_1⁶ − 2EX_1² EX_1⁴ + (EX_1²)³) + (Ee_1⁴ − σ⁴)(EX_1⁴ − (EX_1²)²) = τ_2² Γ_{22}.

Finally, straightforward calculations give

E Y_{k3}² = (Eb_1⁴ − ω⁴)[EX_1⁴ − 2EX_1²(EX_1⁶ − EX_1² EX_1⁴)(EX_1⁴ − (EX_1²)²)^{−1}] − 4ω²σ² EX_1² + Ee_1⁴ − σ⁴ + (EX_1²)² Γ_{22} = Γ_{33}.

(ii) Fix i < j and m ∈ N_0, n ∈ N. Then, as in part (i) of this proof,

n^{−1} E T_{ni}(m) T_{nj}(m) = n^{−1} E ( Σ_{k=m+1}^{m+n} Y_{ki} )( Σ_{l=m+1}^{m+n} Y_{lj} ) = (1/n) Σ_{k=m+1}^{m+n} E Y_{ki} Y_{kj},

i.e., terms with k ≠ l are zero. First, consider the case i = 1, j = 2. It holds for any k = m+1, …, m+n that

τ_1 τ_2 E Y_{k1} Y_{k2} = E X_{k−1}(b_k X_{k−1} + e_k)³ (X_{k−1}² − EX_1²) = Eb_1³ EX_1⁶ + Ee_1³ EX_1³ − Eb_1³ EX_1² EX_1⁴ = τ_1 τ_2 Γ_{12}.

Next, let i = 1, j = 3. Then,

τ_1 E Y_{k1} Y_{k3} = E X_{k−1}(b_k X_{k−1} + e_k)³ − τ_1 EX_1² Γ_{12} = Eb_1³ EX_1⁴ − τ_1 EX_1² Γ_{12},

so that E Y_{k1} Y_{k3} = Γ_{13}. For i = 2, j = 3, we obtain similarly

E Y_{k2} Y_{k3} = (EX_1⁴ − (EX_1²)²)^{−1} (Eb_1⁴ − ω⁴)(EX_1⁶ − EX_1² EX_1⁴) + 4ω²σ² − EX_1² Γ_{22} = Γ_{23}.

By the strict stationarity of {Y_{ki}}_{k∈Z} (i = 1, 2, 3) and a symmetry argument, the proof is complete. □
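The closed forms of Lemma 4.3 are easy to sanity-check empirically: the sample covariance matrix of the vectors Y_n, computed with the true parameters, should be close to Γ. A sketch, reusing the hypothetical helpers from the earlier snippets and choosing parameters small enough that E(ϕ + b_1)⁸ < 1.

```python
import numpy as np

# Empirical check of Lemma 4.3: the covariance of Y_n should match Gamma.
phi, omega2, sigma2 = 0.3, 0.05, 1.0   # small enough that E(phi + b_1)^8 < 1
x = simulate_rca1(10**6)               # helper from Section 1
Y = partial_sum_components(x, phi, omega2, sigma2)  # helper from Section 3
print(np.cov(Y.T))   # compare with gamma_matrix(...) built from the moments
```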
Finally, condition (iv) is verified by an application of Proposition 4.1.

Lemma 4.4 Let {X_n}_{n∈Z} be an RCA(1) time series satisfying (A), (B) and let the partial sums {T_n(m)}_{n∈N} for all m ∈ N_0 be as in (3.5). If Ee_1⁸ < ∞ and E(ϕ + b_1)⁸ < 1, then, uniformly in m ∈ N_0,

‖E(T_{ni}(m) T_{nj}(m) | F_m) − E T_{ni}(m) T_{nj}(m)‖_1 = O(1)

as n → ∞ for all i, j = 1, 2, 3.

Proof. If i = j, the assertion follows directly from parts a) and b) of Proposition 4.1. If i ≠ j, similar arguments apply. See Aue (2004a) for an elaborated method of proof. □
All ingredients have been collected to give the proof of the joint strong approximation.

Proof of Theorem 3.1. For κ > 8, the moment E|X_1|^κ is finite, since by assumption E|e_1|^κ is finite and E|ϕ + b_1|^κ is less than one; thus (v) holds by the strict stationarity of {X_n}_{n∈Z}. Conditions (ii)–(iv) follow from Lemmas 4.2–4.4, so that Theorem 3.1 is readily proved. □
References
[1] Aue, A. (2004a). Sequential change–point analysis based on invariance principles.
Dissertation, Universität zu Köln.
[2] Aue, A. (2004b). Strong approximation for RCA(1) time series with applications.
Statist. Probab. Letters 68, 369–382.
[3] Aue, A., Horváth, L., and Steinebach, J. (2004). Estimation in random coefficient
autoregressive models. Preprint, Universität zu Köln.
[4] Beran, R. (1993). Semiparametric random coefficient regression models. Ann. Inst.
Statist. Math. 15, 639–654.
[5] Berkes, I., Horváth, L. and Kokoszka, P. (2003) GARCH processes: structure and
estimation. Bernoulli 9, 201–227.
[6] Bollerslev, T. (1986). Generalized autoregressive conditional heteroscedasticity. J. Econ. 31, 307–327.
[7] Bougerol, P., and Picard, N. (1992a). Strict stationarity of generalized autoregressive
processes. Ann. Probab. 20, 1714–1730.
[8] Bougerol, P., and Picard, N. (1992b). Stationarity of GARCH processes and of some
nonnegative time series. J. Econ. 52, 115–127.
[9] Brandt, A. (1986). The stochastic equation Y_{n+1} = A_n Y_n + B_n with stationary coefficients. Adv. Appl. Probab. 18, 211–220.
[10] Brémaud, P. (1999). Markov chains. Gibbs fields, Monte Carlo simulation, and
queues. Springer–Verlag, New York.
[11] Brockwell P.J., and Davis R.A. (1991). Time series: theory and methods (2nd ed.).
Springer–Verlag, Berlin.
[12] Csörgő, M., and Horváth, L. (1997). Limit theorems in change–point analysis. Wiley,
Chichester.
[13] Eberlein, E. (1986). On strong invariance principles under dependence assumptions.
Ann. Probab. 14, 260–270.
[14] Einmahl, U. (1989). Extensions of results of Komlós, Major, and Tusnády to the
multivariate case. J. Mult. Anal. 28, 20–68.
[15] Engle, R.F. (1982). Autoregressive conditional heteroscedasticity with estimates of
the variance of United Kingdom inflation. Econometrica 50, 987–1007.
[16] Feigin, P.D., and Tweedie, R.L. (1985). Random coefficient autoregressive processes:
a Markov chain analysis of stationarity and finiteness of moments. J. Time Ser. Anal.
6, 1–14.
[17] Komlós, J., Major, P., and Tusnády, G. (1975). An approximation of partial sums of
independent r.v.’s and the sample d.f. I. Z. Wahrsch. Verw. Gebiete 32, 111–131.
[18] Komlós, J., Major, P., and Tusnády, G. (1976). An approximation of partial sums of
independent r.v.’s and the sample d.f. II. Z. Wahrsch. Verw. Gebiete 34, 33–58.
[19] Kuelbs, J., and Philipp, W. (1980). Almost sure invariance principles for partial sums
of mixing B–valued random variables. Ann. Probab. 8, 1003–1036.
[20] Lee, S., Ha, J., Na, O., and Na, S. (2003). The cusum test for parameter change in
time series models. Scand. J. Statist. 30, 781–796.
[21] Ling, S. (1999). On the stationarity and the existence of moments of conditional
heteroskedastic ARMA models. Statistica Sinica 9, 1118–1130.
[22] Mikosch, T. (2003). Modeling dependence and tails of financial time series. In:
Finkenstaedt, B. and Rootzen, H. (eds.). Extreme values in finance, telecommunications, and the environment. Chapman and Hall, pp. 185–286.
[23] Nicholls, D.F., and Quinn, B.G. (1982). Random coefficient autoregressive models:
an introduction. Springer–Verlag, New York.
[24] Swaminathan, V., and Naik–Nimbalkar, U.V. (1997). Minimum distance estimation
for random coefficient autoregressive models. Statist. Probab. Letters 34, 313–322.
[25] Tong, H. (1990). Non–linear time series: a dynamical system approach. Clarendon,
Oxford.
[26] Vervaat, W. (1979). On a stochastic difference equation and a representation of non–
negative infinitely divisible random variables. Adv. Appl. Probab. 11, 750–783.