MOVING AVERAGES OF RANDOM VECTORS WITH REGULARLY VARYING TAILS

By Mark M. Meerschaert and Hans-Peter Scheffler
University of Nevada and University of Dortmund
First version received October 1998
Abstract. Regular variation is an analytic condition on the tails of a probability distribution which is necessary for an extended central limit theorem to hold, when the tails are too heavy to allow attraction to a normal limit. The limiting distributions which can occur are called operator stable. In this paper we show that moving averages of random vectors with regularly varying tails are in the generalized domain of attraction of an operator stable law. We also prove that the sample autocovariance matrix of these moving averages is in the generalized domain of attraction of an operator stable law on the vector space of symmetric matrices.

AMS 1990 subject classification. Primary 62M10, secondary 62E20, 62F12, 60F05.

Keywords. Moving average, autocovariance matrix, operator stable, regular variation, random matrix.
1. INTRODUCTION
In this paper we establish the basic asymptotic theory for moving averages of random vectors with heavy tails. Heavy tail distributions occur frequently in applications to finance, geology, physics, chemistry, computer and systems engineering. The recent books of Samorodnitsky and Taqqu (1994), Janicki and Weron (1994), Nikias and Shao (1995), and Mittnik and Rachev (1998) review many of these applications. Scalar time series with heavy tails are discussed in Anderson and Meerschaert (1997), Bhansali (1993), Brockwell and Davis (1991), Davis and Resnick (1985a, 1985b, 1986), Jansen and de Vries (1991), Kokoszka and Taqqu (1994, 1996), Loretan and Phillips (1994), Mikosch, Gadrich, Klüppelberg and Adler (1995), and Resnick and Stărică (1995). Modern applications of heavy tail distributions were pioneered by Mandelbrot (1963) and others in connection with problems in finance. When the probability tails of the random fluctuations in a time series model are sufficiently light, the asymptotics are normal. But when the tails are sufficiently heavy that the fourth moments fail to exist, the asymptotics are governed by the extended central limit theorem. For scalar models the limiting distributions are stable. Stable laws are characterized by the fact that sums of independent stable random variables are also stable, and the distribution of the sum can be reduced to that of any summand by an appropriate affine normalization. The normal laws are a special case of stable laws. Stable stochastic processes are interesting because they provide the most straightforward mechanism for generating random fractals.
For random vectors with heavy tails, the extended central limit theorem yields operator stable limit laws, so called because the affine normalization which reduces the distribution of a sum to that of one summand involves a linear operator. Regular variation is the analytic condition necessary for the extended central limit theorem to hold for heavy tails. See Feller (1971) Chapter XVII for the scalar version and Meerschaert (1993) for the vector version of the extended central limit theorem. In this paper we will show that moving averages of random vectors with regularly varying tail probabilities are asymptotically operator stable. The regular variation arguments at the heart of the proof will appear familiar to any reader who is acquainted with the work of Feller on regular variation. We will also show, using similar regular variation methods, that the sample autocovariance matrix formed from these moving averages is asymptotically operator stable as a random element of the vector space of $d \times d$ symmetric matrices.
2. NOTATION AND PRELIMINARY RESULTS
In this section we present the notion of a regularly varying measure together with the multivariable theory of regular variation needed in the proofs of our main results. The connection of regularly varying measures to generalized domains of attraction of operator stable laws is also discussed.

Assume that $\{Z_n\}$ are i.i.d. random vectors on $\mathbb{R}^d$ with common distribution $\mu$. A sequence $(A_n)$ of invertible linear operators on $\mathbb{R}^d$ is called regularly varying with index $F$, where $F$ is a $d \times d$ matrix, if

$$A_{[\lambda n]}A_n^{-1} \to \lambda^F \quad \text{as } n \to \infty \qquad (2.1)$$

where $\lambda^F = \exp(F \log \lambda)$ and $\exp$ is the exponential mapping for $d \times d$ matrices. We write $(A_n) \in RV(F)$ if (2.1) holds. We say that $\mu$ is regularly varying with exponent $E$ if there exists a sequence $(A_n) \in RV(-E)$ such that

$$n(A_n\mu) \to \phi \qquad (2.2)$$

where $\phi$ is some $\sigma$-finite Borel measure on $\mathbb{R}^d \setminus \{0\}$ which cannot be supported on any lower dimensional subspace. Note that $t\phi = t^E\phi$ follows. Here $(A\phi)(dx) = \phi(A^{-1}dx)$ denotes the image measure. The convergence in (2.2) means that $n\mu(A_n^{-1}S) \to \phi(S)$ for any Borel set $S$ of $\mathbb{R}^d$ which is bounded away from the origin and whose boundary has $\phi$-measure zero. Note that this is vague convergence on the set $\bar{\mathbb{R}}^d \setminus \{0\}$, where $\bar{\mathbb{R}}^d$ is the one-point compactification of $\mathbb{R}^d$. For more information on multivariable regular variation see Meerschaert (1993), Meerschaert and Scheffler (1999) and Scheffler (1998).
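As a computational aside (not part of the original development), the matrix power $t^E = \exp(E\log t)$ in (2.1) and the tail scaling in (2.2) are easy to experiment with numerically. The sketch below is a minimal illustration under assumed diagonal structure: the exponent $E = \mathrm{diag}(1/a_1, 1/a_2)$, the Pareto innovations, and all numerical settings are our own hypothetical choices.

```python
# Illustrative sketch (assumptions ours, not from the paper): matrix powers
# t^E = exp(E log t) and the empirical tail measure n P(A_n Z in S).
import numpy as np
from scipy.linalg import expm

a1, a2 = 1.2, 1.8                     # hypothetical tail indices
E = np.diag([1 / a1, 1 / a2])         # exponent of regular variation

def matrix_power(t, E):
    """t^E = exp(E log t) for t > 0, as in (2.1)."""
    return expm(E * np.log(t))

rng = np.random.default_rng(0)
n = 100_000
Z = np.column_stack([rng.pareto(a1, n), rng.pareto(a2, n)])  # heavy tails

A_n = matrix_power(1 / n, E)          # A_n = n^{-E} works in this diagonal case
AnZ = Z @ A_n.T

# n P(||A_n Z|| > r) approximates phi{z: ||z|| > r} for continuity sets.
for r in (1.0, 2.0):
    print(r, n * np.mean(np.linalg.norm(AnZ, axis=1) > r))
```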
Regular variation is an analytic tail condition which is necessary for an extended central limit theorem to apply. We say that $\{Z_n\}$ belongs to the generalized domain of attraction of some full-dimensional limit $Y$ if

$$A_n(Z_1 + \cdots + Z_n - nb_n) \Rightarrow Y \qquad (2.3)$$

for some linear operators $A_n$ and nonrandom centering vectors $b_n$. If $E\|Z_n\|^2 < \infty$ then $Y$ is multivariate normal and we can take $A_n = n^{-1/2}I$ and $b_n = EZ_1$. Meerschaert (1993) together with Meerschaert (1994) shows that (2.3) holds with a nonnormal limit if and only if the distribution $\mu$ is regularly varying with exponent $E$, where every eigenvalue of $E$ has real part exceeding $1/2$.
Sharpe (1969) characterized operator stable laws in terms of transforms. The characteristic function of any random vector $Y$ whose probability distribution is infinitely divisible can be written in the form $Ee^{i\langle Y,s\rangle} = e^{\psi(s)}$ where

$$\psi(s) = i\langle a, s\rangle - \frac{1}{2}s'Ms + \int_{x\ne 0}\left(e^{i\langle s,x\rangle} - 1 - \frac{i\langle s,x\rangle}{1+\langle x,x\rangle}\right)\phi(dx).$$

Here $a \in \mathbb{R}^d$, $M$ is a symmetric $d\times d$ matrix, and $\phi$ is a Lévy measure, i.e. a $\sigma$-finite Borel measure on $\mathbb{R}^d\setminus\{0\}$ which assigns finite measure to sets bounded away from the origin and which satisfies $\int \|x\|^2 I(0<\|x\|\le 1)\,\phi(dx) < \infty$. We say that $Y$ has Lévy representation $[a, M, \phi]$. If $\phi = 0$ then $Y$ is multivariate normal with covariance matrix $M$. A nonnormal operator stable law has Lévy representation $[a, 0, \phi]$ where $t\phi = t^E\phi$ for all $t > 0$ and every eigenvalue of the exponent $E$ has real part exceeding $1/2$. If $\mu$ varies regularly with exponent $E$, where every eigenvalue of $E$ has real part exceeding $1/2$, then the limit measure $\phi$ in (2.2) is also the Lévy measure of $Y$.
Let $\mu$ be regularly varying with exponent $E$ and let $\lambda = \min\{\Re(\alpha)\}$, $\Lambda = \max\{\Re(\alpha)\}$ where $\alpha$ ranges over the eigenvalues of $E$. Meerschaert (1993) shows that in this case the moment functions

$$U_\zeta(r,\theta) = \int_{|\langle x,\theta\rangle|\le r} |\langle x,\theta\rangle|^\zeta \,\mu(dx)$$

$$V_\eta(r,\theta) = \int_{|\langle x,\theta\rangle|> r} |\langle x,\theta\rangle|^\eta \,\mu(dx)$$

are uniformly R–O varying whenever $\eta < 1/\Lambda \le 1/\lambda < \zeta$. A Borel measurable function $R(r)$ is R–O varying if it is real-valued and positive for $r \ge A$ and if there exist constants $a > 1$, $0 < m < 1$, $M > 1$ such that $m \le R(tr)/R(r) \le M$ whenever $1 \le t \le a$ and $r \ge A$. Then $R(r,\theta)$ is uniformly R–O varying if it is an R–O varying function of $r$ for each $\theta$, and the constants $A, a, m, M$ can be chosen independent of $\theta$. See Seneta (1976) for more information on R–O variation.
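The truncated moment functions $U_\zeta$ and $V_\eta$ have natural empirical counterparts, which can be used to inspect the R–O variation bounds on simulated data. The following sketch is our own illustration under an assumed Pareto model with tail index $\alpha$, so that $\eta < \alpha < \zeta$ plays the role of $\eta < 1/\Lambda \le 1/\lambda < \zeta$ here.

```python
# Illustrative sketch (assumptions ours, not from the paper): empirical
# truncated moment functions U_zeta(r, theta) and V_eta(r, theta).
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.5                              # hypothetical tail index
Z = np.column_stack([rng.pareto(alpha, 200_000),
                     rng.pareto(alpha, 200_000)])

def U(zeta, r, theta, Z):
    """Empirical U_zeta(r, theta) = E|<Z,theta>|^zeta I(|<Z,theta>| <= r)."""
    p = np.abs(Z @ theta)
    return np.mean(np.where(p <= r, p ** zeta, 0.0))

def V(eta, r, theta, Z):
    """Empirical V_eta(r, theta) = E|<Z,theta>|^eta I(|<Z,theta>| > r)."""
    p = np.abs(Z @ theta)
    return np.mean(np.where(p > r, p ** eta, 0.0))

theta = np.array([0.6, 0.8])             # a unit vector
zeta, eta = 2.0, 1.0                     # eta < alpha < zeta
for r in (5.0, 10.0, 20.0):
    # The ratio t^{zeta-eta} V_eta / U_zeta should stay within fixed
    # bounds [A, B], as in the Feller-type bound quoted below.
    print(r, r ** (zeta - eta) * V(eta, r, theta, Z) / U(zeta, r, theta, Z))
```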
In particular it is shown in Meerschaert (1993) (see also Scheffler (1998) for a more general case) that for any $\delta > 0$ there exist real constants $m, M, r_0$ such that

$$\frac{V_\eta(tr,\theta)}{V_\eta(r,\theta)} \ge m\,t^{\eta - 1/\lambda - \delta}$$

$$\frac{U_\zeta(tr,\theta)}{U_\zeta(r,\theta)} \le M\,t^{\zeta - 1/\Lambda + \delta}$$

for all $\|\theta\| = 1$, all $t \ge 1$ and all $r \ge r_0$. A uniform version of Feller (1971) p. 289 (see Scheffler (1998) for a detailed proof) yields that for some positive real constants $A, B, t_0$ we have

$$A \le \frac{t^{\zeta-\eta}V_\eta(t,\theta)}{U_\zeta(t,\theta)} \le B$$

for all $\|\theta\| = 1$ and all $t \ge t_0$.
If $\mu$ is regularly varying with index $E$ then Meerschaert and Scheffler (1997) show that there exists a unique direct sum decomposition $\mathbb{R}^d = V_1 \oplus \cdots \oplus V_p$ and real numbers $0 < \lambda = a_1 < \cdots < a_p = \Lambda$ with the following properties: the subspaces $V_1, \ldots, V_p$ are mutually orthogonal; for any nonzero vector $\theta \in V_i$ the marginal absolute moment $E|\langle Z_n, \theta\rangle|^r$ exists for $r < 1/a_i$ and this moment is infinite for $r > 1/a_i$; $P_iA_n = A_nP_i$ and $P_iE = EP_i$, where $P_i$ is the orthogonal projection onto $V_i$; every eigenvalue of $P_iE$ has real part equal to $a_i$; for any nonzero vector $x \in \mathbb{R}^d$ we have $\log\|P_iA_nx\|/\log n \to -a_i$ (uniformly on compact subsets); and $\log\|P_iA_n\|/\log n \to -a_i$. This is called the spectral decomposition.
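To make the convergence $\log\|P_iA_nx\|/\log n \to -a_i$ concrete, consider the simplest diagonal situation. The sketch below is our own illustration, not a construction from the paper: it assumes $E = \mathrm{diag}(a_1, a_2)$ with $a_1 < a_2$, so that $V_1$ and $V_2$ are the coordinate axes and we may take $A_n = n^{-E}$.

```python
# Illustrative sketch (assumed diagonal case, not from the paper):
# log||P_i A_n x|| / log n -> -a_i for A_n = n^{-E}, E = diag(a1, a2).
import numpy as np

a1, a2 = 0.6, 0.9                        # hypothetical real spectrum
x = np.array([0.3, -2.0])                # any vector with nonzero components
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # projections onto V1, V2

for n in (10 ** 3, 10 ** 6, 10 ** 9):
    An = np.diag([n ** (-a1), n ** (-a2)])
    for ai, Pi in zip((a1, a2), P):
        rate = np.log(np.linalg.norm(Pi @ An @ x)) / np.log(n)
        print(n, ai, round(rate, 4))     # approaches -ai as n grows
```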
The spectral decomposition implies that the distribution $P_i\mu$ of the random vector $P_iZ_n$ on $V_i$ varies regularly with exponent $P_iE$. Since every eigenvalue of the exponent $P_iE$ has real part equal to $a_i$, we can also apply the above R–O variation results whenever $\theta \in V_i$ and $\eta < 1/a_i < \zeta$. For a linear operator $A$ let $A^t$ denote its transpose.
Lemma 2.1. Assume that the distribution $\mu$ of $Z$ is regularly varying with index $E$; let $\mathbb{R}^d = V_1 \oplus \cdots \oplus V_p$ be the spectral decomposition of $\mathbb{R}^d$ and let $0 < a_1 < \cdots < a_p$ denote the corresponding real parts of the eigenvalues of $E$. Then for any $i = 1, \ldots, p$, given $0 < \eta < 1/a_i < \zeta$, there exist $n_0$ and constants $K_1, \ldots, K_4 > 0$ such that

(i) $nE|\langle A_nZ, \theta\rangle|^{\zeta}I(|\langle A_nZ, \theta\rangle|\le 1) < K_1$

(ii) $nE|\langle A_nZ, \theta\rangle|^{\zeta}I(\|A_nZ\|\le 1) < K_2$

(iii) $nE|\langle A_nZ, \theta\rangle|^{\eta}I(|\langle A_nZ, \theta\rangle| > 1) < K_3$

(iv) $nE|\langle A_nZ, \theta\rangle|^{\eta}I(\|A_nZ\| > 1) < K_4 \qquad (2.4)$

for all $n \ge n_0$ and all unit vectors $\theta$ in $V_i$.
Proof. Suppose we are given an arbitrary sequence of unit vectors $\theta_n$ in $V_i$, and write $r_nx_n = A_n^t\theta_n$ where $r_n > 0$ and $\|x_n\| = 1$. Then we have

$$nE|\langle A_nZ, \theta_n\rangle|^{\zeta}I(|\langle A_nZ, \theta_n\rangle|\le 1) = nE|\langle Z, A_n^t\theta_n\rangle|^{\zeta}I(|\langle Z, A_n^t\theta_n\rangle|\le 1) = nr_n^{\zeta}E|\langle Z, x_n\rangle|^{\zeta}I(|\langle Z, x_n\rangle|\le r_n^{-1}) = nr_n^{\zeta}U_\zeta(r_n^{-1}, x_n).$$

Since $\zeta > 1/a_i$, both $U_\zeta(r,\theta)$ and $V_0(r,\theta)$ are uniformly R–O varying on compact subsets of $\theta \ne 0$ in $V_i$. Then for some $m, M, t_0$ we have

$$m \le \frac{t^{\zeta}V_0(t,\theta)}{U_\zeta(t,\theta)} \le M \qquad (2.5)$$

for all $\|\theta\| = 1$ in $V_i$ and all $t \ge t_0$. Since (2.2) holds, where $\phi$ cannot be supported on any lower dimensional subspace, we must have $\|A_n\| \to 0$. Then $\|A_n\theta\| \to 0$ uniformly on compact subsets of $\mathbb{R}^d\setminus\{0\}$, and so for some $n_0$ we have $\|A_n\theta\| \le t_0^{-1}$ for all $\|\theta\| = 1$ and all $n \ge n_0$. Then for all $n \ge n_0$ we have

$$nr_n^{\zeta}U_\zeta(r_n^{-1}, x_n) \le m^{-1}nV_0(r_n^{-1}, x_n) = m^{-1}nP(|\langle Z, x_n\rangle| > r_n^{-1}) = m^{-1}nP(|\langle Z, A_n^t\theta_n\rangle| > 1) = m^{-1}nP(|\langle A_nZ, \theta_n\rangle| > 1) \le m^{-1}nP(\|A_nZ\| > 1)$$

and this upper bound holds independent of our choice of the sequence $\theta_n$. By (2.2) we have $nP(\|A_nZ\| > 1) = n(A_n\mu)\{z: \|z\| > 1\} \to \phi\{z: \|z\| > 1\}$ (if $\{z: \|z\| > 1\}$ is not a continuity set of $\phi$ then we can use the upper bound $nP(\|A_nZ\| > r)$ instead, where $0 < r < 1$). Then the sequence of real numbers $nP(\|A_nZ\| > 1)$ is bounded above by some $K > 0$, and assertion (i) of (2.4) holds with $K_1 = m^{-1}K$. Since

$$nE|\langle A_nZ, \theta_n\rangle|^{\zeta}I(\|A_nZ\|\le 1) \le nE|\langle A_nZ, \theta_n\rangle|^{\zeta}I(|\langle A_nZ, \theta_n\rangle|\le 1)$$

we immediately obtain assertion (ii) with $K_2 = K_1$. Now write

$$nE|\langle A_nZ, \theta_n\rangle|^{\eta}I(|\langle A_nZ, \theta_n\rangle| > 1) = nE|\langle Z, A_n^t\theta_n\rangle|^{\eta}I(|\langle Z, A_n^t\theta_n\rangle| > 1) = nr_n^{\eta}E|\langle Z, x_n\rangle|^{\eta}I(|\langle Z, x_n\rangle| > r_n^{-1}) = nr_n^{\eta}V_\eta(r_n^{-1}, x_n).$$

Since $\eta < 1/a_i$ we also have $V_\eta(r,\theta)$ uniformly R–O varying on compact subsets of $\theta \ne 0$ in $V_i$. Then for some $m', M', t_0$ we have

$$m' \le \frac{t^{\zeta-\eta}V_\eta(t,\theta)}{U_\zeta(t,\theta)} \le M' \qquad (2.6)$$

for all $\|\theta\| = 1$ in $V_i$ and all $t \ge t_0$. Choosing $t_0$ large enough so that both (2.5) and (2.6) hold whenever $t \ge t_0$ and $\|\theta\| = 1$ in $V_i$, and then choosing $n_0 = n_0(t_0)$ such that $\|A_n\theta\| \le t_0^{-1}$ for all $\|\theta\| = 1$ whenever $n \ge n_0$, we have for all $n \ge n_0$ that

$$nr_n^{\eta}V_\eta(r_n^{-1}, x_n) \le M'nr_n^{\zeta}U_\zeta(r_n^{-1}, x_n)$$

and so assertion (iii) of (2.4) holds with $K_3 = M'K_1$. Finally write

$$nE|\langle A_nZ, \theta_n\rangle|^{\eta}I(\|A_nZ\| > 1) = nE|\langle A_nZ, \theta_n\rangle|^{\eta}I(|\langle A_nZ, \theta_n\rangle| > 1) + nE|\langle A_nZ, \theta_n\rangle|^{\eta}I(\|A_nZ\| > 1 \text{ and } |\langle A_nZ, \theta_n\rangle|\le 1) \le nE|\langle A_nZ, \theta_n\rangle|^{\eta}I(|\langle A_nZ, \theta_n\rangle| > 1) + nP(\|A_nZ\| > 1)$$

so that assertion (iv) of (2.4) holds with $K_4 = K_3 + K$. This concludes the proof of Lemma 2.1.
3. MOVING AVERAGES
Suppose that $\{Z_j\}_{j=-\infty}^{\infty}$ is a double sequence of i.i.d. random vectors with common distribution $\mu$ varying regularly with index $E$, i.e. (2.2) holds. Define the moving average process

$$X_t = \sum_{j=-\infty}^{\infty} C_jZ_{t-j} \qquad (3.1)$$

where the $C_j$ are $d\times d$ real matrices such that for each $j$ either $C_j = 0$ or else $C_j^{-1}$ exists and $A_nC_j = C_jA_n$ for all $n$. The spectrum of the exponent of regular variation $E$ is connected with the tail behavior of the random vectors $Z_n$. Let $\Lambda = \max\{\Re(\alpha)\}$ and $\lambda = \min\{\Re(\alpha)\}$ where $\alpha$ ranges over the eigenvalues of $E$, as before. Lemma 2 of Meerschaert (1993), or Scheffler (1998), Corollary 4.21, implies that $E\|Z_n\|^r$ exists for $0 < r < 1/\Lambda$ and is infinite for $r > 1/\lambda$. Then the following lemma implies that the moving average (3.1) is well-defined as long as

$$\sum_{j=-\infty}^{\infty}\|C_j\|^{\delta} < \infty \qquad (3.2)$$

for some $\delta < 1/\Lambda$ with $\delta \le 1$. Here $\|A\|$ denotes the operator norm of a linear operator $A$ on $\mathbb{R}^d$. For the remainder of the paper we will assume that this is the case, so that the moving average (3.1) is well-defined.
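To fix ideas, here is a small simulation sketch of a finite version of (3.1). All concrete choices (tail index, coefficients, dimension) are our own, purely for illustration. Taking each $C_j$ as a scalar multiple of one fixed invertible matrix makes the commutation condition $A_nC_j = C_jA_n$ automatic whenever $A_n$ is a scalar multiple of the identity, which is the case when all eigenvalues of $E$ share the same real part.

```python
# Illustrative sketch (assumptions ours, not from the paper): a bivariate
# moving average X_t = sum_{j=0}^m C_j Z_{t-j} with heavy-tailed innovations.
import numpy as np

rng = np.random.default_rng(2)
alpha = 1.5                                   # hypothetical common tail index
d, n, m = 2, 10_000, 5                        # dimension, length, MA order

C0 = np.array([[1.0, 0.4], [0.0, 1.0]])       # fixed invertible matrix
C = [0.5 ** j * C0 for j in range(m + 1)]     # geometric decay gives (3.2)

# Symmetrized Pareto innovations: heavy tails with index alpha per coordinate.
Z = rng.pareto(alpha, size=(n + m, d)) * rng.choice([-1.0, 1.0], size=(n + m, d))

X = np.zeros((n, d))
for j, Cj in enumerate(C):                    # X_t = sum_j C_j Z_{t-j}
    X += Z[m - j: m - j + n] @ Cj.T

print(X[:3])                                  # first few observations
```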
Lemma 3.1. Suppose that $E\|Z_n\|^{\alpha} < \infty$ for all $\alpha < \alpha_0$ and that (3.2) holds for some $\delta < \alpha_0$ with $\delta \le 1$. Then (3.1) exists almost surely.

Proof. First suppose $\alpha_0 > 1$, so that $E\|Z_n\| < \infty$. Then $\sum\|C_j\|^{\delta} < \infty$ implies that $\sum\|C_j\| < \infty$: since the series converges we have $\|C_j\|^{\delta} \to 0$, so $\|C_j\| < 1$ for all large $j$, and for such $j$ we have $\|C_j\| \le \|C_j\|^{\delta}$. Then we have $\|X_t\| = \|\sum_jC_jZ_{t-j}\| \le \sum\|C_jZ_{t-j}\| \le \sum\|C_j\|\,\|Z_{t-j}\|$, so that $E\|X_t\| \le E\sum\|C_j\|\,\|Z_{t-j}\| = \sum\|C_j\|E\|Z_{t-j}\| = E\|Z_1\|\sum\|C_j\| < \infty$ by monotone convergence, and then $X_t$ exists almost surely. To see this, note that $\tilde X = \sum\|C_j\|\,\|Z_{t-j}\|$ is a well-defined nonnegative random variable with a finite mean, and so it is almost surely finite; i.e. the sum in (3.1) converges absolutely with probability one.

If $\alpha_0 \le 1$ then $\sum\|C_j\|^{\delta} < \infty$ for some $\delta < \alpha_0$ and $E\|Z_1\|^{\delta} < \infty$ as well. Also $|x+y|^{\delta} \le |x|^{\delta} + |y|^{\delta}$ for all $x, y \ge 0$, so

$$E\|X_t\|^{\delta} = E\left[\Big\|\sum_jC_jZ_{t-j}\Big\|^{\delta}\right] \le E\left[\Big(\sum_j\|C_j\|\,\|Z_{t-j}\|\Big)^{\delta}\right] \le E\left[\sum_j\|C_j\|^{\delta}\|Z_{t-j}\|^{\delta}\right] = \sum_j\|C_j\|^{\delta}E\|Z_1\|^{\delta} < \infty$$

so as before $X_t$ converges absolutely almost surely.
Theorem 3.2. Suppose that $X_t$ is a moving average defined by (3.1), where $\{Z_n\}$ are i.i.d. with common distribution $\mu$ on $\mathbb{R}^d$. Suppose that $\mu$ varies regularly with exponent $E$, where every eigenvalue of $E$ has real part exceeding $1/2$. Then for the norming operators $A_n$ in (2.2) we have

$$A_n(X_1 + \cdots + X_n - na_n) \Rightarrow U \qquad (3.3)$$

where $a_n \in \mathbb{R}^d$ and $U$ is full-dimensional and operator stable.
Remark 3.3. In the proof of Theorem 3.2 we show that we can take the centering constants $a_n = \sum_jC_jb_n$ in (3.3), where the $b_n$ are the centering constants from (2.3) above. The operator stable limit $U = \sum_jC_jY$ in (3.3) is nonnormal with Lévy measure $\sum_jC_j\phi$.
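In the scalar case the content of Theorem 3.2 and of this remark can be checked by simulation: the normalized partial sums of the moving average and $(\sum_jc_j)$ times the normalized partial sums of the innovations should share the same stable limit. The rough Monte Carlo sketch below is our own; it uses symmetric innovations so that no centering is needed, and all settings are hypothetical.

```python
# Illustrative Monte Carlo sketch (assumptions ours, not from the paper):
# scalar case of (3.3), comparing n^{-1/alpha} sum_t X_t with
# (sum_j c_j) * n^{-1/alpha} sum_t Z_t.
import numpy as np

rng = np.random.default_rng(3)
alpha, n, reps = 1.5, 5_000, 2_000
c = 0.5 ** np.arange(6)                  # scalar coefficients c_j

sums_X, sums_Z = [], []
for _ in range(reps):
    Z = rng.pareto(alpha, n + 6) * rng.choice([-1.0, 1.0], n + 6)
    X = sum(cj * Z[6 - j: 6 - j + n] for j, cj in enumerate(c))
    sums_X.append(n ** (-1 / alpha) * X.sum())
    sums_Z.append(c.sum() * n ** (-1 / alpha) * Z[6:].sum())

for q in (0.1, 0.5, 0.9):                # quantiles should roughly agree
    print(q, np.quantile(sums_X, q), np.quantile(sums_Z, q))
```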
Before we prove Theorem 3.2 we need an additional result, which shows that under the general assumptions on the moving average parameter matrices $C_j$ the spectral decomposition remains invariant under each $C_j$.
Lemma 3.4. Suppose that $\mu$ is regularly varying with index $E$, with spectral decomposition $\mathbb{R}^d = V_1 \oplus \cdots \oplus V_p$ and norming operators $A_n$ as in (2.2). If $C_j$ is invertible and $C_jA_n = A_nC_j$ for all $n$, then $C_jV_i = V_i$ for all $i = 1, \ldots, p$.

Proof. Let $L_i = V_i \oplus \cdots \oplus V_p$ and $L^i = V_1 \oplus \cdots \oplus V_i$. Meerschaert and Scheffler (1998) show that for any $x \in L_i\setminus L_{i+1}$ we have $\log\|A_nx\|/\log n \to -a_i$. This convergence is uniform on compact subsets of $L_i\setminus L_{i+1}$. For any $x \in L^i\setminus L^{i-1}$ we have $\log\|(A_n^t)^{-1}x\|/\log n \to a_i$ (uniformly on compact sets). Since $C_j$ is invertible there exist $c_j > 0$ and $b_j < \infty$ such that $c_j\|x\| \le \|C_jx\| \le b_j\|x\|$ for all $x$. Then $\log\|C_jA_nx\|/\log n \to -a_i$ if and only if $x \in L_i\setminus L_{i+1}$, and $\log\|A_nC_jx\|/\log n \to -a_i$ if and only if $C_jx \in L_i\setminus L_{i+1}$. Since $A_nC_j = C_jA_n$ this implies that $C_j^{-1}(L_i\setminus L_{i+1}) = L_i\setminus L_{i+1}$, and so $C_j(L_i\setminus L_{i+1}) = L_i\setminus L_{i+1}$. Similarly we have $\log\|(A_n^t)^{-1}x\|/\log n \to a_i$ if and only if $x \in L^i\setminus L^{i-1}$, as well as $\log\|(A_n^t)^{-1}(C_j^t)^{-1}x\|/\log n \to a_i$ if and only if $(C_j^t)^{-1}x \in L^i\setminus L^{i-1}$. Then $(C_j^t)^{-1}(L^i\setminus L^{i-1}) = L^i\setminus L^{i-1}$, and since the subspaces $V_i$ are mutually orthogonal it follows that $C_j(L^i\setminus L^{i-1}) = L^i\setminus L^{i-1}$. Then $C_j(L_i \cap L^i) = L_i \cap L^i$ and $L_i \cap L^i = V_i$. This concludes the proof of Lemma 3.4.
Proof of Theorem 3.2. Suppose that (2.3) holds and define

$$X_t^{(m)} = \sum_{|j|\le m}C_jZ_{t-j} \qquad a_n^{(m)} = \sum_{|j|\le m}C_jb_n \qquad U^{(m)} = \sum_{|j|\le m}C_jY$$

for $m\ge 1$, where $Y$ is the limit in (2.3). From (2.3) it follows that

$$\left(A_n\sum_{t=1}^n(Z_{t-j} - b_n): |j|\le m\right) \Rightarrow (Y, \ldots, Y)$$

since we know that the convergence holds for the $j = 0$ component, and since the difference between this component and any other component (at most $2m$ terms) tends to zero in probability. Now by the continuous mapping theorem we obtain

$$A_n(X_1^{(m)} + \cdots + X_n^{(m)} - na_n^{(m)}) \Rightarrow U^{(m)}$$

and since $U^{(m)} \to U = \sum_jC_jY$ almost surely it suffices to show (e.g. by Billingsley (1968) Theorem 4.2) that

$$\lim_{m\to\infty}\limsup_{n\to\infty}P(\|T\| > \varepsilon) = 0$$

where

$$T = \sum_{|j|>m}\sum_{t=1}^nA_nC_j(Z_{t-j} - b_n).$$

For this it suffices to show that

$$\lim_{m\to\infty}\limsup_{n\to\infty}P(|\langle T, x\rangle| > \varepsilon) = 0 \qquad (3.4)$$

for any unit vector $x\in V_i$ where $1\le i\le p$. Decompose $T = A + B - C$ where

$$A = \sum_{|j|>m}\sum_{t=1}^nA_nC_jZ_{t-j}I(\|A_nZ_{t-j}\|\le 1)$$

$$B = \sum_{|j|>m}\sum_{t=1}^nA_nC_jZ_{t-j}I(\|A_nZ_{t-j}\| > 1)$$

$$C = \sum_{|j|>m}nA_nC_jb_n.$$
First suppose that $a_i > 1$, so that assertion (ii) of Lemma 2.1 holds with $\zeta = 1$. Write $C_j^tx = r_jx_j$ where $r_j > 0$ and $\|x_j\| = 1$. Note that $x_j\in V_i$ by Lemma 3.4. Then

$$P[|\langle A, x\rangle| > \varepsilon/3] \le 3\varepsilon^{-1}E|\langle A, x\rangle| \le 3\varepsilon^{-1}\sum_{|j|>m}nE|\langle A_nC_jZ_1, x\rangle|I(\|A_nZ_1\|\le 1)$$
$$= 3\varepsilon^{-1}\sum_{|j|>m}nE|\langle C_jA_nZ_1, x\rangle|I(\|A_nZ_1\|\le 1) = 3\varepsilon^{-1}\sum_{|j|>m}nE|\langle A_nZ_1, C_j^tx\rangle|I(\|A_nZ_1\|\le 1)$$
$$= 3\varepsilon^{-1}\sum_{|j|>m}r_j\,nE|\langle A_nZ_1, x_j\rangle|I(\|A_nZ_1\|\le 1) \le 3\varepsilon^{-1}K_2\sum_{|j|>m}\|C_j\|$$

which tends to zero as $m\to\infty$, in view of the fact that (3.2) holds with $\delta < 1/a_p \le 1/a_i < 1$. Choosing $0 < \delta < 1/a_p$ so that (3.2) holds, we have that assertion (iv) of Lemma 2.1 holds with $\eta = \delta$. Note also that $|x+y|^{\delta} \le |x|^{\delta} + |y|^{\delta}$ since $\delta < 1/a_i < 1$. Then

$$P[|\langle B, x\rangle| > \varepsilon/3] \le 3\varepsilon^{-\delta}E|\langle B, x\rangle|^{\delta} \le 3\varepsilon^{-\delta}\sum_{|j|>m}nE|\langle A_nC_jZ_1, x\rangle|^{\delta}I(\|A_nZ_1\| > 1)$$
$$= 3\varepsilon^{-\delta}\sum_{|j|>m}nE|\langle A_nZ_1, C_j^tx\rangle|^{\delta}I(\|A_nZ_1\| > 1) = 3\varepsilon^{-\delta}\sum_{|j|>m}r_j^{\delta}\,nE|\langle A_nZ_1, x_j\rangle|^{\delta}I(\|A_nZ_1\| > 1) \le 3\varepsilon^{-\delta}K_4\sum_{|j|>m}\|C_j\|^{\delta}$$

which tends to zero as $m\to\infty$, in view of the fact that (3.2) holds. The standard convergence criteria for triangular arrays (e.g. Araujo and Giné (1980)) show that we can take $b_n = EZ_1I(\|A_nZ_1\|\le 1)$ in (2.3). Then for all $n\ge n_0$ we have

$$|\langle C, x\rangle| \le \sum_{|j|>m}n|\langle A_nC_jb_n, x\rangle| = \sum_{|j|>m}n|\langle A_nC_jEZ_1I(\|A_nZ_1\|\le 1), x\rangle| \le \sum_{|j|>m}nE|\langle A_nC_jZ_1, x\rangle|I(\|A_nZ_1\|\le 1) \le K_2\sum_{|j|>m}\|C_j\|$$

which tends to zero as $m\to\infty$, as in the argument for $A$. Since

$$P(|\langle T, x\rangle| > \varepsilon) \le P(|\langle A, x\rangle| > \varepsilon/3) + P(|\langle B, x\rangle| > \varepsilon/3) + P(|\langle C, x\rangle| > \varepsilon/3)$$

it follows that (3.4) holds when $x$ is a unit vector in $V_i$ and $a_i > 1$.
Suppose then that $1/2 < a_i \le 1$ and note that (2.4) part (ii) holds with $\zeta = 2$. Since $C = EA$ we have by Chebyshev's inequality that

$$P[|\langle A - C, x\rangle| > \varepsilon/2] \le 2\varepsilon^{-2}\,\mathrm{var}(\langle A, x\rangle) = 2\varepsilon^{-2}\sum_{|j|>m}n\,\mathrm{var}[\langle A_nC_jZ_1, x\rangle I(\|A_nZ_1\|\le 1)]$$
$$\le 2\varepsilon^{-2}\sum_{|j|>m}nE[\langle A_nC_jZ_1, x\rangle^2I(\|A_nZ_1\|\le 1)] = 2\varepsilon^{-2}\sum_{|j|>m}r_j^2\,nE[\langle A_nZ_1, x_j\rangle^2I(\|A_nZ_1\|\le 1)] \le 2\varepsilon^{-2}K_2\sum_{|j|>m}\|C_j\|^2$$

which tends to zero as $m\to\infty$, in view of the fact that (3.2) holds with $\delta \le 1$. If $1/2 < a_i < 1$ then (2.4) part (iv) holds with $\eta = 1$ and so

$$P[|\langle B, x\rangle| > \varepsilon/2] \le 2\varepsilon^{-1}E|\langle B, x\rangle| \le 2\varepsilon^{-1}\sum_{|j|>m}nE|\langle A_nC_jZ_1, x\rangle|I(\|A_nZ_1\| > 1) = 2\varepsilon^{-1}\sum_{|j|>m}r_j\,nE|\langle A_nZ_1, x_j\rangle|I(\|A_nZ_1\| > 1) \le 2\varepsilon^{-1}K_4\sum_{|j|>m}\|C_j\|$$

which tends to zero as $m\to\infty$, in view of the fact that (3.2) holds with $\delta \le 1$. Finally, if $a_i = 1$ then (2.4) part (iv) holds with $\eta = \delta$, where we choose $\delta < 1$ such that (3.2) holds. Then argue as before that

$$P[|\langle B, x\rangle| > \varepsilon/2] \le 2\varepsilon^{-\delta}E|\langle B, x\rangle|^{\delta} \le 2\varepsilon^{-\delta}\sum_{|j|>m}nE|\langle A_nC_jZ_1, x\rangle|^{\delta}I(\|A_nZ_1\| > 1) = 2\varepsilon^{-\delta}\sum_{|j|>m}r_j^{\delta}\,nE|\langle A_nZ_1, x_j\rangle|^{\delta}I(\|A_nZ_1\| > 1) \le 2\varepsilon^{-\delta}K_4\sum_{|j|>m}\|C_j\|^{\delta}$$

which tends to zero as $m\to\infty$, in view of the fact that (3.2) holds. Since

$$P(|\langle T, x\rangle| > \varepsilon) \le P(|\langle A - C, x\rangle| > \varepsilon/2) + P(|\langle B, x\rangle| > \varepsilon/2)$$

it follows that (3.4) holds when $x$ is a unit vector in $V_i$ and $a_i \le 1$, which concludes the proof of Theorem 3.2.
Remark 3.5. The assumption that $A_nC_j = C_jA_n$ is somewhat restrictive, but necessary for our method of proof. It may be possible to relax this restriction. For example, it is not hard to check that for finite moving averages $X_t = C_0Z_t + \cdots + C_qZ_{t-q}$ we have

$$(CA_nC^{-1})(X_1 + \cdots + X_n - na_n) \Rightarrow CY$$

where $C = C_0 + \cdots + C_q$. We conjecture that the same is true in general, i.e. that (3.3) holds even with $A_nC_j \ne C_jA_n$ if we replace $A_n$ by $CA_nC^{-1}$, but we have not been able to prove this.
4. SAMPLE AUTOCOVARIANCE MATRIX FOR MOVING AVERAGES
In the previous section we proved that moving averages of random vectors with regularly varying tail probabilities are asymptotically operator stable. In this section we prove that the sample autocovariance matrix formed from these moving averages is also asymptotically operator stable as a random element of the vector space of $d\times d$ symmetric matrices. The sample covariance matrix at lag $h$ of the moving average $X_t$ is defined by

$$\hat{\Gamma}_n(h) = \frac{1}{n}\sum_{t=1}^nX_tX_{t+h}'. \qquad (4.1)$$
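Computationally, $\hat\Gamma_n(h)$ is a single matrix product. The sketch below (our own; the function names are hypothetical) implements (4.1) and the centered variant $\hat G_n(h)$ introduced in (4.8) later in this section, assuming the series is available up to time $n + h$.

```python
# Illustrative sketch (ours, not from the paper): lag-h sample covariance
# matrices for a d-dimensional series X of shape (n + h, d).
import numpy as np

def sample_autocov(X, h, n):
    """(4.1): (1/n) sum_{t=1}^n X_t X_{t+h}' (0-based indexing)."""
    return X[:n].T @ X[h:h + n] / n

def sample_autocov_centered(X, h, n):
    """(4.8): the same with the sample mean of X_1, ..., X_n removed."""
    Xc = X - X[:n].mean(axis=0)
    return Xc[:n].T @ Xc[h:h + n] / n
```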
Meerschaert and Scheffler (1999) consider the asymptotic behavior of the sample covariance matrix of an i.i.d. sequence of random vectors with regularly varying tails. Suppose that $\{Z_n\}$ are i.i.d. random vectors on $\mathbb{R}^d$ with common distribution $\mu$, where $\mu$ varies regularly with exponent $E$, i.e. (2.2) holds and every eigenvalue of $E$ has real part exceeding $1/4$. Then

$$A_n\left(\sum_{i=1}^nZ_iZ_i' - nB_n\right)A_n^t \Rightarrow W \qquad (4.2)$$

where $A_n$ is taken from the definition (2.2) above, $B_n = EZ_1Z_1'I(\|A_nZ_1\|\le 1)$, and $W$ is a nonnormal operator stable random element of the vector space $M_d^s$ of symmetric $d\times d$ matrices with real entries.
Theorem 4.1. Suppose that $\hat\Gamma_n(h)$ is the sample covariance matrix defined by (4.1), where $X_t$ is the moving average (3.1) and $\{Z_n\}$ are i.i.d. with common distribution $\mu$ on $\mathbb{R}^d$. Suppose that $\mu$ varies regularly with exponent $E$, where every eigenvalue of $E$ has real part exceeding $1/4$, and (2.2) holds. Assume that $Ex'Z_n = 0$ for all $x\in\mathbb{R}^d$ such that $E|x'Z_n| < \infty$. Then for all $h$ we have

$$nA_n\left[\hat\Gamma_n(h) - \sum_{j=-\infty}^{\infty}C_jB_nC_{j+h}^t\right]A_n^t \Rightarrow \sum_{j=-\infty}^{\infty}C_jWC_{j+h}^t \qquad (4.3)$$

where $A_n$, $B_n$ and $W$ are as in (4.2).
The method of proof is similar to that of Theorem 3.2 above but much more involved. The assertion of Theorem 4.1 follows easily from the following two main propositions, whose proofs are included in the Appendix. The first key proposition asserts that the quadratic terms of $\hat\Gamma_n(h)$ dominate.
Proposition 4.2. Under the assumptions of Theorem 4.1 we have

$$A_n\left[n\hat\Gamma_n(h) - \sum_{t=1}^n\sum_{j=-\infty}^{\infty}C_jZ_{t-j}Z_{t-j}'C_{j+h}^t\right]A_n^t \overset{P}{\to} 0 \qquad (4.4)$$

as $n\to\infty$.
The next proposition establishes the convergence of the quadratic terms of the sample autocovariance matrix to the limit in (4.3).

Proposition 4.3. Under the assumptions of Theorem 4.1 we have

$$A_n\left[\sum_{t=1}^n\sum_{j=-\infty}^{\infty}C_j(Z_{t-j}Z_{t-j}' - B_n)C_{j+h}^t\right]A_n^t \Rightarrow \sum_{j=-\infty}^{\infty}C_jWC_{j+h}^t \qquad (4.5)$$

as $n\to\infty$.

Proof of Theorem 4.1. Combining Propositions 4.2 and 4.3 we have

$$nA_n\left[\hat\Gamma_n(h) - \sum_{j=-\infty}^{\infty}C_jB_nC_{j+h}^t\right]A_n^t = A_n\left[n\hat\Gamma_n(h) - \sum_{t=1}^n\sum_{j=-\infty}^{\infty}C_jZ_{t-j}Z_{t-j}'C_{j+h}^t\right]A_n^t + A_n\left[\sum_{t=1}^n\sum_{j=-\infty}^{\infty}C_j(Z_{t-j}Z_{t-j}' - B_n)C_{j+h}^t\right]A_n^t \Rightarrow 0 + \sum_{j=-\infty}^{\infty}C_jWC_{j+h}^t$$

which concludes the proof.
Remark 4.4. If $a_p < 1/2$ then $EZ_1Z_1'$ exists and consequently the autocovariance matrix $\Gamma(h) = EX_tX_{t+h}'$ is well-defined. In this case Meerschaert and Scheffler (1999) show that we can take $B_n = EZ_1Z_1'$ in (4.2) and then (4.3) becomes

$$nA_n[\hat\Gamma_n(h) - \Gamma(h)]A_n^t \Rightarrow \sum_{j=-\infty}^{\infty}C_jWC_{j+h}^t. \qquad (4.6)$$

Since $A_n \to 0$ slower than $n^{-1/2}$ when $a_p < 1/2$, we also have in this case that $\hat\Gamma_n(h) \to \Gamma(h)$ in probability, and (4.6) provides the rate of convergence. If $a_1 > 1/2$ then Meerschaert and Scheffler (1999) show that we can take $B_n = 0$ in (4.2) and then (4.3) becomes

$$nA_n\hat\Gamma_n(h)A_n^t \Rightarrow \sum_{j=-\infty}^{\infty}C_jWC_{j+h}^t. \qquad (4.7)$$

Since $A_n \to 0$ faster than $n^{-1/2}$ when $a_1 > 1/2$, we also have in this case that $\hat\Gamma_n(h)$ is not bounded in probability, and (4.7) provides the rate at which it blows up.
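In the scalar case, with innovation tail index $\alpha$ we have $A_n \approx n^{-1/\alpha}$ up to slowly varying factors, so (4.7) says that $n^{1-2/\alpha}\hat\gamma_n(h)$ is stochastically bounded when $\alpha < 2$. The sketch below is our own rough numerical illustration of that rate (slowly varying corrections ignored), using the degenerate moving average $C_0 = 1$, $C_j = 0$ otherwise.

```python
# Illustrative sketch (ours, not from the paper): the blow-up rate in (4.7)
# for scalar symmetric Pareto(alpha) innovations, alpha < 2.
import numpy as np

rng = np.random.default_rng(4)
alpha = 1.5                                   # a_1 = 1/alpha > 1/2

for n in (10 ** 3, 10 ** 4, 10 ** 5):
    Z = rng.pareto(alpha, n) * rng.choice([-1.0, 1.0], n)
    gamma_hat = np.mean(Z * Z)                # lag-0 sample covariance
    print(n, n ** (1 - 2 / alpha) * gamma_hat)  # roughly stable in n
```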
An application of Proposition 3.1 in Resnick (1986) shows that the regular variation (2.2) implies weak convergence of the associated point processes. All of the results in this paper can also be established using point process methods. In the special case of scalar norming (where (2.3) holds with $A_n = a_n^{-1}I$ for all $n$) we say that $\{Z_n\}$ belongs to the domain of attraction of $Y$. In this special case a result equivalent to Theorem 4.1 was obtained by Davis, Marengo and Resnick (1985) and Davis and Marengo (1990) using point process methods. In this case the random matrix $W$ is multivariate stable. Since we are norming by constants in this case, one immediately obtains the asymptotic distribution of the sample autocorrelations using the continuous mapping theorem. It is not possible to extend this argument to the general case considered in this paper (with norming by linear operators).
In Theorem 4.1, the assumption that $Ex'Z_n = 0$ when the mean exists can be removed if we use the centered version of the sample covariance matrix defined by

$$\hat{G}_n(h) = \frac{1}{n}\sum_{t=1}^n(X_t - \bar X)(X_{t+h} - \bar X)' \qquad (4.8)$$

where $\bar X = (1/n)\sum_{t=1}^nX_t$ is the sample mean.
Theorem 4.5. Suppose that $\hat G_n(h)$ is the sample autocovariance matrix defined by (4.8), where $X_t$ is the moving average (3.1) and $\{Z_n\}$ are i.i.d. with common distribution $\mu$ on $\mathbb{R}^d$. Suppose that $\mu$ varies regularly with exponent $E$, where every eigenvalue of $E$ has real part exceeding $1/4$. Then for all $h$ we have

$$nA_n\left[\hat G_n(h) - \sum_{j=-\infty}^{\infty}C_jB_nC_{j+h}^t\right]A_n^t \Rightarrow \sum_{j=-\infty}^{\infty}C_jWC_{j+h}^t \qquad (4.9)$$

where $A_n$, $B_n$ and $W$ are as in (4.2).
Proof. Note that the difference between the two formulas (4.1) and (4.8) can be written in the form

$$\frac{1}{n}\sum_{t=1}^n(X_t-\bar X)(X_{t+h}-\bar X)' - \frac{1}{n}\sum_{t=1}^nX_tX_{t+h}' = -\frac{1}{n}\sum_{t=1}^nX_t\bar X' - \frac{1}{n}\sum_{t=1}^n\bar XX_{t+h}' + \frac{1}{n}\sum_{t=1}^n\bar X\bar X'$$
$$= -\bar X\bar X' - \bar X\left(\bar X - \frac{1}{n}\sum_{t=1}^hX_t + \frac{1}{n}\sum_{t=n+1}^{n+h}X_t\right)' + \bar X\bar X' = -\bar X\bar X' + o_P(\bar X\bar X')$$

and so it suffices to show that $nA_n\bar X\bar X'A_n^t \to 0$ in probability. For this it suffices to show that for all unit vectors $x\in V_i$ and $y\in V_l$ we have

$$nx'A_n\bar X\bar X'A_n^ty = (\sqrt n\,x'A_n\bar X)(\sqrt n\,y'A_n\bar X) \overset{P}{\to} 0$$

and so it suffices to show that $\sqrt n\,x'A_n\bar X \to 0$ in probability for any unit vector $x\in V_i$, for any $i = 1, \ldots, p$.

If $a_i < 1$ then $EP_iZ_t$ exists. Note that (4.8) is not changed if we replace $Z_t$ by $Z_t - EP_iZ_t$, so that without loss of generality $EP_iZ_t = 0$. Let $S_n = X_1 + \cdots + X_n$, so that $\bar X = S_n/n$. In the case $a_i < 1$, Meerschaert (1993), along with the spectral decomposition, shows that we can take $P_ib_n = EP_iZ_t = 0$ in (2.3), and then we have from (3.3) that $A_nP_iS_n = nA_nP_i\bar X \Rightarrow U$, and so $n^{1/2}A_nP_i\bar X \to 0$ in probability. It follows that $n^{1/2}x'A_n\bar X \to 0$ in probability. If $a_i > 1$ then Meerschaert (1993) along with the spectral decomposition shows that we can also take $P_ib_n = 0$ in (2.3), and it follows as before that $n^{1/2}x'A_n\bar X \to 0$ in probability.

If $a_i = 1$ we can apply Lemma 2.1 part (ii) with $\zeta = 1+\varepsilon > 1/a_i$ for any $\varepsilon > 0$. By (3.3) we have $A_n(S_n - na_n) \Rightarrow U$, and so $nA_n(\bar X - a_n) \Rightarrow U$, and then $nx'A_n(\bar X - a_n) \Rightarrow x'U$ by continuous mapping. Then $\sqrt n\,x'A_n(\bar X - a_n) \to 0$ in probability, and so it suffices to show that $\sqrt n\,x'A_na_n \to 0$, where without loss of generality $a_n = \sum_jC_jb_n$ and $b_n = EZ_1I(\|A_nZ_1\|\le 1)$. Argue as in the proof of Lemma 5.1 that $U_1(r,\theta) \le CU_{1+\varepsilon}(r,\theta)$ for all $r$ large and all unit vectors $\theta$ in $V_i$. Then as in the proof of Lemma 5.1 we have (letting $A_n^tx = r_n\theta_n$ with $r_n > 0$ and $\|\theta_n\| = 1$) that

$$|nx'A_nb_n| \le nE|\langle A_nZ_1, x\rangle|I(\|A_nZ_1\|\le 1) \le C\|P_iA_n^{-1}\|^{\varepsilon}\,nE|\langle A_nZ_1, x\rangle|^{1+\varepsilon}I(|\langle A_nZ_1, x\rangle|\le 1) < C\|P_iA_n^{-1}\|^{\varepsilon}K_1$$

for all $n$ large, and so for all large $n$ we have

$$|n^{1/2}x'A_na_n| = n^{-1/2}\left|\sum_{j=-\infty}^{\infty}nx'A_nC_jb_n\right| \le n^{-1/2}\sum_{j=-\infty}^{\infty}\|C_j\|\,|nx'A_nb_n| \le n^{-1/2}C\|P_iA_n^{-1}\|^{\varepsilon}K_1\sum_{j=-\infty}^{\infty}\|C_j\|$$

where the series converges. Since $\log(n^{-1/2}\|P_iA_n^{-1}\|^{\varepsilon})/\log n \to -\frac{1}{2} + a_i\varepsilon < 0$ for small $\varepsilon > 0$, we obtain $n^{1/2}x'A_na_n \to 0$, which concludes the proof.
Remark 4.6. Davis, Marengo and Resnick (1985) and Davis and Marengo (1990) also establish the joint convergence of the sample autocovariance matrices over all lags $|h|\le h_0$, in the special case of scalar norming. It is straightforward to extend the results in this paper to establish joint convergence as well. In Meerschaert and Scheffler (1998) we apply this joint convergence to compute the asymptotic distribution of the sample cross-correlations for a bivariate moving average. This extends the results of Davis et al. to the case where the norming operators $A_n$ and the coefficients $C_j$ are diagonal. The general case, where both $A_n$ and $C_j$ are arbitrary linear operators, remains open.
5. APPENDIX
In this section we prove Proposition 4.2 and Proposition 4.3 above, and hence complete the proof of Theorem 4.1. Throughout this section we assume that the assumptions of Theorem 4.1 hold. That is, we assume that $\{Z_n\}$ are i.i.d. with common distribution $\mu$ on $\mathbb{R}^d$, and that $\mu$ varies regularly with index $E$, where every eigenvalue of $E$ has real part exceeding $1/4$ and (2.2) holds. Assume further that $Ex'Z_n = 0$ for all $x\in\mathbb{R}^d$ such that $E|x'Z_n| < \infty$.

Recall from Section 2 that in view of the spectral decomposition we can (and hence will) assume without loss of generality that $A_n$ is block diagonal with respect to the direct sum decomposition $\mathbb{R}^d = V_1 \oplus \cdots \oplus V_p$, where the $V_i$-spaces are mutually orthogonal and form the spectral decomposition relative to $E$. Let $1/4 < a_1 < \cdots < a_p$ denote the real parts of the eigenvalues of $E$.

Since the proofs of the two propositions are quite long we split them into several lemmas.
Proof of Proposition 4.2. Let $V_1, \ldots, V_p$ be the spectral decomposition of $E$ and $a_1 < \cdots < a_p$ the associated real spectrum (see Section 2 above). Recall that $V_i \perp V_j$ for $i\ne j$ and that $P_i$ is the orthogonal projection onto $V_i$. We also assume as before that (3.2) holds for some $\delta < 1/a_p$ with $\delta\le 1$. It will suffice to show that

$$x'A_n\left[n\hat\Gamma_n(h) - \sum_{t=1}^n\sum_{j=-\infty}^{\infty}C_jZ_{t-j}Z_{t-j}'C_{j+h}^t\right]A_n^ty \overset{P}{\to} 0 \qquad (5.1)$$

for all unit vectors $x\in V_i$ and $y\in V_l$. Note that

$$n\hat\Gamma_n(h) = \sum_{t=1}^nX_tX_{t+h}' = \sum_{t=1}^n\left(\sum_{j=-\infty}^{\infty}C_jZ_{t-j}\right)\left(\sum_{q=-\infty}^{\infty}C_qZ_{t+h-q}\right)'$$
$$= \sum_{t=1}^n\left(\sum_{j=-\infty}^{\infty}C_jZ_{t-j}\right)\left(\sum_{k=-\infty}^{\infty}C_{k+h}Z_{t-k}\right)' \qquad (\text{setting } k = q - h)$$
$$= \sum_{t=1}^n\sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty}C_jZ_{t-j}Z_{t-k}'C_{k+h}^t = \sum_{t=1}^n\sum_{j=-\infty}^{\infty}\left(C_jZ_{t-j}Z_{t-j}'C_{j+h}^t + \sum_{k\ne j}C_jZ_{t-j}Z_{t-k}'C_{k+h}^t\right)$$

so that (5.1) is equivalent to

$$x'A_n\left[\sum_{t=1}^n\sum_{j=-\infty}^{\infty}\sum_{k\ne j}C_jZ_{t-j}Z_{t-k}'C_{k+h}^t\right]A_n^ty \overset{P}{\to} 0. \qquad (5.2)$$

We now have to consider several cases separately.
Case (i): Suppose that $x\in V_i$ and $y\in V_l$ are unit vectors with $a_i + a_l > 1$. Choose $\delta > 0$ such that $\delta > 1/(a_i + a_l)$, $\delta < 1/a_i$, $\delta < 1/a_l$, and $\delta\le 1$. Write $C_j^tA_n^tx = r\theta$ where $r > 0$ and $\|\theta\| = 1$. Then $r = \|C_j^tA_n^tx\| = \|C_j^tA_n^tP_ix\| = \|C_j^tP_iA_n^tx\| \le \|C_j\|\,\|P_iA_n\|$ and so

$$E|x'A_nC_jZ_1|^{\delta} = E|(C_j^tA_n^tx)'Z_1|^{\delta} = E|r\theta'Z_1|^{\delta} \le \|P_iA_n\|^{\delta}\|C_j\|^{\delta}E|\langle Z_1,\theta\rangle|^{\delta} = \|P_iA_n\|^{\delta}\|C_j\|^{\delta}E|\langle Z_1, P_i\theta\rangle|^{\delta} = \|P_iA_n\|^{\delta}\|C_j\|^{\delta}E|\langle P_iZ_1,\theta\rangle|^{\delta} \le \|P_iA_n\|^{\delta}\|C_j\|^{\delta}E\|P_iZ_1\|^{\delta}$$

and likewise

$$E|Z_2'C_{k+h}^tA_n^ty|^{\delta} \le \|P_lA_n\|^{\delta}\|C_{k+h}\|^{\delta}E\|P_lZ_2\|^{\delta}.$$

Then

$$E\left|x'A_n\left[\sum_{t=1}^n\sum_{j=-\infty}^{\infty}\sum_{k\ne j}C_jZ_{t-j}Z_{t-k}'C_{k+h}^t\right]A_n^ty\right|^{\delta} = E\left|\sum_{t=1}^n\sum_{j=-\infty}^{\infty}\sum_{k\ne j}x'A_nC_jZ_{t-j}Z_{t-k}'C_{k+h}^tA_n^ty\right|^{\delta}$$
$$\le E\left[\left(\sum_{t=1}^n\sum_{j=-\infty}^{\infty}\sum_{k\ne j}|x'A_nC_jZ_{t-j}Z_{t-k}'C_{k+h}^tA_n^ty|\right)^{\delta}\right] \le E\left[\sum_{t=1}^n\sum_{j=-\infty}^{\infty}\sum_{k\ne j}|x'A_nC_jZ_{t-j}|^{\delta}\,|Z_{t-k}'C_{k+h}^tA_n^ty|^{\delta}\right]$$
$$= \sum_{t=1}^n\sum_{j=-\infty}^{\infty}\sum_{k\ne j}E|x'A_nC_jZ_{t-j}|^{\delta}\,E|Z_{t-k}'C_{k+h}^tA_n^ty|^{\delta} \qquad(\text{by independence of } Z_{t-j} \text{ and } Z_{t-k})$$
$$\le n\|P_iA_n\|^{\delta}\|P_lA_n\|^{\delta}E\|P_iZ_1\|^{\delta}E\|P_lZ_2\|^{\delta}\sum_{j=-\infty}^{\infty}\sum_{k\ne j}\|C_j\|^{\delta}\|C_{k+h}\|^{\delta}$$

where $E\|P_iZ_1\|^{\delta} < \infty$ because $\delta < 1/a_i$, $E\|P_lZ_2\|^{\delta} < \infty$ because $\delta < 1/a_l$, and

$$\sum_{j=-\infty}^{\infty}\sum_{k\ne j}\|C_j\|^{\delta}\|C_{k+h}\|^{\delta} \le \left(\sum_{j=-\infty}^{\infty}\|C_j\|^{\delta}\right)\left(\sum_{j=-\infty}^{\infty}\|C_{j+h}\|^{\delta}\right) = \left(\sum_{j=-\infty}^{\infty}\|C_j\|^{\delta}\right)^2 < \infty$$

and furthermore, since

$$\frac{\log\|P_iA_n\|}{\log n}\to -a_i \qquad\text{and}\qquad \frac{\log\|P_lA_n\|}{\log n}\to -a_l,$$

we have

$$\frac{\log(n\|P_iA_n\|^{\delta}\|P_lA_n\|^{\delta})}{\log n}\to 1 - \delta a_i - \delta a_l < 0$$

since $\delta > 1/(a_i + a_l)$. Then $n\|P_iA_n\|^{\delta}\|P_lA_n\|^{\delta}\to 0$ and it follows that (5.2) holds in this case.
Case (ii): Suppose that $x\in V_i$ and $y\in V_l$ are unit vectors with $a_i + a_l \le 1$. Then $a_i < 1$ and $a_l < 1$, so $E\|P_iZ_1\| < \infty$ and $E\|P_lZ_2\| < \infty$. Then by assumption $Ex'Z_1 = 0$ and $EZ_2'y = 0$. Define $Z_{in} = Z_iI(\|A_nZ_i\|\le 1)$ and $\mu_n = EZ_{in}$, and let

$$A_{i,j,n} = (Z_{in}-\mu_n)(Z_{jn}-\mu_n)'$$
$$B_{i,j,n} = Z_{in}\mu_n' + \mu_nZ_{jn}'$$
$$C_{i,j,n} = Z_iZ_j'I(\|A_nZ_i\| > 1 \text{ or } \|A_nZ_j\| > 1)$$
$$D_{i,j,n} = -\mu_n\mu_n'$$

so that $EA_{i,j,n} = 0$ and $Z_iZ_j' = A_{i,j,n} + B_{i,j,n} + C_{i,j,n} + D_{i,j,n}$ for $i\ne j$. Then let

$$A = \sum_{t=1}^n\sum_{j=-\infty}^{\infty}\sum_{k\ne j}x'A_nC_jA_{t-j,t-k,n}C_{k+h}^tA_n^ty$$

and similarly for $B$, $C$, $D$, to obtain

$$x'A_n\left[\sum_{t=1}^n\sum_{j=-\infty}^{\infty}\sum_{k\ne j}C_jZ_{t-j}Z_{t-k}'C_{k+h}^t\right]A_n^ty = A + B + C + D$$

and so in order to establish (5.2) it suffices to show that each of $A, B, C, D$ tends to zero in probability. Let $\bar Z_{in} = Z_{in} - \mu_n$, so that $A_{i,j,n} = \bar Z_{in}\bar Z_{jn}'$. Since $\bar Z_{in}$ and $\bar Z_{jn}$ are independent with mean zero for $i\ne j$, it is easy to check that $EA = 0$. Then

$$\mathrm{var}(A) = E\left(\sum_{t=1}^n\sum_{j=-\infty}^{\infty}\sum_{k\ne j}x'A_nC_j\bar Z_{t-j,n}\bar Z_{t-k,n}'C_{k+h}^tA_n^ty\right)^2$$
$$= \sum_{t_1=1}^n\sum_{t_2=1}^n\sum_{j_1=-\infty}^{\infty}\sum_{j_2=-\infty}^{\infty}\sum_{k_1\ne j_1}\sum_{k_2\ne j_2}E[\langle\bar Z_{t_1-j_1,n}, C_{j_1}^tA_n^tx\rangle\langle\bar Z_{t_1-k_1,n}, C_{k_1+h}^tA_n^ty\rangle\langle\bar Z_{t_2-j_2,n}, C_{j_2}^tA_n^tx\rangle\langle\bar Z_{t_2-k_2,n}, C_{k_2+h}^tA_n^ty\rangle].$$

If any one of $t_1-j_1$, $t_1-k_1$, $t_2-j_2$, or $t_2-k_2$ is distinct from the other three, then this term vanishes since $E\bar Z_{in} = 0$, using independence to write the mean of the product as the product of the means. The only nonvanishing terms for a given $t_1, j_1, k_1$ occur when either

(a) $t_1-j_1 = t_2-j_2$ and $t_1-k_1 = t_2-k_2$, or
(b) $t_1-j_1 = t_2-k_2$ and $t_1-k_1 = t_2-j_2$.

Given $t_1, j_1, k_1$, for every $t_2 = 1, \ldots, n$ there exists a unique $j_2, k_2$ satisfying (a) or (b). Indeed we have

(a) $j_2 = (t_2-t_1)+j_1$ and $k_2 = (t_2-t_1)+k_1$
(b) $j_2 = (t_2-t_1)+k_1$ and $k_2 = (t_2-t_1)+j_1$

so that $j_2 \ne k_2$ is guaranteed. Now let $p = t_1-j_1$, $q = t_1-k_1$, $j = j_1$, $t = t_1$, $k = k_1$, and $\Delta = t_2-t_1$, so that in case

(a) $t_2-j_2 = p$, $t_2-k_2 = q$, $j_2 = \Delta+j$, and $k_2 = \Delta+k$
(b) $t_2-j_2 = q$, $t_2-k_2 = p$, $j_2 = \Delta+k$, and $k_2 = \Delta+j$.

Then in case (a) the only nonvanishing terms for a given $t, j, k$ are of the form

$$E[\langle\bar Z_{pn}, C_j^tA_n^tx\rangle\langle\bar Z_{qn}, C_{k+h}^tA_n^ty\rangle\langle\bar Z_{pn}, C_{j+\Delta}^tA_n^tx\rangle\langle\bar Z_{qn}, C_{k+h+\Delta}^tA_n^ty\rangle]$$
$$= E[x'A_nC_j\bar Z_{pn}\bar Z_{pn}'C_{j+\Delta}^tA_n^tx]\,E[y'A_nC_{k+h}\bar Z_{qn}\bar Z_{qn}'C_{k+h+\Delta}^tA_n^ty] = (x'A_nC_jM_nC_{j+\Delta}^tA_n^tx)(y'A_nC_{k+h}M_nC_{k+h+\Delta}^tA_n^ty)$$

where $M_n = E\bar Z_{in}\bar Z_{in}'$. In case (b) the only nonvanishing terms are of the form

$$E[\langle\bar Z_{pn}, C_j^tA_n^tx\rangle\langle\bar Z_{qn}, C_{k+h}^tA_n^ty\rangle\langle\bar Z_{qn}, C_{k+\Delta}^tA_n^tx\rangle\langle\bar Z_{pn}, C_{j+h+\Delta}^tA_n^ty\rangle] = (x'A_nC_jM_nC_{j+h+\Delta}^tA_n^ty)(y'A_nC_{k+h}M_nC_{k+\Delta}^tA_n^tx)$$

so that

$$\mathrm{var}(A) = \sum_{t=1}^n\sum_{t+\Delta=1}^n\sum_{j=-\infty}^{\infty}\sum_{k\ne j}[(x'A_nC_jM_nC_{j+\Delta}^tA_n^tx)(y'A_nC_{k+h}M_nC_{k+h+\Delta}^tA_n^ty) + (x'A_nC_jM_nC_{j+h+\Delta}^tA_n^ty)(y'A_nC_{k+h}M_nC_{k+\Delta}^tA_n^tx)].$$

Now we need to bound the terms in this sum. Write $C_j^tx = r_jx_j$, $A_n^tx_j = a_{nj}x_{nj}$, $C_j^ty = \bar r_jy_j$, and $A_n^ty_j = b_{nj}y_{nj}$, where $x_j, x_{nj}, y_j, y_{nj}$ are unit vectors and $r_j, a_{nj}, \bar r_j, b_{nj}$ are positive reals. Then for example we have

$$x'A_nC_jM_nC_{j+\Delta}^tA_n^tx = x'C_jA_nM_nA_n^tC_{j+\Delta}^tx = (C_j^tx)'A_nM_nA_n^t(C_{j+\Delta}^tx) = r_jr_{j+\Delta}\,x_j'A_nM_nA_n^tx_{j+\Delta}$$

where

$$x_j'A_nM_nA_n^tx_{j+\Delta} = x_j'A_nE\bar Z_{1n}\bar Z_{1n}'A_n^tx_{j+\Delta} = E\langle\bar Z_{1n}, A_n^tx_j\rangle\langle\bar Z_{1n}, A_n^tx_{j+\Delta}\rangle \le \sqrt{E\langle\bar Z_{1n}, A_n^tx_j\rangle^2\,E\langle\bar Z_{1n}, A_n^tx_{j+\Delta}\rangle^2}.$$

Since $\bar Z_{1n} = Z_{1n} - \mu_n$, where $\mu_n = EZ_{1n}$, we have for example that

$$E\langle\bar Z_{1n}, A_n^tx_j\rangle^2 = E(\langle Z_{1n}, A_n^tx_j\rangle - E\langle Z_{1n}, A_n^tx_j\rangle)^2 = \mathrm{var}(\langle Z_{1n}, A_n^tx_j\rangle) \le E[\langle Z_{1n}, A_n^tx_j\rangle^2]$$

so that, calculating in a similar manner for the remaining terms, we have for example

$$x'A_nC_jM_nC_{j+\Delta}^tA_n^tx \le r_jr_{j+\Delta}\sqrt{E\langle\bar Z_{1n}, A_n^tx_j\rangle^2\,E\langle\bar Z_{1n}, A_n^tx_{j+\Delta}\rangle^2}$$

and then

$$\mathrm{var}(A) \le \sum_{t=1}^n\sum_{t+\Delta=1}^n\sum_{j=-\infty}^{\infty}\sum_{k\ne j}\Big(r_jr_{j+\Delta}[E\langle\bar Z_{1n}, A_n^tx_j\rangle^2E\langle\bar Z_{1n}, A_n^tx_{j+\Delta}\rangle^2]^{1/2}\;\bar r_{k+h}\bar r_{k+h+\Delta}[E\langle\bar Z_{1n}, A_n^ty_{k+h}\rangle^2E\langle\bar Z_{1n}, A_n^ty_{k+h+\Delta}\rangle^2]^{1/2}$$
$$+\; r_j\bar r_{j+h+\Delta}[E\langle\bar Z_{1n}, A_n^tx_j\rangle^2E\langle\bar Z_{1n}, A_n^ty_{j+h+\Delta}\rangle^2]^{1/2}\;\bar r_{k+h}r_{k+\Delta}[E\langle\bar Z_{1n}, A_n^ty_{k+h}\rangle^2E\langle\bar Z_{1n}, A_n^tx_{k+\Delta}\rangle^2]^{1/2}\Big).$$

To bound the expectations on the right hand side of the inequality above we need the following lemma.
Lemma 5.1. For all unit vectors $x\in V_i$ and $y\in V_l$ there exist $n_0$ and constants $K_n > 0$ satisfying $nK_n\to 0$ such that for all $n\ge n_0$ and all $j, k, h, \Delta$ we have

$$[E\langle\bar Z_{1n}, A_n^tx_j\rangle^2E\langle\bar Z_{1n}, A_n^tx_{j+\Delta}\rangle^2\;E\langle\bar Z_{1n}, A_n^ty_{k+h}\rangle^2E\langle\bar Z_{1n}, A_n^ty_{k+h+\Delta}\rangle^2]^{1/2}$$
$$+\;[E\langle\bar Z_{1n}, A_n^tx_j\rangle^2E\langle\bar Z_{1n}, A_n^ty_{j+h+\Delta}\rangle^2\;E\langle\bar Z_{1n}, A_n^ty_{k+h}\rangle^2E\langle\bar Z_{1n}, A_n^tx_{k+\Delta}\rangle^2]^{1/2} \le K_n. \qquad (5.3)$$
Proof of Lemma 5.1. Since $E\langle\bar Z_{1n}, A_n^t\theta\rangle^2 \le E\langle Z_{1n}, A_n^t\theta\rangle^2$, as noted above, it suffices to bound the latter moments. Suppose that $a_i > 1/2$. Then $a_l < 1/2$ (because $a_i + a_l \le 1$), so $E\|P_iZ_1\|^2 = \infty$ while $E\|P_lZ_1\|^2 < \infty$. Recall that $P_l = P_l^2 = P_l^t$ for this orthogonal projection operator, that $A_nP_l = P_lA_n$, and that $|\langle x,\theta\rangle| \le \|x\|$ when $\theta$ is a unit vector. Then for all unit vectors $\theta\in V_l$ we have

$$E\langle Z_{1n}, A_n^t\theta\rangle^2 = E\langle A_nZ_1,\theta\rangle^2I(\|A_nZ_1\|\le 1) \le E\langle A_nZ_1,\theta\rangle^2 = E\langle A_nZ_1, P_l\theta\rangle^2 = E\langle P_lA_nP_lZ_1,\theta\rangle^2 \le E\|P_lA_nP_lZ_1\|^2 \le \|P_lA_n\|^2E\|P_lZ_1\|^2.$$

Since $1/a_i < 2$ we can apply part (ii) of Lemma 2.1 with $\zeta = 2$ to get

$$nE\langle Z_{1n}, A_n^t\theta\rangle^2 = nE\langle A_nZ_1,\theta\rangle^2I(\|A_nZ_1\|\le 1) < K_2$$

for all $n\ge n_0$ and all unit vectors $\theta$ in $V_i$. Then (5.3) holds in this case with

$$K_n = 2[(K_2/n)^2(\|P_lA_n\|^2E\|P_lZ_1\|^2)^2]^{1/2} = 2(K_2/n)\|P_lA_n\|^2E\|P_lZ_1\|^2$$

so that $nK_n\to 0$ since $\|P_lA_n\|\to 0$. If $a_l > 1/2$, so that $a_i < 1/2$, then a similar argument establishes (5.3) with $K_n = 2(K_2/n)\|P_iA_n\|^2E\|P_iZ_1\|^2$, and again $nK_n\to 0$. Otherwise we have $a_i\le 1/2$ and $a_l\le 1/2$. If both $E\|P_iZ_1\|^2$ and $E\|P_lZ_1\|^2$ exist then (5.3) holds with $K_n = 2\|P_iA_n\|^2\|P_lA_n\|^2E\|P_iZ_1\|^2E\|P_lZ_1\|^2$ and then

$$\frac{\log nK_n}{\log n} = \frac{\log(n\|P_iA_n\|^2\|P_lA_n\|^2)}{\log n} + o(1) \to 1 - 2(a_i + a_l) < 0$$

since $a_i > 1/4$ and $a_l > 1/4$ by assumption, so again $nK_n\to 0$. Otherwise suppose that $a_i\le 1/2$ and $a_l\le 1/2$ with $E\|P_iZ_1\|^2 = \infty$ and $E\|P_lZ_1\|^2 < \infty$. Then $a_i = 1/2$, and so we can apply part (ii) of Lemma 2.1 with $\zeta = 2+\varepsilon$ for any $\varepsilon > 0$. Observe that for all $t$ and all $\theta$ we have

$$U_2(t,\theta) = \int_{|\langle z,\theta\rangle|\le t}|\langle z,\theta\rangle|^2\mu(dz) = \int_{0\le|\langle z,\theta\rangle|<1}|\langle z,\theta\rangle|^2\mu(dz) + \int_{1\le|\langle z,\theta\rangle|\le t}|\langle z,\theta\rangle|^2\mu(dz)$$
$$\le \int_{0\le|\langle z,\theta\rangle|<1}|\langle z,\theta\rangle|^2\mu(dz) + \int_{1\le|\langle z,\theta\rangle|\le t}|\langle z,\theta\rangle|^{2+\varepsilon}\mu(dz) \le 1 + U_{2+\varepsilon}(t,\theta)$$

and since $U_{2+\varepsilon}(t,\theta)\to\infty$ uniformly on $\|\theta\| = 1$ in $V_i$, for some $t_0$ and $C > 0$ we have, for all $t\ge t_0$ and all $\|\theta\| = 1$ in $V_i$, that $U_2(t,\theta) \le CU_{2+\varepsilon}(t,\theta)$. As in the proof of Lemma 2.1 we can choose $n_0$ such that $\|A_n\theta\|\le t_0^{-1}$ for all $\|\theta\| = 1$ and all $n\ge n_0$. Enlarge $n_0$ if necessary so that Lemma 2.1 also applies. Then

$$nE\langle Z_{1n}, A_n^tx_j\rangle^2 = nE\langle Z_1, A_n^tx_j\rangle^2I(\|A_nZ_1\|\le 1) \le na_{nj}^2U_2(a_{nj}^{-1}, x_{nj}) \le Ca_{nj}^{-\varepsilon}\,na_{nj}^{2+\varepsilon}U_{2+\varepsilon}(a_{nj}^{-1}, x_{nj}) = Ca_{nj}^{-\varepsilon}\,nE|\langle A_nZ_1, x_j\rangle|^{2+\varepsilon}I(|\langle A_nZ_1, x_j\rangle|\le 1) < Ca_{nj}^{-\varepsilon}K_1$$

for all $n\ge n_0 = n_0(t_0)$ and all $j$. Note also that for all $j$ and all $n$ we have $a_{nj} = \|A_n^tx_j\| \ge \min\{\|A_n^t\theta\|: \|\theta\| = 1 \text{ in } V_i\} = 1/\|P_iA_n^{-1}\|$, so that $a_{nj}^{-\varepsilon} \le \|P_iA_n^{-1}\|^{\varepsilon}$ for all $n$ and all $j$. Then (5.3) holds in this case with

$$K_n = 2[(n^{-1}CK_1\|P_iA_n^{-1}\|^{\varepsilon})^2(\|P_lA_n\|^2E\|P_lZ_1\|^2)^2]^{1/2} = 2n^{-1}CK_1\|P_iA_n^{-1}\|^{\varepsilon}\|P_lA_n\|^2E\|P_lZ_1\|^2$$

so that

$$\frac{\log nK_n}{\log n} = \frac{\log(\|P_iA_n^{-1}\|^{\varepsilon}\|P_lA_n\|^2)}{\log n} + o(1) \to \varepsilon a_i - 2a_l < 0$$

for all $\varepsilon > 0$ sufficiently small. Note that $\varepsilon > 0$ can be chosen independently of $x, y, j$, and so $nK_n\to 0$ here too. Finally, if $a_i\le 1/2$ and $a_l\le 1/2$ with $E\|P_iZ_1\|^2 < \infty$ and $E\|P_lZ_1\|^2 = \infty$, then (5.3) holds with $K_n = 2n^{-1}CK_1\|P_lA_n^{-1}\|^{\varepsilon}\|P_iA_n\|^2E\|P_iZ_1\|^2$ and $\log(nK_n)/\log n \to \varepsilon a_l - 2a_i < 0$ for $\varepsilon > 0$ sufficiently small, and again $nK_n\to 0$. This concludes the proof of Lemma 5.1.
Returning to the proof of Proposition 4.2, we get from Lemma 5.1

$$\mathrm{var}(A) \le K_n\sum_{t=1}^n\sum_{t+\Delta=1}^n\sum_{j=-\infty}^{\infty}\sum_{k\ne j}(r_jr_{j+\Delta}\bar r_{k+h}\bar r_{k+h+\Delta} + r_j\bar r_{j+h+\Delta}\bar r_{k+h}r_{k+\Delta})$$
$$\le K_n\sum_{t=1}^n\sum_{\Delta=1-t}^{n-t}\sum_{j=-\infty}^{\infty}\sum_{k\ne j}(\|C_j\|\|C_{j+\Delta}\|\|C_{k+h}\|\|C_{k+h+\Delta}\| + \|C_j\|\|C_{j+h+\Delta}\|\|C_{k+h}\|\|C_{k+\Delta}\|)$$
$$\le 2nK_n\left(\sum_{j=-\infty}^{\infty}\|C_j\|\right)^4 \to 0$$

and so $A\to 0$ in probability.
Next we have

$$B = \sum_{t=1}^n\sum_{j=-\infty}^{\infty}\sum_{k\ne j}x'A_nC_j[Z_{t-j,n}\mu_n' + \mu_nZ_{t-k,n}']C_{k+h}^tA_n^ty$$

and we will let

$$B_{rs}^{jk} = x'A_nC_j[Z_{rn}\mu_n' + \mu_nZ_{sn}']C_k^tA_n^ty = (C_j^tx)'A_n[Z_{rn}\mu_n' + \mu_nZ_{sn}']A_n^t(C_k^ty) = r_j\bar r_k(A_n^tx_j)'[Z_{rn}\mu_n' + \mu_nZ_{sn}'](A_n^ty_k) = r_j\bar r_k[B_r^{(1)} + B_s^{(2)}]$$

where

$$B_r^{(1)} = (A_n^tx_j)'Z_{rn}\mu_n'A_n^ty_k \qquad B_s^{(2)} = (A_n^tx_j)'\mu_nZ_{sn}'A_n^ty_k.$$

In the next lemma we derive bounds on the expectation of $B_{rs}^{jk}$.

Lemma 5.2. For all unit vectors $x\in V_i$ and $y\in V_l$ there exists $n_0$ such that for all $n\ge n_0$ and all $j, k, r, s$ we have

$$E|B_{rs}^{jk}| \le \|C_j\|\|C_k\|K_n \qquad (5.4)$$

where $nK_n\to 0$.

Proof of Lemma 5.2. Write

$$E|B_{rs}^{jk}| = r_j\bar r_kE|B_r^{(1)} + B_s^{(2)}| \le \|C_j\|\|C_k\|(E|B_r^{(1)}| + E|B_s^{(2)}|)$$

where

$$E|B_r^{(1)}| = E|\langle Z_{1n}, A_n^tx_j\rangle\langle\mu_n, A_n^ty_k\rangle| = |\langle\mu_n, A_n^ty_k\rangle|\,E|\langle Z_{1n}, A_n^tx_j\rangle|.$$

Since $a_i < 1$ we have $E\|P_iZ_1\| < \infty$. Then $E|\langle Z_{1n}, A_n^tx_j\rangle| \le \|P_iA_n\|E\|P_iZ_1\|$ as in the proof of Lemma 5.1. Apply part (iv) of Lemma 2.1 with $\eta = 1$ (so that $\eta < 1/a_l$). Using the fact that $E\langle Z_1, y_k\rangle = 0$ we have for some $n_1 > 0$ that

$$n|\langle\mu_n, A_n^ty_k\rangle| = n|\langle EZ_{1n}, A_n^ty_k\rangle| = n|E\langle Z_1, A_n^ty_k\rangle I(\|A_nZ_1\|\le 1)| = n|E\langle Z_1, A_n^ty_k\rangle I(\|A_nZ_1\| > 1)| \le nE|\langle A_nZ_1, y_k\rangle|I(\|A_nZ_1\| > 1) < K_4$$

for all $n\ge n_1$ and all $k$. It follows that

$$nE|B_r^{(1)}| \le K_4\|P_iA_n\|E\|P_iZ_1\|$$

for all $n\ge n_1$ and all $j, k$. A similar argument (note that $a_l < 1$, so that $E\|P_lZ_1\| < \infty$, and apply part (iv) of Lemma 2.1 again with $\eta = 1$, so that $\eta < 1/a_i$) shows that for some $n_2 > 0$ we have

$$nE|B_s^{(2)}| \le K_4\|P_lA_n\|E\|P_lZ_1\|$$

for all $n\ge n_2$ and all $j, k$. Letting $n_0 = \max\{n_1, n_2\}$ we have that (5.4) holds with

$$nK_n = K_4(\|P_iA_n\|E\|P_iZ_1\| + \|P_lA_n\|E\|P_lZ_1\|)$$

for all $j, k, r, s$ and all $n\ge n_0$. Then $nK_n\to 0$ since $\|A_n\|\to 0$, which concludes the proof of Lemma 5.2.
Returning to the proof of Proposition 4.2, Lemma 5.2 yields

$$E|B| \le \sum_{t=1}^n\sum_{j=-\infty}^{\infty}\sum_{k\ne j}\|C_j\|\|C_{k+h}\|K_n \le nK_n\left(\sum_{j=-\infty}^{\infty}\|C_j\|\right)^2 \to 0$$

and so $B\to 0$ in probability.

Next we write

$$C_{rs}^{jk} = x'A_nC_jZ_rZ_s'I(\|A_nZ_r\| > 1 \text{ or } \|A_nZ_s\| > 1)C_k^tA_n^ty = r_j\bar r_k\langle Z_r, A_n^tx_j\rangle\langle Z_s, A_n^ty_k\rangle I(\|A_nZ_r\| > 1 \text{ or } \|A_nZ_s\| > 1)$$

so that

$$E|C_{rs}^{jk}| \le \|C_j\|\|C_k\|E(|\langle Z_r, A_n^tx_j\rangle\langle Z_s, A_n^ty_k\rangle|I(\|A_nZ_r\| > 1 \text{ or } \|A_nZ_s\| > 1))$$
$$\le \|C_j\|\|C_k\|(E|x_j'A_nZ_rZ_s'A_n^ty_k|I(\|A_nZ_r\| > 1) + E|x_j'A_nZ_rZ_s'A_n^ty_k|I(\|A_nZ_s\| > 1)).$$

The next lemma gives bounds on $E|C_{rs}^{jk}|$.

Lemma 5.3. For all unit vectors $x\in V_i$ and $y\in V_l$ there exists $n_0$ such that for all $n\ge n_0$ and all $j, k, r, s$ we have

$$E|C_{rs}^{jk}| \le \|C_j\|\|C_k\|K_n \qquad (5.5)$$

where $nK_n\to 0$.

Proof of Lemma 5.3. For $r\ne s$ write

$$E|x_j'A_nZ_rZ_s'A_n^ty_k|I(\|A_nZ_r\| > 1) = E|\langle Z_s, A_n^ty_k\rangle|\,E|\langle Z_r, A_n^tx_j\rangle|I(\|A_nZ_r\| > 1)$$

where

$$E|\langle Z_s, A_n^ty_k\rangle| = b_{nk}E|\langle Z_s, y_{nk}\rangle| \le b_{nk}E\|P_lZ_2\| \le \|P_lA_n\|E\|P_lZ_2\|$$

and

$$nE|\langle Z_r, A_n^tx_j\rangle|I(\|A_nZ_r\| > 1) \le K_4$$

by part (iv) of Lemma 2.1, and similarly

$$nE|x_j'A_nZ_rZ_s'A_n^ty_k|I(\|A_nZ_s\| > 1) \le \|P_iA_n\|E\|P_iZ_1\|K_4$$

so that (5.5) holds with

$$nK_n = K_4(\|P_iA_n\|E\|P_iZ_1\| + \|P_lA_n\|E\|P_lZ_1\|)$$

which is the same $K_n$ as for Lemma 5.2, so $nK_n\to 0$. This concludes the proof of Lemma 5.3.

In the proof of Proposition 4.2 we get, using Lemma 5.3,

$$E|C| \le \sum_{t=1}^n\sum_{j=-\infty}^{\infty}\sum_{k\ne j}\|C_j\|\|C_{k+h}\|K_n \le nK_n\left(\sum_{j=-\infty}^{\infty}\|C_j\|\right)^2 \to 0$$

and so $C\to 0$ in probability.
Finally we can write

$$D^{jk} = x'A_nC_j\mu_n\mu_n'C_k^tA_n^ty = r_j\bar r_k\langle\mu_n, A_n^tx_j\rangle\langle\mu_n, A_n^ty_k\rangle$$

and use the bound proved in the following lemma.

Lemma 5.4. For all unit vectors $x\in V_i$ and $y\in V_l$ there exists $n_0$ such that for all $n\ge n_0$ and all $j, k$ we have

$$|D^{jk}| \le \|C_j\|\|C_k\|K_n$$

where $nK_n\to 0$.

Proof of Lemma 5.4. Recall from the proof of Lemma 5.2 above that for all large $n$ we have $|\langle\mu_n, A_n^tx_j\rangle| < K_4/n$. Likewise $|\langle\mu_n, A_n^ty_k\rangle| \le K_4/n$, so we can take $K_n = (K_4/n)^2$, which concludes the proof of Lemma 5.4.

Now we have from Lemma 5.4

$$|D| \le \sum_{t=1}^n\sum_{j=-\infty}^{\infty}\sum_{k\ne j}\|C_j\|\|C_{k+h}\|K_n \le nK_n\left(\sum_{j=-\infty}^{\infty}\|C_j\|\right)^2 \to 0$$

which completes the proof of Proposition 4.2.
We now prove Proposition 4.3.

Proof of Proposition 4.3. Define

$$S_n^{(m)} = \sum_{|j|\le m}C_j\left(\sum_{t=1}^nZ_{t-j}Z_{t-j}'\right)C_{j+h}^t \qquad d_n^{(m)} = n\sum_{|j|\le m}C_jB_nC_{j+h}^t \qquad W^{(m)} = \sum_{|j|\le m}C_jWC_{j+h}^t$$

for $m\ge 1$. We will now apply (4.2) to show that

$$\left(A_n\left(\sum_{t=1}^nZ_{t-j}Z_{t-j}' - nB_n\right)A_n^t: |j|\le m\right) \Rightarrow (W, \ldots, W). \qquad (5.6)$$

Since we know that the convergence holds for the $j = 0$ component, it suffices to show that the difference between this component and any other component tends to zero in probability. This difference consists of at most $2m$ identically distributed terms, any one of which must tend to zero in probability in view of the convergence (4.2) above, and so (5.6) holds. Now by the continuous mapping theorem we obtain from (5.6) that

$$A_n(S_n^{(m)} - d_n^{(m)})A_n^t \Rightarrow W^{(m)}$$

and since

$$\sum_{|j|>m}\|C_jWC_{j+h}^t\| \le \|W\|\sum_{|j|>m}\|C_j\|\|C_{j+h}\| \le \|W\|\left(\sum_{|j|>m}\|C_j\|^2\right)^{1/2}\left(\sum_{|j|>m}\|C_{j+h}\|^2\right)^{1/2} \to 0$$

as $m\to\infty$, we also have

$$W^{(m)} \to \sum_{j=-\infty}^{\infty}C_jWC_{j+h}^t$$

almost surely. Then in order to establish (4.5) it suffices to show (e.g. by Billingsley (1968) Theorem 4.2) that

$$\lim_{m\to\infty}\limsup_{n\to\infty}P(\|T\| > \varepsilon) = 0 \qquad (5.7)$$

where

$$T = \sum_{|j|>m}\sum_{t=1}^nA_nC_j(Z_{t-j}Z_{t-j}' - B_n)C_{j+h}^tA_n^t.$$

Note that $\|T\|^2 = \sum_{i,j}|e_i'Te_j|^2$, where $e_1, \ldots, e_d$ is the standard basis for $\mathbb{R}^d$. Then in order to show that (5.7) holds it suffices to show that

$$\lim_{m\to\infty}\limsup_{n\to\infty}P(|x'Ty| > \varepsilon) = 0 \qquad (5.8)$$

for $x\in V_i$ and $y\in V_l$ where $1\le i, l\le p$. Write $T = A + B - C$ where

$$A = \sum_{|j|>m}\sum_{t=1}^nA_nC_jZ_{t-j}Z_{t-j}'C_{j+h}^tA_n^tI(\|A_nZ_{t-j}\|\le 1)$$

$$B = \sum_{|j|>m}\sum_{t=1}^nA_nC_jZ_{t-j}Z_{t-j}'C_{j+h}^tA_n^tI(\|A_nZ_{t-j}\| > 1)$$

$$C = \sum_{|j|>m}\sum_{t=1}^nA_nC_jB_nC_{j+h}^tA_n^t.$$
Since $T = A + B - C$ it suffices to show that (5.8) holds with $T$ replaced by $A$, $B$, or $C$. We begin with $B$. Choose $\delta > 0$ with $\delta < 1/a_p$ and $\delta\le 1$, and let $\delta_1 = \delta/2 < 1$. Use the Markov inequality along with the fact that $|x+y|^{\delta_1} \le |x|^{\delta_1} + |y|^{\delta_1}$ to obtain

$$P(|x'By| > \varepsilon) \le \varepsilon^{-\delta_1}E|x'By|^{\delta_1} \le \varepsilon^{-\delta_1}\sum_{|j|>m}nE|x'C_jA_nZ_1Z_1'A_n^tC_{j+h}^ty|^{\delta_1}I(\|A_nZ_1\| > 1)$$
$$\le \varepsilon^{-\delta_1}\sum_{|j|>m}\|C_j\|^{\delta_1}\|C_{j+h}\|^{\delta_1}nE|x_j'A_nZ_1Z_1'A_n^ty_{j+h}|^{\delta_1}I(\|A_nZ_1\| > 1)$$
$$\le \varepsilon^{-\delta_1}\left[\sum_{|j|>m}\|C_{j+h}\|^{\delta}\right]^{1/2}\left[\sum_{|j|>m}\|C_j\|^{\delta}\big(nE|x_j'A_nZ_1Z_1'A_n^ty_{j+h}|^{\delta_1}I(\|A_nZ_1\| > 1)\big)^2\right]^{1/2}$$

by the Schwarz inequality. Then in order to show that (5.8) holds with $T$ replaced by $B$ it will suffice to show that for some $K_4 > 0$ we have

$$nE|x_j'A_nZ_1Z_1'A_n^ty_{j+h}|^{\delta_1}I(\|A_nZ_1\| > 1) \le K_4 \qquad (5.9)$$

for all $j, h$ and all $n$ large. Apply the Schwarz inequality again to see that

$$nE|x_j'A_nZ_1Z_1'A_n^ty_{j+h}|^{\delta_1}I(\|A_nZ_1\| > 1) = nE|\langle A_nZ_1, x_j\rangle\langle A_nZ_1, y_{j+h}\rangle|^{\delta_1}I(\|A_nZ_1\| > 1)$$
$$\le \sqrt{nE|\langle A_nZ_1, x_j\rangle|^{\delta}I(\|A_nZ_1\| > 1)\;nE|\langle A_nZ_1, y_{j+h}\rangle|^{\delta}I(\|A_nZ_1\| > 1)}$$

where $nE|\langle A_nZ_1, x_j\rangle|^{\delta}I(\|A_nZ_1\| > 1) < K_4$ for all $n$ large and all $j$ by an application of Lemma 2.1 part (iv) with $\eta = \delta < 1/a_i$. A similar argument holds for the remaining term, which establishes (5.9), and so (5.8) holds with $T$ replaced by $B$.
Next we look at $A$ and $C$, which requires us to consider different cases. First suppose that $a_i > 1/2$ and $a_l > 1/2$. As in the proof of Proposition 4.2 we write $C_j^tx = r_jx_j$, $A_n^tx_j = a_{nj}x_{nj}$, $C_j^ty = \bar r_jy_j$ and $A_n^ty_j = b_{nj}y_{nj}$, where $x_j, x_{nj}, y_j, y_{nj}$ are unit vectors and $r_j, a_{nj}, \bar r_j, b_{nj}$ are positive reals. By the Markov inequality we get

$$P(|x'Ay| > \varepsilon) \le \varepsilon^{-1}E|x'Ay| \le \varepsilon^{-1}\sum_{|j|>m}nE|x'C_jA_nZ_1Z_1'A_n^tC_{j+h}^ty|I(\|A_nZ_1\|\le 1)$$
$$\le \varepsilon^{-1}\sum_{|j|>m}\|C_j\|\|C_{j+h}\|\,nE|\langle Z_1, A_n^tx_j\rangle\langle Z_1, A_n^ty_{j+h}\rangle|I(\|A_nZ_1\|\le 1).$$

Next we will show that for some $K_2 > 0$ we have

$$nE|\langle Z_1, A_n^tx_j\rangle\langle Z_1, A_n^ty_{j+h}\rangle|I(\|A_nZ_1\|\le 1) \le K_2 \qquad (5.10)$$

for all $j, h$ and all $n$ large. By the Schwarz inequality the left hand side of (5.10) is bounded above by

$$\sqrt{nE\langle Z_1, A_n^tx_j\rangle^2I(\|A_nZ_1\|\le 1)\;nE\langle Z_1, A_n^ty_{j+h}\rangle^2I(\|A_nZ_1\|\le 1)}$$

where $nE\langle Z_1, A_n^tx_j\rangle^2I(\|A_nZ_1\|\le 1) < K_2$ independent of $j$ for $n$ large, as in the proof of Lemma 5.1 above, since $a_i > 1/2$ (so that $\zeta = 2 > 1/a_i$ in Lemma 2.1 part (ii)). Similarly $nE\langle Z_1, A_n^ty_{j+h}\rangle^2I(\|A_nZ_1\|\le 1) < K_2$ independent of $j$ for $n$ large, by another application of Lemma 2.1 part (ii) (note that $a_l > 1/2$, so that $\zeta = 2 > 1/a_l$). Hence (5.10) holds for all $j, h$ and all $n$ large, and so for $n$ large we have

$$P(|x'Ay| > \varepsilon) \le \varepsilon^{-1}K_2\sum_{|j|>m}\|C_j\|\|C_{j+h}\| \le \varepsilon^{-1}K_2\left(\sum_{|j|>m}\|C_j\|^2\right)^{1/2}\left(\sum_{|j|>m}\|C_{j+h}\|^2\right)^{1/2}$$

which tends to zero as $m\to\infty$, and so (5.8) holds with $T$ replaced by $A$ in this case. Since $|x'Cy| = |Ex'Ay|$ we also have that (5.8) holds with $T$ replaced by $C$ in this case.
Finally suppose that either $a_i\le 1/2$ or $a_l\le 1/2$ (or both). As in the proof of Proposition 4.2 define $Z_{in} = Z_iI(\|A_nZ_i\|\le 1)$ and let $Q_{in} = Z_{in}Z_{in}' - EZ_{in}Z_{in}' = Z_{in}Z_{in}' - B_n$. By the Markov inequality we get

$$P(|x'(A-C)y| > \varepsilon) \le \varepsilon^{-2}E[(x'(A-C)y)^2] = \varepsilon^{-2}E\left(\sum_{|j|>m}\sum_{t=1}^nx'A_nC_jQ_{t-j,n}C_{j+h}^tA_n^ty\right)^2$$
$$= \varepsilon^{-2}\sum_{|j|>m}\sum_{t=1}^n\sum_{|j'|>m}\sum_{t'=1}^nE(x'A_nC_jQ_{t-j,n}C_{j+h}^tA_n^ty\;x'A_nC_{j'}Q_{t'-j',n}C_{j'+h}^tA_n^ty).$$

Since the $Q_{in}$ are independent with mean zero, the only terms that contribute to the sum above are those for which $t - j = t' - j'$. Then, bounding each term in absolute value, the sum above is at most

$$\varepsilon^{-2}\sum_{|j|>m}\sum_{t=1}^n\sum_{j'=1-t+j}^{n-t+j}r_j\bar r_{j+h}r_{j'}\bar r_{j'+h}\,|E(x_j'A_nQ_{t-j,n}A_n^ty_{j+h}\;x_{j'}'A_nQ_{t-j,n}A_n^ty_{j'+h})|.$$

Next we will show that for some $K_2 > 0$ we have

$$n|E(x_j'A_nQ_{1n}A_n^ty_{j+h}\;x_{j'}'A_nQ_{1n}A_n^ty_{j'+h})| \le 2K_2 \qquad (5.11)$$

for all $j, j', h$ and all $n$ large. Write

$$x_j'A_nQ_{1n}A_n^ty_k = x_j'A_nZ_{1n}Z_{1n}'A_n^ty_k - Ex_j'A_nZ_{1n}Z_{1n}'A_n^ty_k = \langle A_nZ_{1n}, x_j\rangle\langle A_nZ_{1n}, y_k\rangle - E\langle A_nZ_{1n}, x_j\rangle\langle A_nZ_{1n}, y_k\rangle = \xi_{nj}\beta_{nk} - E\xi_{nj}\beta_{nk}$$

where

$$\xi_{nj} = \langle A_nZ_{1n}, x_j\rangle \qquad \beta_{nk} = \langle A_nZ_{1n}, y_k\rangle.$$

Then

$$E(x_j'A_nQ_{1n}A_n^ty_{j+h}\;x_{j'}'A_nQ_{1n}A_n^ty_{j'+h}) = E(\xi_{nj}\beta_{n,j+h} - E\xi_{nj}\beta_{n,j+h})(\xi_{nj'}\beta_{n,j'+h} - E\xi_{nj'}\beta_{n,j'+h}) = E\xi_{nj}\beta_{n,j+h}\xi_{nj'}\beta_{n,j'+h} - E\xi_{nj}\beta_{n,j+h}\,E\xi_{nj'}\beta_{n,j'+h}$$

where

$$E|\xi_{nj}\beta_{n,j+h}\xi_{nj'}\beta_{n,j'+h}| \le (E(\xi_{nj}\beta_{n,j+h})^2)^{1/2}(E(\xi_{nj'}\beta_{n,j'+h})^2)^{1/2} \le (E\xi_{nj}^4)^{1/4}(E\beta_{n,j+h}^4)^{1/4}(E\xi_{nj'}^4)^{1/4}(E\beta_{n,j'+h}^4)^{1/4} = (E\xi_{nj}^4E\beta_{n,j+h}^4E\xi_{nj'}^4E\beta_{n,j'+h}^4)^{1/4}$$

by the Schwarz inequality, and similarly

$$E|\xi_{nj}\beta_{n,j+h}| \le (E\xi_{nj}^4E\beta_{n,j+h}^4)^{1/4} \qquad E|\xi_{nj'}\beta_{n,j'+h}| \le (E\xi_{nj'}^4E\beta_{n,j'+h}^4)^{1/4}$$

so that together we have

$$|E(x_j'A_nQ_{t-j,n}A_n^ty_{j+h}\;x_{j'}'A_nQ_{t-j,n}A_n^ty_{j'+h})| \le 2(E\xi_{nj}^4E\beta_{n,j+h}^4E\xi_{nj'}^4E\beta_{n,j'+h}^4)^{1/4}$$

where

$$nE\xi_{nj}^4 = nE\langle A_nZ_{1n}, x_j\rangle^4 = nE\langle A_nZ_1, x_j\rangle^4I(\|A_nZ_1\|\le 1)$$

which is bounded above by some $K_2 < \infty$ for all $n\ge n_0$ and all $j$ by an application of part (ii) of Lemma 2.1 with $\zeta = 4 > 1/a_i$. A similar argument shows that $nE\beta_{nk}^4 < K_2$ for all $k$ and all $n$ large (apply part (ii) of Lemma 2.1 again with $\zeta = 4 > 1/a_l$). Then (5.11) holds for all $j, j', h$ and all $n$ large, and so for all large $n$ we have

$$P(|x'(A-C)y| > \varepsilon) \le \varepsilon^{-2}2K_2n^{-1}\sum_{|j|>m}\sum_{t=1}^n\sum_{j'=1-t+j}^{n-t+j}\|C_j\|\|C_{j+h}\|\|C_{j'}\|\|C_{j'+h}\|$$
$$= \varepsilon^{-2}2K_2\sum_{|j|>m}\|C_j\|\|C_{j+h}\|\;n^{-1}\sum_{t=1}^n\sum_{j'=1-t+j}^{n-t+j}\|C_{j'}\|\|C_{j'+h}\|$$
$$\le \varepsilon^{-2}2K_2\left(\sum_{|j|>m}\|C_j\|^2\right)^{1/2}\left(\sum_{|j|>m}\|C_{j+h}\|^2\right)^{1/2}\left(\sum_{j'=-\infty}^{\infty}\|C_{j'}\|^2\right)^{1/2}\left(\sum_{j'=-\infty}^{\infty}\|C_{j'+h}\|^2\right)^{1/2}$$

which tends to zero as $m\to\infty$, and so (5.8) holds with $T$ replaced by $A - C$ in this case, which concludes the proof of Proposition 4.3.
REFERENCES

Anderson, P. and Meerschaert, M. (1997) Periodic moving averages of random variables with regularly varying tails. Ann. Statist., 25, 771–785.
Araujo, A. and Giné, E. (1980) The Central Limit Theorem for Real and Banach Valued Random Variables. Wiley, New York.
Bhansali, R. (1993) Estimation of the impulse response coefficients of a linear process with infinite variance. J. Multivariate Anal., 45, 274–290.
Billingsley, P. (1968) Convergence of Probability Measures. Wiley, New York.
Brockwell, P. and Davis, R. (1991) Time Series: Theory and Methods, 2nd edn. Springer-Verlag, New York.
Davis, R. and Resnick, S. (1985a) Limit theory for moving averages of random variables with regularly varying tail probabilities. Ann. Probab., 13, 179–195.
Davis, R. and Resnick, S. (1985b) More limit theory for the sample correlation function of moving averages. Stoch. Proc. Appl., 20, 257–279.
Davis, R. and Resnick, S. (1986) More limit theory for the sample covariance and correlation functions of moving averages. Ann. Statist., 14, 533–558.
Davis, R., Marengo, J. and Resnick, S. (1985) Extremal properties of a class of multivariate moving averages. Proceedings of the 45th International Statistical Institute, Vol. 4, Amsterdam.
Davis, R. and Marengo, J. (1990) Limit theory for the sample covariance and correlation matrix functions of a class of multivariate linear processes. Commun. Statist. Stochastic Models, 6, 483–497.
Feller, W. (1971) An Introduction to Probability Theory and Its Applications, Vol. II, 2nd edn. Wiley, New York.
Janicki, A. and Weron, A. (1994) Simulation and Chaotic Behavior of α-Stable Stochastic Processes. Marcel Dekker, New York.
Jansen, D. and de Vries, C. (1991) On the frequency of large stock market returns: putting booms and busts into perspective. Review of Econ. and Statist., 73, 18–24.
Kokoszka, P. and Taqqu, M. (1994) Infinite variance stable ARMA processes. J. Time Series Anal., 15, 203–220.
Kokoszka, P. and Taqqu, M. (1996) Parameter estimation for infinite variance fractional ARIMA. Ann. Statist., 24, 1880–1913.
Loretan, M. and Phillips, P. (1994) Testing the covariance stationarity of heavy tailed time series. J. Empirical Finance, 1, 211–248.
Mandelbrot, B. (1963) The variation of certain speculative prices. J. Business, 36, 394–419.
Meerschaert, M. (1993) Regular variation and generalized domains of attraction in R^k. Stat. Probab. Lett., 18, 233–239.
Meerschaert, M. (1994) Norming operators for generalized domains of attraction. J. Theoretical Probab., 7, 793–798.
Meerschaert, M. and Scheffler, H. P. (1999) Sample covariance matrix for random vectors with regularly varying tails. J. Theoretical Probab., 12, 821–838.
Meerschaert, M. and Scheffler, H. P. (1999) Multivariable regular variation of functions and measures. J. Applied Analysis, 5, 125–146.
Meerschaert, M. and Scheffler, H. P. (1998) Sample cross-correlations for moving averages with regularly varying tails. Preprint.
Mikosch, T., Gadrich, T., Klüppelberg, C. and Adler, R. (1995) Parameter estimation for ARMA models with infinite variance innovations. Ann. Statist., 23, 305–326.
Mittnik, S. and Rachev, S. (1998) Asset and Option Pricing with Alternative Stable Models. Wiley, New York.
Nikias, C. and Shao, M. (1995) Signal Processing with Alpha Stable Distributions and Applications. Wiley, New York.
Resnick, S. (1986) Point processes, regular variation, and weak convergence. Adv. Appl. Probab., 18, 66–138.
Resnick, S. and Stărică, C. (1995) Consistency of Hill's estimator for dependent data. J. Appl. Probab., 32, 139–167.
Samorodnitsky, G. and Taqqu, M. (1994) Stable non-Gaussian Random Processes: Stochastic Models with Infinite Variance. Chapman and Hall, London.
Scheffler, H. P. (1998) Multivariate R–O varying measures. Preprint.
Seneta, E. (1976) Regularly Varying Functions. Lecture Notes in Mathematics, Springer, Berlin.
Sharpe, M. (1969) Operator-stable probability distributions on vector groups. Trans. Amer. Math. Soc., 136, 51–65.