MOVING AVERAGES OF RANDOM VECTORS WITH REGULARLY VARYING TAILS

By Mark M. Meerschaert and Hans-Peter Scheffler

University of Nevada and University of Dortmund

First version received October 1998

Abstract. Regular variation is an analytic condition on the tails of a probability distribution which is necessary for an extended central limit theorem to hold, when the tails are too heavy to allow attraction to a normal limit. The limiting distributions which can occur are called operator stable. In this paper we show that moving averages of random vectors with regularly varying tails are in the generalized domain of attraction of an operator stable law. We also prove that the sample autocovariance matrix of these moving averages is in the generalized domain of attraction of an operator stable law on the vector space of symmetric matrices.

AMS 1990 subject classification. Primary 62M10; secondary 62E20, 62F12, 60F05.

Keywords. Moving average, autocovariance matrix, operator stable, regular variation, random matrix.

Journal of Time Series Analysis, Vol. 21, No. 3 (2000), 297-328.

1. INTRODUCTION

In this paper we establish the basic asymptotic theory for moving averages of random vectors with heavy tails. Heavy tail distributions occur frequently in applications to finance, geology, physics, chemistry, and computer and systems engineering. The recent books of Samorodnitsky and Taqqu (1994), Janicki and Weron (1994), Nikias and Shao (1995), and Mittnik and Rachev (1998) review many of these applications. Scalar time series with heavy tails are discussed in Anderson and Meerschaert (1997), Bhansali (1993), Brockwell and Davis (1991), Davis and Resnick (1985a, 1985b, 1986), Jansen and de Vries (1991), Kokoszka and Taqqu (1994, 1996), Loretan and Phillips (1994), Mikosch, Gadrich, Klüppelberg and Adler (1995), and Resnick and Stărică (1995). Modern applications of heavy tail distributions were pioneered by Mandelbrot (1963) and others in connection with problems in finance.

When the probability tails of the random fluctuations in a time series model are sufficiently light, the asymptotics are normal. But when the tails are sufficiently heavy that the fourth moments fail to exist, the asymptotics are governed by the extended central limit theorem. For scalar models the limiting distributions are stable. Stable laws are characterized by the fact that sums of independent stable random variables are also stable, and the distribution of the sum can be reduced to that of any summand by an appropriate affine normalization. The normal laws are a special case of stable laws. Stable stochastic processes are interesting because they provide the most straightforward mechanism for generating random fractals. For random vectors with heavy tails, the extended central limit theorem yields operator stable limit laws, so called because the affine normalization which reduces the distribution of a sum to that of one summand involves a linear operator. Regular variation is the analytic condition necessary for the extended central limit theorem to hold for heavy tails. See Feller (1971), Chapter XVII, for the scalar version and Meerschaert (1993) for the vector version of the extended central limit theorem. In this paper we will show that moving averages of random vectors with regularly varying tail probabilities are asymptotically operator stable.
The regular variation arguments at the heart of the proof will appear familiar to any reader who is acquainted with the work of Feller on regular variation. We will also show, using similar regular variation methods, that the sample autocovariance matrix formed from these moving averages is asymptotically operator stable as a random element of the vector space of $d \times d$ symmetric matrices.

2. NOTATION AND PRELIMINARY RESULTS

In this section we present the notion of a regularly varying measure together with the multivariable theory of regular variation necessary in the proofs of our main results. The connection of regularly varying measures to generalized domains of attraction of operator stable laws is also discussed.

Assume that $\{Z_n\}$ are i.i.d. random vectors on $\mathbb{R}^d$ with common distribution $\mu$. A sequence $(A_n)$ of invertible linear operators on $\mathbb{R}^d$ is called regularly varying with index $F$, where $F$ is a $d \times d$ matrix, if

$$A_{[\lambda n]} A_n^{-1} \to \lambda^F \quad \text{as } n \to \infty \tag{2.1}$$

where $\lambda^F = \exp(F \log \lambda)$ and $\exp$ is the exponential mapping for $d \times d$ matrices. We write $(A_n) \in RV(F)$ if (2.1) holds. We say that $\mu$ is regularly varying with exponent $E$ if there exists a sequence $(A_n) \in RV(-E)$ such that

$$n(A_n \mu) \to \phi \tag{2.2}$$

where $\phi$ is some $\sigma$-finite Borel measure on $\mathbb{R}^d \setminus \{0\}$ which cannot be supported on any lower dimensional subspace. Note that $t\phi = t^E\phi$ follows. Here $(A\phi)(dx) = \phi(A^{-1} dx)$ denotes the image measure. The convergence in (2.2) means that $n\mu(A_n^{-1} S) \to \phi(S)$ for any Borel set $S$ of $\mathbb{R}^d$ which is bounded away from the origin, and whose boundary has $\phi$-measure zero. Note that this is vague convergence on the set $\bar{\mathbb{R}}^d \setminus \{0\}$, where $\bar{\mathbb{R}}^d$ is the one-point compactification of $\mathbb{R}^d$. For more information on multivariable regular variation see Meerschaert (1993), Meerschaert and Scheffler (1999) and Scheffler (1998).

Regular variation is an analytic tail condition which is necessary for an extended central limit theorem to apply. We say that $\{Z_n\}$ belongs to the generalized domain of attraction of some full-dimensional limit $Y$ if

$$A_n(Z_1 + \cdots + Z_n - nb_n) \Rightarrow Y \tag{2.3}$$

for some linear operators $A_n$ and nonrandom centering vectors $b_n$. If $E\|Z_n\|^2 < \infty$ then $Y$ is multivariate normal and we can take $A_n = n^{-1/2} I$ and $b_n = EZ_1$. Meerschaert (1993) together with Meerschaert (1994) shows that (2.3) holds with a nonnormal limit if and only if the distribution $\mu$ is regularly varying with exponent $E$, where every eigenvalue of $E$ has real part exceeding $1/2$.

Sharpe (1969) characterized operator stable laws in terms of transforms. The characteristic function of any random vector $Y$ whose probability distribution is infinitely divisible can be written in the form $Ee^{i\langle Y,s\rangle} = e^{\psi(s)}$ where

$$\psi(s) = i\langle a, s\rangle - \frac{1}{2} s'Ms + \int_{x \neq 0} \left( e^{i\langle s,x\rangle} - 1 - \frac{i\langle s,x\rangle}{1 + \langle x,x\rangle} \right) \phi(dx).$$

Here $a \in \mathbb{R}^d$, $M$ is a symmetric $d \times d$ matrix, and $\phi$ is a Lévy measure, i.e. a $\sigma$-finite Borel measure on $\mathbb{R}^d \setminus \{0\}$ which assigns finite measure to sets bounded away from the origin and which satisfies $\int \|x\|^2 I(0 < \|x\| \le 1)\phi(dx) < \infty$. We say that $Y$ has Lévy representation $[a, M, \phi]$. If $\phi = 0$ then $Y$ is multivariate normal with covariance matrix $M$. A nonnormal operator stable law has Lévy representation $[a, 0, \phi]$ where $t\phi = t^E\phi$ for all $t > 0$ and every eigenvalue of the exponent $E$ has real part exceeding $1/2$. If $\mu$ varies regularly with exponent $E$, where every eigenvalue of $E$ has real part exceeding $1/2$, then the limit measure $\phi$ in (2.2) is also the Lévy measure of $Y$.
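To make the matrix power notation $t^E = \exp(E \log t)$ and the operator norming in (2.1)-(2.2) concrete, here is a minimal numerical sketch in Python. Everything in it is an illustrative assumption rather than part of the paper: the diagonal exponent `E` built from two hypothetical tail indices, and the candidate norming sequence $A_n = n^{-E}$.

```python
import numpy as np
from scipy.linalg import expm

def matrix_power(t, E):
    """t**E = exp(E log t) for a square matrix E and a scalar t > 0."""
    return expm(E * np.log(t))

# Illustrative exponent: E = diag(1/alpha_1, 1/alpha_2) for two tail indices.
# Real parts exceeding 1/2 correspond to coordinates with infinite variance.
alpha = np.array([1.2, 1.8])
E = np.diag(1.0 / alpha)

n = 10_000
A_n = matrix_power(n, -E)   # A_n = n**(-E) = diag(n**(-1/1.2), n**(-1/1.8))

# Regular variation (2.1) for (A_n) in RV(-E): A_[lambda n] A_n^{-1} -> lambda**(-E)
lam = 2.0
lhs = matrix_power(lam * n, -E) @ np.linalg.inv(A_n)
rhs = matrix_power(lam, -E)
print(np.allclose(lhs, rhs))    # True: exact for this diagonal example
```

For a diagonal exponent the limit in (2.1) is attained exactly; exponents with off-diagonal structure produce genuinely operator-type norming that no scalar sequence can reproduce.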
Let $\mu$ be regularly varying with exponent $E$ and let $\lambda = \min\{\mathrm{Re}(\alpha)\}$, $\Lambda = \max\{\mathrm{Re}(\alpha)\}$, where $\alpha$ ranges over the eigenvalues of $E$. Meerschaert (1993) shows that in this case the moment functions

$$U_\zeta(r, \theta) = \int_{|\langle x,\theta\rangle| \le r} |\langle x,\theta\rangle|^\zeta \, \mu(dx), \qquad V_\eta(r, \theta) = \int_{|\langle x,\theta\rangle| > r} |\langle x,\theta\rangle|^\eta \, \mu(dx)$$

are uniformly R-O varying whenever $\eta < 1/\Lambda \le 1/\lambda < \zeta$. A Borel measurable function $R(r)$ is R-O varying if it is real-valued and positive for $r \ge A$ and if there exist constants $a > 1$, $0 < m < 1$, $M > 1$ such that $m \le R(tr)/R(r) \le M$ whenever $1 \le t \le a$ and $r \ge A$. Then $R(r, \theta)$ is uniformly R-O varying if it is an R-O varying function of $r$ for each $\theta$, and the constants $A, a, m, M$ can be chosen independent of $\theta$. See Seneta (1976) for more information on R-O variation. In particular it is shown in Meerschaert (1993) (see also Scheffler (1998) for a more general case) that for any $\delta > 0$ there exist real constants $m, M, r_0$ such that

$$V_\eta(tr, \theta) \ge m t^{\eta - 1/\lambda - \delta} V_\eta(r, \theta), \qquad U_\zeta(tr, \theta) \le M t^{\zeta - 1/\Lambda + \delta} U_\zeta(r, \theta)$$

for all $\|\theta\| = 1$, all $t \ge 1$ and all $r \ge r_0$. A uniform version of Feller (1971, p. 289) (see Scheffler (1998) for a detailed proof) yields that for some positive real constants $A, B, t_0$ we have

$$A \le \frac{t^{\zeta-\eta} V_\eta(t, \theta)}{U_\zeta(t, \theta)} \le B$$

for all $\|\theta\| = 1$ and all $t \ge t_0$.

If $\mu$ is regularly varying with index $E$ then Meerschaert and Scheffler (1997) show that there exists a unique direct sum decomposition $\mathbb{R}^d = V_1 \oplus \cdots \oplus V_p$ and real numbers $0 < \lambda = a_1 < \cdots < a_p = \Lambda$ with the following properties: the subspaces $V_1, \ldots, V_p$ are mutually orthogonal; for any nonzero vector $\theta \in V_i$ the marginal absolute moment $E|\langle Z_n, \theta\rangle|^r$ exists for $r < 1/a_i$ and this moment is infinite for $r > 1/a_i$; $P_i A_n = A_n P_i$ and $P_i E = E P_i$, where $P_i$ is the orthogonal projection onto $V_i$; every eigenvalue of $P_i E$ has real part equal to $a_i$; for any nonzero vector $x \in \mathbb{R}^d$ we have $\log\|P_i A_n x\|/\log n \to -a_i$ (uniformly on compact subsets); and $\log\|P_i A_n\|/\log n \to -a_i$. This is called the spectral decomposition. The spectral decomposition implies that the distribution $P_i\mu$ of the random vector $P_i Z_n$ on $V_i$ varies regularly with exponent $P_i E$. Since every eigenvalue of the exponent $P_i E$ has real part equal to $a_i$, we can also apply the above R-O variation results whenever $\theta \in V_i$ and $\eta < 1/a_i < \zeta$. For a linear operator $A$ let $A^t$ denote its transpose.

Lemma 2.1. Assume that the distribution $\mu$ of $Z$ is regularly varying with index $E$; let $\mathbb{R}^d = V_1 \oplus \cdots \oplus V_p$ be the spectral decomposition of $\mathbb{R}^d$ and let $0 < a_1 < \cdots < a_p$ denote the corresponding real parts of the eigenvalues of $E$. Then for any $i = 1, \ldots, p$, given $0 < \eta < 1/a_i < \zeta$, there exist $n_0$ and constants $K_1, \ldots, K_4 > 0$ such that

(i) $nE|\langle A_n Z, \theta\rangle|^\zeta I(|\langle A_n Z, \theta\rangle| \le 1) < K_1$
(ii) $nE|\langle A_n Z, \theta\rangle|^\zeta I(\|A_n Z\| \le 1) < K_2$
(iii) $nE|\langle A_n Z, \theta\rangle|^\eta I(|\langle A_n Z, \theta\rangle| > 1) < K_3$
(iv) $nE|\langle A_n Z, \theta\rangle|^\eta I(\|A_n Z\| > 1) < K_4$ $\tag{2.4}$

for all $n \ge n_0$ and all $\|\theta\| = 1$ in $V_i$.

Proof. Suppose we are given an arbitrary sequence of unit vectors $\theta_n$ in $V_i$, and write $r_n x_n = A_n^t \theta_n$ where $r_n > 0$ and $\|x_n\| = 1$. Then we have

$$nE|\langle A_n Z, \theta_n\rangle|^\zeta I(|\langle A_n Z, \theta_n\rangle| \le 1) = nE|\langle Z, A_n^t \theta_n\rangle|^\zeta I(|\langle Z, A_n^t \theta_n\rangle| \le 1) = n r_n^\zeta E|\langle Z, x_n\rangle|^\zeta I(|\langle Z, x_n\rangle| \le r_n^{-1}) = n r_n^\zeta U_\zeta(r_n^{-1}, x_n).$$

Since $\zeta > 1/a_i$, both $U_\zeta(r, \theta)$ and $V_0(r, \theta)$ are uniformly R-O varying on compact subsets of $\theta \neq 0$ in $V_i$. Then for some $m, M, t_0$ we have

$$m \le \frac{t^\zeta V_0(t, \theta)}{U_\zeta(t, \theta)} \le M \tag{2.5}$$

for all $\|\theta\| = 1$ in $V_i$ and all $t \ge t_0$.
Since (2.2) holds, where $\phi$ cannot be supported on any lower dimensional subspace, we must have $\|A_n\| \to 0$. Then $\|A_n \theta\| \to 0$ uniformly on compact subsets of $\mathbb{R}^d \setminus \{0\}$, and so for some $n_0$ we have $\|A_n^t \theta\| \le t_0^{-1}$ for all $\|\theta\| = 1$ and all $n \ge n_0$; in particular $r_n^{-1} \ge t_0$. Then for all $n \ge n_0$ we have

$$n r_n^\zeta U_\zeta(r_n^{-1}, x_n) \le m^{-1} n V_0(r_n^{-1}, x_n) = m^{-1} nP(|\langle Z, x_n\rangle| > r_n^{-1}) = m^{-1} nP(|\langle Z, r_n x_n\rangle| > 1) = m^{-1} nP(|\langle Z, A_n^t \theta_n\rangle| > 1) = m^{-1} nP(|\langle A_n Z, \theta_n\rangle| > 1) \le m^{-1} nP(\|A_n Z\| > 1)$$

and this upper bound holds independent of our choice of the sequence $\theta_n$. By (2.2) we have $nP(\|A_n Z\| > 1) = nA_n\mu\{z : \|z\| > 1\} \to \phi\{z : \|z\| > 1\}$ (if $\{z : \|z\| > 1\}$ is not a continuity set of $\phi$ then we can use the upper bound $nP(\|A_n Z\| > r)$ instead, where $0 < r < 1$). Then the sequence of real numbers $nP(\|A_n Z\| > 1)$ is bounded above by some $K > 0$, and assertion (i) of (2.4) holds with $K_1 = m^{-1}K$. Since

$$nE|\langle A_n Z, \theta_n\rangle|^\zeta I(\|A_n Z\| \le 1) \le nE|\langle A_n Z, \theta_n\rangle|^\zeta I(|\langle A_n Z, \theta_n\rangle| \le 1)$$

we immediately obtain assertion (ii) with $K_2 = K_1$. Now write

$$nE|\langle A_n Z, \theta_n\rangle|^\eta I(|\langle A_n Z, \theta_n\rangle| > 1) = nE|\langle Z, A_n^t \theta_n\rangle|^\eta I(|\langle Z, A_n^t \theta_n\rangle| > 1) = n r_n^\eta E|\langle Z, x_n\rangle|^\eta I(|\langle Z, x_n\rangle| > r_n^{-1}) = n r_n^\eta V_\eta(r_n^{-1}, x_n).$$

Since $\eta < 1/a_i$ we also have $V_\eta(r, \theta)$ uniformly R-O varying on compact subsets of $\theta \neq 0$ in $V_i$. Then for some $m', M', t_0$ we have

$$m' \le \frac{t^{\zeta-\eta} V_\eta(t, \theta)}{U_\zeta(t, \theta)} \le M' \tag{2.6}$$

for all $\|\theta\| = 1$ in $V_i$ and all $t \ge t_0$. Choosing $t_0$ large enough so that both (2.5) and (2.6) hold whenever $t \ge t_0$ and $\|\theta\| = 1$ in $V_i$, and then choosing $n_0 = n_0(t_0)$ such that $\|A_n^t \theta\| \le t_0^{-1}$ for all $\|\theta\| = 1$ whenever $n \ge n_0$, we have for all $n \ge n_0$ that

$$n r_n^\eta V_\eta(r_n^{-1}, x_n) \le M' n r_n^\zeta U_\zeta(r_n^{-1}, x_n)$$

and so assertion (iii) of (2.4) holds with $K_3 = M'K_1$. Finally write

$$nE|\langle A_n Z, \theta_n\rangle|^\eta I(\|A_n Z\| > 1) = nE|\langle A_n Z, \theta_n\rangle|^\eta I(|\langle A_n Z, \theta_n\rangle| > 1) + nE|\langle A_n Z, \theta_n\rangle|^\eta I(\|A_n Z\| > 1 \text{ and } |\langle A_n Z, \theta_n\rangle| \le 1) \le nE|\langle A_n Z, \theta_n\rangle|^\eta I(|\langle A_n Z, \theta_n\rangle| > 1) + nP(\|A_n Z\| > 1)$$

so that assertion (iv) of (2.4) holds with $K_4 = K_3 + K$. This concludes the proof of Lemma 2.1.

3. MOVING AVERAGES

Suppose that $\{Z_j\}_{j=-\infty}^{\infty}$ is a double sequence of i.i.d. random vectors with common distribution $\mu$ varying regularly with index $E$, i.e. (2.2) holds. Define the moving average process

$$X_t = \sum_{j=-\infty}^{\infty} C_j Z_{t-j} \tag{3.1}$$

where the $C_j$ are $d \times d$ real matrices such that for each $j$ either $C_j = 0$ or else $C_j^{-1}$ exists and $A_n C_j = C_j A_n$ for all $n$. The spectrum of the exponent of regular variation $E$ is connected with the tail behavior of the random vectors $Z_n$. Let $\Lambda = \max\{\mathrm{Re}(\alpha)\}$ and $\lambda = \min\{\mathrm{Re}(\alpha)\}$, where $\alpha$ ranges over the eigenvalues of $E$ as before. Lemma 2 of Meerschaert (1993), or Scheffler (1998), Corollary 4.21, implies that $E\|Z_n\|^r$ exists for $0 < r < 1/\Lambda$ and is infinite for $r > 1/\lambda$. Then the following lemma implies that the moving average (3.1) is well-defined as long as

$$\sum_{j=-\infty}^{\infty} \|C_j\|^\delta < \infty \tag{3.2}$$

for some $\delta < 1/\Lambda$ with $\delta \le 1$. Here $\|A\|$ denotes the operator norm of a linear operator $A$ on $\mathbb{R}^d$. For the remainder of the paper we will assume that this is the case, so that the moving average (3.1) is well-defined.
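As an illustration of the moving average (3.1) and the summability condition (3.2), the following sketch simulates a bivariate moving average driven by heavy-tailed i.i.d. innovations. All concrete choices are hypothetical and for illustration only: symmetrized Pareto-type marginals with tail indices `alpha`, scalar weights $C_j = \rho^{|j|} I$ (which commute with any $A_n$ and satisfy (3.2) for every $\delta > 0$), and truncation of the infinite sum at lag $m$.

```python
import numpy as np

rng = np.random.default_rng(0)

def heavy_tailed_innovations(n, alpha):
    """i.i.d. symmetric vectors whose k-th coordinate has Pareto tails of index alpha[k]."""
    u = rng.uniform(size=(n, len(alpha)))
    signs = rng.choice([-1.0, 1.0], size=u.shape)
    return signs * (u ** (-1.0 / np.asarray(alpha)) - 1.0)

alpha = [1.2, 1.8]            # componentwise tail indices, both in (1, 2)
d, n, m = 2, 5000, 50         # dimension, sample size, lag truncation
rho = 0.6                     # C_j = rho**|j| * I, so sum_j ||C_j||**delta < infinity

Z = heavy_tailed_innovations(n + 2 * m, alpha)   # supplies Z_{t-j} for t=1..n, |j|<=m
X = np.zeros((n, d))
for t in range(n):
    for j in range(-m, m + 1):
        X[t] += (rho ** abs(j)) * Z[t + m - j]   # truncated version of (3.1)
```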
Lemma 3.1. Suppose that $E\|Z_n\|^\alpha < \infty$ for all $\alpha < \alpha_0$ and that (3.2) holds for some $\delta < \alpha_0$, $\delta \le 1$. Then (3.1) exists almost surely.

Proof. First suppose $\alpha_0 > 1$, so that $E\|Z_n\| < \infty$. Then $\sum \|C_j\|^\delta < \infty$ implies that $\sum \|C_j\| < \infty$: since the series converges we have $\|C_j\|^\delta \to 0$, so $\|C_j\|^\delta < 1$ for all large $j$, and for such $j$ we have $\|C_j\| \le \|C_j\|^\delta$. Then we have $\|X_t\| = \|\sum_j C_j Z_{t-j}\| \le \sum \|C_j Z_{t-j}\| \le \sum \|C_j\| \|Z_{t-j}\|$, so that

$$E\|X_t\| \le E\sum \|C_j\| \|Z_{t-j}\| = \sum \|C_j\| E\|Z_{t-j}\| = E\|Z_1\| \sum \|C_j\| < \infty$$

by monotone convergence, and then $X_t$ exists almost surely. To see this, note that $\tilde{X} = \sum \|C_j\| \|Z_{t-j}\|$ is a well-defined random variable which is nonnegative and has a finite mean, and so it is almost surely finite, i.e. the sum in (3.1) converges absolutely with probability one.

If $\alpha_0 \le 1$ then $\sum \|C_j\|^\delta < \infty$ for some $\delta < \alpha_0$, and also $E\|Z_1\|^\delta < \infty$. Since $|x + y|^\delta \le |x|^\delta + |y|^\delta$ for all $x, y \ge 0$ we have

$$E\|X_t\|^\delta = E\left\|\sum_j C_j Z_{t-j}\right\|^\delta \le E\left[\left(\sum_j \|C_j\| \|Z_{t-j}\|\right)^\delta\right] \le E\left[\sum_j \|C_j\|^\delta \|Z_{t-j}\|^\delta\right] = \sum_j \|C_j\|^\delta E\|Z_1\|^\delta < \infty$$

so as before $X_t$ converges absolutely almost surely.

Theorem 3.2. Suppose that $X_t$ is a moving average defined by (3.1) where $\{Z_n\}$ are i.i.d. with common distribution $\mu$ on $\mathbb{R}^d$. Suppose that $\mu$ varies regularly with exponent $E$, where every eigenvalue of $E$ has real part exceeding $1/2$. Then for the norming operators $A_n$ in (2.2) we have

$$A_n(X_1 + \cdots + X_n - na_n) \Rightarrow U \tag{3.3}$$

where $a_n \in \mathbb{R}^d$ and $U$ is full-dimensional and operator stable.

Remark 3.3. In the proof of Theorem 3.2 we show that we can take the centering constants $a_n = \sum_j C_j b_n$ in (3.3), where the $b_n$ are the centering constants from (2.3) above. The operator stable limit $U = \sum_j C_j Y$ in (3.3) is nonnormal with Lévy measure $\sum_j C_j \phi$.

Before we prove Theorem 3.2 we need an additional result which shows that, under the general assumptions on the moving average parameter matrices $C_j$, the spectral decomposition remains invariant under $C_j$.

Lemma 3.4. Suppose that $\mu$ is regularly varying with index $E$, with spectral decomposition $\mathbb{R}^d = V_1 \oplus \cdots \oplus V_p$ and norming operators $A_n$ as in (2.2). If $C_j$ is invertible and $C_j A_n = A_n C_j$ for all $n$, then $C_j V_i = V_i$ for all $i = 1, \ldots, p$.

Proof. Let $L_i = V_i \oplus \cdots \oplus V_p$ and $\bar{L}_i = V_1 \oplus \cdots \oplus V_i$. Meerschaert and Scheffler (1998) show that for any $x \in L_i \setminus L_{i+1}$ we have $\log\|A_n x\|/\log n \to -a_i$. This convergence is uniform on compact subsets of $x \in L_i \setminus L_{i+1}$. For any $x \in \bar{L}_i \setminus \bar{L}_{i-1}$ we have $\log\|(A_n^t)^{-1} x\|/\log n \to a_i$ (uniformly on compact sets). Since $C_j$ is invertible there exist $c_j > 0$ and $b_j < \infty$ such that $c_j\|x\| \le \|C_j x\| \le b_j\|x\|$ for all $x$. Then $\log\|C_j A_n x\|/\log n \to -a_i$ if and only if $x \in L_i \setminus L_{i+1}$, and $\log\|A_n C_j x\|/\log n \to -a_i$ if and only if $C_j x \in L_i \setminus L_{i+1}$. Since $A_n C_j = C_j A_n$ this implies that $C_j^{-1}(L_i \setminus L_{i+1}) = L_i \setminus L_{i+1}$, and so $C_j(L_i \setminus L_{i+1}) = L_i \setminus L_{i+1}$. Similarly we have $\log\|(A_n^t)^{-1} x\|/\log n \to a_i$ if and only if $x \in \bar{L}_i \setminus \bar{L}_{i-1}$, as well as $\log\|(A_n^t)^{-1}(C_j^t)^{-1} x\|/\log n \to a_i$ if and only if $(C_j^t)^{-1} x \in \bar{L}_i \setminus \bar{L}_{i-1}$. Then $(C_j^t)^{-1}(\bar{L}_i \setminus \bar{L}_{i-1}) = \bar{L}_i \setminus \bar{L}_{i-1}$, and since the subspaces $V_i$ are mutually orthogonal it follows that $C_j(\bar{L}_i \setminus \bar{L}_{i-1}) = \bar{L}_i \setminus \bar{L}_{i-1}$. Then $C_j(L_i \cap \bar{L}_i) = L_i \cap \bar{L}_i$ and $L_i \cap \bar{L}_i = V_i$. This concludes the proof of Lemma 3.4.

Proof of Theorem 3.2. Suppose that (2.3) holds and define

$$X_t^{(m)} = \sum_{|j| \le m} C_j Z_{t-j}, \qquad a_n^{(m)} = \sum_{|j| \le m} C_j b_n, \qquad U^{(m)} = \sum_{|j| \le m} C_j Y$$

for $m \ge 1$, where $Y$ is the limit in (2.3). From (2.3) it follows that

$$\left( A_n \sum_{t=1}^n (Z_{t-j} - b_n) : |j| \le m \right) \Rightarrow (Y, \ldots, Y)$$

since we know that the convergence holds for the $j = 0$ component, and since the difference between this component and any other component (at most $2m$ terms) tends to zero in probability. Now by the continuous mapping theorem we obtain

$$A_n(X_1^{(m)} + \cdots + X_n^{(m)} - na_n^{(m)}) \Rightarrow U^{(m)}$$

and since $U^{(m)} \to U = \sum_j C_j Y$ almost surely it suffices to show (e.g. by Billingsley (1968), Theorem 4.2) that

$$\lim_{m\to\infty} \limsup_{n\to\infty} P(\|T\| > \varepsilon) = 0$$
where

$$T = A_n \sum_{|j|>m} \sum_{t=1}^n C_j(Z_{t-j} - b_n).$$

For this it suffices to show that

$$\lim_{m\to\infty} \limsup_{n\to\infty} P(|\langle T, x\rangle| > \varepsilon) = 0 \tag{3.4}$$

for any unit vector $x \in V_i$, where $1 \le i \le p$. Decompose $T = A + B - C$ where

$$A = \sum_{|j|>m} \sum_{t=1}^n A_n C_j Z_{t-j} I(\|A_n Z_{t-j}\| \le 1), \qquad B = \sum_{|j|>m} \sum_{t=1}^n A_n C_j Z_{t-j} I(\|A_n Z_{t-j}\| > 1), \qquad C = \sum_{|j|>m} nA_n C_j b_n.$$

First suppose that $a_i > 1$, so that assertion (ii) of Lemma 2.1 holds with $\zeta = 1$. Write $C_j^t x = r_j x_j$ where $r_j > 0$ and $\|x_j\| = 1$. Note that $x_j \in V_i$ by Lemma 3.4. Then

$$P[|\langle A, x\rangle| > \varepsilon/3] \le 3\varepsilon^{-1} E|\langle A, x\rangle| \le 3\varepsilon^{-1} \sum_{|j|>m} nE|\langle A_n C_j Z_1, x\rangle| I(\|A_n Z_1\| \le 1) = 3\varepsilon^{-1} \sum_{|j|>m} nE|\langle A_n Z_1, C_j^t x\rangle| I(\|A_n Z_1\| \le 1) = 3\varepsilon^{-1} \sum_{|j|>m} r_j \, nE|\langle A_n Z_1, x_j\rangle| I(\|A_n Z_1\| \le 1) \le 3\varepsilon^{-1} K_2 \sum_{|j|>m} \|C_j\|$$

which tends to zero as $m \to \infty$, in view of the fact that (3.2) holds with $\delta < 1/a_p \le 1/a_i < 1$. Choosing $0 < \delta < 1/a_p$ so that (3.2) holds, we have that assertion (iv) of Lemma 2.1 holds with $\eta = \delta$. Note also that $|x + y|^\delta \le |x|^\delta + |y|^\delta$ since $\delta < 1/a_i < 1$. Then

$$P[|\langle B, x\rangle| > \varepsilon/3] \le 3\varepsilon^{-\delta} E|\langle B, x\rangle|^\delta \le 3\varepsilon^{-\delta} \sum_{|j|>m} nE|\langle A_n C_j Z_1, x\rangle|^\delta I(\|A_n Z_1\| > 1) = 3\varepsilon^{-\delta} \sum_{|j|>m} r_j^\delta \, nE|\langle A_n Z_1, x_j\rangle|^\delta I(\|A_n Z_1\| > 1) \le 3\varepsilon^{-\delta} K_4 \sum_{|j|>m} \|C_j\|^\delta$$

which tends to zero as $m \to \infty$ in view of the fact that (3.2) holds. The standard convergence criteria for triangular arrays (e.g. Araujo and Giné (1980)) show that we can take $b_n = EZ_1 I(\|A_n Z_1\| \le 1)$ in (2.3). Then for all $n \ge n_0$ we have

$$|\langle C, x\rangle| \le \sum_{|j|>m} n|\langle A_n C_j b_n, x\rangle| = \sum_{|j|>m} n|\langle A_n C_j EZ_1 I(\|A_n Z_1\| \le 1), x\rangle| \le \sum_{|j|>m} nE|\langle A_n C_j Z_1, x\rangle| I(\|A_n Z_1\| \le 1) \le K_2 \sum_{|j|>m} \|C_j\|$$

which tends to zero as $m \to \infty$, as in the argument for $A$. Since

$$P(|\langle T, x\rangle| > \varepsilon) \le P(|\langle A, x\rangle| > \varepsilon/3) + P(|\langle B, x\rangle| > \varepsilon/3) + P(|\langle C, x\rangle| > \varepsilon/3)$$

it follows that (3.4) holds when $x$ is a unit vector in $V_i$ and $a_i > 1$.

Suppose then that $1/2 < a_i \le 1$ and note that (2.4) part (ii) holds with $\zeta = 2$. Since $C = EA$ we have by Chebyshev's inequality that

$$P[|\langle A - C, x\rangle| > \varepsilon/2] \le 4\varepsilon^{-2} \mathrm{var}(\langle A, x\rangle) = 4\varepsilon^{-2} \sum_{|j|>m} n\,\mathrm{var}[\langle A_n C_j Z_1, x\rangle I(\|A_n Z_1\| \le 1)] \le 4\varepsilon^{-2} \sum_{|j|>m} nE[\langle A_n C_j Z_1, x\rangle^2 I(\|A_n Z_1\| \le 1)] = 4\varepsilon^{-2} \sum_{|j|>m} n r_j^2 E[\langle A_n Z_1, x_j\rangle^2 I(\|A_n Z_1\| \le 1)] \le 4\varepsilon^{-2} K_2 \sum_{|j|>m} \|C_j\|^2$$

which tends to zero as $m \to \infty$ in view of the fact that (3.2) holds with $\delta \le 1$. If $1/2 < a_i < 1$ then (2.4) part (iv) holds with $\eta = 1$ and so

$$P[|\langle B, x\rangle| > \varepsilon/2] \le 2\varepsilon^{-1} E|\langle B, x\rangle| \le 2\varepsilon^{-1} \sum_{|j|>m} nE|\langle A_n C_j Z_1, x\rangle| I(\|A_n Z_1\| > 1) = 2\varepsilon^{-1} \sum_{|j|>m} r_j \, nE|\langle A_n Z_1, x_j\rangle| I(\|A_n Z_1\| > 1) \le 2\varepsilon^{-1} K_4 \sum_{|j|>m} \|C_j\|$$

which tends to zero as $m \to \infty$ in view of the fact that (3.2) holds with $\delta \le 1$. Finally, if $a_i = 1$ then (2.4) part (iv) holds with $\eta = \delta$, where we choose $\delta < 1$ such that (3.2) holds. Then argue as before that

$$P[|\langle B, x\rangle| > \varepsilon/2] \le 2\varepsilon^{-\delta} E|\langle B, x\rangle|^\delta \le 2\varepsilon^{-\delta} \sum_{|j|>m} r_j^\delta \, nE|\langle A_n Z_1, x_j\rangle|^\delta I(\|A_n Z_1\| > 1) \le 2\varepsilon^{-\delta} K_4 \sum_{|j|>m} \|C_j\|^\delta$$

which tends to zero as $m \to \infty$ in view of the fact that (3.2) holds. Since

$$P(|\langle T, x\rangle| > \varepsilon) \le P(|\langle A - C, x\rangle| > \varepsilon/2) + P(|\langle B, x\rangle| > \varepsilon/2)$$

it follows that (3.4) holds when $x$ is a unit vector in $V_i$ and $a_i \le 1$, which concludes the proof of Theorem 3.2.

Remark 3.5. The assumption that $A_n C_j = C_j A_n$ is somewhat restrictive, but necessary for our method of proof. It may be possible to relax this restriction. For example, it is not hard to check that for finite moving averages $X_t = C_0 Z_t + \cdots + C_q Z_{t-q}$ we have

$$(CA_n C^{-1})(X_1 + \cdots + X_n - na_n) \Rightarrow CY$$

where $C = C_0 + \cdots + C_q$. We conjecture that the same is true in general, i.e. that (3.3) holds even with $A_n C_j \neq C_j A_n$ if we replace $A_n$ by $CA_n C^{-1}$, but we have not been able to prove this.
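Continuing the simulation sketch above, the operator-normed partial sums in (3.3) can be examined by Monte Carlo. The diagonal norming $A_n = \mathrm{diag}(n^{-1/\alpha_1}, n^{-1/\alpha_2})$ matches the assumed Pareto-type innovations, and since those innovations are symmetric the centering $a_n$ may be taken to be zero; the sample size and replicate count are arbitrary illustrative choices.

```python
def normed_partial_sum(n, alpha, m=50, rho=0.6):
    """One draw of A_n (X_1 + ... + X_n) for the truncated moving average above."""
    Z = heavy_tailed_innovations(n + 2 * m, alpha)
    S = np.zeros(len(alpha))
    # sum_t X_t = sum_j C_j (sum_t Z_{t-j}), accumulated lag by lag
    for j in range(-m, m + 1):
        S += (rho ** abs(j)) * Z[m - j : m - j + n].sum(axis=0)
    A_n = np.diag([n ** (-1.0 / a) for a in alpha])
    return A_n @ S

samples = np.array([normed_partial_sum(2000, alpha) for _ in range(200)])
# Each row approximates a draw of the operator stable limit U in (3.3):
# a vector whose two coordinates have different stable indices.
print(np.percentile(samples, [5, 50, 95], axis=0))
```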
4. SAMPLE AUTOCOVARIANCE MATRIX FOR MOVING AVERAGES

In the previous section we proved that moving averages of random vectors with regularly varying tail probabilities are asymptotically operator stable. In this section we prove that the sample autocovariance matrix formed from these moving averages is also asymptotically operator stable as a random element of the vector space of $d \times d$ symmetric matrices. The sample covariance matrix at lag $h$ of the moving average $X_t$ is defined by

$$\hat{\Gamma}_n(h) = \frac{1}{n} \sum_{t=1}^n X_t X'_{t+h}. \tag{4.1}$$

Meerschaert and Scheffler (1999) consider the asymptotic behavior of the sample covariance matrix of an i.i.d. sequence of random vectors with regularly varying tails. Suppose that $\{Z_n\}$ are i.i.d. random vectors on $\mathbb{R}^d$ with common distribution $\mu$, where $\mu$ varies regularly with exponent $E$, i.e. (2.2) holds and every eigenvalue of $E$ has real part exceeding $1/4$. Then

$$A_n\left( \sum_{i=1}^n Z_i Z'_i - nB_n \right) A_n^t \Rightarrow W \tag{4.2}$$

where $A_n$ is taken from the definition (2.2) above, $B_n = EZ_1 Z'_1 I(\|A_n Z_1\| \le 1)$, and $W$ is a nonnormal operator stable random element of the vector space $M_d^s$ of symmetric $d \times d$ matrices with real entries.

Theorem 4.1. Suppose that $\hat{\Gamma}_n(h)$ is the sample covariance matrix defined by (4.1), where $X_t$ is the moving average (3.1) and $\{Z_n\}$ are i.i.d. with common distribution $\mu$ on $\mathbb{R}^d$. Suppose that $\mu$ varies regularly with exponent $E$, where every eigenvalue of $E$ has real part exceeding $1/4$, and (2.2) holds. Assume that $Ex'Z_n = 0$ for all $x \in \mathbb{R}^d$ such that $E|x'Z_n| < \infty$. Then for all $h$ we have

$$nA_n\left[ \hat{\Gamma}_n(h) - \sum_{j=-\infty}^{\infty} C_j B_n C^t_{j+h} \right] A_n^t \Rightarrow \sum_{j=-\infty}^{\infty} C_j W C^t_{j+h} \tag{4.3}$$

where $A_n$, $B_n$ and $W$ are as in (4.2).

The method of proof is similar to that of Theorem 3.2 above but much more involved. The assertion of Theorem 4.1 follows easily from the following two main propositions, whose proofs are included in the appendix. The first key proposition asserts that the quadratic terms of $\hat{\Gamma}_n(h)$ dominate.

Proposition 4.2. Under the assumptions of Theorem 4.1 we have

$$A_n\left[ n\hat{\Gamma}_n(h) - \sum_{t=1}^n \sum_{j=-\infty}^{\infty} C_j Z_{t-j} Z'_{t-j} C^t_{j+h} \right] A_n^t \xrightarrow{P} 0 \tag{4.4}$$

as $n \to \infty$.

The next proposition establishes the convergence of the quadratic terms of the sample autocovariance matrix to the limit in (4.3).

Proposition 4.3. Under the assumptions of Theorem 4.1 we have

$$A_n\left[ \sum_{t=1}^n \sum_{j=-\infty}^{\infty} C_j (Z_{t-j} Z'_{t-j} - B_n) C^t_{j+h} \right] A_n^t \Rightarrow \sum_{j=-\infty}^{\infty} C_j W C^t_{j+h} \tag{4.5}$$

as $n \to \infty$.

Proof of Theorem 4.1. Combining Propositions 4.2 and 4.3 we have

$$A_n\left[ n\hat{\Gamma}_n(h) - n\sum_{j=-\infty}^{\infty} C_j B_n C^t_{j+h} \right] A_n^t = A_n\left[ n\hat{\Gamma}_n(h) - \sum_{t=1}^n \sum_{j=-\infty}^{\infty} C_j Z_{t-j} Z'_{t-j} C^t_{j+h} \right] A_n^t + A_n\left[ \sum_{t=1}^n \sum_{j=-\infty}^{\infty} C_j (Z_{t-j} Z'_{t-j} - B_n) C^t_{j+h} \right] A_n^t \Rightarrow 0 + \sum_{j=-\infty}^{\infty} C_j W C^t_{j+h}$$

which concludes the proof.
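To connect the definitions, here is a short sketch computing the lag-$h$ sample covariance matrix (4.1) for the simulated series from Section 3 and forming the operator-normed statistic in (4.3). Since the assumed tail indices lie in $(1,2)$ we have $a_1 = 1/\alpha_1 > 1/2$, so $B_n$ may be taken to be zero as in Remark 4.4 below; using the last available products in (4.1) is a boundary convention of this sketch, not of the paper.

```python
def sample_autocov(X, h):
    """Lag-h sample covariance matrix (4.1): (1/n) sum_t X_t X'_{t+h}."""
    n = len(X) - h
    return sum(np.outer(X[t], X[t + h]) for t in range(n)) / n

h = 1
n_eff = len(X) - h
Gamma_hat = sample_autocov(X, h)

# Operator-normed statistic n A_n Gamma_hat A_n' from (4.3), with B_n = 0
# (both tail indices are in (1, 2), so every a_i = 1/alpha_i exceeds 1/2).
A_n = np.diag([n_eff ** (-1.0 / a) for a in alpha])
print(n_eff * A_n @ Gamma_hat @ A_n.T)
```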
Remark 4.4. If $a_p < 1/2$ then $EZ_1 Z'_1$ exists, and consequently the autocovariance matrix $\Gamma(h) = EX_t X'_{t+h}$ is well-defined. In this case Meerschaert and Scheffler (1999) show that we can take $B_n = EZ_1 Z'_1$ in (4.2), and then (4.3) becomes

$$nA_n[\hat{\Gamma}_n(h) - \Gamma(h)] A_n^t \Rightarrow \sum_{j=-\infty}^{\infty} C_j W C^t_{j+h}. \tag{4.6}$$

Since $A_n \to 0$ slower than $n^{-1/2}$ when $a_p < 1/2$, we also have in this case that $\hat{\Gamma}_n(h) \to \Gamma(h)$ in probability, and (4.6) provides the rate of convergence. If $a_1 > 1/2$ then Meerschaert and Scheffler (1999) show that we can take $B_n = 0$ in (4.2), and then (4.3) becomes

$$nA_n \hat{\Gamma}_n(h) A_n^t \Rightarrow \sum_{j=-\infty}^{\infty} C_j W C^t_{j+h}. \tag{4.7}$$

Since $A_n \to 0$ faster than $n^{-1/2}$ when $a_1 > 1/2$, we also have in this case that $\hat{\Gamma}_n(h)$ is not bounded in probability, and (4.7) provides the rate at which it blows up.

An application of Proposition 3.1 in Resnick (1986) shows that the regular variation (2.2) implies weak convergence of the associated point processes. All of the results in this paper can also be established using point process methods. In the special case of scalar norming (where (2.3) holds with $A_n = a_n^{-1} I$ for all $n$) we say that $\{Z_n\}$ belongs to the domain of attraction of $Y$. In this special case a result equivalent to Theorem 4.1 was obtained by Davis, Marengo and Resnick (1985) and Davis and Marengo (1990) using point process methods. In this case the random matrix $W$ is multivariate stable. Since we are norming by constants in this case, one immediately obtains the asymptotic distribution of the sample autocorrelations using the continuous mapping theorem. It is not possible to extend this argument to the general case considered in this paper (with norming by linear operators).

In Theorem 4.1, the assumption that $Ex'Z_n = 0$ when the mean exists can be removed if we use the centered version of the sample covariance matrix defined by

$$\hat{G}_n(h) = \frac{1}{n} \sum_{t=1}^n (X_t - \bar{X})(X_{t+h} - \bar{X})' \tag{4.8}$$

where $\bar{X} = (1/n)\sum_{t=1}^n X_t$ is the sample mean.
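A mean-centered variant of the previous sketch implements the estimator (4.8) just defined; subtracting the sample mean is the only change.

```python
def sample_autocov_centered(X, h):
    """Centered lag-h sample autocovariance matrix (4.8)."""
    Xbar = X.mean(axis=0)
    n = len(X) - h
    return sum(np.outer(X[t] - Xbar, X[t + h] - Xbar) for t in range(n)) / n

print(sample_autocov_centered(X, 1))
```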
Theorem 4.5. Suppose that $\hat{G}_n(h)$ is the sample autocovariance matrix defined by (4.8), where $X_t$ is the moving average (3.1) and $\{Z_n\}$ are i.i.d. with common distribution $\mu$ on $\mathbb{R}^d$. Suppose that $\mu$ varies regularly with exponent $E$, where every eigenvalue of $E$ has real part exceeding $1/4$. Then for all $h$ we have

$$nA_n\left[ \hat{G}_n(h) - \sum_{j=-\infty}^{\infty} C_j B_n C^t_{j+h} \right] A_n^t \Rightarrow \sum_{j=-\infty}^{\infty} C_j W C^t_{j+h} \tag{4.9}$$

where $A_n$, $B_n$ and $W$ are as in (4.2).

Proof. Note that the difference between the two formulas (4.1) and (4.8) can be written in the form

$$\frac{1}{n}\sum_{t=1}^n (X_t - \bar{X})(X_{t+h} - \bar{X})' - \frac{1}{n}\sum_{t=1}^n X_t X'_{t+h} = -\frac{1}{n}\sum_{t=1}^n X_t\bar{X}' - \frac{1}{n}\sum_{t=1}^n \bar{X}X'_{t+h} + \bar{X}\bar{X}' = -\bar{X}\bar{X}' - \bar{X}\left( \bar{X} + \frac{1}{n}\sum_{t=n+1}^{n+h} X_t - \frac{1}{n}\sum_{t=1}^{h} X_t \right)' + \bar{X}\bar{X}' = -\bar{X}\bar{X}' + o_P(\bar{X}\bar{X}')$$

and so it suffices to show that $nA_n\bar{X}\bar{X}'A_n^t \to 0$ in probability. For this it suffices to show that for all unit vectors $x \in V_i$ and $y \in V_l$ we have

$$n x'A_n\bar{X}\bar{X}'A_n^t y = (\sqrt{n}\, x'A_n\bar{X})(\sqrt{n}\, y'A_n\bar{X}) \xrightarrow{P} 0$$

and so it suffices to show that $\sqrt{n}\, x'A_n\bar{X} \to 0$ in probability for any unit vector $x \in V_i$, for any $i = 1, \ldots, p$.

If $a_i < 1$ then $EP_i Z_t$ exists. Note that (4.8) is not changed if we replace $Z_t$ by $Z_t - EP_i Z_t$, so that without loss of generality $EP_i Z_t = 0$. Let $S_n = X_1 + \cdots + X_n$, so that $\bar{X} = S_n/n$. In the case $a_i < 1$, Meerschaert (1993), along with the spectral decomposition, shows that we can take $P_i b_n = EP_i Z_t = 0$ in (2.3), and then we have from (3.3) that $A_n P_i S_n = nA_n P_i\bar{X} \Rightarrow U$, and so $n^{1/2} A_n P_i\bar{X} \to 0$ in probability. It follows that $n^{1/2} x'A_n\bar{X} \to 0$ in probability. If $a_i > 1$ then Meerschaert (1993) along with the spectral decomposition shows that we can also take $P_i b_n = 0$ in (2.3), and it follows as before that $n^{1/2} x'A_n\bar{X} \to 0$ in probability.

If $a_i = 1$ we can apply Lemma 2.1 part (ii) with $\zeta = 1 + \varepsilon > 1/a_i$ for any $\varepsilon > 0$. By (3.3) we have $A_n(S_n - na_n) \Rightarrow U$, and so we have $nA_n(\bar{X} - a_n) \Rightarrow U$, and then $nx'A_n(\bar{X} - a_n) \Rightarrow x'U$ by continuous mapping. Then $\sqrt{n}\, x'A_n(\bar{X} - a_n) \to 0$ in probability, and so it suffices to show that $\sqrt{n}\, x'A_n a_n \to 0$, where without loss of generality $a_n = \sum_j C_j b_n$ and $b_n = EZ_1 I(\|A_n Z_1\| \le 1)$. Argue as in the proof of Lemma 5.1 that $U_1(r, \theta) \le CU_{1+\varepsilon}(r, \theta)$ for all $r$ large and all unit vectors $\theta$ in $V_i$. Then, as in the proof of Lemma 5.1, we have (letting $A_n^t x = r_n\theta_n$ with $r_n > 0$ and $\|\theta_n\| = 1$) that

$$|nx'A_n b_n| \le nE|\langle A_n Z_1, x\rangle| I(\|A_n Z_1\| \le 1) \le C\|P_i A_n^{-1}\|^\varepsilon\, nE|\langle A_n Z_1, x\rangle|^{1+\varepsilon} I(|\langle A_n Z_1, x\rangle| \le 1) < C\|P_i A_n^{-1}\|^\varepsilon K_1$$

for all $n$ large, uniformly over unit vectors $x \in V_i$, and so for all large $n$ we have

$$|n^{1/2} x'A_n a_n| = n^{-1/2}\left| \sum_{j=-\infty}^{\infty} nx'A_n C_j b_n \right| \le n^{-1/2} \sum_{j=-\infty}^{\infty} \|C_j\|\, \sup_{\|\theta\|=1,\ \theta\in V_i} |n\theta'A_n b_n| \le n^{-1/2}\, C\|P_i A_n^{-1}\|^\varepsilon K_1 \sum_{j=-\infty}^{\infty} \|C_j\|$$

where the series converges. Since $\log(n^{-1/2}\|P_i A_n^{-1}\|^\varepsilon)/\log n \to -\tfrac12 + a_i\varepsilon < 0$ for small $\varepsilon > 0$, we obtain $n^{1/2} x'A_n a_n \to 0$, which concludes the proof.

Remark 4.6. Davis, Marengo and Resnick (1985) and Davis and Marengo (1990) also establish the joint convergence of the sample autocovariance matrices over all lags $|h| \le h_0$, in the special case of scalar norming. It is straightforward to extend the results in this paper to establish joint convergence as well. In Meerschaert and Scheffler (1998) we apply this joint convergence to compute the asymptotic distribution of the sample cross-correlations for a bivariate moving average. This extends the results of Davis et al. to the case where the norming operators $A_n$ and $C_j$ are diagonal. The general case, where both $A_n$ and $C_j$ are arbitrary linear operators, remains open.

5. APPENDIX

In this section we prove Proposition 4.2 and Proposition 4.3 above, and hence complete the proof of Theorem 4.1. Throughout this section we assume that the assumptions of Theorem 4.1 hold. That is, we assume that $\{Z_n\}$ are i.i.d. with common distribution $\mu$ on $\mathbb{R}^d$, and that $\mu$ varies regularly with index $E$, where every eigenvalue of $E$ has real part exceeding $1/4$ and (2.2) holds. Assume further that $Ex'Z_n = 0$ for all $x \in \mathbb{R}^d$ such that $E|x'Z_n| < \infty$. Recall from Section 2 that, in view of the spectral decomposition, we can (and hence will) assume without loss of generality that $A_n$ is block diagonal with respect to the direct sum decomposition $\mathbb{R}^d = V_1 \oplus \cdots \oplus V_p$, where the $V_i$-spaces are mutually orthogonal and form the spectral decomposition relative to $E$. Let $1/4 < a_1 < \cdots < a_p$ denote the real parts of the eigenvalues of $E$. Since the proofs of the two propositions are quite long we split them into several lemmas.

Proof of Proposition 4.2. Let $V_1, \ldots, V_p$ be the spectral decomposition of $E$ and $a_1 < \cdots < a_p$ the associated real spectrum (see Section 2 above). Recall that $V_i \perp V_j$ and that $P_i$ is the orthogonal projection onto $V_i$. We also assume as before that (3.2) holds for some $\delta < 1/a_p$ with $\delta \le 1$. It will suffice to show that

$$x'A_n\left[ n\hat{\Gamma}_n(h) - \sum_{t=1}^n \sum_{j=-\infty}^{\infty} C_j Z_{t-j} Z'_{t-j} C^t_{j+h} \right] A_n^t y \xrightarrow{P} 0 \tag{5.1}$$

for all unit vectors $x \in V_i$ and $y \in V_l$.
Note that

$$n\hat{\Gamma}_n(h) = \sum_{t=1}^n X_t X'_{t+h} = \sum_{t=1}^n \left( \sum_{j=-\infty}^{\infty} C_j Z_{t-j} \right)\left( \sum_{q=-\infty}^{\infty} C_q Z_{t+h-q} \right)' = \sum_{t=1}^n \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} C_j Z_{t-j} Z'_{t-k} C^t_{k+h} \quad (\text{setting } k = q - h) = \sum_{t=1}^n \sum_{j=-\infty}^{\infty} C_j Z_{t-j} Z'_{t-j} C^t_{j+h} + \sum_{t=1}^n \sum_{j=-\infty}^{\infty} \sum_{k\neq j} C_j Z_{t-j} Z'_{t-k} C^t_{k+h}$$

so that (5.1) is equivalent to

$$x'A_n\left[ \sum_{t=1}^n \sum_{j=-\infty}^{\infty} \sum_{k\neq j} C_j Z_{t-j} Z'_{t-k} C^t_{k+h} \right] A_n^t y \xrightarrow{P} 0. \tag{5.2}$$

We now have to consider several cases separately.

Case (i): Suppose that $x \in V_i$ and $y \in V_l$ are unit vectors with $a_i + a_l > 1$. Choose $\delta > 0$ such that $\delta > 1/(a_i + a_l)$, $\delta < 1/a_i$, $\delta < 1/a_l$, and $\delta \le 1$. Write $C_j^t A_n^t x = r\theta$ where $r > 0$ and $\|\theta\| = 1$. Then

$$r = \|C_j^t A_n^t x\| = \|C_j^t A_n^t P_i x\| = \|C_j^t P_i A_n^t x\| \le \|C_j\| \|P_i A_n\|$$

and so

$$E|x'A_n C_j Z_1|^\delta = E|(C_j^t A_n^t x)'Z_1|^\delta = E|r\theta'Z_1|^\delta \le \|P_i A_n\|^\delta \|C_j\|^\delta E|\langle Z_1, \theta\rangle|^\delta = \|P_i A_n\|^\delta \|C_j\|^\delta E|\langle Z_1, P_i\theta\rangle|^\delta = \|P_i A_n\|^\delta \|C_j\|^\delta E|\langle P_i Z_1, \theta\rangle|^\delta \le \|P_i A_n\|^\delta \|C_j\|^\delta E\|P_i Z_1\|^\delta$$

and likewise $E|Z'_2 C^t_{k+h} A_n^t y|^\delta \le \|P_l A_n\|^\delta \|C_{k+h}\|^\delta E\|P_l Z_2\|^\delta$. Then

$$E\left| x'A_n \left[\sum_{t=1}^n \sum_j \sum_{k\neq j} C_j Z_{t-j} Z'_{t-k} C^t_{k+h}\right] A_n^t y \right|^\delta \le E\left[ \left( \sum_{t=1}^n \sum_j \sum_{k\neq j} |x'A_n C_j Z_{t-j} Z'_{t-k} C^t_{k+h} A_n^t y| \right)^\delta \right] \le E\left[ \sum_{t=1}^n \sum_j \sum_{k\neq j} |x'A_n C_j Z_{t-j}|^\delta \, |Z'_{t-k} C^t_{k+h} A_n^t y|^\delta \right] = \sum_{t=1}^n \sum_j \sum_{k\neq j} E|x'A_n C_j Z_{t-j}|^\delta \, E|Z'_{t-k} C^t_{k+h} A_n^t y|^\delta$$

(by independence of $Z_{t-j}$ and $Z_{t-k}$)

$$\le \sum_{t=1}^n \sum_j \sum_{k\neq j} \|P_i A_n\|^\delta \|C_j\|^\delta E\|P_i Z_1\|^\delta \cdot \|P_l A_n\|^\delta \|C_{k+h}\|^\delta E\|P_l Z_2\|^\delta = n\|P_i A_n\|^\delta \|P_l A_n\|^\delta E\|P_i Z_1\|^\delta E\|P_l Z_2\|^\delta \sum_j \sum_{k\neq j} \|C_j\|^\delta \|C_{k+h}\|^\delta$$

where $E\|P_i Z_1\|^\delta < \infty$ because $\delta < 1/a_i$, $E\|P_l Z_2\|^\delta < \infty$ because $\delta < 1/a_l$, and

$$\sum_j \sum_{k\neq j} \|C_j\|^\delta \|C_{k+h}\|^\delta \le \left( \sum_j \|C_j\|^\delta \right)\left( \sum_k \|C_{k+h}\|^\delta \right) = \left( \sum_j \|C_j\|^\delta \right)^2 < \infty$$

and furthermore, since $\log\|P_i A_n\|/\log n \to -a_i$ and $\log\|P_l A_n\|/\log n \to -a_l$, we have

$$\frac{\log(n\|P_i A_n\|^\delta \|P_l A_n\|^\delta)}{\log n} \to 1 - \delta a_i - \delta a_l < 0$$

since $\delta > 1/(a_i + a_l)$. Then $n\|P_i A_n\|^\delta \|P_l A_n\|^\delta \to 0$ and it follows that (5.2) holds in this case.

Case (ii): Suppose that $x \in V_i$ and $y \in V_l$ are unit vectors with $a_i + a_l \le 1$. Then $a_i < 1$ and $a_l < 1$, so $E\|P_i Z_1\| < \infty$ and $E\|P_l Z_2\| < \infty$. Then by assumption $Ex'Z_1 = 0$ and $EZ'_2 y = 0$. Define $Z_{in} = Z_i I(\|A_n Z_i\| \le 1)$ and $\mu_n = EZ_{in}$, and let

$$A_{i,j,n} = (Z_{in} - \mu_n)(Z_{jn} - \mu_n)', \qquad B_{i,j,n} = Z_{in}\mu'_n + \mu_n Z'_{jn}, \qquad C_{i,j,n} = Z_i Z'_j I(\|A_n Z_i\| > 1 \text{ or } \|A_n Z_j\| > 1), \qquad D_{i,j,n} = -\mu_n\mu'_n$$

so that $EA_{i,j,n} = 0$ for $i \neq j$ and $Z_i Z'_j = A_{i,j,n} + B_{i,j,n} + C_{i,j,n} + D_{i,j,n}$. Then let

$$A = \sum_{t=1}^n \sum_{j=-\infty}^{\infty} \sum_{k\neq j} x'A_n C_j A_{t-j,t-k,n} C^t_{k+h} A_n^t y$$

and similarly for $B$, $C$, $D$, to obtain

$$x'A_n\left[ \sum_{t=1}^n \sum_{j=-\infty}^{\infty} \sum_{k\neq j} C_j Z_{t-j} Z'_{t-k} C^t_{k+h} \right] A_n^t y = A + B + C + D$$

and so in order to establish (5.2) it suffices to show that each of $A$, $B$, $C$, $D$ tends to zero in probability.

Let $\bar{Z}_{in} = Z_{in} - \mu_n$, so that $A_{i,j,n} = \bar{Z}_{in}\bar{Z}'_{jn}$. Since $\bar{Z}_{in}$ and $\bar{Z}_{jn}$ are independent with mean zero for $i \neq j$, it is easy to check that $EA = 0$. Then

$$\mathrm{var}(A) = E\left[ \left( \sum_{t=1}^n \sum_{j=-\infty}^{\infty} \sum_{k\neq j} x'A_n C_j \bar{Z}_{t-j,n}\bar{Z}'_{t-k,n} C^t_{k+h} A_n^t y \right)^2 \right]$$
Expanding the square,

$$\mathrm{var}(A) = \sum_{t_1=1}^n \sum_{t_2=1}^n \sum_{j_1} \sum_{j_2} \sum_{k_1\neq j_1} \sum_{k_2\neq j_2} E[\langle \bar{Z}_{t_1-j_1,n}, C^t_{j_1} A_n^t x\rangle \langle \bar{Z}_{t_1-k_1,n}, C^t_{k_1+h} A_n^t y\rangle \langle \bar{Z}_{t_2-j_2,n}, C^t_{j_2} A_n^t x\rangle \langle \bar{Z}_{t_2-k_2,n}, C^t_{k_2+h} A_n^t y\rangle].$$

If any one of $t_1 - j_1$, $t_1 - k_1$, $t_2 - j_2$, or $t_2 - k_2$ is distinct from the other three, then this term vanishes since $E\bar{Z}_{in} = 0$, using the independence to write the mean of the product as the product of the means. The only nonvanishing terms for a given $t_1, j_1, k_1$ occur when either

(a) $t_1 - j_1 = t_2 - j_2$ and $t_1 - k_1 = t_2 - k_2$, or
(b) $t_1 - j_1 = t_2 - k_2$ and $t_1 - k_1 = t_2 - j_2$.

Given $t_1, j_1, k_1$, for every $t_2 = 1, \ldots, n$ there exists a unique pair $j_2, k_2$ satisfying (a) or (b). Indeed we have

(a) $j_2 = (t_2 - t_1) + j_1$ and $k_2 = (t_2 - t_1) + k_1$
(b) $j_2 = (t_2 - t_1) + k_1$ and $k_2 = (t_2 - t_1) + j_1$

so that $j_2 \neq k_2$ is guaranteed. Now let $p = t_1 - j_1$, $q = t_1 - k_1$, $j = j_1$, $t = t_1$, $k = k_1$, and $\Delta = t_2 - t_1$, so that in case

(a) $t_2 - j_2 = p$, $t_2 - k_2 = q$, $j_2 = \Delta + j$, and $k_2 = \Delta + k$
(b) $t_2 - j_2 = q$, $t_2 - k_2 = p$, $j_2 = \Delta + k$, and $k_2 = \Delta + j$.

Then in case (a) the only nonvanishing terms for a given $t, j, k$ are of the form

$$E[\langle \bar{Z}_{pn}, C^t_j A_n^t x\rangle \langle \bar{Z}_{qn}, C^t_{k+h} A_n^t y\rangle \langle \bar{Z}_{pn}, C^t_{\Delta+j} A_n^t x\rangle \langle \bar{Z}_{qn}, C^t_{\Delta+k+h} A_n^t y\rangle] = E[x'A_n C_j \bar{Z}_{pn}\bar{Z}'_{pn} C^t_{\Delta+j} A_n^t x]\, E[y'A_n C_{k+h} \bar{Z}_{qn}\bar{Z}'_{qn} C^t_{\Delta+k+h} A_n^t y] = (x'A_n C_j M_n C^t_{\Delta+j} A_n^t x)(y'A_n C_{k+h} M_n C^t_{\Delta+k+h} A_n^t y)$$

where $M_n = E\bar{Z}_{in}\bar{Z}'_{in}$. In case (b) the only nonvanishing terms are of the form

$$E[\langle \bar{Z}_{pn}, C^t_j A_n^t x\rangle \langle \bar{Z}_{qn}, C^t_{k+h} A_n^t y\rangle \langle \bar{Z}_{qn}, C^t_{\Delta+k} A_n^t x\rangle \langle \bar{Z}_{pn}, C^t_{\Delta+j+h} A_n^t y\rangle] = E[\langle \bar{Z}_{pn}, C^t_j A_n^t x\rangle \langle \bar{Z}_{pn}, C^t_{\Delta+j+h} A_n^t y\rangle]\, E[\langle \bar{Z}_{qn}, C^t_{k+h} A_n^t y\rangle \langle \bar{Z}_{qn}, C^t_{\Delta+k} A_n^t x\rangle] = (x'A_n C_j M_n C^t_{\Delta+j+h} A_n^t y)(y'A_n C_{k+h} M_n C^t_{\Delta+k} A_n^t x)$$

so that

$$\mathrm{var}(A) = \sum_{t=1}^n \sum_{\Delta=1-t}^{n-t} \sum_j \sum_{k\neq j} [(x'A_n C_j M_n C^t_{\Delta+j} A_n^t x)(y'A_n C_{k+h} M_n C^t_{\Delta+k+h} A_n^t y) + (x'A_n C_j M_n C^t_{\Delta+j+h} A_n^t y)(y'A_n C_{k+h} M_n C^t_{\Delta+k} A_n^t x)].$$

Now we need to bound the terms in this sum. Write $C^t_j x = r_j x_j$, $A_n^t x_j = a_{nj} x_{nj}$, $C^t_j y = \bar{r}_j y_j$, and $A_n^t y_j = b_{nj} y_{nj}$, where $x_j, x_{nj}, y_j, y_{nj}$ are unit vectors and $r_j, a_{nj}, \bar{r}_j, b_{nj}$ are positive reals. Then for example we have

$$x'A_n C_j M_n C^t_{\Delta+j} A_n^t x = x'C_j A_n M_n A_n^t C^t_{\Delta+j} x = (C^t_j x)'A_n M_n A_n^t (C^t_{\Delta+j} x) = r_j r_{\Delta+j}\, x'_j A_n M_n A_n^t x_{\Delta+j}$$

where

$$x'_j A_n M_n A_n^t x_{\Delta+j} = x'_j A_n E[\bar{Z}_{1n}\bar{Z}'_{1n}] A_n^t x_{\Delta+j} = E[\langle \bar{Z}_{1n}, A_n^t x_j\rangle \langle \bar{Z}_{1n}, A_n^t x_{\Delta+j}\rangle] \le \sqrt{E\langle \bar{Z}_{1n}, A_n^t x_j\rangle^2\, E\langle \bar{Z}_{1n}, A_n^t x_{\Delta+j}\rangle^2}.$$

Since $\bar{Z}_{1n} = Z_{1n} - \mu_n$, where $\mu_n = EZ_{1n}$, we have for example that
$$E\langle \bar{Z}_{1n}, A_n^t x_j\rangle^2 = E\langle Z_{1n} - \mu_n, A_n^t x_j\rangle^2 = E(\langle Z_{1n}, A_n^t x_j\rangle - E\langle Z_{1n}, A_n^t x_j\rangle)^2 = \mathrm{var}(\langle Z_{1n}, A_n^t x_j\rangle) \le E[\langle Z_{1n}, A_n^t x_j\rangle^2]$$

so that, calculating in a similar manner for the remaining terms, we have for example

$$x'A_n C_j M_n C^t_{\Delta+j} A_n^t x \le r_j r_{\Delta+j} \sqrt{E\langle Z_{1n}, A_n^t x_j\rangle^2\, E\langle Z_{1n}, A_n^t x_{\Delta+j}\rangle^2}$$

and then

$$\mathrm{var}(A) \le \sum_{t=1}^n \sum_{\Delta=1-t}^{n-t} \sum_j \sum_{k\neq j} \big( r_j r_{\Delta+j}[E\langle Z_{1n}, A_n^t x_j\rangle^2 E\langle Z_{1n}, A_n^t x_{\Delta+j}\rangle^2]^{1/2} \cdot \bar{r}_{k+h}\bar{r}_{\Delta+k+h}[E\langle Z_{1n}, A_n^t y_{k+h}\rangle^2 E\langle Z_{1n}, A_n^t y_{\Delta+k+h}\rangle^2]^{1/2} + r_j\bar{r}_{\Delta+j+h}[E\langle Z_{1n}, A_n^t x_j\rangle^2 E\langle Z_{1n}, A_n^t y_{\Delta+j+h}\rangle^2]^{1/2} \cdot \bar{r}_{k+h}r_{\Delta+k}[E\langle Z_{1n}, A_n^t y_{k+h}\rangle^2 E\langle Z_{1n}, A_n^t x_{\Delta+k}\rangle^2]^{1/2} \big).$$

To bound the expectations on the right hand side of the inequality above we need the following lemma.

Lemma 5.1. For all unit vectors $x \in V_i$ and $y \in V_l$, for some $n_0$, for all $n \ge n_0$ and all $j, k, h, \Delta$, for some $K_n > 0$ satisfying $nK_n \to 0$, we have

$$[E\langle Z_{1n}, A_n^t x_j\rangle^2 E\langle Z_{1n}, A_n^t x_{\Delta+j}\rangle^2 \cdot E\langle Z_{1n}, A_n^t y_{k+h}\rangle^2 E\langle Z_{1n}, A_n^t y_{\Delta+k+h}\rangle^2]^{1/2} + [E\langle Z_{1n}, A_n^t x_j\rangle^2 E\langle Z_{1n}, A_n^t y_{\Delta+j+h}\rangle^2 \cdot E\langle Z_{1n}, A_n^t y_{k+h}\rangle^2 E\langle Z_{1n}, A_n^t x_{\Delta+k}\rangle^2]^{1/2} \le K_n. \tag{5.3}$$

Proof of Lemma 5.1. Suppose that $a_i > 1/2$. Then $a_l < 1/2$, so $E\|P_i Z_1\|^2 = \infty$ while $E\|P_l Z_1\|^2 < \infty$. Recall that $P_l = P_l^2 = P_l^t$ for this orthogonal projection operator, that $A_n P_l = P_l A_n$, and that $|\langle x, \theta\rangle| \le \|x\|$ when $\theta$ is a unit vector. Then for all unit vectors $\theta \in V_l$ we have

$$E\langle Z_{1n}, A_n^t\theta\rangle^2 = E[\langle A_n Z_1, \theta\rangle^2 I(\|A_n Z_1\| \le 1)] \le E\langle A_n Z_1, \theta\rangle^2 = E\langle A_n Z_1, P_l\theta\rangle^2 = E\langle P_l A_n P_l Z_1, \theta\rangle^2 \le E\|P_l A_n P_l Z_1\|^2 \le \|P_l A_n\|^2 E\|P_l Z_1\|^2.$$

Since $1/a_i < 2$ we can apply part (ii) of Lemma 2.1 with $\zeta = 2$ to get

$$nE\langle Z_{1n}, A_n^t\theta\rangle^2 = nE[\langle A_n Z_1, \theta\rangle^2 I(\|A_n Z_1\| \le 1)] < K_2$$

for all $n \ge n_0$ and all $\|\theta\| = 1$ in $V_i$. Then (5.3) holds in this case with

$$K_n = 2[(K_2/n)^2(\|P_l A_n\|^2 E\|P_l Z_1\|^2)^2]^{1/2} = 2(K_2/n)\|P_l A_n\|^2 E\|P_l Z_1\|^2$$

so that $nK_n \to 0$ since $\|P_l A_n\| \to 0$. If $a_l > 1/2$, so that $a_i < 1/2$, then a similar argument establishes (5.3) with $K_n = 2(K_2/n)\|P_i A_n\|^2 E\|P_i Z_1\|^2$, and again $nK_n \to 0$. Otherwise we have $a_i \le 1/2$ and $a_l \le 1/2$. If both $E\|P_i Z_1\|^2$ and $E\|P_l Z_1\|^2$ exist then (5.3) holds with $K_n = 2\|P_i A_n\|^2\|P_l A_n\|^2 E\|P_i Z_1\|^2 E\|P_l Z_1\|^2$, and then

$$\frac{\log nK_n}{\log n} = \frac{\log(n\|P_i A_n\|^2\|P_l A_n\|^2)}{\log n} \to 1 - 2(a_i + a_l) < 0$$

since $a_i > 1/4$ and $a_l > 1/4$ by assumption, so again $nK_n \to 0$. Otherwise suppose that $a_i \le 1/2$ and $a_l \le 1/2$ with $E\|P_i Z_1\|^2 = \infty$ and $E\|P_l Z_1\|^2 < \infty$. Then $a_i = 1/2$, and so we can apply part (ii) of Lemma 2.1 with $\zeta = 2 + \varepsilon$ for any $\varepsilon > 0$. Observe that for all $t$ and all $\theta$ we have

$$U_2(t, \theta) = \int_{|\langle z,\theta\rangle|\le t} |\langle z,\theta\rangle|^2\,\mu(dz) \le \int_{0<|\langle z,\theta\rangle|<1} |\langle z,\theta\rangle|^2\,\mu(dz) + \int_{1\le|\langle z,\theta\rangle|\le t} |\langle z,\theta\rangle|^{2+\varepsilon}\,\mu(dz) \le 1 + U_{2+\varepsilon}(t, \theta)$$

and since $U_{2+\varepsilon}(t, \theta) \to \infty$ uniformly on $\|\theta\| = 1$ in $V_i$, for some $t_0$ and $C > 0$ we have, for all $t \ge t_0$ and all $\|\theta\| = 1$ in $V_i$, that $U_2(t, \theta) \le CU_{2+\varepsilon}(t, \theta)$. As in the proof of Lemma 2.1 we can choose $n_0$ such that $\|A_n\theta\| \le t_0^{-1}$ for all $\|\theta\| = 1$ and all $n \ge n_0$. Enlarge $n_0$ if necessary so that Lemma 2.1 also applies. Then

$$nE\langle Z_{1n}, A_n^t x_j\rangle^2 = nE[\langle Z_1, A_n^t x_j\rangle^2 I(\|A_n Z_1\| \le 1)] \le na_{nj}^2 U_2(a_{nj}^{-1}, x_{nj}) \le Ca_{nj}^{-\varepsilon} \cdot na_{nj}^{2+\varepsilon} U_{2+\varepsilon}(a_{nj}^{-1}, x_{nj}) = Ca_{nj}^{-\varepsilon} \cdot nE|\langle A_n Z_1, x_j\rangle|^{2+\varepsilon} I(|\langle A_n Z_1, x_j\rangle| \le 1) < Ca_{nj}^{-\varepsilon} K_1$$

for all $n \ge n_0 = n_0(t_0)$ and all $j$. Note also that for all $j$ and all $n$ we have $a_{nj} = \|A_n^t x_j\| \ge \min\{\|A_n\theta\| : \|\theta\| = 1 \text{ in } V_i\} = 1/\|P_i A_n^{-1}\|$, so that $a_{nj}^{-\varepsilon} \le \|P_i A_n^{-1}\|^\varepsilon$ for all $n$ and all $j$. Then (5.3) holds in this case with

$$K_n = 2n^{-1} CK_1\|P_i A_n^{-1}\|^\varepsilon\, \|P_l A_n\|^2 E\|P_l Z_1\|^2$$

so that

$$\frac{\log nK_n}{\log n} \to \varepsilon a_i - 2a_l < 0$$

for all $\varepsilon > 0$ sufficiently small. Note that $\varepsilon > 0$ can be chosen independently of $x, y, j$, and so $nK_n \to 0$ here too. Finally, if $a_i \le 1/2$ and $a_l \le 1/2$ with $E\|P_i Z_1\|^2 < \infty$ and $E\|P_l Z_1\|^2 = \infty$, then (5.3) holds with $K_n = 2n^{-1} CK_1\|P_l A_n^{-1}\|^\varepsilon\, \|P_i A_n\|^2 E\|P_i Z_1\|^2$ and $\log(nK_n)/\log n \to \varepsilon a_l - 2a_i < 0$ for $\varepsilon > 0$ sufficiently small, and again $nK_n \to 0$. This concludes the proof of Lemma 5.1.
Returning to the proof of Proposition 4.2, we get from Lemma 5.1 that

$$\mathrm{var}(A) \le K_n \sum_{t=1}^n \sum_{\Delta=1-t}^{n-t} \sum_j \sum_{k\neq j} (r_j r_{\Delta+j}\bar{r}_{k+h}\bar{r}_{\Delta+k+h} + r_j\bar{r}_{\Delta+j+h}\bar{r}_{k+h}r_{\Delta+k}) \le K_n \sum_{t=1}^n \sum_{\Delta=1-t}^{n-t} \sum_j \sum_{k\neq j} (\|C_j\|\|C_{\Delta+j}\|\|C_{k+h}\|\|C_{\Delta+k+h}\| + \|C_j\|\|C_{\Delta+j+h}\|\|C_{k+h}\|\|C_{\Delta+k}\|) \le 2nK_n\left( \sum_j \|C_j\| \right)^4 \to 0$$

and so $A \to 0$ in probability. Next we have

$$B = \sum_{t=1}^n \sum_j \sum_{k\neq j} x'A_n C_j[Z_{t-j,n}\mu'_n + \mu_n Z'_{t-k,n}]C^t_{k+h} A_n^t y$$

and we will let

$$B^{jk}_{rs} = x'A_n C_j[Z_{rn}\mu'_n + \mu_n Z'_{sn}]C^t_k A_n^t y = (C^t_j x)'A_n[Z_{rn}\mu'_n + \mu_n Z'_{sn}]A_n^t(C^t_k y) = r_j\bar{r}_k (A_n^t x_j)'[Z_{rn}\mu'_n + \mu_n Z'_{sn}](A_n^t y_k) = r_j\bar{r}_k[B^{(1)}_r + B^{(2)}_s]$$

where

$$B^{(1)}_r = (A_n^t x_j)'Z_{rn}\mu'_n A_n^t y_k, \qquad B^{(2)}_s = (A_n^t x_j)'\mu_n Z'_{sn} A_n^t y_k.$$

In the next lemma we derive bounds on the expectation of $B^{jk}_{rs}$.

Lemma 5.2. For all unit vectors $x \in V_i$ and $y \in V_l$, for some $n_0$, for all $n \ge n_0$ and for all $j, k, r, s$ we have

$$E|B^{jk}_{rs}| \le \|C_j\|\|C_k\| K_n \tag{5.4}$$

where $nK_n \to 0$.

Proof of Lemma 5.2. Write

$$E|B^{jk}_{rs}| = r_j\bar{r}_k E|B^{(1)}_r + B^{(2)}_s| \le \|C_j\|\|C_k\|(E|B^{(1)}_r| + E|B^{(2)}_s|)$$

where

$$E|B^{(1)}_r| = E|\langle Z_{1n}, A_n^t x_j\rangle\langle \mu_n, A_n^t y_k\rangle| = |\langle \mu_n, A_n^t y_k\rangle|\, E|\langle Z_{1n}, A_n^t x_j\rangle|.$$

Since $a_i < 1$ we have $E\|P_i Z_1\| < \infty$. Then $E|\langle Z_{1n}, A_n^t x_j\rangle| \le \|P_i A_n\| E\|P_i Z_1\|$, as in the proof of Lemma 5.1. Apply part (iv) of Lemma 2.1 with $\eta = 1$ (so that $\eta < 1/a_l$). Using the fact that $E\langle Z_1, y_k\rangle = 0$, we have for some $n_1 > 0$ that

$$n|\langle \mu_n, A_n^t y_k\rangle| = n|\langle EZ_{1n}, A_n^t y_k\rangle| = n|E\langle Z_1, A_n^t y_k\rangle I(\|A_n Z_1\| \le 1)| = n|E\langle Z_1, A_n^t y_k\rangle I(\|A_n Z_1\| > 1)| \le nE|\langle A_n Z_1, y_k\rangle| I(\|A_n Z_1\| > 1) < K_4$$

for all $n \ge n_1$ and all $k$. It follows that $nE|B^{(1)}_r| \le K_4\|P_i A_n\| E\|P_i Z_1\|$ for all $n \ge n_1$ and all $j, k$. A similar argument (note that $a_l < 1$, so that $E\|P_l Z_1\| < \infty$, and apply part (iv) of Lemma 2.1 again with $\eta = 1$, so that $\eta < 1/a_i$) shows that for some $n_2 > 0$ we have $nE|B^{(2)}_s| \le K_4\|P_l A_n\| E\|P_l Z_1\|$ for all $n \ge n_2$ and all $j, k$. Letting $n_0 = \max\{n_1, n_2\}$ we have that (5.4) holds with

$$nK_n = K_4(\|P_i A_n\| E\|P_i Z_1\| + \|P_l A_n\| E\|P_l Z_1\|)$$

for all $j, k, r, s$ and all $n \ge n_0$. Then $nK_n \to 0$ since $\|A_n\| \to 0$, which concludes the proof of Lemma 5.2.

Returning to the proof of Proposition 4.2, Lemma 5.2 yields

$$E|B| \le \sum_{t=1}^n \sum_j \sum_{k\neq j} \|C_j\|\|C_{k+h}\| K_n \le nK_n\left( \sum_j \|C_j\| \right)^2 \to 0$$

and so $B \to 0$ in probability. Next we write

$$C^{jk}_{rs} = x'A_n C_j Z_r Z'_s I(\|A_n Z_r\| > 1 \text{ or } \|A_n Z_s\| > 1)C^t_k A_n^t y = r_j\bar{r}_k\langle Z_r, A_n^t x_j\rangle\langle Z_s, A_n^t y_k\rangle I(\|A_n Z_r\| > 1 \text{ or } \|A_n Z_s\| > 1)$$

so that

$$E|C^{jk}_{rs}| \le \|C_j\|\|C_k\| E(|\langle Z_r, A_n^t x_j\rangle\langle Z_s, A_n^t y_k\rangle| I(\|A_n Z_r\| > 1 \text{ or } \|A_n Z_s\| > 1)) \le \|C_j\|\|C_k\|(E|x'_j A_n Z_r Z'_s A_n^t y_k| I(\|A_n Z_r\| > 1) + E|x'_j A_n Z_r Z'_s A_n^t y_k| I(\|A_n Z_s\| > 1)).$$

The next lemma gives bounds on $E|C^{jk}_{rs}|$.

Lemma 5.3. For all unit vectors $x \in V_i$ and $y \in V_l$, for some $n_0$, for all $n \ge n_0$, and all $j, k, r, s$ we have

$$E|C^{jk}_{rs}| \le \|C_j\|\|C_k\| K_n \tag{5.5}$$

where $nK_n \to 0$.

Proof of Lemma 5.3. Write

$$E|x'_j A_n Z_r Z'_s A_n^t y_k| I(\|A_n Z_r\| > 1) = E|\langle Z_s, A_n^t y_k\rangle|\, E|\langle Z_r, A_n^t x_j\rangle| I(\|A_n Z_r\| > 1)$$

where

$$E|\langle Z_s, A_n^t y_k\rangle| = b_{nk} E|\langle Z_s, y_{nk}\rangle| \le b_{nk} E\|P_l Z_2\| \le \|P_l A_n\| E\|P_l Z_2\|$$

and $nE|\langle Z_r, A_n^t x_j\rangle| I(\|A_n Z_r\| > 1) \le K_4$, and similarly

$$nE|x'_j A_n Z_r Z'_s A_n^t y_k| I(\|A_n Z_s\| > 1) \le \|P_i A_n\| E\|P_i Z_1\| K_4$$

so that (5.5) holds with $nK_n = K_4(\|P_i A_n\| E\|P_i Z_1\| + \|P_l A_n\| E\|P_l Z_1\|)$, which is the same $K_n$ as for Lemma 5.2, so $nK_n \to 0$. This concludes the proof of Lemma 5.3.
In the proof of Proposition 4.2 we get, using Lemma 5.3,

$$E|C| \le \sum_{t=1}^n \sum_j \sum_{k\neq j} \|C_j\|\|C_{k+h}\| K_n \le nK_n\left( \sum_j \|C_j\| \right)^2 \to 0$$

and so $C \to 0$ in probability. Finally we can write

$$D^{jk} = x'A_n C_j\mu_n\mu'_n C^t_k A_n^t y = r_j\bar{r}_k\langle \mu_n, A_n^t x_j\rangle\langle \mu_n, A_n^t y_k\rangle$$

and use the bound proved in the following lemma.

Lemma 5.4. For all unit vectors $x \in V_i$ and $y \in V_l$, for some $n_0$, for all $n \ge n_0$, and all $j, k$, we have $|D^{jk}| \le \|C_j\|\|C_k\| K_n$ where $nK_n \to 0$.

Proof of Lemma 5.4. Recall that for all large $n$ we have $|\langle \mu_n, A_n^t x_j\rangle| < K_4/n$ from the proof of Lemma 5.2 above. Likewise $|\langle \mu_n, A_n^t y_k\rangle| \le K_4/n$, so we can take $K_n = (K_4/n)^2$, which concludes the proof of Lemma 5.4.

Now we have from Lemma 5.4 that

$$|D| \le \sum_{t=1}^n \sum_j \sum_{k\neq j} \|C_j\|\|C_{k+h}\| K_n \le nK_n\left( \sum_j \|C_j\| \right)^2 \to 0$$

which completes the proof of Proposition 4.2.

We now prove Proposition 4.3.

Proof of Proposition 4.3. Define

$$S_n^{(m)} = \sum_{t=1}^n \sum_{|j|\le m} C_j Z_{t-j} Z'_{t-j} C^t_{j+h}, \qquad d_n^{(m)} = n\sum_{|j|\le m} C_j B_n C^t_{j+h}, \qquad W^{(m)} = \sum_{|j|\le m} C_j W C^t_{j+h}$$

for $m \ge 1$. We will now apply (4.2) to show that

$$\left( A_n\left( \sum_{t=1}^n Z_{t-j} Z'_{t-j} - nB_n \right) A_n^t : |j| \le m \right) \Rightarrow (W, \ldots, W). \tag{5.6}$$

Since we know that the convergence holds for the $j = 0$ component, it suffices to show that the difference between this component and any other component tends to zero in probability. This difference consists of at most $2m$ identically distributed terms, any one of which must tend to zero in probability in view of the convergence (4.2) above, and so (5.6) holds. Now by the continuous mapping theorem we obtain from (5.6) that

$$A_n(S_n^{(m)} - d_n^{(m)}) A_n^t \Rightarrow W^{(m)}$$

and since

$$\sum_{|j|>m} \|C_j W C^t_{j+h}\| \le \|W\|\sum_{|j|>m} \|C_j\|\|C_{j+h}\| \le \|W\|\left( \sum_{|j|>m} \|C_j\|^2 \right)^{1/2}\left( \sum_{|j|>m} \|C_{j+h}\|^2 \right)^{1/2} \to 0$$

as $m \to \infty$, we also have

$$W^{(m)} \to \sum_{j=-\infty}^{\infty} C_j W C^t_{j+h}$$

almost surely. Then in order to establish (4.5) it suffices to show (e.g. by Billingsley (1968), Theorem 4.2) that

$$\lim_{m\to\infty} \limsup_{n\to\infty} P(\|T\| > \varepsilon) = 0 \tag{5.7}$$

where

$$T = \sum_{|j|>m} \sum_{t=1}^n A_n C_j(Z_{t-j} Z'_{t-j} - B_n)C^t_{j+h} A_n^t.$$

Note that $\|T\|^2 \le \sum_{i,j} |e'_i T e_j|^2$, where $e_1, \ldots, e_d$ is the standard basis for $\mathbb{R}^d$. Then in order to show that (5.7) holds it suffices to show that

$$\lim_{m\to\infty} \limsup_{n\to\infty} P(|x'Ty| > \varepsilon) = 0 \tag{5.8}$$

for $x \in V_i$ and $y \in V_l$, where $1 \le i, l \le p$. Write $T = A + B - C$ where

$$A = \sum_{|j|>m} \sum_{t=1}^n A_n C_j Z_{t-j} Z'_{t-j} C^t_{j+h} A_n^t I(\|A_n Z_{t-j}\| \le 1), \qquad B = \sum_{|j|>m} \sum_{t=1}^n A_n C_j Z_{t-j} Z'_{t-j} C^t_{j+h} A_n^t I(\|A_n Z_{t-j}\| > 1), \qquad C = \sum_{|j|>m} \sum_{t=1}^n A_n C_j B_n C^t_{j+h} A_n^t.$$

Since $T = A + B - C$ it suffices to show that (5.8) holds with $T$ replaced by $A$, $B$, or $C$. We begin with $B$. Choose $\delta > 0$ with $\delta < 1/a_p$ and $\delta \le 1$, and let $\delta_1 = \delta/2 < 1$. Use the Markov inequality along with the fact that $|x + y|^{\delta_1} \le |x|^{\delta_1} + |y|^{\delta_1}$ to obtain

$$P(|x'By| > \varepsilon) \le \varepsilon^{-\delta_1} E|x'By|^{\delta_1} \le \varepsilon^{-\delta_1} \sum_{|j|>m} nE|x'C_j A_n Z_1 Z'_1 A_n^t C^t_{j+h} y|^{\delta_1} I(\|A_n Z_1\| > 1) \le \varepsilon^{-\delta_1} \sum_{|j|>m} \|C_j\|^{\delta_1}\|C_{j+h}\|^{\delta_1}\, nE|x'_j A_n Z_1 Z'_1 A_n^t y_{j+h}|^{\delta_1} I(\|A_n Z_1\| > 1) \le \varepsilon^{-\delta_1}\left[ \sum_{|j|>m} \|C_{j+h}\|^\delta \right]^{1/2}\left[ \sum_{|j|>m} \|C_j\|^\delta (nE|x'_j A_n Z_1 Z'_1 A_n^t y_{j+h}|^{\delta_1} I(\|A_n Z_1\| > 1))^2 \right]^{1/2}$$

by the Schwarz inequality. Then in order to show that (5.8) holds with $T$ replaced by $B$ it will suffice to show that for some $K_4 > 0$ we have

$$nE|x'_j A_n Z_1 Z'_1 A_n^t y_{j+h}|^{\delta_1} I(\|A_n Z_1\| > 1) \le K_4 \tag{5.9}$$

for all $j, h$ and all $n$ large. Apply the Schwarz inequality again to see that

$$nE|x'_j A_n Z_1 Z'_1 A_n^t y_{j+h}|^{\delta_1} I(\|A_n Z_1\| > 1) = nE|\langle A_n Z_1, x_j\rangle\langle A_n Z_1, y_{j+h}\rangle|^{\delta_1} I(\|A_n Z_1\| > 1) \le \sqrt{nE|\langle A_n Z_1, x_j\rangle|^\delta I(\|A_n Z_1\| > 1) \cdot nE|\langle A_n Z_1, y_{j+h}\rangle|^\delta I(\|A_n Z_1\| > 1)}$$
where $nE|\langle A_n Z_1, x_j\rangle|^\delta I(\|A_n Z_1\| > 1) < K_4$ for all $n$ large and all $j$, by an application of Lemma 2.1 part (iv) with $\eta = \delta < 1/a_i$. A similar argument holds for the remaining term, which establishes (5.9), and so (5.8) holds with $T$ replaced by $B$.

Next we look at $A$ and $C$, which requires us to consider different cases. First suppose that $a_i > 1/2$ and $a_l > 1/2$. As in the proof of Proposition 4.2 we write $C^t_j x = r_j x_j$, $A_n^t x_j = a_{nj} x_{nj}$, $C^t_j y = \bar{r}_j y_j$ and $A_n^t y_j = b_{nj} y_{nj}$, where $x_j, x_{nj}, y_j, y_{nj}$ are unit vectors and $r_j, a_{nj}, \bar{r}_j, b_{nj}$ are positive reals. By the Markov inequality we get

$$P(|x'Ay| > \varepsilon) \le \varepsilon^{-1} E|x'Ay| \le \varepsilon^{-1} \sum_{|j|>m} nE|x'C_j A_n Z_1 Z'_1 A_n^t C^t_{j+h} y| I(\|A_n Z_1\| \le 1) \le \varepsilon^{-1} \sum_{|j|>m} \|C_j\|\|C_{j+h}\|\, nE|\langle Z_1, A_n^t x_j\rangle\langle Z_1, A_n^t y_{j+h}\rangle| I(\|A_n Z_1\| \le 1).$$

Next we will show that for some $K_2 > 0$ we have

$$nE|\langle Z_1, A_n^t x_j\rangle\langle Z_1, A_n^t y_{j+h}\rangle| I(\|A_n Z_1\| \le 1) \le K_2 \tag{5.10}$$

for all $j, h$ and all $n$ large. By the Schwarz inequality we have that the left hand side of (5.10) is bounded above by

$$\sqrt{nE\langle Z_1, A_n^t x_j\rangle^2 I(\|A_n Z_1\| \le 1) \cdot nE\langle Z_1, A_n^t y_{j+h}\rangle^2 I(\|A_n Z_1\| \le 1)}$$

where $nE\langle Z_1, A_n^t x_j\rangle^2 I(\|A_n Z_1\| \le 1) < K_2$, independent of $j$ for $n$ large, as in the proof of Lemma 5.1 above, since $a_i > 1/2$ (so that $\zeta = 2 > 1/a_i$ in Lemma 2.1 part (ii)). Similarly $nE\langle Z_1, A_n^t y_{j+h}\rangle^2 I(\|A_n Z_1\| \le 1) < K_2$, independent of $j$ for $n$ large, by another application of Lemma 2.1 part (ii) (note that $a_l > 1/2$, so that $\zeta = 2 > 1/a_l$). Hence (5.10) holds for all $j, h$ and all $n$ large, and so for $n$ large we have

$$P(|x'Ay| > \varepsilon) \le \varepsilon^{-1} K_2 \sum_{|j|>m} \|C_j\|\|C_{j+h}\| \le \varepsilon^{-1} K_2\left( \sum_{|j|>m} \|C_j\|^2 \right)^{1/2}\left( \sum_{|j|>m} \|C_{j+h}\|^2 \right)^{1/2}$$

which tends to zero as $m \to \infty$, and so (5.8) holds with $T$ replaced by $A$ in this case. Since $|x'Cy| = |Ex'Ay|$ we also have that (5.8) holds with $T$ replaced by $C$ in this case.

Finally suppose that either $a_i \le 1/2$ or $a_l \le 1/2$ (or both). As in the proof of Proposition 4.2 define $Z_{in} = Z_i I(\|A_n Z_i\| \le 1)$ and let $Q_{in} = Z_{in}Z'_{in} - EZ_{in}Z'_{in} = Z_{in}Z'_{in} - B_n$. By the Markov inequality we get

$$P(|x'(A - C)y| > \varepsilon) \le \varepsilon^{-2} E[(x'(A - C)y)^2] = \varepsilon^{-2} E\left[ \left( \sum_{|j|>m} \sum_{t=1}^n x'A_n C_j Q_{t-j,n} C^t_{j+h} A_n^t y \right)^2 \right] = \varepsilon^{-2} \sum_{|j|>m} \sum_{t=1}^n \sum_{|j'|>m} \sum_{t'=1}^n E(x'A_n C_j Q_{t-j,n} C^t_{j+h} A_n^t y \cdot x'A_n C_{j'} Q_{t'-j',n} C^t_{j'+h} A_n^t y).$$

Since the $Q_{in}$ are independent with mean zero, the only terms that contribute to the sum above are those for which $t - j = t' - j'$. Then the sum above equals

$$\sum_{|j|>m} \sum_{t=1}^n \sum_{j'=1-t+j}^{n-t+j} r_j\bar{r}_{j+h} r_{j'}\bar{r}_{j'+h}\, E(x'_j A_n Q_{t-j,n} A_n^t y_{j+h} \cdot x'_{j'} A_n Q_{t-j,n} A_n^t y_{j'+h}).$$

Next we will show that for some $K_2 > 0$ we have

$$nE(x'_j A_n Q_{1n} A_n^t y_{j+h} \cdot x'_{j'} A_n Q_{1n} A_n^t y_{j'+h}) \le K_2 \tag{5.11}$$

for all $j, j', h$ and all $n$ large.
Write

$$x'_j A_n Q_{1n} A_n^t y_k = x'_j A_n Z_{1n}Z'_{1n} A_n^t y_k - Ex'_j A_n Z_{1n}Z'_{1n} A_n^t y_k = \langle A_n Z_{1n}, x_j\rangle\langle A_n Z_{1n}, y_k\rangle - E\langle A_n Z_{1n}, x_j\rangle\langle A_n Z_{1n}, y_k\rangle = \xi_{nj}\beta_{nk} - E\xi_{nj}\beta_{nk}$$

where $\xi_{nj} = \langle A_n Z_{1n}, x_j\rangle$ and $\beta_{nk} = \langle A_n Z_{1n}, y_k\rangle$. Then

$$E(x'_j A_n Q_{1n} A_n^t y_{j+h} \cdot x'_{j'} A_n Q_{1n} A_n^t y_{j'+h}) = E(\xi_{nj}\beta_{n,j+h} - E\xi_{nj}\beta_{n,j+h})(\xi_{nj'}\beta_{n,j'+h} - E\xi_{nj'}\beta_{n,j'+h}) = E\xi_{nj}\beta_{n,j+h}\xi_{nj'}\beta_{n,j'+h} - E\xi_{nj}\beta_{n,j+h}\, E\xi_{nj'}\beta_{n,j'+h}$$

where

$$E|\xi_{nj}\beta_{n,j+h}\xi_{nj'}\beta_{n,j'+h}| \le (E(\xi_{nj}\beta_{n,j+h})^2)^{1/2}(E(\xi_{nj'}\beta_{n,j'+h})^2)^{1/2} \le (E\xi^4_{nj})^{1/4}(E\beta^4_{n,j+h})^{1/4}(E\xi^4_{nj'})^{1/4}(E\beta^4_{n,j'+h})^{1/4} = (E\xi^4_{nj}\,E\beta^4_{n,j+h}\,E\xi^4_{nj'}\,E\beta^4_{n,j'+h})^{1/4}$$

by the Schwarz inequality, and similarly

$$E|\xi_{nj}\beta_{n,j+h}| \le (E\xi^4_{nj}\,E\beta^4_{n,j+h})^{1/4}, \qquad E|\xi_{nj'}\beta_{n,j'+h}| \le (E\xi^4_{nj'}\,E\beta^4_{n,j'+h})^{1/4}$$

so that together we have

$$E(x'_j A_n Q_{t-j,n} A_n^t y_{j+h} \cdot x'_{j'} A_n Q_{t-j,n} A_n^t y_{j'+h}) \le 2(E\xi^4_{nj}\,E\beta^4_{n,j+h}\,E\xi^4_{nj'}\,E\beta^4_{n,j'+h})^{1/4}$$

where

$$nE\xi^4_{nj} = nE\langle A_n Z_{1n}, x_j\rangle^4 = nE[\langle A_n Z_1, x_j\rangle^4 I(\|A_n Z_1\| \le 1)]$$

which is bounded above by some $K_2 < \infty$ for all $n \ge n_0$ and all $j$, by an application of part (ii) of Lemma 2.1 with $\zeta = 4 > 1/a_i$. A similar argument shows that $nE\beta^4_{nk} < K_2$ for all $k$ and all $n$ large (apply part (ii) of Lemma 2.1 again with $\zeta = 4 > 1/a_l$). Then (5.11) holds for all $j, j', h$ and all $n$ large, and so for all large $n$ we have

$$P(|x'(A - C)y| > \varepsilon) \le \varepsilon^{-2}\,2K_2\,n^{-1} \sum_{|j|>m} \sum_{t=1}^n \sum_{j'=1-t+j}^{n-t+j} \|C_j\|\|C_{j+h}\|\|C_{j'}\|\|C_{j'+h}\| \le \varepsilon^{-2}\,2K_2 \sum_{|j|>m} \|C_j\|\|C_{j+h}\| \sum_{j'=-\infty}^{\infty} \|C_{j'}\|\|C_{j'+h}\| \le \varepsilon^{-2}\,2K_2\left( \sum_{|j|>m} \|C_j\|^2 \right)^{1/2}\left( \sum_{|j|>m} \|C_{j+h}\|^2 \right)^{1/2}\left( \sum_{j'=-\infty}^{\infty} \|C_{j'}\|^2 \right)^{1/2}\left( \sum_{j'=-\infty}^{\infty} \|C_{j'+h}\|^2 \right)^{1/2}$$

which tends to zero as $m \to \infty$, and so (5.8) holds with $T$ replaced by $A - C$ in this case, which concludes the proof of Proposition 4.3.

REFERENCES

Anderson, P. and Meerschaert, M. (1997) Periodic moving averages of random variables with regularly varying tails, Ann. Statist., 25, 771-785.
Araujo, A. and Giné, E. (1980) The Central Limit Theorem for Real and Banach Valued Random Variables, Wiley, New York.
Bhansali, R. (1993) Estimation of the impulse response coefficients of a linear process with infinite variance, J. Multivariate Anal., 45, 274-290.
Billingsley, P. (1968) Convergence of Probability Measures, Wiley, New York.
Brockwell, P. and Davis, R. (1991) Time Series: Theory and Methods, 2nd Ed., Springer-Verlag, New York.
Davis, R. and Resnick, S. (1985a) Limit theory for moving averages of random variables with regularly varying tail probabilities, Ann. Probab., 13, 179-195.
Davis, R. and Resnick, S. (1985b) More limit theory for the sample correlation function of moving averages, Stoch. Proc. Appl., 20, 257-279.
Davis, R. and Resnick, S. (1986) More limit theory for the sample covariance and correlation functions of moving averages, Ann. Statist., 14, 533-558.
Davis, R., Marengo, J. and Resnick, S. (1985) Extremal properties of a class of multivariate moving averages, Proceedings of the 45th International Statistical Institute, Vol. 4, Amsterdam.
Davis, R. and Marengo, J. (1990) Limit theory for the sample covariance and correlation matrix functions of a class of multivariate linear processes, Commun. Statist. Stochastic Models, 6, 483-497.
Feller, W. (1971) An Introduction to Probability Theory and Its Applications, Vol. II, 2nd Ed., Wiley, New York.
Janicki, A. and Weron, A. (1994) Simulation and Chaotic Behavior of α-Stable Stochastic Processes, Marcel Dekker, New York.
Jansen, D. and de Vries, C. (1991) On the frequency of large stock market returns: Putting booms and busts into perspective, Review of Econ. and Statist., 73, 18-24.
Kokoszka, P. and Taqqu, M. (1994) Infinite variance stable ARMA processes, J. Time Series Anal., 15, 203-220.
Kokoszka, P. and Taqqu, M. (1996) Parameter estimation for infinite variance fractional ARIMA, Ann. Statist., 24, 1880-1913.
Loretan, M. and Phillips, P. (1994) Testing the covariance stationarity of heavy tailed time series, J. Empirical Finance, 1, 211-248.
Mandelbrot, B. (1963) The variation of certain speculative prices, J. Business, 36, 394-419.
Meerschaert, M. (1993) Regular variation and generalized domains of attraction in R^k, Statist. Probab. Lett., 18, 233-239.
Meerschaert, M. (1994) Norming operators for generalized domains of attraction, J. Theoretical Probab., 7, 793-798.
Meerschaert, M. and Scheffler, H. P. (1999) Sample covariance matrix for random vectors with regularly varying tails, J. Theoretical Probab., 12, 821-838.
Meerschaert, M. and Scheffler, H. P. (1999) Multivariable regular variation of functions and measures, J. Applied Analysis, 5, 125-146.
Meerschaert, M. and Scheffler, H. P. (1998) Sample cross-correlations for moving averages with regularly varying tails, preprint.
Mikosch, T., Gadrich, T., Klüppelberg, C. and Adler, R. (1995) Parameter estimation for ARMA models with infinite variance innovations, Ann. Statist., 23, 305-326.
Mittnik, S. and Rachev, S. (1998) Asset and Option Pricing with Alternative Stable Models, Wiley, New York.
Nikias, C. and Shao, M. (1995) Signal Processing with Alpha Stable Distributions and Applications, Wiley, New York.
Resnick, S. (1986) Point processes, regular variation, and weak convergence, Adv. Appl. Probab., 18, 66-138.
Resnick, S. and Stărică, C. (1995) Consistency of Hill's estimator for dependent data, J. Appl. Probab., 32, 139-167.
Samorodnitsky, G. and Taqqu, M. (1994) Stable non-Gaussian Random Processes: Stochastic Models with Infinite Variance, Chapman and Hall, London.
Scheffler, H. P. (1998) Multivariate R-O varying measures, preprint.
Seneta, E. (1976) Regularly Varying Functions, Lecture Notes in Mathematics, Springer, Berlin, Heidelberg, New York.
Sharpe, M. (1969) Operator-stable probability distributions on vector groups, Trans. Amer. Math. Soc., 136, 51-65.