Elect. Comm. in Probab. 15 (2010), 428–441
EXPLICIT CONDITIONS FOR THE CONVERGENCE OF POINT PROCESSES ASSOCIATED TO STATIONARY ARRAYS
RALUCA BALAN1
Corresponding author. Department of Mathematics and Statistics, University of Ottawa, 585 King
Edward Avenue, Ottawa, ON, K1N 6N5, Canada.
email: rbalan@uottawa.ca
SANA LOUHICHI2
Laboratoire de mathématiques, Équipe de Probabilités, Statistique et modélisation, Université de Paris-Sud, Bât. 425, F-91405 Orsay Cedex, France.
email: Sana.Louhichi@math.u-psud.fr
Submitted December 7, 2009, accepted in final form August 6, 2010
AMS 2000 Subject classification: Primary 60E07, 60G55; secondary 60G10, 60G57
Keywords: infinite divisibility, point process, asymptotic independence, weak convergence, extremal index
Abstract
In this article, we consider a stationary array (X_{j,n})_{1≤j≤n,n≥1} of random variables (which satisfy some asymptotic independence conditions), and the corresponding sequence (N_n)_{n≥1} of point processes, where N_n has the points X_{j,n}, 1 ≤ j ≤ n. Our main result identifies some explicit conditions for the convergence of the sequence (N_n)_{n≥1}, in terms of the probabilistic behavior of the variables in the array.
1 Introduction
The study of the asymptotic behavior of the sum (or the maximum) of the row variables in an
array (X_{j,n})_{1≤j≤n,n≥1} is one of the oldest problems in probability theory. When the variables are independent within each row, classical results identify the limit as having an infinitely divisible distribution in the case of the sum (see [10]), and a max-infinitely divisible distribution in the case of the maximum (see [3]). A crucial observation, which can be traced back to [20], [27] (in the case of the maximum) and [23] (in the case of the sum), is that these results are deeply connected to the convergence in distribution of the sequence N_n = Σ_{j=1}^n δ_{X_{j,n}}, n ≥ 1, of point processes to a Poisson process N. (See Section 5.3 of [21] and Section 7.2 of [22] for a modern account of this subject.)
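This connection is easy to see in a small simulation. The following Monte Carlo sketch is illustrative and not from the paper; the Pareto tail, sample size, threshold, and trial count are all assumptions. For i.i.d. Pareto(α) variables X_j with a_n = n^{1/α}, the exceedance count N_n([x, ∞)) = #{j ≤ n : X_j/a_n > x} should be approximately Poisson with mean x^{−α}, so its empirical mean and variance should roughly agree:

```python
import random

# Monte Carlo sketch (illustrative): for i.i.d. Pareto(alpha) variables,
# the exceedance count N_n([x, inf)) = #{j <= n : X_j / a_n > x}, with
# a_n = n^{1/alpha}, is approximately Poisson with mean x^{-alpha}.
random.seed(0)
alpha, n, x, trials = 2.0, 2000, 1.5, 1500
a_n = n ** (1 / alpha)              # nP(X_1 > a_n) = 1 when P(X_1 > t) = t^{-alpha}
counts = []
for _ in range(trials):
    # U^{-1/alpha} is Pareto(alpha): P(X > t) = t^{-alpha} for t >= 1
    c = sum(1 for _ in range(n) if random.random() ** (-1 / alpha) / a_n > x)
    counts.append(c)
mean = sum(counts) / trials
var = sum((c - mean) ** 2 for c in counts) / trials
target = x ** (-alpha)              # Poisson limit: mean and variance both ~ x^{-alpha}
print(round(mean, 3), round(var, 3), round(target, 3))
```

For a Poisson limit the mean and variance of the counts agree; under row-wise dependence (clustering of big values), the variance typically exceeds the mean, which is one way the infinitely divisible limits discussed below differ from the Poisson case.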
Subsequent investigations showed that a similar connection exists in the case of arrays which
possess a row-wise dependence structure (e.g. [9]). The most interesting case arises when X_{i,n} = X_i/a_n, where (X_i)_{i≥1} is a (dependent) stationary sequence with regularly varying tails and (a_n)_n is a sequence of real numbers such that nP(|X_1| > a_n) → 1 (see [8] and the references therein).
1. Supported by a grant from the Natural Sciences and Engineering Research Council of Canada.
2. Partially supported by the grant ANR-08-BLAN-0314-01.
The behavior of the maxima of m-dependent sequences exhibiting clustering of big values was first studied in [17], whereas the first formula for the limits of sums of m-dependent heavy-tailed random variables was obtained in [14]. In the dependent case, the limit N may not be a Poisson process, but belongs to the class of infinitely divisible point processes (under generally weak assumptions). These findings reveal that the separate study of point process convergence is an important topic, which may yield new asymptotic results for triangular arrays.
In the present article, we consider an array (X_{j,n})_{1≤j≤n,n≥1} whose row variables are asymptotically independent, in the sense that the block (X_{1,n}, ..., X_{n,n}) behaves asymptotically as k_n "smaller" i.i.d. blocks, a small block having the same distribution as (X_{1,n}, ..., X_{r_n,n}), with n ∼ r_n k_n. This condition, which we call (AD-1), was considered by many authors (e.g. [12], [13], [8], [11], [1]).
The rows of the array also possess an "anti-clustering" property (AC), which specifies the dependence structure within a small block. Intuitively, under (AC), it becomes improbable to find two points X_{j,n}, X_{k,n} whose indices j, k are situated in the same small block at a distance larger than a fixed value m, and whose values exceed (in modulus) a fixed threshold ε > 0. Condition (AC) appeared, in various forms, in the literature related to the asymptotic behavior of the maximum (e.g. [16], [18]) or the sum (e.g. [6], [7], [4]). In addition, we assume the usual asymptotic negligibility condition (AN) for X_{1,n}.
Our main result says that under (AD-1), (AC) and (AN), the convergence N_n →_d N, where N is an infinitely divisible point process, reduces to the convergence of:

nP(max_{1≤j≤m−1} |X_{j,n}| ≤ x, |X_{m,n}| > x), and  (1)

n[P(A_{m,n}, max_{1≤j≤m} |X_{j,n}| > x) − P(A_{m−1,n}, max_{1≤j≤m−1} |X_{j,n}| > x)],  (2)

where A_{m,n} is the event that at least l_i among X_{1,n}, ..., X_{m,n} lie in B_i, for all i = 1, ..., d (for arbitrary d, l_1, ..., l_d ∈ N and compact sets B_1, ..., B_d).
The novelty of this result, compared to the existing results (e.g. Theorem 2.6 of [1]), is that the quantities appearing in (1) and (2) refer explicitly to the probabilistic behavior of the variables in the array.
The article is organized as follows. In Section 2, we give the statements of the main result (Theorem 2.5) and a preliminary result (Theorem 2.4). Section 3 is dedicated to the proof of these two
results. Section 4 contains a separate result about the extremal index of a stationary sequence,
whose proof is related to some of the methods presented in this article.
2 The Main Results
We begin by introducing the terminology and the notation. Our main reference is [15]. We denote
R+ = [0, ∞), Z+ = {0, 1, 2, . . .} and N = {1, 2, . . .}.
If E is a locally compact Hausdorff space with a countable basis (LCCB), we let B be the class of all relatively compact Borel sets in E, and C_K^+(E) be the class of continuous functions f: E → R_+ with compact support. We let M_p(E) be the class of Radon measures on E with values in Z_+ (endowed with the topology of vague convergence), and 𝓜_p(E) be the associated Borel σ-field. For µ ∈ M_p(E) and f ∈ C_K^+(E), we denote µ(f) = ∫_E f(x)µ(dx). We denote by o the null measure.
Let (Ω, 𝓚, P) be a probability space. A measurable map N: Ω → M_p(E) is called a point process. Its distribution P ∘ N^{−1} is determined by the Laplace functional L_N(f) = E(e^{−N(f)}), f ∈ C_K^+(E).
A point process N is infinitely divisible if for any k ≥ 1, there exist some i.i.d. point processes (N_{i,k})_{1≤i≤k} such that N =_d Σ_{i=1}^k N_{i,k}. By Theorem 6.1 of [15], the Laplace functional of an infinitely divisible point process is given by:

L_N(f) = exp{ −∫_{M_p(E)\{o}} (1 − e^{−µ(f)}) λ(dµ) },  ∀ f ∈ C_K^+(E),

where λ is a measure on M_p(E)\{o}, called the canonical measure of N.
All the point processes considered in this article have their points in R\{0}. For technical reasons,
we embed R\{0} into the space E = [−∞, ∞]\{0}. Let B be the class of relatively compact sets
in E. Note that
[−x, x]c := [−∞, −x) ∪ (x, ∞] ∈ B, for all x > 0.
We consider a triangular array (X j,n ) j≤n,n≥1 of random variables with values in R\{0}, such that
(X j,n ) j≤n is a strictly stationary sequence, for any n ≥ 1.
Definition 2.1. The triangular array (X_{j,n})_{1≤j≤n,n≥1} satisfies:
(i) condition (AN) if

lim sup_{n→∞} nP(|X_{1,n}| > ε) < ∞, for all ε > 0.

(ii) condition (AD-1) if there exists (r_n)_n ⊂ N with r_n → ∞ and k_n = [n/r_n] → ∞, such that:

lim_{n→∞} |E(e^{−Σ_{j=1}^n f(X_{j,n})}) − {E(e^{−Σ_{j=1}^{r_n} f(X_{j,n})})}^{k_n}| = 0, for all f ∈ C_K^+(E).

(iii) condition (AC) if there exists (r_n)_n ⊂ N with r_n → ∞, such that:

lim_{m→m_0} lim sup_{n→∞} n Σ_{j=m+1}^{r_n} P(|X_{1,n}| > ε, |X_{j,n}| > ε) = 0, for all ε > 0,

where m_0 := inf{m ∈ Z_+; lim_{n→∞} n Σ_{j=m+1}^{r_n} P(|X_{1,n}| > ε, |X_{j,n}| > ε) = 0, for all ε > 0}. We use the conventions: inf ∅ = ∞ and lim_{m→m_0} φ(m) = φ(m_0) if m_0 < ∞.
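To make condition (AC) concrete, here is a small Monte Carlo sketch; it is not from the paper, and the moving-maximum model, the Pareto tails, and all parameters are illustrative assumptions. For the 1-dependent array X_{j,n} = max(Y_j, Y_{j+1})/u_n with Y_j i.i.d. Pareto(1) and u_n = 2n (so that nP(|X_{1,n}| > 1) ≈ 1), neighbouring points can exceed the threshold together through the shared variable Y_2, so the sum in (AC) stays bounded away from 0 for m = 1 but is already small for m = 2:

```python
import random

# Monte Carlo sketch (illustrative): estimate the (AC) tail sums
#   S_m = n * sum_{j=m+1}^{r_n} P(|X_{1,n}| > eps, |X_{j,n}| > eps)
# for X_{j,n} = max(Y_j, Y_{j+1}) / u_n with Y_j i.i.d. Pareto(1).
random.seed(1)
n, r_n, trials, eps = 200, 20, 100_000, 1.0
u_n = 2 * n                      # P(max(Y_1, Y_2) > u) ~ 2/u, so nP ~ 1 at u = 2n
hits = [0] * (r_n + 1)           # hits[j] counts joint exceedances of X_1 and X_j
for _ in range(trials):
    y = [1 / random.random() for _ in range(r_n + 1)]   # Pareto(1) variables
    x = [max(y[j], y[j + 1]) / u_n for j in range(r_n)]
    if x[0] > eps:
        for j in range(1, r_n):
            if x[j] > eps:
                hits[j + 1] += 1

def S(m):
    return n * sum(hits[j] / trials for j in range(m + 1, r_n + 1))

print(round(S(1), 3), round(S(2), 3))   # S(1) stays O(1); S(2) is much smaller
```

In other words, the index m_0 records the range over which joint exceedances persist; for an m-dependent row it is finite (cf. Remark 2.3(ii) below in this sketch's spirit).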
Remark 2.2. (i) For each n ≥ 1, let N_n = Σ_{j=1}^n δ_{X_{j,n}} and Ñ_n = Σ_{i=1}^{k_n} Ñ_{i,n}, where (Ñ_{i,n})_{i≤k_n} are i.i.d. copies of N_{r_n,n} = Σ_{j=1}^{r_n} δ_{X_{j,n}}. Under (AD-1), (N_n)_n converges in distribution if and only if (Ñ_n)_n does, and the limits are the same.
(ii) Condition (AN) is an asymptotic negligibility condition which ensures that (Ñ_{i,n})_{i≤k_n,n≥1} is a null-array of point processes, i.e. P(Ñ_{1,n}(B) > 0) → 0 for all B ∈ B. By Theorem 6.1 of [15], Ñ_n →_d N if and only if

k_n E(1 − e^{−N_{r_n,n}(f)}) → ∫_{M_p(E)\{o}} (1 − e^{−µ(f)}) λ(dµ),  ∀ f ∈ C_K^+(E).

In this case, N is an infinitely divisible point process with canonical measure λ.
Remark 2.3. (i) Condition (AD-1) is satisfied by arrays whose row-wise dependence structure is
of mixing type (see e.g. Lemma 5.1 of [1]).
(ii) Condition (AC) is satisfied with m_0 = m if (X_{j,n})_{1≤j≤n} is m-dependent.
(iii) When X_{j,n} = X_j/u_n and m_0 = 1, condition (AC) is known in the literature as Leadbetter's condition D′({u_n}) (see [16]).
(iv) Condition (AC) coincides with (22) of [1], and originates in the work of [6] and [7]. Note that, by the stationarity of the array,

P(∪_{i=1}^{k_n} ∪_{j,l∈B_{i,n}, j−l≥m} {|X_{l,n}| > ε, |X_{j,n}| > ε}) ≤ k_n r_n Σ_{j=m+1}^{r_n} P(|X_{1,n}| > ε, |X_{j,n}| > ε),

where B_{i,n} = {(i−1)r_n + 1, ..., ir_n} for i = 1, ..., k_n. Therefore, (AC) is an asymptotic anti-clustering condition which implies that, when m is close to m_0 and n is large enough, it is unlikely that there will be two indices j > l in the same block B_{i,n}, located at a distance of at least m from each other, whose corresponding measurements X_{j,n} and X_{l,n} exceed (in modulus) the threshold ε. Condition (2.10) of [4] is similar to (AC) and was used for obtaining the convergence of the partial sum sequence to an infinitely divisible random variable (with infinite variance). We refer to [4] for a detailed discussion of this condition.
As in [8], let M_0 = {µ ∈ M_p(R\{0}); µ ≠ o, ∃ x ∈ (0, ∞) such that µ([−x, x]^c) = 0}. If µ = Σ_{j≥1} δ_{t_j} ∈ M_0, we let x_µ := sup_{j≥1} |t_j| < ∞. For each x > 0, let

M_x = {µ ∈ M_0; µ([−x, x]^c) > 0} = {µ ∈ M_0; x_µ > x}.

Recall that x is a fixed atom of a point process N if P(N{x} > 0) > 0. To simplify the writing, we introduce some additional notation. If x > 0 and λ is a measure on M_p(E) with λ(M_0^c) = 0, we let

B_{x,λ} = {B ∈ B; λ({µ ∈ M_x; µ(∂B) > 0}) = 0},

and J_{x,λ} be the class of sets M = ∩_{i=1}^d {µ ∈ M_p(E); µ(B_i) ≥ l_i} for some B_i ∈ B_{x,λ}, l_i ≥ 1 (integers) and d ≥ 1.
The following result is a refinement of Theorem 3.6 of [2].
Theorem 2.4. Suppose that (X j,n )1≤ j≤n,n≥1 satisfies (AN) and (AD-1) (with sequences (rn ) and (kn )).
Let N be an infinitely divisible point process on R\{0} with canonical measure λ. Let D be the set of
fixed atoms of N and D0 = {x > 0; x ∈ D or − x ∈ D}.
The following statements are equivalent:
(i) N_n →_d N;
(ii) We have λ(M_0^c) = 0, and the following two conditions hold:

(a) k_n P(max_{j≤r_n} |X_{j,n}| > x) → λ(M_x), for any x > 0, x ∉ D_0,

(b) k_n P(N_{r_n,n} ∈ M, max_{j≤r_n} |X_{j,n}| > x) → λ(M ∩ M_x), for any x > 0, x ∉ D_0, and for any set M ∈ J_{x,λ}.
For each 1 ≤ m ≤ n, let N_{m,n} = Σ_{j=1}^m δ_{X_{j,n}} and M_{m,n} = max_{j≤m} |X_{j,n}|, with the convention that M_{0,n} = 0. The next theorem is the main result of this article, and gives an explicit form for conditions (a) and (b), under the additional anti-clustering condition (AC).
Theorem 2.5. Let (X j,n )1≤ j≤n,n≥1 and N be as in Theorem 2.4. Suppose in addition that (AC) holds,
with the same sequence (rn )n as in (AD-1).
The following statements are equivalent:

(i) N_n →_d N;
(ii) We have λ(M_0^c) = 0 and the following two conditions hold:

(a′) lim_{m→m_0} lim sup_{n→∞} |n[P(M_{m,n} > x) − P(M_{m−1,n} > x)] − λ(M_x)| = 0, for any x > 0, x ∉ D_0,

(b′) lim_{m→m_0} lim sup_{n→∞} |n[P(N_{m,n} ∈ M, M_{m,n} > x) − P(N_{m−1,n} ∈ M, M_{m−1,n} > x)] − λ(M ∩ M_x)| = 0, for any x > 0, x ∉ D_0, and for any set M ∈ J_{x,λ}.
Remark 2.6. Note that

P(M_{m,n} > x) − P(M_{m−1,n} > x) = P(max_{1≤j≤m−1} |X_{j,n}| ≤ x, |X_{m,n}| > x).
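This telescoping identity is elementary but worth seeing once in full. The following exact finite check enumerates all outcomes for an i.i.d. row over a three-point support; the distribution is an arbitrary illustrative assumption, since the identity holds for any joint law:

```python
from itertools import product

# Exact finite check (illustrative) of the telescoping identity:
# P(M_m > x) - P(M_{m-1} > x) = P(max_{j <= m-1} |X_j| <= x, |X_m| > x).
support = [(-2.0, 0.2), (0.5, 0.5), (3.0, 0.3)]   # (value, probability) pairs
m, x = 4, 1.0
lhs = rhs = 0.0
for row in product(support, repeat=m):
    p = 1.0
    for _, q in row:
        p *= q                                     # product probability of the row
    vals = [abs(v) for v, _ in row]
    lhs += p * ((max(vals) > x) - (max(vals[:m - 1]) > x))
    rhs += p * (max(vals[:m - 1]) <= x and vals[m - 1] > x)
print(round(lhs, 10), round(rhs, 10))
```

Both sides reduce to the probability that the m-th variable is the first (and only) cause of the maximum exceeding x within the first m coordinates.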
Remark 2.7. Suppose that m_0 = 1 in Theorem 2.5. One can prove that in this case, the limit N is a Poisson process with intensity ν given by:

ν(B) = λ({µ ∈ M_p(E)\{o}; µ(B) = 1}),  ∀ B ∈ B.

3 The Proofs

3.1 Proof of Theorem 2.4
Before giving the proof, we need some preliminary results.
Lemma 3.1. Let E be a LCCB space and M = ∩_{i=1}^d {µ ∈ M_p(E); µ(B_i) ≥ l_i} for some B_i ∈ B, l_i ≥ 1 (integers) and d ≥ 1. Then:
(i) M is closed (with respect to the vague topology);
(ii) ∂M ⊂ ∪_{i=1}^d {µ ∈ M_p(E); µ(∂B_i) > 0}.
Proof: Note that ∂M ⊂ ∪_{i=1}^d ∂M_i, where M_i = {µ ∈ M_p(E); µ(B_i) ≥ l_i}. Since a finite intersection of closed sets is closed, it suffices to consider the case d = 1, i.e. M = {µ ∈ M_p(E); µ(B) ≥ l} for some B ∈ B and l ≥ 1.
(i) Let (µ_n)_n ⊂ M be such that µ_n →_v µ. If µ(∂B) = 0, then µ_n(B) → µ(B), and since µ_n(B) ≥ l for all n, it follows that µ(B) ≥ l. If not, we proceed as in the proof of Lemma 3.15 of [21]. Let B^δ be a δ-swelling of B. Then S = {δ ∈ (0, δ_0]; µ(∂B^δ) > 0} is a countable set. By the previous argument, µ(B^δ) ≥ l for all δ ∈ (0, δ_0]\S. Let (δ_n)_n ⊂ (0, δ_0]\S be such that δ_n ↓ 0. Since µ(B^{δ_n}) ≥ l for all n, and µ(B^{δ_n}) ↓ µ(B), it follows that µ(B) ≥ l, i.e. µ ∈ M.
(ii) By part (i), ∂M = M̄\M° = M ∩ (M°)^c. We will prove that ∂M ⊂ {µ ∈ M; µ(∂B) > 0}, or equivalently

A := {µ ∈ M; µ(∂B) = 0} ⊂ M°.

Since M° is the largest open set included in M and A ⊂ M, it suffices to show that A is open. Let µ ∈ A and (µ_n)_n ⊂ M_p(E) be such that µ_n →_v µ. Then µ_n(B) → µ(B), and since µ(B) ≥ l and {µ_n(B)}_n are integers, it follows that µ_n(B) ≥ l for all n ≥ n_1, for some n_1.
On the other hand, µ_n(∂B) → µ(∂B), since ∂B ∈ B and µ(∂B) = 0 (note that ∂(∂B) = ∂B). Since µ(∂B) = 0 and {µ_n(∂B)}_n are integers, it follows that µ_n(∂B) = 0 for all n ≥ n_2, for some n_2. Hence µ_n ∈ A for all n ≥ max{n_1, n_2}. □
Lemma 3.2. Let E be a LCCB space and (Q_n)_n, Q be probability measures on M_p(E). Let B_Q be the class of all sets B ∈ B which satisfy:

Q({µ ∈ M_p(E); µ(∂B) > 0}) = 0,

and J_Q be the class of sets M = ∩_{i=1}^d {µ ∈ M_p(E); µ(B_i) ≥ l_i} for some B_i ∈ B_Q, l_i ≥ 1 (integers) and d ≥ 1.
Then Q_n →_w Q if and only if Q_n(M) → Q(M) for all M ∈ J_Q.
Proof: Let (N_n)_n, N be point processes on E, defined on a probability space (Ω, 𝓕, P), such that P ∘ N_n^{−1} = Q_n for all n, and P ∘ N^{−1} = Q. Note that B_Q = B_N := {B ∈ B; N(∂B) = 0 a.s.}.
By definition, N_n →_d N if and only if Q_n →_w Q. By Theorem 4.2 of [15], N_n →_d N if and only if

(N_n(B_1), ..., N_n(B_d)) →_d (N(B_1), ..., N(B_d))

for any B_1, ..., B_d ∈ B_N and for any d ≥ 1. Since these random vectors have values in Z_+^d, the previous convergence in distribution is equivalent to:

P(N_n(B_1) = l_1, ..., N_n(B_d) = l_d) → P(N(B_1) = l_1, ..., N(B_d) = l_d)

for any l_1, ..., l_d ∈ Z_+, which is in turn equivalent to

P(N_n(B_1) ≥ l_1, ..., N_n(B_d) ≥ l_d) → P(N(B_1) ≥ l_1, ..., N(B_d) ≥ l_d)

for any l_1, ..., l_d ∈ Z_+. Finally, it suffices to consider only integers l_i ≥ 1 since, if there exists a set I ⊂ {1, ..., d} such that l_i = 0 for all i ∈ I and l_i ≥ 1 for i ∉ I, then P(N_n(B_1) ≥ l_1, ..., N_n(B_d) ≥ l_d) = P(N_n(B_i) ≥ l_i, i ∉ I) → P(N(B_i) ≥ l_i, i ∉ I) = P(N(B_1) ≥ l_1, ..., N(B_d) ≥ l_d). □
Proof of Theorem 2.4: Note that {max_{j≤r_n} |X_{j,n}| > x} = {N_{r_n,n} ∈ M_x}.
Suppose that (i) holds. As in the proof of Theorem 3.6 of [2], it follows that λ(M_0^c) = 0 and (a) holds. Moreover, we have P_{n,x} →_w P_x, where P_{n,x} and P_x are probability measures on M_p(E) defined by:

P_{n,x}(M) = k_n P(N_{r_n,n} ∈ M ∩ M_x) / (k_n P(N_{r_n,n} ∈ M_x))  and  P_x(M) = λ(M ∩ M_x) / λ(M_x).

Therefore, P_{n,x}(M) → P_x(M) for any M ∈ 𝓜_p(E) with P_x(∂M) = 0. Since k_n P(N_{r_n,n} ∈ M_x) → λ(M_x) (by (a)), it follows that

k_n P(N_{r_n,n} ∈ M ∩ M_x) → λ(M ∩ M_x),  (3)

for any M ∈ 𝓜_p(E) with λ(∂M ∩ M_x) = 0.
In particular, (3) holds for a set M = ∩_{i=1}^d {µ ∈ M_p(E); µ(B_i) ≥ l_i}, with B_i ∈ B_{x,λ}, l_i ≥ 1 (integers) and d ≥ 1. To see this, note that by Lemma 3.1, ∂M ∩ M_x ⊂ ∪_{i=1}^d {µ ∈ M_x; µ(∂B_i) > 0}, and hence

λ(∂M ∩ M_x) ≤ Σ_{i=1}^d λ({µ ∈ M_x; µ(∂B_i) > 0}) = 0.

Suppose that (ii) holds. As in the proof of Theorem 3.6 of [2], it suffices to show that P_{n,x} →_w P_x. This follows by Lemma 3.2, since the class of sets B ∈ B which satisfy:

P_x({µ ∈ M_p(E); µ(∂B) > 0}) = 0

coincides with B_{x,λ}. □
3.2 Proof of Theorem 2.5
We begin with an auxiliary result, which is of independent interest.
Lemma 3.3. Let h: R^d → R be a twice continuously differentiable function, such that

‖D²h‖_∞ := max_{i,j=1,...,d} sup_{x∈R^d} |(∂²h/∂x_i∂x_j)(x)| < ∞.  (4)

Let (Y_i)_{i≥1} be a strictly stationary sequence of d-dimensional random vectors with Y_i = (Y_i^{(1)}, ..., Y_i^{(d)}). Let S_n = Σ_{i=1}^n Y_i for n ≥ 1 and S_0 = 0. Then for any 1 ≤ m ≤ r,

|E[h(S_r)] − rE[h(S_m) − h(S_{m−1})]| ≤ m|E[h(S_m)] + E[h(S_{m−1})]| + ‖D²h‖_∞ Σ_{k=0}^{r−m} Σ_{i,l=1}^d E|S_k^{(i)} Y_{k+m}^{(l)}|.
Proof: As in Lemma 3.2 of [13] (see also Theorem 2.6 of [1]), we have:

E[h(S_r)] = E[h(S_{m−1})] + Σ_{k=0}^{r−m} E[h(S_{k+m}) − h(S_{k+m−1})],

rE[h(S_m) − h(S_{m−1})] = (m − 1)E[h(S_m) − h(S_{m−1})] + Σ_{k=0}^{r−m} E[h(S_{k+m} − S_k) − h(S_{k+m−1} − S_k)],

where the second equality is due to the stationarity of (Y_i)_i. Taking the difference, we get:

E[h(S_r)] − rE[h(S_m) − h(S_{m−1})] = mE[h(S_{m−1})] − (m − 1)E[h(S_m)] + Σ_{k=0}^{r−m} E{[h(S_{k+m}) − h(S_{k+m} − S_k)] − [h(S_{k+m−1}) − h(S_{k+m−1} − S_k)]} =: I_1 + I_2.

Clearly |I_1| ≤ m|E[h(S_{m−1})] + E[h(S_m)]|. For treating I_2, we use Taylor's formula (with integral remainder) for twice continuously differentiable functions f: R^d → R:

f(x) − f(x_0) = Σ_{i=1}^d (x^{(i)} − x_0^{(i)}) ∫_0^1 (∂f/∂x_i)(x − s(x − x_0)) ds.  (5)

We get:

h(S_{k+m}) − h(S_{k+m} − S_k) = Σ_{i=1}^d S_k^{(i)} ∫_0^1 (∂h/∂x_i)(S_{k+m} − xS_k) dx,

h(S_{k+m−1}) − h(S_{k+m−1} − S_k) = Σ_{i=1}^d S_k^{(i)} ∫_0^1 (∂h/∂x_i)(S_{k+m−1} − xS_k) dx.

Taking the difference of the last two equations, and using (5) for f = ∂h/∂x_i with i = 1, ..., d, we obtain:

[h(S_{k+m}) − h(S_{k+m} − S_k)] − [h(S_{k+m−1}) − h(S_{k+m−1} − S_k)]
= Σ_{i=1}^d S_k^{(i)} ∫_0^1 [(∂h/∂x_i)(S_{k+m} − xS_k) − (∂h/∂x_i)(S_{k+m−1} − xS_k)] dx
= Σ_{i,l=1}^d S_k^{(i)} Y_{k+m}^{(l)} ∫_0^1 ∫_0^1 (∂²h/∂x_i∂x_l)((S_{k+m} − xS_k) − θY_{k+m}) dθ dx.

From here we conclude that:

|[h(S_{k+m}) − h(S_{k+m} − S_k)] − [h(S_{k+m−1}) − h(S_{k+m−1} − S_k)]| ≤ ‖D²h‖_∞ Σ_{i,l=1}^d |S_k^{(i)} Y_{k+m}^{(l)}|,

which yields the desired estimate for I_2. □
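As a sanity check on Lemma 3.3, the bound can be verified exactly in a simple special case (an illustrative assumption, not from the paper): take d = 1, Y_i i.i.d. Bernoulli(p), so S_k ~ Binomial(k, p), and h(x) = x², for which ‖D²h‖_∞ = 2 and both sides of the estimate have closed forms, namely E[S_k²] = kp(1−p) + (kp)² and E|S_k Y_{k+m}| = kp² by independence:

```python
# Exact check (illustrative special case: d = 1, Y_i i.i.d. Bernoulli(p),
# h(x) = x^2, so ||D^2 h|| = 2) of the bound in Lemma 3.3, using the
# closed-form binomial moments E[S_k^2] = k p (1-p) + (k p)^2.
def Eh(k, p):                      # E[h(S_k)] = E[S_k^2] for S_k ~ Binomial(k, p)
    return k * p * (1 - p) + (k * p) ** 2

p = 0.3
for r in (5, 10, 40):
    for m in range(1, r + 1):
        lhs = abs(Eh(r, p) - r * (Eh(m, p) - Eh(m - 1, p)))
        rhs = m * abs(Eh(m, p) + Eh(m - 1, p)) \
              + 2 * sum(k * p ** 2 for k in range(0, r - m + 1))   # E|S_k Y_{k+m}| = k p^2
        assert lhs <= rhs + 1e-9, (r, m)
print("Lemma 3.3 bound holds for all tested (r, m)")
```

The interesting regime is m small relative to r, where both sides grow like r²p²; this is exactly the situation in the proof of Proposition 3.4 below, with r = r_n and m fixed.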
Proposition 3.4. Let E be a LCCB space. For each n ≥ 1, let (X_{j,n})_{j≤n} be a strictly stationary sequence of E-valued random variables, such that:

lim sup_{n→∞} nP(X_{1,n} ∈ B) < ∞, for all B ∈ B.  (6)

Suppose that there exists (r_n)_n ⊂ N with r_n → ∞ and k_n = [n/r_n] → ∞, such that:

lim_{m→m_0} lim sup_{n→∞} n Σ_{j=m+1}^{r_n} P(X_{1,n} ∈ B, X_{j,n} ∈ B) = 0, for all B ∈ B,  (7)

where m_0 := inf{m ∈ Z_+; lim_{n→∞} n Σ_{j=m+1}^{r_n} P(X_{1,n} ∈ B, X_{j,n} ∈ B) = 0, for all B ∈ B}. Let N_{m,n} = Σ_{j=1}^m δ_{X_{j,n}}. Then

lim_{m→m_0} lim sup_{n→∞} |k_n P(N_{r_n,n} ∈ M) − n[P(N_{m,n} ∈ M) − P(N_{m−1,n} ∈ M)]| = 0,

for any set M = ∩_{i=1}^d {µ ∈ M_p(E); µ(B_i) ≥ l_i}, with B_i ∈ B, l_i ≥ 1 (integers) and d ≥ 1.
Proof: Let h: R_+^d → R_+ be a twice continuously differentiable function which satisfies (4), such that h(x_1, ..., x_d) ≤ x_1 + ... + x_d for all (x_1, ..., x_d) ∈ R_+^d, and

h(x_1, ..., x_d) = 0 if x_i ≤ l_i − 1 for some i = 1, ..., d,
h(x_1, ..., x_d) = 1 if x_i ≥ l_i for all i = 1, ..., d.

Note that:

h(x_1, ..., x_d) = 1_{{x_1≥l_1, ..., x_d≥l_d}} for all x_1, ..., x_d ∈ Z_+.  (8)

For any n ≥ 1, we consider the strictly stationary sequence of d-dimensional random vectors {Y_{j,n} = (Y_{j,n}^{(1)}, ..., Y_{j,n}^{(d)}), 1 ≤ j ≤ n} defined by:

Y_{j,n}^{(i)} = 1_{{X_{j,n} ∈ B_i}}, for any i = 1, ..., d.
Using (8), we obtain for any 1 ≤ m ≤ n,

P(N_{m,n} ∈ M) = P(N_{m,n}(B_1) ≥ l_1, ..., N_{m,n}(B_d) ≥ l_d) = P(Σ_{j=1}^m Y_{j,n}^{(1)} ≥ l_1, ..., Σ_{j=1}^m Y_{j,n}^{(d)} ≥ l_d) = E[1_{{Σ_{j=1}^m Y_{j,n}^{(1)} ≥ l_1, ..., Σ_{j=1}^m Y_{j,n}^{(d)} ≥ l_d}}] = E[h(Σ_{j=1}^m Y_{j,n}^{(1)}, ..., Σ_{j=1}^m Y_{j,n}^{(d)})] = E[h(Σ_{j=1}^m Y_{j,n})].
Using Lemma 3.3, and letting C = ‖D²h‖_∞, we obtain:

k_n|P(N_{r_n,n} ∈ M) − r_n[P(N_{m,n} ∈ M) − P(N_{m−1,n} ∈ M)]| ≤ mk_n{E[h(Σ_{j=1}^m Y_{j,n})] + E[h(Σ_{j=1}^{m−1} Y_{j,n})]} + Ck_n Σ_{i,l=1}^d Σ_{k=0}^{r_n−m} E(Σ_{j=1}^k Y_{j,n}^{(i)} Y_{k+m,n}^{(l)}) =: I_{m,n}^{(1)} + C I_{m,n}^{(2)}.  (9)

Using the fact that h(x) ≤ Σ_{i=1}^d x_i, and the stationarity of (X_{j,n})_{j≤n},

I_{m,n}^{(1)} ≤ 2mk_n E(Σ_{j=1}^m Σ_{i=1}^d Y_{j,n}^{(i)}) = 2mk_n Σ_{i=1}^d Σ_{j=1}^m P(X_{j,n} ∈ B_i) = 2m²k_n Σ_{i=1}^d P(X_{1,n} ∈ B_i) ≤ 2m² (1/r_n) Σ_{i=1}^d nP(X_{1,n} ∈ B_i).

From (6), it follows that lim_{n→∞} I_{m,n}^{(1)} = 0 for all m, and hence

lim_{m→m_0} lim sup_{n→∞} I_{m,n}^{(1)} = 0.  (10)
Using the stationarity of (X_{j,n})_{j≤n}, and letting B = ∪_{i=1}^d B_i ∈ B,

I_{m,n}^{(2)} = k_n Σ_{i,l=1}^d Σ_{k=0}^{r_n−m} Σ_{j=1}^k P(X_{j,n} ∈ B_i, X_{k+m,n} ∈ B_l) = k_n Σ_{i,l=1}^d Σ_{j=m+1}^{r_n} (r_n − j + 1) P(X_{1,n} ∈ B_i, X_{j,n} ∈ B_l) ≤ n Σ_{i,l=1}^d Σ_{j=m+1}^{r_n} P(X_{1,n} ∈ B_i, X_{j,n} ∈ B_l) ≤ d²n Σ_{j=m+1}^{r_n} P(X_{1,n} ∈ B, X_{j,n} ∈ B).

From (7), it follows that:

lim_{m→m_0} lim sup_{n→∞} I_{m,n}^{(2)} = 0.  (11)
From (9), (10) and (11), it follows that:

lim_{m→m_0} lim sup_{n→∞} k_n|P(N_{r_n,n} ∈ M) − r_n[P(N_{m,n} ∈ M) − P(N_{m−1,n} ∈ M)]| = 0.

Note that lim_{n→∞}(n − k_n r_n)|P(N_{m,n} ∈ M) − P(N_{m−1,n} ∈ M)| = 0 for all m, and hence

lim_{m→m_0} lim sup_{n→∞} (n − k_n r_n)|P(N_{m,n} ∈ M) − P(N_{m−1,n} ∈ M)| = 0.

The conclusion follows. □
Corollary 3.5. For each n ≥ 1, let (X_{j,n})_{1≤j≤n} be a strictly stationary sequence of random variables with values in R\{0}. Suppose that (X_{j,n})_{1≤j≤n,n≥1} satisfies (AN) and (AC).
For any 1 ≤ m ≤ n, let N_{m,n} = Σ_{j=1}^m δ_{X_{j,n}} and M_{m,n} = max_{j≤m} |X_{j,n}|. Then,

lim_{m→m_0} lim sup_{n→∞} |k_n P(M_{r_n,n} > x) − n[P(M_{m,n} > x) − P(M_{m−1,n} > x)]| = 0,

lim_{m→m_0} lim sup_{n→∞} |k_n P(N_{r_n,n} ∈ M, M_{r_n,n} > x) − n[P(N_{m,n} ∈ M, M_{m,n} > x) − P(N_{m−1,n} ∈ M, M_{m−1,n} > x)]| = 0,

for any x > 0, and for any set M = ∩_{i=1}^d {µ ∈ M_p(E); µ(B_i) ≥ l_i}, with B_i ∈ B, l_i ≥ 1 (integers) and d ≥ 1.
Proof: Since {M_{m,n} > x} = {N_{m,n}([−x, x]^c) ≥ 1} for any 1 ≤ m ≤ n, the result follows from Proposition 3.4. □
Proof of Theorem 2.5: The result follows from Theorem 2.4 and Corollary 3.5. □
4 The extremal index
In this section, we give a recipe for calculating the extremal index of a stationary sequence, using
a method which is similar to that used for proving Theorem 2.5, in a simplified context. Although
this recipe (given by Theorem 4.5 below) seems to be known in the literature (see [18], [19],
[26]), we decided to include it here, since we could not find a direct reference for its proof.
We recall the following definition.
Definition 4.1. Let (X_j)_{j≥1} be a strictly stationary sequence of random variables. The extremal index of the sequence (X_j)_{j≥1}, if it exists, is a real number θ with the following property: for any τ > 0, there exists a sequence (u_n^{(τ)})_n ⊂ R such that nP(X_1 > u_n^{(τ)}) → τ and P(max_{j≤n} X_j ≤ u_n^{(τ)}) → e^{−τθ}.
In particular, for τ = 1, we denote u_n^{(1)} = u_n, and we have

nP(X_1 > u_n) → 1 and P(max_{j≤n} X_j ≤ u_n) → e^{−θ}.  (12)

It is clear that if it exists, θ ∈ [0, 1].
Remark 4.2. The extremal index of an i.i.d. sequence exists and is equal to 1.
The following definition was originally considered in [18].
Definition 4.3. We say that (X_j)_{j≥1} satisfies condition (AIM) (or admits an asymptotic independence representation for the maximum) if there exists (r_n)_n ⊂ N with r_n → ∞ and k_n = [n/r_n] → ∞, such that:

|P(max_{j≤n} X_j ≤ u_n) − P(max_{j≤r_n} X_j ≤ u_n)^{k_n}| → 0.
Remark 4.4. It is known that (Leadbetter's) condition D({u_n}) implies (AIM) (see Lemma 2.1 of [16]). Recall that (X_j)_j satisfies condition D({u_n}) if there exists a sequence (m_n)_n ⊂ N, such that m_n = o(n) and α_n(m_n) → 0, where

α_n(m) = sup_{I,J} |P(max_{j∈I} X_j ≤ u_n, max_{j∈J} X_j ≤ u_n) − P(max_{j∈I} X_j ≤ u_n)P(max_{j∈J} X_j ≤ u_n)|,

where the supremum ranges over all disjoint subsets I, J of {1, ..., n} which are separated by a block of length greater than or equal to m.
The following theorem is the main result of this section.
Theorem 4.5. Let (X_j)_{j≥1} be a strictly stationary sequence whose extremal index θ exists, and (u_n)_n be a sequence of real numbers satisfying (12). Suppose that (X_j)_{j≥1} satisfies (AIM), and in addition,

lim_{m→m_0} lim sup_{n→∞} n Σ_{j=m+1}^{r_n} P(X_1 > u_n, X_j > u_n) = 0,  (13)

where m_0 := inf{m ∈ Z_+; lim_{n→∞} n Σ_{j=m+1}^{r_n} P(X_1 > u_n, X_j > u_n) = 0}. Then

θ = lim_{m→m_0} lim sup_{n→∞} nP(max_{1≤j≤m−1} X_j ≤ u_n, X_m > u_n).  (14)

Due to the stationarity, and the fact that nP(X_1 > u_n) → 1, (14) can be written as:

θ = lim_{m→m_0} lim sup_{n→∞} nP(max_{2≤j≤m} X_j ≤ u_n, X_1 > u_n) = lim_{m→m_0} lim sup_{n→∞} P(max_{2≤j≤m} X_j ≤ u_n | X_1 > u_n),

which coincides with (2.3) of [26]. We refer to formula (5) in [24] and to Theorem 3.1 in [25] for a similar expression of θ. We refer also to [5], Chapter 10, for an overview of the extremal index.
Remark 4.6. Let (Y_i)_{i≥1} be a sequence of i.i.d. random variables and X_i = max(Y_i, ..., Y_{i+m−1}). Then (X_i)_{i≥1} satisfies condition (13), since it is an m-dependent sequence. A direct calculation shows that the extremal index of (X_i)_{i≥1} exists and is equal to 1/m, which can also be deduced from (14).
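Remark 4.6 is easy to check by simulation. The sketch below is illustrative: the uniform innovations and all parameters are assumptions, and the two estimators are only finite-n approximations of the limits in Proposition 4.7 below (blocks) and formula (14) (runs). Both should land near θ = 1/m = 1/3:

```python
import random

# Simulation sketch (illustrative parameters) for the moving-maximum sequence
# X_i = max(Y_i, ..., Y_{i+m-1}) of i.i.d. uniform Y_i, which has extremal
# index theta = 1/m.  Two estimators of theta are compared:
#   blocks (Prop. 4.7):  k_n * P(max_{j <= r_n} X_j > u_n)
#   runs   (eq. (14)):   n * P(X_1 > u_n, max_{2 <= j <= m'} X_j <= u_n), m' > m
random.seed(2)
m, n, N = 3, 1000, 400_000
u = (1 - 1 / n) ** (1 / m)               # nP(X_1 > u_n) ~ 1 since P(X_1 > u) = 1 - u^m
y = [random.random() for _ in range(N + m - 1)]
x = [max(y[i:i + m]) for i in range(N)]  # long stationary m-dependent path

r = 50                                    # block length r_n; k_n ~ n / r_n
k_n = n // r
exceed = sum(1 for b in range(N // r) if max(x[b * r:(b + 1) * r]) > u)
theta_blocks = k_n * exceed / (N // r)

mp = m + 2                                # runs window m', playing the role of m_0
runs = sum(1 for i in range(N - mp) if x[i] > u and max(x[i + 1:i + mp]) <= u)
theta_runs = n * runs / (N - mp)

print(round(theta_blocks, 3), round(theta_runs, 3), round(1 / m, 3))
```

The blocks estimate carries a bias of order (m−1)/r_n from clusters straddling block boundaries, so moderate block lengths already work reasonably well; the runs estimate uses a window m′ > m, mirroring the limit m → m_0 in (14).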
The proof of Theorem 4.5 is based on some intermediate results.
Proposition 4.7. Let (X_j)_{j≥1} be a strictly stationary sequence whose extremal index θ exists, and (u_n)_n be a sequence of real numbers satisfying (12). If (X_j)_{j≥1} satisfies (AIM), then k_n P(max_{j≤r_n} X_j > u_n) → θ.
Proof: Due to (AIM), P(max_{j≤r_n} X_j ≤ u_n)^{k_n} → e^{−θ}. The result follows, since

P(max_{j≤r_n} X_j ≤ u_n)^{k_n} = (1 − k_n P(max_{j≤r_n} X_j > u_n)/k_n)^{k_n}.  □
Proposition 4.8. Let (X_j)_{j≥1} be a strictly stationary sequence such that:

lim sup_{n→∞} nP(X_1 > u_n) < ∞.

Suppose that there exists (r_n)_n ⊂ N with r_n → ∞ and k_n = [n/r_n] → ∞, such that (13) holds. Then

lim_{m→m_0} lim sup_{n→∞} |k_n P(max_{j≤r_n} X_j > u_n) − n[P(max_{j≤m} X_j > u_n) − P(max_{j≤m−1} X_j > u_n)]| = 0.
Proof: The argument is the same as in the proof of Proposition 3.4, using Lemma 3.3. More precisely, we let h: R_+ → R_+ be a twice continuously differentiable function such that ‖h″‖_∞ < ∞, h(0) = 0, h(y) = 1 for y ≥ 1, and h(x) ≤ x for all x ∈ R_+. Then h(x) = 1_{{x≥1}} for all x ∈ Z_+, and

P(max_{j≤m} X_j > u_n) = E[1_{{Σ_{j=1}^m 1_{{X_j>u_n}} ≥ 1}}] = E[h(Σ_{j=1}^m 1_{{X_j>u_n}})].

We omit the details. □
Proof of Theorem 4.5: The result follows from Proposition 4.7 and Proposition 4.8, using the fact that:

P(max_{1≤j≤m−1} X_j ≤ u_n, X_m > u_n) = P(max_{j≤m} X_j > u_n) − P(max_{j≤m−1} X_j > u_n).  □
Acknowledgement. The first author would like to thank the second author and Laboratoire de mathématiques de l’Université
de Paris-sud (Équipe de Probabilités, Statistique et modélisation) for their warm hospitality. The authors would like to
thank J. Segers for drawing their attention to references [5], [24] and [25].
References
[1] R. M. Balan and S. Louhichi. Convergence of point processes with weakly dependent points. J. Theoret. Probab. 22 (2009), 955-982. MR2558660
[2] R. M. Balan and S. Louhichi. A cluster limit theorem for infinitely divisible point processes (2009). Preprint available at: http://arxiv.org/abs/0911.5471
[3] A. Balkema and S. I. Resnick. Max-infinite divisibility. J. Appl. Probab. 14 (1977), 309-319. MR0438425
[4] K. Bartkiewicz, A. Jakubowski, T. Mikosch and O. Wintenberger. Stable limits for sums of dependent infinite variance random variables. To appear in Probab. Th. Rel. Fields (2009).
[5] J. Beirlant, Y. Goegebeur, J. Segers and J. Teugels. Statistics of Extremes: Theory and Applications (2004). Wiley Series in Probability and Statistics. MR2108013
[6] J. Dedecker and S. Louhichi. Conditional convergence to infinitely divisible distributions with finite variance. Stoch. Proc. Appl. 115 (2005), 737-768. MR2132596
[7] J. Dedecker and S. Louhichi. Convergence to infinitely divisible distributions with finite variance. ESAIM Probab. Stat. 9 (2005), 38-73. MR2148960
[8] R. A. Davis and T. Hsing. Point process and partial sum convergence for weakly dependent random variables with infinite variance. Ann. Probab. 23 (1995), 879-917. MR1334176
[9] R. Durrett and S. I. Resnick. Functional limit theorems for dependent variables. Ann. Probab. 6 (1978), 829-846. MR0503954
[10] B. V. Gnedenko and A. N. Kolmogorov. Limit Distributions for Sums of Independent Random Variables (1954). Addison-Wesley, MA. MR0062975
[11] T. Hsing, J. Hüsler and M. R. Leadbetter. On the exceedance process for a stationary sequence. Probab. Th. Rel. Fields 78 (1988), 97-112. MR0940870
[12] A. Jakubowski. Minimal conditions in p-stable limit theorems. Stoch. Proc. Appl. 44 (1993), 291-327. MR1200412
[13] A. Jakubowski. Minimal conditions in p-stable limit theorems. Stoch. Proc. Appl. 68 (1997), 1-20. MR1454576
[14] A. Jakubowski and M. Kobus. α-stable limit theorems for sums of dependent random vectors. J. Multiv. Anal. 29 (1989), 219-251. MR1004336
[15] O. Kallenberg. Random Measures. Third edition (1983). Springer, New York. MR0818219
[16] M. R. Leadbetter. Extremes and local dependence in stationary sequences. Z. Wahr. verw. Gebiete 65 (1983), 291-306. MR0722133
[17] G. F. Newell. Asymptotic extremes for m-dependent random variables. Ann. Math. Stat. 35 (1964), 1322-1325. MR0164361
[18] G. L. O'Brien. The maximal term of uniformly mixing stationary processes. Z. Wahr. verw. Gebiete 30 (1974), 57-63. MR0362451
[19] G. L. O'Brien. Extreme values for stationary and Markov sequences. Ann. Probab. 15 (1987), 281-291. MR0877604
[20] S. I. Resnick. Weak convergence to extremal processes. Ann. Probab. 3 (1975), 951-960. MR0428396
[21] S. I. Resnick. Extreme Values, Regular Variation, and Point Processes (1987). Springer, New York. MR0900810
[22] S. I. Resnick. Heavy Tail Phenomena: Probabilistic and Statistical Modelling (2007). Springer, New York. MR2271424
[23] S. I. Resnick and P. Greenwood. A bivariate stable characterization and domains of attraction. J. Multiv. Anal. 9 (1979), 206-221. MR0538402
[24] J. Segers. Approximate distributions of clusters of extremes. Statist. Probab. Lett. 74 (2005), 330-336. MR2186477
[25] J. Segers. Rare events, temporal dependence, and the extremal index. J. Appl. Probab. 43 (2006), 463-485. MR2248577
[26] R. L. Smith. The extremal index for a Markov chain. J. Appl. Probab. 29 (1992), 37-45. MR1147765
[27] I. Weissman. On weak convergence of extremal processes. Ann. Probab. 4 (1976), 470-473. MR0400330