Electronic Journal of Probability
Vol. 2 (1997) Paper no. 3, pages 1–27.
Journal URL: http://www.math.washington.edu/~ejpecp/
Paper URL: http://www.math.washington.edu/~ejpecp/EjpVol2/paper3.abs.html
Avoiding-probabilities for Brownian snakes and
super-Brownian motion
Romain Abraham and Wendelin Werner
UFR de Mathématiques et d’Informatique, Université René Descartes,
45, rue des Saints-Pères, F-75270 PARIS cedex 06, FRANCE
e-mail: abraham@math-info.univ-paris5.fr
Laboratoire de Mathématiques de l’Ecole Normale Supérieure
45, rue d’Ulm, F-75230 PARIS cedex 05, FRANCE
e-mail: wwerner@dmi.ens.fr
Abstract: We investigate the asymptotic behaviour of the probability that
a normalized d-dimensional Brownian snake (for instance when the life-time
process is an excursion of height 1) avoids 0 when starting at distance ε from the
origin. In particular we show that when ε tends to 0, this probability respectively
behaves (up to multiplicative constants) like ε^4, ε^{2√2} and ε^{(√17−1)/2}, when
d = 1, d = 2 and d = 3. Analogous results are derived for super-Brownian motion
started from δ_x (conditioned to survive until some time) when the modulus of
x tends to 0.
Key words: Brownian snakes, superprocesses, non-linear differential equations
AMS-classification: 60J25, 60J45
Submitted to EJP on October 24, 1996. Final version accepted on May 7, 1997.
Avoiding-probabilities for Brownian snakes
and super-Brownian motion
Romain Abraham and Wendelin Werner
Université Paris V and C.N.R.S.
1 Introduction
The Brownian snake is a path-valued Markov process that has been shown to
be a powerful tool to study probabilistically some non-linear partial differential
equations (see e.g. Le Gall [13, 14, 15], Dhersin-Le Gall [8]). It has also been
studied in its own right (see e.g. [24], [25]), and, because of its intimate connection with super-Brownian motion, it can be used to study certain branching
particle systems and superprocesses (see for instance Le Gall-Perkins [18], Serlet
[21, 22]).
Our first purpose here is to study the asymptotic behaviour (when ε → 0+) of the probability that a d-dimensional Brownian snake started at distance ε from the origin
does not hit 0, for instance when the life-time process is conditioned on hitting 1.
This has a rather natural interpretation in terms of critical branching particle
systems that we shall briefly point out at the end of this introduction. Recall
that if B denotes a linear Brownian motion started from ε under the probability
measure Pε , then the reflection principle immediately shows that
P_ε(0 ∉ B[0, 1]) ∼ √(2/π) ε, when ε → 0.   (1)
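As a quick numerical illustration of (1) (our sketch, not part of the original argument): the reflection principle gives the exact value P_ε(0 ∉ B[0,1]) = erf(ε/√2), whose ratio to √(2/π) ε tends to 1:

```python
import math

def avoid_zero_prob(eps):
    # Reflection principle: P_eps(0 not in B[0,1]) = P(|N(0,1)| < eps)
    #                     = erf(eps / sqrt(2)).
    return math.erf(eps / math.sqrt(2.0))

for eps in (0.1, 0.01, 0.001):
    ratio = avoid_zero_prob(eps) / (math.sqrt(2.0 / math.pi) * eps)
    print(eps, ratio)  # ratio tends to 1 as eps -> 0
```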
We shall for instance see (see Theorem 1 below) that the analogous quantity for
a conditioned one-dimensional Brownian snake behaves (up to a multiplicative
constant) like ε4 when ε → 0. Note that the type of conditioning (conditioned to
hit a fixed point, to survive after time t etc.) will only change the multiplicative
constant, but not the power of ε. We will also use these results to study analogous
problems for conditioned d-dimensional super-Brownian motion.
Let us first briefly recall the definition of the d-dimensional Brownian snake
(see e.g. Le Gall [13, 14] for a detailed construction and some properties of
this process): It is a Markov process W = ((Ws , ζs ), s ∈ [0, σ]) where Ws takes
values in the set of stopped continuous functions
F = {w : [0, T (w)] → Rd , w is continuous}
and its life-time is T (Ws ) = ζs . Conditionally on the life-time process (ζs , s ≤
σ), the process (Ws , s ≥ 0) is a time-inhomogeneous Markov process in F, and
its transition kernels are described by the two following properties: For any
s < s′ < σ,
• W_s(t) = W_{s′}(t) for all t ≤ inf_{[s,s′]} ζ.
• (W_{s′}(inf_{[s,s′]} ζ + t) − W_{s′}(inf_{[s,s′]} ζ), t ∈ [0, ζ_{s′} − inf_{[s,s′]} ζ]) is a Brownian
motion in R^d started from 0 and independent of W_s.
Loosely speaking, Ws is a Brownian path in Rd with random lifetime ζs . When
ζ decreases, the extremity of the path is erased and when ζ increases, the path
is extended by adding an independent Brownian motion.
We shall essentially focus on the cases where the life-time process ζ = (ζs , s ∈
[0, σ]) is a Brownian excursion away from 0. When ζ is reflected Brownian motion, then the above description defines the Brownian snake, which is a Markov
process for which the trajectory started from x and with zero life-time is regular. We will denote the associated (infinite) excursion measure N_x^[d]. We choose
N_x^[d] in such a way that ζ is a standard Brownian excursion defined under the
Itô measure. We shall also consider the case where ζ is a Brownian excursion
renormalized by its height (i.e. sup_{[0,σ]} ζ = 1); in this case, the ‘normalized’
Brownian snake started from x is defined under a probability measure that we
shall denote Ñ_x^[d].
Some large deviation results for the Brownian snake normalized by the length
of the excursion have been recently derived by Dembo-Zeitouni [6], Serlet [24].
The range R of the Brownian snake W is defined as follows:
R = {Ws (ζs ), s ∈ [0, σ]} = {Ws (t), s ∈ [0, σ], t ∈ [0, ζs ]}.
It is the set of points visited by the snake. Recall that points are not polar (i.e.
a fixed point has a strictly positive probability/measure to be in R) only for
d < 4 (see e.g. Dawson-Iscoe-Perkins [4]).
Here is the statement of our main results for the Brownian snake:
Theorem 1 Define
α(1) = 4,  α(2) = 2√2,  α(3) = (√17 − 1)/2.
Then for d = 1, 2, 3:
(i) There exist constants k_1(d) ∈ (0, ∞) such that
Ñ_ε^[d](0 ∉ R) ∼ k_1(d) ε^{α(d)} when ε → 0+.
(ii) There exist constants k_2(d) ∈ (0, ∞), such that for all fixed X > 0,
N_ε^[d](0 ∉ R and R ∩ {z ∈ R^d, ‖z‖ = X} ≠ ∅) ∼ (k_2(d)/X^{α(d)+2}) ε^{α(d)} when ε → 0+.
For d = 1, one has
k_2(1) = Γ(1/3)^18 / (1792 π^6) ≈ 29.315.   (2)
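The constants in Theorem 1 can be checked numerically; the following sketch (ours, not from the paper) evaluates α(d) and the closed form (2) with Python's standard library:

```python
import math

# Exponents alpha(d) of Theorem 1:
alpha = {1: 4.0, 2: 2.0 * math.sqrt(2.0), 3: (math.sqrt(17.0) - 1.0) / 2.0}

# Closed form (2) for d = 1: k2(1) = Gamma(1/3)^18 / (1792 pi^6).
k2_1 = math.gamma(1.0 / 3.0) ** 18 / (1792.0 * math.pi ** 6)
print(alpha, k2_1)  # k2_1 is approximately 29.315
```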
The two statements (i) and (ii) are similar, but their proofs differ significantly. The proof of (ii) relies on analytical arguments (Sections 4 and 5.1),
whereas that of (i) uses the (probabilistic) decomposition of the snake via its
longest path (Sections 3 and 5.2).
We will then use these results to study similar quantities for d-dimensional
super-Brownian motion. For references on superprocesses, see e.g. the monographs and lecture notes by Dawson [3], Dawson-Perkins [5] and Dynkin [10]
and the references therein. Let (µt )t≥0 denote a super-Brownian motion with
branching mechanism 2u2 (just as in Le Gall [17], Theorem 2.3) started from
the finite positive measure ν on R^d under the probability measure P_ν. Let |µ_1|
denote the total mass of µ_1, and let S denote the closure of the union over all t > 0 of
the supports of µ_t. Then the following Theorem will turn out to be a simple
consequence of Theorem 1:
Theorem 2 Define α(d) and k_1(d) as in Theorem 1. Then for d = 1, 2 and 3,
one has:
P_{δ_ε}(0 ∉ S and |µ_1| > 0) ∼ k_3(d) ε^{α(d)} exp( −(4−d)/(2ε²) ), when ε → 0   (3)
where k_3(d) = k_1(d)/(1 + α(d)/2).
This paper is organized as follows: The next section is devoted to some
preliminaries (we put down some notation and recall some relevant facts). For
the sake of clarity, we will then first focus on the one-dimensional case. In
Section 3, we use probabilistic arguments to derive Theorem 1-(i) for d = 1,
and in Section 4, we derive Theorem 1-(ii) for d = 1 using the Laurent series
expansion of elliptic functions. In Section 5, we study the multi-dimensional
problem, and more generally, the case where the Brownian snake is replaced
by a snake, where the underlying spatial Markov process is a Bessel process of
dimension δ < 4 (we will somewhat improperly call this a Bessel snake). In
Section 6, we use these results to derive Theorem 2. Finally, we conclude with
some remarks in Section 7.
An informal motivation: Consider a critical branching particle system that
can be modelled on long time scales by super-Brownian motion (for instance,
each particle moves randomly without drift, and when it dies, it gives birth
to exactly two offspring with probability 1/2, and it has no offspring with
probability 1/2). Take a living particle at time T (T is very large compared to
the mean life-time of one particle), and look at its ancestor X at time 0 (this
corresponds, roughly speaking, to conditioning X on having at least one living
descendant at time T). Let Π be a hyperplane at distance ε from the position of X
at time 0 (ε is small, but large compared to the mean displacement of X during
its life-time). Then, the probability that NO descendant of X before time T has
crossed the hyperplane Π decays, up to a multiplicative constant, like ε^4 when
ε → 0.
In view of possible applications, we will also (in Section 6.2) consider the
following problem: Suppose that we have to implant a large number of particles
(we can choose how many) at distance ε from an ’infected’ hyperplane Π. We want
to maximize the probability that the system survives up to time T and that no
particle hits the hyperplane Π before T (i.e. the system does not die out and
does not get infected by the hyperplane Π). It turns out that for this problem,
the optimal probability decays like a constant times ε^6 when ε → 0.
2 Preliminaries, Notation
Throughout this paper:
• B will denote a one-dimensional Brownian motion started from x under
the probability measure Px.
• When δ > 0, R will denote a δ-dimensional Bessel process started from x
under the probability measure P_x^(δ); the filtration associated to the process
(R_t)_{t≥0} is denoted by (F_t)_{t≥0}. Let us recall that Bessel processes hit zero
with positive probability only when their dimension is strictly smaller than
2.
The following result (Yor [27], Proposition 1) shall be very useful in this
paper:
Proposition 1 (Yor) Suppose that Φ_t is a positive F_t-measurable random variable and that λ ≥ 0 and µ > −1; then for x > 0, one has:
E_x^{(2+2µ)}( Φ_t exp( −(λ²/2) ∫₀^t ds/(R_s)² ) ) = x^{ν−µ} E_x^{(2+2ν)}( (R_t)^{µ−ν} Φ_t ),   (4)
where ν² = λ² + µ².
• n will denote the Itô measure on positive Brownian excursions (ζ(s), s ≤
σ). The set of all positive excursions will be denoted E. The length
of the excursion ζ will be denoted σ(ζ) and its height will be denoted
H(ζ) = sups≤σ ζ(s). Recall that
n(H(ζ) > x) = 1/(2x).   (5)
• We also put
n>h = n1{H(ζ)>h} and n<h = n1{H(ζ)<h} .
n=h will denote the law of Brownian excursions renormalized by their
height (so that n=h is a probability measure and n=h (H(ζ) = h) = 1).
For convenience, we put n=1 = ñ.
• W = ((Ws , ζs ), s ≤ σ) will now denote a one-dimensional Brownian snake
associated to the excursion ζ. When ζ is defined under the measure n
(respectively n^{<h}, n^{=h}, n^{>h} and ñ) then the snake W is started from x
under the measure N_x (respectively N_x^{<h}, N_x^{=h}, N_x^{>h} and Ñ_x). Throughout
the paper, we put
f(ε) = Ñ_ε(0 ∉ R).
• The following result is a direct consequence in terms of the Brownian snake
of a result of Dawson, Iscoe and Perkins ([4], Theorem 3.3):
N_0^{<h}(1 ∈ R) ≤ (c/h^{1/2}) exp(−a/h),   (6)
for all sufficiently small h (c and a are fixed positive constants).
Theorem 3.3 in [4] states in particular that in dimension d, for all small
enough t,
P_{δ_0}(∃s ≤ t, µ_s({x ∈ R^d, ‖x‖ ≥ 1}) > 0) ≤ (c′/t^{d/2}) exp(−a/t)
where a and c′ are fixed positive constants (the constant a instead of 1/2
is because the super-Brownian motion µ has branching mechanism 2u²
and not u²; we do not need the exact value of a here). This implies that
for d = 1,
P_{δ_0}(1 ∈ S and µ_t = 0) ≤ (c′/t^{1/2}) exp(−a/t).
But on the other hand, the Poissonian representation of super-Brownian
motion (see e.g. Le Gall [17], Theorem 2.3) and the exponential formula
(see e.g. Revuz-Yor [20], Chapter XII, (1.12)) imply that
P_{δ_0}(1 ∈ S and µ_t = 0) = 1 − P_{δ_0}(1 ∉ S or µ_t ≠ 0)
= 1 − exp( −∫₀¹ N_0(H(ζ) < t and 1 ∈ R) du ).
Combining the last two statements implies (6). For exact related asymptotics, see e.g. Dhersin [7].
• When δ > 0, W^(δ) = (W_s^(δ), s ≤ σ) will denote a δ-dimensional Bessel
snake. This process is defined exactly as the one-dimensional Brownian
snake, except that the spatial motion is this time a δ-dimensional Bessel
process (absorbed or reflected at 0; this does not make any difference for
the quantities that we shall consider here). For convenience, we shall also
denote its range by R. When the excursion ζ is defined under the measure
n (respectively ñ), then the Bessel snake is defined under the measure N^(δ)
(resp. Ñ^(δ)). We will put
f^(δ)(ε) = Ñ_ε^(δ)(0 ∉ R).
• When A and A′ are two topological spaces, C(A, A′) will denote the set of
continuous functions of A into A′.
3 Probabilistic approach in one dimension
3.1 Decomposition of the normalized snake
We are first going to decompose the normalized Brownian snake (defined under
Ñx ) with respect to its longest path. Our goal in this part is to derive the
following statement:
Proposition 2 For all ε > 0,
f(ε) = P_ε(0 ∉ B[0, 1]) × E_ε( exp( −2 ∫₀¹ dt ∫₀^{1−t} (dh/h²) (1 − f(B_t/√h)) ) | 0 ∉ B[0, 1] ).
Suppose that ζ is defined under the probability measure ñ and W under Ñx .
Define
T = inf{s ≥ 0, ζs = 1}.
It is well-known (see e.g. Revuz-Yor [20]) that under ñ, the two processes
(ζs , s ≤ T ) and (ζσ−s , s ≤ σ − T ) are two independent three-dimensional Bessel
processes started from 0 and stopped at their respective hitting times of 1.
Combining this with the definition of the Brownian snake shows that under Ñx
and conditionally on {WT = γ}, the two processes W + = (Ws , s ≤ T ) and
W − = (Wσ−s , s ≤ σ − T ) are independent and that their conditional laws are
identical. Define the reversed process
ζ̄_s = 1 − ζ_{T−s}
for s ∈ [0, T]. It is well-known (see e.g. Proposition (4.8), Revuz-Yor [20],
Chapter VII) that the law of (ζ̄_s, s ≤ T) is again that of a three-dimensional
Bessel process stopped at its hitting time of 1. We now put
ζ̄*_s = sup_{u<s} ζ̄_u
and we define, for u ∈ [0, 1],
α_u = inf{s > 0, ζ̄_s = u}
and
β_u = inf{s > α_u, ζ̄_s = u}.
The intervals [α_u, β_u], u ∈ [0, 1], are the excursion intervals of the process ζ̄* − ζ̄ above level 0.
Let ζ^u denote the excursion of that process in the interval [α_u, β_u] (i.e. ζ^u is
defined on the time-interval [0, β_u − α_u]).
Lemma 1 The random measure Σ_{u∈[0,1]} δ_{(u,ζ^u)} is a Poisson measure on [0, 1] ×
E of intensity 2dt × n^{<t}(de).
←
−
Proof- Recall that for all u0 > 0, the law of ( ζ s+αu0 , s ≤ T − αu0 ) is that of a
linear Brownian motion started from u0 and conditioned to hit 1 before 0, and
←
−
that it is independent of ( ζ s , s ≤ αu0 ).
Suppose now that B is a linear Brownian motion started from u0 , and define
Bs∗ = sup Bu ,
u<s
and the excursion intervals [αu, βu ] of B ∗ − B above zero for u ∈ [u0, 1] exactly
∗
as above. Let B u denote the associated excursion. Then (as
PB − B is reflecting
Brownian motion, see e.g. Revuz-Yor [20]), the measure u∈[u0 ,1] δ(u,Bu ) is a
Poisson point process of intensity 2dt × n(de). Note that B hits 1 before 0 if
and only if
H(B u ) < u
for all u ≥ u0 . Hence, the measure
X
δ(u,ζ u )
u∈[u0 ,1]
←
−
is a Poisson measure on [u0 , 1] × E of intensity 2dt × n<t (de). As ( ζ s , s ≤ αu0 )
←
−
and ( ζ s , s ≥ αu0 ) are independent, the lemma follows, letting u0 → 0+. 2
We are now going to state the following counterpart of this lemma for the
Brownian snake: Note that for all s ∈ [α_u, β_u], the path W_{T−s} coincides with
W_T on the interval [0, 1 − u]. Let us define, for all u ∈ [0, 1], and for all
s ∈ [0, β_u − α_u],
W^u_s(t) = W_{T−α_u−s}(1 − u + t) for t ∈ [0, ζ_{T−α_u−s} − (1 − u)].
Note that all the paths W^u_s are started from W_T(1 − u). In the sequel W^u will
denote the process (W^u_s, s ∈ [0, β_u − α_u]).
Lemma 2 Under the probability measure Ñ_x and conditionally on {W_T = γ},
the random measure Σ_{u∈[0,1]} δ_{(u,W^u)} is a Poisson measure on [0, 1] × C(R_+, F)
with intensity 2dt N^{<t}_{γ(1−t)}(dκ).
Proof- The proof goes along similar lines as that of Proposition 2.5 in Le Gall
[14]. This type of result can also be found in Serlet [23]. Let Θ^{ζ^u}_{γ(1−u)}(dκ) denote
the law of W^u under Ñ_x, conditionally on {W_T = γ} and ζ^u. Let F(t, κ)
denote a positive measurable functional on the space [0, 1] × C(R_+, F). Note
that for all u ≠ u′, conditionally on ζ^u and ζ^{u′}, W^u and W^{u′} are independent,
because of the Markov property of the Brownian snake. Hence,
Ñ_x( exp( −Σ_{u∈[0,1]} F(u, W^u) ) | W_T = γ ) = ñ( Π_{u∈[0,1]} ∫ Θ^{ζ^u}_{γ(1−u)}(dκ) e^{−F(u,κ)} ).
Using Lemma 1 and the exponential formula (see [20], Chapter XII, (1.12)), we
then get
Ñ_x( exp( −Σ_{u∈[0,1]} F(u, W^u) ) | W_T = γ )
= exp( −∫₀¹ 2du n^{<u}( 1 − ∫ Θ^{ζ}_{γ(1−u)}(dκ) e^{−F(u,κ)} ) )
= exp( −2 ∫₀¹ du n^{<u}( ∫ Θ^{ζ}_{γ(1−u)}(dκ) (1 − e^{−F(u,κ)}) ) )
= exp( −2 ∫₀¹ du N^{<u}_{γ(1−u)}( 1 − e^{−F(u,W^u)} ) ).
The lemma follows. □
In particular,
Ñ_x(∀u ∈ [0, 1], 0 ∉ W^u | W_T = γ)
= exp( −2 ∫₀¹ du ∫₀^u (dh/(2h²)) N^{=h}_{γ(1−u)}(0 ∈ R) )
= exp( −∫₀¹ du ∫₀^{1−u} (dh/h²) (1 − f(B_u/√h)) ).
Combining this with the fact that W_T is a linear Brownian motion started from
x and that, conditionally on the path W_T, the two processes W^+ and W^− are
independent and have the same law, one gets
f(ε) = E_ε( 1_{0 ∉ B[0,1]} Ñ_ε(∀u ∈ [0, 1], 0 ∉ W^u | W_T(·) = B(min(·, 1)))² )
= P_ε(0 ∉ B[0, 1]) × E_ε( exp( −2 ∫₀¹ du ∫₀^{1−u} (dh/h²) (1 − f(B_u/√h)) ) | 0 ∉ B[0, 1] ).

3.2 Application
We are now going to derive Theorem 1-(i) for d = 1 without giving the explicit
exact value of α. The law of (B_t, t ≤ 1) with B_0 = ε conditioned by the event
{0 ∉ B[0, 1]} is that of a Brownian meander of length 1, started from ε. It
is well-known (see e.g. Biane-Yor [2], Imhof [12] or Yor [26], formula (3.5))
that its density with respect to the law of a three-dimensional Bessel process
(R_t, t ∈ [0, 1]) started from R_0 = ε (defined under the probability measure P_ε^{(3)})
is equal to 1/R_1 up to the renormalizing constant E_ε^{(3)}((R_1)^{−1})^{−1}. Hence,
Proposition 2 can be rewritten as:
f(ε) = ( P_ε(0 ∉ B[0, 1]) / E_ε^{(3)}((R_1)^{−1}) ) E_ε^{(3)}( (1/R_1) exp( −2 ∫₀¹ dt ∫₀^{1−t} (dh/h²) (1 − f(R_t/√h)) ) )
= ( P_ε(0 ∉ B[0, 1]) / E_ε^{(3)}((R_1)^{−1}) )
× E_ε^{(3)}( (1/R_1) exp( −4 ∫₀¹ (dt/R_t²) ∫_{R_t/√(1−t)}^∞ v(1 − f(v)) dv ) ).   (7)
Note that (6) implies easily that ∫₀^∞ v(1 − f(v)) dv < ∞. Note also that
lim_{ε→0+} E_ε^{(3)}((R_1)^{−1}) = E_0^{(3)}((R_1)^{−1}) = √(2/π)
(see for instance Yor [26] formula (3.4)). Hence, combining this with (1), one
gets
f(ε) ∼ ε g(ε) when ε → 0+
where
g(ε) := E_ε^{(3)}( (1/R_1) exp( −4 ∫₀¹ (dt/R_t²) ∫_{R_t/√(1−t)}^∞ v(1 − f(v)) dv ) ).
Let us put
λ = ( 8 ∫₀^∞ v(1 − f(v)) dv )^{1/2}   (8)
and define the function
G(x) = 4 ∫₀^x v(1 − f(v)) dv.
We now rewrite g(ε) as follows:
g(ε) = E_ε^{(3)}( ( exp{ ∫₀¹ R_t^{−2} G(R_t/√(1−t)) dt } / R_1 ) exp( −(λ²/2) ∫₀¹ dt/R_t² ) ).
We can now apply directly Proposition 1 (for µ = 1/2) so that
g(ε) = ε^{ν−1/2} E_ε^{(2+2ν)}( exp{ ∫₀¹ R_t^{−2} G(R_t/√(1−t)) dt } / (R_1)^{ν+1/2} ),
where
ν = ( 1/4 + λ² )^{1/2}.   (9)
We now need the following result:
Lemma 3 One has
lim_{ε→0+} E_ε^{(2+2ν)}( exp{ ∫₀¹ R_t^{−2} G(R_t/√(1−t)) dt } / (R_1)^{ν+1/2} )
= E_0^{(2+2ν)}( exp{ ∫₀¹ R_t^{−2} G(R_t/√(1−t)) dt } / (R_1)^{ν+1/2} ) < ∞.
This lemma combined with the above shows that
f(ε) ∼ k_1 ε^γ when ε → 0+,   (10)
where
k_1 = E_0^{(2+2ν)}( exp{ ∫₀¹ R_t^{−2} G(R_t/√(1−t)) dt } / (R_1)^{ν+1/2} )   (11)
and
γ = ν + 1/2.   (12)
Proof of the Lemma- It suffices to apply a dominated convergence argument.
Note that G(x) ≤ λ²/2 for all x, so that G is bounded. Note also that G(x) ≤
4 ∫₀^x v dv = 2x². Let ρ^ε_{1/2}(·) denote the density of R_{1/2} under the probability
measure P_ε^{(2+2ν)}. Then
∫₀¹ R_t^{−2} G(R_t/√(1−t)) dt ≤ (λ²/2) ∫_{1/2}^1 R_t^{−2} dt + ∫₀^{1/2} R_t^{−2} G(√2 R_t) dt
≤ (λ²/2) ∫_{1/2}^1 R_t^{−2} dt + 2.
Hence (here ε can take the value 0),
h^ν_ε := E_ε^{(2+2ν)}( exp{ ∫₀¹ R_t^{−2} G(R_t/√(1−t)) dt } / (R_1)^{ν+1/2} )
≤ e² E_ε^{(2+2ν)}( exp{ (λ²/2) ∫_{1/2}^1 R_t^{−2} dt } / (R_1)^{ν+1/2} )
= e² ∫₀^∞ dρ^ε_{1/2}(a) E_a^{(2+2ν)}( exp{ (λ²/2) ∫₀^{1/2} R_t^{−2} dt } / (R_{1/2})^{ν+1/2} ).
Using Proposition 1 once again, we get:
h^ν_ε ≤ e² ∫₀^∞ dρ^ε_{1/2}(a) E_a^{(3)}( (R_{1/2})^{ν−1/2} / ( a^{ν−1/2} (R_{1/2})^{ν+1/2} ) )
= e² ∫₀^∞ dρ^ε_{1/2}(a) a^{1/2−ν} E_a^{(3)}( (R_{1/2})^{−1} ).
Note that a simple combination of the strong Markov and the scaling property
for the Bessel processes yields
E_a^{(3)}( (R_{1/2})^{−1} ) ≤ E_0^{(3)}( (R_{1/2})^{−1} ) < ∞
so that
h^ν_ε ≤ e² E_0^{(3)}( (R_{1/2})^{−1} ) E_ε^{(2+2ν)}( (R_{1/2})^{1/2−ν} ).
Note that similarly (as ν ≥ 1/2)
E_ε^{(2+2ν)}( (R_{1/2})^{1/2−ν} ) ≤ E_0^{(2+2ν)}( (R_{1/2})^{1/2−ν} ) < ∞
so that h^ν_ε is bounded by a constant uniformly in ε, which completes the proof
of the lemma. □
3.3 Explicit value of the exponent
It now remains to compute ∫₀^∞ v(1 − f(v)) dv to complete the proof of Theorem
1-(i) for d = 1. Note that the scaling property and (5) imply that
∫₀^∞ v(1 − f(v)) dv = ∫₀^∞ v N_v^{=1}(0 ∈ R) dv
= ∫₀^∞ v N_1^{=1/v²}(0 ∈ R) dv
= ∫₀^∞ (du/(2u²)) N_1^{=u}(0 ∈ R)
= N_1(0 ∈ R).
Define u(x) = N_x(0 ∈ R). The scaling property implies that
u(x) = N_1(0 ∈ R)/x²,
and it is also well-known (see e.g. Le Gall [17]) that u solves the equation
u″ = 4u²
in (0, ∞). This implies immediately that
N_1(0 ∈ R) = 3/2.   (13)
Hence combining this with (12), (8) and (9) yields
γ = 1/2 + √((1 + 48)/4) = 4,
and the proof of Theorem 1-(i) for d = 1 is complete.
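The exponent arithmetic at the end of this section can be verified directly; this small check (ours, not the paper's) chains (13), (8), (9) and (12):

```python
import math

# Chain of constants from Section 3.3 (case d = 1):
N1 = 1.5                          # N_1(0 in R) = 3/2, equation (13)
lam_sq = 8.0 * N1                 # lambda^2, from (8); the integral equals N1
nu = math.sqrt(0.25 + lam_sq)     # nu = (1/4 + lambda^2)^(1/2), equation (9)
gamma = nu + 0.5                  # gamma = nu + 1/2, equation (12)
print(gamma)  # -> 4.0
```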
4 Analytical approach in one dimension
In this section, we are going to estimate the asymptotic behaviour of
N_x(0 ∉ R and X ∈ R)
when x → 0+ and X > 0 is fixed.
Throughout this section X > 0 is fixed. Let us now define for x ∈ (0, X),
u(x) = Nx (0 ∈ R or X ∈ R).
Note that by symmetry, u(x) = u(X − x) for all x ∈ (0, X); the scaling property
of the Brownian snake implies that
u(x) ≥ N_x(0 ∈ R) = x^{−2} N_1(0 ∈ R).   (14)
Hence
lim_{x→0+} u(x) = lim_{x→X−} u(x) = +∞.
It is in fact well-known that u is the only positive solution to the equation
u″(x) = ∆u(x) = 4u²(x)  for x ∈ (0, X)   (15)
with boundary conditions u(0) = u(X) = +∞ (see e.g. Le Gall [15]). In other
words, integrating this equation once,
u′²(x) = (8/3) u³(x) − c,   (16)
(16)
where c is a constant (depending on X) that we now determine using the
boundary conditions: Symmetry considerations imply that u0 (X/2) = 0; hence,
c = 8u(X/2)3 /3 and
X
=
2
Z
Z
X/2
∞
q
dx =
0
u(X/2)
dv
8 3
3 (v
− u(X/2)3 )
.
Hence
√(2/3) X = ∫_{u(X/2)}^∞ dv / √(v³ − u(X/2)³) = (1/√(u(X/2))) ∫₁^∞ dh / √(h³ − 1).
Let us from now on put
I = ∫₁^∞ dh / √(h³ − 1).
Then
u(X/2) = 3I²/(2X²)  and  c = 9I⁶/X⁶.   (17)
If we put v = 2u/3, we have
v′² = 4v³ − 4c/9 = 4v³ − 4I⁶/X⁶.
v is therefore a Weierstrass function. It can be expanded into its Laurent series
in the neighbourhood of 0 as follows (see e.g. Abramowitz-Stegun [1], Chapter
18):
v(x) = 1/x² + Σ_{p=1}^∞ c_p (I^{6p}/X^{6p}) x^{6p−2},
where (c_p) is the sequence of rational numbers defined by induction as follows:
c_1 = 1/7 and for all p ≥ 2,
c_p = ( 1 / ((6p + 1)(p − 1)) ) Σ_{j=1}^{p−1} c_j c_{p−j}.
In particular,
u(x) − 3/(2x²) ∼ ( 3I⁶/(14X⁶) ) x⁴  when x → 0+.   (18)
On the other hand, recall that N_x(0 ∈ R) = x^{−2} N_1(0 ∈ R) = 3/(2x²) (cf.
(13)) so that
N_x(0 ∉ R and X ∈ R) = N_x(0 ∈ R or X ∈ R) − N_x(0 ∈ R)
= (3/2) Σ_{p=1}^∞ c_p (I^{6p}/X^{6p}) x^{6p−2},   (19)
which proves Part (ii) of the Theorem with
k_2 = 3I⁶/(14X⁶).   (20)
I can be expressed in terms of Γ(1/3), noticing that in fact
I = (1/3) ∫₀¹ dy y^{−5/6} (1 − y)^{−1/2} = Γ(1/6)Γ(1/2) / (3Γ(2/3))
and using the duplication formulae (e.g. 6.1.18 and 6.1.19 in Abramowitz-Stegun
[1]), which imply that
I = Γ(1/3)³ / (π 2^{4/3}).
Plugging the result into (20) gives (2).
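The two closed forms for I, and its defining integral, agree numerically; a small verification sketch (ours, not part of the paper), using the substitutions y = sin²θ and θ = φ³ to remove the endpoint singularities before quadrature:

```python
import math

# Closed forms for I = int_1^inf dh / sqrt(h^3 - 1) quoted in the text.
I_beta = math.gamma(1 / 6) * math.gamma(1 / 2) / (3 * math.gamma(2 / 3))
I_dup = math.gamma(1 / 3) ** 3 / (math.pi * 2 ** (4 / 3))

# Direct quadrature: I = (2/3) int_0^{pi/2} sin(theta)^(-2/3) dtheta
# (substitution y = sin^2 theta); theta = phi^3 then removes the
# singularity at 0, leaving a smooth integrand on [0, (pi/2)^(1/3)].
N = 20000
L = (math.pi / 2) ** (1 / 3)
h = L / N
I_num = (2 / 3) * h * sum(
    3 * ((i + 0.5) * h) ** 2 / math.sin(((i + 0.5) * h) ** 3) ** (2 / 3)
    for i in range(N)
)
print(I_beta, I_dup, I_num)  # all approximately 2.4287
```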
Remark- It is not surprising that we obtain the same exponent as in Theorem 1-(i). There is indeed a direct way to show that the two quantities are
logarithmically equivalent:
log N_ε(0 ∉ R and 1 ∈ R) ∼ log Ñ_ε(0 ∉ R) when ε → 0;
however, Theorem 1-(i) (respectively (ii)) does not imply directly (ii) (respectively (i)).
5 ‘Bessel snakes’ and dimensions 2 and 3
We are now going to turn our attention towards the multi-dimensional problem
(i.e. cases d = 2 and d = 3 in Theorem 1), and its generalization to ‘Bessel
snakes’. Suppose that δ < 4 and that W^(δ) is a ‘Bessel snake’ as defined in Section 2,
defined under the measures N_x^(δ) or Ñ_x^(δ). Bessel processes have the same scaling
property as Brownian motion, so that we will be able to adapt the proofs given
in the δ = 1 case (i.e. the one-dimensional Brownian case). The
following theorem clearly contains Theorem 1:
Theorem 3 Define for all δ < 4,
α(δ) = ( 2 − δ + √(δ² − 20δ + 68) ) / 2.
Then:
(i) There exists a constant k_1(δ) ∈ (0, ∞) such that
Ñ_ε^(δ)(0 ∉ R) ∼ k_1 ε^{α(δ)} when ε → 0+.
(ii) There exists a constant k_2, such that for all fixed X > 0,
N_ε^(δ)(0 ∉ R and X ∈ R) ∼ k_2 ε^{α(δ)} / X^{α(δ)+2} when ε → 0+.
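Theorem 3 indeed specializes to Theorem 1 for δ = 1, 2, 3; the following check (ours, not the paper's) also verifies that β = α(δ) + 2 solves the quadratic (β − 2)(β + δ − 4) = 4(4 − δ) that appears later in (25):

```python
import math

def alpha(delta):
    # Exponent from Theorem 3.
    return (2 - delta + math.sqrt(delta ** 2 - 20 * delta + 68)) / 2

# Specialization to the dimensions of Theorem 1:
assert alpha(1) == 4.0
assert abs(alpha(2) - 2 * math.sqrt(2)) < 1e-12
assert abs(alpha(3) - (math.sqrt(17) - 1) / 2) < 1e-12

# beta = alpha + 2 solves (beta - 2)(beta + delta - 4) = 4(4 - delta).
for delta in (0.5, 1.0, 2.0, 3.0, 3.5):
    beta = alpha(delta) + 2
    assert abs((beta - 2) * (beta + delta - 4) - 4 * (4 - delta)) < 1e-9
print("alpha(delta) checks passed")
```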
As the proof goes along lines similar to those of Theorem 1 for d = 1, we
will only focus on the aspects where the two differ significantly.
5.1 The analytical approach
Suppose now that δ < 4 and X > 0 are fixed.
Lemma 4 The function u(x) = N_x^(δ)(0 ∈ R or X ∈ R) is the maximal solution
of the equation
u″ + ((δ − 1)/x) u′ = 4u²  in (0, X).   (21)
Proof of the lemma- One has to be a little bit careful because 1/x is not bounded
in the neighbourhood of 0, so that this is not a subcase of the results stated for
nicer diffusions (we cannot use the maximum principle for the solutions defined
on a larger interval). However, there is no real difficulty in adapting the proofs
to this particular case: Note that for integer δ, the lemma can be deduced from
the known results for δ-dimensional Brownian motion in a ball.
Theorem 3.2 in [17] implies readily that u defined in the Lemma is a solution
of (21). It is very easy to check directly that limX− u = +∞ but it is not
immediate to see for which values of δ one has lim_{0+} u = +∞, because 0 can
be polar for Bessel processes. Define now for all sufficiently large n,
u_n(x) = N_x^(δ)(1/n ∈ R or X − 1/n ∈ R).
Similarly, one has
u_n″ + ((δ − 1)/x) u_n′ = 4u_n²  in (n^{−1}, X − n^{−1}),   (22)
and it is easy to check that limx→(X−(1/n))− un (x) = limx→(1/n)+ un (x) = ∞.
Moreover, the maximum principle (see e.g. Dynkin [9], appendix) shows that un
is in fact the maximal solution of the differential equation (22) in (1/n, X −1/n).
It is also easy to check (using the compactness of R) that for all x ∈ (0, X),
u(x) = lim_{n→∞} u_n(x).
Combining the results above yields readily that u is the maximal solution of
(21) in (0, X). 2
The scaling property shows again that
N_x^(δ)(0 ∈ R) = (1/x²) N_1^(δ)(0 ∈ R).   (23)
Let c_0 = N_1^(δ)(0 ∈ R). It is then very easy to check (because v(x) = c_0 x^{−2} is
also a solution of v″ + (δ − 1)v′/x = 4v²) that
c_0 = (4 − δ)/2.   (24)
By analogy with the case δ = 1, we are looking for solutions of (21) of the type
u(x) = (1/x²) Σ_{k=0}^∞ a_k x^{βk},
for some β > 0. Formally, that would imply that
Σ_{k=0}^∞ (βk − 2)(βk + δ − 4) a_k x^{βk−4} = 4 Σ_{k=0}^∞ ( Σ_{j=0}^k a_j a_{k−j} ) x^{βk−4}.
Identifying the terms of these series yields that
a_0 = 2 − δ/2,
a_1 ( (β − 2)(β + δ − 4) − 8a_0 ) = 0,   (25)
and for k ≥ 2,
a_k ( (βk − 2)(βk + δ − 4) − 4(4 − δ) ) = 4 Σ_{j=1}^{k−1} a_j a_{k−j}.   (26)
Conversely, let us now define
β = ( 6 − δ + √(δ² − 20δ + 68) ) / 2   (27)
so that (25) is satisfied for any a_1 (if a_0 = 2 − δ/2). We now define by induction
the sequence (b_k), such that b_0 = 2 − δ/2, b_1 = 1, and for any k ≥ 2,
b_k ( β(6 − δ)(k² − k) + 2(4 − δ)(k² − 1) ) = 4 Σ_{j=1}^{k−1} b_j b_{k−j}   (28)
(this definition parallels Equation (26) with β given as in (27)).
Lemma 5 For all k ≥ 0, b_k > 0. The radius of convergence R of the series
Σ_{k≥1} b_k x^k is strictly positive and finite. Moreover, one has Σ_{k≥1} b_k R^k = +∞.
Proof of the lemma- The sequence (b_k) is well-defined, and it is immediate to
check using (28) that b_k ≥ 0 for all k ≥ 1. (28) also implies that there exist
0 < m < M < ∞ such that for all k ≥ 2,
m k^{−2} Σ_{j=1}^{k−1} b_j b_{k−j} ≤ b_k ≤ M k^{−2} Σ_{j=1}^{k−1} b_j b_{k−j}.
As
lim_{k→∞} (1/k³) Σ_{j=1}^{k−1} j(k − j) = ∫₀¹ t(1 − t) dt = 1/6,
there also exist 0 < m′ < M′ < ∞, such that for all k ≥ 2,
m′ ≤ (1/k³) Σ_{j=1}^{k−1} j(k − j) ≤ M′.
Hence, an easy induction shows that for all k ≥ 1,
k (mm′)^{k−1} ≤ b_k ≤ k (MM′)^{k−1},
and this implies that the radius of convergence R of Σ b_k x^k is strictly positive
and finite. Let us now define
w(y) = Σ_{k≥0} b_k y^k
so that w solves the equation
β² y² w″ + β(δ + β − 6) y w′ + (8 − 2δ) w = 4w²   (29)
on (0, R). Suppose now that w(R) = Σ_{k≥0} b_k R^k < ∞. Then, as
( y^{(δ+β−6)/β} w′ )′ = y^{(δ−β−6)/β} ( 4w² − (8 − 2δ)w ) / β²
on [R/2, R), w′ and w″ are also uniformly bounded on [R/2, R), and therefore w′(R) is well-defined. Cauchy's theorem then implies the existence of a
planar neighbourhood V ⊂ C of the point R and the existence and uniqueness of
a holomorphic solution w̃ of equation (29) in V, such that w̃(R) = w(R) and
w̃′(R) = w′(R). The uniqueness ensures that w = w̃ on V ∩ {z ∈ C, |z| < R},
and as both w and w̃ are holomorphic, w can be extended analytically onto
{z ∈ C, |z| < R} ∪ V. This contradicts the definition of R and the fact that for
all k ≥ 1, b_k > 0. The lemma is proved. □
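The recursion (28) and equation (29) can be cross-checked numerically. The sketch below (ours, not from the paper) takes δ = 1, where β = 6; note that the factor 4 in front of the convolution sum is our reading of the garbled printed recursion, chosen because it is what makes w solve (29):

```python
# delta = 1, hence beta = 6; b_0 = 2 - delta/2 = 3/2, b_1 = 1.
# The factor 4 in front of the convolution sum below is our reading of
# the printed recursion (28); it is what makes w solve (29).
delta = 1.0
beta = (6 - delta + (delta ** 2 - 20 * delta + 68) ** 0.5) / 2  # beta = 6.0
K = 12
b = [2 - delta / 2, 1.0] + [0.0] * (K - 1)
for k in range(2, K + 1):
    denom = beta * (6 - delta) * (k * k - k) + 2 * (4 - delta) * (k * k - 1)
    b[k] = 4.0 * sum(b[j] * b[k - j] for j in range(1, k)) / denom

# Residual of (29) for the truncated series w(y) = sum_k b_k y^k:
y = 0.01
w = sum(b[k] * y ** k for k in range(K + 1))
dw = sum(k * b[k] * y ** (k - 1) for k in range(1, K + 1))
ddw = sum(k * (k - 1) * b[k] * y ** (k - 2) for k in range(2, K + 1))
residual = (beta ** 2 * y ** 2 * ddw + beta * (delta + beta - 6) * y * dw
            + (8 - 2 * delta) * w - 4 * w ** 2)
assert abs(residual) < 1e-12
print(b[2], residual)
```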
Lemma 6 For all x ∈ (0, X),
u(x) = (1/x²) Σ_{k=0}^∞ b_k (R x^β / X^β)^k = (1/x²) w(R x^β / X^β).
Proof of the lemma- Define, for all Y > 0,
u_Y(x) = (1/x²) Σ_{k=0}^∞ b_k (R x^β / Y^β)^k = (1/x²) w(R x^β / Y^β).
The radius of convergence of this series is Y; it is straightforward to check that
u_Y is a solution of (21) on (0, Y) and that u_Y(0+) = u_Y(Y−) = +∞. We cannot
apply directly the maximum principle as we cannot compare u and u_X in the
neighbourhood of 0. We therefore go back to the function
r(x) = N_x^(δ)(0 ∉ R and X ∈ R).
Using (23) and (24), one has
r(x) = u(x) − N_x^(δ)(0 ∈ R) = u(x) − c_0/x²
on (0, X). Hence, one gets immediately that
r″ + ((δ − 1)/x) r′ = 4r (r + 2c_0 x^{−2})   (30)
on (0, X). Note also that the definition of r ensures that
lim_{0+} r = 0, lim_{X−} r = +∞ and r ≥ 0.
Define also, for all Y > 0,
r_Y(x) = u_Y(x) − c_0/x² = Σ_{k=1}^∞ b_k R^k x^{βk−2} / Y^{βk}.
Note that β > 2 (because δ < 4), so that lim_{0+} r_Y = 0. As a consequence of the
results for u_Y: r_Y ≥ 0, lim_{Y−} r_Y = +∞, and r_Y satisfies the same differential
equation as r (i.e. (30)) on the interval (0, Y). Take a sequence (Y_n) such
that Y_n < X and lim_{n→∞} Y_n = X. We can now apply the maximum principle
for this new equation in (0, Y_n), so that
r ≤ r_{Y_n} in (0, Y_n).
Clearly, for all x ∈ (0, X),
lim_{n→∞} r_{Y_n}(x) = lim_{n→∞} (X²/Y_n²) r_X(xX/Y_n) = r_X(x).
Hence, r ≤ r_X on (0, X), and eventually,
u(x) ≤ u_X(x).
Combining this with Lemma 4 finishes the proof of this lemma. □
Proof of Theorem 3(ii)- The previous lemma shows that
N_x^(δ)(0 ∉ R and X ∈ R) = r(x) = Σ_{k=1}^∞ (b_k R^k / X^{βk}) x^{βk−2}.   (31)
In particular,
N_x^(δ)(0 ∉ R and X ∈ R) ∼ (R/X^β) x^{β−2}  when x → 0+,   (32)
which proves part (ii) of Theorem 3.
5.2 The probabilistic approach
5.2.1 The case δ ≥ 2
In the case δ ≥ 2, the path of a Bessel process of dimension δ never reaches 0.
The decomposition of the snake with respect to its longest branch is therefore
even simpler than in the Brownian case, as there is no need to condition on the
event {0 ∉ R[0, 1]}. Hence, one immediately gets
Ñ_x^(δ)(0 ∉ R) = E_ε^(δ)( exp( −(λ²/2) ∫₀¹ dt/R_t² ) exp( ∫₀¹ R_t^{−2} G^(δ)(R_t/√(1−t)) dt ) ),
where
G^(δ)(x) = 4 ∫₀^x v Ñ_v^(δ)(0 ∈ R) dv
and
λ = ( 2 G^(δ)(∞) )^{1/2}.
Using (4), and if we put µ(δ) = −1 + δ/2 and ν = √(λ² + µ²), we get
Ñ_x^(δ)(0 ∉ R) = ε^{ν−µ} E_ε^{(2+2ν)}( (1/R_1^{ν−µ}) exp( ∫₀¹ R_t^{−2} G^(δ)(R_t/√(1−t)) dt ) ).
Using exactly the same arguments as for Lemma 3, one can show the convergence of
E_ε^{(2+2ν)}( (1/R_1^{ν−µ}) exp( ∫₀¹ R_t^{−2} G^(δ)(R_t/√(1−t)) dt ) )
towards a finite constant k_1, when ε → 0, so that
Ñ_x^(δ)(0 ∉ R) ∼ k_1 ε^{ν−µ}, when ε → 0+.
5.2.2 The case δ < 2
This time, a Bessel process of dimension δ hits zero with strictly positive probability. The proof now follows exactly the same lines as in Section 3. We now
have to use the following fact (which follows for instance from formula (3.5)
in Yor [26], paragraph 3.6): The conditional law of (R_t, t ∈ [0, 1]) under P_ε^(δ)
conditioned by the event {0 ∉ R[0, 1]} is that of a Bessel meander of dimension
δ, and has a density proportional (up to a constant term) to
1/R_1^{2−δ}
with respect to P_ε^{(4−δ)}.
Also, recall that
P_ε^(δ)(0 ∉ R[0, 1]) ∼ k(δ) ε^{2−δ} when ε → 0+.
Hence,
Ñ_x^(δ)(0 ∉ R) ∼ k′(δ) ε^{2−δ} E_ε^{(4−δ)}( (1/R_1^{2−δ}) exp( −(λ²/2) ∫₀¹ dt/R_t² ) exp( ∫₀¹ R_t^{−2} G^(δ)(R_t/√(1−t)) dt ) )
when ε → 0, with the same notation as above. Note that µ(4 − δ) = −µ(δ)
(as functions of δ), so that one eventually gets
Ñ_x^(δ)(0 ∉ R) ∼ k‴(δ) ε^{ν−µ} when ε → 0+.
5.2.3 Identification
It now remains to show that
∫₀^∞ v (1 − Ñ_v^(δ)(0 ∉ R)) dv = (4 − δ)/2
to complete the proof of Theorem 3. This can be done exactly as in Section 3.3,
and is safely left to the reader.
6 Super-Brownian motion
6.1 Proof of Theorem 2
We now turn our attention towards super-Brownian motion. We shall use Theorem 1 and the Poissonian representation of super-Brownian motion in terms of
the Brownian snake (see e.g. Le Gall [17], Theorem 2.3 or Dynkin-Kuznetsov
[11]) to derive Theorem 2. Let (µt )t≥0 denote a super-Brownian motion started
from the finite positive measure ν on Rd under the probability measure Pν .
Throughout this section, to avoid complications, we shall always omit the superscript d (the spatial dimension), which is fixed (d = 1, 2 or 3).
Fix λ ≥ 0. Consider a Brownian snake W started from ε (this is a point
in Rd at distance ε from the origin) with life-time process a reflected Brownian
motion ζ, killed when its local time at level 0 exceeds λ. Define $(W^u, \zeta^u)_{u \in [0,\lambda]}$,
the corresponding excursions of $W$ away from the path started at ε with zero
life-time. Let $H(\zeta^u)$ denote the height of the excursion $\zeta^u$. We now construct
the super-Brownian motion $\mu_t$ under the probability measure $P_{\lambda\delta_\varepsilon}$ using the
Brownian snake $W$ as in Le Gall [17], Theorem 2.3. Then, one has:
$$\{|\mu_1| > 0\} = \{\exists u \le \lambda,\ H(\zeta^u) > 1\}.$$
Hence, as the excursion process is Markovian, one gets
$$P_{\lambda\delta_\varepsilon}(0 \notin \mathcal S \text{ and } |\mu_1| > 0) = P_{\lambda\delta_\varepsilon}\bigl(\exists u \le \lambda,\ H(\zeta^u) > 1 \text{ and } W^u \text{ does not hit } 0\bigr) \times P_{\lambda\delta_\varepsilon}(0 \notin \mathcal S).$$
The exponential formula (see e.g. (1.12) in [20], Chapter XII) implies that
$$P_{\lambda\delta_\varepsilon}(0 \notin \mathcal S) = \exp\left\{ -\int_0^\lambda N_\varepsilon(0 \in \mathcal R)\,du \right\}.$$
Using (24) and (23), one gets
$$P_{\lambda\delta_\varepsilon}(0 \notin \mathcal S) = \exp\left( -\frac{\lambda(4-d)}{2\varepsilon^2} \right).$$
On the other hand, the exponential formula also implies that
$$P_{\lambda\delta_\varepsilon}\bigl(\exists u \le \lambda,\ H(\zeta^u) > 1 \text{ and } W^u \text{ does not hit } 0\bigr) = 1 - \exp\bigl\{-\lambda N_\varepsilon^{>1}(0 \notin \mathcal R)\bigr\}.$$
Hence
$$P_{\lambda\delta_\varepsilon}(0 \notin \mathcal S \text{ and } |\mu_1| > 0) = \exp\left( -\frac{\lambda(4-d)}{2\varepsilon^2} \right)\Bigl( 1 - \exp\bigl\{-\lambda N_\varepsilon^{>1}(0 \notin \mathcal R)\bigr\} \Bigr). \qquad (33)$$
Exactly as in the one-dimensional case, Theorem 1 then implies immediately
that
$$N_\varepsilon^{>1}(0 \notin \mathcal R) \sim \frac{k_1(d)\,\varepsilon^{\alpha(d)}}{1 + \alpha/2} \quad \text{when } \varepsilon \to 0.$$
Combining this with (33) when $\lambda = 1$ immediately yields Theorem 2.
6.2 An optimization problem
We are now going to derive the following result, which we announced at the end
of the introduction:

Proposition 3 One has
$$\sup_{\lambda \ge 0} P_{\lambda\delta_\varepsilon}(0 \notin \mathcal S \text{ and } |\mu_1| > 0) \sim \frac{k_3(d)}{4-d}\,\varepsilon^{2+\alpha(d)}, \quad \text{when } \varepsilon \to 0. \qquad (34)$$
Moreover, $\sup_{\lambda \ge 0} P_{\lambda\delta_\varepsilon}(0 \notin \mathcal S \text{ and } |\mu_1| > 0)$ is obtained when
$$\lambda = \lambda^*(\varepsilon) = \frac{1}{N_\varepsilon^{>1}(0 \notin \mathcal R)} \log\left( 1 + \frac{2\varepsilon^2\, N_\varepsilon^{>1}(0 \notin \mathcal R)}{4-d} \right) \sim \frac{2\varepsilon^2}{4-d} \quad \text{when } \varepsilon \to 0. \qquad (35)$$
Proof. (35) is an immediate consequence of (33). The first statement follows
immediately.
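In slightly more detail (a short calculus sketch; the shorthands $a$ and $b$ below are introduced only here):

```latex
% Write a = (4-d)/(2\varepsilon^2) and b = N_\varepsilon^{>1}(0 \notin \mathcal R),
% so that the right-hand side of (33) reads e^{-a\lambda}(1 - e^{-b\lambda}).
% Setting the derivative in \lambda to zero:
\frac{d}{d\lambda}\Bigl[ e^{-a\lambda}\bigl(1 - e^{-b\lambda}\bigr) \Bigr]
  = e^{-a\lambda}\Bigl( b\,e^{-b\lambda} - a\bigl(1 - e^{-b\lambda}\bigr) \Bigr) = 0
  \iff e^{b\lambda} = 1 + \frac{b}{a} ,
% whence \lambda^* = b^{-1}\log(1 + b/a), which is (35).  As \varepsilon \to 0,
% b/a \to 0, so \log(1 + b/a) \sim b/a and \lambda^* \sim 1/a = 2\varepsilon^2/(4-d).
```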
7 Other related problems

7.1 Non-intersection between two ranges
It is very easy to use Theorem 1 to investigate the probability that the ranges
of two independent one-dimensional Brownian snakes do not intersect. Let us
give an example for Brownian snakes renormalized by their height: suppose
now that $\mathcal R$ and $\mathcal R'$ denote the ranges of two independent Brownian snakes,
renormalized by their height (the height of the excursions is equal to 1) and
respectively started from 0 and ε under the probability measure $\overline N_\varepsilon$. Then:
Proposition 4
$$\overline N_\varepsilon(\mathcal R \cap \mathcal R' = \emptyset) \sim \frac{k_1^2}{70}\,\varepsilon^8 \quad \text{when } \varepsilon \to 0+.$$
Proof. Fix the integer $p \ge 2$ for a while. Let us put
$$\mathcal R^* = \sup \mathcal R \quad \text{and} \quad \mathcal R'^{\#} = \inf \mathcal R'.$$
It is easy to notice that
\begin{align*}
\overline N_\varepsilon(\mathcal R \cap \mathcal R' = \emptyset)
&= \sum_{j=1}^{p} \overline N_\varepsilon\left( \mathcal R \cap \mathcal R' = \emptyset \text{ and } \mathcal R^* \in \left[\frac{(j-1)\varepsilon}{p}, \frac{j\varepsilon}{p}\right) \right)\\
&\le \sum_{j=1}^{p} \overline N_\varepsilon\left( \mathcal R^* \in \left[\frac{(j-1)\varepsilon}{p}, \frac{j\varepsilon}{p}\right) \text{ and } \mathcal R'^{\#} \ge \frac{(j-1)\varepsilon}{p} \right)\\
&\le \sum_{j=1}^{p} \left( f\!\left(\frac{j\varepsilon}{p}\right) - f\!\left(\frac{(j-1)\varepsilon}{p}\right) \right) f\!\left(\frac{(p-j+1)\varepsilon}{p}\right).
\end{align*}
Using the asymptotic expansion of $f$, we get
$$\limsup_{\varepsilon \to 0+} \varepsilon^{-8}\, \overline N_\varepsilon(\mathcal R \cap \mathcal R' = \emptyset) \le (k_1)^2 \sum_{j=1}^{p} \frac{j^4 - (j-1)^4}{p^8}\,(p-j+1)^4,$$
and hence (letting $p \to \infty$), one gets immediately (as the above is a Riemann
sum)
$$\limsup_{\varepsilon \to 0+} \varepsilon^{-8}\, \overline N_\varepsilon(\mathcal R \cap \mathcal R' = \emptyset) \le 4(k_1)^2 \int_0^1 x^3(1-x)^4\,dx = \frac{(k_1)^2}{70}.$$
Similarly, one gets
$$\liminf_{\varepsilon \to 0+} \varepsilon^{-8}\, \overline N_\varepsilon(\mathcal R \cap \mathcal R' = \emptyset) \ge (k_1)^2 \sum_{j=1}^{p} \frac{j^4 - (j-1)^4}{p^8}\,(p-j)^4,$$
and therefore,
$$\liminf_{\varepsilon \to 0+} \varepsilon^{-8}\, \overline N_\varepsilon(\mathcal R \cap \mathcal R' = \emptyset) \ge \frac{(k_1)^2}{70},$$
which completes the proof.
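As a quick numerical sanity check (not part of the proof), the two Riemann sums above can be evaluated for increasing $p$ and compared with $4\int_0^1 x^3(1-x)^4\,dx = 1/70$; the snippet below, with the constant $k_1$ normalized to 1, is only an illustration:

```python
# Check that the upper and lower Riemann sums from the proof of Proposition 4
# bracket 4 * Integral_0^1 x^3 (1-x)^4 dx = 1/70 and converge to it as p grows.
from fractions import Fraction


def upper_sum(p):
    # sum_{j=1}^p (j^4 - (j-1)^4) (p-j+1)^4 / p^8  (upper bound)
    return sum(Fraction((j**4 - (j - 1)**4) * (p - j + 1)**4, p**8)
               for j in range(1, p + 1))


def lower_sum(p):
    # sum_{j=1}^p (j^4 - (j-1)^4) (p-j)^4 / p^8  (lower bound)
    return sum(Fraction((j**4 - (j - 1)**4) * (p - j)**4, p**8)
               for j in range(1, p + 1))


limit = Fraction(1, 70)  # 4 * B(4,5) = 4 * 3! * 4! / 8! = 1/70
for p in (10, 100, 1000):
    print(p, float(lower_sum(p)), float(upper_sum(p)))
```

For every $p$ the lower sum stays below $1/70$ and the upper sum above it, because $(1-x)^4$ is decreasing on each subinterval; both gaps are of order $1/p$.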
In higher dimensions, this problem seems to be more complicated. Of course,
one gets lower and upper bounds for non-intersection probabilities between
ranges, such as
$$c_1\,\varepsilon^8 \le \overline N_\varepsilon^{(d)}(\mathcal R \cap \mathcal R' = \emptyset) \le c_2\,\varepsilon^{2\alpha(d)},$$
with obvious notation, but this is not very informative (especially when $d = 4$,
5, 6 or 7!).
7.2 Other remarks
One could also have derived results analogous to Theorem 1 for Brownian snakes
renormalized by the length (and not the height) of their genealogical excursion.
This can for instance be derived from Theorem 1-(i), using the fact that for all
$M > 1$,
$$n_{=1}(\sigma(\zeta) > M) \le a\,e^{-bM}$$
for some well-chosen $a$ and $b$, and the scaling property.
If $W$ now denotes a Brownian snake in $\mathbb R^d$ and $H$ a smooth $d'$-dimensional
manifold in $\mathbb R^d$ (with $d' = d-1$, $d-2$ or $d-3$), the above results can be
easily adapted to show that the probability that $W$, started at distance ε from
$H$ and conditioned to survive after a certain time, decays like $\varepsilon^{\alpha(d-d')}$ when
$\varepsilon \to 0+$.
Acknowledgements. This paper would probably not exist without an enlightening discussion with Jean-François Le Gall, and some very useful information
provided to us by Marc Yor. We also thank Antoine Chambert-Loir and Pierre
Colmez for their kind assistance on elliptic functions. Last but not least, we
would like to thank the participants of the 95-96 ‘Tuesday morning workshop’
of the University of Paris V for stimulating talks and discussions.
References
[1] M. Abramowitz, I.A. Stegun (Eds.): Handbook of mathematical functions,
Dover, New York, 1965.
[2] P. Biane, M. Yor: Quelques précisions sur le méandre Brownien, Bull. Sci.
Math. 111, 101-109 (1988)
[3] D.A. Dawson: Measure-valued Markov processes, Ecole d’été de St-Flour
1991, Lecture Notes in Math. 1541, Springer, Berlin, 1993.
[4] D.A. Dawson, I. Iscoe, E.A. Perkins: Super-Brownian motion: Path properties and hitting probabilities, Probab. Theor. Rel. Fields 83, 135-205
(1989)
[5] D.A. Dawson, E.A. Perkins: Historical superprocesses, Memoirs Amer.
Math. Soc. 454, 1991.
[6] A. Dembo, O. Zeitouni: Large deviations for random distribution of mass,
Proceedings of the IMA workshop on random discrete structures (Ed. D.J.
Aldous, R. Pemantle), IMA vol. 76, Springer, 45-53 (1994)
[7] J.-S. Dhersin: Super-mouvement brownien, serpent brownien et équations
aux dérivées partielles, Thèse de doctorat de l’université Paris 6, 1997.
25
[8] J.-S. Dhersin, J.-F. Le Gall: Wiener’s test for super-Brownian motion and
the Brownian snake, Probab. Theor. Rel. Fields, to appear.
[9] E.B. Dynkin: A probabilistic approach to one class of nonlinear differential
equations, Probab. Theor. Rel. Fields 89, 89-115 (1991)
[10] E.B. Dynkin: An introduction to branching measure-valued processes,
CRM Monograph Series Vol.6, Amer. Math. Soc., Providence, 1994.
[11] E.B. Dynkin, S.E. Kuznetsov: Markov snakes and superprocesses, Probab.
Theor. Rel. Fields 103, 433-473 (1995)
[12] J.-P. Imhof: Density factorizations for Brownian motion and the three-dimensional Bessel processes and applications, J. Appl. Prob. 21, 500-510
(1984)
[13] J.-F. Le Gall: A class of path-valued Markov processes and its applications
to superprocesses, Probab. Th. Rel. Fields 95, 25-46 (1993)
[14] J.-F. Le Gall: A path-valued Markov process and its connections with
partial differential equations, Proceedings 1st European Congress of Mathematics, Vol. II, pp. 185-212, Birkhäuser, Boston, 1994.
[15] J.-F. Le Gall: The Brownian snake and solutions of ∆u = u2 in a domain,
Probab. Th. Rel. Fields 104, 393-432 (1996)
[16] J.-F. Le Gall: A probabilistic Poisson representation for positive solutions
of ∆u = u2 in a domain, Comm. Pure Appl. Math. 50, 69-103 (1997)
[17] J.-F. Le Gall: Superprocesses, Brownian snakes and partial differential
equations, Lecture Notes from the 11th winter school on Stochastic processes, Sigmundsburg, March 1996, Prépublication 337 du Laboratoire de
Probabilités, Université Paris VI (1996).
[18] J.-F. Le Gall, E.A. Perkins: The Hausdorff measure of the support of two-dimensional super-Brownian motion, Ann. Probab. 23, 1719-1747 (1995)
[19] S.C. Port, C.J. Stone: Brownian motion and classical potential theory,
Academic Press, New York, 1978.
[20] D. Revuz, M. Yor: Continuous martingales and Brownian motion, Springer,
Berlin, 1991.
[21] L. Serlet: Some dimension results for super-Brownian motion, Probab.
Theor. Rel. Fields 101, 371-391 (1995)
[22] L. Serlet: On the Hausdorff measure of multiple points and collision points
of super-Brownian motion, Stochastics Stoch. Rep. 54, 169-198 (1995)
26
[23] L. Serlet: The occupation measure of super-Brownian motion conditioned
on non-extinction, J. Theor. Prob. 9, 561-578 (1996)
[24] L. Serlet: Large deviation principle for the Brownian snake, Stoch. Proc.
Appl., to appear.
[25] J. Verzani: Cone paths for the planar Brownian snake, Probab. Theor. Rel.
Fields, to appear
[26] M. Yor: Some aspects of Brownian motion, Part I: Some special functionals,
Lectures in Mathematics, ETH Zürich, Birkhäuser, 1992
[27] M. Yor: Generalized meanders as limits of weighted Bessel processes, and
an elementary proof of Spitzer’s asymptotic result on Brownian windings,
Stud. Sci. Math. Hung., to appear.