THE ANNALS OF APPLIED PROBABILITY, 9, 629–667 (1999)
Stochastic Calculus for Brownian
Motion on a Brownian Fracture
By Davar Khoshnevisan* & Thomas M. Lewis
University of Utah & Furman University
Abstract. The impetus behind this work is a pathwise development of
stochastic integrals with respect to iterated Brownian motion. We also provide a detailed analysis of the variations of iterated Brownian motion. These
variations are linked to Brownian motion in random scenery and iterated
Brownian motion itself.
Keywords and Phrases. Iterated Brownian Motion, Brownian Motion in Random Scenery,
Stochastic integration, Sample–path Variations, Excursion Theory.
1991 AMS Subject Classification. Primary. 60H05; Secondary. 60Kxx, 60Fxx
Short Title. Stochastic Calculus for IBM
* Research supported by grants from the National Science Foundation and the National Security Agency.
1. Introduction and Preliminaries
Heat flow on a fractal F has been the subject of recent and vigorous investigations. See, for example,
the survey article [2]. As in the more classical studies of heat flow on smooth manifolds (cf. [16]),
a probabilistic interpretation of such problems comes from the description and analysis of “the
canonical stochastic process” on F, which is usually called Brownian motion on F. One of the
many areas of application is heat flow along fractures. In this vein, see [14,17,20,21,22,26,39].
These articles start with an idealized fracture (usually a simple geometric construct such as a
comb) and proceed to the construction and analysis of Brownian motion on this fracture. Let us
begin by attacking the problem from a different point of view. Namely, rather than considering a
fixed idealized fracture, we begin with the following random idealization of a fracture: we assume
that R is a vertically homogeneous, two–dimensional rectangular medium with sides parallel to
the axes. Then our left–to–right random fracture R looks like the graph of a one–dimensional
Brownian motion. (To make this more physically sound, one needs some mild conditions on the local
growth of the fracture together with the invariance principle of Donsker; see [6].) Approximating
the Brownian graph by random walks and once again applying Donsker’s invariance principle ([6]),
it is reasonable to suppose that Brownian motion on a Brownian fracture is described by (Yt , Zt ),
where Y is a one–dimensional Brownian motion and Z is the iterated Brownian motion built from
Y . To construct Z, let X ± be two independent one–dimensional Brownian motions which are
independent of Y, as well (throughout this paper, we assume that all Brownian motions start at
the origin). Let X be the two–sided Brownian motion given by

X_t = \begin{cases} X^+(t), & \text{if } t \ge 0 \\ X^-(-t), & \text{if } t < 0. \end{cases}
Iterated Brownian motion Z can be defined as
Z_t = X(Y_t).
As is customary, given a function f (random or otherwise), we freely interchange between f (t) and
ft for typographical ease or for reasons of aesthetics.
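The composition Z_t = X(Y_t) is easy to visualize numerically. The following sketch (ours, purely illustrative; the function names and mesh choices are not from the paper) builds a discretized two-sided path X = (X^+, X^-) and reads it along a discretized path of Y, taking the value of X at the nearest spatial grid point:

```python
import math
import random

def brownian_path(n, dt, rng):
    """Standard Brownian path sampled at times k*dt, k = 0..n, started at 0."""
    path = [0.0]
    for _ in range(n):
        path.append(path[-1] + rng.gauss(0.0, math.sqrt(dt)))
    return path

def iterated_bm(n, dt, rng):
    """Z_t = X(Y_t): evaluate a two-sided path X = (X^+, X^-) along Y."""
    y = brownian_path(n, dt, rng)
    h = math.sqrt(dt)                        # spatial mesh for X
    m = int(max(abs(v) for v in y) / h) + 2
    xp = brownian_path(m, h, rng)            # X^+ on [0, m*h]
    xm = brownian_path(m, h, rng)            # X^- on [0, m*h]

    def X(s):                                # nearest-grid value of the two-sided path
        j = int(round(abs(s) / h))
        return xp[j] if s >= 0 else xm[j]

    return [X(v) for v in y]
```

The nearest-grid evaluation is the same coarse-graining that drives the random partitions of §2, where Y is watched only at its visits to a spatial lattice.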
The above model for Brownian motion on a Brownian fracture appears earlier (in a slightly
different form) in [13]. Our model is further supported by the results of [11]. There, it is shown
that iterated Brownian motion arises naturally as the (weak) limit of reflected Brownian motion
in an infinitesimal fattening of the graph of a Brownian motion.
Recently iterated Brownian motion and its variants have been the subject of various works;
see [1,4,5,8,9,10,11,13,15,23,24,25,29,30,38,40]. In addition to its relation to heat flow on fractures,
iterated Brownian motion has a loose connection with the parabolic operator \frac{1}{8}\Delta^2 - \partial/\partial t; see [19]
for details.
In this paper, we are concerned with developing a stochastic calculus for Z. It is not surprising
that the key step in our analysis is a construction of stochastic integral processes of the form
\int_0^t f(Z_s)\,dZ_s, where f is in a "nice" family of functions. Since Z is not a semimartingale, such a
construction is necessarily non–trivial. (A folk theorem of C. Dellacherie essentially states that for
\int H\,dM to exist as an "integral" for a large class of H's, M must necessarily be a semimartingale.)
Our construction of \int_0^t f(Z_s)\,dZ_s is reminiscent of the integrals of Stratonovich and Lebesgue. More
precisely, for each nonnegative integer n, we divide space into an equipartition of mesh size 2^{-n/2}.
According to the times at which the Brownian motion Y visits this partition, one obtains an induced random partition \{T_{k,n};\ 1 \le k \le 2^n t\} of the time interval [0,t]. One of the useful features of
this random partition is that it uniformly approximates the more commonly used dyadic partition
\{k 2^{-n};\ 1 \le k \le 2^n t\}. Having developed the partition, we show that

\int_0^t f(Z_s)\,dZ_s = \lim_{n\to\infty} \sum_{1 \le k \le 2^n t} f\Big( \frac{Z(T_{k+1,n}) + Z(T_{k,n})}{2} \Big) \cdot \big( Z(T_{k+1,n}) - Z(T_{k,n}) \big)
exists with probability one and can be explicitly identified in terms of other (better understood)
processes. This material is developed in §2. The use of the midpoint rule in defining the stochastic
integral is significant. The midpoint rule is a symmetric rule, and symmetry will play an important
role in our analysis. As we will show later in this section, the analogous partial sum process based
on the right–hand rule does not converge.
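The construction above can be sketched in code. In this toy version (our names; crossings are detected at the sampling resolution, one level per sample, so it only approximates the true T_{k,n}) the partition is read off a discretized path of Y, and the midpoint sum is formed along it. For f ≡ 1 the midpoint sum telescopes exactly, which gives a deterministic sanity check:

```python
def crossing_times(y, h):
    """Sample indices approximating T_{k,n}: successive times the discretized
    path y moves to a new point of the grid {j*h : j an integer}."""
    times, level = [0], 0
    for i in range(1, len(y)):
        if y[i] >= (level + 1) * h:
            level += 1
        elif y[i] <= (level - 1) * h:
            level -= 1
        else:
            continue
        times.append(i)
    return times

def midpoint_sum(f, z, times):
    """Midpoint-rule partial sum of f(Z) dZ along the given partition."""
    return sum(f(0.5 * (z[b] + z[a])) * (z[b] - z[a])
               for a, b in zip(times, times[1:]))
```

With f ≡ 1, every term is an increment of Z, so the sum collapses to Z at the last partition point minus Z at the first, independently of the partition.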
Based on Donsker’s invariance principle, we have already argued that iterated Brownian motion is a reasonable candidate for the canonical process on a Brownian fracture. This viewpoint is
further strengthened by our results in the remainder of this paper which are concerned with the
variations of iterated Brownian motion. To explain these results, define, for smooth functions f,

V_n^{(j)}(f,t) = \sum_{1 \le k \le 2^n t} f\Big( \frac{Z(T_{k+1,n}) + Z(T_{k,n})}{2} \Big) \cdot \big( Z(T_{k+1,n}) - Z(T_{k,n}) \big)^j \qquad (j = 1,2,3,4).

When f \equiv 1, we will write V_n^{(j)}(t) for V_n^{(j)}(1,t), which we call the j–th variation of Z. A more
traditional definition of the variation of iterated Brownian motion has been studied in [9]. In §3
and §4 we extend the results of [9] along the random partitions {Tk,n }. In fact, we prove that with
probability one, for a nice function f ,
\lim_{n\to\infty} 2^{-n/2}\, V_n^{(2)}(f,t) = \int_0^t f(Z_s)\,ds,

\lim_{n\to\infty} V_n^{(3)}(f,t) = 0,

and

\lim_{n\to\infty} V_n^{(4)}(f,t) = 3 \int_0^t f(Z_s)\,ds.
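As an illustration only (our notation, not the paper's), the variations are computed along the same random partitions; the exponent j is the only change from the midpoint-rule sum defining the stochastic integral:

```python
def variation(f, z, times, j):
    """V_n^(j)(f,t): sum of f(midpoint) * (increment)^j over a partition."""
    return sum(f(0.5 * (z[b] + z[a])) * (z[b] - z[a]) ** j
               for a, b in zip(times, times[1:]))
```

The displayed limits then correspond to the scaling heuristic of §3: increments of Z along the partition have mean square 2^{-n/2}, so the second variation grows like 2^{n/2}t while the fourth stabilizes near 3t.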
Further refinements appear in the second–order analysis of these strong limit theorems. In essence,
we show that appropriately normalized versions of V_n^{(2)}(t) - 2^{n/2} t and V_n^{(4)}(t) - 3t converge in
distribution to Kesten and Spitzer's Brownian motion in random scenery (see [27]), while an
appropriately normalized version of V_n^{(3)}(t) converges in distribution to iterated Brownian motion
itself. Indeed, it can be shown that, after suitable normalizations, all even variations converge
weakly to Brownian motion in random scenery while the odd variations converge weakly to iterated
Brownian motion.
Our analysis of the variation of iterated Brownian motion indicates the failure of the right–
hand rule in defining the stochastic integral. If f is sufficiently smooth and has enough bounded
derivatives, then, by Taylor's theorem, we have

\sum_{1 \le k \le 2^n t} f\big( Z(T_{k,n}) \big) \cdot \big( Z(T_{k+1,n}) - Z(T_{k,n}) \big) = V_n^{(1)}(f,t) + \frac{1}{2}\, V_n^{(2)}(f',t) + \frac{1}{4}\, V_n^{(3)}(f'',t) + \frac{1}{12}\, V_n^{(4)}(f''',t) + o(1),
where o(1) \to 0 almost surely and in L^2(P) as n \to \infty. It follows that

\lim_{n\to\infty} \Big[ \sum_{1 \le k \le 2^n t} f\big( Z(T_{k,n}) \big) \cdot \big( Z(T_{k+1,n}) - Z(T_{k,n}) \big) - \frac{1}{2}\, V_n^{(2)}(f',t) \Big] = \int_0^t f(Z_s)\,dZ_s + \frac{1}{4} \int_0^t f'''(Z_s)\,ds.
Consequently, the right–hand rule process will converge if and only if the associated quadratic
variation process converges. However, the quadratic variation process diverges whenever f' is a
positive function, to name an obvious, but by no means singular, example.
Our construction of \int_0^t f(Z_s)\,dZ_s is performed pathwise and relies heavily on excursion theory
for Brownian motion. It is interesting that a simplified version of our methods yields an excursion–
theoretic construction of ordinary Itô integral processes of the type \int_0^t f(Y_s)\,dY_s for Brownian
motion Y (see §5 for these results). While stochastic calculus treatments of excursion theory have
been carried out in the literature (cf. [37]), ours appears to be the first attempt in the reverse
direction.
A general pathwise approach to integration is carried out in [35]. This is based on a construction
of Lévy–type stochastic areas. It would be interesting to see the connection between our results
and those of [35].
We conclude this section by defining some notation which will be used throughout the paper.
For any array \{a_{j,n},\ j \in Z,\ n \ge 0\}, we define \Delta a_{j,n} = a_{j+1,n} - a_{j,n}. Whenever a process U has
local times, we denote them by L_t^x(U). This means that for any Borel function f and all t \ge 0,

\int_0^t f(U_s)\,ds = \int_{-\infty}^{\infty} f(a)\, L_t^a(U)\,da,
almost surely. We write I{A} for the indicator of a Borel set A. In other words, viewed as a random
variable,
I\{A\}(\omega) = \begin{cases} 1, & \text{if } \omega \in A \\ 0, & \text{if } \omega \notin A. \end{cases}
Let C 2 (R ) be the collection of all twice continuously differentiable functions, f : R → R . By Cb2 (R )
we mean the collection of all f ∈ C 2 (R ) such that kf kCb2 (R) < ∞, where
\| f \|_{C_b^2(R)} = \sup_x \big( |f(x)| + |f'(x)| + |f''(x)| \big). \qquad (1.1)
It is easy to see that, endowed with the norm \|\cdot\|_{C_b^2(R)}, the space C_b^2(R) is a separable Banach space. For
each integer j and each nonnegative integer n, let rj,n = j2−n/2 . Recalling that X is a two–sided
Brownian motion, we let

X_{j,n} = X(r_{j,n}), \qquad M_{j,n} = \frac{X(r_{j+1,n}) + X(r_{j,n})}{2}. \qquad (1.2)
Finally, for any p > -1, \mu_p will denote the absolute p–th moment of a standard normal distribution, that is,

\mu_p = (2\pi)^{-1/2} \int_{-\infty}^{\infty} |x|^p e^{-x^2/2}\,dx = \pi^{-1/2}\, 2^{p/2}\, \Gamma\Big( \frac{p+1}{2} \Big). \qquad (1.3)
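The closed form in (1.3) is easy to verify numerically; a small check of the Gamma-function identity (ours, standard library only):

```python
import math

def mu(p):
    """Absolute p-th moment of N(0,1): pi^(-1/2) * 2^(p/2) * Gamma((p+1)/2)."""
    return math.pi ** -0.5 * 2.0 ** (p / 2.0) * math.gamma((p + 1) / 2.0)
```

In particular mu(2) = 1, mu(4) = 3 and mu(6) = 15, the Gaussian moments that reappear in the estimates of §2 and §3.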
Acknowledgments. We thank Chris Burdzy for several interesting discussions on iterated Brownian motion. The presentation of this paper has been improved, largely due to the remarks of two
anonymous referees to whom we wish to extend our thanks.
2. The Stochastic Integral
In this section we will define a stochastic integral with respect to iterated Brownian motion. For
each t > 0, we will construct a sequence of partitions \{T_{k,n},\ 0 \le k \le [2^n t]\} of the interval [0,t] along
which the partial sum process,
V_n^{(1)}(f,t) = \sum_{k=0}^{[2^n t]-1} f\Big( \frac{Z(T_{k+1,n}) + Z(T_{k,n})}{2} \Big)\, \big( Z(T_{k+1,n}) - Z(T_{k,n}) \big),
converges almost surely and in L2 (P) as n → ∞, provided only that f is sufficiently smooth. The
limiting random variable is properly called the stochastic integral of f (Zs ) with respect to Zs over
the interval [0,t] and will be denoted by \int_0^t f(Z_s)\,dZ_s. Our point of departure from the classical
development of the stochastic integral is that the partitioning members Tk,n are random variables,
which we will define presently. For each integer n \ge 0 and each integer j, recall that r_{j,n} = j\, 2^{-n/2},
and let

D_n = \{ r_{j,n},\ j \in Z \}.

To define the elements of the n-th partition, let T_{0,n} = 0 and, for each integer k \ge 1, let

T_{k,n} = \inf\big\{ s > T_{k-1,n} : Y_s \in D_n \setminus \{ Y(T_{k-1,n}) \} \big\}.
For future reference we observe that the process \{ Y(T_{k,n}),\ k \ge 0 \} is a simple symmetric random
walk on D_n.
Here is the main result of this section.
Theorem 2.1. Let t > 0 and f \in C_b^2(R). Then

V_n^{(1)}(f,t) \to \int_0^{Y_t} f(X_s)\,dX_s + \frac{1}{2}\,\mathrm{sgn}(Y_t) \int_0^{Y_t} f'(X_s)\,ds,

almost surely and in L^2(P) as n \to \infty.
We have used the following natural definition for two–sided stochastic integrals:

\int_0^t f(X_s)\,dX_s = \begin{cases} \int_0^t f(X_s^+)\,dX_s^+, & \text{if } t \ge 0 \\ \int_0^{-t} f(X_s^-)\,dX_s^-, & \text{if } t < 0, \end{cases}

whenever the Itô integrals on the right exist.
Remark 2.1.1. For any f \in C_b^2(R), define

\langle f, X \rangle(t) = \int_0^t f(X_s)\,dX_s + \frac{1}{2}\,\mathrm{sgn}(t) \int_0^t f'(X_s)\,ds.

Then \{ \langle f, X \rangle(t),\ t \in R \} is the correct two–sided Stratonovich integral process of the integrand
f \circ X. In the notation of §1, Theorem 2.1 asserts that

\int_0^t f(Z_s)\,dZ_s = \langle f, X \rangle(Y_t).
In other words, stochastic integration with respect to Z is invariant under the natural composition
map: (X, Y ) 7→ Z.
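One instance of this identity can be checked by hand. For f(x) = x, the midpoint-rule partial sum telescopes exactly along any partition, so it equals Z(last point)^2/2 with no limit needed, in agreement with ⟨f,X⟩(Y_t) = X(Y_t)^2/2, the Stratonovich chain rule surviving the composition. A minimal sketch (our function name, illustrative only):

```python
def midpoint_sum_linear(z_vals):
    """Midpoint partial sum of f(Z) dZ with f(x) = x:
    sum ((z_{k+1} + z_k)/2) * (z_{k+1} - z_k) = (z_K^2 - z_0^2)/2 exactly."""
    return sum(0.5 * (b + a) * (b - a) for a, b in zip(z_vals, z_vals[1:]))
```

The exact telescoping for this f is what the symmetry of the midpoint rule buys; the right-hand rule has no such cancellation, which is the failure discussed in §1.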
Before proceeding to the proof of Theorem 2.1, a few preliminary remarks and observations
are in order. First we will demonstrate that the random partition {Tk,n , k ∈ Z} approximates the
dyadic partition {k/2n , k ∈ Z} as n tends to infinity.
Lemma 2.2. Let t > 0. Then

\sup_{0 \le s \le t} \big| T_{[2^n s],n} - s \big| \to 0
almost surely and in L2 (P) as n → ∞.
Proof. By the strong Markov property, \{\Delta T_{j,n},\ j \ge 0\} is an i.i.d. sequence of random variables.
Moreover, by Brownian scaling, \Delta T_{1,n} has the same distribution as 2^{-n}\, T_{1,0}. By Itô's formula,
\exp(\lambda Y_t - \lambda^2 t/2) is a mean–one martingale. Thus, by Doob's optional sampling theorem,

E \exp(-\lambda^2 T_{1,0}/2) = \big( \cosh(\lambda) \big)^{-1}.

It follows that E(T_{1,0}) = 1 and E(T_{1,0}^2) = 5/3; consequently, \mathrm{var}(T_{1,0}) = 2/3. Thus, by Brownian
scaling,

E(\Delta T_{1,n}) = 2^{-n} \quad\text{and}\quad \mathrm{var}(\Delta T_{1,n}) = \tfrac{2}{3}\, 2^{-2n}. \qquad (2.1)
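The moments of T_{1,0} can also be read off the Laplace transform numerically. Writing ψ(u) = E exp(−u T_{1,0}) = sech(√(2u)), one has E(T_{1,0}) = −ψ′(0) and E(T_{1,0}^2) = ψ″(0); the finite-difference sketch below (ours, illustrative, standard library only) recovers the stated values 1 and 5/3:

```python
import math

def laplace_T(u):
    """psi(u) = E[exp(-u * T_{1,0})] = sech(sqrt(2u)) for u >= 0."""
    return 1.0 / math.cosh(math.sqrt(2.0 * u))

h = 1e-4
# forward differences at 0 approximate -psi'(0) and psi''(0)
mean = (laplace_T(0.0) - laplace_T(h)) / h
second = (laplace_T(0.0) - 2.0 * laplace_T(h) + laplace_T(2.0 * h)) / h ** 2
```

The variance 2/3 then falls out as second − mean^2, matching (2.1) after the 2^{-n} scaling.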
Given 0 \le s \le t, we have

\sup_{0 \le s \le t} \big| T_{[2^n s],n} - s \big| \le \sup_{0 \le s \le t} \big| T_{[2^n s],n} - [2^n s]\, 2^{-n} \big| + \sup_{0 \le s \le t} \big| [2^n s]\, 2^{-n} - s \big|
\le \max_{1 \le k \le [2^n t]} \big| T_{k,n} - E(T_{k,n}) \big| + 2^{-n}.

Since

T_{k,n} - E(T_{k,n}) = \sum_{j=0}^{k-1} \big( \Delta T_{j,n} - E(\Delta T_{j,n}) \big),
we have, by Doob's maximal inequality and (2.1),

E\Big( \max_{1 \le k \le [2^n t]} \big| T_{k,n} - E(T_{k,n}) \big|^2 \Big) \le 4 \sum_{j=0}^{[2^n t]-1} \mathrm{var}(\Delta T_{j,n}) = O(2^{-n}).

In summary,

\Big\| \sup_{0 \le s \le t} \big| T_{[2^n s],n} - s \big| \Big\|_2 = O(2^{-n/2}), \qquad (2.2)

which demonstrates the L^2(P) convergence in question. The almost sure convergence follows from
applications of Markov's inequality and the Borel–Cantelli lemma.

For each n \ge 0, let

\tau_n = \tau(n,t) = T_{[2^n t],n}, \qquad j^* = j^*(n,t) = 2^{n/2}\, Y(\tau_n).

In keeping with the notation that we have already developed, we have r_{j^*,n} = Y(\tau_n).
Lemma 2.3. Let t > 0. Then, as n \to \infty,
(a) \| Y(\tau_n) - Y(t) \|_2 = O(2^{-n/8});
(b) \big\| \big( \mathrm{sgn}(Y(t)) - \mathrm{sgn}(Y(\tau_n)) \big) \big( |Y(t)| + |Y(\tau_n)| \big) \big\|_2 = O(2^{-n/64}).

Proof. For each integer n \ge 1, let \varepsilon_n = \| \tau_n - t \|_2^{1/2}. From (2.2) we have

\varepsilon_n = O(2^{-n/4}). \qquad (2.3)
Observe that

E\big( |Y(\tau_n) - Y(t)|^2 \big) = E\big( |Y(\tau_n) - Y(t)|^2\, I(|\tau_n - t| \le \varepsilon_n) \big) + E\big( |Y(\tau_n) - Y(t)|^2\, I(|\tau_n - t| > \varepsilon_n) \big) = A_n + B_n,

with obvious notation. By (2.3) and the elementary properties of Brownian motion, we have

A_n \le 2\, E\Big( \sup_{t \le s \le t + \varepsilon_n} |Y(s) - Y(t)|^2 \Big) = 2\, \varepsilon_n\, E\Big( \sup_{0 \le s \le 1} |Y(s)|^2 \Big) = O(2^{-n/4}).
Concerning B_n, observe that \{ 2^{n/2}\, Y(T_{k,n}),\ k \ge 0 \} is a simple symmetric random walk on Z.
As such,

E\Big[ \big( 2^{n/2}\, Y(\tau_n) \big)^4 \Big] = 3\, [2^n t]^2 - 2\, [2^n t].

It follows that \{ \| Y(\tau_n) \|_4,\ n \ge 0 \} is a bounded sequence. By the Hölder, Minkowski and Markov
inequalities,

B_n \le \| Y(\tau_n) - Y(t) \|_4^2\, \sqrt{ P(|\tau_n - t| > \varepsilon_n) } \le \big( \| Y(\tau_n) \|_4 + \| Y(t) \|_4 \big)^2\, \frac{ \| \tau_n - t \|_2 }{ \varepsilon_n } = O(\varepsilon_n) = O(2^{-n/4}),

which proves (a).
For each integer n \ge 1, let \delta_n = \| Y(t) - Y(\tau_n) \|_2^{1/2} and observe that \delta_n = O(2^{-n/16}). By
elementary considerations we obtain

| \mathrm{sgn}(Y(t)) - \mathrm{sgn}(Y(\tau_n)) | \le 2\, I(|Y(t)| \le \delta_n) + 2\, I(|Y(\tau_n)| \le \delta_n) + 2\, I(|Y(t) - Y(\tau_n)| > 2\delta_n).

Consequently,

\| \mathrm{sgn}(Y(t)) - \mathrm{sgn}(Y(\tau_n)) \|_4 \le 2\, \| I(|Y(t)| \le \delta_n) \|_4 + 2\, \| I(|Y(\tau_n)| \le \delta_n) \|_4 + 2\, \| I(|Y(t) - Y(\tau_n)| > 2\delta_n) \|_4.
We will obtain bounds for each of the terms on the right.
Since t > 0, |Y(t)| has a bounded density function. In particular,

P(|Y(t)| \le \delta_n) \le \sqrt{ \frac{2}{\pi t} }\, \delta_n.

This shows that \| I(|Y(t)| \le \delta_n) \|_4 \le \sqrt[4]{2 \delta_n} = O(2^{-n/64}). Once again, let us observe that
\{ 2^{n/2}\, Y(T_{k,n}),\ k \ge 0 \} is a simple symmetric random walk on Z. Consequently, E(Y(\tau_n)) = 0 and
\mathrm{var}(Y(\tau_n)) = [2^n t]\, 2^{-n}. From the Berry–Esseen theorem we obtain the estimate

P(|Y(\tau_n)| \le \delta_n) \le P\Big( |Y(1)| \le \delta_n \Big/ \sqrt{ [2^n t]\, 2^{-n} } \Big) + \frac{C}{2^{n/2}},

where C depends only on t. Arguing as above, we have \| I(|Y(\tau_n)| \le \delta_n) \|_4 = O(2^{-n/64}).
By Markov's inequality,

P(|Y(t) - Y(\tau_n)| > 2\delta_n) \le \frac{ \| Y(t) - Y(\tau_n) \|_2^2 }{ 4 \delta_n^2 } = \frac{1}{4}\, \delta_n^2,

which shows that \| I(|Y(t) - Y(\tau_n)| > 2\delta_n) \|_4 = O(2^{-n/64}). In summary, we have

\| \mathrm{sgn}(Y(t)) - \mathrm{sgn}(Y(\tau_n)) \|_4 = O(2^{-n/64}). \qquad (2.4)
Finally, by the Hölder and Minkowski inequalities, we have

\big\| \big( \mathrm{sgn}(Y(t)) - \mathrm{sgn}(Y(\tau_n)) \big) \big( |Y(t)| + |Y(\tau_n)| \big) \big\|_2 \le \| \mathrm{sgn}(Y(t)) - \mathrm{sgn}(Y(\tau_n)) \|_4\, \big( \| Y(t) \|_4 + \| Y(\tau_n) \|_4 \big).

As we have already observed, \{ \| Y(\tau_n) \|_4,\ n \ge 0 \} is a bounded sequence. Thus, item (b) of this
lemma follows from (2.4).
We will adopt the following notation and definitions. For each integer n \ge 0, j \in Z and real
number t > 0, let

U_{j,n}(t) = \sum_{k=0}^{[2^n t]-1} I\big\{ Y(T_{k,n}) = r_{j,n},\ Y(T_{k+1,n}) = r_{j+1,n} \big\}, \qquad (2.5)

D_{j,n}(t) = \sum_{k=0}^{[2^n t]-1} I\big\{ Y(T_{k,n}) = r_{j+1,n},\ Y(T_{k+1,n}) = r_{j,n} \big\}. \qquad (2.6)

Thus, U_{j,n}(t) and D_{j,n}(t) denote the number of upcrossings and downcrossings of the interval
[r_{j,n}, r_{j+1,n}] within the first [2^n t] steps of the random walk \{ Y(T_{k,n}),\ k \ge 0 \}, respectively.
As is customary, we will say that ϕ : R2 → R is symmetric provided that
ϕ(x, y) = ϕ(y, x)
for all x, y ∈ R. We will say that ϕ is skew symmetric provided that
ϕ(x, y) = −ϕ(y, x)
for all x, y ∈ R. Recalling (1.2), we state and prove a useful real–variable lemma.
Lemma 2.4. If \varphi is symmetric, then

\sum_{k=0}^{[2^n t]-1} \varphi\big( Z(T_{k,n}), Z(T_{k+1,n}) \big) = \sum_{j \in Z} \varphi(X_{j,n}, X_{j+1,n})\, \big( U_{j,n}(t) + D_{j,n}(t) \big).

If \varphi is skew–symmetric, then

\sum_{k=0}^{[2^n t]-1} \varphi\big( Z(T_{k,n}), Z(T_{k+1,n}) \big) = \sum_{j \in Z} \varphi(X_{j,n}, X_{j+1,n})\, \big( U_{j,n}(t) - D_{j,n}(t) \big).
Proof. Since each step of the random walk \{ Y(T_{k,n}),\ k \ge 0 \} is either an upcrossing or a downcrossing of some interval [r_{j,n}, r_{j+1,n}], j \in Z, it follows that

1 = \sum_{j \in Z} \Big( I\{ Y(T_{k,n}) = r_{j,n},\ Y(T_{k+1,n}) = r_{j+1,n} \} + I\{ Y(T_{k,n}) = r_{j+1,n},\ Y(T_{k+1,n}) = r_{j,n} \} \Big).

Consequently,

\sum_{k=0}^{[2^n t]-1} \varphi\big( Z(T_{k,n}), Z(T_{k+1,n}) \big) = \sum_{j \in Z} \sum_{k=0}^{[2^n t]-1} \varphi\big( Z(T_{k,n}), Z(T_{k+1,n}) \big) \Big( I\{ Y(T_{k,n}) = r_{j,n},\ Y(T_{k+1,n}) = r_{j+1,n} \} + I\{ Y(T_{k,n}) = r_{j+1,n},\ Y(T_{k+1,n}) = r_{j,n} \} \Big).

Observe that from (2.5) and (2.6) we have

\sum_{k=0}^{[2^n t]-1} \varphi\big( Z(T_{k,n}), Z(T_{k+1,n}) \big)\, I\{ Y(T_{k,n}) = r_{j,n},\ Y(T_{k+1,n}) = r_{j+1,n} \} = \varphi(X_{j,n}, X_{j+1,n})\, U_{j,n}(t),

\sum_{k=0}^{[2^n t]-1} \varphi\big( Z(T_{k,n}), Z(T_{k+1,n}) \big)\, I\{ Y(T_{k,n}) = r_{j+1,n},\ Y(T_{k+1,n}) = r_{j,n} \} = \varphi(X_{j+1,n}, X_{j,n})\, D_{j,n}(t).

The remainder of the argument follows from the definitions of symmetric and skew symmetric.

Our next result will be used in conjunction with the decomposition developed in Lemma 2.4;
its proof is easily obtained by observing that the upcrossings and downcrossings of the interval
[r_{j,n}, r_{j+1,n}] alternate.
Lemma 2.5. Let t > 0. For each j \in Z,

U_{j,n}(t) - D_{j,n}(t) = \begin{cases} I(0 \le j < j^*) & \text{if } j^* > 0 \\ 0 & \text{if } j^* = 0 \\ -I(j^* \le j < 0) & \text{if } j^* < 0. \end{cases}
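Lemma 2.5 is elementary, and checking it mechanically is instructive. The sketch below (our names) counts up- and downcrossings of the unit intervals [j, j+1] along an integer-valued nearest-neighbour walk, a unit-mesh stand-in for the walk {Y(T_{k,n})} on D_n, and verifies the telescoping identity:

```python
def crossing_counts(walk):
    """Up/down-crossing counts U_j, D_j of the intervals [j, j+1] along a
    nearest-neighbour integer walk (every step is +1 or -1)."""
    U, D = {}, {}
    for a, b in zip(walk, walk[1:]):
        if b == a + 1:
            U[a] = U.get(a, 0) + 1       # upward step crosses [a, a+1]
        else:
            D[b] = D.get(b, 0) + 1       # downward step crosses [b, b+1]
    return U, D
```

Because crossings of a fixed interval alternate, U_j − D_j depends only on whether the interval lies between the start and the endpoint of the walk, exactly as the lemma states.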
We will need a set of auxiliary processes. For s \ge 0, let

\widetilde{X}_s^{\pm} = X^{\pm}(r_{j,n}) \quad \text{when } r_{j,n} \le s < r_{j+1,n}.

For s \in R, let

\widetilde{X}_s = \begin{cases} \widetilde{X}_s^+ & \text{if } s \ge 0 \\ \widetilde{X}_{-s}^- & \text{if } s < 0. \end{cases}
We will adopt the following conventions: given t \in R, let

\int_0^t f(\widetilde{X}_s)\,dX_s = \begin{cases} \int_0^t f(\widetilde{X}_s^+)\,dX_s^+ & \text{if } t \ge 0 \\ \int_0^{-t} f(\widetilde{X}_s^-)\,dX_s^- & \text{if } t < 0, \end{cases}
whenever the integrals on the right are defined. Due to the definition of \{\widetilde{X}_s,\ s \in R\}, we have

\int_0^{r_{k,n}} f(\widetilde{X}_s)\,dX_s = \begin{cases} \sum_{j=0}^{k-1} f(X_{j,n}^+)\, \Delta X_{j,n}^+ & \text{if } k > 0 \\ 0 & \text{if } k = 0 \\ \sum_{j=0}^{|k|-1} f(X_{j,n}^-)\, \Delta X_{j,n}^- & \text{if } k < 0. \end{cases} \qquad (2.7)
Similarly, by consideration of the cases, we obtain

\mathrm{sgn}(r_{k,n}) \int_0^{r_{k,n}} f(\widetilde{X}_s)\,ds = \begin{cases} \sum_{j=0}^{k-1} f(X_{j,n}^+)\, \Delta r_{j,n} & \text{if } k > 0 \\ 0 & \text{if } k = 0 \\ \sum_{j=0}^{|k|-1} f(X_{j,n}^-)\, \Delta r_{j,n} & \text{if } k < 0. \end{cases} \qquad (2.8)
It will be convenient to rewrite the results of (2.8) in a modified form. For k > 0, it will be preferable
to write

\mathrm{sgn}(r_{k,n}) \int_0^{r_{k,n}} f(\widetilde{X}_s)\,ds = \sum_{j=0}^{k-1} f(X_{j,n}^+)\, (\Delta X_{j,n}^+)^2 + \sum_{j=0}^{k-1} f(X_{j,n}^+)\, \big( \Delta r_{j,n} - (\Delta X_{j,n}^+)^2 \big). \qquad (2.9)

The obvious modifications should be made for the case k < 0.
Proof of Theorem 2.1. Recall (1.1)–(1.3). For each integer n \ge 0, let

\widetilde{V}_n^{(1)}(f,t) = \int_0^{r_{j^*,n}} f(\widetilde{X}_s)\,dX_s + \frac{1}{2}\,\mathrm{sgn}(r_{j^*,n}) \int_0^{r_{j^*,n}} f'(\widetilde{X}_s)\,ds,

\widehat{V}_n^{(1)}(f,t) = \int_0^{Y(\tau_n)} f(X_s)\,dX_s + \frac{1}{2}\,\mathrm{sgn}(Y(\tau_n)) \int_0^{Y(\tau_n)} f'(X_s)\,ds,

V^{(1)}(f,t) = \int_0^{Y_t} f(X_s)\,dX_s + \frac{1}{2}\,\mathrm{sgn}(Y_t) \int_0^{Y_t} f'(X_s)\,ds.

In this notation, we need to show that V_n^{(1)}(f,t) \to V^{(1)}(f,t) almost surely and in L^2(P) as n \to \infty.
To this end, we have

\| V_n^{(1)}(f,t) - V^{(1)}(f,t) \|_2 \le \| V_n^{(1)}(f,t) - \widetilde{V}_n^{(1)}(f,t) \|_2 + \| \widetilde{V}_n^{(1)}(f,t) - \widehat{V}_n^{(1)}(f,t) \|_2 + \| \widehat{V}_n^{(1)}(f,t) - V^{(1)}(f,t) \|_2.
We will estimate each of the terms on the right in order. We will begin by expressing V_n^{(1)}(f,t)
in an alternate form. We will place a \pm superscript on M_{j,n} whenever the underlying Brownian
motion is so signed. Since the function

\varphi(x,y) = f\Big( \frac{y+x}{2} \Big)\,(y-x)

is skew symmetric, by Lemma 2.4 we have

V_n^{(1)}(f,t) = \sum_{j \in Z} f(M_{j,n})\, \Delta X_{j,n}\, \big( U_{j,n}(t) - D_{j,n}(t) \big).
In light of Lemma 2.5, there will be three cases to consider, according to the sign of j^*. If j^* = 0,
then U_{j,n}(t) - D_{j,n}(t) = 0 and, consequently, V_n^{(1)}(f,t) = 0. If j^* > 0, then U_{j,n}(t) - D_{j,n}(t) = 1
for 0 \le j \le j^* - 1 and 0 otherwise; consequently,

V_n^{(1)}(f,t) = \sum_{j=0}^{j^*-1} f(M_{j,n}^+)\, \Delta X_{j,n}^+.

If, however, j^* < 0, then U_{j,n}(t) - D_{j,n}(t) = -1 for j^* \le j \le -1 and 0 otherwise; consequently,

V_n^{(1)}(f,t) = -\sum_{j=j^*}^{-1} f\Big( \frac{X_{j+1,n} + X_{j,n}}{2} \Big)\,(X_{j+1,n} - X_{j,n}) = \sum_{j=j^*}^{-1} f\Big( \frac{X_{-j-1,n}^- + X_{-j,n}^-}{2} \Big)\,(X_{-j,n}^- - X_{-j-1,n}^-) = \sum_{j=0}^{|j^*|-1} f(M_{j,n}^-)\, \Delta X_{j,n}^-.

In summary,

V_n^{(1)}(f,t) = \begin{cases} \sum_{j=0}^{j^*-1} f(M_{j,n}^+)\, \Delta X_{j,n}^+ & \text{if } j^* > 0 \\ 0 & \text{if } j^* = 0 \\ \sum_{j=0}^{|j^*|-1} f(M_{j,n}^-)\, \Delta X_{j,n}^- & \text{if } j^* < 0. \end{cases} \qquad (2.10)
By combining (2.7), (2.9) and (2.10), we have

V_n^{(1)}(f,t) - \widetilde{V}_n^{(1)}(f,t) = A_n + B_n,

where

A_n = \begin{cases} \sum_{j=0}^{j^*-1} \big( f(M_{j,n}^+) - f(X_{j,n}^+) - \frac{1}{2}\, f'(X_{j,n}^+)\, \Delta X_{j,n}^+ \big)\, \Delta X_{j,n}^+ & \text{if } j^* > 0 \\ 0 & \text{if } j^* = 0 \\ \sum_{j=0}^{|j^*|-1} \big( f(M_{j,n}^-) - f(X_{j,n}^-) - \frac{1}{2}\, f'(X_{j,n}^-)\, \Delta X_{j,n}^- \big)\, \Delta X_{j,n}^- & \text{if } j^* < 0, \end{cases}

and

B_n = \begin{cases} \frac{1}{2} \sum_{j=0}^{j^*-1} f'(X_{j,n}^+)\, \big( (\Delta X_{j,n}^+)^2 - \Delta r_{j,n} \big) & \text{if } j^* > 0 \\ 0 & \text{if } j^* = 0 \\ \frac{1}{2} \sum_{j=0}^{|j^*|-1} f'(X_{j,n}^-)\, \big( (\Delta X_{j,n}^-)^2 - \Delta r_{j,n} \big) & \text{if } j^* < 0. \end{cases}
Note that by Taylor's theorem,

\Big| f(M_{j,n}^{\pm}) - f(X_{j,n}^{\pm}) - \frac{1}{2}\, f'(X_{j,n}^{\pm})\, \Delta X_{j,n}^{\pm} \Big| \le \frac{1}{8}\, \|f\|_{C_b^2(R)}\, |\Delta X_{j,n}^{\pm}|^2.
Hence,

|A_n| \le \begin{cases} \frac{1}{8}\, \|f\|_{C_b^2(R)} \sum_{j=0}^{j^*-1} |\Delta X_{j,n}^+|^3 & \text{if } j^* > 0 \\ 0 & \text{if } j^* = 0 \\ \frac{1}{8}\, \|f\|_{C_b^2(R)} \sum_{j=0}^{|j^*|-1} |\Delta X_{j,n}^-|^3 & \text{if } j^* < 0. \end{cases}
However, for any integer m, we have, by the triangle inequality and Brownian scaling,

\Big\| \sum_{j=0}^{|m|-1} |\Delta X_{j,n}^{\pm}|^3 \Big\|_2 \le |m|\, \big\| |\Delta X_{0,n}^{\pm}|^3 \big\|_2 = |m|\, \mu_6^{1/2}\, 2^{-3n/4}.

Since the random variable j^* is independent of X, by conditioning on the value of j^* and applying
the above inequality we obtain

\| A_n \|_2 \le \frac{1}{8}\, \|f\|_{C_b^2(R)}\, \mu_6^{1/2}\, 2^{-3n/4}\, E(|j^*|).

Since \{ 2^{n/2}\, Y(T_{k,n}),\ k \ge 0 \} is a simple symmetric random walk on Z, it follows that for each t > 0,

E(|j^*|) \le \| j^* \|_2 = \sqrt{ [2^n t] } = O(2^{n/2}). \qquad (2.11)

Consequently,

\| A_n \|_2 = O(2^{-n/4}). \qquad (2.12)
Let us turn our attention to the analysis of B_n. For each j \in Z, let

\varepsilon_{j,n}^{\pm} = (\Delta X_{j,n}^{\pm})^2 - \Delta r_{j,n}.

Observe that E(\varepsilon_{j,n}^{\pm}) = 0 and \mathrm{var}(\varepsilon_{j,n}^{\pm}) = 2^{-n}\, \mathrm{var}(X(1)^2). Let m \in Z. Since the random variables
\{ f'(X_{j,n}^{\pm})\, \varepsilon_{j,n}^{\pm},\ 0 \le j \le |m|-1 \} are pairwise uncorrelated and since \varepsilon_{j,n}^{\pm} is independent of f'(X_{j,n}^{\pm}),
it follows that

\mathrm{var}\Big( \frac{1}{2} \sum_{j=0}^{|m|-1} f'(X_{j,n}^{\pm})\, \varepsilon_{j,n}^{\pm} \Big) = \frac{1}{4} \sum_{j=0}^{|m|-1} E\big( f'(X_{j,n}^{\pm})^2 \big)\, \mathrm{var}(\varepsilon_{j,n}^{\pm}) \le C_1\, |m|\, 2^{-n},

where C_1 = \|f\|_{C_b^2(R)}^2\, \mathrm{var}(X(1)^2)/4. Arguing as above, since j^* is independent of X, it follows that

\| B_n \|_2^2 \le C_1\, 2^{-n}\, E(|j^*|) = O(2^{-n/2}).

We have used (2.11) to arrive at this last estimate. This estimate, in conjunction with (2.12), yields

\| V_n^{(1)}(f,t) - \widetilde{V}_n^{(1)}(f,t) \|_2 = O(2^{-n/4}). \qquad (2.13)
Recalling that r_{j^*,n} = Y(\tau_n), we have

| \widetilde{V}_n^{(1)}(f,t) - \widehat{V}_n^{(1)}(f,t) | \le \Big| \int_0^{r_{j^*,n}} \big( f(X_s) - f(\widetilde{X}_s) \big)\,dX_s \Big| + \frac{1}{2} \Big| \int_0^{r_{j^*,n}} \big( f'(X_s) - f'(\widetilde{X}_s) \big)\,ds \Big|.
For j > 0 we have

E\Big[ \Big( \int_0^{r_{j,n}} \big( f(X_s) - f(\widetilde{X}_s) \big)\,dX_s \Big)^2 \Big] = \int_0^{r_{j,n}} E\big( |f(X_s^+) - f(\widetilde{X}_s^+)|^2 \big)\,ds
\le \|f\|_{C_b^2(R)}^2 \int_0^{r_{j,n}} E\big( |X_s^+ - \widetilde{X}_s^+|^2 \big)\,ds
= \|f\|_{C_b^2(R)}^2 \sum_{k=0}^{j-1} \int_{r_{k,n}}^{r_{k+1,n}} E\big( |X^+(s) - X^+(r_{k,n})|^2 \big)\,ds
= \|f\|_{C_b^2(R)}^2\, j \int_0^{r_{1,n}} s\,ds
= \frac{1}{2}\, \|f\|_{C_b^2(R)}^2\, j\, r_{1,n}^2.

A similar argument handles the case j < 0, and in general

E\Big[ \Big( \int_0^{r_{j,n}} \big( f(X_s) - f(\widetilde{X}_s) \big)\,dX_s \Big)^2 \Big] \le \frac{1}{2}\, \|f\|_{C_b^2(R)}^2\, |j|\, 2^{-n}. \qquad (2.14)
Since j^* is independent of X, by conditioning on the value of j^* and applying (2.14), we obtain

E\Big[ \Big( \int_0^{r_{j^*,n}} \big( f(X_s) - f(\widetilde{X}_s) \big)\,dX_s \Big)^2 \Big] \le \frac{1}{2}\, \|f\|_{C_b^2(R)}^2\, 2^{-n}\, E(|j^*|) = O(2^{-n/2}). \qquad (2.15)
We have used (2.11) to obtain this last estimate. Similarly, for any integer j > 0, we have

\Big\| \int_0^{r_{j,n}} \big( f(X_s) - f(\widetilde{X}_s) \big)\,ds \Big\|_2 \le \int_0^{r_{j,n}} \| f(X_s^+) - f(\widetilde{X}_s^+) \|_2\,ds
\le \|f\|_{C_b^2(R)} \int_0^{r_{j,n}} \| X_s^+ - \widetilde{X}_s^+ \|_2\,ds
= \|f\|_{C_b^2(R)} \sum_{k=0}^{j-1} \int_{r_{k,n}}^{r_{k+1,n}} \| X^+(s) - X^+(r_{k,n}) \|_2\,ds
= \|f\|_{C_b^2(R)}\, j \int_0^{r_{1,n}} \sqrt{s}\,ds
= \frac{2}{3}\, \|f\|_{C_b^2(R)}\, j\, r_{1,n}^{3/2}.

A similar proof handles the case j < 0, and in general we have

E\Big[ \Big( \int_0^{r_{j,n}} \big( f(X_s) - f(\widetilde{X}_s) \big)\,ds \Big)^2 \Big] \le \frac{4}{9}\, \|f\|_{C_b^2(R)}^2\, j^2\, 2^{-3n/2}. \qquad (2.16)
Since j^* is independent of X, by conditioning on the value of j^* and applying (2.16), we have

E\Big[ \Big( \int_0^{r_{j^*,n}} \big( f(X_s) - f(\widetilde{X}_s) \big)\,ds \Big)^2 \Big] \le \frac{4}{9}\, \|f\|_{C_b^2(R)}^2\, 2^{-3n/2}\, E\big( (j^*)^2 \big) = O(2^{-n/2}). \qquad (2.17)
We have used (2.11) to obtain this last estimate. From (2.15) and (2.17), we have

\| \widetilde{V}_n^{(1)}(f,t) - \widehat{V}_n^{(1)}(f,t) \|_2 = O(2^{-n/4}). \qquad (2.18)
Recalling that r_{j^*,n} = Y(\tau_n), we have

| \widehat{V}_n^{(1)}(f,t) - V^{(1)}(f,t) | \le \Big| \int_{Y(\tau_n)}^{Y(t)} f(X_s)\,dX_s \Big| + \Big| \mathrm{sgn}(Y(t)) \int_0^{Y(t)} f'(X_s)\,ds - \mathrm{sgn}(Y(\tau_n)) \int_0^{Y(\tau_n)} f'(X_s)\,ds \Big|.
Let a, b \in R. Then by the Itô isometry,

E\Big[ \Big( \int_a^b f(X_s)\,dX_s \Big)^2 \Big] = \int_{a \wedge b}^{a \vee b} E\big( f^2(X_s) \big)\,ds \le \|f\|_{C_b^2(R)}^2\, |b - a|.
Since X and Y are independent, by item (a) of Lemma 2.3 we obtain

E\Big[ \Big( \int_{Y(\tau_n)}^{Y(t)} f(X_s)\,dX_s \Big)^2 \Big] \le \|f\|_{C_b^2(R)}^2\, E\big( |Y(t) - Y(\tau_n)| \big) = O(2^{-n/8}). \qquad (2.19)
By consideration of the cases, the quantity

\Big| \mathrm{sgn}(Y(t)) \int_0^{Y(t)} f'(X_s)\,ds - \mathrm{sgn}(Y(\tau_n)) \int_0^{Y(\tau_n)} f'(X_s)\,ds \Big|

is bounded by

\Big| \int_{Y(\tau_n)}^{Y(t)} f'(X_s)\,ds \Big| + \frac{1}{2}\, | \mathrm{sgn}(Y(t)) - \mathrm{sgn}(Y(\tau_n)) | \Big( \Big| \int_0^{Y(t)} f'(X_s)\,ds \Big| + \Big| \int_0^{Y(\tau_n)} f'(X_s)\,ds \Big| \Big).
However, by an elementary bound on the integral and item (a) of Lemma 2.3,

\Big\| \int_{Y(\tau_n)}^{Y_t} f'(X_s)\,ds \Big\|_2 \le \|f\|_{C_b^2(R)}\, \| Y(t) - Y(\tau_n) \|_2 = O(2^{-n/8}). \qquad (2.20)
Finally, note that

| \mathrm{sgn}(Y(t)) - \mathrm{sgn}(Y(\tau_n)) | \Big( \Big| \int_0^{Y(t)} f'(X_s)\,ds \Big| + \Big| \int_0^{Y(\tau_n)} f'(X_s)\,ds \Big| \Big) \le \|f\|_{C_b^2(R)}\, | \mathrm{sgn}(Y(t)) - \mathrm{sgn}(Y(\tau_n)) |\, \big( |Y(t)| + |Y(\tau_n)| \big).
By Lemma 2.3(b),

\Big\| \big( \mathrm{sgn}(Y(t)) - \mathrm{sgn}(Y(\tau_n)) \big) \Big( \Big| \int_0^{Y(t)} f'(X_s)\,ds \Big| + \Big| \int_0^{Y(\tau_n)} f'(X_s)\,ds \Big| \Big) \Big\|_2 = O(2^{-n/64}). \qquad (2.21)
From (2.19), (2.20) and (2.21) we obtain

\| \widehat{V}_n^{(1)}(f,t) - V^{(1)}(f,t) \|_2 = O(2^{-n/64}). \qquad (2.22)

Combining (2.13), (2.18) and (2.22), it follows that

\| V_n^{(1)}(f,t) - V^{(1)}(f,t) \|_2 = O(2^{-n/64}),

which yields the L^2(P) convergence in question. The almost sure convergence follows from applications of Markov's inequality and the Borel–Cantelli lemma.
3. The Quadratic Variation of Iterated Brownian Motion
Given an integer n \ge 0 and a real number t > 0, let

V_n^{(2)}(t) = \sum_{k=0}^{[2^n t]-1} \big( Z(T_{k+1,n}) - Z(T_{k,n}) \big)^2,

V_n^{(2)}(f,t) = \sum_{k=0}^{[2^n t]-1} f\Big( \frac{Z(T_{k+1,n}) + Z(T_{k,n})}{2} \Big)\, \big( Z(T_{k+1,n}) - Z(T_{k,n}) \big)^2.
In this section, we will examine both strong and weak limit theorems associated with these
quadratic variation processes. Our first result is the strong law of large numbers for V_n^{(2)}(f,t).

Theorem 3.1. Let t > 0 and f \in C_b^2(R). Then

2^{-n/2}\, V_n^{(2)}(f,t) \to \int_0^t f(Z_s)\,ds,
almost surely and in L2 (P) as n → ∞.
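The normalization 2^{-n/2} reflects a simple scaling heuristic: the partition has [2^n t] steps, each Z-increment is an X-increment across a spatial gap of 2^{-n/2} and hence has mean square 2^{-n/2}, so replacing each squared increment by its mean gives V_n^{(2)}(t) ≈ [2^n t] · 2^{-n/2} ≈ 2^{n/2} t. A toy arithmetic check of this bookkeeping (ours, not a proof):

```python
def heuristic_quad_variation(n, t):
    """Back-of-envelope V_n^(2)(t): [2^n t] steps, each contributing the mean
    square E(increment^2) = 2^(-n/2) of an X-increment over a 2^(-n/2) gap."""
    steps = int(2 ** n * t)
    return steps * 2.0 ** (-n / 2.0)
```

Multiplying by 2^{-n/2} recovers t exactly in this idealized count, which is the content of the corollary below; the theorem's work is in controlling the fluctuations around this mean.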
As a corollary, we have 2^{-n/2}\, V_n^{(2)}(t) \to t almost surely and in L^2(P) as n \to \infty. Our next
result examines the deviations of the centered process 2^{-n/2}\, V_n^{(2)}(t) - t and was inspired by
the connection between the quadratic variation of iterated Brownian motion and the stochastic
process called Brownian motion in random scenery, first described and studied in [27]. Since the
introduction of this model, various aspects of Brownian motion in random scenery have been
studied in [7,31,32,33,34,36].
We will use the following notation in the sequel. Let D_R[0,1] denote the space of real–valued
functions on [0,1] which are right continuous and have left–hand limits. Given random elements
\{T_n\} and T in D_R[0,1], we will write T_n \Rightarrow T to denote the convergence in distribution of the \{T_n\}
to T (see [6, Chapter 3]). Let \{B_1(t),\ t \in R\} be a two–sided Brownian motion and let \{B_2(t),\ t \ge 0\}
denote an independent standard Brownian motion. Let

G(t) = \int_R L_t^x(B_2)\, B_1(dx).

The process \{G(t),\ t \ge 0\} is called a Brownian motion in random scenery. Our next result states
that V_n^{(2)}(t), suitably normalized, converges in D_R[0,1] to G(t).

Theorem 3.2. As n \to \infty,

\frac{2^{n/4}}{\sqrt{2}}\, \big( 2^{-n/2}\, V_n^{(2)}(t) - t \big) \Longrightarrow G(t).
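Brownian motion in random scenery is easy to approximate: Kesten and Spitzer [27] obtain G as the distributional limit of n^{-3/4} Σ_{k ≤ nt} ξ(S_k), where S is a simple random walk and ξ an i.i.d. scenery. A simulation sketch (ours; it approximates the law of G only for large n):

```python
import random

def scenery_process(n, rng):
    """n^(-3/4)-normalized partial sums of an i.i.d. Gaussian scenery read
    along a simple symmetric random walk; approximates G(t) for t in [0, 1]."""
    scenery = {0: rng.gauss(0.0, 1.0)}
    pos, total, out = 0, 0.0, []
    for _ in range(n):
        pos += rng.choice((-1, 1))
        if pos not in scenery:
            scenery[pos] = rng.gauss(0.0, 1.0)  # lazily sample the scenery
        total += scenery[pos]
        out.append(total / n ** 0.75)
    return out
```

The unusual n^{-3/4} normalization is the same exponent behind the 2^{n/4} factor in Theorem 3.2: the walk revisits sites roughly √n times, inflating the partial sums beyond the diffusive √n scale.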
We will prove these theorems in order, but first we will develop several lemmas pertaining to
the local time of Brownian motion.
Lemma 3.4. For real numbers p, q > 0,

\sum_{j \in Z} \| L_t^{r_{j,n}}(Y) \|_p^q = O(2^{n/2}).
Proof. We will use the following notation: given x \in R, let

\tau_x = \inf\{ s > 0 : Y_s = x \}.

Let C = E\big( (L_1^0)^p \big). Then, from the strong Markov property, elementary properties of the local time
process, the reflection principle, and a standard Gaussian estimate, it follows that

E\big( (L_t^x(Y))^p \big) = \int_0^t E\big( (L_t^x(Y))^p \,\big|\, \tau_x = s \big)\,dP(\tau_x \le s) = \int_0^t E\big( (L_{t-s}^0(Y))^p \big)\,dP(\tau_x \le s)
\le E\big( (L_t^0(Y))^p \big)\, P(\tau_x \le t) = 2\, C\, t^{p/2}\, P(Y_t > |x|) \le 2\, C\, t^{p/2} \exp\big( -x^2/(2t) \big).

Consequently, for real numbers p, q > 0,

\int_R \| L_t^x(Y) \|_p^q\,dx < \infty.
Since the mapping x \mapsto \| L_t^x(Y) \|_p^q is uniformly continuous,

\lim_{n\to\infty} \sum_{j \in Z} \| L_t^{r_{j,n}}(Y) \|_p^q\, 2^{-n/2} = \lim_{n\to\infty} \sum_{j \in Z} \| L_t^{r_{j,n}}(Y) \|_p^q\, \Delta r_{j,n} = \int_{-\infty}^{\infty} \| L_t^x(Y) \|_p^q\,dx.

It follows that

\sum_{j \in Z} \| L_t^{r_{j,n}}(Y) \|_p^q = O(2^{n/2}),

which proves the lemma in question.
Lemma 3.5. Let a, b \in R with ab \ge 0. Then there exists a positive constant \mu, independent of a, b
and t, such that

\| L_t^b(Y) - L_t^a(Y) \|_2 \le \mu\, \sqrt{|b-a|}\, t^{1/4} \exp\big( -a^2/(4t) \big).
Proof. Let c \in R and t > 0. From [37, Theorem 1.7, p. 210] and its proof, there exists a constant
\gamma, independent of c and t, such that

E\big( (L_t^c(Y) - L_t^0(Y))^2 \big) \le \gamma\, |c|\, t^{1/2}. \qquad (3.2)

By symmetry, it is enough to consider the case 0 \le a < b. By the strong Markov property, Brownian
scaling, the reflection principle, item (3.2), and a standard estimate, we obtain

E\big( (L_t^b(Y) - L_t^a(Y))^2 \big) = \int_0^t E\big( (L_t^b(Y) - L_t^a(Y))^2 \,\big|\, \tau_a = s \big)\,dP(\tau_a \le s)
= \int_0^t E\big( (L_{t-s}^{b-a}(Y) - L_{t-s}^0(Y))^2 \big)\,dP(\tau_a \le s)
\le \gamma\, (b-a)\, t^{1/2}\, P(\tau_a \le t) \le \gamma\, (b-a)\, t^{1/2} \exp\big( -a^2/(2t) \big).

The desired result follows upon taking square roots and setting \mu = \gamma^{1/2}.
What follows is an immediate application of the preceding lemma.
Lemma 3.6. Let t > 0. In the notation of (1.2),

\sum_{j \in Z} f(M_{j,n})\, L_t^{r_{j,n}}(Y)\, \Delta r_{j,n} \to \int_0^t f(Z_s)\,ds,
almost surely and in L2 (P) as n → ∞.
Proof. By the occupation times formula,

\int_0^t f(Z_s)\,ds = \int_{-\infty}^{\infty} f(X_u)\, L_t^u(Y)\,du = \sum_{j \in Z} \int_{r_{j,n}}^{r_{j+1,n}} f(X_u)\, L_t^u(Y)\,du.

It follows that

\Big\| \int_0^t f(Z_s)\,ds - \sum_{j \in Z} f(M_{j,n})\, L_t^{r_{j,n}}(Y)\, \Delta r_{j,n} \Big\|_2 \le \sum_{j \in Z} \int_{r_{j,n}}^{r_{j+1,n}} \big\| f(X_u)\, L_t^u(Y) - f(M_{j,n})\, L_t^{r_{j,n}}(Y) \big\|_2\,du. \qquad (3.3)
Since f \in C_b^2(R) and X is independent of Y, we have

\big\| f(X_u)\, L_t^u(Y) - f(M_{j,n})\, L_t^{r_{j,n}}(Y) \big\|_2 \le \|f\|_{C_b^2(R)} \big( \| L_t^u(Y) - L_t^{r_{j,n}}(Y) \|_2 + \| X_u - M_{j,n} \|_2\, \| L_t^{r_{j,n}}(Y) \|_2 \big).

However, by Lemma 3.5,

\| L_t^u(Y) - L_t^{r_{j,n}}(Y) \|_2 \le C\, \sqrt{\Delta r_{j,n}}\, \exp\Big( -\frac{ (r_{j,n} \wedge r_{j+1,n})^2 }{ 4t } \Big), \qquad (3.4)
where C depends only upon t. By the integral test, the sums

\sum_{j \in Z} \exp\Big( -\frac{ (r_{j,n} \wedge r_{j+1,n})^2 }{ 4t } \Big)\, \Delta r_{j,n}

are bounded in n. Thus,

\sum_{j \in Z} \int_{r_{j,n}}^{r_{j+1,n}} \| L_t^u(Y) - L_t^{r_{j,n}}(Y) \|_2\,du = O\big( \sqrt{\Delta r_{j,n}} \big) = O(2^{-n/4}). \qquad (3.5)

For r_{j,n} \le u \le r_{j+1,n} we have \| X_u - M_{j,n} \|_2 \le \sqrt{\Delta r_{j,n}}. Thus, by Lemma 3.4, we have

\sum_{j \in Z} \int_{r_{j,n}}^{r_{j+1,n}} \| X_u - M_{j,n} \|_2\, \| L_t^{r_{j,n}}(Y) \|_2\,du = O(2^{-n/4}). \qquad (3.6)
Combining (3.3), (3.4), (3.5) and (3.6) we see that

\Big\| \int_0^t f(Z_s)\,ds - \sum_{j \in Z} f(M_{j,n})\, L_t^{r_{j,n}}(Y)\, \Delta r_{j,n} \Big\|_2 = O(2^{-n/4}).

This demonstrates the convergence in L^2(P). By applications of Markov's inequality and the Borel–
Cantelli lemma, this convergence is almost sure, as well.
Our next result is from [28, Theorem 1.4] and its proof. See [3] for a related but slightly weaker
version in Lp (P).
Lemma 3.7. There exists a positive random variable K \in L^8(P) such that for all j \in Z, n \ge 0,
and t > 0,

\Big| U_{j,n}(t) - \frac{2^{n/2}}{2}\, L_t^{r_{j,n}}(Y) \Big| \le K\, n\, 2^{n/4} \sqrt{ L_t^{r_{j,n}}(Y) },

\Big| D_{j,n}(t) - \frac{2^{n/2}}{2}\, L_t^{r_{j,n}}(Y) \Big| \le K\, n\, 2^{n/4} \sqrt{ L_t^{r_{j,n}}(Y) }.
Proof of Theorem 3.1. Since the mapping

\varphi(x,y) = f\Big( \frac{y+x}{2} \Big)\,(y-x)^2

is symmetric, by Lemma 2.4,

2^{-n/2}\, V_n^{(2)}(f,t) = \sum_{j \in Z} 2^{-n/2}\, f(M_{j,n})\, (\Delta X_{j,n})^2\, \big( U_{j,n}(t) + D_{j,n}(t) \big) = A_n + B_n + C_n,

where

A_n = \sum_{j \in Z} 2^{-n/2}\, f(M_{j,n})\, (\Delta X_{j,n})^2\, \big( U_{j,n}(t) + D_{j,n}(t) - 2^{n/2}\, L_t^{r_{j,n}}(Y) \big),

B_n = \sum_{j \in Z} f(M_{j,n})\, \big( (\Delta X_{j,n})^2 - E\big( (\Delta X_{j,n})^2 \big) \big)\, L_t^{r_{j,n}}(Y),

C_n = \sum_{j \in Z} f(M_{j,n})\, L_t^{r_{j,n}}(Y)\, \Delta r_{j,n}.
By Lemma 3.7, since $f \in C_b^2(\mathbb{R})$,
$$|A_n| \le 2\|f\|_{C_b^2(\mathbb{R})}\,n\,2^{-n/4}\sum_{j\in\mathbb{Z}} (\Delta X_{j,n})^2\,K\sqrt{L_t^{r_{j,n}}(Y)}.$$
Since $X$ is independent of $Y$, by Hölder's inequality, for each $j\in\mathbb{Z}$,
$$\Big\|(\Delta X_{j,n})^2\,K\sqrt{L_t^{r_{j,n}}(Y)}\Big\|_2 \le \big\|(\Delta X_{1,n})^2\big\|_2\,\|K\|_4\,\big\|L_t^{r_{j,n}}(Y)\big\|_2^{1/2}.$$
By scaling, $\|(\Delta X_{j,n})^2\|_2 = 2^{-n/2}\mu_4^{1/2}$. Hence, by the triangle inequality and Lemma 3.4,
$$\|A_n\|_2 \le 2\|f\|_{C_b^2(\mathbb{R})}\,n\,2^{-3n/4}\,\|K\|_4\,\mu_4^{1/2}\sum_{j\in\mathbb{Z}}\big\|L_t^{r_{j,n}}(Y)\big\|_2^{1/2} = O(n2^{-n/4}),$$
which shows that $A_n \to 0$ in $L^2(P)$ as $n\to\infty$. By Markov's inequality and the Borel–Cantelli lemma, the convergence is almost sure, as well.
Let
$$X^*_{j,n} = \begin{cases} X_{j,n} & \text{if } j\ge0,\\ X_{j+1,n} & \text{if } j<0.\end{cases}$$
Then we may write $B_n = B_n^{(1)} + B_n^{(2)}$, where
$$B_n^{(1)} = \sum_{j\in\mathbb{Z}} \big(f(M_{j,n}) - f(X^*_{j,n})\big)\big((\Delta X_{j,n})^2 - E\,(\Delta X_{j,n})^2\big)\,L_t^{r_{j,n}}(Y),$$
$$B_n^{(2)} = \sum_{j\in\mathbb{Z}} f(X^*_{j,n})\big((\Delta X_{j,n})^2 - E\,(\Delta X_{j,n})^2\big)\,L_t^{r_{j,n}}(Y).$$
By noting that $|M_{j,n} - X^*_{j,n}| = \frac12|\Delta X_{j,n}|$, we see that
$$|B_n^{(1)}| \le \frac12\|f\|_{C_b^2(\mathbb{R})}\sum_{j\in\mathbb{Z}} |\Delta X_{j,n}|\,\big|(\Delta X_{j,n})^2 - E\,(\Delta X_{j,n})^2\big|\,L_t^{r_{j,n}}(Y).$$
Since $X$ and $Y$ are independent,
$$\|B_n^{(1)}\|_2 \le \frac12\|f\|_{C_b^2(\mathbb{R})}\sum_{j\in\mathbb{Z}} \|\Delta X_{j,n}\|_4\,\big\|(\Delta X_{j,n})^2 - E\,(\Delta X_{j,n})^2\big\|_4\,\big\|L_t^{r_{j,n}}(Y)\big\|_2 = O(2^{-n/4}).$$
We have used Brownian scaling and Lemma 3.4 to obtain this last estimate.
Observe that the collection $\{f(X^*_{j,n})\big((\Delta X_{j,n})^2 - E\,(\Delta X_{j,n})^2\big),\ j\in\mathbb{Z}\}$ is centered and pairwise uncorrelated. Since $X$ and $Y$ are independent, we obtain
$$\mathrm{var}(B_n^{(2)}) = \sum_{j\in\mathbb{Z}} \|f(X^*_{j,n})\|_2^2\,\big\|L_t^{r_{j,n}}(Y)\big\|_2^2\,\mathrm{var}\big((\Delta X_{j,n})^2 - E\,(\Delta X_{j,n})^2\big).$$
Since $f$ is bounded, by Brownian scaling,
$$\mathrm{var}\big((\Delta X_{j,n})^2 - E\,(\Delta X_{j,n})^2\big) = O(2^{-n}).$$
Therefore,
$$\|B_n^{(2)}\|_2 = O(2^{-n/4}).$$
In summary, $\|B_n\|_2 = O(2^{-n/4})$, which shows that $B_n \to 0$ in $L^2(P)$ as $n\to\infty$. By applications of Markov's inequality and the Borel–Cantelli lemma, this convergence is almost sure, as well. Finally, by Lemma 3.6, $C_n \to \int_0^t f(Z_s)\,ds$ almost surely and in $L^2(P)$ as $n\to\infty$, which proves the theorem in question.
We turn our attention to the proof of Theorem 3.2. In preparation for the proof of this result, we will prove several lemmas. For each integer $j$, each positive integer $n$ and each positive real number $t$, let
$$\widetilde{L}_{j,n}(t) = 2^{-n/2}\big(U_{j,n}(t) + D_{j,n}(t)\big).$$
Lemma 3.8. For each $t > 0$,
$$\sum_{j\in\mathbb{Z}} E\,\big|\widetilde{L}_{j,n}(t)\big|^3 = O(2^{n/2}).$$
Proof. By the triangle inequality and a standard convexity argument, it follows that
$$E\,\big|\widetilde{L}_{j,n}(t)\big|^3 \le 4\,E\,\big|L_t^{r_{j,n}}(Y)\big|^3 + 4\,E\,\big|\widetilde{L}_{j,n}(t) - L_t^{r_{j,n}}(Y)\big|^3.$$
By Lemma 3.4,
$$\sum_{j\in\mathbb{Z}} E\,\big|L_t^{r_{j,n}}(Y)\big|^3 = O(2^{n/2}).$$
By Lemma 3.7,
$$\big|\widetilde{L}_{j,n}(t) - L_t^{r_{j,n}}(Y)\big|^3 \le 8K^3 n^3 2^{-3n/4}\big(L_t^{r_{j,n}}(Y)\big)^{3/2}.$$
By Hölder's inequality,
$$E\,\big|\widetilde{L}_{j,n}(t) - L_t^{r_{j,n}}(Y)\big|^3 \le 8\|K\|_6^3\,n^3\,2^{-3n/4}\,\big\|L_t^{r_{j,n}}(Y)\big\|_3^{3/2}.$$
From Lemma 3.4, it follows that
$$\sum_{j\in\mathbb{Z}} E\,\big|\widetilde{L}_{j,n}(t) - L_t^{r_{j,n}}(Y)\big|^3 = O(n^3 2^{-n/4}).$$
This proves the lemma in question.
Lemma 3.9. For each pair of nonnegative real numbers $s$ and $t$ we have,
$$\lim_{n\to\infty}\sum_{j\in\mathbb{Z}} E\,\big|\widetilde{L}_{j,n}(s)\widetilde{L}_{j,n}(t) - L_s^{r_{j,n}}(Y)L_t^{r_{j,n}}(Y)\big|\,2^{-n/2} = 0.$$
Proof. We have the decomposition
$$\big|\widetilde{L}_{j,n}(s)\widetilde{L}_{j,n}(t) - L_s^{r_{j,n}}(Y)L_t^{r_{j,n}}(Y)\big| \le \big|\widetilde{L}_{j,n}(s) - L_s^{r_{j,n}}(Y)\big|\,\big|\widetilde{L}_{j,n}(t) - L_t^{r_{j,n}}(Y)\big| + \big|\widetilde{L}_{j,n}(s) - L_s^{r_{j,n}}(Y)\big|\,L_t^{r_{j,n}}(Y) + \big|\widetilde{L}_{j,n}(t) - L_t^{r_{j,n}}(Y)\big|\,L_s^{r_{j,n}}(Y).$$
By Lemma 3.7,
$$\big|\widetilde{L}_{j,n}(s) - L_s^{r_{j,n}}(Y)\big|\,\big|\widetilde{L}_{j,n}(t) - L_t^{r_{j,n}}(Y)\big| \le 4K^2 n^2 2^{-n/2}\sqrt{L_s^{r_{j,n}}(Y)}\sqrt{L_t^{r_{j,n}}(Y)}.$$
By Hölder's inequality, we have
$$E\Big[\big|\widetilde{L}_{j,n}(s) - L_s^{r_{j,n}}(Y)\big|\,\big|\widetilde{L}_{j,n}(t) - L_t^{r_{j,n}}(Y)\big|\Big] \le 4\|K\|_6^2\,n^2\,2^{-n/2}\,\big(E\,|L_s^{r_{j,n}}(Y)|^{3/2}\big)^{1/3}\big(E\,|L_t^{r_{j,n}}(Y)|^{3/2}\big)^{1/3}.$$
By applications of the Cauchy–Schwarz inequality and Lemma 3.4, we obtain the following:
$$\sum_{j\in\mathbb{Z}} E\Big[\big|\widetilde{L}_{j,n}(s) - L_s^{r_{j,n}}(Y)\big|\,\big|\widetilde{L}_{j,n}(t) - L_t^{r_{j,n}}(Y)\big|\Big]\,2^{-n/2} \le 4\|K\|_6^2\,n^2\,2^{-n}\sqrt{\sum_{j\in\mathbb{Z}}\big(E\,|L_s^{r_{j,n}}(Y)|^{3/2}\big)^{2/3}}\,\sqrt{\sum_{j\in\mathbb{Z}}\big(E\,|L_t^{r_{j,n}}(Y)|^{3/2}\big)^{2/3}} = O(n^2 2^{-n/2}).$$
The remaining terms can be handled similarly.
Lemma 3.10. Let $0\le s\le t\le 1$. Then,
$$\sum_{j\in\mathbb{Z}} L_s^{r_{j,n}}(Y)\,L_t^{r_{j,n}}(Y)\,\Delta r_{j,n} \to \int_{\mathbb{R}} L_s^x(Y)\,L_t^x(Y)\,dx,$$
in $L^1(P)$ as $n\to\infty$.
Proof. We have
$$\int_{\mathbb{R}} L_s^x(Y)L_t^x(Y)\,dx = \sum_{j\in\mathbb{Z}}\int_{r_{j,n}}^{r_{j+1,n}} L_s^x(Y)L_t^x(Y)\,dx.$$
From this it follows that
$$\Big\|\int_{\mathbb{R}} L_s^x(Y)L_t^x(Y)\,dx - \sum_{j\in\mathbb{Z}} L_s^{r_{j,n}}(Y)L_t^{r_{j,n}}(Y)\,\Delta r_{j,n}\Big\|_1 \le \sum_{j\in\mathbb{Z}}\int_{r_{j,n}}^{r_{j+1,n}} \big\|L_s^x(Y)L_t^x(Y) - L_s^{r_{j,n}}(Y)L_t^{r_{j,n}}(Y)\big\|_1\,dx.$$
By the Hölder and Minkowski inequalities, we obtain
$$\big\|L_s^x(Y)L_t^x(Y) - L_s^{r_{j,n}}(Y)L_t^{r_{j,n}}(Y)\big\|_1 \le \big\|L_t^x(Y) - L_t^{r_{j,n}}(Y)\big\|_2\,\|L_s^x(Y)\|_2 + \big\|L_s^x(Y) - L_s^{r_{j,n}}(Y)\big\|_2\,\big\|L_t^{r_{j,n}}(Y)\big\|_2.$$
Since $s,t\in[0,1]$, it follows that $\|L_s^x(Y)\|_2$ and $\|L_t^x(Y)\|_2$ are bounded by 1. Therefore, by Lemma 3.5 and Jensen's inequality, there exists a universal constant $C$ such that
$$\big\|L_s^x(Y)L_t^x(Y) - L_s^{r_{j,n}}(Y)L_t^{r_{j,n}}(Y)\big\|_1 \le C\sqrt{\Delta r_{j,n}}\,\exp\Big(-\frac{(r_{j,n}\wedge r_{j+1,n})^2}{4}\Big).$$
By the integral test, the sums,
$$\sum_{j\in\mathbb{Z}} \exp\Big(-\frac{(r_{j,n}\wedge r_{j+1,n})^2}{4}\Big)\,\Delta r_{j,n},$$
are bounded in $n$. Since $\sqrt{\Delta r_{j,n}} = 2^{-n/4}$,
$$\Big\|\int_{\mathbb{R}} L_s^x(Y)L_t^x(Y)\,dx - \sum_{j\in\mathbb{Z}} L_s^{r_{j,n}}(Y)L_t^{r_{j,n}}(Y)\,\Delta r_{j,n}\Big\|_1 = O(2^{-n/4}).$$
This proves the lemma.
Given a function $f$ defined on $[0,1]$ and $\delta > 0$, let
$$\omega(f,\delta) = \sup_{\substack{0\le s,t\le 1\\ |s-t|<\delta}} |f(t) - f(s)|.$$
Lemma 3.11. There exists a universal $c\in(0,\infty)$ such that for all $a\in\mathbb{R}$ and $\delta > 0$,
$$\big\|\omega(L^a(Y),\delta)\big\|_2^2 \le c\,\big(\delta\ln(1/\delta)\big)\wedge\big(|a|\exp(-a^2/2)\big).$$
Proof. Since local times are increasing in the time variable, we have $\omega(L^a(Y),\delta) \le L_1^a(Y)$. Consequently, by Lemma 3.5,
$$\big\|\omega(L^a(Y),\delta)\big\|_2^2 \le \big\|L_1^a(Y)\big\|_2^2 \le c\,|a|\exp(-a^2/2).$$
However, by Tanaka's formula,
$$L_t^a(Y) = |Y_t - a| - |a| - \int_0^t \mathrm{sgn}(Y_r - a)\,dY_r.$$
Hence, for $s < t$,
$$L_t^a(Y) - L_s^a(Y) \le |Y_t - Y_s| - \int_s^t \mathrm{sgn}(Y_r - a)\,dY_r.$$
By Lévy's representation theorem (see [37]), $t\mapsto\int_0^t \mathrm{sgn}(Y_r - a)\,dY_r$ is a standard Brownian motion. Thus, there exist positive numbers $c$ and $\delta_0$ such that for all $0\le\delta\le\delta_0$,
$$\big\|\omega(L^a(Y),\delta)\big\|_2^2 \le c\,\delta\log(1/\delta).$$
We have used Lévy's theorem concerning the modulus of continuity of Brownian motion to obtain this last result; see [37] for details.
Proof of Theorem 3.2. For each integer $j$ and each positive integer $n$, let
$$\varepsilon_{j,n} = \frac{2^{n/2}}{\sqrt{2}}\big((\Delta X_{j,n})^2 - E\,(\Delta X_{j,n})^2\big).$$
For each $n$, the random variables $\{\varepsilon_{j,n},\ j\in\mathbb{Z}\}$ are independent and identically distributed. A scaling argument shows that $\varepsilon_{j,n}$ is distributed as $\varepsilon = (X_1^2 - 1)/\sqrt{2}$ for all admissible integers $j$ and $n$. Let $\varphi$ denote the characteristic function of $\varepsilon$. Since $E(\varepsilon) = 0$ and $E(\varepsilon^2) = 1$, we have, as $z\to0$,
$$\log\varphi(z) = -\frac{z^2}{2} + O(z^3).$$
Thus, there exist $\gamma > 0$ and $0 < \delta \le 1$ such that
$$\Big|\log\varphi(z) + \frac{z^2}{2}\Big| \le \gamma|z|^3, \tag{3.7}$$
for all $|z|\le\delta$.
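The expansion (3.7) can be checked directly, since $\varepsilon = (X_1^2-1)/\sqrt{2}$ has an explicit characteristic function: $E\exp(isX_1^2) = (1-2is)^{-1/2}$, so $\varphi(z) = e^{-iz/\sqrt{2}}\,(1-i\sqrt{2}z)^{-1/2}$. A short numerical sketch (our illustration only; the choice $\gamma = 1$ on $|z|\le0.1$ is ours, not the paper's):

```python
import cmath

def phi(z):
    # Characteristic function of eps = (X^2 - 1)/sqrt(2), X standard normal,
    # using E[exp(i s X^2)] = (1 - 2 i s)^(-1/2) with s = z/sqrt(2).
    s = z / 2 ** 0.5
    return cmath.exp(-1j * s) * (1 - 2j * s) ** -0.5

# |log phi(z) + z^2/2| <= gamma |z|^3 as in (3.7), here with gamma = 1
for z in [0.1, 0.05, 0.01]:
    assert abs(cmath.log(phi(z)) + z ** 2 / 2) <= abs(z) ** 3
```

Expanding the logarithm shows the remainder is in fact $\sim(\sqrt{2}/3)|z|^3$, so any $\gamma\ge1$ works on this range, consistent with (3.7).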
By Lemma 2.4 and the definition of $\{\widetilde{L}_{j,n}(t),\ j\in\mathbb{Z}\}$,
$$2^{-n/2}V_n^{(2)}(t) = \sum_{j\in\mathbb{Z}} (\Delta X_{j,n})^2\,\widetilde{L}_{j,n}(t).$$
Noting that $E\,(\Delta X_{j,n})^2 = \Delta r_{j,n} = 2^{-n/2}$, we arrive at the following:
$$2^{-n/2}V_n^{(2)}(t) = \sum_{j\in\mathbb{Z}} \sqrt{2}\,\varepsilon_{j,n}\,\widetilde{L}_{j,n}(t)\,2^{-n/2} + 2^{-n}\sum_{j\in\mathbb{Z}}\big(U_{j,n}(t) + D_{j,n}(t)\big).$$
j∈Z
Concerning this last term on the right, we have
X
2−n
(Uj,n (t) + Dj,n (t)) = 2−n [2n t] = t + O(2−n ),
j∈Z
since the number of upcrossings and downcrossings of all the intervals [rj,n , rj+1,n ] by the random
walk is equal to the number of steps taken by this same random walk. It follows that
X
2n/4 (2)
√ Vn (t) − t =
εj,n
2
j∈Z
Letting
Gn (t) =
it is enough to show that
X
j∈Z
Lj,n(t)2−n/4 + O(2−3n/4 ).
εj,n
Lj,n(t)2−n/4 ,
Gn(t) =⇒ G(t).
(3.8)
First we will demonstrate the convergence of the finite–dimensional distributions and then we will
give the tightness argument.
Let $0\le t_1 < t_2 < \cdots < t_m \le 1$ and let $\lambda_1,\lambda_2,\cdots,\lambda_m\in\mathbb{R}$. To demonstrate the convergence of the finite–dimensional distributions, it is enough to show that
$$E\exp\Big(i\sum_{k=1}^m\lambda_k G_n(t_k)\Big) \to E\exp\Big(i\sum_{k=1}^m\lambda_k G(t_k)\Big),$$
as $n\to\infty$. For simplicity, let,
$$a_{j,n} = 2^{-n/4}\sum_{k=1}^m\lambda_k\,\widetilde{L}_{j,n}(t_k),\qquad \widetilde{a}_{j,n} = 2^{-n/4}\sum_{k=1}^m\lambda_k\,L_{t_k}^{r_{j,n}}(Y).$$
We have the following:
$$\Big|E\exp\Big(i\sum_{k=1}^m\lambda_k G_n(t_k)\Big) - E\exp\Big(i\sum_{k=1}^m\lambda_k G(t_k)\Big)\Big| \le A_n + B_n + C_n, \tag{3.9}$$
where,
$$A_n = \Big|E\exp\Big(i\sum_{k=1}^m\lambda_k G_n(t_k)\Big) - E\exp\Big(-\frac12\sum_{j\in\mathbb{Z}} a_{j,n}^2\Big)\Big|,$$
$$B_n = \Big|E\exp\Big(-\frac12\sum_{j\in\mathbb{Z}} a_{j,n}^2\Big) - E\exp\Big(-\frac12\sum_{j\in\mathbb{Z}}\widetilde{a}_{j,n}^2\Big)\Big|,$$
$$C_n = \Big|E\exp\Big(-\frac12\sum_{j\in\mathbb{Z}}\widetilde{a}_{j,n}^2\Big) - E\exp\Big(i\sum_{k=1}^m\lambda_k G(t_k)\Big)\Big|.$$
We will estimate each term in turn.
Observe that
$$\sum_{k=1}^m\lambda_k G_n(t_k) = \sum_{j\in\mathbb{Z}}\varepsilon_{j,n}\,a_{j,n}.$$
Let $\mathcal{Y}$ denote the $\sigma$–algebra generated by $\{Y_t,\ t\ge0\}$ and observe that the random variables $\{a_{j,n},\ j\in\mathbb{Z}\}$ are $\mathcal{Y}$–measurable. Thus,
$$E\exp\Big(i\sum_{k=1}^m\lambda_k G_n(t_k)\Big) = E\Big[E\Big(\prod_{j\in\mathbb{Z}} e^{i a_{j,n}\varepsilon_{j,n}}\ \Big|\ \mathcal{Y}\Big)\Big] = E\Big[\prod_{j\in\mathbb{Z}}\varphi(a_{j,n})\Big].$$
Assuming that $\sum_{j\in\mathbb{Z}}|a_{j,n}|^3 \le \delta^3$, we have, by (3.7),
$$\Big|\sum_{j\in\mathbb{Z}}\log\varphi(a_{j,n}) + \frac12\sum_{j\in\mathbb{Z}} a_{j,n}^2\Big| \le \gamma\sum_{j\in\mathbb{Z}}|a_{j,n}|^3.$$
From this it follows that
$$\Big|\prod_{j\in\mathbb{Z}}\varphi(a_{j,n}) - \exp\Big(-\frac12\sum_{j\in\mathbb{Z}} a_{j,n}^2\Big)\Big| \le \gamma e^{\gamma}\sum_{j\in\mathbb{Z}}|a_{j,n}|^3.$$
Since
$$\Big|\prod_{j\in\mathbb{Z}}\varphi(a_{j,n}) - \exp\Big(-\frac12\sum_{j\in\mathbb{Z}} a_{j,n}^2\Big)\Big| \le 2,$$
we may conclude that
$$\Big|\prod_{j\in\mathbb{Z}}\varphi(a_{j,n}) - \exp\Big(-\frac12\sum_{j\in\mathbb{Z}} a_{j,n}^2\Big)\Big| \le \gamma e^{\gamma}\sum_{j\in\mathbb{Z}}|a_{j,n}|^3 + 2\,I\Big(\sum_{j\in\mathbb{Z}}|a_{j,n}|^3 > \delta^3\Big).$$
Upon taking expectations and applying Markov's inequality, we obtain
$$A_n \le C\sum_{j\in\mathbb{Z}} E\,(|a_{j,n}|^3),$$
where $C = \gamma e^{\gamma} + 2\delta^{-3}$. However, by a convexity argument and Lemma 3.8, we have
$$\sum_{j\in\mathbb{Z}} E\,(|a_{j,n}|^3) \le m^2 2^{-3n/4}\sum_{k=1}^m|\lambda_k|^3\sum_{j\in\mathbb{Z}} E\,\big(\widetilde{L}_{j,n}^3(t_k)\big) = O(2^{-n/4}),$$
which shows that An → 0 as n → ∞.
Note that,
$$B_n \le \frac12\sum_{j\in\mathbb{Z}} E\,\big|a_{j,n}^2 - \widetilde{a}_{j,n}^2\big| \le \frac12\sum_{k=1}^m\sum_{\ell=1}^m|\lambda_k||\lambda_\ell|\sum_{j\in\mathbb{Z}} E\,\big|\widetilde{L}_{j,n}(t_k)\widetilde{L}_{j,n}(t_\ell) - L_{t_k}^{r_{j,n}}(Y)L_{t_\ell}^{r_{j,n}}(Y)\big|\,2^{-n/2}.$$
By Lemma 3.9, we see that $B_n\to0$.
Finally, observe that
$$E\exp\Big(i\sum_{k=1}^m\lambda_k G(t_k)\Big) = E\exp\Big(-\frac12\int_{\mathbb{R}}\Big(\sum_{k=1}^m\lambda_k L_{t_k}^x(Y)\Big)^2 dx\Big).$$
Thus,
$$C_n \le \frac12\,E\,\Big|\sum_{j\in\mathbb{Z}}\widetilde{a}_{j,n}^2 - \int_{\mathbb{R}}\Big(\sum_{k=1}^m\lambda_k L_{t_k}^x(Y)\Big)^2 dx\Big| \le \frac12\sum_{k=1}^m\sum_{\ell=1}^m|\lambda_k|\,|\lambda_\ell|\,\Big\|\sum_{j\in\mathbb{Z}} L_{t_k}^{r_{j,n}}(Y)\,L_{t_\ell}^{r_{j,n}}(Y)\,\Delta r_{j,n} - \int_{\mathbb{R}} L_{t_k}^x(Y)\,L_{t_\ell}^x(Y)\,dx\Big\|_1.$$
By Lemma 3.10, $C_n\to0$, which, in conjunction with the above, verifies (3.9).
To demonstrate tightness, observe that
$$\omega(G_n,\delta) \le \sum_{j\in\mathbb{Z}} 2^{-n/4}|\varepsilon_{j,n}|\,\omega(\widetilde{L}_{j,n},\delta).$$
It follows that
$$\mathrm{var}\Big(\sum_{j\in\mathbb{Z}} 2^{-n/4}\varepsilon_{j,n}\,\omega(\widetilde{L}_{j,n},\delta)\Big) = \sum_{j\in\mathbb{Z}} 2^{-n/2}\big\|\omega(\widetilde{L}_{j,n},\delta)\big\|_2^2. \tag{3.10}$$
By Lemma 3.7, the triangle inequality and the fact that the local times are increasing in the time variable, we have
$$\omega(\widetilde{L}_{j,n},\delta) \le 2Kn2^{-n/4}\sqrt{L_1^{r_{j,n}}(Y)} + \omega\big(L^{r_{j,n}}(Y),\delta\big).$$
Thus, by a simple convexity inequality,
$$\big\|\omega(\widetilde{L}_{j,n},\delta)\big\|_2^2 \le 8n^2 2^{-n/2}\,\Big\|K\sqrt{L_1^{r_{j,n}}(Y)}\Big\|_2^2 + 2\big\|\omega(L^{r_{j,n}}(Y),\delta)\big\|_2^2 = A(n) + 2\big\|\omega(L^{r_{j,n}}(Y),\delta)\big\|_2^2,$$
with obvious notation.
By Hölder's inequality and some algebra,
$$\Big\|K\sqrt{L_1^{r_{j,n}}(Y)}\Big\|_2^2 \le \|K\|_4^2\,\big\|L_1^{r_{j,n}}(Y)\big\|_2.$$
Thus, by Lemma 3.4, we have
$$\sum_{j\in\mathbb{Z}} 2^{-n/2}A(n) = O(n^2 2^{-n/2}). \tag{3.11}$$
Given $\delta > 0$, let us divide the integers into two classes $J_1$ and $J_2$, where
$$J_1 = \{j\in\mathbb{Z} : |j| \le \delta^{-1/2}2^{n/2}\},\qquad J_2 = J_1^c.$$
Then by Lemma 3.11,
$$\sum_{j\in J_1} 2^{-n/2}\big\|\omega(L^{r_{j,n}},\delta)\big\|_2^2 \le c\,|J_1|\,2^{-n/2}\,\delta\log(\delta^{-1}) \le 2c\sqrt{\delta}\log(\delta^{-1}). \tag{3.12}$$
However, recalling that $\Delta r_{j,n} = 2^{-n/2}$ and applying Lemma 3.11,
$$\sum_{j\in J_2} 2^{-n/2}\big\|\omega(L^{r_{j,n}},\delta)\big\|_2^2 \le c\sum_{j\in J_2} |r_{j,n}|\exp\big(-r_{j,n}^2/2\big)\,\Delta r_{j,n} \sim 2c\int_{\delta^{-1/2}}^{\infty} |x|\exp(-x^2/2)\,dx. \tag{3.13}$$
Combining (3.10), (3.11), (3.12) and (3.13) gives the requisite tightness. This demonstrates (3.8)
and the theorem is proved.
4. Higher Order Variation
In this section, we will examine strong and weak limit theorems for the tertiary and quartic variation
of iterated Brownian motion. Let us begin by recalling a theorem, essentially due to [8].
Proposition 4.1. Let $t > 0$ and $p > 0$. The following hold in $L^p(P)$:
$$\mathrm{(a)}\quad \sum_{k=0}^{[2^{n/2}t]}\big(Z(r_{k+1,n}) - Z(r_{k,n})\big)^3 \to 0;$$
$$\mathrm{(b)}\quad \sum_{k=0}^{[2^{n/2}t]}\big(Z(r_{k+1,n}) - Z(r_{k,n})\big)^4 \to 3t.$$
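Proposition 4.1(b) lends itself to a quick Monte Carlo check. The sketch below is our illustration only (the seed, grid level $n$ and sample size are arbitrary): it builds an iterated Brownian motion $Z = B_1\circ B_2$ by sampling a two-sided Brownian motion at the values of an inner Brownian path, then averages the quartic variation along the grid $r_{k,n} = k2^{-n/2}$ over independent paths; with $t = 1$ the average should be close to $3t = 3$.

```python
import numpy as np

def two_sided_bm(points, rng):
    """Sample a two-sided Brownian motion B1 (with B1(0) = 0) at the given points."""
    pts = np.unique(np.concatenate(([0.0], points)))
    vals = np.concatenate(([0.0], np.cumsum(
        rng.standard_normal(len(pts) - 1) * np.sqrt(np.diff(pts)))))
    vals -= vals[np.searchsorted(pts, 0.0)]  # re-anchor so that B1(0) = 0
    return dict(zip(pts, vals))

def quartic_variation(t, n, rng):
    """Quartic variation of Z = B1 o B2 along the grid r_k = k 2^{-n/2}."""
    h = 2.0 ** (-n / 2)
    steps = int(t / h)
    B2 = np.concatenate(([0.0], np.cumsum(
        rng.standard_normal(steps) * np.sqrt(h))))  # inner Brownian motion
    B1 = two_sided_bm(B2, rng)
    Z = np.array([B1[x] for x in B2])
    return np.sum(np.diff(Z) ** 4)

rng = np.random.default_rng(0)
est = np.mean([quartic_variation(1.0, 12, rng) for _ in range(300)])
assert abs(est - 3.0) < 1.0  # Proposition 4.1(b): the limit is 3t = 3
```

Each term has conditional fourth moment $3(\Delta B_2)^2$ given $B_2$, so the per-path expectation is exactly $3t$; averaging over paths tames the sizable per-path fluctuations.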
Our next two theorems generalize the above along our random partitions. Given an integer $n\ge0$ and a real number $t > 0$, let
$$V_n^{(3)}(f,t) = \sum_{k=0}^{[2^n t]-1} f\Big(\frac{Z(T_{k+1,n}) + Z(T_{k,n})}{2}\Big)\big(Z(T_{k+1,n}) - Z(T_{k,n})\big)^3,$$
$$V_n^{(4)}(f,t) = \sum_{k=0}^{[2^n t]-1} f\Big(\frac{Z(T_{k+1,n}) + Z(T_{k,n})}{2}\Big)\big(Z(T_{k+1,n}) - Z(T_{k,n})\big)^4.$$
Whenever $f\equiv1$, we will write $V_n^{(3)}(t)$ and $V_n^{(4)}(t)$ in place of $V_n^{(3)}(f,t)$ and $V_n^{(4)}(f,t)$, respectively.
Our first result is a strong limit theorem for the tertiary variation of iterated Brownian motion
and is related to Theorem 2.1 and Proposition 4.1(a).
Theorem 4.2. Let $t > 0$ and let $f\in C_b^2(\mathbb{R})$. Then
$$V_n^{(3)}(f,t) \to 0,$$
almost surely and in $L^2(P)$ as $n\to\infty$.
Our next result is a strong limit theorem for the quartic variation of iterated Brownian motion
and is related to Theorem 3.1 and Proposition 4.1(b).
Theorem 4.3. Let $t > 0$ and let $f\in C_b^2(\mathbb{R})$. Then
$$V_n^{(4)}(f,t) \to 3\int_0^t f(Z_s)\,ds,$$
almost surely and in $L^2(P)$ as $n\to\infty$.
As corollaries to Theorem 4.2 and Theorem 4.3, we have $V_n^{(3)}(t)\to0$ and $V_n^{(4)}(t)\to3t$ almost surely and in $L^2(P)$. Our next two results concern the deviations of $V_n^{(3)}(t)$ and $V_n^{(4)}(t) - 3t$: we will demonstrate that $V_n^{(3)}(t)$ and $V_n^{(4)}(t) - 3t$, suitably normalized, converge in distribution to an iterated Brownian motion and to Brownian motion in random scenery, respectively. As in §3, let $\{B_1(t),\ t\in\mathbb{R}\}$ denote a standard two–sided Brownian motion and let $\{B_2(t),\ t\ge0\}$ denote an independent standard Brownian motion. Observe that $\{B_1\circ B_2(t),\ t\ge0\}$ is an iterated Brownian motion and that
$$G(t) = \int_{\mathbb{R}} L_t^x(B_2)\,B_1(dx),$$
is a Brownian motion in random scenery.
Theorem 4.4. As $n\to\infty$,
$$\frac{2^{n/2}}{\sqrt{15}}\,V_n^{(3)}(t) \Longrightarrow B_1\circ B_2(t).$$
Theorem 4.5. As $n\to\infty$,
$$\frac{2^{n/4}}{\sqrt{96}}\big(V_n^{(4)}(t) - 3t\big) \Longrightarrow G(t).$$
We will prove these theorems in order.
Proof of Theorem 4.2. Since the mapping
$$\varphi(x,y) = f\Big(\frac{y+x}{2}\Big)(y-x)^3$$
is skew symmetric, by Lemma 2.4 we have
$$V_n^{(3)}(f,t) = \sum_{j\in\mathbb{Z}} f(M_{j,n})(\Delta X_{j,n})^3\big(U_{j,n}(t) - D_{j,n}(t)\big).$$
By Lemma 2.5 and by following the argument preceding (2.10), we obtain
$$V_n^{(3)}(f,t) = \begin{cases} \displaystyle\sum_{j=0}^{j^*-1} f(M_{j,n}^+)\big(\Delta X_{j,n}^+\big)^3 & \text{if } j^* > 0,\\[1ex] 0 & \text{if } j^* = 0,\\[1ex] \displaystyle\sum_{j=0}^{|j^*|-1} f(M_{j,n}^-)\big(\Delta X_{j,n}^-\big)^3 & \text{if } j^* < 0.\end{cases}$$
However, for any integer $m$, by the triangle inequality, the boundedness of $f$ and Brownian scaling, we have
$$\Big\|\sum_{j=0}^{|m|-1} f(M_{j,n}^{\pm})\big(\Delta X_{j,n}^{\pm}\big)^3\Big\|_2 \le \|f\|_{C_b^2(\mathbb{R})}\,\big\|(\Delta X_{0,n}^{\pm})^3\big\|_2\,|m| = \|f\|_{C_b^2(\mathbb{R})}\,|m|\,\mu_6^{1/2}\,2^{-3n/4}.$$
Since the random variable $j^*$ is independent of $X$, by conditioning on the value of $j^*$ and applying the above inequality we obtain
$$E\Big(\big(V_n^{(3)}(f,t)\big)^2\Big) \le \|f\|_{C_b^2(\mathbb{R})}^2\,\mu_6\,2^{-3n/2}\,E\big((j^*)^2\big) = O(2^{-n/2}).$$
We have used (2.11) to obtain this last estimate. This demonstrates the $L^2(P)$–convergence in question. By applications of Markov's inequality and the Borel–Cantelli lemma, the convergence is almost sure, as well.
Proof of Theorem 4.3. Since the mapping
$$\varphi(x,y) = f\Big(\frac{y+x}{2}\Big)(y-x)^4$$
is symmetric, by Lemma 2.4 we have
$$V_n^{(4)}(f,t) = \sum_{j\in\mathbb{Z}} f(M_{j,n})(\Delta X_{j,n})^4\big(U_{j,n}(t)+D_{j,n}(t)\big) = A_n + B_n + C_n,$$
where
$$A_n = \sum_{j\in\mathbb{Z}} f(M_{j,n})(\Delta X_{j,n})^4\big(U_{j,n}(t)+D_{j,n}(t) - 2^{n/2}L_t^{r_{j,n}}(Y)\big),$$
$$B_n = \sum_{j\in\mathbb{Z}} f(M_{j,n})\big((\Delta X_{j,n})^4 - E\,(\Delta X_{j,n})^4\big)\,2^{n/2}L_t^{r_{j,n}}(Y),$$
$$C_n = \sum_{j\in\mathbb{Z}} 3f(M_{j,n})\,L_t^{r_{j,n}}(Y)\,\Delta r_{j,n}.$$
Since $f\in C_b^2(\mathbb{R})$, by Lemma 3.7 we have
$$|A_n| \le 2\|f\|_{C_b^2(\mathbb{R})}\,n\,2^{n/4}\sum_{j\in\mathbb{Z}} (\Delta X_{j,n})^4\,K\sqrt{L_t^{r_{j,n}}(Y)}.$$
Since $X$ is independent of $Y$, by Hölder's inequality we have, for each $j\in\mathbb{Z}$,
$$\Big\|(\Delta X_{j,n})^4\,K\sqrt{L_t^{r_{j,n}}(Y)}\Big\|_2 \le \big\|(\Delta X_{1,n})^4\big\|_2\,\|K\|_4\,\big\|L_t^{r_{j,n}}(Y)\big\|_2^{1/2}.$$
By scaling, $\|(\Delta X_{j,n})^4\|_2 = 2^{-n}\mu_8^{1/2}$. Hence, by the triangle inequality and Lemma 3.4,
$$\|A_n\|_2 \le 2\|f\|_{C_b^2(\mathbb{R})}\,n\,2^{-3n/4}\,\|K\|_4\,\mu_8^{1/2}\sum_{j\in\mathbb{Z}}\big\|L_t^{r_{j,n}}(Y)\big\|_2^{1/2} = O(n2^{-n/4}),$$
which shows that $A_n\to0$ in $L^2(P)$ as $n\to\infty$. By Markov's inequality and the Borel–Cantelli lemma, the convergence is almost sure, as well.
Let
$$X_{j,n}^* = \begin{cases} X_{j,n} & \text{if } j\ge0,\\ X_{j+1,n} & \text{if } j<0.\end{cases}$$
Then we may write $B_n = B_n^{(1)} + B_n^{(2)}$, where
$$B_n^{(1)} = \sum_{j\in\mathbb{Z}} \big(f(M_{j,n}) - f(X_{j,n}^*)\big)\big((\Delta X_{j,n})^4 - E\,(\Delta X_{j,n})^4\big)\,2^{n/2}L_t^{r_{j,n}}(Y),$$
$$B_n^{(2)} = \sum_{j\in\mathbb{Z}} f(X_{j,n}^*)\big((\Delta X_{j,n})^4 - E\,(\Delta X_{j,n})^4\big)\,2^{n/2}L_t^{r_{j,n}}(Y).$$
Noting that $|M_{j,n} - X_{j,n}^*| = \frac12|\Delta X_{j,n}|$, we have
$$|B_n^{(1)}| \le \frac{\|f\|_{C_b^2(\mathbb{R})}}{2}\,2^{n/2}\sum_{j\in\mathbb{Z}} |\Delta X_{j,n}|\,\big|(\Delta X_{j,n})^4 - E\,(\Delta X_{j,n})^4\big|\,L_t^{r_{j,n}}(Y).$$
Since $X$ and $Y$ are independent,
$$\|B_n^{(1)}\|_2 \le \frac{\|f\|_{C_b^2(\mathbb{R})}}{2}\,2^{n/2}\sum_{j\in\mathbb{Z}} \|\Delta X_{j,n}\|_4\,\big\|(\Delta X_{j,n})^4 - E\,(\Delta X_{j,n})^4\big\|_4\,\big\|L_t^{r_{j,n}}(Y)\big\|_2 = O(2^{-n/4}).$$
We have used Brownian scaling and Lemma 3.4 to obtain this last estimate.
Observe that the collection $\{f(X_{j,n}^*)\big((\Delta X_{j,n})^4 - E\,(\Delta X_{j,n})^4\big),\ j\in\mathbb{Z}\}$ is centered and pairwise uncorrelated. Since $X$ and $Y$ are independent,
$$\mathrm{var}(B_n^{(2)}) = 2^n\sum_{j\in\mathbb{Z}} \|f(X_{j,n}^*)\|_2^2\,\big\|L_t^{r_{j,n}}(Y)\big\|_2^2\,\mathrm{var}\big((\Delta X_{j,n})^4 - E\,(\Delta X_{j,n})^4\big).$$
By Brownian scaling,
$$\mathrm{var}\big((\Delta X_{j,n})^4 - E\,(\Delta X_{j,n})^4\big) = O(2^{-2n}).$$
Therefore,
$$\|B_n^{(2)}\|_2 = O(2^{-n/4}).$$
In summary, $\|B_n\|_2 = O(2^{-n/4})$, which shows that $B_n\to0$ in $L^2(P)$ as $n\to\infty$. By applications of Markov's inequality and the Borel–Cantelli lemma, this convergence is almost sure, as well.
Finally, by Lemma 3.6, $C_n \to 3\int_0^t f(Z_s)\,ds$ almost surely and in $L^2(P)$ as $n\to\infty$. This proves Theorem 4.3.
Proof of Theorem 4.4. Since the mapping
$$\varphi(x,y) = (y-x)^3$$
is skew–symmetric, by Lemma 2.4 we have,
$$V_n^{(3)}(t) = \sum_{j\in\mathbb{Z}} \big(\Delta X_{j,n}\big)^3\big(U_{j,n}(t) - D_{j,n}(t)\big).$$
From Lemma 2.5 and some algebra, it follows that
$$V_n^{(3)}(t) = \begin{cases} \displaystyle\sum_{j=0}^{j^*-1} \big(\Delta X_{j,n}^+\big)^3 & \text{if } j^* > 0,\\[1ex] 0 & \text{if } j^* = 0,\\[1ex] \displaystyle\sum_{j=0}^{|j^*|-1} \big(\Delta X_{j,n}^-\big)^3 & \text{if } j^* < 0.\end{cases} \tag{4.1}$$
For each $j\in\mathbb{Z}$ and each integer $n\ge0$, we have
$$\mathrm{var}\big((\Delta X_{j,n})^3\big) = 15\cdot2^{-3n/2}.$$
Let
$$\varepsilon_{j,n}^{\pm} = \frac{2^{3n/4}}{\sqrt{15}}\big(\Delta X_{j,n}^{\pm}\big)^3. \tag{4.2}$$
A scaling argument shows that, for each $n$, the random variables $\{\varepsilon_{j,n}^{\pm},\ j\ge0\}$ are independent and identically distributed as $\varepsilon = X_1^3/\sqrt{15}$. For future reference, let us note that $E(\varepsilon) = 0$ and $E(\varepsilon^2) = 1$. For each $t\ge0$, let
$$X_n^{\pm}(t) = \begin{cases} \displaystyle 2^{-n/4}\sum_{j=0}^{[2^{n/2}t]-1}\varepsilon_{j,n}^{\pm} & \text{if } t\ge2^{-n/2},\\[1ex] 0 & \text{if } 0\le t<2^{-n/2}.\end{cases}$$
For $t\in\mathbb{R}$, let
$$X_n(t) = \begin{cases} X_n^+(t) & \text{if } t\ge0,\\ X_n^-(-t) & \text{if } t<0.\end{cases}$$
In order that we may emphasize their dependence upon $n$ and $t$, recall that
$$\tau_n = \tau(n,t) = T_{[2^n t],n},\qquad j^* = j^*(n,t) = 2^{n/2}\,Y(\tau_n).$$
For $t\in[0,1]$, let
$$Y_n(t) = Y(\tau(n,t)).$$
We observe that
$$\frac{2^{n/2}}{\sqrt{15}}\,V_n^{(3)}(t) = X_n\circ Y_n(t). \tag{4.3}$$
Let $D_{\mathbb{R}}[0,\infty)$ denote the space of all real–valued functions on $[0,\infty)$ which are right continuous and have left limits. Given a function $g:\mathbb{R}\to\mathbb{R}$, let us define $g_+, g_-:[0,\infty)\to\mathbb{R}$ accordingly: for each $t\ge0$, let $g_+(t) = g(t)$ and let $g_-(t) = g(-t)$. Let
$$D_{\mathbb{R}}^*(\mathbb{R}) = \{g:\mathbb{R}\to\mathbb{R} : g_+\in D_{\mathbb{R}}[0,\infty)\ \text{and}\ g_-\in D_{\mathbb{R}}[0,\infty)\}.$$
Let $q$ denote the usual Skorohod metric on $D_{\mathbb{R}}[0,\infty)$ (cf. [18, p. 117]). Then we can define a metric $q^*$ on $D_{\mathbb{R}}^*(\mathbb{R})$ as follows: given $f,g\in D_{\mathbb{R}}^*(\mathbb{R})$, let
$$q^*(f,g) = q(f_+,g_+) + q(f_-,g_-).$$
So defined, $\big(D_{\mathbb{R}}^*(\mathbb{R}), q^*\big)$ is a complete separable metric space. Moreover, $\{g_n\}$ converges to $g$ in $D_{\mathbb{R}}^*(\mathbb{R})$ if and only if $\{(g_n)_+\}$ and $\{(g_n)_-\}$ converge to $g_+$ and $g_-$ in $D_{\mathbb{R}}[0,\infty)$, respectively. By Donsker's theorem, $X_n^+\Longrightarrow(B_1)_+$ and $X_n^-\Longrightarrow(B_1)_-$ in $D_{\mathbb{R}}[0,\infty)$; consequently,
$$X_n \Longrightarrow B_1 \quad\text{in } D_{\mathbb{R}}^*(\mathbb{R}). \tag{4.4}$$
By another application of Donsker's theorem,
$$Y_n \Longrightarrow B_2 \quad\text{in } D_{\mathbb{R}}([0,1]). \tag{4.5}$$
From (4.4) and (4.5), the independence of $X$ and $Y$ and the independence of $B_1$ and $B_2$, it follows that
$$(X_n, Y_n) \Longrightarrow (B_1, B_2)\quad\text{in } D_{\mathbb{R}}^*(\mathbb{R})\times D_{\mathbb{R}}([0,1]).$$
Since the mapping $(x,y)\in D_{\mathbb{R}}^*(\mathbb{R})\times D_{\mathbb{R}}([0,1])\mapsto x\circ y\in D_{\mathbb{R}}([0,1])$ is measurable and since $B_1\circ B_2$ is continuous, it follows that
$$X_n\circ Y_n \Longrightarrow B_1\circ B_2\quad\text{in } D_{\mathbb{R}}([0,1]).$$
Recalling (4.3), this proves the theorem.
Proof of Theorem 4.5. For each integer $j$ and each positive integer $n$, let
$$\varepsilon_{j,n} = \frac{2^n}{\sqrt{96}}\big((\Delta X_{j,n})^4 - E\,(\Delta X_{j,n})^4\big).$$
For each $n$, the random variables $\{\varepsilon_{j,n},\ j\in\mathbb{Z}\}$ are independent and identically distributed. A scaling argument shows that $\varepsilon_{j,n}$ is distributed as $\varepsilon = (X_1^4 - 3)/\sqrt{96}$ for all admissible integers $j$ and $n$. For future reference, we note that $E(\varepsilon) = 0$ and $E(\varepsilon^2) = 1$.
By Lemmas 2.4 and 2.5,
$$V_n^{(4)}(t) = \sum_{j\in\mathbb{Z}} \big(\Delta X_{j,n}\big)^4\big(U_{j,n}(t)+D_{j,n}(t)\big) = \sum_{j\in\mathbb{Z}} \sqrt{96}\,2^{-n/2}\varepsilon_{j,n}\,\widetilde{L}_{j,n}(t) + 3\cdot2^{-n}\sum_{j\in\mathbb{Z}} \big(U_{j,n}(t)+D_{j,n}(t)\big).$$
Arguing as in the proof of Theorem 3.2, we have
$$3\cdot2^{-n}\sum_{j\in\mathbb{Z}} \big(U_{j,n}(t)+D_{j,n}(t)\big) = 3t + O(2^{-n}).$$
From this it follows that
$$\frac{2^{n/4}}{\sqrt{96}}\big(V_n^{(4)}(t) - 3t\big) = O(2^{-3n/4}) + \sum_{j\in\mathbb{Z}} \varepsilon_{j,n}\,\widetilde{L}_{j,n}(t)\,2^{-n/4}.$$
As was shown in the proof of Theorem 3.2,
$$\sum_{j\in\mathbb{Z}} \varepsilon_{j,n}\,\widetilde{L}_{j,n}(t)\,2^{-n/4} \Longrightarrow G(t).$$
This finishes the proof.
5. An Excursion–Theoretic Construction of the Itô Integral
In this section we show that for $f\in C_b^2(\mathbb{R})$, the Itô integral process $\int_0^t f(Y_r)\,dY_r$ can be defined by means of the random partitions defined in §2. For each integer $n\ge0$ and $k\in\mathbb{Z}$, let
$$Y_{k,n} = Y(T_{k,n}).$$
We offer the following theorem:
Theorem 5.1. Let $t > 0$ and let $f\in C_b^2(\mathbb{R})$. Then
$$\sum_{k=0}^{[2^n t]-1} f(Y_{k,n})\,\Delta Y_{k,n} \to \int_0^t f(Y_r)\,dY_r,$$
almost surely and in $L^2(P)$, as $n\to\infty$.
We will need the following lemma, which is a simple consequence of the mean value theorem for integrals.
Lemma 5.2. Let $a,b\in\mathbb{R}$, $a<b$, and let $f\in C_b^2(\mathbb{R})$. Let
$$a = u_0 < u_1 < \cdots < u_{n-1} < u_n = b$$
be a partition of $[a,b]$. Then
$$\Big|\int_a^b f(s)\,ds - \sum_{k=0}^{n-1} f(u_k)\,\Delta u_k\Big| \le \|f\|_{C_b^2(\mathbb{R})}\,|b-a|\max_{0\le k\le n-1}\{|\Delta u_k|\}.$$
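Lemma 5.2's bound is easy to exercise numerically. The sketch below is our illustration only (the partition and test function are arbitrary choices, and we take $\|\sin\|_{C_b^2(\mathbb{R})} = 1$): it compares a left-endpoint Riemann sum on an uneven partition with a fine trapezoidal reference value.

```python
import math

def riemann_gap(f, f_norm, grid):
    """Left-endpoint Riemann sum error versus the Lemma 5.2 bound."""
    a, b = grid[0], grid[-1]
    # fine reference value for the integral (trapezoid rule on a dense mesh)
    N = 100_000
    xs = [a + (b - a) * i / N for i in range(N + 1)]
    integral = sum((f(xs[i]) + f(xs[i + 1])) * (b - a) / (2 * N) for i in range(N))
    riemann = sum(f(u) * (v - u) for u, v in zip(grid, grid[1:]))
    mesh = max(v - u for u, v in zip(grid, grid[1:]))
    return abs(integral - riemann), f_norm * (b - a) * mesh

grid = [0.0, 0.3, 0.45, 1.0, 1.7, 2.0]          # an uneven partition of [0, 2]
gap, bound = riemann_gap(math.sin, 1.0, grid)
assert gap <= bound
```

A Taylor-expansion argument in fact gives the sharper per-interval error $\frac12\sup|f'|\,(\Delta u_k)^2$, so the lemma's bound holds with plenty of room.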
Proof of Theorem 5.1. By the proof of Lemma 2.4,
$$\sum_{k=0}^{[2^n t]-1} f(Y_{k,n})\,\Delta Y_{k,n} = \sum_{j\in\mathbb{Z}} \big[f(r_{j,n})\,\Delta r_{j,n}\,U_{j,n}(t) - f(r_{j+1,n})\,\Delta r_{j,n}\,D_{j,n}(t)\big]$$
$$= \sum_{j\in\mathbb{Z}} f(r_{j,n})\,\Delta r_{j,n}\big(U_{j,n}(t) - D_{j,n}(t)\big) - \sum_{j\in\mathbb{Z}} \big(f(r_{j+1,n}) - f(r_{j,n})\big)\,\Delta r_{j,n}\,D_{j,n}(t) = I_n - II_n,$$
in the obvious notation. We will next show that
$$I_n \to \int_0^{Y_t} f(u)\,du, \tag{5.1}$$
$$II_n \to \frac12\int_0^t f'(Y_u)\,du, \tag{5.2}$$
almost surely and in $L^2(P)$ as $n\to\infty$.
First observe that
$$\Big|I_n - \int_0^{Y_t} f(u)\,du\Big| \le \Big|\int_0^{Y_t} f(u)\,du - \int_0^{Y(\tau_n)} f(u)\,du\Big| + \Big|I_n - \int_0^{Y(\tau_n)} f(u)\,du\Big| = A_n + B_n,$$
in the obvious notation.
Together with an elementary bound, Lemma 2.3 implies,
$$\|A_n\|_2 \le \|f\|_{C_b^2(\mathbb{R})}\,\|Y(t) - Y(\tau_n)\|_2 = O(2^{-n/8}). \tag{5.3}$$
By Lemma 2.5,
$$I_n = \begin{cases} \displaystyle\sum_{j=0}^{j^*-1} f(r_{j,n})\,\Delta r_{j,n} & \text{if } j^* > 0,\\[1ex] 0 & \text{if } j^* = 0,\\[1ex] \displaystyle-\sum_{j=j^*}^{-1} f(r_{j,n})\,\Delta r_{j,n} & \text{if } j^* < 0.\end{cases}$$
Thus, by Lemma 5.2,
$$B_n \le \|f\|_{C_b^2(\mathbb{R})}\,|Y(\tau_n)|\,2^{-n/2}.$$
In the proof of Lemma 2.3, it was shown that $\{\|Y(\tau_n)\|_2,\ n\ge0\}$ is bounded in $n$. It follows that
$$\|B_n\|_2 = O(2^{-n/2}). \tag{5.4}$$
Combining (5.3) and (5.4), we have the following:
$$I_n \to \int_0^{Y_t} f(u)\,du,$$
in $L^2(P)$ as $n\to\infty$. By Markov's inequality and the Borel–Cantelli lemma, this convergence is almost sure, as well. This verifies (5.1).
Observe that
$$II_n = \sum_{j\in\mathbb{Z}} \big(f(r_{j+1,n}) - f(r_{j,n}) - f'(r_{j,n})\,\Delta r_{j,n}\big)\,\Delta r_{j,n}\,D_{j,n}(t) + \sum_{j\in\mathbb{Z}} f'(r_{j,n})\,\big(\Delta r_{j,n}\big)^2\,D_{j,n}(t) = A_n + B_n,$$
using obvious notation. By Taylor's theorem,
$$|A_n| \le \frac12\|f\|_{C_b^2(\mathbb{R})}\,2^{-3n/2}\sum_{j\in\mathbb{Z}} D_{j,n}(t) = O(2^{-n/2}). \tag{5.5}$$
We have used the fact that $\sum_{j\in\mathbb{Z}} D_{j,n}(t) \le [2^n t]$ to obtain this last bound. Observe that,
$$\Big|B_n - \frac12\int_{\mathbb{R}} f'(u)\,L_t^u(Y)\,du\Big| \le B_n^{(1)} + B_n^{(2)},$$
where,
$$B_n^{(1)} = \|f\|_{C_b^2(\mathbb{R})}\,2^{-n}\sum_{j\in\mathbb{Z}} \Big|D_{j,n}(t) - \frac{2^{n/2}}{2}L_t^{r_{j,n}}(Y)\Big|$$
and
$$B_n^{(2)} = \frac12\,\Big|\sum_{j\in\mathbb{Z}} f'(r_{j,n})\,L_t^{r_{j,n}}(Y)\,\Delta r_{j,n} - \int_{\mathbb{R}} f'(u)\,L_t^u(Y)\,du\Big|.$$
By Lemma 3.7,
$$B_n^{(1)} \le \|f\|_{C_b^2(\mathbb{R})}\,n\,2^{-3n/4}\sum_{j\in\mathbb{Z}} K\sqrt{L_t^{r_{j,n}}(Y)}.$$
Consequently, by the Minkowski and Hölder inequalities,
$$\|B_n^{(1)}\|_2 \le \|f\|_{C_b^2(\mathbb{R})}\,n\,2^{-3n/4}\sum_{j\in\mathbb{Z}} \|K\|_4\,\big\|L_t^{r_{j,n}}(Y)\big\|_2^{1/2} = O(n2^{-n/4}). \tag{5.6}$$
We have used Lemma 3.4 to obtain this last bound.
As in the proof of Proposition 3.3, we have
$$B_n^{(2)} \le \|f\|_{C_b^2(\mathbb{R})}\sum_{j\in\mathbb{Z}}\int_{r_{j,n}}^{r_{j+1,n}} \big|L_t^u(Y) - L_t^{r_{j,n}}(Y)\big|\,du.$$
Consequently, by symmetry,
$$\|B_n^{(2)}\|_2 \le 2\|f\|_{C_b^2(\mathbb{R})}\sum_{j\ge0}\int_{r_{j,n}}^{r_{j+1,n}} \big\|L_t^u(Y) - L_t^{r_{j,n}}(Y)\big\|_2\,du.$$
By Lemma 3.5,
$$\big\|L_t^u(Y) - L_t^{r_{j,n}}(Y)\big\|_2 \le C\sqrt{\Delta r_{j,n}}\,\exp\big(-r_{j,n}^2/2\big).$$
Thus,
$$\|B_n^{(2)}\|_2 \le 2\|f\|_{C_b^2(\mathbb{R})}\,C\,2^{-n/4}\sum_{j\ge0}\exp\big(-r_{j,n}^2/2\big)\,\Delta r_{j,n} = O(2^{-n/4}). \tag{5.7}$$
Combining (5.6) and (5.7), we see that,
$$II_n \to \frac12\int_{\mathbb{R}} f'(u)\,L_t^u(Y)\,du,$$
in $L^2(P)$ as $n\to\infty$. By Markov's inequality and the Borel–Cantelli lemma, this convergence is almost sure, as well. By the occupation times formula, this verifies (5.2).
We can now finish the proof. By (5.1) and (5.2),
$$\sum_{k=0}^{[2^n t]-1} f(Y_{k,n})\,\Delta Y_{k,n} \to \int_0^{Y_t} f(u)\,du - \frac12\int_0^t f'(Y_u)\,du, \tag{5.8}$$
almost surely and in $L^2(P)$ as $n\to\infty$. Let $F(t) = \int_0^t f(u)\,du$ and apply Itô's formula to $F(Y_t)$ to see that the right hand side of (5.8) is another way to write $\int_0^t f(Y_s)\,dY_s$. This proves the theorem.
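The limit identified in (5.8) can be seen numerically. The sketch below is our illustration, using a deterministic dyadic partition rather than the paper's excursion-based one: it takes $f(y) = y$, for which Itô's formula gives $\int_0^t Y\,dY = (Y_t^2 - t)/2$, and compares this with the left-endpoint sums of Theorem 5.1.

```python
import numpy as np

rng = np.random.default_rng(7)

t, n = 1.0, 2 ** 16
dt = t / n
# simulate a Brownian path Y on [0, t]
Y = np.concatenate(([0.0], np.cumsum(rng.standard_normal(n) * np.sqrt(dt))))
# left-endpoint sums, as in Theorem 5.1 with f(y) = y
left_sum = np.sum(Y[:-1] * np.diff(Y))
# closed form of the Ito integral via Ito's formula: (Y_t^2 - t)/2
ito_value = (Y[-1] ** 2 - t) / 2
assert abs(left_sum - ito_value) < 0.05
```

The gap equals $\frac12\big(t - \sum(\Delta Y)^2\big)$, the quadratic-variation error, which vanishes as the mesh refines.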
References
[1] M.A. Arcones (1995), On the law of the iterated logarithm for Gaussian processes, J. Theoret. Probab., 8, 877–903
[2] M.T. Barlow (1992), Harmonic analysis on fractal spaces, Sém. de Bourbaki, Astérisque,
44éme année, 1991–92, no 755, 345–368
[3] R.F. Bass (1987), Lp –inequalities for functionals of Brownian motion, Sém. de Prob. XXI,
206–217, Lecture Notes in Math., 1247, Springer–Verlag, Berlin–Heidelberg–New York
[4] J. Bertoin (1996), Iterated Brownian motion and stable (1/4) subordinator, Stat. Probab.
Lett., 27(2), 111–114
[5] J. Bertoin and Z. Shi (1996), Hirsch’s integral test for iterated Brownian motion. Séminaire
de Probabilités, XXX, Lecture Notes in Mathematics, #1626, 361–368 (ed’s: J. Azéma, M.
Emery, P.A. Meyer and M. Yor)
[6] P. Billingsley (1979), Convergence of Probability Measures, Wiley, New York
[7] E. Bolthausen (1989), A central limit theorem for two-dimensional random walks in random
sceneries, Ann. Prob. 17 (1), 108–115.
[8] K. Burdzy (1993), Some path properties of iterated Brownian motion. Seminar in Stochastic
Processes 1992, 67–87, Birkhäuser, Boston (ed’s: E. Çinlar, K.L. Chung and M. Sharpe)
[9] K. Burdzy (1993), Variation of iterated Brownian motion, Workshop and Conference on
Measure–Valued Processes, Stochastic Partial Differential Equations and Interacting Particle
Systems. CRM Proceedings and Lecture Notes
[10] K. Burdzy and D. Khoshnevisan (1995), The level sets of iterated Brownian motion.
Séminaire de Probabilités, XXIX, Lecture Notes in Mathematics, #1613, 231–236 (ed’s: J.
Azéma, M. Emery, P.A. Meyer and M. Yor)
[11] K. Burdzy and D. Khoshnevisan (1996), Brownian motion in a Brownian crack. Preprint
[12] E. Csáki, M. Csörgő, A. Földes and P. Révész (1995), Global Strassen type theorems
for iterated Brownian motion. Stoch. Proc. Appl., 59, 321–341
[13] E. Csáki, M. Csörgő, A. Földes and P. Révész (1996), The local time of iterated
Brownian motion. J. Theor. Prob., 9(3), 717–743
[14] D.S. Dean and K.M. Jansons (1993), Brownian excursions on combs, J. Stat. Phys., 70,
1313–1331
[15] P. Deheuvels and D.M. Mason (1992), A functional LIL approach to pointwise Bahadur–
Kiefer theorems, Prob. in Banach Spaces, 8, 255–266 (ed’s: R.M. Dudley, M.G. Hahn and J.
Kuelbs)
[16] K.D. Elworthy (1982), Stochastic Differential Equations on Manifolds, Cambridge University Press, Cambridge-New York
[17] R. Engelman and Z. Jaeger (1986), Fragmentation, Form and Flow in Fractured Media,
Proc. of the 1986 F3 conference held at Neve Ilan, Israel, Isr. Phys. Soc. and Amer. Inst. Phys.,
Bristol, England
[18] S.N. Ethier and T.G. Kurtz (1986), Markov Processes: Characterization and Convergence,
Wiley, New York
[19] T. Funaki (1979), A probabilistic construction of the solution of some higher order parabolic
differential equations, Proc. Japan Acad., 55, 176–179
[20] S. Havlin and D. Ben-Avraham (1987) Diffusion in disordered media, Adv. Phys., 36(6),
695–798
[21] S. Havlin, J.E. Kiefer and G.H. Weiss (1987), Anomalous diffusion on a random comb–like structure, Phys. Rev. A, 36, Aug., Ser. 3, 1403–1408
[22] S. Havlin and G.H. Weiss (1986), Some properties of a random walk on a comb structure,
Physica A, 134, 474–482
[23] Y. Hu (1997), Hausdorff and packing functions of the level sets of iterated Brownian motion.
Preprint
[24] Y. Hu, D. Pierre-Lotti-Viaud and Z. Shi (1995), Laws of the iterated logarithm for
iterated Wiener processes, J. Theor. Prob., 8(2), 303–319.
[25] Y. Hu and Z. Shi (1995), The Csörgő–Révész modulus of non–differentiability of iterated
Brownian motion, Stochastic Processes and their Applications, 58, 267–279.
[26] B. Kahng and S. Redner (1989), Scaling of the first–passage time and the survival probability on exact and quasi–exact self–similar structures, J. Phys. A: Math. Gen., 22(7), 887–902
[27] H. Kesten and F. Spitzer (1979), A limit theorem related to a new class of self–similar processes, Z. Wahrsch. Verw. Gebiete 50, 327–340.
[28] D. Khoshnevisan (1994), Exact rates of convergence to Brownian local times, Ann. Prob.,
22(3), 1295–1330
[29] D. Khoshnevisan and T.M. Lewis (1996), The uniform modulus of continuity for iterated
Brownian motion, J. Theoret. Probab. 9(2), 317–333.
[30] D. Khoshnevisan and T.M. Lewis (1996), Chung’s law of the iterated logarithm for iterated Brownian motion, Ann. Inst. Henri Poin.: Prob. et Stat. 32(3), 349–359.
[31] R. Lang and X. Nguyen (1983), Strongly correlated random fields as observed by a random
walker. Z. Wahrsch. Verw. Gebiete 64 (3), 327–340.
[32] T. M. Lewis (1992), A self-normalized law of the iterated logarithm for random walk in random
scenery. J. Theoret. Probab. 5(4), 629–659.
[33] T. M. Lewis (1993), A law of the iterated logarithm for random walk in random scenery with
deterministic normalizers. J. Theoret. Probab. 6(2), 209–230.
[34] J. H. Lou (1985) Some properties of a special class of self-similar processes. Z. Wahrsch.
Verw. Gebiete 68(4), 493–502.
[35] T. Lyons (1997), Differential equations driven by rough signals, To appear in Revista Mat. Iberoamericana
[36] B. Rémillard and D. A. Dawson (1991), A limit theorem for Brownian motion in a random
scenery. Canad. Math. Bull. 34(3), 385–391.
[37] D. Revuz and M. Yor (1991), Continuous Martingales and Brownian Motion, Springer–
Verlag, Berlin–Heidelberg–New York
[38] Z. Shi (1995), Lower limits of iterated Wiener processes, Stat. Prob. Lett., 23, 259–270
[39] G.H. Weiss and S. Havlin (1987), Use of comb–like models to mimic anomalous diffusion on fractal structures, Philosophical Magazine B: Physics of Condensed Matter, 56(6), 941–947
[40] Y. Xiao (1996), Local Times and Related Properties of Multi-dimensional iterated Brownian
Motion. To appear in J. Theoret. Probab.
Davar Khoshnevisan
Department of Mathematics
University of Utah
Salt Lake City, UT. 84112
davar@math.utah.edu
Thomas M. Lewis
Department of Mathematics
Furman University
Greenville, SC. 29613
tom.lewis@furman.edu