LYAPOUNOV NORMS FOR RANDOM WALKS IN LOW DISORDER AND DIMENSION GREATER THAN THREE

N. ZYGOURAS
Abstract. We consider a simple random walk on Zd , d > 3. We also consider a collection of i.i.d. positive and bounded random variables ( Vω (x) )x∈Zd , which will serve as a random potential. We study the annealed and quenched cost to perform long crossings in the random potential −(λ + βVω (x)), where λ is a positive constant and β > 0 is small enough. These costs are measured by the Lyapounov norms. We prove the equality of the annealed and the quenched norm.
1. Introduction
Consider a simple random walk (Sn )n≥0 on Zd , d > 3 and denote by Px its
distribution when it starts from x ∈ Zd . Consider also λ, β > 0 and a collection
of i.i.d. random variables ( Vω (x))x∈Zd , independent of the walk. We denote by P
the distribution of this collection. We assume that Vω is nonnegative and bounded.
We think of Sn as a random walk in the random potential (−λ − β Vω (x) )x∈Zd .
One of the fundamental quantities in the study of random walks is the Green’s function, which is defined as
Gλ (x, y, ω) := Σ_{N =1}^{∞} Ex [ e^{− Σ_{n=1}^{N} (λ+βVω (Sn ))} ; SN = y ] .
We may think of the Green’s function as the expected number of visits to y ∈ Zd of a
random walk starting at x ∈ Zd , before it gets killed by the potential −(λ+βVω (·)).
It is known [11] that when |x − y| tends to infinity, self-averaging phenomena take place, which result in an almost sure asymptotic exponential decay of the Green’s function.
In particular, it is shown by Zerner [11] that there exists a nondegenerate norm
αλ (·) on Rd such that for P − a.e. disorder ω
(1.1)   lim_{x→∞} − log Gλ (0, [x], ω) / αλ (x) = 1.
The norm αλ (x) is called the quenched Lyapounov norm. It was first introduced in
the continuous setting of Brownian motion in a Poissonian potential by Sznitman.
Equation (1.1) is the discrete analogue of his shape theorem [9].
The Lyapounov norm is a measure of the cost for the random walk to perform long crossings in the potential −(λ + βVω (x)). To make this clearer, we consider
Date: September 4, 2007.
2000 Mathematics Subject Classification. 60xx.
Key words and phrases. Random walks, random potential, Lyapunov norms, mass gap
estimate.
the quantity
(1.2)   eλ (x, y, ω) := Ex [ e^{− Σ_{n=1}^{Ty} (λ+βVω (Sn ))} ] ,
where Ty = inf{n : Sn = y} is the hitting time of the site y. One can think of eλ (x, y, ω) as the probability that the random walk starting at x hits the site y before it gets killed by the potential.
The Lyapounov norm can now be defined as follows. For any x ∈ Zd ,
(1.3)   αλ (x) = lim_{n→∞} − (1/n) log eλ (0, [nx], ω),   P-a.s.
The P-a.s. existence of the limit is guaranteed by the following supermultiplicative property:
(1.4)   eλ (0, x + y, ω) ≥ eλ (0, x, ω) eλ (x, x + y, ω).
It is easy to conclude from (1.3) and (1.4) that
αλ (nx) = n αλ (x),   x ∈ Zd , n ∈ N,
αλ (x + y) ≤ αλ (x) + αλ (y),   x, y ∈ Zd ,
and one can use these properties to extend αλ to a norm on Rd .
Besides the P-a.s. asymptotic exponential decay of the Green’s function, one is also interested in its averaged, or annealed, asymptotic exponential decay. It turns out that this is governed by the annealed Lyapounov norm βλ (x), x ∈ Zd , and in analogy with (1.1) we have that
(1.5)   lim_{x→∞} − log E Gλ (0, [x], ω) / βλ (x) = 1.
The annealed Lyapounov norm βλ is defined for x ∈ Zd by
(1.6)   βλ (x) = lim_{n→∞} − (1/n) log E eλ (0, [nx], ω) ,
and, in analogy with the quenched case, it can be extended to a norm on Rd . The existence of the above limit is once again guaranteed by the subadditivity of − log E eλ (0, [nx], ω), which follows from (1.4).
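The existence of the limits in (1.3) and (1.6) is an instance of Fekete's lemma: a subadditive sequence an satisfies an /n → inf_n an /n. A minimal numerical illustration, with a toy subadditive sequence chosen purely for demonstration (it is not the sequence of the paper):

```python
import math

def fekete_ratios(a, n_max):
    """For a subadditive sequence a(n) (a(m+n) <= a(m)+a(n)),
    Fekete's lemma gives lim a(n)/n = inf_n a(n)/n."""
    ratios = [a(n) / n for n in range(1, n_max + 1)]
    return ratios[-1], min(ratios)

# toy subadditive sequence a(n) = c*n + log(n+1); subadditivity holds
# because (m+1)(n+1) >= m+n+1, and the limit of a(n)/n is c
c = 0.25
a = lambda n: c * n + math.log(n + 1)
last_ratio, inf_ratio = fekete_ratios(a, 5000)
```

Both returned values sit just above c = 0.25 and agree with each other, since a(n)/n is decreasing for this toy sequence; for the sequences of the paper the same mechanism yields the limits defining αλ and βλ .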
Another very important property of the Lyapounov norms is that they govern
the large deviations properties of the random walk Sn in the random potential
−β Vω (x), x ∈ Zd . This fact was first established in the deep work of A.-S. Sznitman
[9], and was later extended to the discrete setting by Zerner and Flury [11], [6]. In fact,
it was the need to describe these large deviations, that led to the introduction of
the Lyapounov norms. To be more precise, consider the measure
(1.7)   dQ0,ω := (1/ZN,ω ) e^{−β Σ_{n=1}^{N} Vω (Sn )} dP0 ,
where ZN,ω = E0 [ e^{−β Σ_{n=1}^{N} Vω (Sn )} ] is the normalisation constant. Then it turns out that
(1.8)   Q0,ω ( Sn /n ≈ x ) ≈ e^{−n I(x)} ,
where I(x) = sup_{λ>0} ( αλ (x) − λ ). A similar large deviations principle holds also for the annealed measures, with the annealed rate function J(x) = sup_{λ>0} ( βλ (x) − λ ).
It was conjectured in [9] that in high dimensions and low disorder the annealed
and quenched norms coincide. We will prove in this paper that, for d > 3 and β
small enough, αλ ≡ βλ , thus verifying the conjecture. More precisely, we have:
Theorem 1.1. For any λ > 0 and d > 3, there exists a β∗ (λ) > 0, depending on λ, such that if 0 < β < β∗ , then αλ (x) = βλ (x) for every x ∈ Rd .
This belief was based on analogies with the situation of directed polymers [2],[4].
In this case one considers a space-time potential β Vω (x, n), (x, n) ∈ Zd−1 × N, as
a collection of i.i.d. random variables, and a simple (d − 1)−dimensional random
walk (Xn )n≥0 . It has been proved among many other things, that when d ≥ 4 and
the disorder is low, i.e. β is small enough, then the ratio
E0 [ e^{−β Σ_{n=1}^{N} Vω (Xn ,n)} ] / E E0 [ e^{−β Σ_{n=1}^{N} Vω (Xn ,n)} ]
converges P-a.s., as N → ∞, to a strictly positive random variable W∞ . In
fact, the convergence to a strictly positive limit can be shown to be an equivalent
characterization of the low disorder regime.
The belief that the annealed and the quenched Lyapounov norms are equal, was
further reinforced by the very nice work of M.Flury. In [5] it was established that
if (Xn )n≥0 is a random walk on Zd , d > 3, with a drift in the coordinate direction ê1 , then, as N tends to infinity,
(1.9)   − (1/N) log E0^h [ e^{−β Σ_{n=1}^{N} Vω (Xn )} ] ∼ − (1/N) log E E0^h [ e^{−β Σ_{n=1}^{N} Vω (Xn )} ] ,
for β small enough. Here E^h denotes the expectation of the random walk with drift in the direction ê1 .
In our setting, the presence of λ > 0 in (1.2) penalises the walks that move very
slowly towards the target site y, thus imposing an effective drift on the random
walks Sn . In other words our situation parallels in a sense the directed case, and at
the same time is a generalisation of [5].
The method we follow to prove Theorem 1.1 uses ideas of [5], which are also present in the work of Bolthausen and Sznitman [3] in the context of random walks in random environments. Central in our work, as well as in [5], [3], is a second-to-first moment estimate, which, among other things, depends on what is known as a mass gap estimate (see Section 3). The mass gap estimate appears in [5], but it can be traced back to works related to the behavior of self-avoiding walks, see for example [7]. It states, roughly, that the annealed cost for a walk to move from a hyperplane to a hyperplane at distance L, restricted to move only in between the hyperplanes and such that the graph of the walk cannot be split into two nonintersecting sets, decays exponentially faster, in L, than the cost of the walk with just the restriction to move in between the hyperplanes. It has in some sense the same flavor as the exponential moment estimate on the displacement, up to a regeneration time, of a random walk in a random environment [8]. The mass gap estimate is proved in [5], [7] using a rather involved multiscale argument. This argument is also difficult to extend when the walk’s drift is not along a coordinate axis, which is essentially the framework in which we will be working. In Section 3 we provide a simple proof, independent of the direction, in the case that β is small. Moreover,
in the case that the direction coincides with a coordinate axis, we provide a proof
for arbitrary β, which significantly simplifies the already existing ones.
The proof of Theorem 1.1 proceeds as follows. In Section 2 we show how the
point-to-point Lyapounov norms are related to the point-to-hyperplane Lyapounov
norms. Moreover, we relate the presence of λ in (1.2) to the presence of an effective
drift for the walk in the random potential and state the second-to-first moment
condition. In Section 3 we prove the mass gap estimate. In Section 4 we build a Markovian structure in our model, in such a way as to parallel the situation in directed polymers. In Section 5 we proceed to the estimate of the second-to-first
moment. Finally, in Section 6 we show some consequences of the equality of the
two norms.
2. Some Auxiliary Results
In this section we prove some auxiliary results, that lead to the statement of the
main estimate in Proposition 2.16. A notational convention that we follow throughout the paper is that we suppress the starting position of the random walk in expectations when it is 0. In this case, we will just write P, E, as opposed to P0 , E0 .
2.1. Dual Norms. We define the dual to the quenched Lyapounov norm as
(2.1)   αλ∗ (`) := sup_{ {x∈Rd : `·x=1} } 1/αλ (x) ,   ` ∈ Rd ,
and the dual to the annealed Lyapounov norm as
(2.2)   βλ∗ (`) := sup_{ {x∈Rd : `·x=1} } 1/βλ (x) ,   ` ∈ Rd .
The dual norms αλ∗ (`) and βλ∗ (`) are in fact norms and they govern the cost for
the walk to perform crossings from a point to a hyperplane. This is described in the
following proposition, the proof of which can be found in [6]. Reference [6] contains
further properties of these norms. The continuous analogue of the next proposition
was proven in [8].
Proposition 2.1. Let ` ∈ Rd and T`,L = inf{n : Sn · ` ≥ L}. Then we have that
lim_{L→∞} − (1/L) log E [ e^{− Σ_{n=1}^{T`,L} (λ+βVω (Sn ))} ] = 1/αλ∗ (`) ,
and
lim_{L→∞} − (1/L) log E E [ e^{− Σ_{n=1}^{T`,L} (λ+βVω (Sn ))} ] = 1/βλ∗ (`) .
The next proposition justifies the characterisation as dual norms, and it will lead to the important observation of Corollary 2.3.
Proposition 2.2. Consider the quenched and the annealed dual norms αλ∗ (`) and βλ∗ (`). Then
(2.3)   (i)   αλ (x) = sup_{ {`∈Rd : `·x=1} } 1/αλ∗ (`) ,
(2.4)   (ii)  βλ (x) = sup_{ {`∈Rd : `·x=1} } 1/βλ∗ (`) .
Proof. We will only prove (i), the proof of (ii) being identical. Let us denote by αλ∗∗ (x) the right hand side of (2.3). By the definition of the dual norm αλ∗ we have that, for every ` ∈ Rd , αλ∗ (`) ≥ 1/αλ (x) for every x ∈ Rd such that ` · x = 1. Thus,
αλ∗∗ (x) := sup_{ {` : `·x=1} } 1/αλ∗ (`) ≤ αλ (x).
On the other hand,
αλ∗∗ (x) := sup_{ {` : `·x=1} } 1/αλ∗ (`) = sup_{ {` : `·x=1} } 1 / ( sup_{ {y : `·y=1} } 1/αλ (y) )
= sup_{ {` : `·x=1} } inf_{ {y : `·y=1} } αλ (y) = sup_{ {` : `·x=1} } inf_{ {y : `·y=1} } ( αλ (y) + ` · (x − y) )
= sup_{ {` : `·x=1} } inf_{ {y : `·y=1} } ( αλ (y) − αλ (x) + ` · (x − y) ) + αλ (x).
By the convexity of αλ (·) it follows that there exists an ` ∈ Rd such that, for every y ∈ Rd , αλ (y) − αλ (x) + ` · (x − y) ≥ 0, and so αλ∗∗ (x) ≥ αλ (x).
Corollary 2.3. If αλ∗ (`ˆ) = βλ∗ (`ˆ) for every unit vector `ˆ ∈ Rd , then the quenched and annealed Lyapounov norms are equal, i.e. αλ ≡ βλ .
Proof. It follows immediately from the fact that αλ∗ and βλ∗ are norms and Proposition 2.2.
2.2. A Change Of Measure. In this paragraph we show how the presence of a
positive λ in the potential gives rise to an effective drift for the walk.
Let `ˆ ∈ Rd be an arbitrary unit vector, ` = |`| `ˆ and denote by P ` the random
walk with transition probabilities
(2.5)   π` (x, y) = e^{(y−x)·`} / Z`  if |x − y| = 1,  and  π` (x, y) = 0  if |x − y| ≠ 1,
where Z` = 2 Σ_{i=1}^{d} cosh(êi · `) and (êi )_{i=1}^{d} denote the canonical unit vectors. Notice that the random walk P ` has a drift, in the sense that E ` [ (S1 − S0 ) · `ˆ ] > 0. The Radon-Nikodym derivative of the biased random walk P ` with respect to the simple random walk P on the σ-algebra Fn = σ{Si : 0 ≤ i ≤ n} is easily computed to be
dP ` /dP |_{Fn } = ( 2d/Z` )^{n} e^{ Σ_{i=1}^{n} (Si −Si−1 )·` } .
Let us now compute
E [ e^{− Σ_{i=1}^{T`ˆ,L} (λ+β Vω (Si ))} ] = E ` [ e^{−λ T`ˆ,L − Σ_{i=1}^{T`ˆ,L} β Vω (Si )} (dP/dP ` )|_{F_{T`ˆ,L}} ]
(2.6)   = E ` [ e^{− Σ_{i=1}^{T`ˆ,L} (Si −Si−1 )·` − T`ˆ,L log(2d/Z` ) − λ T`ˆ,L − Σ_{i=1}^{T`ˆ,L} β Vω (Si )} ] .
We will now choose |`| such that
(2.7)   log( 2d/Z` ) + λ = 0.
That is, we will choose |`| such that Z` = 2d e^{λ} , or Σ_{i=1}^{d} cosh(êi · `) = d e^{λ} . Notice that if |`| = 0, then the left hand side of the last equation is equal to d, while as |`| → ∞ it tends to infinity. Thus, there will be a |`|, depending on λ, such that (2.7) is satisfied. For this |`|, (2.6) is equal to
E ` [ e^{−S(T`ˆ,L )·` − Σ_{i=1}^{T`ˆ,L} β Vω (Si )} ] .
Notice that |`| L ≤ S(T`ˆ,L ) · ` ≤ |`| L + |`|, since T`ˆ,L is defined as the first time that the random walk enters the half space {x ∈ Zd : x · `ˆ ≥ L}. We then have that
e^{−|`| L−|`|} E ` [ e^{− Σ_{i=1}^{T`ˆ,L} β Vω (Si )} ] ≤ E ` [ e^{−S(T`ˆ,L )·` − Σ_{i=1}^{T`ˆ,L} β Vω (Si )} ] ≤ e^{−|`| L} E ` [ e^{− Σ_{i=1}^{T`ˆ,L} β Vω (Si )} ] .
Thus we have proven:
Proposition 2.4. Let `ˆ ∈ Rd be an arbitrary unit vector and choose ` = |`| `ˆ so that it satisfies (2.7), or equivalently Σ_{i=1}^{d} cosh(êi · `) = d e^{λ} . Then
lim_{L→∞} (1/L) log E [ e^{− Σ_{i=1}^{T`ˆ,L} (λ+β Vω (Si ))} ] = lim_{L→∞} (1/L) log E ` [ e^{− Σ_{i=1}^{T`ˆ,L} β Vω (Si )} ] − |`| .
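For a concrete feel of this choice of tilt, equation (2.7), i.e. Σ_{i=1}^{d} cosh(êi · `) = d e^{λ} , can be solved numerically for |`| along any fixed direction; the following sketch (bisection on a monotone function, with an illustrative λ and direction chosen only for demonstration) is one way to do it.

```python
import math

def solve_tilt(l_hat, lam, tol=1e-12):
    """Find |l| such that sum_i cosh(l_hat_i * |l|) = d * exp(lam),
    i.e. condition (2.7): log(2d/Z_l) + lam = 0.
    l_hat: unit vector (list of floats), lam > 0."""
    d = len(l_hat)
    target = d * math.exp(lam)
    f = lambda r: sum(math.cosh(c * r) for c in l_hat) - target
    lo, hi = 0.0, 1.0
    while f(hi) < 0:          # f(0) = d - target < 0 and f grows to infinity
        hi *= 2
    while hi - lo > tol:      # bisection on the monotone function f
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# illustrative example: d = 4, direction (1,1,1,1)/2, lam = 0.5
l_hat = [0.5, 0.5, 0.5, 0.5]
lam = 0.5
r = solve_tilt(l_hat, lam)
Z = 2 * sum(math.cosh(c * r) for c in l_hat)   # normalisation Z_l at the tilt
```

At the returned |`| the defect log(2d/Z` ) + λ vanishes, which is exactly what makes the λ-term in (2.6) disappear.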
Clearly, the analogue of Proposition 2.4 for the annealed measures is also valid.
We can combine Propositions 2.1, 2.4 and Corollary 2.3 to arrive at
Corollary 2.5. If, for every unit vector `ˆ ∈ Rd ,
(2.8)   lim_{L→∞} (1/L) log E ` [ e^{− Σ_{i=1}^{T`ˆ,L} β Vω (Si )} ] = lim_{L→∞} (1/L) log E E ` [ e^{− Σ_{i=1}^{T`ˆ,L} β Vω (Si )} ] ,
where ` ∈ Rd is chosen as in Proposition 2.4, E ` is defined by (2.5) and T`ˆ,L = inf{n : Sn · `ˆ ≥ L}, then αλ ≡ βλ .
Our focus will therefore be to verify the assumption of the last corollary when β is small enough. From now on, ` ∈ Rd will be an arbitrary, fixed vector with rational coordinates and `ˆ the corresponding unit vector `/|`|. P ` denotes the distribution of the walk with transition probabilities as in (2.5), corresponding to the chosen `ˆ.
We denote by
(2.9)   h := Ex` [ (S1 − S0 ) · `ˆ ] = Σ_{i=1}^{d} êi · `ˆ sinh(êi · `) / Σ_{i=1}^{d} cosh(êi · `) > 0
the length of the projection of the local drift on the direction `ˆ.
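The identity (2.9) can be sanity-checked by averaging the projected step directly over the 2d increments of π` ; a small sketch with a hypothetical tilt vector (the numbers carry no meaning beyond the check):

```python
import math

# hypothetical tilt vector l in d = 4 (for illustration only)
l = [0.8, 0.4, 0.2, 0.1]
norm = math.sqrt(sum(c * c for c in l))
l_hat = [c / norm for c in l]          # the unit vector l / |l|

Z = 2 * sum(math.cosh(c) for c in l)   # normalisation Z_l from (2.5)

# direct average of (S1 - S0) . l_hat over the 2d steps +-e_i,
# each step +-e_i carrying probability exp(+-e_i . l) / Z
drift = sum((e * lh) * math.exp(e * c) / Z
            for c, lh in zip(l, l_hat) for e in (1, -1))

# closed form (2.9): sum_i (e_i . l_hat) sinh(e_i . l) / sum_i cosh(e_i . l)
h = sum(lh * math.sinh(c) for c, lh in zip(l, l_hat)) \
    / sum(math.cosh(c) for c in l)
```

The two quantities agree and h > 0, confirming that a positive λ, through the tilt `, indeed imposes an effective drift along `ˆ.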
The reason we have chosen the vector ` to have rational coordinates is to be able to construct renewal structures, through which we obtain our estimates. One way to see the difficulties that would arise if one considered an `ˆ with irrational coordinates is to notice that in this case, except in trivial situations, the hyperplane z · `ˆ = 0 contains no lattice points other than 0.
Let us also assume, without loss of generality, that `ˆ · êi > 0 for i = 1, . . . , d. Let
(2.10)   l1 := ê1 · `ˆ,
the distance of the hyperplane with normal vector `ˆ which contains 0 from the corresponding hyperplane which contains (1, 0, . . . , 0). Due to the fact that ` has rational coordinates, there will only be a finite number of hyperplanes in between the above mentioned ones which are normal to `ˆ and contain lattice points.
We denote by r the number of hyperplanes with normal vector `ˆ that are needed to exhaust the lattice points {z ∈ Zd : 0 < z · `ˆ ≤ l1 }. Since ` has rational coordinates, it follows that r is finite. It is easy to see that the closest hyperplane with normal vector `ˆ which contains lattice points is at distance l1 /r from the hyperplane {z ∈ Rd : z · `ˆ = 0}.
In the rest of the paper we will work in the framework set in these last paragraphs.
In particular, we will prove the equality of the point to hyperplane dual norms for
vectors with rational coordinates. The equality at arbitrary vectors will then follow
by the continuity of αλ∗ (·) and βλ∗ (·) - recall the fact that they are norms.
2.3. Bridges and Irreducible Bridges. It will be convenient for our analysis to make one more reduction, namely to reduce the evaluation of the point to hyperplane dual norms to the evaluation of masses of bridges. We will explain the terminology in the sequel.
Definition 2.6. Let us define the local time at x ∈ Zd as L(M,N ) (x) = Σ_{n=M +1}^{N} 1x (Sn ).
Definition 2.7. Let ω i , i = 1, . . . , p, p ≥ 1, be independent copies of ω ∈ Ω and Li(M i ,N i ) , i = 1, . . . , p, the local times of p random walk trajectories, M i < N i . Then we define
Φβ^{(p)} (M 1 , . . . , M p ; N 1 , . . . , N p ) = − log E [ e^{−β Σ_{i=1}^{p} Σ_x Vω i (x) Li(M i ,N i ) (x)} ] ,
Φ̃β^{(p)} (M 1 , . . . , M p ; N 1 , . . . , N p ) = − log E [ e^{−β Σ_{i=1}^{p} Σ_x Vω (x) Li(M i ,N i ) (x)} ] .
Notice that in Φβ^{(p)} we consider the random walk trajectories in independent potentials Vω i , while in Φ̃β^{(p)} we consider the trajectories in the same potential Vω .
In order to lighten the notation we will often write the above functions as Φβ^{(p)} (M i ; N i ) and Φ̃β^{(p)} (M i ; N i ). In the case that M i = 0 for i = 1, . . . , p, we will write Φβ^{(p)} (N i ) and Φ̃β^{(p)} (N i ) instead. Finally, when p = 1 we will write Φβ instead of Φβ^{(1)} .
The next proposition collects some properties of the function Φβ , which are easy to verify.
Proposition 2.8. (i) For M, N integers we have that
Φβ (M + N ) ≤ Φβ (N ) + Φβ (M ).
(ii) Let N1 < N2 < N . Then
Φβ (N ) ≥ Φβ ([0, N1 ] ∪ [N2 , N ]).
(iii) If (S(n))0≤n≤N1 ∩ (S(n))N1 ≤n≤N1 +N2 = ∅, then
Φβ (N1 + N2 ) = Φβ (N1 ) + Φβ (N2 ).
The notation used on the right hand side of (ii) means that in the evaluation of Φβ ([0, N1 ] ∪ [N2 , N ]) we consider the local time L[0,N1 ]∪[N2 ,N ] := LN1 + L(N2 ,N ) . The proof of (ii) makes use of the monotonicity LN ≥ L[0,N1 ]∪[N2 ,N ] , while the proof of (iii) uses the independence of the potentials seen by the two parts of the walk. Finally, the proof of (i) makes easy use of Hölder’s inequality. More precisely, we have
E [ e^{−βVω (x)LM +N (x)} ]^{LM (x)/LM +N (x)} ≥ E [ e^{−βVω (x)LM (x)} ]
and
E [ e^{−βVω (x)LM +N (x)} ]^{L(M,M +N ) (x)/LM +N (x)} ≥ E [ e^{−βVω (x)L(M,M +N ) (x)} ] ,
and it only remains to multiply the above inequalities, take the product over the sites x and use the independence of the potentials.
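The Hölder (Jensen) step above, E[e^{−βV LM +N }]^{LM /LM +N } ≥ E[e^{−βV LM }], holds because t ↦ t^θ is concave for θ ∈ [0, 1]; a quick numerical check with a hypothetical two-point potential (values chosen only for illustration):

```python
import math

def jensen_step(beta, lm, ln, values, probs):
    """Compare E[exp(-beta*V*(lm+ln))]**(lm/(lm+ln)) with
    E[exp(-beta*V*lm)] for a discrete potential V; the first
    dominates by Jensen applied to the concave map t -> t**theta."""
    theta = lm / (lm + ln)
    lhs = sum(p * math.exp(-beta * v * (lm + ln))
              for v, p in zip(values, probs)) ** theta
    rhs = sum(p * math.exp(-beta * v * lm) for v, p in zip(values, probs))
    return lhs, rhs

# hypothetical potential: V in {0, 1} with equal probability,
# local times lm = 3 and ln = 4, over a range of beta values
results = [jensen_step(b, 3, 4, [0.0, 1.0], [0.5, 0.5])
           for b in (0.1, 0.5, 1.0, 2.0)]
```

Taking minus logarithms of such inequalities, site by site, is exactly what produces the subadditivity in Proposition 2.8 (i).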
Definition 2.9. We define the entrance times TL := inf{n : S(n) · `ˆ ≥ L} and T̃L := inf{n : S(n) · `ˆ ≤ L}.
Notice that TL coincides with T`ˆ,L , which appears below (2.8). For simplicity we will be using the notation TL instead.
Definition 2.10. (i) Consider the walk ( S(n) )M ≤n≤N . We will say that the walk forms a bridge of span L, and denote it by Br(M, N ; L), if
S(M ) · `ˆ ≤ S(n) · `ˆ < S(N ) · `ˆ  for M ≤ n < N ,
and (S(N ) − S(M )) · `ˆ = L. When M = 0, we will write Br(N ; L) instead.
(ii) Let us denote
(2.11)   B(L) = E ` [ e^{−Φβ (TL )} ; Br(TL ; L) ] = Σ_{N =0}^{∞} E ` [ e^{−Φβ (N )} ; Br(N ; L) ] .
Definition 2.11. Consider the random walk (S(n))M ≤n≤N . We will say that the random walk has a break point at level L if there exists an M < n < N such that S(n) · `ˆ = L and
S(n1 ) · `ˆ < S(n) · `ˆ ≤ S(n2 ) · `ˆ  for n1 < n ≤ n2 .
Definition 2.12. (i) Consider the random walk ( S(n) )M ≤n≤N . We will say that the random walk forms an irreducible bridge of span L, and we denote it by Ir(M, N ; L), if it forms a bridge of span L with no break points. When M = 0 we will write Ir(N ; L) instead.
(ii) Let us denote
(2.12)   Iβ (L) = E ` [ e^{−Φβ (TL )} ; Ir(TL ; L) ] = Σ_{N =0}^{∞} E ` [ e^{−Φβ (N )} ; Ir(N ; L) ] .
Definition 2.13. Let us define the (annealed) mass for bridges by
(2.13)   mB (β) = lim_{L→∞} − (1/L) log E E ` [ e^{− Σ_{n=0}^{TL} β Vω (Sn )} ; Br(TL ; L) ] = lim_{L→∞} − (1/L) log B(L) .
Notice that mB (β) depends on the direction `ˆ that we choose. We will refrain, though, from denoting this explicitly.
Proposition 2.14. There exists a constant C < 1 such that, for every L,
C e^{−mB (β)L} ≤ B(L) ≤ e^{−mB (β)L} .
Proof. Regarding the rightmost inequality we have that, for every L1 , L2 ,
B(L1 + L2 ) = E ` [ e^{−Φβ (TL1 +L2 )} ; Br(TL1 +L2 ; L1 + L2 ) ]
≥ E ` [ e^{−Φβ (TL1 +L2 )} ; Br(TL1 ; L1 ) ∩ Br(TL1 , TL1 +L2 ; L2 ) ]
= B(L1 ) B(L2 ),
where in the last equality we used Proposition 2.8 (iii). The rightmost inequality of the proposition now follows by supermultiplicativity.
Regarding the leftmost inequality, we will obtain a submultiplicative inequality.
This will be done as follows. We will observe a path in Br(TL1 +L2 ; L1 + L2 ) until
the first time it crosses level L1 , the contribution of which to the partition function
is essentially B(L1 ), and then from the last time that the path lies below level L1
until the first time it lies on level L1 + L2 . This contribution is essentially B(L2 ).
Let S^L := sup{n : Sn · `ˆ ≤ L}. In more detail, using again Proposition 2.8 (ii) and (iii), we have
B(L1 + L2 ) ≤ E ` [ e^{−Φβ (TL1 −1) − Φβ (S^{L1} ,TL1 +L2 )} ; Br(TL1 +L2 ; L1 + L2 ) ]
= Σ_{M,x,y} E ` [ e^{−Φβ (TL1 −1) − Φβ (S^{L1} ,TL1 +L2 )} ; S^{L1} = M, SM = x, STL1 −1 = y, Br(TL1 +L2 ; L1 + L2 ) ]
= Σ_{M,x,y} E ` [ e^{−Φβ (TL1 −1)} ; STL1 −1 = y, 0 < inf_{n≤TL1 } Sn · `ˆ ] · Py` ( SM = x ) 1_{L1 −l1 <x·`ˆ≤L1 }
· Ex` [ e^{−Φβ (TL1 +L2 )} ; Br(TL1 +L2 ; L1 + L2 − x · `ˆ), inf_{1≤n<TL1 +L2 } Sn · `ˆ > L1 ] .
We now want to make use of the fact that the above expectations and probabilities, as functions of the initial point, really depend only on which hyperplane the initial point belongs to, and not on the point itself. To make use of this, let us denote, for any point x, by [x] a representative lattice point of the hyperplane that x belongs to. With a slight abuse of notation we will use the notation [x] for the corresponding hyperplane as well. Then, using the translation invariance of the last expectation, we have that the above is estimated by
Σ_{[x],y} E ` [ e^{−Φβ (TL1 −1)} ; STL1 −1 = y, 0 < inf_{n≤TL1 } Sn · `ˆ ] · Σ_{M} Py` ( SM ∈ [x] ) 1_{L1 −l1 <[x]·`ˆ≤L1 }
· E[x]` [ e^{−Φβ (TL1 +L2 )} ; Br(TL1 +L2 ; L1 + L2 − [x] · `ˆ), inf_{1≤n<TL1 +L2 } Sn · `ˆ > L1 ] ,
and since
Σ_{M} Py` ( SM ∈ [x] ) 1_{L1 −l1 <[x]·`ˆ≤L1 } = Σ_{M} P[y]` ( SM ∈ [x] ) 1_{L1 −l1 <[x]·`ˆ≤L1 } ,
the last is equal to
Σ_{[x],[y]} E ` [ e^{−Φβ (TL1 −1)} ; STL1 −1 ∈ [y], 0 < inf_{n≤TL1 } Sn · `ˆ ] · Σ_{M} P[y]` ( SM ∈ [x] ) 1_{L1 −l1 <[x]·`ˆ≤L1 }
· E[x]` [ e^{−Φβ (TL1 +L2 )} ; Br(TL1 +L2 ; L1 + L2 − [x] · `ˆ), inf_{1≤n<TL1 +L2 } Sn · `ˆ > L1 ]
≤ Σ_{[x]} E ` [ e^{−Φβ (TL1 −1)} ; 0 < inf_{n≤TL1 } Sn · `ˆ ] · sup_{L1 −l1 ≤[y]·`ˆ<L1 } Σ_{M} P[y]` ( SM ∈ [x] ) 1_{L1 −l1 ≤[x]·`ˆ≤L1 }
· E[x]` [ e^{−Φβ (TL1 +L2 )} ; Br(TL1 +L2 ; L1 + L2 − [x] · `ˆ), inf_{1≤n<TL1 +L2 } Sn · `ˆ > L1 ] .
It is easy to see that there exist constants C1 , C2 such that, for every [x] with L1 − l1 ≤ [x] · `ˆ ≤ L1 ,
E[x]` [ e^{−Φβ (TL1 +L2 )} ; Br(TL1 +L2 ; L1 + L2 − [x] · `ˆ), inf_{1≤n<TL1 +L2 } Sn · `ˆ > L1 ] ≤ C1 B(L2 )
and
E ` [ e^{−Φβ (TL1 −1)} ; 0 < inf_{n≤TL1 } Sn · `ˆ ] ≤ C2 B(L1 ) .
The only difference between the above expectations and the corresponding values B(L1 ) and B(L2 ) is that either the initial or final point of the trajectories in the corresponding expectations might not lie on the beginning or ending hyperplane that determine the bridges. Nevertheless, they lie nearby, and so we can change the beginning or ending of these paths in a deterministic way, so that they become typical bridges of spans L1 and L2 , respectively. The cost of these alterations is clearly finite, uniformly over L1 , L2 (it depends, though, on β and ‖Vω ‖). We therefore have that
B(L1 + L2 ) ≤ C1 C2 sup_{L1 −l1 ≤[y]·`ˆ≤L1 } Σ_{M} P[y]` ( L1 − l1 ≤ SM · `ˆ ≤ L1 ) B(L1 ) B(L2 ).
Since the random walk P ` has a drift, the constant
C1 C2 sup_{L1 −l1 ≤[y]·`ˆ≤L1 } Σ_{M} P[y]` ( L1 − l1 ≤ SM · `ˆ ≤ L1 )
is finite, uniformly in L1 . From this, the leftmost inequality of the proposition follows by submultiplicativity.
Finally, we show that as β → 0 the annealed mass converges to 0.
Proposition 2.15. The annealed mass is continuous at β = 0, and therefore lim_{β→0} mB (β) = 0.
Proof. Clearly, mB (0) = 0, and so we only need to prove the continuity at 0. But this follows immediately from the fact that
0 ≤ mB (β) ≤ lim_{L→∞} − (1/L) log E ` [ e^{−β‖Vω ‖TL } ] ≤ β‖Vω ‖ lim_{L→∞} E ` [TL ]/L ,
where we used Jensen’s inequality at the last step, and the fact that, due to the drift of P ` , the last limit is finite.
2.4. The Second To First Moment Condition. The next proposition highlights the main estimate. The validity of (2.14), or equivalently of (2.16), implies the equality of the Lyapounov norms. In the rest of the paper we will be working towards the proof of (2.16).
Proposition 2.16. Let Py`1 ,y2 denote the joint distribution of two independent random walks with distributions Py`1 and Py`2 . Let also Φ̃β^{(2)} and Φβ^{(2)} be as defined in Definition 2.7. If
(2.14)   sup_L  E E ` [ e^{− Σ_{i=1}^{TL^1} βVω (Si^1 ) − Σ_{i=1}^{TL^2} βVω (Si^2 )} ] / E E ` [ e^{− Σ_{i=1}^{TL^1} βVω 1 (Si^1 ) − Σ_{i=1}^{TL^2} βVω 2 (Si^2 )} ] < ∞ ,
then
(2.15)   lim_{L→∞} − (1/L) log E ` [ e^{− Σ_{i=1}^{TL} β Vω (Si )} ] = lim_{L→∞} − (1/L) log E E ` [ e^{− Σ_{i=1}^{TL} β Vω (Si )} ] .
If (2.14) is valid for every vector ` ∈ Rd with rational coordinates (or, in other words, for the corresponding unit vector `ˆ = `/|`|), then the annealed and quenched Lyapounov norms are equal.
Proof. The left hand side of (2.15) is greater than or equal to its right hand side. This follows by Jensen’s inequality, since
lim_{L→∞} − (1/L) E log E ` [ e^{− Σ_{i=1}^{TL} β Vω (Si )} ] ≥ lim_{L→∞} − (1/L) log E E ` [ e^{− Σ_{i=1}^{TL} β Vω (Si )} ] ,
and by dominated convergence the left hand side of the last inequality is equal to the left hand side of (2.15). Suppose now that this inequality is strict. Consider then the function
Uω (L) = E ` [ e^{− Σ_{i=1}^{TL} β Vω (Si )} ] / E E ` [ e^{− Σ_{i=1}^{TL} β Vω (Si )} ] ,
and observe that, in the case of strict inequality, Uω (L) → 0 a.s. as L → ∞, since then the numerator decays exponentially faster than the denominator. A straightforward computation shows that the left hand side of (2.14) is equal to sup_L E Uω^2 (L), and so (2.14) implies that Uω is uniformly bounded in L2 (P). It therefore follows that E Uω (L) → 0 as L → ∞. On the other hand, this is a contradiction, since E Uω (L) = 1 for every L.
Finally, we can combine Corollary 2.5 with the continuity of the dual norms to
obtain the equality of the Lyapounov norms.
Proposition 2.17. The estimate (2.14) is valid if the following estimate is valid:
(2.16)   sup_L sup_{y 1 ,y 2 }  Σ_{N 1 ,N 2} Ey`1 ,y2 [ e^{−Φ̃β^{(2)} (N 1 ,N 2 )} ; Br(N 1 ; L) ∩ Br(N 2 ; L) ] / Σ_{N 1 ,N 2} Ey`1 ,y2 [ e^{−Φβ^{(2)} (N 1 ,N 2 )} ; Br(N 1 ; L) ∩ Br(N 2 ; L) ] < ∞ .
Proof. We will establish that the left side of (2.16) bounds, up to constants, the left side of (2.14). To this end, we restrict the expectation in the denominator of (2.14) to the paths that belong to Br(TL^1 ; L) ∩ Br(TL^2 ; L), to get that this denominator is bounded below by
(2.17)   Σ_{N 1 ,N 2} E ` [ e^{−Φβ^{(2)} (N 1 ,N 2 )} ; Br(N 1 ; L) ∩ Br(N 2 ; L) ] .
Regarding the numerator in (2.14), we have that it is bounded above by
Σ_{M 1 ,M 2} Σ_{y 1 ,y 2} E E ` [ e^{− Σ_{j=1,2} Σ_{i=M j +1}^{TL^j} βVω (Si^j )} ; S^{0,j} = M j , S^j (M j ) = y j , for j = 1, 2 ] ,
where S^{0,j} := sup{n : S^j (n) · `ˆ ≤ 0}. This equals
Σ_{M 1 ,M 2} Σ_{−l1 <y 1 ·`ˆ, y 2 ·`ˆ≤0} P ` ( S^j (M j ) = y j , j = 1, 2 ) · E Ey`1 ,y2 [ e^{− Σ_{j=1,2} Σ_{i=1}^{TL^j} βVω (Si^j )} ; inf_{n j <TL^j} S^j (n j ) · `ˆ > 0, j = 1, 2 ]
(2.18)   ≤ C3 Σ_{M 1 ,M 2} Σ_{−l1 <y 1 ·`ˆ, y 2 ·`ˆ≤0} P ` ( S^j (M j ) = y j , j = 1, 2 ) Σ_{N 1 ,N 2} Ey`1 ,y2 [ e^{−Φ̃β^{(2)} (N 1 ,N 2 )} ; Br(N 1 ; L) ∩ Br(N 2 ; L) ] .
The last inequality is obtained in a similar fashion as the corresponding estimates at the end of Proposition 2.14: one needs to change in a deterministic fashion the end points of the trajectories in the last expectation, in order to obtain bridges of span L. The resulting constant C3 depends on β and ‖Vω ‖. Since, due to the drift of P ` , the sum
Σ_{M 1 ,M 2} Σ_{−l1 <y 1 ·`ˆ, y 2 ·`ˆ≤0} P ` ( S^j (M j ) = y j , j = 1, 2 )
is finite, we can combine the estimates (2.17) (notice that the expectation in this relation is independent of the starting point) and (2.18) to obtain that the left hand side of (2.16) dominates (up to constants) the left hand side of (2.14).
3. Mass Gap Estimate
Let us define the following random variables:
M0 := 0,   η0 := 0,
η1 := inf{n : Sn · `ˆ > S0 · `ˆ},
D := inf{n : Sn · `ˆ < S0 · `ˆ},
M := sup{Sn · `ˆ : n ≤ D},
M1 := sup{Sn · `ˆ : η1 ≤ n ≤ D ◦ θη1 },
and inductively
ηi := inf{n > ηi−1 : Sn · `ˆ > Mi−1 },
Mi := sup{Sn · `ˆ : ηi ≤ n ≤ D ◦ θηi }.
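Under P ` the projection Sn · `ˆ has positive drift, so the backtrack event {D < ∞} has probability strictly less than 1, which is what makes the ladder structure above terminate at a regeneration. A Monte Carlo sketch for a simplified ±1 projection with upward bias p (a stand-in for the true projected walk, chosen so that P(D < ∞) = (1 − p)/p is known in closed form):

```python
import random

def backtrack_probability(p, trials, horizon, seed=0):
    """Estimate P(D < infinity) for a +-1 walk with P(step = +1) = p > 1/2,
    where D = inf{n : S_n < S_0}; for this walk P(D < infinity) = (1-p)/p.
    Walks staying at or above the start for `horizon` steps are treated as
    never backtracking (the drift makes the truncation error negligible)."""
    rng = random.Random(seed)
    backtracked = 0
    for _ in range(trials):
        s = 0
        for _ in range(horizon):
            s += 1 if rng.random() < p else -1
            if s < 0:               # the event {D <= horizon}
                backtracked += 1
                break
    return backtracked / trials

est = backtrack_probability(p=0.7, trials=20000, horizon=400)
```

With p = 0.7 the exact value is 3/7 ≈ 0.4286 and the estimate lands close to it; the exponential tail of M on {D < ∞} used in (3.3) can be probed in the same way.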
Proposition 3.1. Let β be small enough. Then there exists ρ = ρ(λ) > 0 such that
(3.1)   Σ_{L∈(l1 /r)N} e^{(mB (β)+ρ)L} Iβ (L) < ∞ .
Proof. Let us first prove the result in the case β = 0. Let ρ0 > 0 be specified later. Then
Σ_{L∈(l1 /r)N} e^{ρ0 L} I0 (L) = Σ_{L∈(l1 /r)N} Σ_{k=1}^{∞} e^{ρ0 L} P ` ( Ir(TL ; L); TL = ηk )
= Σ_{L∈(l1 /r)N} Σ_{k=1}^{∞} E ` [ e^{ρ0 Σ_{i=1}^{k−1} (Mi −Sηi ·`ˆ)} e^{ρ0 Σ_{i=1}^{k} (Sηi ·`ˆ−Mi−1 )} ; D ◦ θη1 < · · · < D ◦ θηk−1 < ηk = TL ]
≤ e^{ρ0} Σ_{L∈(l1 /r)N} Σ_{k=1}^{∞} E ` [ e^{ρ0 Σ_{i=1}^{k−1} (Mi −Sηi ·`ˆ+1)} ; D ◦ θη1 < · · · < D ◦ θηk−1 < ηk = TL ]
= e^{ρ0} Σ_{k=1}^{∞} E ` [ e^{ρ0 Σ_{i=1}^{k−1} (Mi −Sηi ·`ˆ+1)} ; D ◦ θη1 < · · · < D ◦ θηk−1 < ηk < ∞ ]
(3.2)   = e^{ρ0} Σ_{k=1}^{∞} E ` [ e^{ρ0 (1+M )} ; D < ∞ ]^{k−1} .
We now need to show that, for ρ0 small enough, E ` [ e^{ρ0 (1+M )} ; D < ∞ ] < 1. Since for ρ0 = 0 the last quantity equals P ` [ D < ∞ ] < 1, it is enough to show that for ρ0 small enough E ` [ e^{ρ0 M} ; D < ∞ ] < ∞. To this end, we estimate the tails P ` ( M > x; D < ∞ ). Notice that this event implies that the walk can backtrack below 0 only after it goes beyond level x, that is, only after time [ x/(ê1 · `ˆ) ]. Therefore, we have
(3.3)   P ` ( M > x; D < ∞ ) ≤ Σ_{n>[ x/(ê1 ·`ˆ) ]} P ` ( D = n ) ≤ Σ_{n>[ x/(ê1 ·`ˆ) ]} P ` ( Sn · `ˆ < 0 ) < e^{−Cx} ,
where the last inequality, for some C > 0, follows from standard large deviation results. This proves that (3.2) is finite, and thus the mass gap estimate for β = 0.
To prove (3.1) for arbitrary small β, we pick ρ = ρ0 /2. By Proposition 2.15, we have that for β small enough, mB (β) + ρ < ρ0 . Using also the fact that Iβ ≤ I0 , we have that
Σ_{L∈(l1 /r)N} e^{(mB (β)+ρ)L} Iβ (L) ≤ Σ_{L∈(l1 /r)N} e^{ρ0 L} I0 (L) < ∞ ,
by the first part of the proof.
The next proposition proves the mass gap estimate for arbitrary β. We state it in the case `ˆ = ê1 .
Proposition 3.2. Assume that `ˆ = ê1 . Then, for any β > 0, there exists a ρ = ρ(λ) > 0 such that
(3.4)   Σ_L e^{(ρ+mB (β)) L} Iβ (L) < ∞ .
Proof. For the sake of this proof only, Pk` , k an integer, will denote the distribution of the random walk P ` starting from level k, that is, starting from the hyperplane {x : x · `ˆ = k}. Let us observe the walk that forms the Ir(L) until the first time it hits level L − 1. This part of the walk, (Sn )n≤TL−1 , might have break points; let k ∈ [1, L − 1] be its first break point. In order to have an Ir(L), the walk needs to backtrack in order to cover that break point. Based on this observation, we can write the following renewal inequality:
Iβ (L) ≤ Σ_{k=1}^{L−1} Iβ (k) Pk` ( TL−1 < D < TL ) Bβ (L − k + 1).
We now multiply by e^{(ρ+mB (β)) L} and sum the inequality over L:
Σ_{L=2}^{∞} e^{(ρ+mB (β))L} Iβ (L) ≤ Σ_{L=2}^{∞} Σ_{k=1}^{L−1} e^{(ρ+mB (β))k} Iβ (k) e^{ρ(L−k)} Pk` ( TL−1 < D < TL ) e^{mB (β)(L−k)} Bβ (L − k)
= Σ_{k=1}^{∞} e^{(ρ+mB (β))k} Iβ (k) Σ_{L=k}^{∞} e^{ρ(L−k+1)} Pk` ( TL < D < TL+1 ) e^{mB (β)(L−k+1)} Bβ (L − k + 1).
We can now use the inequality e^{mB (β)(L−k+1)} Bβ (L − k + 1) ≤ 1, from Proposition 2.14, to get the bound
Σ_{L=2}^{∞} e^{(ρ+mB (β))L} Iβ (L) ≤ Σ_{k=1}^{∞} e^{(ρ+mB (β))k} Iβ (k) Σ_{L=k}^{∞} e^{ρ(L−k+1)} Pk` ( TL < D < TL+1 )
= Σ_{k=1}^{∞} e^{(ρ+mB (β))k} Iβ (k) Σ_{L=k}^{∞} e^{ρ(L−k+1)} P ` ( M = L − k; D < ∞ ) ,
and, as in (3.3), we have that for ρ small enough, Σ_{L=k}^{∞} e^{ρ(L−k+1)} P ` ( M = L − k; D < ∞ ) := δ < 1. Therefore
(1 − δ) Σ_{L=2}^{∞} e^{(ρ+mB (β))L} Iβ (L) < δ e^{(ρ+mB (β))} Iβ (1),
thus proving our claim.
4. Markovian Structure
In this section we build a Markovian structure, with the purpose of formalising the notion of direction that underlies our model, and therefore making the analogy with directed polymers [4] more transparent. The notion of direction is based on the following regeneration structure. Due to the presence of a drift, or equivalently the presence of a positive λ, the path of the walk includes points such that, after the walk hits them, it does not go below the hyperplane they belong to. In other words, the trajectory of the walk consists of a union of irreducible bridges. We formalise this as follows.
Definition 4.1. The measure P β denotes the distribution of the Markov process (S(τi ), S(τi ) · `ˆ, τi ), with transition probabilities
pβ (yi+1 , Li+1 , ni+1 ; yi , Li , ni ) := e^{mB (β)(Li+1 −Li )} Ey`i [ e^{−Φβ (ni+1 −ni )} ; Ir(ni+1 − ni ; Li+1 − Li ), S(ni+1 − ni ) = yi+1 ] .
Let us mention that the notation in the above definition would have been lighter if we had, equivalently, considered the Markov process (S(τ_i), τ_i). The reason we insist on the above notation is to highlight the levels where the renewals take place. This will make things more transparent later on.
The next proposition shows that the above kernel is indeed a probability kernel.

Proposition 4.2. For any β we have that

Σ_{L≥1} Σ_{N=0}^∞ e^{m_B(β)L} E^ℓ[ e^{−Φ_β(N)} ; Ir(N; L) ] = 1.

Proof. Consider B(L) = Σ_{N=0}^∞ E^ℓ[ e^{−Φ_β(N)} ; Br(N; L) ], and decompose the expectation according to the level of the first break point. We then obtain the following renewal equation:

B(L) = Σ_{1≤k≤L} I(k) B(L−k).

Define b(s) := Σ_{L≥0} s^L e^{m_B(β)L} B(L) and i(s) := Σ_{L≥1} s^L e^{m_B(β)L} I(L), for 0 < s < 1. Then the previous equation can be transformed to the equation

(4.1) b(s) = 1 + b(s) i(s).

By Proposition 2.14 we have that b(1) = ∞, and so (4.1) implies that i(1) = 1, that is, Σ_{L≥1} Σ_{N=0}^∞ e^{m_B(β)L} E^ℓ[ e^{−Φ_β(N)} ; Ir(N; L) ] = 1.
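The generating-function identity (4.1) can be checked numerically on toy data. In the sketch below the weight sequence I is made up (it stands in for e^{m_B(β)L} I(L), normalised to total mass 1), B is built through the renewal B(L) = Σ_{1≤k≤L} I(k)B(L−k) with B(0) = 1, and the identity b(s) = 1 + b(s)i(s) is verified at a fixed 0 < s < 1:

```python
# Toy check of the renewal identity (4.1): b(s) = 1 + b(s) i(s).
# The sequence I below is hypothetical; it plays the role of e^{m_B L} I(L).
I = {1: 0.5, 2: 0.3, 3: 0.2}

Lmax = 200
B = [1.0] + [0.0] * Lmax                 # B(0) = 1
for L in range(1, Lmax + 1):             # renewal: B(L) = sum_k I(k) B(L-k)
    B[L] = sum(I[k] * B[L - k] for k in I if k <= L)

s = 0.8
b = sum(B[L] * s**L for L in range(Lmax + 1))   # truncated b(s)
i = sum(I[k] * s**k for k in I)                  # i(s)
assert abs(b - (1 + b * i)) < 1e-9
```

Truncating at Lmax is harmless here, since B(L) ≤ 1 by induction and s^L decays geometrically.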
The mass gap shows that S(τ_1)·ℓ̂ has exponential moments under P^β. The next proposition shows that τ_1 also has exponential moments. The proof follows the lines of the moment estimates for the regeneration times of random walks in random environment [3].

Proposition 4.3. For β small enough, there exists a constant c_1 such that

P^β( τ_1 > u ) ≤ e^{−c_1 u}.
Proof. Consider h as defined in (2.9); then

P^β( τ_1 > u ) ≤ P^β( S(τ_1)·ℓ̂ > (h/2)u ) + P^β( τ_1 > u, S(τ_1)·ℓ̂ ≤ (h/2)u ).

Regarding the first term we have that

P^β( S(τ_1)·ℓ̂ > (h/2)u ) ≤ e^{−ρ(h/2)u} E^β[ e^{ρ S(τ_1)·ℓ̂} ] ≤ C e^{−ρ(h/2)u},

by the exponential mass gap, Proposition 3.1.

Regarding the second term we have that

P^β( τ_1 > u, S(τ_1)·ℓ̂ ≤ (h/2)u ) = Σ_L Σ_{N=1}^∞ 1_{{N>u, L≤(h/2)u}} e^{m_B(β)L} E^ℓ[ e^{−Φ_β(N)} ; Ir(N; L) ]
  ≤ e^{(h/2)m_B(β)u} Σ_{N>u} Σ_{L≤(h/2)u} P^ℓ( Br(N, L) )
  ≤ e^{(h/2)m_B(β)u} P^ℓ( S(u)·ℓ̂ ≤ (h/2)u ).

On the other hand, N_n·ℓ̂ := S(n)·ℓ̂ − S(0)·ℓ̂ − Σ_{i=1}^{n} E^ℓ_{S_{i−1}}[ (S_i − S_{i−1})·ℓ̂ ] is a P^ℓ-martingale. By (2.9), the increments of N_n·ℓ̂ satisfy |N_n·ℓ̂ − N_{n−1}·ℓ̂| ≤ 1+h; recall that under P^ℓ, S(0) = 0. Moreover, on the set {S(u)·ℓ̂ ≤ (h/2)u} we have that N_u·ℓ̂ ≤ −(h/2)u.

By Azuma's inequality [1] we have that

e^{(h/2)m_B(β)u} P^ℓ( S(u)·ℓ̂ ≤ (h/2)u ) ≤ e^{(h/2)m_B(β)u} P^ℓ( N_u·ℓ̂ ≤ −(h/2)u ) ≤ e^{(h/2)m_B(β)u} e^{−h²(u−2)²/(32(1+h)u)},

and this implies the result for β small enough, since m_B(β) tends to 0 as β tends to 0.
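Azuma's inequality as used above can be illustrated on the simplest martingale, a ±1 simple random walk, where the lower tail can be computed exactly and compared with the bound exp(−a²/2n) (increments bounded by 1); the values n, a below are arbitrary:

```python
# Azuma bound for a +/-1 simple random walk S_n:
# P(S_n <= -a) <= exp(-a^2 / (2n)), since the martingale increments are <= 1.
from math import comb, exp

def lower_tail(n, a):
    # S_n = 2H - n with H ~ Binomial(n, 1/2), so S_n <= -a  <=>  H <= (n-a)/2
    kmax = (n - a) // 2
    return sum(comb(n, k) for k in range(kmax + 1)) / 2**n

n, a = 100, 20
assert lower_tail(n, a) <= exp(-a**2 / (2 * n))
```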
The following corollary generalises the mass gap estimate and will also be useful.

Corollary 4.4. For β and θ small enough, we have that

E^β[ e^{θ|S(τ_1)|} ] < ∞.

Proof. The proof follows easily from the previous proposition and the observation that |S(τ_1)| ≤ τ_1.
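The standard integration-by-tails step behind this observation can be spelled out as follows (a sketch; c_1 is the constant from Proposition 4.3):

```latex
\mathbb{E}^{\beta}\!\left[e^{\theta |S(\tau_1)|}\right]
  \le \mathbb{E}^{\beta}\!\left[e^{\theta \tau_1}\right]
  = 1 + \theta \int_0^{\infty} e^{\theta u}\, \mathbb{P}^{\beta}(\tau_1 > u)\, du
  \le 1 + \theta \int_0^{\infty} e^{(\theta - c_1) u}\, du
  = 1 + \frac{\theta}{c_1 - \theta} < \infty,
\qquad 0 < \theta < c_1 .
```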
5. Second To First Moment Estimate

We are now moving towards the proof of the main estimate (2.16). We will need to consider two independent copies of the walks, with distributions P^ℓ_{y_1}, P^ℓ_{y_2}. We will denote the two paths by (S_n^1) and (S_n^2), respectively, and their joint distribution by P^ℓ_{y_1,y_2}. These two copies will then naturally give rise to two independent copies of the Markovian process defined in Definition 4.1. We will denote this joint distribution by P^β_{y_1,y_2}. Let τ_i^1 and τ_i^2, as defined in Definition 4.1, correspond to the two independent Markovian copies. We then define

(5.1) ι_1(N_1) := inf{ n : Σ_{i=1}^n τ_i^1 ≥ N_1 },   ι_2(N_2) := inf{ n : Σ_{i=1}^n τ_i^2 ≥ N_2 },

and

(5.2) ζ_i^1 := 1_{{ ∃ j : Ir(τ_i^1; ℓ_i^1) ∩ Ir(τ_j^2; ℓ_j^2) ≠ ∅ }}.

The random variable ζ_i^1 indicates whether the two copies intersect within an irreducible bridge. Ir(τ_i^1; ℓ_i^1) will denote the i-th irreducible bridge for the first walk, and similarly Ir(τ_j^2; ℓ_j^2) the j-th irreducible bridge for the second walk.
Lemma 5.1. Consider two walks (S_n^1)_{n≤N_1} and (S_n^2)_{n≤N_2}. Let Φ̃^{(2)}(N_1, N_2) and Φ^{(2)}(N_1, N_2) be as defined in Definition 2.7. Then, if α(β) := 2β‖V_ω‖, we have

−Φ̃^{(2)}(N_1, N_2) ≤ α(β) Σ_{i=1}^{ι(N_1)} τ_i^1 ζ_i^1 − Φ^{(2)}(N_1, N_2).

Proof. Let L^1_{N_1}(·) and L^2_{N_2}(·) be the local times of the paths (S_n^1)_{n≤N_1} and (S_n^2)_{n≤N_2}, respectively. Let also (V_ω(x))_{x∈Z^d} and (V'_ω(x))_{x∈Z^d} be two independent copies of the disorder. We have by Definition 2.7 that

−Φ̃^{(2)}(N_1, N_2) = Σ_{x∈Z^d} log E exp( −βV_ω(x)( L^1_{N_1}(x) + L^2_{N_2}(x) ) ).

Moreover, by adding and subtracting the term βV'_ω(x)L^1_{N_1}(x), we have that

log E exp( −βV_ω(x)( L^1_{N_1}(x) + L^2_{N_2}(x) ) )
  = log E[ exp( −β(V_ω(x) − V'_ω(x)) L^1_{N_1}(x) ) exp( −βV'_ω(x)L^1_{N_1}(x) − βV_ω(x)L^2_{N_2}(x) ) ]
  ≤ log [ e^{α(β)L^1_{N_1}(x)} E e^{−βV_ω(x)L^1_{N_1}(x)} E e^{−βV_ω(x)L^2_{N_2}(x)} ]
  = α(β)L^1_{N_1}(x) + log E e^{−βV_ω(x)L^1_{N_1}(x)} + log E e^{−βV_ω(x)L^2_{N_2}(x)},

where α(β) = 2β‖V_ω‖. Repeating the same calculation with N_1 and N_2 interchanged, we obtain that

(5.3) −Φ̃^{(2)}(N_1, N_2) ≤ α(β) Σ_{x∈Z^d} L^1_{N_1}(x) ∧ L^2_{N_2}(x) − Φ(N_1) − Φ(N_2).

Let τ_i^1 and τ_i^2 be the time durations of the i-th irreducible bridges of the walks S^1 and S^2, respectively. Then we have that

(5.4) Σ_{x∈Z^d} L^1_{N_1}(x) ∧ L^2_{N_2}(x) ≤ Σ_{i=1}^{ι(N_1)} τ_i^1 ζ_i^1;

that is, the total amount of time that the first walk spends on sites visited also by the second walk is bounded by the total time of the irreducible bridges of the first walk inside which it intersects the second walk. The result now follows from (5.3) and (5.4).
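The overlap bound (5.4) rests on the elementary fact that the summed minimum of the two local times is at most the time the first walk spends on sites also visited by the second. This can be illustrated on toy one-dimensional walks (the walks and seed below are hypothetical, chosen only for the illustration):

```python
# Illustration of the local-time overlap bound: sum_x L1(x) ^ L2(x) is at
# most the time the first walk spends on sites visited by the second walk.
import random
from collections import Counter

random.seed(0)
walk1, walk2 = [0], [5]
for _ in range(200):
    walk1.append(walk1[-1] + random.choice([-1, 1]))
    walk2.append(walk2[-1] + random.choice([-1, 1]))

L1, L2 = Counter(walk1), Counter(walk2)          # local times
overlap = sum(min(L1[x], L2[x]) for x in L1)
time_on_common_sites = sum(L1[x] for x in L1 if L2[x] > 0)
assert overlap <= time_on_common_sites <= len(walk1)
```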
Let us point out that if ζ_i^1 were equal to zero for every i ≥ 1, then we would see from the previous lemma that the estimate (2.16) holds trivially. This, of course, need not be the case, and so the main point is to be able to control the frequency of the event ζ_i^1 > 0 and the exponential moments of the duration of the corresponding irreducible bridges. This is summarised in Proposition 5.3, which follows.

We first define

(5.5) σ_1^1 := inf{ i ≥ 1 : ζ_i^1 > 0 },

and in a similar way we define σ_1^2.

We will also need the following general estimate on the Green's function, which is proven in [3].
Proposition 5.2. ([3]) Let p(x), x ∈ Z^d, be a probability distribution on Z^d with covariance matrix Σ_p. Let p_n denote its n-fold convolution and

G(z) := Σ_{i,j≥0} Σ_{x∈Z^d} p_i(x) p_j(x+z).

If d ≥ 4 and there are constants γ_1, γ_2, γ_3 > 0 such that

Σ_{x∈Z^d} p(x) e^{γ_1|x|} < ∞,   Σ_p ≥ γ_2 Id,   | Σ_{x∈Z^d} x p(x) | ≥ γ_3,

then there exist constants K_1(d, γ_1, γ_2), K_2(d, γ_1, γ_2, γ_3) such that

sup_{x∈Z^d} p_n(x) < K_1 n^{−d/2},   sup_{x∈Z^d} ( 1 + |x|^{(d−3)/2} ) G(x) < K_2.

It is clear that the distribution p(x) := P^β( S(τ_1) = x ) is nondegenerate, so the covariance matrix satisfies Σ_p ≥ γ_2 Id (Id is the identity matrix). Also, | Σ_{x∈Z^d} x p(x) | ≥ γ_3 for an appropriate γ_3, and, by Corollary 4.4, Σ_{x∈Z^d} p(x) e^{γ_1|x|} < ∞. Therefore the previous proposition is applicable, and this will be useful in the following proposition.
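The first estimate in Proposition 5.2 is the local limit scaling sup_x p_n(x) ≲ n^{−d/2}. A one-dimensional toy version (d = 1, so the scaling is n^{−1/2}) can be checked by brute-force convolution; the biased step distribution below is made up for illustration, and the constant 1 in the asserted bound happens to suffice for it:

```python
# Sup-norm decay of n-fold convolutions: local CLT scaling n^{-1/2} in d = 1.
from math import sqrt
from collections import defaultdict

p = {1: 0.7, -1: 0.3}        # hypothetical biased step distribution (drift 0.4)

def convolve(a, b):
    c = defaultdict(float)
    for x, px in a.items():
        for y, py in b.items():
            c[x + y] += px * py
    return dict(c)

pn = dict(p)
for n in range(2, 33):
    pn = convolve(pn, p)                 # pn is now the n-fold convolution
    assert max(pn.values()) <= 1 / sqrt(n)
```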
Proposition 5.3. For β small enough it holds that

(5.6) sup_{y^1,y^2} E^β_{y^1,y^2}[ e^{α(β) τ^1_{σ_1^1}} ; σ_1^1 < ∞ ] < 1.
Proof. We decompose the expectation with respect to the positions of the random walks at the beginning of the irreducible bridges inside which they intersect. Using also the Markov property, we have that

E^β_{y_1,y_2}[ e^{α(β) τ^1_{σ_1^1}} ; σ_1^1 < ∞ ]
  = Σ_{m_1,m_2=1}^∞ E^β_{y_1,y_2}[ e^{α(β) τ^1_{m_1}} ; σ_1^1 = m_1, σ_1^2 = m_2 ]
(5.7)  = Σ_{m_1,m_2=1}^∞ Σ_{x_1,x_2} P^β_{y_1,y_2}( S^1(m_1) = x_1, S^2(m_2) = x_2 ) E^β_{x_1,x_2}[ e^{α(β) τ^1_1} ; ζ^1_1 > 0 ].

We now use the fact that the event ζ^1_1 > 0 implies that τ^1_1 + τ^2_1 ≥ |x_1 − x_2|. In other words, if the walks starting at x_1, x_2 intersect inside their first irreducible bridges, then the total duration of these bridges has to be greater than their initial distance. Then (5.7) is estimated by (we also use the fact that τ^1_1 and τ^2_1 have the same distribution)

Σ_{m_1,m_2=1}^∞ Σ_{x_1,x_2} P^β_{y_1,y_2}( S^1(m_1) = x_1, S^2(m_2) = x_2 ) e^{−α(β)|x_1−x_2|} E^β_{x_1,x_2}[ e^{α(β)τ^1_1} e^{α(β)(τ^1_1+τ^2_1)} ; ζ^1_1 > 0 ]
  ≤ Σ_{m_1,m_2=1}^∞ Σ_{x,z} P^β_{y_1,y_2}( S^1(m_1) = x, S^2(m_2) = x+z ) e^{−α(β)|z|} E^β_{x,x+z}[ e^{4α(β)τ^1_1} ; ζ^1_1 > 0 ].

By Proposition 4.3, for β small enough, we can bound the above by

C Σ_{m_1,m_2=1}^∞ Σ_{x,z} P^β_{y_1,y_2}( S^1(m_1) = x, S^2(m_2) = x+z ) e^{−α(β)|z|}.

Define

G(z; y_1, y_2) := Σ_{m_1,m_2=1}^∞ Σ_x P^β_{y_1,y_2}( S^1(m_1) = x, S^2(m_2) = x+z ).

Thus, we have obtained that

E^β_{y_1,y_2}[ e^{α(β)τ^1_{σ_1^1}} ; σ_1^1 < ∞ ] ≤ Σ_z e^{−α(β)|z|} G(z; y_1, y_2) < ∞,

by Proposition 5.2. Therefore we can apply dominated convergence in (5.7) to obtain that

lim_{β→0} E^β_{y_1,y_2}[ e^{α(β)τ^1_{σ_1^1}} ; σ_1^1 < ∞ ] = P^0_{y_1,y_2}( σ_1^1 < ∞ ) = 1 − P^ℓ_{y_1,y_2}( the walks do not intersect ) < 1,

uniformly in y^1, y^2, since d > 3. This clearly implies that for β small enough (5.6) is valid.
We are finally ready for the proof of the main estimate.

Proposition 5.4. Uniformly in y^1, y^2 ∈ Z^d, for β small enough, we have that

(5.8)  sup_L ( Σ_{N^1,N^2} E^ℓ_{y^1,y^2}[ e^{−Φ̃_β^{(2)}(N^1,N^2)} ; Br(N^1; L) ∩ Br(N^2; L) ] ) / ( Σ_{N^1,N^2} E^ℓ_{y^1,y^2}[ e^{−Φ_β^{(2)}(N^1,N^2)} ; Br(N^1; L) ∩ Br(N^2; L) ] ) < ∞.
Proof. Proposition 2.14 implies that the left-hand side of (5.8) is bounded by

(5.9)  C e^{2m_B(β)L} Σ_{N^1,N^2=1}^∞ E^ℓ_{y^1,y^2}[ e^{−Φ̃_β^{(2)}(N^1,N^2)} ; Br(N^1; L) ∩ Br(N^2; L) ].

Moreover, we have the decomposition

(5.10)  Br(N^j; L) = ∪_{k=1}^∞ ∪_{(ℓ_i^j)_{i≤k} : ℓ_1^j+···+ℓ_k^j = L} ∪_{(n_i^j)_{i≤k}} ∩_{i=1}^k Ir(n_i^j; ℓ_i^j),

for j = 1, 2. Using this fact, Lemma 5.1 and Proposition 2.8 (iii), we have that (5.9) (we drop the constant C) can be estimated by

sup_L Σ_{k^1,k^2=1}^∞ Σ_{N^1,N^2} Σ_{(ℓ_i^j)_{i≤k^j} : ℓ_1^j+···+ℓ_{k^j}^j = L} Σ_{(n_i^j)_{i≤k^j} : n_1^j+···+n_{k^j}^j = N^j} E^ℓ_{y^1,y^2}[ e^{α(β) Σ_i^{k^1} n_i^1 ζ_i^1} Π_{j=1,2} Π_{i=1}^{k^j} e^{m_B(β)ℓ_i^j − Φ_β(n_i^j)} ; ∩_{j,i} Ir(n_i^j; ℓ_i^j) ]

(5.11)  = sup_L Σ_{k^1,k^2=1}^∞ E^β_{y^1,y^2}[ e^{α(β) Σ_i^{k^1} τ_i^1 ζ_i^1} ; Σ_{i=1}^{k^j} ℓ_i^j = L, j = 1, 2 ].

Denote by σ_1^j := inf{ i : ζ_i^j > 0 }. In the case that σ_1^1 > k^1, the last expectation is equal to

P^β_{y^1,y^2}( σ_1^j > k^j ; Σ_{i=1}^{k^j} ℓ_i^j = L, j = 1, 2 ) ≤ 1.

So it follows that (5.11) is bounded by

1 + sup_L Σ_{k^1,k^2=1}^∞ Σ_{s^j≤k^j, j=1,2} E^β_{y^1,y^2}[ e^{α(β) Σ_i^{k^1} τ_i^1 ζ_i^1} ; σ_1^j = s^j, j = 1, 2 ; Σ_{i=1}^{k^j} ℓ_i^j = L, j = 1, 2 ].
For any y_1, y_2 ∈ Z^d, denote by

(5.12)  Ψ(y_1, y_2) := sup_L Σ_{k^1,k^2=1}^∞ E^β_{y_1,y_2}[ e^{α(β) Σ_i^{k^1} τ_i^1 ζ_i^1} ; Σ_{i=1}^{k^j} ℓ_i^j = L, j = 1, 2 ],

and ‖Ψ‖ := sup_{y_1,y_2∈Z^d} Ψ(y_1, y_2). Use the Markov property of the coupled measure P^β_{y_1,y_2} at (s^1, s^2) to write the expectation as

1 + sup_L Σ_{k^1,k^2=1}^∞ Σ_{s^j≤k^j} E^β_{y_1,y_2}[ e^{α(β)τ^1_{s^1}} ; σ_1^j = s^j, j = 1, 2 ; E^β_{S^1(τ^1_{s^1}), S^2(τ^2_{s^2})}[ e^{α(β) Σ_{i=s^1+1}^{k^1} τ_i^1 ζ_i^1} ; Σ_{i=s^j+1}^{k^j} ℓ_i^j = L − Σ_{i=1}^{s^j} ℓ_i^j ] ]
  ≤ 1 + ‖Ψ‖ Σ_{s^j, j=1,2} E^β_{y_1,y_2}[ e^{α(β)τ^1_{s^1}} ; σ_1^j = s^j, j = 1, 2 ]
  = 1 + ‖Ψ‖ E^β_{y_1,y_2}[ e^{α(β)τ^1_{σ_1^1}} ; σ_1^j < ∞, j = 1, 2 ],

where we used the fact that

sup_L Σ_{k^j≥s^j} E^β_{z^1,z^2}[ e^{α(β) Σ_{i=s^1+1}^{k^1} τ_i^1 ζ_i^1} ; Σ_{i=s^j+1}^{k^j} ℓ_i^j = L − Σ_{i=1}^{s^j} ℓ_i^j ] = Ψ(z^1, z^2).

Thus we have obtained the estimate

Ψ(y_1, y_2) ≤ 1 + ‖Ψ‖ E^β_{y_1,y_2}[ e^{α(β)τ^1_{σ_1^1}} ; σ_1^j < ∞, j = 1, 2 ].

Since y_1, y_2 are arbitrary, we have that

‖Ψ‖ ≤ 1 + ‖Ψ‖ sup_{y_1,y_2} E^β_{y_1,y_2}[ e^{α(β)τ^1_{σ_1^1}} ; σ_1^j < ∞, j = 1, 2 ],

and by Proposition 5.3 it follows that ‖Ψ‖ < ∞ for small enough β. This concludes the proof.
Proof of Theorem 1.1. It follows from Proposition 5.4, Proposition 2.17 and
Proposition 2.16.
6. Some Consequences

Let us mention in this paragraph some consequences of Theorem 1.1.

It is in general difficult to obtain asymptotic properties of the Green's function for random walks in a potential, not necessarily random. As far as we know, the only case where this has been successful is when the potential is constant [11]. In this case it has been computed that
Theorem 6.1. (Zerner) Suppose that the potential V_ω is identically equal to 0, and let αλ(·), λ > 0, be the corresponding Lyapounov norm. Then for x = (x_1, ..., x_d) ∈ R^d we have that

αλ(x) = Σ_{i=1}^d x_i sinh^{−1}(x_i s),

where s > 0 solves the equation

e^λ d = Σ_{i=1}^d sqrt( 1 + (x_i s)^2 ).
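Zerner's formula is straightforward to evaluate numerically: solve the scalar equation for s by bisection and substitute into the sum. As a sanity check, for x = e_1 the equation gives sqrt(1+s²) = d·e^λ − d + 1, so αλ(e_1) = cosh^{−1}(d e^λ − d + 1); the norm is also positively homogeneous. A sketch (the parameter values d, λ are arbitrary):

```python
# Evaluate Zerner's formula: alpha_lambda(x) = sum_i x_i * asinh(x_i * s),
# with s > 0 solving d * e^lambda = sum_i sqrt(1 + (x_i * s)^2).
from math import asinh, acosh, exp, sqrt

def alpha(x, lam):
    d = len(x)
    f = lambda s: sum(sqrt(1 + (xi * s) ** 2) for xi in x) - d * exp(lam)
    lo, hi = 0.0, 1.0
    while f(hi) < 0:                 # bracket the root (f(0) < 0 for lam > 0)
        hi *= 2
    for _ in range(200):             # bisection
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    s = (lo + hi) / 2
    return sum(xi * asinh(xi * s) for xi in x)

d, lam = 4, 0.25
x = [1.0] + [0.0] * (d - 1)          # x = e_1
assert abs(alpha(x, lam) - acosh(d * exp(lam) - d + 1)) < 1e-9
assert abs(alpha([2.0, 0.0, 0.0, 0.0], lam) - 2 * alpha(x, lam)) < 1e-9
```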
The situation where the potential is inhomogeneous becomes much more involved. In the annealed case, one expects that the random walk behaves as if it were in a constant, averaged potential. Still, though, this correspondence is nontrivial. In the low disorder regime, W.-M. Wang [10] has obtained an asymptotic expansion, with respect to the disorder β, of the annealed Lyapounov norm. Using supersymmetric methods, she obtained that in the low disorder regime the annealed Lyapounov norm becomes asymptotic to the Lyapounov norm of a walk in a constant potential. Let Gλ(x, y) denote the Green's function corresponding to a constant potential −λ. The translation of the main result in [10] is the following.

Theorem 6.2. (Wang) For every λ > 0 there exists a β_0 such that for 0 < β < β_0 and every x, y ∈ Z^d,

log E Gλ(x, y, ω) = log G_λ̃(x, y) + O(γ^4(β)) ( |x − y| + 1 ),

where

λ̃ = log( (1/2d) ẽ ),

with

ẽ := 2d e^λ − γ(β) E[v] − γ^2(β) E[(v − Ev)^2] G_{2λ_1}(x, y) − γ^3(β) E[(v − Ev)^3] G_{3λ_1}(x, y),

λ_1 := log( (1/2d) ( 2d e^λ − γ(β) Ev ) ),

and γ(β) and v defined by the relation

β V(x) := −log(2d e^λ) + log( 2d e^λ − γ(β) v(x) ),   x ∈ Z^d.

Notice that γ(β) → 0 as β → 0. Moreover, the importance of the above theorem is that the expansion is uniform as |x − y| → ∞.
Our Theorem 1.1 can be combined with Theorems 6.1 and 6.2 to provide an asymptotic expression for the quenched Lyapounov norms αλ(·) in the low disorder regime.

Corollary 6.3. For every λ > 0 there exists a β_* such that for every 0 < β < β_*

αλ(x) = βλ(x) = α̂_λ̃(x) + O(γ^4(β)),

where

α̂_λ̃(x) := −lim_{N→∞} (1/N) log G_λ̃(0, Nx).
Finally, Corollary 6.3 in combination with the Corollary of Theorem 7.2 in [10] shows that the quenched Lyapounov norm αλ can be extended as an analytic function of λ in the right half of the complex plane. This answers, in the case where d > 3 and β is small enough, another question posed in [9].
Acknowledgement: This work was initiated when I was visiting ETH-Zurich. I am grateful to Alain-Sol Sznitman for suggesting the problem and for sharing his ideas with me. In particular, I would like to thank him for pointing out the references [3], [5], as well as the connections of these works with the present one. I would also like to thank the referee of this paper, whose remarks greatly improved its presentation.
References
[1] Azuma, Kazuoki; Weighted sums of certain dependent random variables. Tôhoku Math. J. (2) 19 (1967), 357–367.
[2] Bolthausen, Erwin; A note on the diffusion of directed polymers in a random environment.
Comm. Math. Phys. 123 (1989), no. 4, 529–534.
[3] Bolthausen, Erwin; Sznitman, Alain-Sol; On the static and dynamic points of view for certain
random walks in random environment. Special issue dedicated to Daniel W. Stroock and
Srinivasa S. R. Varadhan on the occasion of their 60th birthday.
[4] Comets, Francis; Shiga, Tokuzo; Yoshida, Nobuo; Probabilistic analysis of directed polymers
in a random environment: a review. Stochastic analysis on large scale interacting systems,
115–142, Adv. Stud. Pure Math., 39, Math. Soc. Japan, Tokyo, 2004
[5] Flury, Markus; Coincidence of Lyapunov Exponents for Random Walks in Weak Random
Potentials math.PR/0608357
[6] Flury, Markus; Large Deviations and Phase Transition for Random Walks in Random Nonnegative Potentials math.PR/0609766
[7] Madras, Neal; Slade, Gordon; The self-avoiding walk. Probability and its Applications. Birkhäuser Boston, Inc., Boston, MA, 1993. xiv+425 pp. ISBN: 0-8176-3589-0
[8] Sznitman, Alain-Sol; Topics in random walks in random environment. School and Conference
on Probability Theory, 203–266 ICTP Lect. Notes, XVII, Abdus Salam Int. Cent. Theoret.
Phys., Trieste, 2004.
[9] Sznitman, Alain-Sol; Brownian motion, obstacles and random media. Springer Monographs
in Mathematics. Springer-Verlag, Berlin, 1998. xvi+353 pp. ISBN: 3-540-64554-3
[10] Wang, Wei-Min; Supersymmetry, Witten complex and asymptotics for directional Lyapunov exponents in Z^d. Journées ”Équations aux Dérivées Partielles” (Saint-Jean-de-Monts, 1999), Exp. No. XVIII, 16 pp., Univ. Nantes, Nantes, 1999.
[11] Zerner, Martin P. W.; Directional decay of the Green's function for a random nonnegative potential on Z^d. Ann. Appl. Probab. 8 (1998), no. 1, 246–280.
Department of Mathematics, University of Southern California, Los Angeles, CA,
90089, USA. e-mail: zygouras@usc.edu