An Introduction to Random Interlacements

Preliminary version

Alexander Drewitz¹, Balázs Ráth², Artëm Sapozhnikov³

December 30, 2013

¹ Columbia University, Department of Mathematics, RM 614, MC 4419, 2990 Broadway, New York City, NY 10027, USA. email: drewitz@math.columbia.edu
² The University of British Columbia, Department of Mathematics, Room 121, 1984 Mathematics Road, Vancouver, B.C., Canada V6T 1Z2. email: rathb@math.ubc.ca
³ Max-Planck Institute for Mathematics in the Sciences, Inselstrasse 22, 04103 Leipzig, Germany. email: artem.sapozhnikov@mis.mpg.de
Contents

1 Introduction

2 Random walk, Green function, equilibrium measure
   2.1 Some notation
   2.2 Simple random walk
   2.3 Equilibrium measure and capacity
   2.4 Notes

3 Random interlacements: first definition and basic properties
   3.1 Space of subsets of Z^d and random interlacements
   3.2 Correlations, shift-invariance, ergodicity
   3.3 Increasing and decreasing events, stochastic domination
   3.4 Notes

4 Random walk on the torus and random interlacements
   4.1 Preliminaries
      4.1.1 Lazy random walk
      4.1.2 Hitting of small sets in short times
      4.1.3 Fast mixing of lazy random walk
   4.2 Proof of Theorem 4.1
   4.3 Notes

5 Poisson point processes
   5.1 Poisson distribution
   5.2 Poisson point processes
   5.3 Notes

6 Random interlacements point process
   6.1 A sigma-finite measure on doubly infinite trajectories
      6.1.1 Spaces
      6.1.2 Construction of the intensity measure underlying random interlacements
   6.2 Random interlacements point process
      6.2.1 Canonical space and random interlacements
      6.2.2 Finite point measures on W_+ and random interlacements in finite sets
   6.3 Covering of a box by random interlacements
   6.4 Notes

7 Percolation of the vacant set
   7.1 Percolation of the vacant set, zero-one law
   7.2 Exponential decay and proof of Theorem 7.2
   7.3 Proof of Proposition 7.3
   7.4 Proof of Proposition 7.7
   7.5 Notes

8 Source of correlations and decorrelation via coupling
   8.1 A polynomial upper bound on correlations
   8.2 Perturbing the value of u
   8.3 Point measures on random walk excursions
   8.4 Decorrelation via coupling
   8.5 Notes

9 Decoupling inequalities
   9.1 Hierarchical events
   9.2 Decoupling inequalities
   9.3 Proof of Theorem 9.5
   9.4 Long ∗-paths of unlikely events are very unlikely
   9.5 Notes

10 Phase transition of V^u
   10.1 Subcritical regime
   10.2 Supercritical regime
   10.3 Notes

11 Coupling of point measures of excursions
   11.1 Choice of sets
   11.2 Coupling of point measures of excursions
      11.2.1 Comparison of intensity measures
      11.2.2 Construction of coupling
      11.2.3 Error term
      11.2.4 Proof of Theorem 11.4
   11.3 Proof of Theorem 9.3
   11.4 Notes

Index
Chapter 1
Introduction
The model of random interlacements was introduced in 2007 by A.-S. Sznitman in the seminal
paper [S10], motivated by questions about the disconnection of discrete cylinders and tori by the
trace of simple random walk. In fact, random interlacements is a random subset of Z^d which appears, on a mesoscopic scale, as the limiting distribution of the trace of simple random walk on a large torus when the walk is run up to times proportional to the volume. It serves as a model for corrosion and in addition gives rise to interesting and challenging percolation problems.
Formally, random interlacements can be constructed via a Poisson point process with intensity
measure u · ν, where u ≥ 0. The measure ν is supported on the space of equivalence classes
of doubly infinite simple random walk trajectories modulo time shift and the parameter u is
often referred to as the level. The union of the traces of trajectories which are contained in
the support of this Poisson point process then constitutes the random subset I u ⊂ Zd , called
random interlacements at level u. The law of I u has nice properties such as invariance and
ergodicity with respect to lattice shifts. It also exhibits long range correlations, which leads to
interesting challenges in its investigation.
Interestingly enough, I u does not exhibit a percolation phase transition, namely, for any intensity
parameter u > 0, the graph induced by I u is almost surely connected. On the other hand, its
complement, the so-called vacant set V u = Zd \I u , does exhibit a phase transition, namely, there
exists u∗ ∈ (0, ∞) such that
• for all u > u∗ , the graph induced by V u consists almost surely of only finite connected
components,
• for all u < u∗ , this graph contains an almost surely unique infinite connected component.
The intensive research that has been conducted on this model in recent years has led to the development of powerful techniques (such as various decoupling inequalities), which have also found applications to other percolation models with long-range correlations, such as the level sets of the Gaussian free field.
These lecture notes grew out of a graduate class “Selected topics in probability: random interlacements” which was held by the authors during the spring semester 2012 at ETH Zürich. Our
aim is to give an introduction to the model of random interlacements which is self-contained
and is accessible for graduate students in probability theory.
We will now provide a short outline of the structure of these lecture notes.
In Chapter 2 we introduce some notation and basic facts of simple random walk on Zd , d ≥ 3.
We will also review some potential theory, since the notion of the capacity cap(K) of a finite
subset K of Zd will play a central role when we deal with random interlacements.
In Chapter 3 we give an elementary definition of random interlacements I u at level u as a
random subset of Zd , the law of which is characterized by the equations
P[I^u ∩ K = ∅] = e^{−u·cap(K)} for any finite subset K of Z^d.  (1.0.1)
The above equations provide the shortest definition of random interlacements I u at level u,
which will be shown to be equivalent to a more constructive definition that we give in later
chapters. In Chapter 3 we deduce some of the basic properties of I u from the definition (1.0.1).
In particular, we show that the law of I u is invariant and ergodic with respect to lattice shifts.
We also point out some evidence that the random set I u behaves rather differently from classical
percolation: we show that I u exhibits polynomial decay of spatial correlations and that I u and
Bernoulli site percolation with density p do not stochastically dominate each other for any value
of u > 0 and 0 < p < 1. Along the way, we introduce some important notions such as a
measurable space of subsets of Zd , increasing and decreasing events and stochastic domination,
which will be used frequently throughout the notes.
In Chapter 4 we prove that if we run a simple random walk with a uniform starting point on the d-dimensional torus (Z/NZ)^d, d ≥ 3, with side length N for ⌊uN^d⌋ steps, then the trace I^{u,N} of this random walk converges locally in distribution to I^u as N → ∞.
In Chapter 5 we give a short introduction to the notion of a Poisson point process (PPP) on
a general measurable space as well as the basic operations with PPPs.
In Chapter 6 we introduce the random interlacements point process as a PPP on the space of
equivalence classes modulo time shift of bi-infinite nearest neighbor paths labelled with positive
real numbers, which we call interlacement trajectories. This way we get a more hands-on
definition of random interlacement I u at level u as the trace of interlacement trajectories with
label less than u.
In the rest of the notes we develop some methods that will allow us to study the percolative
properties of the vacant set V u = Zd \ I u .
In Chapter 7 we formally define the percolation threshold u∗ and give an elementary proof of
the fact that u∗ > 0 (i.e., the existence of a non-trivial percolating regime of intensities u for the
vacant set V u ) in high dimensions. In fact we will use a Peierls-type argument to show that the
intersection of the d-dimensional vacant set V u and the plane Z2 × {0}d−2 contains an infinite
connected component if d ≥ d0 and u ≤ u1 (d) for some u1 (d) > 0.
In order to explore the percolation of V u in lower dimensions, we will have to obtain a better
understanding of the strong correlations occurring in random interlacements. This is the purpose
of the following chapters.
In Chapter 8 we take a closer look at the spatial dependencies of I u and argue that the
correlations between two locally defined events with disjoint spatial supports are caused by the
PPP of interlacement trajectories that hit both of the supports. The way to achieve decorrelation
is to use a clever coupling to dominate the effect of these trajectories on our events by the PPP
of trajectories that hit the support of only one event. This trick is referred to as sprinkling in the literature, and it allows us to compare the probability of the joint occurrence of two monotone events under the law of I^u with the product of their probabilities under the law of I^{u′}, where u′ is a small perturbation of u. The difference u′ − u and the error term of this comparison will sensitively depend on the efficiency of the coupling and the choice of the support of our events.
In Chapter 9 we provide a set-up where the decorrelation result of the previous chapter can be
effectively implemented to a family of events which are hierarchically defined on a geometrically
growing sequence of scales. This renormalization scheme involves events that are spatially well
separated on each scale, and this property guarantees a decoupling with a small error term
and a manageable amount of sprinkling. We state (but do not yet prove) the basic decorrelation inequality in this setting and use iteration to derive the decoupling inequalities that are the main results of the chapter. As a corollary we deduce that if the density of a certain “pattern” of locally defined, monotone, shift-invariant events observed in I^u is reasonably small, then the probability to observe big connected islands of that pattern in I^{u′} (where |u − u′| is small) decays very rapidly with the size of the island.
In Chapter 10 we apply the above-mentioned corollary to prove the non-triviality of the phase transition of the vacant set V^u for any d ≥ 3. In other words, we prove that there exists some u∗ ∈ (0, ∞) such that for all u < u∗ the set V^u almost surely contains an infinite connected component, but for all u > u∗ the set V^u consists only of finite connected components. We also define the threshold u∗∗ ∈ [u∗, ∞) of local subcriticality and show that for any u > u∗∗ the probability that the diameter of the vacant component of the origin is greater than k decays stretched-exponentially in k.
In Chapter 11 we complete the proof of the basic decorrelation inequality stated in Chapter 9
by constructing the coupling of two collections of interlacement trajectories, as required by the
method of Chapter 8. The proof combines potential theoretic estimates with PPP techniques to
achieve an error term of decorrelation which decays much faster than one would naively expect
in a model with polynomial decay of correlations.
At the end of most of the chapters we have collected some bibliographic notes that put the results of that chapter in historical context and provide the reader with pointers to the literature on results that are related to the topic of the chapter but are beyond the scope of these notes.
Let us already mention here that a significant part of the material covered in these notes is an
adaptation of results of [S10] and [S12a].
The main goal of these lecture notes is to provide a self-contained treatise of the percolation
phase transition of the vacant set of random interlacements using decoupling inequalities. Since
the body of work on random interlacements is already quite vast (and rapidly growing), there are
some interesting topics that are not at all covered in these notes. The least we can do is to point
out two other lecture notes, also aimed at graduate students interested in questions related to the
fragmentation of a graph by random walk and Poisson point measures of Markovian trajectories.
The lecture notes [S12d] give a self-contained introduction to Poisson gases of Markovian loops
and their relation to random interlacements and Gaussian free fields. The lecture notes [CT12]
also give an introduction to random interlacements with an emphasis on the sharp percolation
phase transition of the vacant set of (a) random interlacements on trees and (b) the trace of
random walk on locally tree-like mean field random graphs.
Let us now state our convention about constants. Throughout these notes, we will denote
various positive and finite constants by c and C. These constants may change from place to
place, even within a single chain of inequalities. If the constants depend only on the dimension d, this dependence will be omitted from the notation. Dependence on other parameters will be emphasized, but usually only when the constant is first introduced.
Acknowledgements: We thank Alain-Sol Sznitman for introducing us to the topic of random
interlacements. In addition, we thank Yinshan Chang, Philippe Deprez, Regula Gyr, and a
very thorough anonymous referee for reading and commenting on successive versions of the
manuscript. We thank Omer Angel and Qingsan Zhu for sharing their ideas that made the
proof of the result of Chapter 4 shorter.
Chapter 2
Random walk, Green function, equilibrium measure
In this chapter we collect some preliminary facts that we will need in the sequel. In Section 2.1 we introduce our basic notation related to subsets of Z^d and functions on Z^d. In Section 2.2 we introduce simple random walk on Z^d and discuss properties of the Green function in the transient case d ≥ 3. In Section 2.3 we discuss the basics of potential theory, introduce the notions of equilibrium measure and capacity, and derive some of their properties.
2.1 Some notation
For d ≥ 1, x = (x_1, . . . , x_d) ∈ R^d, and p ∈ [1, ∞], we denote by |x|_p the p-norm of x in R^d, i.e.,

|x|_p = ( Σ_{i=1}^d |x_i|^p )^{1/p},  if p < ∞,
|x|_∞ = max(|x_1|, . . . , |x_d|),  if p = ∞.
For ease of notation, we will abbreviate the most frequently occurring ∞-norm by |·| := |·|_∞.
We consider the integer lattice Zd , the graph with the vertex set given by all the points from
Rd with integer-valued coordinates and the set of undirected edges between any pair of vertices
within | · |1 -distance 1 from each other. We will use the same notation Zd for the graph and for
the set of vertices of Zd . Any two vertices x, y ∈ Zd such that |x − y|1 = 1 are called nearest
neighbors (in Zd ), and we use the notation x ∼ y. If |x − y|∞ = 1, then we say that x and y are
∗-neighbors (in Zd ).
For a set K, we denote by |K| its cardinality. We write K ⊂⊂ Zd , if |K| < ∞. Let
∂ext K := {x ∈ Zd \K : ∃y ∈ K such that x ∼ y}
(2.1.1)
be the exterior boundary of K, and
∂int K := {x ∈ K : ∃y ∈ Zd \K such that x ∼ y}
the interior boundary of K. We also define K̄ := K ∪ ∂_ext K.
For x ∈ Z^d and R > 0, we denote by

B(x, R) := {y ∈ Z^d : |x − y| ≤ R}  (2.1.2)

the intersection of Z^d with the closed |·|_∞-ball of radius R around x, and we use the abbreviation B(R) := B(0, R).
For functions f, g : Z^d → R we write f ∼ g if

lim_{|x|→∞} f(x)/g(x) = 1,

and we write f ≍ g if there are constants c, C (that might depend on d) such that

c f(x) ≤ g(x) ≤ C f(x) for all x ∈ Z^d.
2.2 Simple random walk
We consider the measurable space (RW, RW), where RW is the set of infinite Zd -valued sequences w = (wn )n≥0 , and RW is the sigma-algebra on RW generated by the coordinate maps
Xn : RW → Zd , Xn (w) = wn .
For x ∈ Z^d, we consider the probability measure P_x on (RW, RW) such that under P_x, the random sequence (X_n)_{n≥0} is a Markov chain on Z^d with initial state x and transition probabilities

P_x[X_{n+1} = y′ | X_n = y] := 1/(2d) if |y′ − y|_1 = 1, and 0 otherwise,

for all y, y′ ∈ Z^d. In words, X_{n+1} has the uniform distribution over the |·|_1-neighbors of X_n.
The random sequence (Xn )n≥0 on (RW, RW, Px ) is called the simple random walk (SRW) on
Zd starting in x. The expectation corresponding to Px is denoted by Ex .
Remark 2.1. Often in the literature the SRW is defined as a sum of its i.i.d. increments,
avoiding the introduction of the space of trajectories (RW, RW). We use the current definition
to prepare the reader for constructions in future chapters.
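The transition rule above is straightforward to simulate. The following sketch (Python, not part of the notes) samples a finite initial segment of a trajectory w ∈ RW by adding ±1 to a uniformly chosen coordinate at each step, which is exactly the uniform distribution over the 2d nearest neighbors:

```python
import random

def srw_path(d, n_steps, start=None, rng=random):
    """Sample (X_0, ..., X_{n_steps}) of simple random walk on Z^d.

    Each step adds +1 or -1 to a uniformly chosen coordinate, so the
    next state is uniform over the 2d |.|_1-neighbors of the current
    one, matching the transition probabilities 1/(2d).
    """
    x = list(start) if start is not None else [0] * d
    path = [tuple(x)]
    for _ in range(n_steps):
        i = rng.randrange(d)          # which coordinate moves
        x[i] += rng.choice((-1, 1))   # direction of the move
        path.append(tuple(x))
    return path

random.seed(0)
w = srw_path(d=3, n_steps=10)
```

Consecutive states of the sampled path are always at |·|_1-distance 1, as required by the definition.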
For K ⊂⊂ Z^d and a measure m(·) supported on K we denote by P_m the measure on (RW, RW) defined by

P_m = Σ_{x∈K} m(x) P_x,  (2.2.1)

where we use the convention to write x instead of {x} when evaluating set functions at a one-point set. If m(·) is a probability measure, then P_m is also a probability measure on (RW, RW), namely the law of SRW started from an initial state which is distributed according to m(·).
There are a number of random times which we will need in the sequel. For w ∈ RW and A ⊂ Z^d, let

H_A(w) := inf{n ≥ 0 : X_n(w) ∈ A},  ‘first entrance time’,  (2.2.2)
H̃_A(w) := inf{n ≥ 1 : X_n(w) ∈ A},  ‘first hitting time’,  (2.2.3)
T_A(w) := inf{n ≥ 0 : X_n(w) ∉ A},  ‘first exit time’,  (2.2.4)
L_A(w) := sup{n ≥ 0 : X_n(w) ∈ A},  ‘time of last visit’.  (2.2.5)

Here and in the following we use the convention that inf ∅ = ∞ and sup ∅ = −∞. If A = {x}, in accordance with the above convention we write D_x = D_{{x}} for D ∈ {H, H̃, T, L}. If X is a collection of random variables, we denote by σ(X) the sigma-algebra generated by X.
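The four random times (2.2.2)–(2.2.5) can be read off directly from a trajectory. A minimal sketch (on a finite path segment these are of course only truncations of the definitions for infinite trajectories, with the stated conventions for empty inf and sup):

```python
def entrance_time(path, A):
    """H_A, (2.2.2): first n >= 0 with X_n in A (inf of empty set = infinity)."""
    return next((n for n, x in enumerate(path) if x in A), float("inf"))

def hitting_time(path, A):
    """H~_A, (2.2.3): first n >= 1 with X_n in A."""
    return next((n for n, x in enumerate(path) if n >= 1 and x in A), float("inf"))

def exit_time(path, A):
    """T_A, (2.2.4): first n >= 0 with X_n outside A."""
    return next((n for n, x in enumerate(path) if x not in A), float("inf"))

def last_visit(path, A):
    """L_A, (2.2.5): last observed n with X_n in A (sup of empty set = -infinity)."""
    return max((n for n, x in enumerate(path) if x in A), default=-float("inf"))

# A short hand-made trajectory in Z^2 and the one-point set A = {0}:
w = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0), (1, 0)]
A = {(0, 0)}
```

On this trajectory H_A = 0 (the walk starts in A) while H̃_A = 4, illustrating the difference between entrance and hitting time.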
Exercise 2.2. Prove that H_A, H̃_A and T_A are stopping times with respect to the canonical filtration F_n := σ(X_0, X_1, . . . , X_n).
If H ⊆ Z^d, then h : H → R is harmonic on H if

∀x ∈ H : h(x) = (1/2d) Σ_{y : |y|_1=1} h(x + y),
i.e., if the value of h at x is the average of the h-values of its neighbors. The next lemma
formalizes the intuition that harmonic functions are “smooth”.
Lemma 2.3 (Harnack inequality). There exists a constant C_H < ∞ such that for all L ≥ 1 and any function h : B(2L) → R_+ which is harmonic on B(2L) we have

max_{x∈B(L)} h(x) ≤ C_H · min_{x∈B(L)} h(x).
The proof of the Harnack inequality can be found in [L91, Section 1.7].
In many situations, properties of SRW can be formulated in terms of the Green function g(·, ·), which is defined as

g(x, y) = Σ_{n≥0} P_x[X_n = y] = E_x[ Σ_{n≥0} 1{X_n = y} ],  x, y ∈ Z^d,

and hence equals the expected number of visits to y of the SRW started in x. The next lemma gives basic properties of the Green function.
Lemma 2.4. For all x, y ∈ Z^d, the following properties hold:

(a) (symmetry) g(x, y) = g(y, x),
(b) (translation invariance) g(x, y) = g(0, y − x),
(c) (harmonicity) the value of g(x, ·) at y ∈ Z^d \ {x} is the average of the values of g(x, ·) over the neighbors of y:

g(x, y) = Σ_{z∈Z^d : |z|_1=1} g(x, y + z) · (1/2d) + 1{y=x}.  (2.2.6)
By Lemma 2.4(b), if we define

g(x) := g(0, x),  (2.2.7)

then g(x, y) = g(y − x) for all x, y ∈ Z^d.
Proof of Lemma 2.4. The statement (a) follows from the fact that P_x[X_n = y] = P_y[X_n = x], and (b) from P_x[X_n = y] = P_0[X_n = y − x]. For (c), first note that by (b) it suffices to prove it for x = 0. Using (2.2.7) and the Markov property, we compute for each y ∈ Z^d,

g(y) = Σ_{n≥0} P_0[X_n = y] = 1{y=0} + Σ_{n≥1} P_0[X_n = y]
     = 1{y=0} + Σ_{n≥1} Σ_{z:|z|_1=1} P_0[X_{n−1} = y + z, X_n = y]
     = 1{y=0} + (1/2d) Σ_{z:|z|_1=1} Σ_{n≥1} P_0[X_{n−1} = y + z]
     = 1{y=0} + (1/2d) Σ_{z:|z|_1=1} g(y + z),

which implies (2.2.6).
Exercise 2.5. Prove the properties of SRW used to show (a) and (b) of Lemma 2.4.
Definition 2.6. SRW is called transient if P_0[H̃_0 = ∞] > 0, otherwise it is called recurrent.
The following theorem is a celebrated result of George Pólya [P21].
Theorem 2.7. SRW on Zd is recurrent if d ≤ 2 and transient if d > 2.
We will not prove Theorem 2.7 in these notes. The original proof of Pólya uses path counting and Stirling's formula; it can be found in many books on discrete probability. By now many different proofs of this theorem exist. Perhaps the most elegant one, due to Lyons and Peres, uses a construction of flows of finite energy; see the discussion in the bibliographic notes to this chapter.
By solving the next exercise, the reader will learn that the notions of transience and recurrence
are closely connected to properties of the Green function.
Exercise 2.8. Prove that

(a) g(0) = P_0[H̃_0 = ∞]^{−1}; in particular, SRW is transient if and only if g(0) < ∞,
(b) for each y ∈ Z^d, g(y) < ∞ if and only if g(0) < ∞,
(c) if SRW is transient, then P_0[lim inf_{n→∞} |X_n| = ∞] = 1, and if it is recurrent, then P_0[lim inf_{n→∞} |X_n| = 0] = 1.
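The equivalence in (a) can be probed numerically: g(0) is the expected number of visits of the walk to its starting point, which transience makes finite. A Monte Carlo sketch (the truncation at n_steps and the quoted numerical value ≈ 1.52 for d = 3 are our own additions, not part of the notes):

```python
import random

def estimate_g0(d=3, n_walks=4000, n_steps=400, seed=1):
    """Monte Carlo estimate of g(0), the expected number of visits of
    SRW on Z^d to its starting point, truncated after n_steps steps.

    For d = 3 the walk is transient and the truncation error is tiny;
    the true value is roughly 1.52. For d <= 2 the analogous count
    would diverge as n_steps grows, reflecting recurrence.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walks):
        x = [0] * d
        visits = 1                      # X_0 = 0 counts as a visit
        for _ in range(n_steps):
            i = rng.randrange(d)
            x[i] += rng.choice((-1, 1))
            if not any(x):              # walk is back at the origin
                visits += 1
        total += visits
    return total / n_walks
```

With the default parameters the estimate lands close to 1.5, consistent with transience in d = 3.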
Now we state a bound on the univariate SRW Green function g(·) which follows from a much
sharper asymptotic formula proved in [L91, Theorem 1.5.4].
Claim 2.9. For any d ≥ 3 there exist dimension-dependent constants c_g, C_g ∈ (0, ∞) such that

c_g · (|x| + 1)^{2−d} ≤ g(x) ≤ C_g · (|x| + 1)^{2−d},  x ∈ Z^d.  (2.2.8)
Remark 2.10. Let us present a short heuristic proof of (2.2.8). Let 1 ≪ R and denote by A(R) the annulus B(2R) \ B(R). Using the Harnack inequality (Lemma 2.3) one can derive that there is a number g*(R) such that g(x) ≍ g*(R) for any x ∈ A(R). By the diffusivity and transience of random walk, the expected total number of steps that the walker spends in A(R) is of order R². Therefore

R² ≍ E_0[ Σ_{n≥0} 1{X_n ∈ A(R)} ] = Σ_{x∈A(R)} g(x) ≍ |A(R)| · g*(R) ≍ R^d · g*(R).

Thus g*(R) ≍ R^{2−d}, so for any x ∈ A(R) we obtain g(x) ≍ R^{2−d} ≍ |x|^{2−d}.
From now on we will tacitly assume d ≥ 3.
2.3 Equilibrium measure and capacity

For K ⊂⊂ Z^d and x ∈ Z^d, we set

e_K(x) := P_x[H̃_K = ∞] · 1_{x∈K} = P_x[L_K = 0] · 1_{x∈K},  (2.3.1)

which gives rise to the equilibrium measure e_K of K. Its total mass

cap(K) := Σ_{x∈K} e_K(x)  (2.3.2)
is called the capacity of K. Note that e_K is supported on the interior boundary ∂_int K of K, and it is nontrivial when d ≥ 3 because of the transience of SRW. In particular, for any K ⊂⊂ Z^d, cap(K) ∈ (0, ∞). We can thus define a probability measure on K by normalizing e_K:

ẽ_K(x) := e_K(x)/cap(K).  (2.3.3)

This measure is called the normalized equilibrium measure (or, sometimes, the harmonic measure with respect to K). The measure ẽ_K is obviously also supported on ∂_int K.
Our first result about capacity follows directly from the definition.
Lemma 2.11 (Subadditivity of capacity). For any K_1, K_2 ⊂⊂ Z^d,

cap(K_1 ∪ K_2) ≤ cap(K_1) + cap(K_2).  (2.3.4)
Proof. Denote K = K_1 ∪ K_2. We first observe that

e_K(x) = P_x[H̃_K = ∞] ≤ P_x[H̃_{K_i} = ∞] = e_{K_i}(x),  x ∈ K_i,  i = 1, 2.  (2.3.5)

Using this we obtain (2.3.4):

cap(K) = Σ_{x∈K} e_K(x) ≤ Σ_{x∈K_1} e_K(x) + Σ_{x∈K_2} e_K(x) ≤ Σ_{x∈K_1} e_{K_1}(x) + Σ_{x∈K_2} e_{K_2}(x) = cap(K_1) + cap(K_2).
The following identity provides a connection between the hitting probability of a set, the Green
function, and the equilibrium measure.
Lemma 2.12. For x ∈ Z^d and K ⊂⊂ Z^d,

P_x[H_K < ∞] = Σ_{y∈K} g(x, y) e_K(y).  (2.3.6)

Proof. Since SRW is transient in dimension d ≥ 3, we obtain that almost surely L_K < ∞. This in combination with the Markov property yields that

P_x[H_K < ∞] = P_x[0 ≤ L_K < ∞] = Σ_{y∈K} Σ_{n≥0} P_x[X_n = y, L_K = n] = Σ_{y∈K} Σ_{n≥0} P_x[X_n = y] P_y[H̃_K = ∞] = Σ_{y∈K} g(x, y) e_K(y).
If we let |K| = 1 in Lemma 2.12, we get

cap(x) = 1/g(0),  P_x[H_y < ∞] = g(x, y)/g(0),  x, y ∈ Z^d.  (2.3.7)
Hence the Green function g(x, y) equals the hitting probability Px [Hy < ∞] up to the constant
multiplicative factor g(0). The case |K| = 2 can also be exactly solved, as illustrated in the next
lemma.
Lemma 2.13. For x ≠ y ∈ Z^d,

cap({x, y}) = 2/(g(0) + g(y − x)).  (2.3.8)
Proof. The equilibrium measure e_{{x,y}} is supported on {x, y}, whence we can write e_{{x,y}} = α_x δ_x + α_y δ_y. Using Lemma 2.12 we deduce that

1 = g(x, x)α_x + g(x, y)α_y,  1 = g(y, x)α_x + g(y, y)α_y.

Solving this system yields α_x = α_y = (g(0) + g(x − y))^{−1}, and hence (2.3.8).
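For small sets this computation can be carried out numerically: requiring Σ_y g(x, y)e(y) = 1 for every x ∈ K (the system behind Lemma 2.12, since P_x[H_K < ∞] = 1 for x ∈ K) determines e_K, and its total mass is the capacity. A sketch for the two-point case; the numerical value g(0) ≈ 1.5164 for d = 3 is an illustrative approximation we supply, while g(e_1) = g(0) − 1 follows from harmonicity (2.2.6) at the origin:

```python
def capacity_from_green(G):
    """Given the 2x2 Green matrix G = (g(x, y))_{x,y in K} of a
    two-point set K, solve sum_y g(x, y) e(y) = 1 for both x in K
    (Lemma 2.12 with P_x[H_K < infinity] = 1) and return (cap(K), e)."""
    (a, b), (c, d) = G
    det = a * d - b * c
    e0 = (d - b) / det          # closed-form solution of the 2x2 system
    e1 = (a - c) / det
    return e0 + e1, (e0, e1)

# Illustrative values for d = 3: g(0) is approximately 1.5164, and
# g(e_1) = g(0) - 1 by harmonicity (2.2.6) applied at y = 0.
g00, g01 = 1.5164, 0.5164
cap, (e0, e1) = capacity_from_green([[g00, g01], [g01, g00]])
```

By symmetry e0 = e1, and the resulting capacity agrees with the closed form 2/(g(0) + g(e_1)) of (2.3.8).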
The next lemma gives an equivalent definition of capacity.
Lemma 2.14 (A variational characterization of capacity). Let K ⊂⊂ Z^d. Define the families of functions Σ↑ and Σ↓ by

Σ↑ = { ϕ ∈ R^K : ∀x ∈ K, Σ_{y∈K} ϕ(y)g(x, y) ≤ 1 },  (2.3.9)
Σ↓ = { ϕ ∈ R^K : ∀x ∈ K, Σ_{y∈K} ϕ(y)g(x, y) ≥ 1 }.  (2.3.10)

Then

cap(K) = max_{ϕ∈Σ↑} Σ_{y∈K} ϕ(y) = min_{ϕ∈Σ↓} Σ_{y∈K} ϕ(y).  (2.3.11)
Proof of Lemma 2.14. Let K ⊂⊂ Z^d and ϕ ∈ R^K. On the one hand,

Σ_{x∈K} ẽ_K(x) Σ_{y∈K} ϕ(y)g(x, y) = (1/cap(K)) Σ_{y∈K} ϕ(y) Σ_{x∈K} e_K(x)g(x, y) = (1/cap(K)) Σ_{y∈K} ϕ(y),

where the last equality uses (2.3.6). On the other hand, by (2.3.9) and (2.3.10),

Σ_{x∈K} ẽ_K(x) Σ_{y∈K} ϕ(y)g(x, y) is ≤ 1 for ϕ ∈ Σ↑, and ≥ 1 for ϕ ∈ Σ↓.

Therefore,

sup_{ϕ∈Σ↑} Σ_{y∈K} ϕ(y) ≤ cap(K) ≤ inf_{ϕ∈Σ↓} Σ_{y∈K} ϕ(y).

To finish the proof of (2.3.11), it suffices to note that e_K ∈ Σ↑ ∩ Σ↓. Indeed, by (2.3.6),

∀x ∈ K : Σ_{y∈K} e_K(y)g(x, y) = P_x[H_K < ∞] = 1.

The proof of (2.3.11) is complete.
The next exercises give some useful applications of Lemma 2.14.
Exercise 2.15 (Monotonicity of capacity). Show that for any K ⊆ K′ ⊂⊂ Z^d,

cap(K) ≤ cap(K′).  (2.3.12)
Exercise 2.16. Show that for any K ⊂⊂ Z^d,

|K| / sup_{x∈K} Σ_{y∈K} g(x, y) ≤ cap(K) ≤ |K| / inf_{x∈K} Σ_{y∈K} g(x, y).  (2.3.13)
Finally, using (2.3.13), one can easily obtain bounds on the capacity of balls which are useful
for large radii in particular.
Exercise 2.17 (Capacity of a ball). Use (2.2.8) and (2.3.13) to prove that

cap(B(R)) ≍ R^{d−2}.  (2.3.14)
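Capacities can also be estimated by simulation: e_K(x) = P_x[H̃_K = ∞], and the event of never returning can be approximated by the walk reaching a large sup-norm distance before revisiting K. The following crude sketch (our own illustration for d = 3; the escape radius and walk counts are arbitrary, and treating distant walks as escaped biases the estimate slightly upward) can be used to observe the monotonicity (2.3.12) and the growth (2.3.14):

```python
import random

def estimate_capacity(R, n_walks=150, escape_radius=10, seed=7):
    """Crude Monte Carlo estimate of cap(B(R)) for SRW on Z^3.

    e_K(x) = P_x[no return to K] is approximated by the probability
    that the walk reaches sup-norm distance escape_radius before
    revisiting B(R); walks that would still return from far away are
    counted as escaped, so the estimate is biased slightly upward.
    Since e_K vanishes off the interior boundary, only boundary
    starting points are simulated.
    """
    rng = random.Random(seed)
    boundary = [
        (x, y, z)
        for x in range(-R, R + 1)
        for y in range(-R, R + 1)
        for z in range(-R, R + 1)
        if max(abs(x), abs(y), abs(z)) == R
    ]
    cap = 0.0
    for start in boundary:
        escapes = 0
        for _ in range(n_walks):
            p = list(start)
            while True:
                i = rng.randrange(3)
                p[i] += rng.choice((-1, 1))
                m = max(abs(c) for c in p)
                if m <= R:                  # walk re-entered B(R)
                    break
                if m >= escape_radius:      # treat as escaped to infinity
                    escapes += 1
                    break
        cap += escapes / n_walks
    return cap
```

For d = 3 the ratio cap(B(2))/cap(B(1)) should come out around 2^{d−2} = 2, in line with (2.3.14).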
2.4 Notes
An excellent introduction to simple random walk on Zd is the book of Lawler [L91]. The
most fruitful approach to transience and recurrence of SRW on Zd and other graphs is through
connections with electric networks, see [DS84]. Pólya’s theorem was first proved in [P21] using
combinatorics. Alternative proofs use connections with electric networks, and can be found in
[DS84, L83, LP11].
Simple random walk on Z^d is a special case of a large class of reversible discrete time Markov chains on infinite connected graphs G = (V, E), where V is the vertex set and E the edge set. Every such Markov chain can be described using a weight function µ : V × V → [0, ∞) such that µ(x, y) > 0 if and only if {x, y} ∈ E. More precisely, its transition probability from x ∈ V to y ∈ V is given by µ(x, y)/Σ_{z∈V} µ(x, z). The simple random walk corresponds to µ(x, y) = 1/(2d) if x ∼ y, and 0 otherwise.
Lemma 2.14 shows that the capacity of a set arises as the solution of a linear programming
problem. There are many other examples of variational characterizations of capacity that involve
convex quadratic objective functions, see e.g., [S12d, Proposition 1.9] or [W08, Proposition 2.3].
Chapter 3
Random interlacements: first definition and basic properties
In this chapter we give the first definition of random interlacements at level u > 0 as a random
subset of Zd . We then prove that it has polynomially decaying correlations, and is invariant and
ergodic with respect to the lattice shifts.
Along the way we also collect various basic definitions pertaining to a measurable space of
subsets of Zd , which will be used often throughout these notes.
3.1 Space of subsets of Z^d and random interlacements

Consider the space {0, 1}^{Z^d}, d ≥ 3. This space is in one-to-one correspondence with the space of subsets of Z^d, where for each ξ ∈ {0, 1}^{Z^d}, the corresponding subset of Z^d is defined by

S(ξ) = {x ∈ Z^d : ξ_x = 1}.

Thus, we can think about the space {0, 1}^{Z^d} as the space of subsets of Z^d. For x ∈ Z^d, we define the function Ψ_x : {0, 1}^{Z^d} → {0, 1} by Ψ_x(ξ) = ξ_x for ξ ∈ {0, 1}^{Z^d}. The functions (Ψ_x)_{x∈Z^d} are called coordinate maps.
Definition 3.1. For K ⊂ Z^d, we denote by σ(Ψ_x, x ∈ K) the sigma-algebra on the space {0, 1}^{Z^d} generated by the coordinate maps Ψ_x, x ∈ K, and we define F = σ(Ψ_x, x ∈ Z^d).
If K ⊂⊂ Zd and A ∈ σ(Ψx , x ∈ K), then we say that A is a local event with support K.
For any K0 ⊆ K ⊂⊂ Z^d and K1 = K \ K0, we say that

{ ∀x ∈ K0 : Ψ_x = 0 ; ∀x ∈ K1 : Ψ_x = 1 } = { S ∩ K = K1 }  (3.1.1)

is a cylinder event with base K.
Remark 3.2. Every local event is a finite disjoint union of cylinder events. More precisely, for any K ⊂⊂ Z^d, the sigma-algebra σ(Ψ_x, x ∈ K) is atomic and has exactly 2^{|K|} atoms, each of the form (3.1.1).
For u > 0, we consider the one-parameter family of probability measures P^u on ({0, 1}^{Z^d}, F) satisfying the equations

P^u[S ∩ K = ∅] = e^{−u·cap(K)},  K ⊂⊂ Z^d.  (3.1.2)
These equations uniquely determine the measure P^u, since the events

{ ξ ∈ {0, 1}^{Z^d} : S(ξ) ∩ K = ∅ } = { ξ ∈ {0, 1}^{Z^d} : Ψ_x(ξ) = 0 for all x ∈ K },  K ⊂⊂ Z^d,

form a π-system (i.e., a family of sets which is closed under finite intersections) that generates F, and Dynkin's π–λ lemma (see Theorem 3.2 in [B86]) implies the following result.
Claim 3.3. If two probability measures on the same measurable space coincide on a π-system,
then they coincide on the sigma-algebra generated by that π-system.
The existence of a probability measure P u satisfying (3.1.2) is not immediate, but it will follow
from Definition 6.7 and Remark 6.8. The measure P u also arises as the local limit of the trace of
the first ⌊uN d ⌋ steps of simple random walk with a uniform starting point on the d-dimensional
torus (Z/N Z)d , see Theorem 4.1 and Exercise 4.2.
The random subset S of Z^d in ({0,1}^{Z^d}, F, P^u) is called random interlacements at level u. The reason behind the use of "interlacements" in the name will become clear in Chapter 6 (see Definition 6.7), where we define random interlacements at level u as the range of the (interlacing) SRW trajectories in the support of a certain Poisson point process.
By the inclusion-exclusion formula, we obtain from (3.1.2) the following explicit expressions for the probabilities of cylinder events: for any K_0 ⊆ K ⊂⊂ Z^d and K_1 = K \ K_0,

P^u[ Ψ|_{K_0} ≡ 0, Ψ|_{K_1} ≡ 1 ] = P^u[S ∩ K = K_1] = Σ_{K' ⊆ K_1} (−1)^{|K'|} e^{−u·cap(K_0 ∪ K')}.   (3.1.3)

Exercise 3.4. Show (3.1.3).
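To see formula (3.1.3) in action, the following Python sketch evaluates the inclusion-exclusion sum for a toy capacity. The function `cap` below is a placeholder (here simply the cardinality, which is not the true simple random walk capacity); it is chosen because an additive capacity makes the sites decouple, so the answer can be checked in closed form.

```python
import math
from itertools import combinations

def subsets(s):
    s = list(s)
    for r in range(len(s) + 1):
        yield from (frozenset(c) for c in combinations(s, r))

def cylinder_prob(u, cap, K0, K1):
    """Right-hand side of (3.1.3): P^u[S ∩ K = K1] for K = K0 ∪ K1,
    via inclusion-exclusion over the subsets K' of K1."""
    return sum((-1) ** len(Kp) * math.exp(-u * cap(K0 | Kp))
               for Kp in subsets(K1))

# Toy capacity (a placeholder, NOT the true SRW capacity): cap(A) = |A|.
cap = lambda A: len(A)
u = 1.0
p = cylinder_prob(u, cap, frozenset({0}), frozenset({1, 2}))
```

With this additive toy capacity, (3.1.2) factorizes over sites, so each site is vacant independently with probability e^{−u}, and p should equal e^{−u}·(1 − e^{−u})^2.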
In the remaining part of this chapter we prove some basic properties of random interlacements
at level u using the equations (3.1.2) and (3.1.3).
3.2 Correlations, shift-invariance, ergodicity
In this section we prove that random interlacements at level u has polynomially decaying correlations and is invariant and ergodic with respect to the lattice shifts. We begin by computing the asymptotic behavior of the covariances of S under P^u.
Claim 3.5. For any u > 0,

Cov_{P^u}(Ψ_x, Ψ_y) ∼ (2u/g(0)^2) · g(y − x) · exp{ −2u/g(0) },   |x − y| → ∞.   (3.2.1)
Remark 3.6. By (3.2.1) and the Green function estimate (2.2.8), for any u > 0 and x, y ∈ Zd ,
c · (|y − x| + 1)2−d ≤ CovP u (1{x∈S} , 1{y∈S} ) ≤ C · (|y − x| + 1)2−d ,
for some constants 0 < c ≤ C < ∞ depending on u. We say that random interlacements at level
u exhibits polynomial decay of correlations.
Proof of Claim 3.5. We compute

Cov_{P^u}(Ψ_x, Ψ_y) = Cov_{P^u}(1 − Ψ_x, 1 − Ψ_y) = P^u[Ψ_x = Ψ_y = 0] − P^u[Ψ_x = 0] · P^u[Ψ_y = 0]

= exp{−u·cap({x, y})} − exp{−u·cap({x})} · exp{−u·cap({y})}   (by (3.1.2))

= exp{ −2u/(g(0) + g(y − x)) } − exp{ −2u/g(0) }   (by (2.3.7), (2.3.8))

= exp{−2u/g(0)} · ( exp{ 2u·g(x − y) / (g(0)(g(0) + g(x − y))) } − 1 )

∼ exp{−2u/g(0)} · 2u·g(x − y)/g(0)^2.
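The algebraic steps above are easy to check numerically. The following sketch uses placeholder values for u and g(0), and compares the exact difference of exponentials with the claimed asymptotic as g(y − x) → 0 (which is what happens as |x − y| → ∞):

```python
import math

u, g0 = 1.0, 1.5  # placeholder values: a level u > 0 and a value for g(0)

for g in [1e-1, 1e-2, 1e-3]:  # g plays the role of g(y - x) -> 0
    exact = math.exp(-2 * u / (g0 + g)) - math.exp(-2 * u / g0)
    asymptotic = math.exp(-2 * u / g0) * 2 * u * g / g0 ** 2
    # the ratio should approach 1 as g -> 0
    print(g, exact / asymptotic)
```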
Definition 3.7. If Q is a probability measure on ({0,1}^{Z^d}, F), then a measure-preserving transformation T on ({0,1}^{Z^d}, F, Q) is an F-measurable map T : {0,1}^{Z^d} → {0,1}^{Z^d} such that

Q[T^{−1}(A)] = Q[A]   for all A ∈ F.

Such a measure-preserving transformation is called ergodic if all T-invariant events, i.e., all A ∈ F for which T^{−1}(A) = A, have Q-probability 0 or 1.
We now define the measure-preserving transformations we will consider. For x ∈ Z^d we introduce the canonical shift

t_x : {0,1}^{Z^d} → {0,1}^{Z^d},   Ψ_y(t_x(ξ)) = Ψ_{y+x}(ξ),   y ∈ Z^d, ξ ∈ {0,1}^{Z^d}.

Also, for K ⊆ Z^d, we define K + x = {y + x : y ∈ K}.
Lemma 3.8. For any x ∈ Zd and any u > 0 the transformation tx preserves the measure P u .
Proof. Let x ∈ Z^d. We want to prove that the pushforward of P^u by t_x coincides with P^u, i.e., (t_x ◦ P^u)[A] = P^u[A] for all A ∈ F. By Claim 3.3 it suffices to show that t_x ◦ P^u satisfies (3.1.2). Let K ⊂⊂ Z^d. We compute, using (3.1.2) and the translation invariance of the capacity,

(t_x ◦ P^u)[S ∩ K = ∅] = P^u[S ∩ (K − x) = ∅] = e^{−u·cap(K−x)} = e^{−u·cap(K)}.
Before we state the next property of random interlacements, we recall the following classical approximation result (see, e.g., [B86, Theorem 11.4]). For A, B ∈ F, we denote by A∆B ∈ F the symmetric difference of A and B, i.e.,

A∆B = (A \ B) ∪ (B \ A).   (3.2.2)
Exercise 3.9. Let ({0,1}^{Z^d}, F, Q) be a probability space, and take B ∈ F. Prove that

for any ε > 0 there exist K ⊂⊂ Z^d and B_ε ∈ σ(Ψ_x, x ∈ K) such that Q[B_ε ∆ B] ≤ ε.   (3.2.3)

Hint: it is enough to show that the family of sets B ∈ F that satisfy (3.2.3) is a sigma-algebra that contains the local events.
The next result states that random interlacements is ergodic with respect to the lattice shifts.
Theorem 3.10 ([S10], Theorem 2.1). For any u ≥ 0 and 0 ≠ x ∈ Z^d, the measure-preserving transformation t_x is ergodic on ({0,1}^{Z^d}, F, P^u).
Proof of Theorem 3.10. Let us fix 0 ≠ x ∈ Z^d. In order to prove that t_x is ergodic on ({0,1}^{Z^d}, F, P^u), it is enough to show that for any K ⊂⊂ Z^d and B_ε ∈ σ(Ψ_x, x ∈ K) we have

lim_{n→∞} P^u[B_ε ∩ t_x^n(B_ε)] = P^u[B_ε]^2.   (3.2.4)
Indeed, let B ∈ F be such that t_x(B) = B. Note that then t_x^n(B) = B for any integer n. For any ε > 0, let B_ε ∈ F be a local event satisfying (3.2.3) with Q = P^u, i.e., P^u[B_ε ∆ B] ≤ ε. Note that

P^u[t_x^n(B_ε) ∆ B] = P^u[t_x^n(B_ε) ∆ t_x^n(B)] = P^u[t_x^n(B_ε ∆ B)] = P^u[B_ε ∆ B] ≤ ε,

where in the last equality we used Lemma 3.8. Therefore, for all n, we have

|P^u[B_ε ∩ t_x^n(B_ε)] − P^u[B]| ≤ P^u[(B_ε ∩ t_x^n(B_ε)) ∆ B] ≤ 2ε,
and we conclude that

P^u[B] = lim_{ε→0} lim_{n→∞} P^u[B_ε ∩ t_x^n(B_ε)] = lim_{ε→0} P^u[B_ε]^2 = P^u[B]^2,

where the middle equality is (3.2.4). This implies that P^u[B] ∈ {0, 1}, which proves the ergodicity of t_x on ({0,1}^{Z^d}, F, P^u) given the mixing property (3.2.4).
We begin the proof of (3.2.4) by showing that for any K_1, K_2 ⊂⊂ Z^d,

lim_{|y|→∞} cap(K_1 ∪ (K_2 + y)) = cap(K_1) + cap(K_2).   (3.2.5)
Let K_y = K_1 ∪ (K_2 + y). By the definition (2.3.2) of capacity, we only need to show that

∀ z ∈ K_1 : lim_{|y|→∞} e_{K_y}(z) = e_{K_1}(z),   (3.2.6)

∀ z ∈ K_2 : lim_{|y|→∞} e_{K_y}(z + y) = e_{K_2}(z)   (3.2.7)

in order to conclude (3.2.5). We only prove (3.2.6). For any z ∈ K_1 we have

0 ≤ e_{K_1}(z) − e_{K_y}(z) = P_z[H̃_{K_1} = ∞, H̃_{K_y} < ∞] ≤ P_z[H_{K_2+y} < ∞]

≤ Σ_{v∈K_2} P_z[H_{v+y} < ∞] ≤ Σ_{v∈K_2} g(z, v + y) ≤ C_g · Σ_{v∈K_2} |v + y − z|^{2−d} → 0,   |y| → ∞,

where we used (2.3.7) and (2.2.8) in the last two inequalities. Thus we obtain (3.2.6). The proof of (3.2.7) is analogous and we omit it.
We first prove (3.2.4) when A is a cylinder event of the form (3.1.1). If n is big enough, then K ∩ (K + nx) = ∅, therefore, by (3.1.3), we have

P^u[A ∩ t_x^n(A)] = P^u[S ∩ (K ∪ (K + nx)) = K_1 ∪ (K_1 + nx)]

= Σ_{K''⊆K_1} Σ_{K'⊆K_1} (−1)^{|K''|+|K'|} exp{ −u·cap( (K_0 ∪ K'') ∪ ((K_0 ∪ K') + nx) ) }.
From the above identity and (3.2.5) we deduce

lim_{n→∞} P^u[A ∩ t_x^n(A)] = Σ_{K''⊆K_1} Σ_{K'⊆K_1} (−1)^{|K''|+|K'|} exp{ −u( cap(K_0 ∪ K'') + cap(K_0 ∪ K') ) }

= ( Σ_{K''⊆K_1} (−1)^{|K''|} e^{−u·cap(K_0 ∪ K'')} ) · ( Σ_{K'⊆K_1} (−1)^{|K'|} e^{−u·cap(K_0 ∪ K')} ) = P^u[A]^2,

where the last equality follows from (3.1.3). Thus (3.2.4) holds for cylinder events. Now by Remark 3.2 the mixing result (3.2.4) can be routinely extended to any local event. The proof of Theorem 3.10 is complete.
Exercise 3.11. Show that the asymptotic independence result (3.2.4) can indeed be extended from the case when A is a cylinder event of the form (3.1.1) to the case when B_ε ∈ σ(Ψ_x, x ∈ K) for some K ⊂⊂ Z^d.
3.3 Increasing and decreasing events, stochastic domination
In this section we introduce increasing and decreasing events, which will play an important
role in the sequel. We also define stochastic domination of probability measures, and use it to
compare the law of random interlacements with that of the classical Bernoulli percolation.
There is a natural partial order on the space {0,1}^{Z^d}: we say that ξ ≤ ξ' for ξ, ξ' ∈ {0,1}^{Z^d} if ξ_x ≤ ξ'_x for all x ∈ Z^d.
Definition 3.12. An event G ∈ F is called increasing (resp., decreasing) if for all ξ, ξ' ∈ {0,1}^{Z^d} with ξ ≤ ξ', ξ ∈ G implies ξ' ∈ G (resp., ξ' ∈ G implies ξ ∈ G).
It is immediate that if G is increasing, then Gc is decreasing, and that the union or intersection
of increasing events is again an increasing event.
Definition 3.13. If P and Q are probability measures on the measurable space ({0,1}^{Z^d}, F), then we say that P stochastically dominates Q if for every increasing event G ∈ F, Q[G] ≤ P[G].
Random interlacements at level u is a random subset of Zd , so it is natural to try to compare
it to a classical random subset of Zd , namely the Bernoulli site percolation with density p. It
turns out that the laws of the two random subsets do not stochastically dominate one another.
We first define Bernoulli percolation. For p ∈ [0, 1], we consider the probability measure Q_p on ({0,1}^{Z^d}, F) such that under Q_p, the coordinate maps (Ψ_x)_{x∈Z^d} are independent and each is distributed as a Bernoulli random variable with parameter p, i.e.,

Q_p[Ψ_x = 1] = 1 − Q_p[Ψ_x = 0] = p.
While P^u exhibits long-range correlations (see Remark 3.6), Q_p is a product measure. For applications, it is often helpful if there is a stochastic domination by (or of) a product measure. Unfortunately, this is not the case for P^u, as we will now see in Claims 3.14 and 3.15.
Claim 3.14. For any u > 0 and p ∈ (0, 1), P u does not stochastically dominate Qp .
Proof. Fix u > 0 and p ∈ (0, 1). For R ≥ 1, let GR = { S ∩ B(R) = ∅ } ∈ F be the event that
box B(R) is completely vacant. The events GR are clearly decreasing, therefore, in order to
prove Claim 3.14 it is enough to show that for some R ≥ 1,
P u [GR ] > Qp [GR ].
For large enough R, we have

P^u[G_R] = e^{−u·cap(B(R))} > (1 − p)^{|B(R)|} = Q_p[G_R],

where the equality on the left is (3.1.2), and the inequality indeed holds for large enough R because, by (2.3.14),

cap(B(R)) ≍ R^{d−2}   and   |B(R)| ≍ R^d,

so e^{−u·cap(B(R))} decays to zero slower than (1 − p)^{|B(R)|} as R → ∞. The proof is complete.
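The decay-rate comparison behind the inequality can be illustrated numerically. In the sketch below, c1 and c2 are hypothetical constants standing in for the implicit constants in cap(B(R)) ≍ R^{d−2} and |B(R)| ≍ R^d:

```python
import math

# Placeholder parameters: a level u, a density p, and dimension d = 3;
# c1, c2 are stand-ins for the unspecified constants in the ≍ relations.
u, p, d = 1.0, 0.5, 3
c1, c2 = 1.0, 1.0

for R in [10, 20, 40]:
    log_pu = -u * c1 * R ** (d - 2)         # log P^u[G_R], up to constants
    log_qp = c2 * R ** d * math.log(1 - p)  # log Q_p[G_R]
    print(R, log_pu, log_qp)  # log P^u[G_R] dominates log Q_p[G_R]
```

The gap between the two logarithms grows like R^d, which is exactly why the vacant-box event is eventually far more likely under P^u than under any nontrivial product measure.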
The proof of the next claim is an adaptation of [CT12, Lemma 4.7].
Claim 3.15. For any u > 0 and p ∈ (0, 1), Qp does not stochastically dominate P u .
Proof. Fix u > 0 and p ∈ (0, 1). For R ≥ 1, let G'_R = {S ∩ B(R) = B(R)} ∈ F be the event that B(R) is completely occupied. The event G'_R is clearly increasing, therefore, in order to prove Claim 3.15, it suffices to show that for some R ≥ 1,

P^u[G'_R] > Q_p[G'_R].   (3.3.1)

On the one hand, Q_p[G'_R] = p^{|B(R)|}. On the other hand, we will prove in Claim 6.10 of Section 6.3, using the more constructive definition of random interlacements, that there exists R_0 = R_0(u) < ∞ such that

∀ R ≥ R_0 : P^u[G'_R] ≥ exp{ −(1/2)·ln(R)^2·R^{d−2} }.   (3.3.2)

Since |B(R)| ≍ R^d, (3.3.1) holds for large enough R, and the proof of Claim 3.15 is complete.
3.4 Notes
The results of Section 3.2 are proved in [S10] using Definition 6.7. The latter definition allows one to deduce many other interesting properties of random interlacements. For instance, for any u > 0, the subgraph of Z^d induced by random interlacements at level u is almost surely infinite and connected (see [S10, Corollary 2.3]).
For any u > 0, the measure P u satisfies the so-called FKG inequality, i.e., for any increasing
events A1 , A2 ∈ F,
P u [A1 ∩ A2 ] ≥ P u [A1 ] · P u [A2 ],
see [T09a].
Despite the results of Claims 3.14 and 3.15, there are many similarities in the geometric properties of the subgraphs of Z^d induced by random interlacements at level u and by Bernoulli percolation with parameter p > p_c, where p_c ∈ (0, 1) is the critical threshold for the existence of a (unique) infinite connected component in the resulting subgraph (see [G99]). For instance, the infinite connected components of random interlacements and of Bernoulli percolation are both almost surely transient graphs (see [GKZ93, RS11a]), their graph distances are comparable with the graph distance in Z^d (see [AP96, CP12, DRS12b]), and simple random walk on each of them satisfies the quenched invariance principle (see [BeBi07, MP07, PRS13, SidSz04]).
It is worth mentioning that if d is high enough and if we restrict our attention to a subspace V
of Zd with large enough co-dimension, then the law of the restriction of random interlacements
to V does stochastically dominate Bernoulli percolation on V with small enough p = p(u), see
the Appendix of [CP12].
Chapter 4

Random walk on the torus and random interlacements
In this chapter we consider simple random walk on the discrete torus T^d_N := (Z/NZ)^d, d ≥ 3. We prove that for every u > 0, the local limit (as N → ∞) of the set of vertices in T^d_N visited by the random walk in its first ⌊uN^d⌋ steps is given by random interlacements at level u.
We denote by ϕ : Z^d → T^d_N the canonical projection map of the equivalence relation on Z^d induced by congruence mod N, omitting the dependence on N from the notation. Recall that P_x is the law of simple random walk (X_n)_{n≥0} on Z^d started from x. The sequence (ϕ(X_n))_{n≥0} is called simple random walk on T^d_N started from ϕ(x). Its distribution is given as the pushforward of P_x by ϕ, i.e., ϕ ◦ P_x. In other words, given the position of the walk at step n, its position at step n + 1 is uniformly distributed over the neighbors of that vertex.
We use bold font to denote vertices and subsets of the torus, i.e., x ∈ T^d_N and K ⊂ T^d_N. We write K ⊂⊂ T^d_N if the size of K does not depend on N. Similarly, we denote the random walk on the torus by (X_t)_{t≥0} = (ϕ(X_t))_{t≥0}. We write P_x for the law of simple random walk on T^d_N started from x ∈ T^d_N, and

P = N^{−d} · Σ_{x∈T^d_N} P_x   (4.0.1)

for the law of simple random walk started from a uniformly chosen vertex of T^d_N. For simplicity, we omitted from the notation above the dependence on N of the random walk on T^d_N and its law.
The goal of this chapter is to prove that there is a well-defined local limit (as N → ∞) of {X_0, ..., X_{⌊uN^d⌋}} ⊂ T^d_N for any u > 0. The key to this statement is the following theorem.

Theorem 4.1 ([W08]). For a given K ⊂⊂ Z^d,

lim_{N→∞} P[ {X_0, ..., X_{⌊uN^d⌋}} ∩ ϕ(K) = ∅ ] = e^{−u·cap(K)}.   (4.0.2)
By (3.1.2), the right-hand side of (4.0.2) is precisely the probability that random interlacements
at level u does not intersect K.
Exercise 4.2. Define the probability measure P^{u,N} on ({0,1}^{Z^d}, F) by

P^{u,N}[B] = P[ ( 1[ϕ(x) ∈ {X_0, ..., X_{⌊uN^d⌋}}] )_{x∈Z^d} ∈ B ],   B ∈ F.

Use Theorem 4.1, the inclusion-exclusion formula (3.1.3) and Remark 3.2 to show that for any K ⊂⊂ Z^d and any local event B ∈ σ(Ψ_x, x ∈ K) we have

lim_{N→∞} P^{u,N}[B] = P^u[B],

where P^u is the unique probability measure on ({0,1}^{Z^d}, F) that satisfies (3.1.2).

Thus, Theorem 4.1 implies that the local limit of the sequence of random sets {X_0, ..., X_{⌊uN^d⌋}} is given by random interlacements at level u.
The proof of Theorem 4.1 relies crucially on two facts: (a) simple random walk on Z^d is transient when d ≥ 3 (recall Theorem 2.7), and (b) uniformly over all possible starting positions, the so-called lazy random walk on T^d_N converges with high probability to its stationary measure (which is the uniform measure on the vertices of T^d_N) in about N^{2+ǫ} steps. The first fact implies that even though the random walk on T^d_N is recurrent, it is "locally transient": with positive probability it returns to its starting position only after a very long time (of order N^d). The second fact implies that the distributions of the lazy random walk before step t and after step t + N^{2+ǫ} are approximately independent as N → ∞.
The lazy random walk (Yt )t≥0 on TdN is defined as the Markov chain which stays in the current
state with probability 1/2 and otherwise selects a new state uniformly at random from the
neighbors of the current state. The main reason to introduce the lazy random walk is to avoid
issues caused by the periodicity of simple random walk. (Indeed, if N is even, then simple
random walk on TdN will visit disjoint subsets of TdN at even and at odd steps.) The lazy random
walk is aperiodic and starting from any position converges to the uniform measure on vertices
of TdN .
It will be convenient to link the lazy random walk with simple random walk. Let (ξ_t)_{t≥1} be a sequence of independent identically distributed Bernoulli random variables with parameter 1/2, and let S_0 = 0 and S_t = Σ_{i=1}^t ξ_i for t ≥ 1. Then Y_t := X_{S_t} is the lazy random walk on T^d_N. Indeed, S_t stands for the number of steps among the first t at which the lazy random walk changes its location. In what follows, for simplicity of notation, we use P_x to denote both the law of simple random walk on T^d_N started from x and the law of the lazy random walk started from x.
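The coupling Y_t = X_{S_t} translates directly into a simulation. The following sketch (a minimal illustration, not part of the proofs) samples a lazy random walk on T^d_N by flipping a fair coin at each step and, on success, advancing the underlying simple random walk by one uniformly chosen nearest-neighbor move:

```python
import random

def lazy_walk(d, N, steps, x0=None, rng=random):
    """Sample a lazy random walk on the torus (Z/NZ)^d via the coupling
    Y_t = X_{S_t}: with probability 1/2 stay put, otherwise move the
    underlying simple random walk by +-1 in a uniformly chosen coordinate."""
    x = list(x0) if x0 is not None else [0] * d
    path = [tuple(x)]
    for _ in range(steps):
        if rng.random() < 0.5:  # xi_t = 1: the walk actually moves
            i = rng.randrange(d)
            x[i] = (x[i] + rng.choice((-1, 1))) % N
        path.append(tuple(x))
    return path

path = lazy_walk(d=3, N=10, steps=1000)
```

Each entry of `path` is a vertex of T^d_N, and consecutive entries either coincide (a lazy step) or differ by one unit in a single coordinate.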
Before we give a proof of Theorem 4.1, we collect some useful facts about simple and lazy random
walks on TdN .
4.1 Preliminaries

4.1.1 Lazy random walk
Our first statement gives a useful connection between simple and lazy random walks. Note that by the law of large numbers S_t/t → 1/2 as t → ∞. Thus, on average, in t steps the lazy random walk changes its position about t/2 times. This is formalized in the next lemma.

Lemma 4.3. For any ǫ > 0 there exists α = α(ǫ) ∈ (0, 1) such that for all n ≥ 0,

P_x[ {Y_0, ..., Y_{2(1−ǫ)n}} ⊂ {X_0, ..., X_n} ⊂ {Y_0, ..., Y_{2(1+ǫ)n}} ] ≥ 1 − 2·α^n.   (4.1.1)
Proof of Lemma 4.3. Note that the event in (4.1.1) contains the event {S_{2(1−ǫ)n} < n} ∩ {S_{2(1+ǫ)n} > n}. By the exponential Markov inequality, for any λ > 0,

P_x[S_{2(1−ǫ)n} ≥ n] ≤ e^{−λn} · ( (1/2)·e^λ + 1/2 )^{2(1−ǫ)n}

and

P_x[S_{2(1+ǫ)n} ≤ n] ≤ e^{λn} · ( (1/2)·e^{−λ} + 1/2 )^{2(1+ǫ)n}.

To finish the proof, choose λ = λ(ǫ) > 0 small enough so that both e^{−λ}·((1/2)·e^λ + 1/2)^{2(1−ǫ)} and e^λ·((1/2)·e^{−λ} + 1/2)^{2(1+ǫ)} are smaller than 1. (From the asymptotic expansion of the two expressions as λ → 0 one can deduce that such a choice of λ always exists.)
Exercise 4.4. Give details for the proof of Lemma 4.3, i.e., give a possible expression for α.
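One way to approach Exercise 4.4 numerically is to minimize, over λ, the larger of the two bases appearing in the Chernoff bounds above; the grid search below is a rough sketch of that idea, not a closed-form answer:

```python
import math

def alpha(eps, grid=2000):
    """Numerically search for a valid alpha(eps) < 1 in Lemma 4.3:
    minimise over lambda in (0, 1] the larger of the two bases in the
    exponential Markov (Chernoff) bounds from the proof."""
    best = 1.0
    for k in range(1, grid + 1):
        lam = k / grid
        a = math.exp(-lam) * (0.5 * math.exp(lam) + 0.5) ** (2 * (1 - eps))
        b = math.exp(lam) * (0.5 * math.exp(-lam) + 0.5) ** (2 * (1 + eps))
        best = min(best, max(a, b))
    return best

print(alpha(0.1))  # strictly below 1, as the lemma asserts
```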
The following corollary is going to be useful in the proof of Theorem 4.1.
Corollary 4.5. For any ǫ > 0 and δ > 0, there exist C = C(ǫ, δ) < ∞ and β = β(ǫ) ∈ (0, 1) such that for all N ≥ 1 (the size of T^d_N), K ⊂⊂ T^d_N, and n = ⌊N^δ⌋,

(1 − C·β^n) · P[ {Y_0, ..., Y_{2(1−ǫ)n}} ∩ K ≠ ∅ ] ≤ P[ {X_0, ..., X_n} ∩ K ≠ ∅ ]
≤ (1 + C·β^n) · P[ {Y_0, ..., Y_{2(1+ǫ)n}} ∩ K ≠ ∅ ].   (4.1.2)
Proof of Corollary 4.5. It follows from (4.0.1) and Lemma 4.3 that for any K ⊂⊂ T^d_N,

P[ {Y_0, ..., Y_{2(1−ǫ)n}} ∩ K ≠ ∅ ] − 2·α^n ≤ P[ {X_0, ..., X_n} ∩ K ≠ ∅ ] ≤ P[ {Y_0, ..., Y_{2(1+ǫ)n}} ∩ K ≠ ∅ ] + 2·α^n,

with α chosen as in (4.1.1). In addition, since X_0 under P is uniformly distributed over the vertices of T^d_N, we have P[{X_0, ..., X_n} ∩ K ≠ ∅] ≥ P[X_0 ∈ K] ≥ 1/N^d. We take β = √α. Let N_1 = N_1(ǫ, δ) be such that for all N ≥ N_1, one has 2·α^n·N^d < 1. For any N ≤ N_1, the inequalities (4.1.2) hold for some C = C(ǫ, δ, N_1), and for any N > N_1, the inequalities (4.1.2) hold for C = C(ǫ, δ) such that (1 − C·β^n)·(1 + 2·α^n·N^d) ≤ 1 and (1 + C·β^n)·(1 − 2·α^n·N^d) ≥ 1. We complete the proof by taking C which is suitable for both cases, N ≤ N_1 and N > N_1.
Exercise 4.6. Give details for the proof of Corollary 4.5.
4.1.2 Hitting of small sets in short times
Theorem 4.1 gives an asymptotic expression for the probability that simple random walk visits a subset of T^d_N by times proportional to N^d. The next lemma gives an asymptotic expression for the probability that simple random walk visits a subset of T^d_N within a much shorter time than N^d.
Lemma 4.7. Let δ ∈ (0, d), N ≥ 1, and n = ⌊N^δ⌋. For any K ⊂⊂ Z^d,

lim_{N→∞} (N^d/n) · P[ {X_0, ..., X_n} ∩ ϕ(K) ≠ ∅ ] = cap(K).   (4.1.3)
Proof of Lemma 4.7. Let K ⊂⊂ Z^d. Recall from (2.2.2), (2.2.3), and (2.2.4) the notions of the entrance time H_A in A, the hitting time H̃_A of A, and the exit time T_A from A for simple random walk on Z^d. These definitions naturally extend to the case of random walk on T^d_N rather than Z^d, and we use the same notation H_A, H̃_A, and T_A, respectively. Using the reversibility of simple random walk on T^d_N, i.e., the fact that for any x_0, ..., x_t ∈ T^d_N,

P[(X_0, ..., X_t) = (x_0, ..., x_t)] = P[(X_0, ..., X_t) = (x_t, ..., x_0)],

we deduce that for all N ≥ 1, t ∈ N, and x ∈ ϕ(K),

P[H_{ϕ(K)} = t, X_{H_{ϕ(K)}} = x] = P[X_0 = x, H̃_{ϕ(K)} > t].   (4.1.4)
Exercise 4.8. Prove (4.1.4).
Now we start to rewrite the left-hand side of (4.1.3):

(N^d/n) · P[ {X_0, ..., X_n} ∩ ϕ(K) ≠ ∅ ] = (N^d/n) · Σ_{t=0}^n Σ_{x∈ϕ(K)} P[H_{ϕ(K)} = t, X_{H_{ϕ(K)}} = x]

= (N^d/n) · Σ_{t=0}^n Σ_{x∈ϕ(K)} P[X_0 = x, H̃_{ϕ(K)} > t]   (by (4.1.4))

= (1/n) · Σ_{t=0}^n Σ_{x∈ϕ(K)} P_x[H̃_{ϕ(K)} > t] = (1/n) · Σ_{t=0}^n Σ_{x∈ϕ(K)} e_{ϕ(K)}(x, t),

where we defined e_{ϕ(K)}(x, t) = P_x[H̃_{ϕ(K)} > t], omitting the dependence on N from the notation. For x ∈ Z^d, K ⊂⊂ Z^d, and t ≥ 0, let e_K(x, t) = P_x[H̃_K > t]. Note that lim_{t→∞} e_K(x, t) = e_K(x), where e_K(x) is defined in (2.3.1). Thus, it follows from (2.3.2) that

lim_{N→∞} (1/n) · Σ_{t=0}^n Σ_{x∈K} e_K(x, t) = cap(K).
Therefore, to prove (4.1.3), it is enough to show that for any x ∈ K,

lim_{N→∞} max_{0≤t≤n} | e_{ϕ(K)}(ϕ(x), t) − e_K(x, t) | = 0.   (4.1.5)

By using the fact that X_t = ϕ(X_t), we obtain that for each t ≤ n,

| e_{ϕ(K)}(ϕ(x), t) − e_K(x, t) | = | P_x[H̃_{ϕ^{−1}(ϕ(K))} > t] − P_x[H̃_K > t] | ≤ P_x[H_{ϕ^{−1}(ϕ(K))\K} ≤ t] ≤ P_x[H_{ϕ^{−1}(ϕ(K))\K} ≤ n].

Therefore, (4.1.5) will follow once we show that for any x ∈ K,

lim_{N→∞} P_x[H_{ϕ^{−1}(ϕ(K))\K} ≤ n] = 0.   (4.1.6)
By Doob's submartingale inequality applied to the non-negative submartingale (|X_t|_2^2)_{t≥0}, for any ǫ > 0,

lim_{N→∞} P_x[T_{B(x, n^{(1+ǫ)/2})} ≤ n] = 0.   (4.1.7)
Exercise 4.9. Prove (4.1.7). Hint: first show that E_x[ |X_t|_2^2 | X_0, ..., X_{t−1} ] = |X_{t−1}|_2^2 + 1.
We fix ǫ > 0 and decompose the event in (4.1.6) according to whether or not the random walk has left the box B(x, n^{(1+ǫ)/2}) up to time n. Taking into account (4.1.7), to prove (4.1.6) it suffices to show that for any x ∈ K,

lim_{N→∞} P_x[ H_{ϕ^{−1}(ϕ(K))\K} ≤ n, T_{B(x, n^{(1+ǫ)/2})} > n ] ≤ lim_{N→∞} P_x[ H_{(ϕ^{−1}(ϕ(K)) ∩ B(x, n^{(1+ǫ)/2}))\K} < ∞ ] = 0.
We write

P_x[ H_{(ϕ^{−1}(ϕ(K)) ∩ B(x, n^{(1+ǫ)/2}))\K} < ∞ ] ≤ Σ_{y ∈ (ϕ^{−1}(ϕ(K)) ∩ B(x, n^{(1+ǫ)/2}))\K} P_x[H_y < ∞]

≤ Σ_{y ∈ (ϕ^{−1}(ϕ(K)) ∩ B(x, n^{(1+ǫ)/2}))\K} g(x, y) ≤ Σ_{y ∈ (ϕ^{−1}(ϕ(K)) ∩ B(x, n^{(1+ǫ)/2}))\K} C_g · |x − y|^{2−d},

where we used (2.3.7) and (2.2.8) in the last two inequalities.
Note that if N is big enough, then ϕ^{−1}(ϕ(K)) ∩ B(x, N/2) = K, therefore we can write the sum above as a sum over vertices in the disjoint annuli

B(x, N·(k + 1/2)) \ B(x, N·(k − 1/2)),   k ∈ { 1, ..., ⌈n^{(1+ǫ)/2}/N⌉ },   (4.1.8)

see Figure 4.1.

[Figure 4.1: An illustration of (4.1.8) and (4.1.9). The dark set in the middle is K, and the lighter ones are copies of K from ϕ^{−1}(ϕ(K)) \ K. The copies of K are shifted by elements of NZ^d. The concentric boxes around K with dotted boundaries are of radius N/2, 3N/2, 5N/2, ...]

Note that for each such k,

Σ_{y ∈ ϕ^{−1}(ϕ(K)) ∩ (B(x, N·(k+1/2)) \ B(x, N·(k−1/2)))} C_g · |x − y|^{2−d} ≤ |K| · C · k^{d−1} · C_g · (N·(k − 1/2))^{2−d} ≤ C·|K|·N^{1−d}·n^{(1+ǫ)/2}.

Therefore,

Σ_{y ∈ (ϕ^{−1}(ϕ(K)) ∩ B(x, n^{(1+ǫ)/2}))\K} C_g · |x − y|^{2−d} ≤ C·|K|·N^{1−d}·n^{(1+ǫ)/2} · ⌈n^{(1+ǫ)/2}/N⌉ ≤ C·|K| · n^{1+ǫ}/N^d.   (4.1.9)

Since δ ∈ (0, d) and n = ⌊N^δ⌋, we choose ǫ > 0 such that δ(1 + ǫ) < d, and conclude (4.1.6). The proof of Lemma 4.7 is complete.
Corollary 4.10. Let δ ∈ (0, d), N ≥ 1, and n = ⌊N^δ⌋. For any K ⊂⊂ Z^d,

lim_{N→∞} (N^d/n) · P[ {Y_0, ..., Y_n} ∩ ϕ(K) ≠ ∅ ] = (1/2)·cap(K).   (4.1.10)

Exercise 4.11. Use Corollary 4.5 and Lemma 4.7 to prove (4.1.10).
4.1.3 Fast mixing of lazy random walk
The lazy random walk on T^d_N is an irreducible, aperiodic Markov chain, and thus converges to a unique stationary distribution. It is not difficult to check that the uniform measure on the vertices of T^d_N is the stationary distribution of the lazy random walk. Convergence to the stationary distribution is often described by the function

ε_n(N) := Σ_{y∈T^d_N} | P_0[Y_n = y] − N^{−d} |   ( = Σ_{y∈T^d_N} | P_x[Y_n = y] − N^{−d} | for any x ∈ T^d_N ).
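For small tori, ε_n(N) can be computed exactly by iterating the one-step distribution of the lazy walk; the following sketch does this with numpy:

```python
import numpy as np

def eps_n(N, d, n):
    """eps_n(N) = sum over y of |P_0[Y_n = y] - N^{-d}| for the lazy
    random walk on (Z/NZ)^d, computed by iterating the one-step update."""
    p = np.zeros([N] * d)
    p[(0,) * d] = 1.0  # the walk starts at the origin
    for _ in range(n):
        q = 0.5 * p  # stay put with probability 1/2
        for axis in range(d):
            # each of the 2d neighbours receives probability 1/(4d)
            q += (np.roll(p, 1, axis=axis) + np.roll(p, -1, axis=axis)) / (4 * d)
        p = q
    return float(np.abs(p - N ** (-d)).sum())

print(eps_n(4, 2, 50))  # already tiny: the lazy walk mixes fast on a small torus
```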
Mixing properties of the random walk can also be described using εn (N ), as we discuss in
Lemma 4.12.
Lemma 4.12. For N ≥ 1, 1 ≤ t_1 ≤ t_2 ≤ T, E_1 ∈ σ(Y_0, ..., Y_{t_1}) and E_2 ∈ σ(Y_{t_2}, ..., Y_T), we have

| P[E_1 ∩ E_2] − P[E_1] · P[E_2] | ≤ ε_{t_2−t_1}(N).   (4.1.11)
Proof. For x, y ∈ T^d_N, define

f(x) = P[E_1 | Y_{t_1} = x],   g(y) = P[E_2 | Y_{t_2} = y].

Note that P[E_1] = E[f(Y_{t_1})] = N^{−d} · Σ_{x∈T^d_N} f(x) and P[E_2] = E[g(Y_{t_2})] = N^{−d} · Σ_{y∈T^d_N} g(y). Also, by the Markov property,

P[E_1 ∩ E_2] = E[f(Y_{t_1}) · g(Y_{t_2})] = N^{−d} · Σ_{x,y∈T^d_N} f(x)g(y) P_x[Y_{t_2−t_1} = y].   (4.1.12)

Therefore,

| P[E_1 ∩ E_2] − P[E_1] · P[E_2] | = | E[f(Y_{t_1}) · g(Y_{t_2})] − E[f(Y_{t_1})] · E[g(Y_{t_2})] |

= | N^{−d} · Σ_{x,y∈T^d_N} f(x)g(y) ( P_x[Y_{t_2−t_1} = y] − N^{−d} ) | ≤ sup_{x∈T^d_N} Σ_{y∈T^d_N} | P_x[Y_{t_2−t_1} = y] − N^{−d} | = ε_{t_2−t_1}(N).
Exercise 4.13. Prove (4.1.12).
In the proof of Theorem 4.1 we will use the following corollary of Lemma 4.12.
Corollary 4.14. Fix K ⊂⊂ T^d_N. For 0 ≤ s ≤ t, let E_{s,t} = { {Y_s, ..., Y_t} ∩ K = ∅ }. Then for any k ≥ 1 and 0 ≤ s_1 ≤ t_1 ≤ ... ≤ s_k ≤ t_k,

| P[ ∩_{i=1}^k E_{s_i,t_i} ] − Π_{i=1}^k P[E_{s_i,t_i}] | ≤ (k − 1) · max_{1≤i≤k−1} ε_{s_{i+1}−t_i}(N).   (4.1.13)

Note that the case k = 2 corresponds to (4.1.11).

Exercise 4.15. Prove (4.1.13) using induction on k.
The next lemma gives a bound on the speed of convergence of the lazy random walk on TdN to
its stationary distribution. We refer the reader to [LPW08, Theorem 5.5] for the proof.
Lemma 4.16. Let δ > 2 and n = ⌊N^δ⌋. There exist c > 0 and C < ∞ such that for any N ≥ 1,

ε_n(N) ≤ C·e^{−c·N^{δ−2}}.   (4.1.14)
Lemmas 4.12 and 4.16 imply that the sigma-algebras σ(Y_0, ..., Y_t) and σ(Y_{t+⌊N^{2+ǫ}⌋}, ...) are asymptotically (as N → ∞) independent for any given ǫ > 0. This will be crucially used in the proof of Theorem 4.1.
4.2 Proof of Theorem 4.1
First of all, by Corollary 4.5, it suffices to show that for any u > 0 and K ⊂⊂ Z^d,

lim_{N→∞} P[ {Y_0, ..., Y_{2⌊uN^d⌋}} ∩ ϕ(K) = ∅ ] = e^{−u·cap(K)}.   (4.2.1)
We fix u > 0 and define L = 2⌊uN^d⌋.
Our plan is the following: we subdivide the random walk trajectory (Y_0, ..., Y_L) into multiple sub-trajectories with gaps in between them. We will do this in such a way that the number of sub-trajectories is large and their combined length is almost the same as L. On the other hand, any two consecutive sub-trajectories will be separated by a gap that is much longer than the mixing time of the lazy random walk on T^d_N, so that the sub-trajectories are practically independent.
In order to formalize the above plan, we choose α, β ∈ R satisfying

2 < α < β < d.   (4.2.2)

Let

ℓ* = ⌊N^β⌋ + ⌊N^α⌋,   ℓ = ⌊N^β⌋,   K = ⌊L/ℓ*⌋ − 1.   (4.2.3)
Note that the assumptions (4.2.2) imply

lim_{N→∞} K = ∞,   lim_{N→∞} (K·ℓ)/L = 1.   (4.2.4)
We fix K ⊂⊂ Z^d. For 0 ≤ k ≤ K, consider the events

E_k = { {Y_{kℓ*}, ..., Y_{kℓ*+ℓ}} ∩ ϕ(K) = ∅ }.

The events E_k all have the same probability, since under P, each Y_{kℓ*} is uniformly distributed on T^d_N.
We are ready to prove (4.2.1). On the one hand, note that

0 ≤ lim_{N→∞} ( P[ ∩_{k=0}^K E_k ] − P[ {Y_0, ..., Y_{2⌊uN^d⌋}} ∩ ϕ(K) = ∅ ] )

≤ lim_{N→∞} P[ ∪_{k=0}^K ∪_{t=kℓ*+ℓ}^{(k+1)ℓ*} {Y_t ∈ ϕ(K)} ] ≤ lim_{N→∞} |K|·(K + 1)·(ℓ* − ℓ)/N^d = 0,

where in the second inequality we used the union bound and the fact that Y_t is a uniformly distributed element of T^d_N under P for any t ∈ N, and the final equality follows from (4.2.2) and (4.2.3).
On the other hand, by Lemma 4.16 and (4.2.2),

lim_{N→∞} K · ε_{ℓ*−ℓ}(N) = 0,

and using (4.1.13),

lim_{N→∞} P[ ∩_{k=0}^K E_k ] = lim_{N→∞} Π_{k=0}^K P[E_k] = lim_{N→∞} ( P[E_0] )^{K+1}

= lim_{N→∞} ( 1 − (ℓ/N^d)·(1/2)·cap(K) )^{K+1} = e^{−u·cap(K)},

where we used (4.1.10) in the penultimate step and (4.2.4) in the last one. Putting all the estimates together, we complete the proof of (4.2.1) and Theorem 4.1.
Exercise 4.17. Define for each N the random variable M_N = Σ_{k=0}^K 1[E_k^c]. In words, M_N is the number of sub-trajectories (Y_{kℓ*}, ..., Y_{kℓ*+ℓ}), k = 0, ..., K, that hit ϕ(K). Show that as N → ∞, the sequence M_N converges in distribution to the Poisson distribution with parameter u·cap(K).
4.3 Notes
The study of the limiting microscopic structure of the random walk trace on the torus was
motivated by the work of Benjamini and Sznitman [BS08], in which they investigate structural
changes in the vacant set left by a simple random walk on the torus (Z/N Z)d , d ≥ 3, up to
times of order N d .
The model of random interlacements was introduced by Sznitman in [S10] and used in [S09a,
S09b] to study the disconnection time of the discrete cylinder (Z/N Z)d × Z by a simple random
walk.
Theorem 4.1 was first proved in [W08] using the result from [AB92] (see also [A89, p. 24, B2]) that the hitting time of a set by the (continuous-time) random walk on T^d_N is asymptotically exponentially distributed, and using variational formulas to express the expected hitting time of a set in T^d_N in terms of capacity. Our proof is more in the spirit of [TW11], where for any δ ∈ (0, 1) a coupling between the random walk X_t and random interlacements at levels (u − ǫ) and (u + ǫ) is constructed in such a way that

I^{u−ǫ} ∩ B(N^δ) ⊂ ϕ^{−1}( {X_0, ..., X_{⌊uN^d⌋}} ) ∩ B(N^δ) ⊂ I^{u+ǫ} ∩ B(N^δ)

with probability going to 1 faster than any polynomial as N → ∞. In fact, the proof of [TW11] reveals that locally the random walk trajectory (and not just its trace) looks like a random interlacements point process, which we define and study in the remaining chapters.
Chapter 5
Poisson point processes
In this chapter we review the notion of a Poisson point process on a measurable space as well
as some
basic operations (coloring, mapping, thinning) that we will need for the construction of the
random interlacements point process and in the study of its properties. First we recall some
well-known facts about the Poisson distribution.
5.1 Poisson distribution
We say that an N-valued random variable X has Poisson distribution with parameter λ ∈ (0, ∞), briefly X ∼ POI(λ), if P[X = k] = e^{−λ}·λ^k/k! for k ∈ N. We adopt the convention that P[X = ∞] = 1 if λ = ∞.
We recall that the generating function of the POI(λ) distribution is E[z^X] = e^{λ(z−1)}.
Lemma 5.1 (Infinite divisibility). If X_1, ..., X_j, ... are independent and X_j ∼ POI(λ_j), then we have

Σ_{j=1}^∞ X_j ∼ POI( Σ_{j=1}^∞ λ_j ).   (5.1.1)
Proof. The proof follows easily from the fact that an N-valued random variable is uniquely determined by its generating function:

E[ z^{Σ_{j=1}^∞ X_j} ] = E[ Π_{j=1}^∞ z^{X_j} ] = Π_{j=1}^∞ E[z^{X_j}] = Π_{j=1}^∞ e^{λ_j(z−1)} = e^{Σ_{j=1}^∞ λ_j · (z−1)}.
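The identity (5.1.1) for two summands can also be verified exactly by convolving the two probability mass functions, since Σ_j POI(λ_1)(j)·POI(λ_2)(k − j) = POI(λ_1 + λ_2)(k) by the binomial theorem. The parameters below are arbitrary placeholders:

```python
import math

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam1, lam2 = 0.7, 1.6  # placeholder parameters

# Convolve the two pmfs and compare with POI(lam1 + lam2), term by term.
for k in range(10):
    conv = sum(poisson_pmf(lam1, j) * poisson_pmf(lam2, k - j)
               for j in range(k + 1))
    assert abs(conv - poisson_pmf(lam1 + lam2, k)) < 1e-12
```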
The reverse operation of (5.1.1) is known as coloring (or labeling) of Poisson particles.
Lemma 5.2 (Coloring). Suppose Ω = {ω_1, ..., ω_j, ...} is a countable set, ξ_1, ..., ξ_i, ... are i.i.d. Ω-valued random variables with distribution P[ξ_i = ω_j] = p_j, where Σ_{j=1}^∞ p_j = 1, and X ∼ POI(λ) is a Poisson random variable independent of (ξ_i)_{i=1}^∞. If we define

X_j = Σ_{i=1}^X 1[ξ_i = ω_j],   j ∈ N,

then X_1, ..., X_j, ... are independent, X_j ∼ POI(λ·p_j), and Σ_{j=1}^∞ X_j = X. We call X_j the number of particles with color ω_j.
Proof. We only prove the statement when |Ω| = 2, i.e., when we only have two colors. The statement with more than two colors follows by induction. If |Ω| = 2, denote p_1 = p and p_2 = 1 − p. We again use the method of generating functions. Recall that if Y has binomial distribution with parameters n and p, then E[z^Y] = ((1 − p) + pz)^n. Using this we get

E[ z_1^{X_1} · z_2^{X_2} ] = E[ z_1^{Σ_{i=1}^X 1[ξ_i=ω_1]} · z_2^{Σ_{i=1}^X (1−1[ξ_i=ω_1])} ] = E[ (z_1/z_2)^{Σ_{i=1}^X 1[ξ_i=ω_1]} · z_2^X ]

= E[ E[ (z_1/z_2)^{Σ_{i=1}^X 1[ξ_i=ω_1]} · z_2^X | X ] ] = E[ ( (1 − p) + p·z_1/z_2 )^X · z_2^X ]

= E[ ( (1 − p)·z_2 + p·z_1 )^X ] = e^{λ((1−p)z_2 + pz_1 − 1)} = e^{pλ(z_1−1)} · e^{(1−p)λ(z_2−1)}.

The right-hand side coincides with the joint generating function of two independent Poisson random variables with parameters pλ and (1 − p)λ, respectively.
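The coloring mechanism of Lemma 5.2 is easy to simulate: sample X ∼ POI(λ), color each of the X particles independently, and compare the empirical means of X_1 and X_2 with pλ and (1 − p)λ. The parameters below are arbitrary placeholders, and the Poisson sampler uses Knuth's classical algorithm:

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's algorithm: multiply uniforms until falling below e^{-lam}
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(0)
lam, p, trials = 3.0, 0.4, 20000  # placeholder parameters
x1s, x2s = [], []
for _ in range(trials):
    x = sample_poisson(lam, rng)
    x1 = sum(rng.random() < p for _ in range(x))  # particles colored omega_1
    x1s.append(x1)
    x2s.append(x - x1)

m1 = sum(x1s) / trials  # should be close to p * lam = 1.2
m2 = sum(x2s) / trials  # should be close to (1 - p) * lam = 1.8
```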
5.2 Poisson point processes
Let (W , W) denote a measurable space, i.e., W is a set and W is a sigma-algebra on W . We
denote an element of W by w ∈ W .
Definition 5.3. An infinite point measure on W is a measure of the form µ = Σ_{i=1}^∞ δ_{w_i}, where δ_{w_i} denotes the Dirac measure concentrated on w_i ∈ W, that is, for all A ⊆ W we have

µ(A) = Σ_{i=1}^∞ 1[w_i ∈ A].
We denote by Ω(W) the set of infinite point measures on W. A Poisson point process on W is a random point measure on W, i.e., a random element of Ω(W). A random element µ of Ω(W) corresponds to a probability measure on Ω(W) (which is called the law of µ), so we first need to define a sigma-algebra on Ω(W). We certainly want the maps of the form µ ↦ µ(A), A ∈ W, to be random variables on Ω(W), so we define A(W) to be the sigma-algebra on Ω(W) generated by such random variables.
Definition 5.4. Given a sigma-finite measure λ on (W , W), we say that the random point
measure µ is a Poisson point process (PPP) on W with intensity measure λ if
(a) for all B ∈ W we have µ(B) ∼ POI(λ(B)) and
(b) if B1 , . . . , Bn are disjoint subsets of W , then the random variables µ(B1 ), . . . , µ(Bn ) are
independent.
Definition 5.4 uniquely describes the joint distribution of the random variables µ(B1 ), . . . , µ(Bn )
for any (not necessarily disjoint) finite collection of subsets B1 , . . . , Bn of W , thus Claim 3.3
guarantees the uniqueness of a probability measure on the measurable space
(Ω(W ), A(W ))
satisfying Definition 5.4 for any given intensity measure λ. It remains to prove the existence of such a probability measure.
Now we provide the construction of the PPP µ satisfying Definition 5.4 given the intensity
measure λ.
Let A_1, A_2, . . . ∈ W be such that

(a) ∪_{i=1}^∞ A_i = W,

(b) A_i ∩ A_j = ∅ if i ≠ j,

(c) 0 < λ(A_i) < +∞ for all i ∈ N.

Such a partitioning of W exists because λ is a sigma-finite measure on (W , W). For any i ∈ N define the probability measure λ̃_i on (W , W) by letting

λ̃_i(A) = λ(A ∩ A_i)/λ(A_i),  A ∈ W.

Note that λ̃_i is supported on A_i. Let us now generate a doubly infinite sequence (w_{i,j})_{i,j=1}^∞ of W-valued independent random variables, where w_{i,j} has distribution λ̃_i. Independently from (w_{i,j})_{i,j=1}^∞ we also generate an infinite sequence (X_i)_{i=1}^∞ of N-valued independent random variables with distribution X_i ∼ POI(λ(A_i)). Given this random input we define

µ = ∑_{i=1}^∞ ∑_{j=1}^{X_i} δ_{w_{i,j}}.   (5.2.1)
Exercise 5.5. Check that the random point measure µ that we have just constructed satisfies
Definition 5.4. Hint: use Lemmas 5.1 and 5.2.
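The construction (5.2.1) becomes concrete once W is concrete. In the sketch below (an illustrative choice of setting, not from the text) we take W = [0, n) with a constant intensity: each unit cell A_i = [i, i+1) plays the role of a partition element, X_i ∼ POI(λ(A_i)) points are placed in A_i, and the normalized intensity λ̃_i is the uniform distribution on the cell.

```python
import random

def poisson_sample(mean, rng):
    # POI(mean) via counting rate-1 exponential arrivals up to time `mean`.
    count, t = 0, rng.expovariate(1.0)
    while t <= mean:
        count += 1
        t += rng.expovariate(1.0)
    return count

def ppp_unit_cells(n_cells, rate, rng):
    """Construction (5.2.1) for constant intensity `rate` on [0, n_cells):
    X_i ~ POI(rate * |A_i|) points in cell A_i = [i, i+1), each point drawn
    from the normalized intensity on the cell (here: uniform)."""
    points = []
    for i in range(n_cells):
        for _ in range(poisson_sample(rate, rng)):
            points.append(i + rng.random())
    return points
```

Counts of points in disjoint Borel sets are then independent Poisson variables, which is exactly what Exercise 5.5 asks one to verify.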
The following exercise allows us to perform basic operations with Poisson point processes.
Exercise 5.6 (Restriction and mapping of a PPP). Let µ = ∑_{i=1}^∞ δ_{w_i} denote a PPP on the space (W , W) with intensity measure λ.
(a) Given some A ∈ W, denote by 1_A λ the measure on (W , W) defined by

(1_A λ)(B) := λ(A ∩ B)

for any B ∈ W. Similarly, denote by 1_A µ the point measure on (W , W) defined by

(1_A µ)(B) := µ(A ∩ B) = ∑_{i=1}^∞ 1[w_i ∈ A ∩ B],

or equivalently by the formula 1_A µ = ∑_{i=1}^∞ δ_{w_i} 1{w_i ∈ A}. Use Definition 5.4 to prove that 1_A µ is a PPP with intensity measure 1_A λ.
(b) Let A1 , A2 , . . . ∈ W such that Ai ∩ Aj = ∅ if i 6= j. Show that the Poisson point processes
1A1 µ, 1A2 µ, . . . are independent, i.e., they are independent (Ω(W ), A(W ))-valued random
variables.
(c) If (W′, W′) is another measurable space and ϕ : W → W′ is a measurable mapping, denote by ϕ ◦ λ the measure on (W′, W′) defined by

(ϕ ◦ λ)(B′) := λ(ϕ^{−1}(B′))

for any B′ ∈ W′. Similarly, denote by ϕ ◦ µ the point measure on (W′, W′) defined by

(ϕ ◦ µ)(B′) := µ(ϕ^{−1}(B′)) = ∑_{i=1}^∞ 1[w_i ∈ ϕ^{−1}(B′)],

or equivalently by the formula ϕ ◦ µ = ∑_{i=1}^∞ δ_{ϕ(w_i)}. Use Definition 5.4 to prove that ϕ ◦ µ is a PPP with intensity measure ϕ ◦ λ.
The following two exercises are not needed for the construction of random interlacements, but
they prove useful to us in Section 11.2.2.
Exercise 5.7 (Merging of PPPs). Use Lemma 5.1 and Definition 5.4 to show that if µ and µ′
are independent Poisson point processes on the space (W , W) with respective intensity measures
λ and λ′ , then µ + µ′ is a PPP on (W , W) with intensity measure λ + λ′ .
Exercise 5.8 (Thinning of a PPP). Let µ be a PPP on the space (W , W) with intensity measure
λ. Let λ′ be another nonnegative measure on (W , W) satisfying
λ′ ≤ λ
(i.e., for all A ∈ W, we have λ′(A) ≤ λ(A)). Since λ′ ≤ λ, the measure λ′ is absolutely continuous with respect to λ, so we can λ-almost surely define the Radon-Nikodym derivative dλ′/dλ : W → [0, 1] of λ′ with respect to λ.
Given a realization µ = ∑_{i=1}^∞ δ_{w_i}, let us define independent Bernoulli random variables (α_i)_{i=1}^∞ in the following way. Let

P[α_i = 1] = 1 − P[α_i = 0] = (dλ′/dλ)(w_i).
Use the construction (5.2.1) to show that

µ′ = ∑_{i=1}^∞ α_i δ_{w_i}

is a PPP on (W , W) with intensity measure λ′.
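Algorithmically, the thinning of Exercise 5.8 amounts to flipping an independent coin for each realized point, with success probability given by the density dλ′/dλ at that point. A minimal sketch (Python; the density function and point lists below are illustrative):

```python
import random

def thin(points, density, rng):
    """Keep each point w_i of a realized point measure independently with
    probability density(w_i) = (dlambda'/dlambda)(w_i), as in Exercise 5.8."""
    return [w for w in points if rng.random() < density(w)]
```

Merging (Exercise 5.7) is just as direct on realizations: the sum µ + µ′ of two independent samples is the concatenation of their point lists.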
5.3 Notes
For a comprehensive treatment of Poisson point processes on abstract measurable spaces, see [Res87] and [K93].
Chapter 6

Random interlacements point process
In this chapter we give the definition of random interlacements at level u as the range of a
countable collection of doubly-infinite trajectories in Zd . This collection will arise from a certain
Poisson point process (the random interlacements point process).
We will first construct the intensity measure of this Poisson point process in Section 6.1.
In Section 6.2 we define the canonical probability space of the random interlacements point
process as well as some additional point measures on it. In Section 6.3 we prove inequality
(3.3.2).
6.1 A sigma-finite measure on doubly infinite trajectories
The aim of this section is to define the measurable space (W ∗ , W ∗ ) of doubly infinite trajectories modulo time shifts (see Definition 6.1), and define on it a sigma-finite measure ν (see
Theorem 6.2). The random interlacement point process is going to be the Poisson point process
on the product space W ∗ × R+ with intensity measure ν ⊗ λ, where λ denotes the Lebesgue
measure on R+ , see (6.2.2).
6.1.1 Spaces
We begin with some definitions. Let
W = {w : Z → Zd : |w(n) − w(n + 1)|1 = 1 for all n ∈ Z, and |w(n)| → ∞ as n → ±∞}
be the space of doubly infinite nearest neighbor trajectories which visit every finite subset of
Zd only finitely many times, and
W+ = {w : N → Zd : |w(n) − w(n + 1)|1 = 1 for all n ∈ N, and |w(n)| → ∞ as n → ∞}
the space of forward trajectories which spend finite time in finite subsets of Zd .
As in Section 2.2, we denote by Xn , n ∈ Z, the canonical coordinates on W and W+ , i.e.,
Xn (w) = w(n). By W we denote the sigma-algebra on W generated by (Xn )n∈Z , and by W+
the sigma-algebra on W+ generated by (Xn )n∈N .
We define the shift operators θk : W → W , k ∈ Z and θk : W+ → W+ , k ∈ N by
θk (w)(n) = w(n + k).
(6.1.1)
Next, we define the space (W ∗ , W ∗ ) which will play an important role in the construction of the
random interlacements point process.
Definition 6.1. Let ∼ be the equivalence relation on W defined by
w ∼ w′
⇐⇒
∃ k ∈ Z : w′ = θk (w),
i.e., w and w′ are equivalent if w′ can be obtained from w by a time shift. The quotient space
W/ ∼ is denoted by W ∗ . We write
π∗ : W → W ∗
for the canonical projection which assigns to a trajectory w ∈ W its ∼-equivalence class π ∗ (w) ∈
W ∗ . The natural sigma-algebra W ∗ on W ∗ is defined by
A ∈ W ∗ ⇐⇒ (π ∗ )−1 (A) ∈ W.
Let us finish this section by defining some useful subsets of W and W ∗. For any K ⊂⊂ Zd, we define

W_K = {w ∈ W : X_n(w) ∈ K for some n ∈ Z} ∈ W

to be the set of trajectories that hit K, and let W^∗_K = π^∗(W_K) ∈ W ∗. It will also prove helpful to partition W_K according to the first entrance time of trajectories in K. For this purpose we define (similarly to the definition of the first entrance time (2.2.2) for trajectories in W+) for w ∈ W and K ⊂⊂ Zd,

H_K(w) := inf{n ∈ Z : w(n) ∈ K},  the ‘first entrance time’,

and

W^n_K = {w ∈ W : H_K(w) = n} ∈ W.

The sets (W^n_K)_{n∈Z} are disjoint and W_K = ∪_{n∈Z} W^n_K. Also note that W^∗_K = π^∗(W^n_K) for each n ∈ Z.
6.1.2 Construction of the intensity measure underlying random interlacements
In this subsection we construct the sigma-finite measure ν on (W ∗ , W ∗ ). This is done by first
describing a family of finite measures QK , K ⊂⊂ Zd , on (W, W) in (6.1.2), and then defining ν
using the pushforwards of the measures QK by π ∗ in Theorem 6.2. The main step is to show that
the pushforwards of the measures QK by π ∗ form a consistent family of measures on (W ∗ , W ∗ ),
see (6.1.5).
Recall from Section 2.2 that Px and Ex denote the law and expectation, respectively, of simple
random walk starting in x. By Theorem 2.7 we know that for d ≥ 3 the random walk is transient,
i.e., we have Px [W+ ] = 1. From now on we think about Px as a probability measure on W+ .
Using the notions of the first hitting time H̃_K (2.2.3) and the equilibrium measure e_K(·) (2.3.1) of K ⊂⊂ Zd, we define the measure Q_K on (W, W) by the formula

Q_K[(X_{−n})_{n≥0} ∈ A, X_0 = x, (X_n)_{n≥0} ∈ B] = P_x[A | H̃_K = ∞] · e_K(x) · P_x[B] = P_x[A, H̃_K = ∞] · P_x[B]   (6.1.2)

for any A, B ∈ W+ and x ∈ Zd, where the second equality uses (2.3.1).
Note that we have only defined the measure of sets of the form

A × {X_0 = x} × B ∈ W
(describing an event in terms of the behavior of the backward trajectory (X−n )n≥0 , the value
at time zero X0 and the forward trajectory (Xn )n≥0 ), but the sigma-algebra W is generated by
events of this form, so QK can be uniquely extended to all W-measurable subsets of W . For
any K ⊂⊂ Zd,

Q_K[W] = Q_K[W_K] = Q_K[W^0_K] = ∑_{x∈K} Q_K[X_0 = x] = ∑_{x∈K} e_K(x) = cap(K),   (6.1.3)

where the second-to-last equality uses (6.1.2) and the last uses (2.3.2). In particular, the measure Q_K is finite, and (1/cap(K)) · Q_K is a probability measure on (W, W) supported on W^0_K, which can be defined in words as follows:
(a) X_0 is distributed according to the normalized equilibrium measure ẽ_K on K, see (2.3.3),
(b) conditioned on the value of X0 , the forward and backward paths (Xn )n≥0 and (X−n )n≥0
are conditionally independent,
(c) conditioned on the value of X0 , the forward path (Xn )n≥0 is a simple random walk starting
at X0 ,
(d) conditioned on the value of X0 , the backward path (X−n )n≥0 is a simple random walk
starting at X0 , conditioned on never returning to K after the first step.
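The description (a)-(d) suggests an approximate sampler for the normalized measure Q_K/cap(K). The sketch below (Python, d = 3; not from the text) replaces the conditioning event {H̃_K = ∞} by "no entry into K within a finite horizon", which is only a proxy justified by transience, and realizes both the start law ẽ_K and the backward conditioning in (d) by rejection sampling; `horizon` and `half_len` are illustrative tuning parameters.

```python
import random

MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def step(x, rng):
    dx = rng.choice(MOVES)
    return (x[0] + dx[0], x[1] + dx[1], x[2] + dx[2])

def avoids_K(x, Kset, horizon, rng):
    """Finite-horizon proxy for {H~_K = infinity}: run SRW from x and
    check it does not enter K within `horizon` steps."""
    for _ in range(horizon):
        x = step(x, rng)
        if x in Kset:
            return False
    return True

def sample_QK_trajectory(K, rng, horizon=300, half_len=30):
    """Approximate sample from Q_K / cap(K), following (a)-(d):
    (a) X_0 ~ e~_K, via rejection: propose x uniformly in K, accept with
        probability (approximately) e_K(x) = P_x[H~_K = infinity];
    (c) forward path: plain SRW from X_0;
    (d) backward path: SRW from X_0 conditioned (approximately) never to
        return to K after the first step, via rejection.
    Both halves are truncated at half_len steps."""
    Kset, Klist = set(K), sorted(K)
    while True:                                   # (a)
        x0 = rng.choice(Klist)
        if avoids_K(x0, Kset, horizon, rng):
            break
    fwd = [x0]                                    # (c)
    for _ in range(half_len):
        fwd.append(step(fwd[-1], rng))
    while True:                                   # (d)
        bwd = [x0]
        for _ in range(half_len):
            bwd.append(step(bwd[-1], rng))
        if all(y not in Kset for y in bwd[1:]) and avoids_K(bwd[-1], Kset, horizon, rng):
            break
    return bwd[::-1] + fwd[1:]                    # truncated doubly infinite path
```

The returned list is a nearest-neighbor path whose middle vertex is X_0 ∈ K and whose backward half avoids K, mirroring (a)-(d) up to truncation error.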
We now define a sigma-finite measure ν on the measurable space (W ∗ , W ∗ ).
Theorem 6.2 ([S10], Theorem 1.1). There exists a unique sigma-finite measure ν on (W ∗ , W ∗ )
which satisfies for all K ⊂⊂ Zd the identity
∀ A ∈ W ∗ with A ⊆ W^∗_K :  ν(A) = Q_K[(π^∗)^{−1}(A)].   (6.1.4)
Remark 6.3. With the notation of Exercise 5.6, the identity (6.1.4) can be briefly re-stated as
∀ K ⊂⊂ Zd : 1WK∗ ν = π ∗ ◦ QK ,
i.e., the restriction of ν to W^∗_K is the pushforward of Q_K by π^∗.
For a discussion of the reasons why it is advantageous to factor out time-shifts and work with a
measure on W ∗ rather than W , see Section 6.4.
Proof. We first prove the uniqueness of ν. If K_1 ⊂ K_2 ⊂ . . . is a sequence of finite subsets of Zd such that ∪_{n=1}^∞ K_n = Zd, then W ∗ = ∪_{n=1}^∞ W^∗_{K_n} and by the monotone convergence theorem,

∀ A ∈ W ∗ :  ν(A) = lim_{n→∞} ν(A ∩ W^∗_{K_n}) = lim_{n→∞} Q_{K_n}[(π^∗)^{−1}(A ∩ W^∗_{K_n})],
thus ν(A) is uniquely determined by (6.1.4) for any A ∈ W ∗ .
To prove the existence of ν satisfying (6.1.4) we only need to check that the definition of ν(A) in (6.1.4) is not ambiguous, i.e., that if K ⊆ K′ ⊂⊂ Zd and A ∈ W ∗ with A ⊆ W^∗_K ⊆ W^∗_{K′}, then

Q_{K′}[(π^∗)^{−1}(A)] = Q_K[(π^∗)^{−1}(A)].   (6.1.5)
As soon as we prove (6.1.5), we can explicitly construct ν using the measures of the form Q_K in the following way: if ∅ = K_0 ⊆ K_1 ⊂ K_2 ⊂ . . . is a sequence of finite subsets of Zd such that ∪_{n=1}^∞ K_n = Zd, then for any A ∈ W ∗ we must have ν(A) = ∑_{n=1}^∞ ν(A ∩ (W^∗_{K_n} \ W^∗_{K_{n−1}})), thus we can define

ν(A) = ∑_{n=1}^∞ Q_{K_n}[ (π^∗)^{−1}( A ∩ (W^∗_{K_n} \ W^∗_{K_{n−1}}) ) ].   (6.1.6)
Exercise 6.4. Use (6.1.5) to show that the measure ν defined by (6.1.6) indeed satisfies (6.1.4).
Also note that ν is sigma-finite, since

Q_{K_n}[ (π^∗)^{−1}( W^∗_{K_n} \ W^∗_{K_{n−1}} ) ] = cap(K_n) − cap(K_{n−1}) < +∞.
The rest of this proof consists of the validation of the formula (6.1.5). Recall that Q_K is supported on W^0_K and Q_{K′} is supported on W^0_{K′}, and note that the random shift θ_{H_K} : W_K ∩ W^0_{K′} → W^0_K is a bijection, see Figure 6.1. It suffices to prove that for all B ∈ W such that B ⊆ W_K,

Q_{K′}[ {w ∈ W_K ∩ W^0_{K′} : θ_{H_K}(w) ∈ B} ] = Q_K[ {w ∈ W^0_K : w ∈ B} ].   (6.1.7)
Indeed, for any A ∈ W ∗ such that A ⊆ W^∗_K, let B = (π^∗)^{−1}(A). Then B ∈ W, B ⊆ W_K, and

Q_K[ {w ∈ W^0_K : w ∈ (π^∗)^{−1}(A)} ] = Q_K[ {w ∈ W^0_K : w ∈ B} ]
  = Q_{K′}[ {w ∈ W_K ∩ W^0_{K′} : θ_{H_K}(w) ∈ B} ]   (by (6.1.7))
  = Q_{K′}[ {w ∈ W_K ∩ W^0_{K′} : θ_{H_K}(w) ∈ (π^∗)^{−1}(A)} ]
  = Q_{K′}[ {w′ ∈ W^0_{K′} : w′ ∈ (π^∗)^{−1}(A)} ],

which is precisely (6.1.5). By Claim 3.3, in order to prove the identity (6.1.7) for all B ∈ W, we only need to check it for cylinder sets B, i.e., we can assume that

B = {w : X_m(w) ∈ A_m, m ∈ Z}

for some A_m ⊂ Zd, m ∈ Z, where only finitely many of the sets A_m, m ∈ Z, are not equal to Zd.
Recall the notion of the time of last visit L_K from (2.2.5). First we rewrite the right-hand side of (6.1.7):

Q_K[ {w ∈ W^0_K : w ∈ B} ] = Q_K[X_m ∈ A_m, m ∈ Z]
  = ∑_{y∈K} P_y[H̃_K = ∞, X_m ∈ A_{−m}, m ≥ 0] · P_y[X_n ∈ A_n, n ≥ 0]   (by (6.1.2))
  = ∑_{x∈K′} ∑_{y∈K} P_y[H̃_K = ∞, X_{L_{K′}} = x, X_m ∈ A_{−m}, m ≥ 0] · P_y[X_n ∈ A_n, n ≥ 0].   (6.1.8)
Before we rewrite the left-hand side of (6.1.7), we introduce some notation. Given x ∈ K′ and y ∈ K we denote by Σ_{x,y} the countable set of finite paths

Σ_{x,y} = { σ : {0, . . . , N_σ} → Zd : σ(0) = x, σ(N_σ) = y, σ(n) ∉ K for 0 ≤ n < N_σ }.   (6.1.9)

Given a σ ∈ Σ_{x,y}, let

W^σ := { w ∈ W_K ∩ W^0_{K′} : (X_0(w), . . . , X_{N_σ}(w)) = (σ(0), . . . , σ(N_σ)) }.
Note that if w ∈ W σ , then HK (w) = Nσ .
Now we rewrite the left-hand side of (6.1.7), see Figure 6.1. In the equation marked by (∗) below we use the fact that Q_{K′} is supported on W^0_{K′}. In the equation marked by (∗∗) below we use the Markov property and (6.1.2).

Q_{K′}[ {w ∈ W_K ∩ W^0_{K′} : θ_{H_K}(w) ∈ B} ] (∗)= ∑_{x∈K′} ∑_{y∈K} ∑_{σ∈Σ_{x,y}} Q_{K′}[ {w ∈ W^σ : θ_{N_σ}(w) ∈ B} ]
  = ∑_{x∈K′} ∑_{y∈K} ∑_{σ∈Σ_{x,y}} Q_{K′}[ {w ∈ W^σ : X_i(w) ∈ A_{i−N_σ}, i ∈ Z} ]
  (∗∗)= ∑_{x∈K′} ∑_{y∈K} ∑_{σ∈Σ_{x,y}} P_x[X_j ∈ A_{−j−N_σ}, j ≥ 0, H̃_{K′} = ∞] · P_x[X_n = σ(n) ∈ A_{n−N_σ}, 0 ≤ n ≤ N_σ] · P_y[X_n ∈ A_n, n ≥ 0].   (6.1.10)
Figure 6.1: An illustration of the consistency result (6.1.7). The picture on the left is what
the measure QK ′ sees, the picture on the right is what the measure QK sees. On both pictures
the white circle represents X0 and the arrows represent the directions of the “forward” and
“backward” trajectories. The difference between the two pictures is that the path σ (defined in
(6.1.9)) gets time-reversed.
We claim that for any x ∈ K′ and y ∈ K,

∑_{σ∈Σ_{x,y}} P_x[X_j ∈ A_{−j−N_σ}, j ≥ 0, H̃_{K′} = ∞] · P_x[X_n = σ(n) ∈ A_{n−N_σ}, 0 ≤ n ≤ N_σ]
  = P_y[H̃_K = ∞, X_{L_{K′}} = x, X_m ∈ A_{−m}, m ≥ 0].   (6.1.11)
Once (6.1.11) is established, the right-hand sides of (6.1.8) and (6.1.10) coincide, and (6.1.7)
follows.
The proof of (6.1.11) is based on the time-reversibility of simple random walk on Zd with respect
to the counting measure on Zd :
Px [Xn = σ(n) ∈ An−Nσ , 0 ≤ n ≤ Nσ ] = (2d)−Nσ · 1[σ(n) ∈ An−Nσ , 0 ≤ n ≤ Nσ ]
= Py [Xm = σ(Nσ − m) ∈ A−m , 0 ≤ m ≤ Nσ ]. (6.1.12)
We write (using the Markov property in the equation marked by (∗) below)

∑_{σ∈Σ_{x,y}} P_x[X_j ∈ A_{−j−N_σ}, j ≥ 0, H̃_{K′} = ∞] · P_x[X_n = σ(n) ∈ A_{n−N_σ}, 0 ≤ n ≤ N_σ]
  (6.1.12)= ∑_{σ∈Σ_{x,y}} P_x[X_j ∈ A_{−j−N_σ}, j ≥ 0, H̃_{K′} = ∞] · P_y[X_m = σ(N_σ − m) ∈ A_{−m}, 0 ≤ m ≤ N_σ]
  (∗)= ∑_{σ∈Σ_{x,y}} P_y[X_m = σ(N_σ − m) ∈ A_{−m}, 0 ≤ m ≤ N_σ, X_m ∈ A_{−m}, m ≥ N_σ, H̃_{K′} ◦ θ_{N_σ} = ∞]
  (6.1.9)= P_y[H̃_K = ∞, X_{L_{K′}} = x, X_m ∈ A_{−m}, m ≥ 0].
This completes the proof of (6.1.11) and of Theorem 6.2.
Let us derive some identities that arise as a by-product of the proof of Theorem 6.2. Recall the notation P_m = ∑_{x∈K} m(x) P_x from (2.2.1).
Proposition 6.5 (Sweeping identity). Let K ⊆ K′ ⊂⊂ Zd. Then

∀ y ∈ K :  e_K(y) = P_{e_{K′}}[H_K < ∞, X_{H_K} = y].   (6.1.13)
Proof of Proposition 6.5. Take y ∈ K. We apply (6.1.7) to the event B = {X_0 = y}. By (6.1.2), the right-hand side of (6.1.7) equals e_K(y), and the left-hand side equals

∑_{x∈K′} e_{K′}(x) · P_x[H_K < ∞, X_{H_K} = y] = P_{e_{K′}}[H_K < ∞, X_{H_K} = y],

where the equality uses (2.2.1). Thus, (6.1.13) follows.
Corollary 6.6. By summing over y in (6.1.13) and using (2.3.2), we immediately get that for
any K ⊆ K ′ ⊂⊂ Zd ,
P_{ẽ_{K′}}[H_K < ∞] = cap(K)/cap(K′).   (6.1.14)
6.2 Random interlacements point process
The main goal of this section is to define (a) the random interlacements point process (see (6.2.2))
as the Poisson point process on the space W ∗ × R+ of labeled doubly-infinite trajectories modulo
time-shift, (b) the canonical measurable space (Ω, A) for this point process (see (6.2.1)), and
(c) random interlacements at level u as a random variable on Ω taking values in subsets of Zd
(see Definition 6.7). Apart from that, we will define various point measures on Ω, which will be
useful in studying properties of random interlacements in later chapters.
6.2.1 Canonical space and random interlacements
Recall from Definition 6.1 the space W ∗ of doubly infinite trajectories in Zd modulo time-shift,
and consider the product space W ∗ × R+ . For each pair (w∗ , u) ∈ W ∗ × R+ , we say that u is the
label of w∗ , and we call W ∗ × R+ the space of labeled trajectories. We endow this product space
with the product sigma-algebra W ∗ ⊗ B(R+ ), and define on this measurable space the measure
ν ⊗ λ, where ν is the measure constructed in Theorem 6.2, and λ is the Lebesgue measure on R+.
A useful observation is that for any K ⊂⊂ Zd and u ≥ 0,

(ν ⊗ λ)(W^∗_K × [0, u]) = ν(W^∗_K) · λ([0, u]) = cap(K) · u < ∞.
Thus, the measure ν ⊗λ is a sigma-finite measure on (W ∗ ×R+ , W ∗ ⊗B(R+ )), and can be viewed
as an intensity measure for a Poisson point process on W ∗ × R+ , see Definition 5.4. It will be
handy to consider this Poisson point process on the canonical probability space (Ω, A, P), where
Ω := { ω = ∑_{n≥0} δ_{(w^∗_n, u_n)} : (w^∗_n, u_n) ∈ W ∗ × R+ for n ≥ 0, and ω(W^∗_K × [0, u]) < ∞ for any K ⊂⊂ Zd, u ≥ 0 }   (6.2.1)

is the space of locally finite point measures on W ∗ × R+, the sigma-algebra A is generated by the evaluation maps

ω ↦ ω(D) = ∑_{n≥0} 1[(w^∗_n, u_n) ∈ D],  D ∈ W ∗ ⊗ B(R+),

and P is the probability measure on (Ω, A) such that

ω = ∑_{n≥0} δ_{(w^∗_n, u_n)} is the PPP with intensity measure ν ⊗ λ on W ∗ × R+ under P.   (6.2.2)
The random element of (Ω, A, P) is called the random interlacements point process. By construction, it is the Poisson point process on W ∗ × R+ with intensity measure ν ⊗ λ. The expectation
operator corresponding to P is denoted by E, and the covariance by CovP .
We are now ready to define the central objects of these lecture notes:
Definition 6.7. Random interlacements at level u, denoted by I^u, is the random subset of Zd such that

I^u(ω) := ∪_{n≥0 : u_n ≤ u} range(w^∗_n),  for ω = ∑_{n≥0} δ_{(w^∗_n, u_n)} ∈ Ω,   (6.2.3)

where

range(w^∗) = {X_n(w) : w ∈ (π^∗)^{−1}(w^∗), n ∈ Z} ⊆ Zd

is the set of all vertices of Zd visited by w^∗.
The vacant set of random interlacements at level u is defined as

V^u(ω) := Zd \ I^u(ω).   (6.2.4)
An immediate consequence of Definition 6.7 is that

P[I^u ⊆ I^{u′}] = P[V^{u′} ⊆ V^u] = 1,  ∀ u < u′.   (6.2.5)
Remark 6.8. It follows from (6.1.3), (6.2.2) and Definition 6.7 (see also Definition 5.4) that for any K ⊂⊂ Zd and u ≥ 0, the random variable ω(W^∗_K × [0, u]) (i.e., the number of interlacement trajectories at level u that hit the set K) has Poisson distribution with parameter

ν(W^∗_K) · λ([0, u]) = cap(K) · u.

In particular,

P[I^u ∩ K = ∅] = P[ω(W^∗_K × [0, u]) = 0] = e^{−cap(K)·u}.

This proves for any u > 0 the existence of the probability measure P^u on ({0, 1}^{Zd}, F) satisfying the equations (3.1.2). In particular, P^u is indeed the law of random interlacements at level u. Recalling the notation of Exercise 4.17, it is also worth noting that

M_N −→(d) ω(W^∗_K × [0, u]),  N → ∞,
which indicates that Definition 6.7 was already “hidden” in the proof of Theorem 4.1.
Definition 6.7 allows one to gain a deeper understanding of random interlacements at level u.
In particular, as soon as one views I u as the trace of a cloud of doubly infinite trajectories,
interesting questions arise about how these trajectories are actually “interlaced”. For a further
discussion of the connectivity properties of I u , see Section 6.4.
6.2.2 Finite point measures on W+ and random interlacements in finite sets
In this subsection we give definitions of finite random point measures µK,u on W+ (in fact,
Poisson point processes on W+ ), which will be useful in the study of properties of I u . For
instance, for any K ⊂⊂ Zd , the set I u ∩ K is the restriction to K of the range of trajectories
from the support of µK,u . Since these measures are finite, they have a particularly simple
representation, see Exercise 6.9.
Consider the subspace of locally finite point measures on W+ × R+,

M := { µ = ∑_{i∈I} δ_{(w_i, u_i)} : I ⊆ N, (w_i, u_i) ∈ W+ × R+ for all i ∈ I, and µ(W+ × [0, u]) < ∞ for all u ≥ 0 }.
Recall the definition of W^0_K := {w ∈ W : H_K(w) = 0} and define

s_K : W^∗_K ∋ w^∗ ↦ w^0 ∈ W^0_K,

where s_K(w^∗) = w^0 is the unique element of W^0_K with π^∗(w^0) = w^∗.
If w = (w(n))n∈Z ∈ W we define w+ ∈ W+ to be the part of w which is indexed by nonnegative
coordinates, i.e.,
w+ = (w(n))n∈N .
For K ⊂⊂ Zd define the map µ_K : Ω → M characterized via

∫ f d(µ_K(ω)) = ∫_{W^∗_K × R+} f(s_K(w^∗)_+, u) ω(dw^∗, du),

for ω ∈ Ω and f : W+ × R+ → R+ measurable. Alternatively, we can define µ_K in the following way: if ω = ∑_{n≥0} δ_{(w^∗_n, u_n)} ∈ Ω, then

µ_K(ω) = ∑_{n≥0} δ_{(s_K(w^∗_n)_+, u_n)} 1[w^∗_n ∈ W^∗_K].
In words: in µK (ω) we collect the trajectories from ω which hit the set K, keep their labels, and
only keep the part of each trajectory which comes after hitting K, and we index the trajectories
in a way such that the hitting of K occurs at time 0.
In addition, for u > 0, define on Ω the functions

µ_{K,u}(ω)(dw) := µ_K(ω)(dw × [0, u]),  ω ∈ Ω,

taking values in the set of finite point measures on W+. Alternatively, we can define µ_{K,u} in the following way: if ω = ∑_{n≥0} δ_{(w^∗_n, u_n)} ∈ Ω, then

µ_{K,u}(ω) = ∑_{n≥0} δ_{s_K(w^∗_n)_+} 1[w^∗_n ∈ W^∗_K, u_n ≤ u].   (6.2.6)

In words: in µ_{K,u}(ω) we collect the trajectories from µ_K(ω) with labels at most u, and we forget about the values of the labels.
It follows from the definitions of Q_K and µ_{K,u} and Exercise 5.6 that

µ_{K,u} is a PPP on W+ with intensity measure u · cap(K) · P_{ẽ_K},   (6.2.7)

where ẽ_K is defined in (2.3.3), and P_{ẽ_K} in (2.2.1). Moreover, it follows from Exercise 5.6(b) that for any u′ > u,

(µ_{K,u′} − µ_{K,u}) is a PPP on W+ with intensity measure (u′ − u) · cap(K) · P_{ẽ_K}, which is independent from µ_{K,u},   (6.2.8)

since the sets W^∗_K × [0, u] and W^∗_K × (u, u′] are disjoint subsets of W ∗ × R+.
Using Definition 6.7 and (6.2.6), we can define the restriction of random interlacements at level u to any K ⊂⊂ Zd as

I^u ∩ K = K ∩ ( ∪_{w∈Supp(µ_{K,u})} range(w) ).   (6.2.9)
The following exercise provides a useful alternative way to generate a Poisson point process on
W+ with the same distribution as µK,u.
Exercise 6.9. Let N_K be a Poisson random variable with parameter u · cap(K), and let (w^j)_{j≥1} be i.i.d. random walks with distribution P_{ẽ_K}, independent from N_K. Show that the point measure

µ̃_{K,u} = ∑_{j=1}^{N_K} δ_{w^j}

is a Poisson point process on W+ with intensity measure u · cap(K) · P_{ẽ_K}. In particular, µ̃_{K,u} has the same distribution as µ_{K,u}, and Ĩ^u_K = ∪_{j=1}^{N_K} (range(w^j) ∩ K) has the same distribution as I^u ∩ K.
6.3 Covering of a box by random interlacements
In this section we apply the representation of random interlacements in finite sets obtained
in Exercise 6.9 to estimate from below the probability that random interlacements at level u
completely covers a box.
Claim 6.10. Let d ≥ 3 and u > 0. There exists R_0 = R_0(d, u) < ∞ such that for all R ≥ R_0,

P[B(R) ⊆ I^u] ≥ exp( −(1/2) ln(R)^2 R^{d−2} ).   (6.3.1)
Remark 6.11. Claim 6.10 was used in the proof of Claim 3.15, where we showed that the law
of I u is not stochastically dominated by the law of Bernoulli percolation with parameter p, for
any p ∈ (0, 1).
For further discussion on topics related to (6.3.1), i.e., cover levels of random interlacements
and large deviations bounds for occupation time profiles of random interlacements, see Section
6.4.
Proof of Claim 6.10. Fix u > 0 and R ∈ N, and write K = B(R). Using the notation of Exercise 6.9, and denoting by P the probability underlying the random objects introduced in that exercise, we have

P[B(R) ⊆ I^u] = ∑_{n=0}^∞ P[B(R) ⊆ Ĩ^u_K | N_K = n] · P[N_K = n].   (6.3.2)
Moreover, we can bound

P[B(R) ⊆ Ĩ^u_K | N_K = n] = 1 − P[ ∪_{x∈B(R)} {x ∉ Ĩ^u_K} | N_K = n ]
  ≥ 1 − ∑_{x∈B(R)} P[ ∩_{j=1}^n {H_x(w^j) = ∞} ] = 1 − |B(R)| ( 1 − cap(0)/cap(B(R)) )^n,   (6.3.3)

where the last equality uses (6.1.14).
Exercise 6.12. (a) Deduce from (6.3.3) that there exist positive constants C_0, R_0 such that for all radii R ≥ R_0,

∀ n ≥ n_0(R) := ⌈C_0 ln(R) R^{d−2}⌉ :  P[B(R) ⊆ Ĩ^u_K | N_K = n] ≥ 1/2.   (6.3.4)

(b) Use Stirling's approximation to prove that there exists R_0 = R_0(u) < ∞ such that

∀ R ≥ R_0 :  P[N_K = n_0(R)] ≥ exp( −ln(R)^2 R^{d−2} ).   (6.3.5)

Putting together (6.3.2), (6.3.4) and (6.3.5) we obtain the desired (6.3.1).
6.4 Notes
The random interlacements point process on Zd was introduced by Sznitman in [S10].
In Theorem 6.2 we constructed a measure ν on W ∗ which satisfies 1_{W^∗_K} ν = π^∗ ◦ Q_K for all K ⊂⊂ Zd, see Remark 6.3. Note that if K = (K_n)_{n=0}^∞ is a sequence ∅ = K_0 ⊆ K_1 ⊂ K_2 ⊂ . . . of finite subsets of Zd such that ∪_{n=1}^∞ K_n = Zd, and if we define the measure Q_K on W by

Q_K = ∑_{n=1}^∞ (1 − 1[W_{K_{n−1}}]) Q_{K_n},
then we have ν = π ∗ ◦ QK , see (6.1.6). One might wonder why it is necessary to define a
measure on the space W ∗ rather than the simpler space W . First, the choice of K above is
rather arbitrary and factoring out time shift equivalence of QK gives rise to the same ν for any
choice of K. Also, the measure ν is invariant with respect to spatial shifts of W ∗ (see [S10,
(1.28)]), but there is no sigma-finite measure Q on W which is invariant with respect to spatial
shifts of W and satisfies ν = π ∗ ◦ Q, see [S10, Remark 1.2 (1)].
Sznitman’s construction of random interlacements allows for various generalizations and variations. For example, instead of Zd one could take an arbitrary graph on which simple random
walk is transient, or even replace the law of simple random walk Px in the definition of QK (see
(6.1.2)) by the law of another transient reversible Markov chain, e.g., the lazy random walk.
Such a generalization is obtained in [T09b].
Another modification one could make is to replace the discrete time Markov chain by a continuous time process. Such a modification allows one to compare the occupation times of continuous time random interlacements with the square of the Gaussian free field, see e.g. [S12b, S12c].
One could also build the interlacements using continuous time processes in continuous space
such as Brownian interlacements, see [S13], or random interlacements of Lévy processes, see
[R13].
It is immediate from Definition 6.7 that if we view I u as a subgraph of Zd with edges drawn
between any pair of vertices x, y ∈ I u with |x− y|1 = 1, then this random subgraph consists only
of infinite connected components. In fact, using the classical Burton-Keane argument, it was
shown in [S10, Corollary 2.3] that for any u > 0, the graph I u is almost surely connected. Later,
different proofs of this result were found in [RS12, PT11], where a more detailed description of
the connectivity of the cloud of doubly infinite trajectories contributing to the definition of I u
(as in (6.2.3)) was obtained. The main result of [RS12, PT11] states that for any d ≥ 3 and u > 0, almost surely, any pair of vertices in I^u is connected via at most ⌈d/2⌉ trajectories of the random interlacements point process with labels at most u, and almost surely there are pairs of vertices in I^u which cannot be connected via fewer than ⌈d/2⌉ such trajectories.
The inequality (6.3.1) gives a lower bound on the probability of the unlikely event that I u covers
B(R). For more examples of large deviation results on random interlacements which involve an
exponential cost of order Rd−2 , see [LSz13a].
The inequality (6.3.4) gives a bound on the number of interlacement trajectories needed to
cover a big ball. For more precise results on how to tune the level u of random interlacements
to achieve B(R) ⊆ I u , see [B12].
Chapter 7

Percolation of the vacant set
In this chapter we discuss basic geometric properties of the vacant set V u defined in (6.2.4). We
view this set as a subgraph of Zd with edges drawn between any pair of vertices x, y ∈ V u with
|x − y|1 = 1. We define the notion of a phase transition in u and the threshold u∗ = u∗ (d) such
that for u < u∗ the graph V u contains an infinite connected component P-almost surely, and
for u > u∗ all its connected components are P-almost surely finite. Finally, using elementary
considerations we prove that u∗ > 0 if the dimension d is large enough, see Theorem 7.2. Later
on in Chapter 10 we prove that u∗ ∈ (0, ∞) for any d ≥ 3, see Theorem 10.1. Unlike the
proof of Theorem 7.2, the proof of Theorem 10.1 is quite involved and heavily relies on the so-called decoupling inequalities, see Theorem 9.5. Proving these inequalities is the ultimate goal
of Chapters 8, 9, and 11.
7.1 Percolation of the vacant set, zero-one law
The first basic question we want to ask is whether the random graph V u contains an infinite
connected component. If it does, then we say that percolation occurs. For u > 0, we consider
the event
Perc(u) = {ω ∈ Ω : V u (ω) contains an infinite connected component} (∈ A).
The following properties are immediate.
• For any u > 0,

P[Perc(u)] ∈ {0, 1},   (7.1.1)

which follows from Theorem 3.10 and the fact that the event

{ ξ ∈ {0, 1}^{Zd} : the set {x ∈ Zd : ξ_x = 0} contains an infinite connected component }  (∈ F)

is invariant under all the translations of Zd by t_x, x ∈ Zd.
• For any u < u′ , the inclusion Perc(u′ ) ⊆ Perc(u) follows from (6.2.5).
Using these properties, we can define the percolation threshold

u∗ = sup{u ≥ 0 : P[Perc(u)] = 1} ∈ [0, ∞],   (7.1.2)

such that

• for any u < u∗, P[Perc(u)] = 1 (supercritical regime),
• and for any u > u∗ , P[Perc(u)] = 0 (subcritical regime).
We say that a percolation phase transition occurs at u∗ .
In fact, the graph V u contains an infinite connected component if and only if there is a positive
density of vertices of V u which are in infinite connected components. This is proved in Proposition 7.1, and it motivates another definition of u∗ in (7.1.3), which is equivalent to (7.1.2).
For u > 0 and x ∈ Zd, we use the notation

{x ←→ ∞ in V^u} := {the connected component of x in V^u has infinite size},

and define the density of the infinite components by

η(u) = P[0 ←→ ∞ in V^u].

By Lemma 3.8, η(u) = P[x ←→ ∞ in V^u] for any x ∈ Zd, and by (6.2.5), η(u) is non-increasing in u. The following proposition provides an alternative definition of the threshold u∗.
Proposition 7.1. For any u > 0, η(u) > 0 if and only if P[Perc(u)] = 1. In particular,

u∗ = sup{u ≥ 0 : η(u) > 0}.   (7.1.3)
Proof. Once the main statement of Proposition 7.1 is proved, the equality (7.1.3) is immediate
from (7.1.2).
Assume that η(u) > 0. Then
P[Perc(u)] ≥ η(u) > 0.
By (7.1.1), we conclude that P[Perc(u)] = 1.
Assume now that η(u) = 0. Note that

Perc(u) = ∪_{x∈Zd} {x ←→ ∞ in V^u}.

Since the probability of each of the events in the union is η(u) = 0,

P[Perc(u)] ≤ ∑_{x∈Zd} η(u) = 0.
This finishes the proof of the proposition.
In Chapter 10 we will prove that for any d ≥ 3,

u∗ ∈ (0, ∞),   (7.1.4)
which amounts to showing that
(a) there exists u > 0 such that with probability 1, V u contains an infinite connected component, and
(b) there exists u < ∞ such that with probability 1, all connected components of V u are finite.
The proof of (7.1.4) relies on the so-called decoupling inequalities, which are stated in Theorem 9.5 and proved in Chapters 8, 9, and 11.
Nevertheless, it is possible to prove that u∗ > 0 if the dimension d is sufficiently large using only
elementary considerations. The rest of this chapter is devoted to proving this fact.
Theorem 7.2. There exists d0 ∈ N such that for all d ≥ d0 one has
u∗ (d) > 0.
The proof will be based on the so-called Peierls argument, which was developed by Rudolf Peierls in his work [P36] on the Ising model. In fact, the proof will even show that for some u > 0, with positive probability the origin is contained in an infinite connected component not only of V^u, but of V^u ∩ (Z2 × {0}^{d−2}).
7.2
Exponential decay and proof of Theorem 7.2
An auxiliary result we will need for the proof of Theorem 7.2 is the following: in high dimensions
and for small intensities, the probability that a finite planar set is contained in the random
interlacements decays exponentially in the cardinality of the set. To be more precise, we let
d ≥ 3 and define the plane F = Z² × {0}^{d−2}.
Proposition 7.3. For all d large enough there exists u1 (d) > 0 such that for u ∈ [0, u1 (d)] we
have
P[I^u ⊇ K] ≤ 14^{−|K|}, for all K ⊂⊂ F.    (7.2.1)
We will now proceed with the proof of Theorem 7.2 and postpone the proof of Proposition 7.3
to Section 7.3.
Proof of Theorem 7.2. We will prove that for d large enough and any u ≤ u1 , with u1 from
Proposition 7.3, one has
P[0 ←→ ∞ in V^u ∩ F] > 0,    (7.2.2)

where {0 ←→ ∞ in V^u ∩ F} denotes the event that the origin is contained in an infinite connected
component of V^u ∩ F. We say that a set π = (y1, . . . , yk) ⊂ F is a ∗-path in F if yi, yi+1 are
∗-neighbors (recall this notion from Section 2.1) for all i. If y1 = yk, we call this set a ∗-circuit.
Let C be the connected component of 0 in V u ∩ F . The crucial observation is that
C is finite if and only if there exists a ∗-circuit in I u ∩ F around 0.
(7.2.3)
While the validity of (7.2.3) seems obvious (see Figure 7.1), it is not trivial to prove such a
statement rigorously; we refer to [K86, Lemma 2.23] for more details.
Combining Proposition 7.3 with (7.2.3) we get that for d large enough,
P[|C| < ∞] = P[I^u ∩ F contains a ∗-circuit around 0]
    ≤ Σ_{n≥0} P[I^u ∩ F contains a ∗-circuit around 0 passing through (n, 0, 0, . . . , 0) ∈ F]
    ≤ Σ_{n≥0} P[I^u ∩ F contains a simple ∗-path πn with (n + 1) vertices started from (n, 0, 0, . . . , 0) ∈ F]    (7.2.4)
    ≤ Σ_{n≥0} Σ_{πn admissible} P[I^u ⊇ πn]
    ≤ Σ_{n≥0} |{πn admissible}| · 14^{−(n+1)},

where a path πn is admissible if it fulfills the property in the probability on the right-hand side
of (7.2.4). It is easy to see that |{πn : πn admissible}| ≤ 8 · 7^{n−1} if n ≥ 1, and that it is equal to
one if n = 0.

Figure 7.1: An illustration of planar duality (7.2.3). The circles represent the vertices of V^u ∩ F
that belong to the | · |1-connected finite vacant component C, the crosses represent vertices of
∂ext C (see (2.1.1)). Now ∂ext C ⊆ I^u ∩ F and ∂ext C contains a ∗-circuit.

Plugging this bound into the above calculation, we get P[|C| < ∞] < 1. This is
equivalent to (7.2.2), thus u ≤ u∗ (d) follows by Proposition 7.1. This is true for all u ≤ u1 (d),
hence the proof of Theorem 7.2 is complete, given the result of Proposition 7.3.
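The two quantitative ingredients of this Peierls-type argument, the counting bound |{πn admissible}| ≤ 8 · 7^{n−1} (8 choices for the first ∗-step, at most 7 non-backtracking choices afterwards) and the fact that the resulting series stays below 1, can be checked numerically. The following Python sketch is illustrative only; the helper `count_star_paths` is our own and not from the text:

```python
# 8 ∗-neighbor offsets of a vertex of Z^2
STAR = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def count_star_paths(n):
    """Count simple ∗-paths with n+1 vertices starting at the origin of Z^2."""
    def extend(path):
        if len(path) == n + 1:
            return 1
        x, y = path[-1]
        return sum(extend(path + [(x + dx, y + dy)])
                   for dx, dy in STAR if (x + dx, y + dy) not in path)
    return extend([(0, 0)])

# the counting bound used below (7.2.4)
for n in range(1, 5):
    assert count_star_paths(n) <= 8 * 7 ** (n - 1)

# the series in (7.2.4): 1/14 + sum_{n>=1} 8*7^{n-1}*14^{-(n+1)} < 1
bound = 14 ** -1 + sum(8 * 7 ** (n - 1) * 14.0 ** -(n + 1) for n in range(1, 60))
assert bound < 1
print(count_star_paths(2), round(bound, 4))  # prints: 56 0.1531
```

In particular the bound is attained for n = 1, 2 (equality 8 and 56), while for longer paths self-intersections make the true count strictly smaller.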
The rest of this chapter is devoted to proving Proposition 7.3.
7.3
Proof of Proposition 7.3
We begin with a generating function calculation. For w ∈ W+, let

ϕ(w) = Σ_{n≥0} 1{Xn(w) ∈ F}

denote the number of visits of w to F. Let

q = P_0^{Z^{d−2}}[H̃_0 = ∞] = 1/g_{d−2}(0),    (7.3.1)

where the second equality follows from (2.3.7) and (2.3.1), P_0^{Z^{d−2}} is the law of a (d − 2)-dimensional
simple random walk started from the origin, H̃_0 is the first time when this random walk returns
to 0, cf. (2.2.3), and g_{d−2} is the Green function for this random walk.
Note that it follows from Pólya’s theorem (cf. Theorem 2.7) that q > 0 if and only if d ≥ 5.
Lemma 7.4. If λ > 0 and d ≥ 5 satisfy

χ(λ) := e^λ (2/d + (1 − 2/d)(1 − q)) < 1,    (7.3.2)

then

E_x[e^{λϕ}] = E_0[e^{λϕ}] = q(1 − 2/d)e^λ / (1 − χ(λ)) < ∞,  for all x ∈ F.    (7.3.3)
Proof. We start with defining the consecutive entrance and exit times of the d-dimensional
random walk to the plane F . Recall the definitions of the entrance time HF to F, see (2.2.2),
the exit time TF from F, see (2.2.4), as well as the canonical shift θk from (6.1.1). We define R0
to be the first time the walker visits F , D0 to be the first time after R0 when the walker leaves
F , R1 to be the first time after D0 when the walker visits F , D1 to be the first time after R1
when the walker leaves F , and so on. Formally, let R0 = HF , and
D_i = { R_i + T_F ∘ θ_{R_i}, if R_i < ∞;  ∞, otherwise },  i ≥ 0,

and

R_i = { D_{i−1} + H_F ∘ θ_{D_{i−1}}, if D_{i−1} < ∞;  ∞, otherwise },  i ≥ 1.

Let

τ = inf{n : R_n = ∞},

and note that for any w ∈ W+,

ϕ(w) = Σ_{i=0}^{τ(w)−1} T_F ∘ θ_{R_i}(w).    (7.3.4)
We begin with a few observations:

Exercise 7.5. Show that with the above definitions we have

(i) P_x[T_F = n] = (2/d)^{n−1}(1 − 2/d),  n ≥ 1, x ∈ F,

(ii) P_x[τ = n] = q(1 − q)^{n−1},  n ≥ 1, x ∈ F.

Hint: Observe that it is sufficient to prove that the random variables T_F and τ both have
geometric distributions (which are characterized by their memoryless property) with corresponding
parameters 1 − 2/d and q. For the latter, use the fact that those steps of the d-dimensional SRW
that are orthogonal to the plane F form a (d − 2)-dimensional SRW, together with the strong
Markov property, in order to derive that

q = P_x[R_{i+1} = ∞ | R_i < ∞],  x ∈ F.    (7.3.5)
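As a quick numerical sanity check of part (i) (not part of the argument): each step of the d-dimensional SRW changes one uniformly chosen coordinate, and the walk leaves F exactly when that coordinate index exceeds 2, so for T_F only the chosen coordinate matters and the ±1 direction can be omitted. A minimal Monte Carlo sketch, with the illustrative choice d = 10 and a hypothetical helper `exit_time_of_plane`:

```python
import random

random.seed(0)
d = 10  # illustrative dimension

def exit_time_of_plane():
    """Run a d-dimensional SRW started in F = Z^2 x {0}^(d-2) until it leaves F."""
    n = 0
    while True:
        n += 1
        if random.randrange(d) >= 2:  # the step changes a coordinate orthogonal to F
            return n

N = 20000
samples = [exit_time_of_plane() for _ in range(N)]
p = 1 - 2 / d                          # geometric parameter from Exercise 7.5(i)
emp_p1 = sum(t == 1 for t in samples) / N
emp_mean = sum(samples) / N
assert abs(emp_p1 - p) < 0.02          # P_x[T_F = 1] = 1 - 2/d
assert abs(emp_mean - 1 / p) < 0.05    # E_x[T_F] = 1/(1 - 2/d) = 1.25 here
```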
Using (ii) and (7.3.4) as well as the i.i.d. structure of the family of excursions of SRW in F, we
obtain that for any x ∈ F,

E_x[e^{λϕ}] = Σ_{n≥1} E_x[e^{λ Σ_{i=0}^{n−1} T_F ∘ θ_{R_i}} 1{τ = n}] = q · E_0[e^{λT_F}] / (1 − (1 − q) · E_0[e^{λT_F}]),  if (1 − q) · E_0[e^{λT_F}] < 1,

where we can use (i) in order to compute

E_0[e^{λT_F}] = (1 − 2/d)e^λ / (1 − (2/d)e^λ),  if (2/d)e^λ < 1.

The combination of the last two identities gives (7.3.3).
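The algebra combining the two identities above into (7.3.3) is easy to get wrong; here is a small numerical consistency check (all values of d, q, λ are illustrative placeholders chosen so that χ(λ) < 1; in the text q would equal 1/g_{d−2}(0)):

```python
import math

d, q, lam = 12, 0.7, 0.05  # illustrative parameters

M = (1 - 2 / d) * math.exp(lam) / (1 - (2 / d) * math.exp(lam))  # E_0[e^{λ T_F}]
assert (2 / d) * math.exp(lam) < 1
lhs = q * M / (1 - (1 - q) * M)                      # the excursion decomposition
chi = math.exp(lam) * (2 / d + (1 - 2 / d) * (1 - q))
assert chi < 1                                       # hypothesis (7.3.2) of Lemma 7.4
rhs = q * (1 - 2 / d) * math.exp(lam) / (1 - chi)    # the closed form (7.3.3)
assert abs(lhs - rhs) < 1e-12

# (7.3.3) is the MGF of a geometric variable with parameter p = q(1 - 2/d)
p = q * (1 - 2 / d)
geo = p * math.exp(lam) / (1 - (1 - p) * math.exp(lam))
assert abs(lhs - geo) < 1e-12
```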
Remark 7.6. By looking at the moment-generating function (7.3.3) one sees that ϕ itself is
geometrically distributed with parameter q · (1 − 2/d) under Px for any x ∈ F .
Using (7.3.5), an alternative way to state the next proposition is that the probability that SRW
started from any x ∈ F will ever return to the plane F after having left it, tends to 0 as d tends
to ∞.
Proposition 7.7. q = q(d) → 1, as d → ∞.
This result will be crucial in choosing the base in the right-hand side of (7.2.1) large enough,
i.e., in our case equal to 14. The proof of Proposition 7.7 will be provided in Section 7.4, and
we can now proceed to prove Proposition 7.3.
Proof of Proposition 7.3. Recalling (7.3.2), we will show that for any d, λ and λ̃ that satisfy

λ̃ > λ > 0 and χ(λ̃) < 1,    (7.3.6)

we can choose u small enough (see (7.3.8)) such that P[I^u ⊇ K] ≤ e^{−λ|K|} holds.

Note that P[I^u ⊇ K] = P[I^u ∩ K = K]. Recall (see Exercise 6.9) that

I^u ∩ K = ⋃_{w ∈ Supp(µ_{K,u})} range(w) ∩ K,

where µ_{K,u} = Σ_{i=1}^{N_K} δ_{Y_i}, with N_K Poisson distributed with parameter u · cap(K), and Y_i independent
simple random walks (independent from N_K) distributed according to P_{ẽ_K}. Therefore,

P[I^u ⊇ K] = P[⋃_{i=1}^{N_K} ⋃_{n≥0} Y_i(n) ⊇ K] ≤ P[Σ_{i=1}^{N_K} ϕ(Y_i) ≥ |K|].    (7.3.7)

Then by the exponential Chebyshev inequality applied to (7.3.7), we get

P[I^u ⊇ K] ≤ e^{−λ̃|K|} · E[e^{λ̃ Σ_{i=1}^{N_K} ϕ(Y_i)}].

Using the fact that N_K is Poisson distributed with parameter u · cap(K), we can explicitly compute

E[e^{λ̃ Σ_{i=1}^{N_K} ϕ(Y_i)}] = exp{u · cap(K) · (E_0[e^{λ̃ϕ}] − 1)},

or otherwise refer to Campbell's formula (see Section 3.2 in [K93]) for this equality. Together
with (7.3.3) this gives

E[e^{λ̃ Σ_{i=1}^{N_K} ϕ(Y_i)}] = exp{u · cap(K) · (e^{λ̃} − 1)/(1 − χ(λ̃))}.

Choosing

u1 = (λ̃ − λ) · g(0) · (1 − χ(λ̃))/(e^{λ̃} − 1) > 0,    (7.3.8)

we obtain for any u ≤ u1 that

P[I^u ⊇ K] ≤ P[I^{u1} ⊇ K] ≤ e^{−λ̃|K|} · e^{(λ̃−λ)·g(0)·cap(K)}.

Noting that by (2.3.4), cap(K) ≤ Σ_{x∈K} cap({x}) = |K|/g(0), we can upper bound the last display by
e^{−λ|K|}. Using Proposition 7.7 and (7.3.2), we can observe that for d large enough, λ and λ̃ as in
(7.3.6) can be chosen such that λ = log 14, whence in this setting inequality (7.2.1) follows.
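The cancellation that turns E_0[e^{λ̃ϕ}] − 1 into (e^{λ̃} − 1)/(1 − χ(λ̃)), and the fact that the choice (7.3.8) really yields the bound 14^{−|K|} when λ = log 14, can be verified numerically. All parameter values below are illustrative (in particular q stands in for 1/g_{d−2}(0), and g0 for g(0)):

```python
import math

d, q = 400, 0.99          # illustrative; Proposition 7.7 makes q close to 1 for large d
lam, lam_t = math.log(14), math.log(14) + 0.3   # λ and λ̃ as in (7.3.6)

chi = math.exp(lam_t) * (2 / d + (1 - 2 / d) * (1 - q))
assert chi < 1            # hypothesis (7.3.6)

# E_0[exp(λ̃ φ)] - 1 simplifies to (exp(λ̃) - 1)/(1 - χ(λ̃))
E_phi = q * (1 - 2 / d) * math.exp(lam_t) / (1 - chi)   # closed form (7.3.3)
assert abs((E_phi - 1) - (math.exp(lam_t) - 1) / (1 - chi)) < 1e-9

g0 = 2.0                  # hypothetical value of g(0); it cancels in the final bound
u1 = (lam_t - lam) * g0 * (1 - chi) / (math.exp(lam_t) - 1)   # the choice (7.3.8)
for k in (1, 5, 20):      # |K|, with cap(K) = k/g0, the worst case in the proof
    bound = math.exp(-lam_t * k) * math.exp(u1 * (k / g0) * (math.exp(lam_t) - 1) / (1 - chi))
    assert bound <= 14.0 ** -k * (1 + 1e-9)
```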
The only piece that is still missing to complete the proof of Theorem 7.2 is the proof of Proposition 7.7 which we give in the next section.
7.4
Proof of Proposition 7.7
It is enough to show that lim_{d→∞} P_0[H̃_0 < ∞] = 0. We will first prove the result for d of the
form

d = 3k.    (7.4.1)

In Exercise 7.8 we will then show how this implies the general case. For such d = 3k, denote by
Y^(1), . . . , Y^(k) a k-tuple of i.i.d. three-dimensional SRWs, and by U_k^(j), j ≥ 1, an independent
sequence of i.i.d. uniformly distributed variables on {1, . . . , k}. We write

Y_n = (Y^(1)_{m_{k,n,1}}, . . . , Y^(k)_{m_{k,n,k}}),  n ≥ 0,    (7.4.2)

where

m_{k,n,i} = Σ_{j=1}^{n} 1{U_k^(j) = i},  i ∈ {1, . . . , k},

corresponds to the number of steps that will have been made in dimension 3k by the i-th triplet
of coordinates up to time n. We observe that the d-dimensional SRW (X_n)_{n≥0} has the same law
as the process in (7.4.2). Also note that for 2 ≤ l ≤ k we have

P[U_k^(i) ≠ U_k^(j) ∀ 1 ≤ i < j ≤ l] = Π_{m=1}^{l−1} (k − m)/k =: p(l, k).

Choosing l = l(k) = ⌊k^{1/3}⌋, a simple computation shows that lim_{k→∞} p(l(k), k) = 1, and thus we
can deduce

P_0[H̃_0 < ∞] ≤ P[∃ 1 ≤ i < j ≤ l(k) such that U_k^(i) = U_k^(j)]
    + P[U_k^(i) ≠ U_k^(j) ∀ 1 ≤ i < j ≤ l(k), and H̃_0(Y^{(U_k^(i))}) < ∞ ∀ 1 ≤ i ≤ l(k)]
    ≤ (1 − p(l(k), k)) + (1 − 1/g_3(0))^{l(k)} → 0,  as k → ∞,

where to obtain the last inequality we used the independence of Y^{(U_k^(i))}, 1 ≤ i ≤ l(k),
conditional on {U_k^(i) ≠ U_k^(j) ∀ 1 ≤ i < j ≤ l(k)}. This finishes the proof of Proposition 7.7 if d = 3k.
Exercise 7.8. (a) Show that q(d) as in (7.3.1) is a non-decreasing function of d by using a
representation similar to (7.4.2).
(b) Deduce from this that it is sufficient to prove Proposition 7.7 for the case (7.4.1).
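As an illustration of Proposition 7.7 (and of the monotonicity in part (a)), one can estimate return probabilities by Monte Carlo. Truncating at a finite horizon only yields a lower bound on P_0[H̃_0 < ∞], but the decrease with the dimension is already clearly visible. The sketch below, with illustrative parameters, is not part of any proof:

```python
import random

def return_prob_estimate(dim, n_walks=4000, horizon=200, seed=1):
    """Estimate P_0[H̃_0 <= horizon] for SRW on Z^dim (a lower bound on P_0[H̃_0 < ∞])."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_walks):
        pos = [0] * dim
        for _ in range(horizon):
            i = rng.randrange(dim)
            pos[i] += rng.choice((-1, 1))
            if not any(pos):          # the walk is back at the origin
                hits += 1
                break
    return hits / n_walks

p5, p15 = return_prob_estimate(5), return_prob_estimate(15)
assert p15 < p5   # returning becomes less likely as the dimension grows, so q(d) grows
```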
7.5
Notes
The results of this section are instructive but not optimal. In fact, in [S10, Remark 2.5 (3)] it
is noted that d0 of Theorem 7.2 actually can be chosen to be equal to 18.
When it comes to Proposition 7.7, using the asymptotics

g_d(0) = 1 + 1/(2d) + o(d^{−1}),  d → ∞,

on the high-dimensional Green function (see pp. 246–247 in [M56]), one obtains the rate of
convergence in Proposition 7.7 from the relation (7.3.1) between g_d(0) and q(d).
For further discussion on the history of the non-triviality of u∗ , see Section 10.3.
Chapter 8
Source of correlations and decorrelation via coupling
In this chapter we consider the question of correlations in random interlacements. We have
already seen in Remark 3.6 that the random set I u exhibits long-range correlations. Despite
this, we want to effectively control the stochastic dependence of locally defined events with
disjoint (distant) support. We will identify the source of correlations in the model and use the
trick of coupling to compare the correlated events to their decorrelated counterparts.
Section 8.1 serves as an introduction to the above themes and also as an example of how to work
with the random interlacements Poisson point process: we derive a bound on the correlation of
locally defined events which decays like the SRW Green function as a function of the distance
between the support of our events.
In the short Section 8.2 we argue that the way to improve the result of Section 8.1 is to compare
the probability of the joint occurrence of locally defined events for I^u to the product of the
probabilities of these events for I^{u′}, where u′ is a small perturbation of u.
In Sections 8.3 and 8.4 we start to prepare the ground for the decoupling inequalities, which will
be stated in Chapter 9 and proved in Chapter 11. The decoupling inequalities are very useful
tools in the theory of random interlacements and they will serve as the main ingredient of the
proof of u∗ ∈ (0, ∞) for all d ≥ 3 in Chapter 10.
In Section 8.3 we devise a partition of the space of trajectories that allows us to identify the
Poisson point processes on random walk excursions that cause the correlations between two
locally defined events. In Section 8.4 we show how to achieve decorrelation of these locally
defined events using a coupling where random interlacements trajectories that contribute to the
outcome of both events are dominated by trajectories that only contribute to one of the events.
Let us now collect some definitions that we will use throughout this chapter.
Recall the notion of the measurable space ({0, 1}^{Z^d}, F) from Definition 3.1.
For an event A ∈ F and a random subset J of Zd defined on some measurable space (Z, Z), we
use the notation {J ∈ A} to denote the event
{J ∈ A} = {z ∈ Z : (1{x ∈ J(z)})_{x∈Zd} ∈ A} (∈ Z).    (8.0.1)

8.1
A polynomial upper bound on correlations
The following claim quantifies the asymptotic independence result of (3.2.4). Its proof introduces
the technique of partitioning Poisson point processes and highlights the source of correlations
in I u .
Claim 8.1. Let u > 0, K1 , K2 ⊂⊂ Zd such that K1 ∩ K2 = ∅, and consider a pair of local events
Ai ∈ σ(Ψz , z ∈ Ki ), i = 1, 2. We have
Cov_P(1{I^u ∈ A1}, 1{I^u ∈ A2}) = |P[I^u ∈ A1 ∩ A2] − P[I^u ∈ A1] · P[I^u ∈ A2]|
    ≤ 4u · Cg · cap(K1) cap(K2) / d(K1, K2)^{d−2},    (8.1.1)
where Cg is defined in (2.2.8) and d(K1 , K2 ) = min{|x − y| : x ∈ K1 , y ∈ K2 }.
Proof. Let K = K1 ∪ K2 . We subdivide the Poisson point measure µK,u (defined in (6.2.6)) into
four parts:
µ_{K,u} = µ_{K1,K2^c} + µ_{K1,K2} + µ_{K2,K1^c} + µ_{K2,K1},    (8.1.2)

where

µ_{K1,K2^c} = 1{X0 ∈ K1, H_{K2} = ∞} µ_{K,u},    µ_{K1,K2} = 1{X0 ∈ K1, H_{K2} < ∞} µ_{K,u},
µ_{K2,K1^c} = 1{X0 ∈ K2, H_{K1} = ∞} µ_{K,u},    µ_{K2,K1} = 1{X0 ∈ K2, H_{K1} < ∞} µ_{K,u}.    (8.1.3)

By Exercise 5.6(b),

µ_{K1,K2^c}, µ_{K1,K2}, µ_{K2,K1^c}, µ_{K2,K1} are independent Poisson point processes on W+,    (8.1.4)

since the events in the indicator functions in (8.1.3) are disjoint subsets of W+.
Analogously to Definition 6.7, we define for κ ∈ {{K1, K2^c}, {K1, K2}, {K2, K1^c}, {K2, K1}},

I_κ = ⋃_{w ∈ Supp(µ_κ)} range(w).    (8.1.5)

By (8.1.4),

I_{K1,K2^c}, I_{K1,K2}, I_{K2,K1^c}, and I_{K2,K1} are independent.    (8.1.6)

Moreover, by (6.2.9),

{I^u ∈ A1} = {I_{K1,K2^c} ∪ I_{K1,K2} ∪ I_{K2,K1} ∈ A1},
{I^u ∈ A2} = {I_{K2,K1^c} ∪ I_{K2,K1} ∪ I_{K1,K2} ∈ A2}.    (8.1.7)
Recall the definition of A∆B from (3.2.2). By (8.1.7),

{I^u ∈ A1 ∩ A2} ∆ {I^u ∈ A1, I_{K2,K1^c} ∈ A2} ⊆ {I_{K1,K2} ∪ I_{K2,K1} ≠ ∅},    (8.1.8)

and by using |P[A] − P[B]| ≤ P[A∆B], we obtain from (8.1.8) that

|P[I^u ∈ A1 ∩ A2] − P[I^u ∈ A1] · P[I_{K2,K1^c} ∈ A2]|
    = |P[I^u ∈ A1 ∩ A2] − P[I^u ∈ A1, I_{K2,K1^c} ∈ A2]|    (by (8.1.6), (8.1.7))
    ≤ P[I_{K1,K2} ≠ ∅] + P[I_{K2,K1} ≠ ∅].    (8.1.9)

By applying (8.1.9) to A1 = {0, 1}^{Z^d}, we obtain also that

|P[I^u ∈ A2] − P[I_{K2,K1^c} ∈ A2]| ≤ P[I_{K1,K2} ≠ ∅] + P[I_{K2,K1} ≠ ∅].    (8.1.10)

The combination of (8.1.9) and (8.1.10) gives that

|P[I^u ∈ A1 ∩ A2] − P[I^u ∈ A1] · P[I^u ∈ A2]| ≤ 2P[I_{K1,K2} ≠ ∅] + 2P[I_{K2,K1} ≠ ∅]
    = 2P[µ_{K1,K2}(W+) ≥ 1] + 2P[µ_{K2,K1}(W+) ≥ 1],    (by (8.1.5))    (8.1.11)
We can bound

P[µ_{K1,K2}(W+) ≥ 1] ≤ E[µ_{K1,K2}(W+)] = E[µ_{K,u}(X0 ∈ K1, H_{K2} < ∞)]    (by (8.1.3))
    = u · P_{e_K}[X0 ∈ K1, H_{K2} < ∞] ≤ u · P_{e_{K1}}[H_{K2} < ∞]    (by (6.2.7), (2.3.5))
    ≤ u · Σ_{x∈K1} Σ_{y∈K2} e_{K1}(x) e_{K2}(y) g(x, y)    (by (2.3.2))
    ≤ u · Cg · cap(K1) cap(K2) / d(K1, K2)^{d−2}.    (by (2.2.8), (2.3.6))    (8.1.12)

By interchanging the roles of K1 and K2 in (8.1.12), one obtains exactly the same upper bound
on P[µ_{K2,K1}(W+) ≥ 1].

Claim 8.1 follows from the combination of (8.1.11) and (8.1.12).
Remark 8.2. It follows from the proof of Claim 8.1 that the source of correlation between the
events {I^u ∈ A1} and {I^u ∈ A2} comes from the random interlacements trajectories that hit
both K1 and K2, i.e., the point processes µ_{K1,K2} and µ_{K2,K1}, see, e.g., (8.1.4), (8.1.5), and
(8.1.7).
Remark 8.3. Our strategy of proving Claim 8.1 was rather crude in the sense that we could
only achieve decorrelation if the terms causing the trouble (i.e., µK1 ,K2 and µK2 ,K1 ) vanished. In
the remaining part of this chapter we will follow a different, more sophisticated strategy: loosely
speaking, instead of neglecting the effect of trajectories from µK1 ,K2 inside K2 , we will pretend
that their effect is caused by a sprinkling of trajectories from an independent copy of µK2 ,K1c .
This will allow us to dominate P[I^u ∈ A1 ∩ A2] by P[I^{u′} ∈ A1] · P[I^{u′} ∈ A2] with an error
depending on the difference between u′ and u and on the distance between K1 and K2, but decaying
much faster than d(K1, K2)^{2−d}. This strategy currently only applies when A1 and A2 are either
both increasing or both decreasing.
8.2
Perturbing the value of u
We have proved in Claim 8.1 that the covariance between any pair of “local” events decays
as the distance between their supports raised to the power (2 − d), and Claim 3.5 states that
this order of decay is correct for the events {x ∈ I^u} and {y ∈ I^u}. Such a slow polynomial
decay of correlations is not good news for many applications (including the proof of the fact
that u∗ ∈ (0, ∞) for all d ≥ 3), which require a good quantitative estimate of the asymptotic
independence between certain events.
It turns out that by comparing the probability of the intersection of events for I^u with the product
of probabilities of events for I^{u′}, where u′ is a certain small perturbation of u, one can significantly
reduce the error term for a certain class of events. This idea is best illustrated in the following
exercise. Recall that V^u = Zd \ I^u.

Exercise 8.4. For x, y ∈ Zd and u > 0, let u′ = u′(u, x, y) = u · (1 − g(y − x)/(g(0) + g(y − x))). Then

P[x, y ∈ V^u] = P[x ∈ V^{u′}] · P[y ∈ V^{u′}].

Thus, by slightly perturbing u (note that u′ → u as |x − y| → ∞) we can write the probability of
the intersection of two events for V^u as the product of probabilities of similar events for V^{u′}.
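The identity of Exercise 8.4 reduces to the formulas P[K ⊆ V^u] = exp(−u · cap(K)), cap({x}) = 1/g(0), and the two-point capacity cap({x, y}) = 2/(g(0) + g(y − x)). A quick numerical check, where the Green function values are merely illustrative placeholders:

```python
import math

g0, gxy, u = 1.5163, 0.08, 1.2   # illustrative values of g(0), g(y - x) and the level u

lhs = math.exp(-u * 2 / (g0 + gxy))      # P[x, y in V^u] = exp(-u cap({x, y}))
u_prime = u * (1 - gxy / (g0 + gxy))     # the perturbed level of Exercise 8.4
rhs = math.exp(-u_prime / g0) ** 2       # P[x in V^{u'}] * P[y in V^{u'}]
assert abs(lhs - rhs) < 1e-12
```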
The proof of Exercise 8.4 heavily depends on the specific choice of events. The aim of Sections 8.3
and 8.4 is to describe a method which applies to a more general class of events.
Recall from Section 3.3 the natural partial order on {0, 1}^{Z^d} as well as the notion of monotone
increasing and decreasing subsets of {0, 1}^{Z^d}.
Let u, u′ > 0, K1, K2 ⊂⊂ Zd such that K1 ∩ K2 = ∅, and consider a pair of local increasing (or
decreasing) events Ai ∈ σ(Ψz, z ∈ Ki), i = 1, 2. The main result of this chapter is Theorem 8.9,
which states that

P[I^u ∈ A1 ∩ A2] ≤ P[I^{u′} ∈ A1] · P[I^{u′} ∈ A2] + ǫ(u, u′, K1, K2),    (8.2.1)

where the error term ǫ(u, u′, K1, K2) is described by a certain coupling of random subsets of Zd,
see (8.4.9).
If u′ = u, then we already know from Claim 3.5 that the best error term we can hope for
decays as the distance between K1 and K2 raised to the power (2 − d). This is too slow for the
applications that we have in mind. It turns out though, and we will see it later in Theorem 9.3,
that by varying u′ a little and by restricting further the class of considered events, one can
establish (8.2.1) with an error term much better than (8.1.1).
8.3
Point measures on random walk excursions
We begin with a description of useful point measures that arise from partitioning the space
of trajectories. The partitioning that we are about to describe is more refined than the one
considered in (8.1.2)-(8.1.3) and it will allow us to carry out the plan sketched in Remark 8.3.
Given K1, K2 ⊂⊂ Zd with K1 ∩ K2 = ∅, we introduce finite sets Si and Ui such that for i ∈ {1, 2},

Ki ⊂ Si ⊂ Ui  and  U1 ∩ U2 = ∅,    (8.3.1)

and define

S = S1 ∪ S2  and  U = U1 ∪ U2.    (8.3.2)

Take

0 < u− < u+

and consider the Poisson point measures µ_{S,u−} and µ_{S,u+} defined as in (6.2.6).
Remark 8.5. The main idea in establishing (8.2.1) (with u = u− and u′ = u+ in case of
increasing events, and u = u+ and u′ = u− in case of decreasing events) is to decompose all
the trajectories in the supports of µS,u− and µS,u+ into finite excursions from their entrance
time to S until the exit time from U and then dominate the set of excursions which come from
trajectories in the support of µS,u− by the set of excursions which come from trajectories in the
support of µS,u+ that never return to S after leaving U . A useful property of the latter set of
excursions is that they cannot visit both sets S1 and S2 . This will imply some independence
properties (see, e.g., (8.4.7)) that we will find very useful when we prove the decorrelation result
in Theorem 8.9.
We define the consecutive entrance times to S and exit times from U of a trajectory w ∈ W+ :
R1 is the first time w visits S, D1 is the first time after R1 when w leaves U , R2 is the first time
after D1 when w visits S, D2 is the first time after R2 when w leaves U , etc.
In order to give a formal definition, we first recall the definitions of the entrance time HS to S
from (2.2.2) and the exit time TU from U from (2.2.4) and the shift operator θk from (6.1.1).
Let

R1 := HS,    D1 := { TU ∘ θ_{R1} + R1, if R1 < ∞;  ∞, otherwise },

as well as for k ≥ 1,

R_{k+1} := { R1 ∘ θ_{Dk} + Dk, if Dk < ∞;  ∞, otherwise },    D_{k+1} := { D1 ∘ θ_{Dk} + Dk, if Dk < ∞;  ∞, otherwise }.
For j ≥ 1, we define the following Poisson point processes on W+:

ζ^j_− := 1{Rj < ∞ = R_{j+1}} µ_{S,u−},    ζ^j_+ := 1{Rj < ∞ = R_{j+1}} µ_{S,u+}.    (8.3.3)

In ζ^j_− (resp., ζ^j_+) we collect the trajectories from µ_{S,u−} (resp., µ_{S,u+}) which perform exactly j
excursions from S to U^c. By Exercise 5.6(b) we see that

ζ^j_− (resp., ζ^j_+), j ≥ 1, are independent Poisson point processes.    (8.3.4)

We would like to associate with each trajectory in the support of ζ^j_− (and ζ^j_+) the j-tuple of its
excursions from S to U^c (see (8.3.6)), and then consider a point process of such j-tuples (see
(8.3.8)). For this we first introduce the countable set of finite length excursions from S to U^c as

C := ⋃_{n=1}^{∞} {π = (π(i))_{0≤i≤n} a nearest neighbor path in Zd with
    π(0) ∈ S, π(n) ∈ U^c, and π(i) ∈ U, for 0 ≤ i < n}.    (8.3.5)
Figure 8.1: An illustration of the excursions defined in (8.3.7).
For any j ≥ 1, consider the map

φ_j : {Rj < ∞ = R_{j+1}} ∋ w ↦ (w^1, . . . , w^j) ∈ C^j,    (8.3.6)

where

w^k := (X_{R_k + ·}(w))_{0 ≤ · ≤ D_k(w) − R_k(w)},  1 ≤ k ≤ j,    (8.3.7)

are the excursions of the path w ∈ W+. In words, φ_j assigns to a path w having j such excursions
the vector φ_j(w) = (w^1, . . . , w^j) ∈ C^j which collects these excursions in chronological order, see
Figure 8.1. With this in mind, define for j ≥ 1,
ζ̃^j_− (resp., ζ̃^j_+) as the image of ζ^j_− (resp., ζ^j_+) under φ_j.    (8.3.8)

This means that if ζ = Σ_i δ_{w_i} for ζ ∈ {ζ^j_−, ζ^j_+}, then ζ̃ = Σ_i δ_{φ_j(w_i)}. Thus the point measures
in (8.3.8) are random variables taking values in the set of finite point measures on C^j.

By Exercise 5.6(c) and (8.3.4) we obtain that

ζ̃^j_− (resp., ζ̃^j_+), j ≥ 1, are independent Poisson point processes.    (8.3.9)
Next, using these Poisson point processes on C^j, j ≥ 1, we define finite point measures on C by
replacing each δ-mass δ_{(w^1,...,w^j)} with the sum Σ_{k=1}^{j} δ_{w^k}.

Formally, consider the map s_j from the set of finite point measures on C^j to the set of finite
point measures on C,

s_j(m) := Σ_{i=1}^{N} (δ_{w_i^1} + . . . + δ_{w_i^j}),  where m = Σ_{i=1}^{N} δ_{(w_i^1,...,w_i^j)},

and define

ζ^{∗∗}_− := Σ_{j≥2} s_j(ζ̃^j_−),    ζ^{∗∗}_+ := Σ_{j≥2} s_j(ζ̃^j_+).    (8.3.10)
These are the random point measures on C that one obtains by collecting all the excursions
from S to U^c of the trajectories from the support of µ_{S,u−} (resp., µ_{S,u+}) that come back to S
after leaving U at least one more time. To simplify notation a bit, we also define

ζ^∗_− := ζ̃^1_−,    ζ^∗_+ := ζ̃^1_+.    (8.3.11)

These are the random point measures on C that one obtains by collecting the unique excursions
from S to U^c of the trajectories from the support of µ_{S,u−} (resp., µ_{S,u+}) that never come back
to S after leaving U.

Finally, we define the Poisson point process

ζ^∗_{−,+} = ζ^∗_+ − ζ^∗_−,    (8.3.12)

and observe from (6.2.8), (8.3.3), (8.3.8), and (8.3.11) that

ζ^∗_− and ζ^∗_{−,+} are independent.    (8.3.13)
Property (8.3.13) will be crucially used in the proof of Theorem 8.9.
Remark 8.6. The decompositions (8.3.10) and (8.3.11) will be crucial for understanding the
error term in (8.2.1). One should argue as follows. If S is far enough from U^c, then only a
small fraction of the trajectories from µ_{S,u−} (resp., µ_{S,u+}) will return to S after leaving U, i.e.,
ζ^{∗∗}_−(C) (resp., ζ^{∗∗}_+(C)) will be rather small. On the other hand, the laws of excursions from S to
U^c in the supports of ζ^{∗∗}_+ and ζ^∗_+ will be very similar, because a trajectory coming from U^c back
to S will more or less forget its past by the time it hits S. It is thus not unreasonable to expect
that the point measure ζ^{∗∗}_− is stochastically dominated by ζ^∗_{−,+} on an event of large probability,
if the difference u+ − u− is not too small and if S and U^c are sufficiently far apart. We will
make this intuitive argument precise in Chapter 11 and finish this chapter by showing how such
a coupling can be used to obtain (8.2.1) with a very specific error term.
8.4
Decorrelation via coupling
Here we state and prove the main result of Chapter 8, Theorem 8.9. It gives decorrelation
inequalities of the form (8.2.1) in the case of increasing or decreasing events, subject to a specific
coupling of the point measures in (8.3.10) and (8.3.11), see (8.4.1). The main aim here is only
to show how one can use coupling of point measures to obtain decorrelation inequalities (8.2.1).
Later in Theorem 9.3 we will consider a special subclass of increasing (resp., decreasing) events,
for which we will be able to quantify the error term in (8.2.1). For a special choice of u− and
u+ , the error decays much faster than polynomial with the distance between K1 and K2 . In
Chapter 10, this will be used to show that u∗ ∈ (0, ∞).
We first introduce the important notion of a coupling of two random variables X and Y (not
necessarily defined on a common probability space).

Definition 8.7. A pair of random variables (X̂, Ŷ) defined on a common probability space is
called a coupling of X and Y, if X =^d X̂ and Y =^d Ŷ, with =^d denoting equality in distribution.

Remark 8.8. The art of coupling lies in finding the appropriate joint distribution of (X̂, Ŷ) which
allows us to compare the distributions of X and Y. For example, Definition 6.7 of (I^u)_{u>0} on
the probability space (Ω, A, P) gives a coupling of random subsets of Zd with the distributions
(3.1.2) such that the inclusion P[I^u ⊆ I^{u′}] = 1 holds for all u ≤ u′.

For two point measures µ1 and µ2 on the same state space, we say that µ1 ≤ µ2 if there exists
a point measure µ on the same state space as µ1 and µ2 such that µ2 = µ1 + µ.
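A minimal illustration of Definition 8.7 in the spirit of Remark 8.8: for u− ≤ u+, a Poisson(u+ · cap(K)) number of trajectories can be realized as a Poisson(u− · cap(K)) number plus an independent Poisson((u+ − u−) · cap(K)) "sprinkled" number, which gives a coupling with X̂ ≤ Ŷ almost surely. The sampler below uses a standard product method; all numerical values are illustrative:

```python
import math
import random

def poisson(lmbda, rng):
    """Sample a Poisson(lmbda) variable by Knuth's product method (fine for small lmbda)."""
    L, k, p = math.exp(-lmbda), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

rng = random.Random(7)
u_minus, u_plus, cap_K = 0.8, 1.3, 2.0   # illustrative levels and capacity
ys = []
for _ in range(2000):
    x = poisson(u_minus * cap_K, rng)                  # trajectories present at level u-
    y = x + poisson((u_plus - u_minus) * cap_K, rng)   # plus independent sprinkled ones
    assert x <= y                                      # the coupling inclusion holds surely
    ys.append(y)

# y has the Poisson(u+ * cap_K) law; check its empirical mean
assert abs(sum(ys) / len(ys) - u_plus * cap_K) < 0.15
```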
Theorem 8.9. Let 0 < u− < u+, let K1, K2 ⊂⊂ Zd be such that K1 ∩ K2 = ∅, and let
S1, S2, U1, U2 ⊂⊂ Zd satisfy (8.3.1) and (8.3.2). For i ∈ {1, 2}, let A^in_i ∈ σ(Ψz, z ∈ Ki) be
increasing and A^de_i ∈ σ(Ψz, z ∈ Ki) decreasing.

Let ǫ = ǫ(u−, u+, S1, S2, U1, U2) be such that there exists a coupling (ζ̂^{∗∗}_−, ζ̂^∗_{−,+}) of the point
measures ζ^{∗∗}_− and ζ^∗_{−,+} on some probability space (Ω̂, Â, P̂) satisfying

P̂[ζ̂^{∗∗}_− ≤ ζ̂^∗_{−,+}] ≥ 1 − ǫ.    (8.4.1)

Then

P[{I^{u−} ∈ A^in_1} ∩ {I^{u−} ∈ A^in_2}] ≤ P[I^{u+} ∈ A^in_1] · P[I^{u+} ∈ A^in_2] + ǫ,    (8.4.2)

P[{I^{u+} ∈ A^de_1} ∩ {I^{u+} ∈ A^de_2}] ≤ P[I^{u−} ∈ A^de_1] · P[I^{u−} ∈ A^de_2] + ǫ.    (8.4.3)
Remark 8.10. A useful implication of Theorem 8.9 is that the information about the specific
events A^in_i, A^de_i, i ∈ {1, 2}, is not present in the error term ǫ, since the coupling will only depend
on the "geometry" of the sets S1, S2, U1, U2. Given K1 and K2, we also have freedom in
choosing S1, S2, U1, U2 satisfying the constraints (8.3.1) and (8.3.2), and later we will choose
them in a very special way (see (11.1.2), (11.1.3), (11.1.4)) to obtain a satisfactorily small error
ǫ = ǫ(u−, u+, K1, K2) as in (8.2.1).
For the state of the art in available error terms for decorrelation inequalities of form (8.4.2) and
(8.4.3), see Section 8.5.
Proof of Theorem 8.9. We begin by introducing the random subsets of S which emerge as the
ranges of trajectories in the support of the measures defined in (8.3.10) and (8.3.11). In analogy
with (6.2.9), we define for κ ∈ {−, +},

I^∗_κ = S ∩ (⋃_{w ∈ Supp(ζ^∗_κ)} range(w)),    I^{∗∗}_κ = S ∩ (⋃_{w ∈ Supp(ζ^{∗∗}_κ)} range(w)).    (8.4.4)
Note that

I^{u−} ∩ S = (I^∗_− ∪ I^{∗∗}_−) ∩ S,    I^{u+} ∩ S = (I^∗_+ ∪ I^{∗∗}_+) ∩ S.    (8.4.5)

By (8.3.9),

the pair (I^∗_−, I^∗_+) is independent from (I^{∗∗}_−, I^{∗∗}_+).    (8.4.6)

Moreover, since U1 ∩ U2 = ∅, the supports of the measures 1{H_{S1} < ∞} ζ^∗_κ and 1{H_{S2} < ∞} ζ^∗_κ
are disjoint, where κ ∈ {−, +}. Indeed, if a trajectory enters both S1 and S2, it must exit from
U in between these entries. Thus, the pairs

(1{H_{S1} < ∞} ζ^∗_−, 1{H_{S1} < ∞} ζ^∗_+) and (1{H_{S2} < ∞} ζ^∗_−, 1{H_{S2} < ∞} ζ^∗_+) are independent.    (8.4.7)

This implies that the pairs of sets

(I^∗_− ∩ S1, I^∗_+ ∩ S1) and (I^∗_− ∩ S2, I^∗_+ ∩ S2) are independent.    (8.4.8)

Note that (8.4.8) implies that for any pair of events Ai ∈ σ(Ψz, z ∈ Ki), i ∈ {1, 2}, and any
κ ∈ {−, +},

P[I^∗_κ ∈ A1 ∩ A2] = P[I^∗_κ ∈ A1] · P[I^∗_κ ∈ A2].
Next, we show using (8.4.1) that there exists a coupling (Ī^{u−}, Ī^∗_+) of I^{u−} ∩ S and I^∗_+ on some
probability space (Ω̂, Â, P̂) satisfying

P̂[Ī^{u−} ⊆ Ī^∗_+] ≥ 1 − ǫ.    (8.4.9)

Indeed, (8.3.9), (8.3.10), and (8.3.13) imply the independence of ζ^∗_− and (ζ^{∗∗}_−, ζ^∗_{−,+}), so we can
extend the probability space (Ω̂, Â, P̂) from the statement of Theorem 8.9 by introducing a
random point measure ζ̂^∗_− which is independent from everything else and has the same distribution
as ζ^∗_−. This way we obtain that the point measure ζ̂^∗_− + ζ̂^{∗∗}_− has the same distribution as
ζ^∗_− + ζ^{∗∗}_−, and the point measure ζ̂^∗_− + ζ̂^∗_{−,+} has the same distribution as ζ^∗_+. We can then
define

Ī^{u−} = S ∩ (⋃_{w ∈ Supp(ζ̂^∗_− + ζ̂^{∗∗}_−)} range(w)),    Ī^∗_+ = S ∩ (⋃_{w ∈ Supp(ζ̂^∗_− + ζ̂^∗_{−,+})} range(w)).

By (8.4.5), Ī^{u−} has the same distribution as I^{u−} ∩ S, and by (8.4.4), Ī^∗_+ has the same distribution
as I^∗_+. Moreover, if ζ̂^{∗∗}_− ≤ ζ̂^∗_{−,+}, then Ī^{u−} ⊆ Ī^∗_+. Thus, (8.4.9) follows from (8.4.1).

Using the notation from (8.0.1), we observe that for any random subset J of Zd,

{J ∈ Ai} = {J ∩ Si ∈ Ai} = {J ∩ Ki ∈ Ai}.    (8.4.10)
Now we have all the ingredients to prove (8.4.2) and (8.4.3).
We first prove (8.4.2). Let A^in_i be increasing events as in the statement of Theorem 8.9. Using
(8.4.5) and (8.4.9), we compute

P[{I^{u−} ∈ A^in_1} ∩ {I^{u−} ∈ A^in_2}] = P̂[{Ī^{u−} ∈ A^in_1} ∩ {Ī^{u−} ∈ A^in_2}]
    ≤ P̂[{Ī^∗_+ ∈ A^in_1} ∩ {Ī^∗_+ ∈ A^in_2}] + ǫ    (by (8.4.9), since the A^in_i are increasing)
    = P[{I^∗_+ ∈ A^in_1} ∩ {I^∗_+ ∈ A^in_2}] + ǫ
    = P[{I^∗_+ ∩ S1 ∈ A^in_1} ∩ {I^∗_+ ∩ S2 ∈ A^in_2}] + ǫ    (by (8.4.10))
    = P[I^∗_+ ∩ S1 ∈ A^in_1] · P[I^∗_+ ∩ S2 ∈ A^in_2] + ǫ    (by (8.4.8))
    ≤ P[I^{u+} ∈ A^in_1] · P[I^{u+} ∈ A^in_2] + ǫ.    (by (8.4.5))
The proof of (8.4.2) is complete.
We proceed with the proof of (8.4.3). Let A^de_i be decreasing events as in the statement of the
theorem. Using (8.4.5), we obtain

P[{I^{u+} ∈ A^de_1} ∩ {I^{u+} ∈ A^de_2}] = P[{I^∗_+ ∪ I^{∗∗}_+ ∈ A^de_1} ∩ {I^∗_+ ∪ I^{∗∗}_+ ∈ A^de_2}]    (by (8.4.5))
    ≤ P[{(I^∗_+ ∪ I^{∗∗}_+) ∩ S1 ∈ A^de_1} ∩ {I^∗_+ ∩ S2 ∈ A^de_2}]
    = P[(I^∗_+ ∪ I^{∗∗}_+) ∩ S1 ∈ A^de_1] · P[I^∗_+ ∩ S2 ∈ A^de_2]    (by (8.4.6), (8.4.8))
    = P[I^{u+} ∈ A^de_1] · P[I^∗_+ ∈ A^de_2] ≤ P[I^{u−} ∈ A^de_1] · P[I^∗_+ ∈ A^de_2].    (by (8.4.5))

Using the coupling (8.4.9), we compute

P[I^∗_+ ∈ A^de_2] = P̂[Ī^∗_+ ∈ A^de_2] ≤ P̂[Ī^{u−} ∈ A^de_2] + ǫ = P[I^{u−} ∈ A^de_2] + ǫ.    (by (8.4.9))

Thus,

P[{I^{u+} ∈ A^de_1} ∩ {I^{u+} ∈ A^de_2}] ≤ P[I^{u−} ∈ A^de_1] · (P[I^{u−} ∈ A^de_2] + ǫ)
    ≤ P[I^{u−} ∈ A^de_1] · P[I^{u−} ∈ A^de_2] + ǫ,

and the proof of (8.4.3) is complete.
8.5
Notes
Claim 8.1 is a reformulation of [S10, (2.8)-(2.15)] and the content of Sections 8.3 and 8.4 (as well
as of Chapters 9 and 11) is an adaptation of the main result of [S12a]. Note that the decoupling
inequalities of [S12a] are formulated in a general setting where the underlying graph is of form
G × Z (with some assumptions on G), while we only consider the special case G = Zd−1 .
Decorrelation results for random interlacements are substantial ingredients of the theory, and
despite the fact that the model exhibits polynomial decay of correlations, better and better
results are produced in this direction. Currently, the best decoupling inequalities available are
[PT13, Theorem 1.1], where the method of soft local times is introduced to prove that (8.4.2)
and (8.4.3) hold with error term

ǫ(u−, u+, K1, K2) = C · (r + s)^d · exp(−c · ((u+ − u−)²/u+) · s^{d−2}),

where r = min(diam(K1), diam(K2)), diam(K) = max_{x,y∈K} |x − y|, and s = d(K1, K2).
While our aim in this chapter was to decorrelate, we should mention that if A_1, A_2 ∈ F are both increasing or both decreasing events, then they are positively correlated, i.e.,

P[I^u ∈ A_1 ∩ A_2] ≥ P[I^u ∈ A_1] · P[I^u ∈ A_2].

This fact follows from the Harris-FKG inequality for the law of I^u, see [T09b, Theorem 3.1].
Chapter 9
Decoupling inequalities
In this chapter we define subclasses of increasing and decreasing events for which the decorrelation inequalities (8.4.2) and (8.4.3) hold with rapidly decaying error term, see Section 9.1
and Theorem 9.3. The definition of these events involves a tree-like hierarchical construction
on multiple scales Ln . In Section 9.2 we state the decoupling inequalities for a large number
of local events, see Theorem 9.5. We prove Theorem 9.5 in Section 9.3 by iteratively applying
Theorem 9.3. Finally, in Section 9.4, using Theorem 9.5 we prove that if the density of certain
patterns in I u is small, then it is very unlikely that such patterns will be observed in I u(1±δ)
along a long path, see Theorem 9.7. This last result will be crucially used in proving that
u∗ ∈ (0, ∞) in Chapter 10.
9.1
Hierarchical events
For n ≥ 0, let T^{(n)} = {1,2}^n (in particular, T^{(0)} = {∅} and |T^{(n)}| = 2^n), and denote by

T_n = ∪_{k=0}^{n} T^{(k)}

the dyadic tree of depth n. If 0 ≤ k < n and m ∈ T^{(k)}, m = (ξ_1, ..., ξ_k), then we denote by

m1 = (ξ_1, ..., ξ_k, 1)  and  m2 = (ξ_1, ..., ξ_k, 2)

the two children of m in T^{(k+1)}. We call T^{(n)} the set of leaves of T_n. The vertices 1, 2 ∈ T^{(1)} are the children of the root ∅.
Consider the measurable space ({0,1}^{Z^d}, F) introduced in Definition 3.1. For any family of events (G_x)_{x∈Z^d} such that G_x ∈ F for all x ∈ Z^d, any integer n ≥ 1, and any embedding T : T_n → Z^d, we define the event G_T ∈ F by

G_T = ∩_{m∈T^{(n)}} G_{T(m)}.  (9.1.1)

Note that if the (G_x)_{x∈Z^d} are all increasing (resp., decreasing), see Definition 3.12, then G_T is also increasing (resp., decreasing).

We denote by T_1 and T_2 the two embeddings of T_{n−1} which arise from T as the embeddings of the descendants of the two children of the root, i.e.,

∀ 0 ≤ k ≤ n − 1, ∀ m = (ξ_1, ..., ξ_k) ∈ T^{(k)}:  T_1(m) = T(1, ξ_1, ..., ξ_k),  T_2(m) = T(2, ξ_1, ..., ξ_k).
Note that
GT = GT1 ∩ GT2 .
Let L_0 ≥ 1 and l_0 ≥ 2000 be integers and consider the sequence

L_n := L_0 · l_0^n,  n ≥ 1,  (9.1.2)

of geometrically growing scales. For n ≥ 0, we denote by 𝕃_n the renormalized lattice L_n Z^d:

𝕃_n = L_n Z^d.  (9.1.3)

For any n ≥ 1 and x ∈ 𝕃_n, we define

Λ_{x,n} := 𝕃_{n−1} ∩ (x + [−L_n, L_n]^d).  (9.1.4)

Note that for any L_0, l_0, n ≥ 1, and x ∈ 𝕃_n,

|Λ_{x,n}| = (2l_0 + 1)^d.  (9.1.5)
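The count (9.1.5) is elementary: the points of 𝕃_{n−1} in x + [−L_n, L_n]^d form a grid with 2l_0 + 1 points per coordinate direction. The following is a quick numerical sanity check of this count, with small illustrative parameters (the text requires l_0 ≥ 2000, but the count holds for any l_0 ≥ 1):

```python
def lattice_points_in_box(L0, l0, n, d):
    """Count the points of the lattice L_{n-1} Z^d in [-L_n, L_n]^d."""
    Ln = L0 * l0 ** n             # L_n = L_0 * l_0^n, cf. (9.1.2)
    spacing = L0 * l0 ** (n - 1)  # spacing of the lattice L_{n-1} Z^d
    # one coordinate direction: multiples of `spacing` in [-L_n, L_n]
    per_axis = len([k for k in range(-Ln, Ln + 1) if k % spacing == 0])
    return per_axis ** d          # the box is a product of d such sections

# |Λ_{x,n}| = (2*l0 + 1)^d, as in (9.1.5)
for L0, l0, n, d in [(5, 4, 2, 3), (2, 3, 1, 2)]:
    assert lattice_points_in_box(L0, l0, n, d) == (2 * l0 + 1) ** d
```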
Definition 9.1. We say that T : T_n → Z^d is a proper embedding of T_n with root at x ∈ 𝕃_n if

(a) T(∅) = x, i.e., the root of the tree T_n is mapped to x;

(b) for all 0 ≤ k ≤ n and m ∈ T^{(k)} we have T(m) ∈ 𝕃_{n−k};

(c) for all 0 ≤ k < n and m ∈ T^{(k)} we have

T(m1), T(m2) ∈ Λ_{T(m),n−k}  and  |T(m1) − T(m2)| > L_{n−k}/100.  (9.1.6)

We denote by Λ̃_{x,n} the set of proper embeddings of T_n with root at x.
For an illustration of Definition 9.1, see Figure 9.1.
Exercise 9.2. Using (9.1.5) and induction on n, show that

|Λ̃_{x,n}| ≤ ((2l_0 + 1)^d)^{2+4+···+2^n} ≤ (2l_0 + 1)^{d·2^{n+1}}.  (9.1.7)
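One possible route through the induction step of Exercise 9.2 (a sketch, not a full solution): a proper embedding of T_n with root at x is determined by the images of the two children of the root, which by Definition 9.1 lie in Λ_{x,n}, together with proper embeddings of the two subtrees of depth n − 1 rooted at those images. This gives, with (9.1.5),

```latex
\[
  |\widetilde{\Lambda}_{x,n}|
  \;\le\; |\Lambda_{x,n}|^{2}\cdot
          \max_{y\in\mathbb{L}_{n-1}}|\widetilde{\Lambda}_{y,n-1}|^{2}
  \;\overset{(9.1.5)}{\le}\;
  \bigl((2l_0+1)^{d}\bigr)^{2}\cdot
  \Bigl(\bigl((2l_0+1)^{d}\bigr)^{2+4+\cdots+2^{n-1}}\Bigr)^{2},
\]
```

so the exponent satisfies 2 + 2·(2 + 4 + ··· + 2^{n−1}) = 2 + 4 + ··· + 2^n, and the second inequality in (9.1.7) follows from 2 + 4 + ··· + 2^n = 2^{n+1} − 2 ≤ 2^{n+1}.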
The next theorem states that for any proper embedding T ∈ Λ̃_{x,n+1} and for any choice of local increasing (resp., decreasing) events (G_x)_{x∈Z^d}, the events G_{T_1} and G_{T_2} satisfy the decorrelation inequality (8.4.2) (resp., (8.4.3)) with a rapidly decaying (in L_n) error term.
Theorem 9.3. Let d ≥ 3. There exist constants D_{l_0} = D_{l_0}(d) < ∞ and D_u = D_u(d) < ∞ such that for all n ≥ 0, L_0 ≥ 1, l_0 ≥ D_{l_0} a multiple of 100, x ∈ 𝕃_{n+1}, T ∈ Λ̃_{x,n+1}, all u_- > 0 and

u_+ = (1 + D_u · (n + 1)^{−3/2} · l_0^{−(d−2)/4}) · u_-,  (9.1.8)

the following inequalities hold:

(a) for any increasing events G_x ∈ σ(Ψ_z, z ∈ x + [−2L_0, 2L_0]^d), x ∈ Z^d,

P[I^{u_-} ∈ G_T] = P[I^{u_-} ∈ G_{T_1} ∩ G_{T_2}] ≤ P[I^{u_+} ∈ G_{T_1}] · P[I^{u_+} ∈ G_{T_2}] + ε(u_-, n),  (9.1.9)
Figure 9.1: An illustration of a proper embedding T : T_3 → Z^d, see Definition 9.1. The boxes represent Λ_{T(m),3−n}, where m ∈ T^{(n)}, 0 ≤ n ≤ 3. Note that the embedded images of the eight great-grandchildren of the root are well "spread-out" in space.
(b) for any decreasing events G_x ∈ σ(Ψ_z, z ∈ x + [−2L_0, 2L_0]^d), x ∈ Z^d,

P[I^{u_+} ∈ G_T] = P[I^{u_+} ∈ G_{T_1} ∩ G_{T_2}] ≤ P[I^{u_-} ∈ G_{T_1}] · P[I^{u_-} ∈ G_{T_2}] + ε(u_-, n),  (9.1.10)

where the error function ε(u, n) is defined by

ε(u, n) = 2 exp(−2 · u · (n + 1)^{−3} · L_n^{d−2} · l_0^{(d−2)/2}).  (9.1.11)
Theorem 9.3 will follow from Theorem 8.9 as soon as we show that there exists a coupling of ζ^{**}_- and ζ^*_{-,+} satisfying (8.4.1) with ε = ε(u_-, n). To this end, we will choose the sets appearing in (8.3.1) and (8.3.2) as K_i = ∪_{m∈T^{(n)}} B(T_i(m), 2L_0), i.e., the union of all the boxes of radius 2L_0 centered at the images of the leaves of T^{(n)} under T_i, and U_i = B(T(i), L_{n+1}/1000), i.e., the large box centered at T(i) containing K_i; most importantly, we will choose S in a very delicate way, see (11.1.3) and (11.1.4). Since the proof is quite technical, we postpone it to Chapter 11. Instead, in the next section we deduce a useful corollary from Theorem 9.3 by iterating the inequalities (9.1.9) and (9.1.10), see Theorem 9.5.
9.2 Decoupling inequalities
For a family of events (G_x)_{x∈Z^d} with G_x ∈ F for all x ∈ Z^d, we define recursively the events

G_{x,n} := G_x, for n = 0 and x ∈ 𝕃_0, and
G_{x,n} := ∪_{x_1,x_2 ∈ Λ_{x,n}, |x_1−x_2| > L_n/100} (G_{x_1,n−1} ∩ G_{x_2,n−1}), for n ≥ 1 and x ∈ 𝕃_n.  (9.2.1)

In words, for n ≥ 1, the event G_{x,n} occurs if there exists a pair x_1, x_2 ∈ Λ_{x,n} of well-separated vertices on the scale L_n (namely, |x_1 − x_2| > L_n/100) such that the events G_{x_1,n−1} and G_{x_2,n−1} occur.
Basic properties of the events G_{x,n} are listed in the next exercise.

Exercise 9.4. Prove by induction on n that for all n ≥ 0 and x ∈ 𝕃_n,

(a) if the measurability assumptions on G_x as in Theorem 9.3 are fulfilled, then the event G_{x,n} is measurable with respect to the sigma-algebra σ(Ψ_z, z ∈ x + [−2L_n, 2L_n]^d);

(b) if the events (G_x)_{x∈Z^d} are increasing (resp., decreasing), then G_{x,n} is also increasing (resp., decreasing);

(c) the events defined in (9.1.1) and (9.2.1) are related via

G_{x,n} = ∪_{T∈Λ̃_{x,n}} G_T = ∪_{T∈Λ̃_{x,n}} ∩_{m∈T^{(n)}} G_{T(m)};  (9.2.2)

(d) if the events (G_x)_{x∈𝕃_0} are shift invariant, i.e., ξ ∈ G_x if and only if ξ(· − x) ∈ G_0 for all x ∈ 𝕃_0, then the events G_{x,n} are also shift invariant, i.e., ξ ∈ G_{x,n} if and only if ξ(· − x) ∈ G_{0,n} for all x ∈ 𝕃_n.
To state the main result of this section (see Theorem 9.5) we need some more notation. Let

f(l_0) = ∏_{k≥0} (1 + D_u (k + 1)^{−3/2} · l_0^{−(d−2)/4}),  (9.2.3)

where D_u is defined in the statement of Theorem 9.3. Note that

1 ≤ f(l_0) ≤ exp(D_u l_0^{−(d−2)/4} Σ_{k≥0} (k + 1)^{−3/2}) < +∞  and  lim_{l_0→∞} f(l_0) = 1.  (9.2.4)

For u > 0, we define

u_∞^+ := u · f(l_0),  u_∞^- := u · (1/f(l_0)).  (9.2.5)

By (9.2.4), we have

0 < u_∞^- < u < u_∞^+ < +∞  and  lim_{l_0→∞} u_∞^- = lim_{l_0→∞} u_∞^+ = u.  (9.2.6)
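A numerical illustration of (9.2.3)–(9.2.6). The constant D_u depends only on d and is not computed explicitly in the text, so the value below is an arbitrary placeholder; the qualitative behavior — f(l_0) is finite, decreases as l_0 grows, and u_∞^- < u < u_∞^+ — does not depend on it:

```python
import math

def f(l0, d=3, Du=1.0, terms=10000):
    """Truncation of the infinite product (9.2.3); Du is a placeholder."""
    prod = 1.0
    for k in range(terms):
        prod *= 1.0 + Du * (k + 1) ** (-1.5) * l0 ** (-(d - 2) / 4)
    return prod

u = 2.0
for l0 in (2000, 10**6):
    fl = f(l0)
    assert 1.0 < fl < math.inf       # the product converges
    assert u / fl < u < u * fl       # u_inf^- < u < u_inf^+, cf. (9.2.6)
assert f(10**6) < f(2000)            # f(l0) decreases towards 1 as l0 grows
```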
Theorem 9.5 (Decoupling inequalities). Let d ≥ 3. Take the constants D_{l_0} = D_{l_0}(d) < ∞ and D_u = D_u(d) < ∞ as in the statement of Theorem 9.3. Then for all n ≥ 0, L_0 ≥ 1, l_0 ≥ D_{l_0} a multiple of 100, and u > 0, the following inequalities hold:

(a) for any increasing shift invariant events G_x ∈ σ(Ψ_z, z ∈ x + [−2L_0, 2L_0]^d), x ∈ Z^d,

P[I^{u_∞^-} ∈ G_{0,n}] ≤ (2l_0 + 1)^{d·2^{n+1}} · (P[I^u ∈ G_0] + ε(u_∞^-, l_0, L_0))^{2^n},  (9.2.7)

(b) for any decreasing shift invariant events G_x ∈ σ(Ψ_z, z ∈ x + [−2L_0, 2L_0]^d), x ∈ Z^d,

P[I^{u_∞^+} ∈ G_{0,n}] ≤ (2l_0 + 1)^{d·2^{n+1}} · (P[I^u ∈ G_0] + ε(u, l_0, L_0))^{2^n},  (9.2.8)

where

ε(u, l_0, L_0) := 2e^{−u L_0^{d−2} l_0^{(d−2)/2}} / (1 − e^{−u L_0^{d−2} l_0^{(d−2)/2}})  (9.2.9)

(note that R_+ ∋ x ↦ 2e^{−x}/(1 − e^{−x}) is decreasing).
Remark 9.6. (a) By Exercise 9.4(d) and the translation invariance of I^u, the inequalities (9.2.7) and (9.2.8) also hold with G_{0,n} replaced by any G_{x,n}, x ∈ 𝕃_n.

(b) The factor (2l_0 + 1)^{d·2^{n+1}} on the RHS of the inequalities (9.2.7) and (9.2.8) comes from (9.1.7). It stems from the combinatorial complexity when we apply the union bound to all combinations of intersections appearing in the representation (9.2.2) of G_{0,n}.

(c) A typical application of Theorem 9.5 would be to get the bounds

P[I^{u_∞^-} ∈ G_{0,n}] ≤ e^{−2^n},  P[I^{u_∞^+} ∈ G_{0,n}] ≤ e^{−2^n}

by tuning the parameters u and L_0 in such a way that for a given l_0,

(2l_0 + 1)^{2d} · (P[I^u ∈ G_0] + ε(u_∞^-, l_0, L_0)) ≤ e^{−1}.  (9.2.10)

Sometimes it is also desirable to take u_∞^- (resp., u_∞^+) sufficiently close to u; then we should first take l_0 large enough (according to (9.2.6)), and then tune u and L_0 to fulfil (9.2.10).

(d) The main application of Theorem 9.5 that we consider in these notes is summarized in Theorem 9.7, which will later be used to prove Propositions 10.2, 10.3, and 10.5.
9.3 Proof of Theorem 9.5
We deduce Theorem 9.5 from Theorem 9.3. The proofs of (9.2.7) and (9.2.8) are essentially the same: (9.2.7) follows from the repeated application of (9.1.9), and (9.2.8) from the repeated application of (9.1.10). We only give the proof of (9.2.8) here and leave the proof of (9.2.7) as an exercise to the reader.
Let G_x ∈ σ(Ψ_z, z ∈ x + [−2L_0, 2L_0]^d), x ∈ Z^d, be a family of decreasing shift invariant events. Fix u > 0 and define the increasing sequence u_n^+ by u_0^+ = u and

u_n^+ = u_{n−1}^+ · (1 + D_u · n^{−3/2} · l_0^{−(d−2)/4}) = u_0^+ · ∏_{k=0}^{n−1} (1 + D_u · (k + 1)^{−3/2} · l_0^{−(d−2)/4}),  n ≥ 1.

By (9.2.3) and (9.2.5),

u ≤ u_n^+ ≤ u_∞^+,  lim_{n→∞} u_n^+ = u_∞^+.
We begin by rewriting the LHS of (9.2.8) as

P[I^{u_∞^+} ∈ G_{0,n}] ≤ P[I^{u_n^+} ∈ G_{0,n}] =^{(9.2.2)} P[I^{u_n^+} ∈ ∪_{T∈Λ̃_{0,n}} ∩_{m∈T^{(n)}} G_{T(m)}]
    ≤^{(9.1.1)} Σ_{T∈Λ̃_{0,n}} P[I^{u_n^+} ∈ G_T] ≤^{(9.1.7)} (2l_0 + 1)^{d·2^{n+1}} · max_{T∈Λ̃_{0,n}} P[I^{u_n^+} ∈ G_T].

Thus, to finish the proof of (9.2.8) it suffices to show that for any T ∈ Λ̃_{0,n},

P[I^{u_n^+} ∈ G_T] ≤ (P[I^{u_0^+} ∈ G_0] + ε(u_0^+, l_0, L_0))^{2^n}.  (9.3.1)
Recall the definition of ε(u, n) from (9.1.11), and introduce the increasing sequence

ε_0^+ = 0,  ε_n^+ = ε_{n−1}^+ + ε(u_{n−1}^+, n − 1)^{2^{−n}} = Σ_{k=0}^{n−1} ε(u_k^+, k)^{2^{−k−1}}.  (9.3.2)

Since l_0 ≥ 2000, see (9.1.2),

∀ k ≥ 0:  (l_0^{(d−2)/2})^k ≥ (k + 1)^4.  (9.3.3)

Thus, for any n ≥ 0,

ε_n^+ ≤ Σ_{k=0}^{∞} ε(u_k^+, k)^{2^{−k−1}} ≤^{(9.1.11)} Σ_{k=0}^{∞} ε(u_0^+, k)^{2^{−k−1}}
    ≤^{(9.1.2),(9.1.11),(9.3.3)} 2 Σ_{k=0}^{∞} (e^{−u_0^+ · L_0^{d−2} · l_0^{(d−2)/2}})^{k+1} =^{(9.2.9)} ε(u_0^+, l_0, L_0).
Therefore, in order to establish (9.3.1) it suffices to prove that for all n ≥ 0 and T ∈ Λ̃_{0,n},

P[I^{u_n^+} ∈ G_T] ≤ (P[I^{u_0^+} ∈ G_0] + ε_n^+)^{2^n}.  (9.3.4)

We prove (9.3.4) by induction on n. The statement is trivial for n = 0, since in this case both sides of the inequality are equal to P[I^{u_0^+} ∈ G_0]. Now we prove the general induction step using Theorem 9.3:

P[I^{u_n^+} ∈ G_T] = P[I^{u_n^+} ∈ G_{T_1} ∩ G_{T_2}]
    ≤^{(9.1.10)} P[I^{u_{n−1}^+} ∈ G_{T_1}] · P[I^{u_{n−1}^+} ∈ G_{T_2}] + ε(u_{n−1}^+, n − 1)
    ≤^{(9.3.4)} (P[I^{u_0^+} ∈ G_0] + ε_{n−1}^+)^{2^n} + ε(u_{n−1}^+, n − 1)
    =^{(9.3.2)} (P[I^{u_0^+} ∈ G_0] + ε_{n−1}^+)^{2^n} + (ε_n^+ − ε_{n−1}^+)^{2^n}
    ≤ (P[I^{u_0^+} ∈ G_0] + ε_n^+)^{2^n},

where in the last step we use the inequality a^m + b^m ≤ (a + b)^m, which holds for any a, b > 0 and m ∈ N. The proof of (9.2.8) is complete.
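The elementary inequality invoked in the last step is a direct consequence of the binomial theorem: for a, b > 0 and m ∈ N, every summand in the expansion is nonnegative, and a^m and b^m are two of them:

```latex
\[
  (a+b)^{m} \;=\; \sum_{j=0}^{m}\binom{m}{j}\,a^{j}b^{m-j}
            \;\ge\; a^{m} + b^{m}.
\]
```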
9.4 Long ∗-paths of unlikely events are very unlikely
In this section we discuss one of the applications of Theorem 9.5, which will be essential in the proof of u_* ∈ (0, ∞) in Chapter 10; more specifically, it will be used three times, in the proofs of Propositions 10.2, 10.3, and 10.5.

Let L_0 ≥ 1 and 𝕃_0 = L_0 · Z^d. For N ≥ 1, x ∈ 𝕃_0, and a family of events G = (G_y)_{y∈Z^d} in F, consider the event

A(x, N, G) := { there exist z(0), ..., z(m) ∈ 𝕃_0 such that z(0) = x, |z(m) − x| > N, |z(i) − z(i − 1)| = L_0 for i ∈ {1, ..., m}, and G_{z(i)} occurs for all i ∈ {0, ..., m} }.  (9.4.1)
An 𝕃_0-valued sequence z(0), ..., z(m) with the property |z(i) − z(i − 1)| = L_0 for all i ∈ {1, ..., m} is usually called a ∗-path in 𝕃_0. Thus, if A(x, N, G) occurs, then there exists a ∗-path z(0), ..., z(m) in 𝕃_0 from x to B(x, N)^c such that the event G_{z(i)} occurs at every vertex z(i) of the ∗-path.

The following theorem states that for shift invariant (G_y)_{y∈Z^d}, if the probability to observe the "pattern" G_0 in I^u is reasonably small, then for any δ ∈ (0, 1) the chance is very small to observe a long ∗-path z(0), ..., z(m) in 𝕃_0 such that the pattern G_{z(i)} is observed around every point of this path in I^{u(1−δ)} or I^{u(1+δ)} (depending on whether the G_y are increasing or decreasing, respectively).
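For concreteness, if |·| denotes the sup-norm (the usual convention when speaking of ∗-paths; this is an assumption made here for illustration, not stated explicitly above), then every vertex of 𝕃_0 has exactly 3^d − 1 vertices at distance L_0, which is what makes a ∗-path strictly more permissive than a nearest neighbor path:

```python
from itertools import product

def star_neighbors(x, L0):
    """Vertices of L0*Z^d at sup-norm distance L0 from x (assumed norm)."""
    d = len(x)
    return [tuple(xi + L0 * e for xi, e in zip(x, eps))
            for eps in product((-1, 0, 1), repeat=d)
            if any(eps)]  # exclude the zero offset, i.e. x itself

for d in (2, 3):
    nbrs = star_neighbors((0,) * d, L0=5)
    assert len(nbrs) == 3 ** d - 1                         # 8 in d=2, 26 in d=3
    assert all(max(abs(c) for c in p) == 5 for p in nbrs)  # all at distance L0
```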
Theorem 9.7. Let d ≥ 3 and D_{l_0} the constant defined in Theorem 9.3. Take L_0 ≥ 1, and consider shift invariant events G_y ∈ σ(Ψ_z, z ∈ y + [−2L_0, 2L_0]^d), y ∈ Z^d. Let δ ∈ (0, 1), u > 0, and l_0 ≥ D_{l_0} a multiple of 100. If the following conditions are satisfied,

f(l_0) < 1 + δ,  (2l_0 + 1)^{2d} · (P[I^u ∈ G_0] + ε(u(1 − δ), l_0, L_0)) ≤ 1/e,  (9.4.2)

where f is defined in (9.2.3) and ε in (9.2.9), then there exists C′ = C′(u, δ, l_0, L_0) < ∞ such that

(a) if the G_x are increasing, then for any N ≥ 1,

P[I^{u(1−δ)} ∈ A(0, N, G)] ≤ C′ · e^{−N^{1/C′}},  (9.4.3)

(b) if the G_x are decreasing, then for any N ≥ 1,

P[I^{u(1+δ)} ∈ A(0, N, G)] ≤ C′ · e^{−N^{1/C′}}.  (9.4.4)
Proof of Theorem 9.7. Fix L_0 ≥ 1, events (G_y)_{y∈Z^d}, u > 0, δ ∈ (0, 1), and l_0 ≥ D_{l_0} a multiple of 100 satisfying (9.4.2). We will apply Theorem 9.5. For this, recall the definition of the scales L_n from (9.1.2), the corresponding lattices 𝕃_n from (9.1.3), and the events G_{x,n}, x ∈ 𝕃_n, from (9.2.1). By (9.2.9),

ε(u, l_0, L_0) ≤ ε(u_∞^-, l_0, L_0) ≤ ε(u(1 − δ), l_0, L_0), for all L_0 ≥ 1.  (9.4.5)

Thus, by the second inequality in (9.4.2) and (9.4.5), the right hand sides of (9.2.7) and (9.2.8) are bounded from above, for every n ≥ 0, by e^{−2^n}.
By (9.2.5) and the first inequality in (9.4.2),

(1 − δ)u ≤ (1/(1 + δ)) u ≤ (1/f(l_0)) u = u_∞^- ≤ u_∞^+ = f(l_0) u ≤ (1 + δ)u.  (9.4.6)

Thus, Theorem 9.5 and (9.4.6) imply that for any n ≥ 0,

if the G_x are all increasing, then P[I^{u(1−δ)} ∈ G_{0,n}] ≤ e^{−2^n},
if the G_x are all decreasing, then P[I^{u(1+δ)} ∈ G_{0,n}] ≤ e^{−2^n}.  (9.4.7)
To finish the proof of Theorem 9.7, it suffices to show that

for any n ≥ 0 such that L_n < N,  A(0, N, G) ⊆ G_{0,n}.  (9.4.8)
Indeed, if (9.4.8) is settled, we take n such that L_n < N ≤ L_{n+1}, and obtain, with u′ = u(1 − δ) if G_0 is increasing and u′ = u(1 + δ) if G_0 is decreasing, that

P[I^{u′} ∈ A(0, N, G)] ≤^{(9.4.8)} P[I^{u′} ∈ G_{0,n}] ≤^{(9.4.7)} e^{−2^n}.  (9.4.9)

Since L_{n+1} ≥ N, we obtain from (9.1.2) that

n ≥ (log N − log L_0)/(log l_0) − 1 = (log N − log(L_0 l_0))/(log l_0).  (9.4.10)
On the one hand, we take C′ such that

(log C′)^{C′} ≥ (L_0 l_0)^2,  or, equivalently,  C′ · e^{−((L_0 l_0)^2)^{1/C′}} ≥ 1.

This ensures that for any N ≤ (L_0 l_0)^2, the inequalities (9.4.3) and (9.4.4) are trivially satisfied. On the other hand, for any N ≥ (L_0 l_0)^2, by (9.4.10), n ≥ (log N)/(2 log l_0), and

2^n ≥ 2^{(log N)/(2 log l_0)} = N^{(log 2)/(2 log l_0)}.

Thus, if C′ satisfies C′ ≥ (2 log l_0)/(log 2), then by (9.4.9), the inequalities (9.4.3) and (9.4.4) hold also for all N ≥ (L_0 l_0)^2.
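The two requirements on C′ can also be checked numerically; the sketch below (with arbitrary illustrative values of L_0 and l_0) enforces (log C′)^{C′} ≥ (L_0 l_0)^2 by doubling and then takes C′ at least 2 log l_0 / log 2, slightly enlarged to keep the inequalities strict:

```python
import math

L0, l0 = 5, 2000
Cp = 10.0
while math.log(Cp) ** Cp < (L0 * l0) ** 2:        # first condition on C'
    Cp *= 2
Cp = max(Cp, 2 * math.log(l0) / math.log(2) + 1)  # second condition on C'

# small-N regime: the claimed bound C' * exp(-N^{1/C'}) is trivially >= 1
for N in (1, 100, (L0 * l0) ** 2):
    assert Cp * math.exp(-(N ** (1 / Cp))) >= 1

# large-N regime: 2^n >= N^{log2/(2 log l0)} >= N^{1/C'}
for N in ((L0 * l0) ** 2, 10**12):
    assert N ** (math.log(2) / (2 * math.log(l0))) >= N ** (1 / Cp)
```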
It remains to prove (9.4.8). For n ≥ 0 and x ∈ 𝕃_n, consider the event

A_{x,n,G} := { there exist z(0), ..., z(m) ∈ 𝕃_0 such that |z(0) − x| ≤ L_n/2, |z(m) − x| > L_n, |z(i) − z(i − 1)| = L_0 for i ∈ {1, ..., m}, and G_{z(i)} occurs for all i ∈ {0, ..., m} }.

In words, if A_{x,n,G} occurs, then there exists a ∗-path z(0), ..., z(m) in 𝕃_0 from B(x, L_n/2) to B(x, L_n)^c such that the event G_{z(i)} occurs at every vertex z(i) of this path.
Note that for all n such that L_n < N, A(0, N, G) ⊆ A_{0,n,G}. Therefore, (9.4.8) will follow once we prove that for any n ≥ 0, A_{0,n,G} ⊆ G_{0,n}. This claim is satisfied for n = 0, since A_{x,0,G} ⊆ G_x for all x ∈ 𝕃_0. (Mind that 𝕃_0 ∩ B(x, L_0/2) = {x}.) Therefore, by (9.2.1), it suffices to prove that

∀ n ≥ 1, x ∈ 𝕃_n:  A_{x,n,G} ⊆ ∪_{x_1,x_2 ∈ Λ_{x,n}, |x_1−x_2| > L_n/100} (A_{x_1,n−1,G} ∩ A_{x_2,n−1,G}).  (9.4.11)

We now prove (9.4.11). By translation invariance, we assume x = 0 without loss of generality. If ξ ∈ A_{0,n,G}, then there is a ∗-path π = (z(0), ..., z(m)) in 𝕃_0 such that |z(0)| ≤ L_n/2, |z(m)| > L_n, and ξ ∈ G_{z(i)} for all i ∈ {0, ..., m}.
Choose x_1, x_2 ∈ 𝕃_{n−1} ∩ B(0, L_n) (= Λ_{0,n}, see (9.1.4)) such that

|x_1 − z(0)| ≤ (1/2) L_{n−1}  and  |x_2 − z(m − 1)| ≤ (1/2) L_{n−1}.  (9.4.12)

By (9.1.2), if l_0 ≥ 100, then

|x_1 − x_2| ≥^{(9.4.12)} |z(0) − z(m − 1)| − L_{n−1} ≥ (1/2) L_n − (1/l_0) L_n > L_n/100.  (9.4.13)

By (9.4.12), for i ∈ {1, 2}, an appropriate sub-∗-path π_i of π connects B(x_i, L_{n−1}/2) to B(x_i, L_{n−1})^c (see Figure 9.2), which implies that ξ ∈ A_{x_i,n−1,G}. Together with (9.4.13), this implies (9.4.11). The proof of (9.4.8) is complete, and so is the proof of Theorem 9.7.
Figure 9.2: An illustration of (9.4.11). The big annulus is B(x, L_n) \ B(x, L_n/2). π = (z(0), ..., z(m)) is a ∗-path in 𝕃_0 such that |z(0)| ≤ L_n/2 and |z(m)| > L_n, i.e., π crosses the big annulus. The vertices x_1, x_2 ∈ 𝕃_{n−1} ∩ B(0, L_n) are chosen such that |x_1 − z(0)| ≤ (1/2)L_{n−1} and |x_2 − z(m − 1)| ≤ (1/2)L_{n−1}. The ∗-paths π_i are sub-paths of π that cross the small annuli B(x_i, L_{n−1}) \ B(x_i, L_{n−1}/2) for i ∈ {1, 2}.
9.5 Notes
The decoupling inequalities serve as a powerful tool when it comes to handling the long-range
dependencies of random interlacements. Our Theorem 9.5 is an adaptation of [S12a, Theorem
3.4], which is formulated in a general setting where the underlying graph is of form G × Z (with
some assumptions on G).
The basic decoupling inequalities of Theorem 9.3 serve as a partial substitute for the absence of
a van den Berg-Kesten inequality, see [S12a, Remark 1.5 (3)] for further discussion.
Theorem 9.5 involves a renormalization scheme where the scales grow exponentially. Many results about random interlacements (and other strongly correlated random fields, such as the Gaussian free field) are proved using similar renormalization schemes. For example, the seminal paper [S10] already uses multi-scale renormalization to prove u_* < +∞, but the scales of the renormalization scheme of [S10, Section 3] grow faster than exponentially. The reason for this is that the decorrelation results derived in [S10, Section 3] were not yet as strong as the ones we used.
Our recursion (9.4.11) is a special case of the notion of cascading events, see [S12a, Section 3].
A version of decoupling inequalities that involve the local times of random interlacements is
available, see the Appendix of [DRS12a].
Theorem 9.7 is formulated for monotone events, but in fact similar results hold for convex events (i.e., intersections of increasing and decreasing events), see [RS11b, Section 5], and also in the context of the Gaussian free field, see [Rod12, Lemma 4.4].
The renormalization scheme of [DRS12b, Section 3] involves scales that grow faster than exponentially and pertains to monotone events, but it holds in a general set-up which can be applied to various percolation models with long-range correlations.
The renormalization method of [PT13, Section 8] allows one to deduce exponential decay (as opposed to the stretched exponential decay of Theorem 9.7) for any model that satisfies a certain strong form of decorrelation inequalities, see [PT13, Remark 3.4].
Chapter 10
Phase transition of V^u
In this chapter we give the main applications of Theorem 9.7. Recall from Section 7.1 the notion of the percolation threshold

u_* = inf{u ≥ 0 : P[0 ↔^{V^u} ∞] = 0} = sup{u ≥ 0 : P[0 ↔^{V^u} ∞] > 0}.  (10.0.1)
We have already seen in Theorem 7.2 that u∗ = u∗ (d) > 0 if the dimension of the underlying
lattice Zd is high enough. In fact, in the proof of Theorem 7.2 we showed that if d is large
enough, then P-almost surely even the restriction of V u to the plane Z2 × {0}d−2 contains an
infinite connected component. For the proof of Theorem 7.2, it is essential to assume that d
is large. It turns out that the conclusions of Theorem 9.7 are strong enough to imply that
u∗ ∈ (0, ∞) for any d ≥ 3. Proving this fact is the main aim of the current chapter.
Theorem 10.1. For any d ≥ 3, u_* ∈ (0, ∞).

The statement of Theorem 10.1 is equivalent to the statements that

(a) there exists u < ∞ such that P[0 ↔^{V^u} ∞] = 0,

(b) there exists u > 0 such that P[0 ↔^{V^u} ∞] > 0.
The statement (a) will follow from Proposition 10.2 in Section 10.1, where we show that for some u < ∞, the probability that there exists a nearest neighbor path in V^u from 0 to B(L)^c decays to 0 stretched exponentially as L → ∞. In the remainder of Section 10.1 we discuss the possibility of such stretched exponential decay for all u > u_*. While this question is still open, we provide an equivalent characterization of such decay in terms of the threshold u_** ≥ u_* defined in (10.1.7), see Proposition 10.3.
The statement (b) will follow from Proposition 10.5 in Section 10.2, where we show that for some u > 0, there is a positive probability that the connected component of 0 in V^u ∩ (Z^2 × {0}^{d−2}) is infinite.

In the proofs of all three propositions we will use Theorem 9.7.
We will frequently use the following notation throughout the chapter. For K_1, K_2 ⊆ Z^d,

{K_1 ↔^{V^u} K_2} := {there exists a nearest neighbor path in V^u connecting K_1 and K_2} (∈ A).

If K_i = {x} for some x, then we omit the curly brackets around x from the notation.
10.1 Subcritical regime
In this section we prove that u_* < ∞. In fact, the following stronger statement is true.

Proposition 10.2. For any d ≥ 3, there exist u′ < ∞ and C′ = C′(u′) < ∞ such that for all N ≥ 1,

P[0 ↔^{V^{u′}} B(N)^c] ≤ C′ · e^{−N^{1/C′}};  (10.1.1)

in particular, u_* < ∞.
Proof. The fact that (10.1.1) implies u_* < ∞ is immediate. Indeed, assume that there exists u′ < ∞ satisfying (10.1.1). Then

P[0 ↔^{V^{u′}} ∞] ≤ lim_{N→∞} P[0 ↔^{V^{u′}} B(N)^c] =^{(10.1.1)} 0,  (10.1.2)

and by (10.0.1), u_* ≤ u′ < ∞.
To prove (10.1.1), we apply Theorem 9.7 to L_0 = 1 and the events G_y = {ξ_y = 0}, y ∈ Z^d. Then the event A(0, N, G), see (9.4.1), consists of those ξ ∈ {0,1}^{Z^d} for which there exists a ∗-path in Z^d from 0 to B(N)^c such that the value of ξ is 0 at each vertex of this ∗-path. In particular, since any nearest neighbor path is a ∗-path,

{0 ↔^{V^{u′}} B(N)^c} ⊆ {I^{u′} ∈ A(0, N, G)}.  (10.1.3)

The events G_y are decreasing; thus if the conditions (9.4.2) are satisfied by some u < ∞, δ ∈ (0, 1), and l_0 ≥ D_{l_0} a multiple of 100, then by (9.4.4) and (10.1.3),

P[0 ↔^{V^{u(1+δ)}} B(N)^c] ≤ C′ · e^{−N^{1/C′}},

i.e., (10.1.1) is satisfied by u′ = u(1 + δ).

We take δ = 1/2 and l_0 ≥ D_{l_0} a multiple of 100 such that f(l_0) < 1 + δ, i.e., the first inequality in (9.4.2) is satisfied. Next, with all other parameters fixed, we take u → ∞ and observe that by (9.2.9), ε(u(1 − δ), l_0, L_0) → 0, and

P[I^u ∈ G_0] = P[0 ∈ V^u] = e^{−u·cap(0)} → 0, as u → ∞.

Thus, the second inequality in (9.4.2) holds if u is sufficiently large. The proof of Proposition 10.2 is complete.
In Proposition 10.2 we showed that for some large enough u > u_*, the probability of the event {0 ↔^{V^u} B(N)^c} decays stretched exponentially. It is natural to ask whether such stretched exponential decay holds for every u > u_*. Namely, is it true that for any u > u_* there exists C′ = C′(u) < ∞ such that (10.1.1) holds? This question is still open. Below we provide a partial answer to it. Let

u_se := inf{u′ : there exists C′ = C′(u′) < ∞ such that (10.1.1) holds}.  (10.1.4)

Note that by (10.1.2) and (10.1.4), any u′ > u_se also satisfies u′ ≥ u_*. Thus u_se ≥ u_*.

Further, we introduce another threshold u_** ≥ u_* in (10.1.7), which involves probabilities of crossings of annuli by nearest neighbor paths in V^u. We prove in Proposition 10.3 that u_** = u_se.
This equivalence will allow us to give a heuristic argument for why one should expect the equality u_* = u_** = u_se, see Remark 10.4.

Define the event

A_L^u = {B(L/2) ↔^{V^u} B(L)^c},  (10.1.5)

i.e., there is a nearest neighbor path of vacant vertices at level u crossing the annulus with inner radius L/2 and outer radius L. Note that

P[A_L^u] ≤ Σ_{x∈B(L/2)} P[x ↔^{V^u} B(L)^c] ≤ |B(L/2)| · P[0 ↔^{V^u} B(L/2)^c].  (10.1.6)

Thus, it follows from Proposition 10.2 that for some u < ∞, lim_{L→∞} P[A_L^u] = 0. Define

u_** = inf{u ≥ 0 : lim inf_{L→∞} P[A_L^u] = 0} < ∞.  (10.1.7)
By (10.1.7),

∀ u < u_**:  lim inf_{L→∞} P[A_L^u] > 0,  (10.1.8)

and by (6.2.5) and (10.1.7),

∀ u > u_**:  lim inf_{L→∞} P[A_L^u] = 0.  (10.1.9)

Indeed, if u > u_**, then by (10.1.7) there exists u′ with u_** ≤ u′ ≤ u such that lim inf_{L→∞} P[A_L^{u′}] = 0, but by (6.2.5), P[A_L^u ⊆ A_L^{u′}] = 1, which implies that lim inf_{L→∞} P[A_L^u] ≤ lim inf_{L→∞} P[A_L^{u′}] = 0.

It is immediate from (10.1.9), (10.0.1), and the inclusion {0 ↔^{V^u} ∞} ⊆ A_L^u that u_** defined in (10.1.7) satisfies u_** ≥ u_*, since any u > u_** must also be ≥ u_*. Next, we prove that u_** defined in (10.1.7) satisfies (10.1.4).
Proposition 10.3. For any d ≥ 3, u_se = u_**.

Proof. The fact that u_se ≥ u_** is immediate from (10.1.6). Indeed, any u′ > u_se also satisfies u′ ≥ u_**.

To prove that u_se ≤ u_**, it suffices to show that any u′ > u_** satisfies (10.1.1). The proof of this fact involves an application of Theorem 9.7.
We fix δ ∈ (0, 1) and u > u_**, and for L_0 ≥ 1 define G_y as the event that there exist z(0), ..., z(m) ∈ Z^d such that |z(0) − y| ≤ L_0/2, |z(m) − y| > L_0, |z(i) − z(i − 1)|_1 = 1 for i ∈ {1, ..., m}, and ξ_{z(i)} = 0 for all i ∈ {0, ..., m}. In words, G_y consists of those ξ ∈ {0,1}^{Z^d} for which there exists a nearest neighbor path from B(y, L_0/2) to B(y, L_0)^c such that the value of ξ is 0 at all the vertices of this path.

Note that the G_y are decreasing; thus if the parameters L_0, u, δ, and l_0 satisfy (9.4.2), then by (9.4.4) there exists C′ = C′(u, δ, l_0, L_0) < ∞ such that for all N ≥ 1,

P[I^{u(1+δ)} ∈ A(0, N, G)] ≤ C′ · e^{−N^{1/C′}}.  (10.1.10)

Since u and δ are fixed, we only have freedom to choose l_0 and L_0. We take l_0 = l_0(δ) such that the first inequality in (9.4.2) is fulfilled. Note that by (9.2.9), lim_{L_0→∞} ε(u(1 − δ), l_0, L_0) = 0, and since u > u_**,

lim inf_{L_0→∞} P[I^u ∈ G_0] = lim inf_{L_0→∞} P[A_{L_0}^u] =^{(10.1.9)} 0.

Thus, there exists L_0 = L_0(u, δ) such that the second inequality in (9.4.2) holds. In particular, (10.1.10) holds for any given u > u_** and δ ∈ (0, 1) with some C′ = C′(u, δ). Since

A_N^{u(1+δ)} ⊆ {I^{u(1+δ)} ∈ A(0, N, G)},

we have proved that for any u > u_** and δ ∈ (0, 1), there exists C′ = C′(u, δ) such that u(1 + δ) satisfies (10.1.1).

To finish the proof, for any u′ > u_**, we take u = u(u′) ∈ (u_**, u′) and δ = δ(u′) ∈ (0, 1) such that u′ = u(1 + δ). It follows from what we just proved that there exists C′ = C′(u′) < ∞ such that u′ satisfies (10.1.1). Thus u′ ≥ u_se. The proof of Proposition 10.3 is complete.
We finish this section with a remark about the relation between u∗ and u∗∗ .
Remark 10.4. We know from the discussion above Proposition 10.3 that u_* ≤ u_**. Assume that the inequality is strict, i.e., that there exists u ∈ (u_*, u_**). Then by (10.1.8), there exists c = c(u) > 0 such that for all large L, P[A_L^u] > c, with A_L^u defined in (10.1.5). Note that A_{2L}^u implies that at least one of the vertices on the boundary of B(L) is connected to B(2L)^c by a path in V^u. Thus, P[A_{2L}^u] ≤ C L^{d−1} · P[0 ↔^{V^u} B(L)^c]. We conclude that for any u ∈ (u_*, u_**), there exists c′ = c′(u) > 0 such that as L → ∞,

c′ · L^{−(d−1)} ≤^{(u < u_**)} P[0 ↔^{V^u} B(L)^c] →^{(u > u_*)} 0.

In other words, if u_* < u_**, then the subcritical regime is divided into two subregimes: (u_*, u_**), where P[0 ↔^{V^u} B(L)^c] decays to 0 at most polynomially, and (u_**, ∞), where it decays to 0 at least stretched exponentially. To the best of our knowledge, there are currently no "natural" percolation models known for which both subregimes are non-degenerate; e.g., in the case of Bernoulli percolation, the subregime of at most polynomial decay is empty, see, e.g., [G99, Theorem 5.4]. We do not have good reasons to believe that u_* < u_**, and therefore we think that the equality should hold.

For further discussion of the history of the definition of u_** and aspects of the question whether u_* = u_**, see Section 10.3.
10.2 Supercritical regime
In this section we prove that u_* > 0 for all d ≥ 3, which extends the result of Theorem 7.2. As in the proof of Theorem 7.2, the result will follow from the more general claim about the existence of an infinite connected component in the restriction of V^u to the plane F = Z^2 × {0}^{d−2}, if u is sufficiently small.

Proposition 10.5. For any d ≥ 3, there exists u′ > 0 such that

P[0 ↔^{V^{u′} ∩ F} ∞] > 0.  (10.2.1)

In particular, u_* ≥ u′ > 0.

Proof. The fact that u_* ≥ u′ for any u′ satisfying (10.2.1) is immediate from (10.0.1); thus we will focus on proving the existence of such u′ > 0.
As in the proof of Theorem 7.2, we will use planar duality and a Peierls-type argument in combination with Theorem 9.7. Let L_0, u, δ, l_0 be parameters as in Theorem 9.7, which will be specified later. For L_0 ≥ 1 and y ∈ Z^d, we define

G_y = ∪_{k=1}^{2} ∪_{z∈y+C_k} {ξ ∈ {0,1}^{Z^d} : ξ_z = 1},  (10.2.2)

where

C_k = {0}^{k−1} × {−L_0, ..., L_0} × {0}^{d−k}

is the segment of length 2L_0 centered at the origin of Z^d and parallel to the k-th coordinate direction, and y + C_k is the translation of C_k by y ∈ Z^d. We will also use the notation C := C_1 ∪ C_2 for the "cross" at the origin of Z^d. Thus, the event G_y consists of those ξ which take value 1 at at least one vertex of the cross y + C.
Assume for a moment that

there exist L_0, u, δ, l_0 satisfying (9.4.2), with the events (G_y)_{y∈Z^d} as in (10.2.2).  (10.2.3)

The events G_y are increasing, therefore by (9.4.3) and (10.2.3), there exists C′ = C′(u, δ, l_0, L_0) < ∞ such that

P[I^{u(1−δ)} ∈ A(0, N, G)] ≤ C′ · e^{−N^{1/C′}}.  (10.2.4)

Fix L_0, u, δ, l_0 as in (10.2.3), and define u′ = u(1 − δ). Recall that F = Z^2 × {0}^{d−2}. We will prove that

P[V^{u′} ∩ F contains an infinite connected component] > 0.  (10.2.5)

Once we are done with (10.2.5), the argument from the proof of Proposition 7.1 implies that (10.2.5) is equivalent to P[V^{u′} ∩ F contains an infinite connected component] = 1, which in turn is equivalent to (10.2.1). Now we prove (10.2.5).
• By (10.2.2), the event G_y^c consists of those ξ ∈ {0,1}^{Z^d} which take value 0 at all the vertices of the cross y + C. Note that for any y, y′ ∈ 𝕃_0 ∩ F with |y − y′|_1 = L_0, their crosses overlap, where 𝕃_0 = L_0 · Z^d. Therefore, if the set {y ∈ 𝕃_0 ∩ F : ξ ∈ G_y^c} contains an infinite nearest neighbor path z(0), z(1), ... in 𝕃_0 ∩ F, then the set ∪_{i≥0}(z(i) + C) is infinite, connected in F, and the value of ξ at every vertex of this set is 0.

• It follows from planar duality in 𝕃_0 ∩ F (similarly to (7.2.3)) that B(L) is not connected to infinity by a nearest neighbor path z(0), z(1), ... in 𝕃_0 ∩ F such that for all i ≥ 0 the event G_{z(i)} does not occur, if and only if there is a ∗-circuit (a ∗-path with the same starting and ending vertices) z′(0), ..., z′(m) in 𝕃_0 ∩ F surrounding B(L) ∩ F in F such that for every i ∈ {0, ..., m} the event G_{z′(i)} occurs.

• If there is a ∗-circuit z′(0), ..., z′(m) in 𝕃_0 ∩ F surrounding B(L) ∩ F in F such that for every i ∈ {0, ..., m} the event G_{z′(i)} occurs, then there exists i ∈ {0, ..., m} such that |z′(i) − z′(0)| ≥ |z′(0)|. In particular, the event A(z′(0), |z′(0)|, G) occurs, see (9.4.1).
The combination of the above three statements implies that for any L ≥ 1
i
h
′
P B(L) is not connected to infinity in V u ∩ F
"
#
B(L) is not connected to infinity in L0 ∩ F
′
≤P
by a nearest neighbor path z(0), z(1), . . . such that for all i ≥ 0, I u ∈ Gcz(i)
there exists a ∗-circuit z ′ (0), . . . , z ′ (m) in L0 ∩ F
=P
′
surrounding B(L) ∩ F in F and such that for all i ∈ {0, . . . , m}, I u ∈ Gz ′ (i)
i
h
′
≤ P there exists x ∈ L0 ∩ F ∩ B(L)c such that I u ∈ A(x, |x|, G)
X
(10.2.4)
≤
1
′
C ′ e−|x| C .
x∈L0 ∩F ∩B(L)c
The sum on the right-hand side is finite for L = 0, so it converges to zero as L → ∞. In
particular, there is an L for which it is less than 1, which implies (10.2.5).
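The convergence of this sum can be checked numerically. The following sketch (with a toy value for the constant C′, which the argument only requires to be finite) sums the stretched-exponential bound over the planar lattice sites outside B(L), truncating the infinite sum at a radius where the remaining terms are negligible:

```python
import math

# Toy value for the constant C' from (10.2.3); the proof only needs C' < infinity.
C = 2.0

def tail_sum(L, R=300):
    """Sum C * exp(-|x|^(1/C)) over x in Z^2 with |x| > L (sup norm),
    truncated at |x| = R; the terms beyond R are negligible."""
    total = 0.0
    for i in range(-R, R + 1):
        for j in range(-R, R + 1):
            r = max(abs(i), abs(j))
            if r > L:
                total += C * math.exp(-r ** (1.0 / C))
    return total

# The tail sum is finite and decreases to 0 as L grows.
print([round(tail_sum(L), 4) for L in (0, 50, 100, 200)])
```

The lattice 𝕃_0 ∩ F is identified with Z^2 up to rescaling, which does not affect convergence.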
It remains to prove (10.2.3).
We choose δ = 1/2 and l_0 ≥ D_{l_0} a multiple of 100 (with D_{l_0} defined in Theorem 9.3) such that the
first inequality in (9.4.2) holds.
To satisfy the second inequality in (9.4.2), we still have the freedom to choose L0 and u. We
will choose them in such a way that
P[ I^u ∈ G_0 ] ≤ 1 / ( (2l_0 + 1)^{2d} · 2e ),    ǫ(u(1 − δ), l_0, L_0) ≤ 1 / ( (2l_0 + 1)^{2d} · 2e ).    (10.2.6)
The inequalities in (10.2.6) trivially imply the second inequality in (9.4.2).
First of all, we observe that for k ∈ {1, 2},

cap(C_k)  ≤ [by (2.2.8), (2.3.13)]  C L_0 / ∑_{1≤j≤L_0} j^{2−d}  ≤  { C L_0 / ln L_0, if d = 3;  C L_0, if d ≥ 4 }.
In fact, one can prove similar lower bounds on cap(Ck ), but we will not need them here. It
follows from (2.3.4) that
cap(C) ≤ cap(C_1) + cap(C_2) ≤ { C L_0 / ln L_0, if d = 3;  C L_0, if d ≥ 4 }.    (10.2.7)
Thus,
P[ I^u ∈ G_0 ] = 1 − P[ V^u ∩ C = ∅ ] = 1 − e^{−u·cap(C)} ≤ u · cap(C)
  ≤ [by (10.2.7)]  { C u L_0 / ln L_0, if d = 3;  C u L_0, if d ≥ 4 }.    (10.2.8)
On the other hand, by (9.2.9),

ǫ(u(1 − δ), l_0, L_0) ≤ ǫ( u/2, 1, L_0 ) = 2 e^{−u L_0^{d−2}/2} / ( 1 − e^{−u L_0^{d−2}/2} ).    (10.2.9)
To satisfy (10.2.6), we need to choose u and L_0 so that each of the right-hand sides of (10.2.8)
and (10.2.9) is smaller than 1/((2l_0 + 1)^{2d} · 2e). We can achieve this by choosing u as a
function of L_0:

u = u(L_0) = { √(ln L_0) / L_0, if d = 3;  L_0^{−3/2}, if d ≥ 4 }.    (10.2.10)
With this choice of u, the right hand sides of both (10.2.8) and (10.2.9) tend to 0 as L0 → ∞.
Thus, for large enough L0 and u = u(L0 ) as in (10.2.10), the inequalities in (10.2.6) are satisfied,
and (10.2.3) follows. The proof of Proposition 10.5 is complete.
10.3 Notes
The phase transition of the vacant set V^u is the central topic of the theory of random interlacements.
Let us now give a historical overview of the results presented in this section, as well as a
brief outline of other related results.
The finiteness of u∗ for d ≥ 3 and the positivity of u∗ for d ≥ 7 were proved in [S10, Theorems
3.5, 4.3], and the latter result was extended to all dimensions d ≥ 3 in [SidSz09, Theorem 3.4].
In contrast to the percolation phase transition of the vacant set V u , the graph spanned by
random interlacements I u at level u is almost surely connected for any u > 0, as we have
already discussed in Section 6.4.
The main result of [T09a] is that for any u < u∗ , the infinite component of V u is almost surely
unique. The proof is a non-trivial adaptation of the Burton-Keane argument, because the law
of V u does not satisfy the so-called finite energy property, see [S10, Remark 2.2 (3)].
As alluded to in Section 6.4 already, one can define random interlacements on more general
transient graphs than Zd , d ≥ 3, which can lead to interesting open questions on the nontriviality of u∗ . As an example, in the case of random interlacements on the graph G × Z,
where G is the discrete skeleton of the Sierpinski gasket, it is still not known whether the
corresponding parameter u∗ is strictly positive; see [S12a] for more details of the construction
of random interlacements on such G × Z, and in particular Remark 5.6 (2) therein.
The first version of the definition of u∗∗ appeared in [S09b, (0.6)], and a stretched exponential
upper bound on the connectivity function of V u for u > u∗∗ was first proved in [SidSz10, Theorem
0.1]. The definition of u∗∗ has been subsequently weakened multiple times. Our definition of
u∗∗ is a special case of definition [S12a, (0.10)] and our proof of Proposition 10.3 is a special
case of [S12a, Theorem 4.1], because the results of [S12a] are about random interlacements on a
large class of graphs of form G × Z, while our results are only about the special case G = Zd−1 .
Currently, the weakest definition of u∗∗ on Zd is [PT13, (3.6)], and the strongest subcriticality
result is an exponential upper bound (with logarithmic corrections if d = 3) on the connectivity
function of V u for u > u∗∗ , see [PT13, Theorem 3.1].
The question whether u_∗ = u_{∗∗} is currently still open for random interlacements on Z^d, but u_∗ = u_{∗∗} does
hold (and the value of u_∗ is explicitly known) if the underlying graph is a regular tree, see [T09b,
Section 5]. The exact value of u_∗ on Z^d is not known, and probably there is no simple formula
for it. However, the following high-dimensional asymptotics are calculated in [Sz11a, Sz11b]:

lim_{d→∞} u_∗(d)/ln(d) = lim_{d→∞} u_{∗∗}(d)/ln(d) = 1.
The supercritical phase u < u_∗ of interlacement percolation has also received some attention. It
is known that for any u < u_∗, the infinite connected component of V^u is almost surely unique
(see [T09a]). One might wonder if the infinite component is locally unique in large boxes. In
[T11] (for d ≥ 5) and in [DRS12a] (for d ≥ 3) it is proved (among other local uniqueness results)
that for small enough values of u we have

P[ 0 ↔ B(L)^c in V^u | 0 ̸↔ ∞ in V^u ] ≤ κ · e^{−L^{1/κ}}.
Properties of V^u become rather different from those of Bernoulli percolation if one considers large
deviations, see [LSz13a, LSz13b]. For example, for a supercritical value of u, the exponential cost
of the unlikely event { B(L) ̸↔ ∞ in V^u } is of order at most L^{d−2} (see [LSz13b, Theorem 0.1]), while
in the case of Bernoulli percolation, the exponential cost of the analogous unlikely disconnection
event is proportional to L^{d−1}.
Chapter 11

Coupling of point measures of excursions
In this chapter we prove Theorem 9.3. It will follow from Theorem 8.9 as soon as we show that,
for some K_1 and K_2 such that G_{T_i} ∈ σ(Ψ_z, z ∈ K_i), and for a specific choice of U_1, U_2, S_1, S_2
satisfying (8.3.1) and (8.3.2), there exists a coupling of the point measures ζ^{∗∗}_− (see (8.3.10)) and
ζ^∗_{−,+} (see (8.3.12)) such that the condition (8.4.1) is satisfied with ǫ = ǫ(u_−, n), where ǫ(u_−, n)
is defined in (9.1.11).
The existence of such a coupling is stated in Theorem 11.4.
11.1 Choice of sets
Consider the geometric length-scales L_n defined in (9.1.2) and the associated lattices 𝕃_n
defined in (9.1.3), and recall from Definition 9.1 the notion of a proper embedding T of a dyadic
tree into Z^d.
For n ≥ 0, x_∅ ∈ 𝕃_{n+1}, and T ∈ Λ̃_{x_∅,n+1}, we define the sets

K_i = ⋃_{m ∈ T_{(n)}} B(T_i(m), 2L_0),    i = 1, 2,    (11.1.1)

i.e., the sets K_1 and K_2 are the unions of L_0-scale boxes surrounding the images of the leaves
of T_{(n)} under T_1 and T_2, respectively.
Remark 11.1. Our motivation for such a choice of K_i is that eventually we want to prove Theorem 9.3
by applying Theorem 8.9 to the events G_{T_1} and G_{T_2} (see the discussion below the statement
of Theorem 9.3). By (9.1.1) and the choice of events (G_x)_{x∈Z^d} in the statement of Theorem 9.3,
the event G_{T_i} is measurable with respect to the sigma-algebra σ(Ψ_z, z ∈ ⋃_{m∈T_{(n)}} B(T_i(m), 2L_0)).
Thus, by taking K_i as in (11.1.1), G_{T_i} is indeed measurable with respect to σ(Ψ_z, z ∈ K_i), as
required in the statement of Theorem 8.9.
Next, we define

U_i := B( T(i), L_{n+1}/1000 ),    i = 1, 2,    U := U_1 ∪ U_2.    (11.1.2)

The sets U_i are L_{n+1}-scale boxes centered at the images of the children of the root of T_{n+1}
under the proper embedding T ∈ Λ̃_{x_∅,n+1}. By the choice of scales in (9.1.2) and Definition 9.1,
namely using that |T(1) − T(2)| > L_{n+1}/100, we see that U_i ⊇ K_i and U_1 ∩ U_2 = ∅. In other words,
the sets U_i satisfy the requirement (8.3.1).
Finally, we consider sets S_1, S_2 ⊂ Z^d and S := S_1 ∪ S_2 satisfying the following conditions:

K_i ⊆ S_i ⊆ B( T(i), L_{n+1}/2000 ) ( ⊆ U_i ),    i = 1, 2,    (11.1.3)

and

(1/2) · (n+1)^{−3/2} · l_0^{3(d−2)/4} · L_n^{d−2}  ≤  cap(S)  ≤  2 · (n+1)^{−3/2} · l_0^{3(d−2)/4} · L_n^{d−2}.    (11.1.4)
Remark 11.2. Condition (11.1.4) might look very mysterious at the moment. It is a technical
condition which ensures that
• for the choice of u_− and u_+ as in (9.1.8), on the one hand S is not too large, so that the
total mass of the point measure of excursions ζ^{∗∗}_− (see (8.3.10)) is not too big in comparison
with the total mass of the point measure of excursions ζ^∗_{−,+} (see (8.3.12));
• on the other hand, the bigger S is, the better the estimate on the probability in (8.4.1) we
can get.
Condition (11.1.4) will not be used until the proof of Lemma 11.10, thus we postpone further
discussion of these restrictions on cap(S) until Remark 11.11.
We invite the reader to prove that a choice of S1 and S2 satisfying (11.1.3) and (11.1.4) actually
exists, by solving the following exercise.
Exercise 11.3.
(a) Use (2.3.4) and (2.3.12) to show that for any K̃ ⊂⊂ Z^d and x ∈ Z^d,

cap(K̃) ≤ cap(K̃ ∪ {x}) ≤ cap(K̃) + 1/g(0).

(b) Let U′ = B( T(1), L_{n+1}/2000 ) ∪ B( T(2), L_{n+1}/2000 ). Use (2.3.14) and (2.3.4) to show that

cap(K_1 ∪ K_2) ≤ C 2^{n+1} L_0^{d−2},    cap(U′) ≥ c · L_{n+1}^{d−2} = c · (l_0^{n+1})^{d−2} · L_0^{d−2}.

11.2 Coupling of point measures of excursions
In this section we state and prove Theorem 11.4 about the existence of a coupling of the point
measures ζ^{∗∗}_− and ζ^∗_{−,+} for the specific choice of u_− and u_+ as in (9.1.8), K_i as in (11.1.1),
U_i as in (11.1.2), S_i as in (11.1.3) and (11.1.4), such that this coupling satisfies (8.4.1) with
ǫ = ǫ(u_−, n) defined in (9.1.11). The combination of Theorems 8.9 and 11.4 immediately implies
Theorem 9.3, see Section 11.3.
Theorem 11.4. Let d ≥ 3. There exist constants D_{l_0} = D_{l_0}(d) < ∞ and D_u = D_u(d) < ∞
such that for all n ≥ 0, L_0 ≥ 1, l_0 ≥ D_{l_0} a multiple of 100, x_∅ ∈ 𝕃_{n+1}, T ∈ Λ̃_{x_∅,n+1}, if u_−
and u_+ satisfy (9.1.8), K_i are chosen as in (11.1.1), U_i as in (11.1.2), and S_i as in (11.1.3)
and (11.1.4), then there exists a coupling (ζ̂^{∗∗}_−, ζ̂^∗_{−,+}) of the point measures ζ^{∗∗}_− and ζ^∗_{−,+} on some
probability space (Ω̂, Â, P̂) satisfying

P̂[ ζ̂^{∗∗}_− ≤ ζ̂^∗_{−,+} ] ≥ 1 − ǫ(u_−, n),    (11.2.1)

where ǫ(u_−, n) is defined in (9.1.11).
The proof of Theorem 11.4 is split into several steps. We first compare the intensity measures
of the relevant Poisson point processes in Section 11.2.1. Then in Section 11.2.2 we construct a
coupling (ζ̂^{∗∗}_−, ζ̂^∗_{−,+}) of ζ^{∗∗}_− and ζ^∗_{−,+} with some auxiliary point measures Σ_− and Σ^1_{−,+} such that,
almost surely, ζ̂^{∗∗}_− ≤ Σ_− and Σ^1_{−,+} ≤ ζ̂^∗_{−,+}. The point measures Σ_− and Σ^1_{−,+} are constructed in
such a way that Σ_− ≤ Σ^1_{−,+} if the total mass N_− of Σ_− does not exceed the total mass N^1_{−,+}
of Σ^1_{−,+}, see Remark 11.9. Finally, in Section 11.2.3 we prove that N_− ≤ N^1_{−,+} with probability
≥ 1 − ǫ(u_−, n), which implies that the coupling constructed in Section 11.2.2 satisfies (11.2.1). We
collect the results of Sections 11.2.1–11.2.3 in Section 11.2.4 to finish the proof of Theorem 11.4.
The choice of the constant D_u = D_u(d) in the statement of Theorem 11.4 is made in Lemma 11.10
(see (11.2.43) and (11.2.44)), and the choice of the constant D_{l_0} = D_{l_0}(d) in the statement of
Theorem 11.4 is made in Lemmas 11.5 (see (11.2.9)) and 11.10 (see (11.2.43)), see (11.2.45).
11.2.1 Comparison of intensity measures
Recall the definition of the Poisson point processes ζ̃^j_κ, j ≥ 1, κ ∈ {−, +}, from (8.3.8), as the
images of the Poisson point processes ζ^j_κ (see (8.3.3)) under the maps φ_j (see (8.3.6)). In words,
every trajectory from the support of ζ^j_κ performs exactly j excursions from S to U^c, and the
map φ_j collects all these excursions into a vector of j excursions in the space C^j (see (8.3.5)).
Let ξ̃^j_κ be the intensity measure of the Poisson point process ζ̃^j_κ, which is a finite measure on
C^j, and let ξ̃^1_{−,+} be the intensity measure of the Poisson point process ζ̃^1_+ − ζ̃^1_−. The aim of this
section is to provide bounds on ξ̃^j_− and ξ̃^1_{−,+} in terms of

Γ(◦) := P_{ẽ_S}[ (X_·)_{0 ≤ · ≤ T_U} = ◦ ],    (11.2.2)

which is the probability distribution on the space C of finite excursions from S to U^c such
that the starting point of a random trajectory γ with distribution Γ is chosen according to the
normalized equilibrium measure ẽ_S, and the excursion evolves like simple random walk up to its
time of first exit out of U. The result is summarized in the following lemma.
Lemma 11.5 (Comparison of intensity measures). Let d ≥ 3. There exist constants D′_{l_0} =
D′_{l_0}(d) < ∞ and D_S = D_S(d) < ∞ such that for all 0 < u_− < u_+, n ≥ 0, L_0 ≥ 1, l_0 ≥ D′_{l_0} a
multiple of 100, x_∅ ∈ 𝕃_{n+1}, T ∈ Λ̃_{x_∅,n+1}, if U_i are chosen as in (11.1.2), and S_i as in (11.1.3),
then

ξ̃^1_{−,+} ≥ ( (u_+ − u_−)/2 ) cap(S) · Γ,    (11.2.3)

ξ̃^j_− ≤ u_− cap(S) · ( D_S cap(S) / L_{n+1}^{d−2} )^{j−1} · Γ^{⊗j},    j ≥ 2.    (11.2.4)
Remark 11.6. The relations (11.2.4) compare the "complicated" intensity measure ξ̃^j_− on C^j
to the "simple" measure Γ^{⊗j} on C^j. Under the probability measure ξ̃^j_−(·)/ξ̃^j_−(C^j), the
coordinates of the j-tuple (w^1, . . . , w^j) are correlated, while under the probability measure Γ^{⊗j}(·),
the coordinates of the j-tuple (w^1, . . . , w^j) are independent.

Remark 11.7. The constant D′_{l_0} is specified in (11.2.9), and D_S in (11.2.18) (see also (11.2.15)
and (11.2.17)).
Proof of Lemma 11.5. We first provide useful expressions for the measures ξ̃^1_{−,+} and ξ̃^j_−. By
Exercise 5.6(b), (6.2.7) and (6.2.8), the intensity measures on W_+ of the Poisson point processes
ζ^j_− and ζ^j_+ − ζ^j_−, j ≥ 1 (see (8.3.3)), are given by

ξ^j_− := u_− 1{R_j < ∞ = R_{j+1}} P_{e_S},    j ≥ 1,
ξ^j_{−,+} := (u_+ − u_−) 1{R_j < ∞ = R_{j+1}} P_{e_S},    j ≥ 1.    (11.2.5)

Further, by Exercise 5.6(c), (11.2.5), and the identity

cap(S) P_{ẽ_S}  = [by (2.3.3)]  P_{e_S},

the intensity measure ξ̃^j_− on C^j of the Poisson point process ζ̃^j_−, j ≥ 1, is given by

ξ̃^j_−(w^1, . . . , w^j) = u_− cap(S) · P_{ẽ_S}[ R_j < ∞ = R_{j+1}, ∀ 1 ≤ k ≤ j : (X_{R_k}, . . . , X_{D_k}) = w^k ],    (11.2.6)

and the intensity measure ξ̃^1_{−,+} on C of the Poisson point process ζ̃^1_+ − ζ̃^1_− is given by

ξ̃^1_{−,+}(w) = (u_+ − u_−) cap(S) · P_{ẽ_S}[ (X_0, . . . , X_{T_U}) = w, H_S ◦ θ_{T_U} = ∞ ].    (11.2.7)
Recall the notions of the exterior and interior boundaries of a set from (2.1.1) and (2.1.2),
respectively.
We begin with the proof of (11.2.3). By (11.2.7) and (11.2.2) we only need to check that there
is a D′_{l_0} = D′_{l_0}(d) < ∞ such that for any l_0 ≥ D′_{l_0},

∀ w ∈ C :  P_{ẽ_S}[ (X_0, . . . , X_{T_U}) = w, H_S ◦ θ_{T_U} = ∞ ] ≥ (1/2) P_{ẽ_S}[ (X_0, . . . , X_{T_U}) = w ].    (11.2.8)
By the strong Markov property applied at the stopping time T_U we get

P_{ẽ_S}[ (X_0, . . . , X_{T_U}) = w, H_S ◦ θ_{T_U} = ∞ ] ≥ P_{ẽ_S}[ (X_0, . . . , X_{T_U}) = w ] · min_{x∈∂_ext U} P_x[H_S = ∞].

Thus, (11.2.8) will follow once we prove that for l_0 ≥ D′_{l_0},

min_{x∈∂_ext U} P_x[H_S = ∞] ≥ 1/2,    or equivalently,    max_{x∈∂_ext U} P_x[H_S < ∞] ≤ 1/2.
We compute

max_{x∈∂_ext U} P_x[H_S < ∞]  = [by (2.3.6)]  max_{x∈∂_ext U} ∑_{y∈S} g(x, y) e_S(y)
  ≤ max_{x∈∂_ext U, y∈S} g(x, y) · ∑_{y∈S} e_S(y)  = [by (2.3.2)]  max_{x∈∂_ext U, y∈S} g(x, y) · cap(S)
  ≤ (∗)  max_{|z| ≥ L_{n+1}/2000} g(z) · cap(S)
  ≤ [by (2.2.8)]  c · (L_{n+1})^{2−d} cap(S)
  ≤ [by (11.1.4)]  c · (L_n l_0)^{2−d} · 2 L_n^{d−2} l_0^{3(d−2)/4}
  = 2c · l_0^{(2−d)/4}  ≤ (∗∗)  1/2.    (11.2.9)

In the equation marked by (∗) we used the fact that (11.1.2) and (11.1.3) imply that for all
x ∈ ∂_ext U, y ∈ S, |x − y| ≥ L_{n+1}/2000. We choose D′_{l_0} = D′_{l_0}(d) such that (∗∗) holds for all l_0 ≥ D′_{l_0}.
The proof of (11.2.3) is complete.
We now proceed with the proof of (11.2.4). By (11.2.6) and (11.2.2) we only need to check that
there exists D_S = D_S(d) < ∞ such that for any vector of excursions (w^1, . . . , w^j) ∈ C^j,

P_{ẽ_S}[ R_j < ∞ = R_{j+1}, ∀ 1 ≤ k ≤ j : (X_{R_k}, . . . , X_{D_k}) = w^k ]  ≤  ( D_S cap(S) / L_{n+1}^{d−2} )^{j−1} · ∏_{k=1}^{j} Γ(w^k).

In fact we will prove by induction on j the slightly stronger result that there exists D_S such that
for any vector of excursions (w^1, . . . , w^j) ∈ C^j,

P_{ẽ_S}[ R_j < ∞, ∀ 1 ≤ k ≤ j : (X_{R_k}, . . . , X_{D_k}) = w^k ]  ≤  ( D_S cap(S) / L_{n+1}^{d−2} )^{j−1} · ∏_{k=1}^{j} Γ(w^k).    (11.2.10)
The inequality (11.2.10) is satisfied with equality for j = 1:

P_{ẽ_S}[ R_1 < ∞, (X_{R_1}, . . . , X_{D_1}) = w^1 ] = P_{ẽ_S}[ (X_0, . . . , X_{T_U}) = w^1 ]  = [by (11.2.2)]  Γ(w^1).
Thus, it remains to prove the following induction step for j ≥ 2:

P_{ẽ_S}[ R_j < ∞, ∀ 1 ≤ k ≤ j : (X_{R_k}, . . . , X_{D_k}) = w^k ]
  ≤ P_{ẽ_S}[ R_{j−1} < ∞, ∀ 1 ≤ k ≤ j − 1 : (X_{R_k}, . . . , X_{D_k}) = w^k ] · ( D_S cap(S) / L_{n+1}^{d−2} ) · Γ(w^j).    (11.2.11)

By the strong Markov property at time D_{j−1},

P_{ẽ_S}[ R_j < ∞, ∀ 1 ≤ k ≤ j : (X_{R_k}, . . . , X_{D_k}) = w^k ]
  ≤ P_{ẽ_S}[ R_{j−1} < ∞, ∀ 1 ≤ k ≤ j − 1 : (X_{R_k}, . . . , X_{D_k}) = w^k ] · max_{x∈∂_ext U} P_x[ R_1 < ∞, (X_{R_1}, . . . , X_{D_1}) = w^j ].

Thus, (11.2.11) reduces to showing that

max_{x∈∂_ext U} P_x[ R_1 < ∞, (X_{R_1}, . . . , X_{D_1}) = w^j ]  ≤  ( D_S cap(S) / L_{n+1}^{d−2} ) · Γ(w^j).    (11.2.12)
In fact, in order to prove (11.2.12), we only need to check that there exists D_S = D_S(d) < ∞
such that for all y ∈ S,

max_{x∈∂_ext U} P_x[ H_S < ∞, X_{H_S} = y ]  ≤  ( D_S cap(S) / L_{n+1}^{d−2} ) · ẽ_S(y)  =  D_S L_{n+1}^{2−d} · e_S(y).    (11.2.13)

Indeed, if (11.2.13) holds, then by the strong Markov property at time R_1,

max_{x∈∂_ext U} P_x[ R_1 < ∞, (X_{R_1}, . . . , X_{D_1}) = w^j ]
  = max_{x∈∂_ext U} P_x[ H_S < ∞, X_{H_S} = w^j(0) ] · P_{w^j(0)}[ (X_0, . . . , X_{T_U}) = w^j ]
  ≤ ( D_S cap(S) / L_{n+1}^{d−2} ) · ẽ_S(w^j(0)) · P_{w^j(0)}[ (X_0, . . . , X_{T_U}) = w^j ]
  = ( D_S cap(S) / L_{n+1}^{d−2} ) · P_{ẽ_S}[ (X_0, . . . , X_{T_U}) = w^j ]
  = [by (11.2.2)]  ( D_S cap(S) / L_{n+1}^{d−2} ) · Γ(w^j),

which is precisely (11.2.12).
It remains to prove (11.2.13). Fix y ∈ S. We will write {X_{H_S} = y} for {H_S < ∞, X_{H_S} = y}.
From (6.1.13) we get

cap(U) · min_{x∈∂_int U} P_x[ X_{H_S} = y ]  ≤  e_S(y)  ≤  cap(U) · max_{x∈∂_int U} P_x[ X_{H_S} = y ],    (11.2.14)

since ẽ_U is a probability measure supported on ∂_int U.
Note that

cap(U)  ≥ [by (2.3.12)]  cap(U_1)  = [by (11.1.2)]  cap( B(0, L_{n+1}/1000) )  ≥ [by (2.3.14)]  ĉ · L_{n+1}^{d−2}.    (11.2.15)
Define
h(x) = P_x[ X_{H_S} = y ].    (11.2.16)

In order to show (11.2.13), we only need to check that

∃ Ĉ = Ĉ(d) < ∞ :  max_{x∈∂_ext U} h(x) ≤ Ĉ · min_{x∈∂_int U} h(x),    (11.2.17)

because then we have

max_{x∈∂_ext U} P_x[ X_{H_S} = y ]  ≤ [by (11.2.17)]  Ĉ · min_{x∈∂_int U} P_x[ X_{H_S} = y ]
  ≤ [by (11.2.14)]  Ĉ · e_S(y) / cap(U)
  ≤ [by (11.2.15)]  (Ĉ/ĉ) · L_{n+1}^{2−d} · e_S(y)  =:  D_S · L_{n+1}^{2−d} · e_S(y),    (11.2.18)

which is precisely (11.2.13).
It only remains to show (11.2.17). The proof will follow from the Harnack inequality (see
Lemma 2.3) and a covering argument.
Note that the function h defined in (11.2.16) is harmonic on S^c, which can be shown similarly
to (2.2.6). Recall from (9.1.6), (11.1.2) and (11.1.3) that

S_i ⊆ B( T(i), L_{n+1}/2000 ) ⊆ B( T(i), L_{n+1}/1000 ) = U_i,  i ∈ {1, 2},    S = S_1 ∪ S_2,    U = U_1 ∪ U_2,
T(1), T(2) ∈ B(x_∅, L_{n+1}) ∩ 𝕃_n,    |T(1) − T(2)| > L_{n+1}/100,

where x_∅ ∈ 𝕃_{n+1} is the image of the root of T as defined in the statement of Theorem 11.4.
Let us define

Λ = ( (L_{n+1}/4000) Z^d ) ∩ ( B(x_∅, 2L_{n+1}) \ U ).    (11.2.19)
Note the following facts about Λ, also illustrated by Figure 11.1:

|Λ| ≤ 8001^d,    (11.2.20)

∂_ext U ∪ ∂_int U ⊆ ⋃_{x′∈Λ} B( x′, L_{n+1}/4000 ) =: Ũ,    (11.2.21)

Λ spans a connected subgraph of the lattice (L_{n+1}/4000) Z^d,    (11.2.22)

⋃_{x′∈Λ} B( x′, L_{n+1}/2000 ) ⊆ S^c.    (11.2.23)
Figure 11.1: An illustration of the properties (11.2.20)–(11.2.23) of the set Λ, defined in (11.2.19).
The dots represent elements of Λ and the big box is B(x_∅, 2L_{n+1}).
Now by (11.2.23) we see that we can apply Lemma 2.3 to obtain

∀ x′ ∈ Λ :  max_{x ∈ B(x′, L_{n+1}/4000)} h(x)  ≤  C_H · min_{x ∈ B(x′, L_{n+1}/4000)} h(x).

Using (11.2.22) we can form a chain of overlapping balls of the form B(x′, L_{n+1}/4000), x′ ∈ Λ, to connect
any pair of vertices x_1, x_2 ∈ Ũ. The number of balls used in such a chain is certainly less than
or equal to |Λ|, thus we obtain

max_{x∈Ũ} h(x) ≤ C_H^{|Λ|} · min_{x∈Ũ} h(x).

We conclude

max_{x∈∂_ext U} h(x)  ≤ [by (11.2.21)]  max_{x∈Ũ} h(x)  ≤ [by (11.2.20)]  (C_H)^{8001^d} · min_{x∈Ũ} h(x)  ≤ [by (11.2.21)]  Ĉ · min_{x∈∂_int U} h(x),

which finishes the proof of (11.2.17). The proof of (11.2.4) is complete.

11.2.2 Construction of coupling

In this section we construct a coupling of ζ^{∗∗}_− and ζ^∗_{−,+} using thinning and merging of Poisson
point processes, see Lemma 11.8. Later in Section 11.2.3 we show that this coupling satisfies
the requirements of Theorem 11.4, see also Remark 11.9.
Fix u_+ > u_− > 0, and define on an auxiliary probability space (Ω̂, Â, P̂) a family of independent
Poisson random variables N^1_{−,+} and N^j_−, j ≥ 2, with

N^1_{−,+} ∼ POI( (u_+ − u_−)/2 · cap(S) ),    N^j_− ∼ POI( u_− cap(S) · ( D_S cap(S) / L_{n+1}^{d−2} )^{j−1} ),  j ≥ 2,    (11.2.24)

where the constant D_S = D_S(d) is defined in Lemma 11.5, and set

N_− := ∑_{j≥2} j N^j_−.    (11.2.25)

In addition, let

(γ_k), k ≥ 1, be an i.i.d. sequence of C-valued random variables with distribution Γ,    (11.2.26)

where Γ is defined in (11.2.2), and introduce the following point processes on C:

Σ_− := ∑_{1≤k≤N_−} δ_{γ_k}    and    Σ^1_{−,+} := ∑_{1≤k≤N^1_{−,+}} δ_{γ_k}.    (11.2.27)

Mind that we use the same γ_k in the definitions of Σ_− and Σ^1_{−,+}.
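The point of reusing the same sequence (γ_k) is that the two point measures are automatically ordered whenever their total masses are. A minimal simulation sketch, with integer labels standing in for excursions and arbitrary toy values for the total masses:

```python
import random
from collections import Counter

random.seed(0)

# Stand-in excursion space: excursions represented by integer labels,
# drawn i.i.d. as in (11.2.26).
gammas = [random.randrange(100) for _ in range(10_000)]

def point_measure(n):
    """Point measure delta_{gamma_1} + ... + delta_{gamma_n} as a Counter."""
    return Counter(gammas[:n])

n_minus, n1 = 37, 52            # toy realizations with N_- <= N^1_{-,+}
sigma_minus = point_measure(n_minus)
sigma1 = point_measure(n1)

# Because both measures use the SAME gamma_k, N_- <= N^1_{-,+} forces
# Sigma_- <= Sigma^1_{-,+} as point measures.
dominated = all(sigma_minus[g] <= sigma1[g] for g in sigma_minus)
print(dominated)
```

Had the two measures used independent copies of (γ_k), no such pointwise domination would follow from comparing total masses alone.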
The following lemma provides the coupling claimed in Theorem 11.4, see Remark 11.9.

Lemma 11.8 (Coupling of point processes). Let d ≥ 3. Take the constants D′_{l_0} = D′_{l_0}(d) and
D_S = D_S(d) as in the statement of Lemma 11.5. For all 0 < u_− < u_+, n ≥ 0, L_0 ≥ 1, l_0 ≥ D′_{l_0} a
multiple of 100, x_∅ ∈ 𝕃_{n+1}, T ∈ Λ̃_{x_∅,n+1}, if U_i are chosen as in (11.1.2), and S_i as in (11.1.3),
then on the auxiliary probability space (Ω̂, Â, P̂) one can construct random variables N^1_{−,+}, N_−,
Σ^1_{−,+}, and Σ_− as above, as well as random variables

ζ̂^{∗∗}_− distributed as ζ^{∗∗}_−    and    ζ̂^∗_{−,+} distributed as ζ^∗_{−,+},    (11.2.28)

such that
ζ̂^{∗∗}_− ≤ Σ_−    and    Σ^1_{−,+} ≤ ζ̂^∗_{−,+}.    (11.2.29)
Remark 11.9. Later in Lemma 11.10 we prove that if, in addition to the conditions of
Lemma 11.8, one assumes that u_+ and u_− are chosen as in (9.1.8) with some properly chosen
constant D_u = D_u(d) < ∞, and S satisfies (11.1.4), then

P̂[ N^1_{−,+} < N_− ] ≤ ǫ(u_−, n).

Since
{ N_− ≤ N^1_{−,+} } ⊆ { Σ_− ≤ Σ^1_{−,+} } ⊆ { ζ̂^{∗∗}_− ≤ ζ̂^∗_{−,+} },

the coupling (ζ̂^{∗∗}_−, ζ̂^∗_{−,+}) of ζ^{∗∗}_− and ζ^∗_{−,+} constructed in Lemma 11.8 satisfies all the requirements
of Theorem 11.4.
Proof of Lemma 11.8. We first construct ζ̂^{∗∗}_−. Using (11.2.25), we can write

Σ_− = ∑_{j≥2} ∑_{i=1}^{j N^j_−} δ_{γ_i} = ∑_{j≥2} ∑_{i=1}^{N^j_−} ∑_{l=1}^{j} δ_{γ_{∑_{k=2}^{j−1} k N^k_− + j(i−1) + l}}.
For j ≥ 2 and 1 ≤ i ≤ N^j_−, we consider the vectors

γ^{j,i} = ( γ_{∑_{k=2}^{j−1} k N^k_− + j(i−1) + 1}, . . . , γ_{∑_{k=2}^{j−1} k N^k_− + ji} ) ∈ C^j.

By (11.2.26), the vectors (γ^{j,i})_{j≥2, 1≤i≤N^j_−} are independent, and γ^{j,i} has distribution Γ^{⊗j}.
Consider the point measures σ^j_− on the space C^j, defined as

σ^j_− = ∑_{i=1}^{N^j_−} δ_{γ^{j,i}}.    (11.2.30)

Since (N^j_−)_{j≥2} are independent, and N^j_− has Poisson distribution with parameter as in (11.2.24),
it follows from the construction (5.2.1) and Exercise 5.5 that

σ^j_−, j ≥ 2, are independent Poisson point processes on C^j
with intensity measures u_− cap(S) · ( D_S cap(S) / L_{n+1}^{d−2} )^{j−1} · Γ^{⊗j}.    (11.2.31)

Moreover, recalling the definition of s_j from above (8.3.10), we see that

Σ_− = ∑_{j≥2} s_j(σ^j_−).
We will now construct Poisson point processes ζ̂^j_− on C^j with intensity measures ξ̃^j_− (cf. (11.2.6))
by "thinning" the corresponding processes σ^j_−. In particular, ζ̂^j_− will have the same distribution
as ζ̃^j_−.
For the construction of ζ̂^j_−, conditional on the (γ_k)_{k≥1} and (N^j_−)_{j≥2}, we consider independent
{0, 1}-valued random variables (α_{j,i})_{j≥2, 1≤i≤N^j_−} with distributions determined by

P[α_{j,i} = 1] = 1 − P[α_{j,i} = 0] = ξ̃^j_−(γ^{j,i}) / ( u_− cap(S) · ( D_S cap(S) / L_{n+1}^{d−2} )^{j−1} · Γ^{⊗j}(γ^{j,i}) ).

The fact that P[α_{j,i} = 1] ∈ [0, 1] follows from (11.2.4).
Then for σ^j_− as in (11.2.30), we define

ζ̂^j_− := ∑_{i=1}^{N^j_−} α_{j,i} δ_{γ^{j,i}}.    (11.2.32)

By Exercise 5.8 and (11.2.31),

ζ̂^j_−, j ≥ 2, are independent Poisson point processes on C^j with intensity measures ξ̃^j_−.    (11.2.33)

In particular, ζ̂^j_− has the same distribution as ζ̃^j_−, j ≥ 2.
Recall from (8.3.10) that ζ^{∗∗}_− = ∑_{j≥2} s_j(ζ̃^j_−). Thus, if we analogously define

ζ̂^{∗∗}_− := ∑_{j≥2} s_j(ζ̂^j_−),    (11.2.34)

then by (8.3.9), (11.2.33), and (11.2.34), ζ̂^{∗∗}_− has the same distribution as ζ^{∗∗}_−. Moreover, by (11.2.30)
and (11.2.32),

ζ̂^{∗∗}_− ≤ ∑_{j≥2} s_j(σ^j_−) = Σ_−.

The construction of ζ̂^{∗∗}_− satisfying (11.2.28) and (11.2.29) is complete.
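The thinning step used above rests on the standard fact that independently keeping each point of a Poisson process with probability p yields a Poisson process with intensity multiplied by p (in the proof the acceptance probability α_{j,i} varies with the point; the sketch below illustrates only the simplest case of a constant acceptance probability, with toy parameters):

```python
import math
import random

random.seed(1)

def poisson(lam):
    """Inverse-transform Poisson sampler (adequate for small lam)."""
    u, k, p = random.random(), 0, math.exp(-lam)
    c = p
    while u > c:
        k += 1
        p *= lam / k
        c += p
    return k

lam, accept = 3.0, 0.4          # toy intensity and thinning probability
n_trials = 200_000
thinned = []
for _ in range(n_trials):
    n = poisson(lam)            # number of points of the original process
    # keep each point independently with probability `accept`
    thinned.append(sum(random.random() < accept for _ in range(n)))

mean = sum(thinned) / n_trials
var = sum((x - mean) ** 2 for x in thinned) / n_trials
print(mean, var)   # both should be close to lam * accept = 1.2
```

A Poisson distribution has equal mean and variance, so the agreement of both empirical moments with λ·p is consistent with the thinned count being Poisson(λp).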
We will now construct ζ̂^∗_{−,+}. First note that by (11.2.24), (11.2.26), and (11.2.27),

Σ^1_{−,+} is a Poisson point process on C with intensity measure ( (u_+ − u_−)/2 ) cap(S) · Γ.

Let σ^1 be a Poisson point process on C, independent from Σ^1_{−,+} (and anything else), with
intensity measure

ξ̃^1_{−,+} − ( (u_+ − u_−)/2 ) cap(S) · Γ.

The fact that this intensity measure is non-negative crucially depends on (11.2.3). We define

ζ̂^∗_{−,+} := Σ^1_{−,+} + σ^1.

By Exercise 5.7,

ζ̂^∗_{−,+} is a Poisson point process on C with intensity measure ξ̃^1_{−,+}.

In particular,

ζ̂^∗_{−,+} =^d ζ^∗_+ − ζ^∗_− = ζ^∗_{−,+}    and    ζ̂^∗_{−,+} ≥ Σ^1_{−,+}.

The construction of ζ̂^∗_{−,+} satisfying (11.2.28) and (11.2.29) is complete.
11.2.3 Error term

The main result of this section is the following lemma, which gives the error term in (11.2.1),
see Remark 11.9.

Lemma 11.10. Let d ≥ 3. There exist constants D″_{l_0} = D″_{l_0}(d) < ∞ and D_u = D_u(d) < ∞ such
that for all n ≥ 0, L_0 ≥ 1, l_0 ≥ D″_{l_0} a multiple of 100, x_∅ ∈ 𝕃_{n+1}, T ∈ Λ̃_{x_∅,n+1}, if u_− and u_+
satisfy (9.1.8), U_i are chosen as in (11.1.2), S_i as in (11.1.3) and (11.1.4), then N^1_{−,+} and N_−
defined, respectively, in (11.2.24) and (11.2.25) satisfy

P̂[ N^1_{−,+} < N_− ] ≤ ǫ(u_−, n),    (11.2.35)

where the error function ǫ(u, n) is defined by (9.1.11).

The proof of Lemma 11.10 crucially relies on the fact that S is chosen so that cap(S) satisfies
(11.1.4). In the next remark we elaborate on condition (11.1.4).
Remark 11.11. Note that

Ê[N^1_{−,+}] = ( (u_+ − u_−)/2 ) cap(S)  = [by (9.1.8)]  (1/2) · D_u · (n+1)^{−3/2} · l_0^{−(d−2)/4} · u_− cap(S)

and

Ê[N_−] = ∑_{j=2}^{∞} j · u_− cap(S) · ( D_S cap(S) / L_{n+1}^{d−2} )^{j−1}
  ≈ 2 · u_− cap(S) · D_S cap(S) / L_{n+1}^{d−2},    if  D_S cap(S) / L_{n+1}^{d−2} ≪ 1.

Thus, if we want Ê[N_−] < Ê[N^1_{−,+}], then we should assume that

cap(S) < ( D_u / (4 D_S) ) · (n+1)^{−3/2} · l_0^{−(d−2)/4} · L_{n+1}^{d−2} = ( D_u / (4 D_S) ) · (n+1)^{−3/2} · l_0^{3(d−2)/4} · L_n^{d−2}.    (11.2.36)
On the other hand, in order to obtain (11.2.35) we should show that N^1_{−,+} and N_− are "close" to
their means, which would be the case if the parameters of their distributions are sufficiently large.
In other words, the larger we take cap(S), the better relative concentration we get. Therefore, by
choosing cap(S) to grow as fast as the upper bound in (11.2.36), we should get the best possible
decay for P̂[ N^1_{−,+} < N_− ].
Of course, the precise choice of the exponents for L_n, l_0 and n in (11.1.4) (as well as in (9.1.11))
is strongly tied to our specific choice of u_+ in (9.1.8). We could have chosen to replace these
exact exponents by some parameters satisfying certain restrictions, and the proofs would still go
through without the need for new ideas; for further discussion see Section 11.4.
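The approximation Ê[N_−] ≈ 2 u_− cap(S) β used in the remark can be checked directly: for β ≪ 1 the series ∑_{j≥2} j β^{j−1} is dominated by its first term 2β. A small sketch, in which `a` stands in for u_− cap(S) and β takes toy values:

```python
def mean_N_minus(a, beta, jmax=200):
    """Truncation of E[N_-] = sum_{j>=2} j * a * beta^(j-1)."""
    return sum(j * a * beta ** (j - 1) for j in range(2, jmax))

a = 1.0
for beta in (0.1, 0.01, 0.001):
    exact, approx = mean_N_minus(a, beta), 2 * a * beta
    print(beta, exact, approx, exact / approx)
```

The ratio exact/approx tends to 1 as β decreases, confirming that the j = 2 term dominates.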
Throughout this section we will use the following notation:

λ^1_{−,+} := ( (u_+ − u_−)/2 ) cap(S),    λ^j_− := u_− cap(S) · ( D_S cap(S) / L_{n+1}^{d−2} )^{j−1},  j ≥ 2,
β := D_S cap(S) / L_{n+1}^{d−2},    (11.2.37)

where the constant D_S = D_S(d) is defined in Lemma 11.5. Note that N^1_{−,+} ∼ POI(λ^1_{−,+}),
N^j_− ∼ POI(λ^j_−), and λ^j_− = u_− cap(S) · β^{j−1}.
Exercise 11.12. Calculate the moment generating functions of N^1_{−,+} and N_−:

∀ a ∈ R :  Ê[ e^{a·N^1_{−,+}} ] = exp( λ^1_{−,+} · (e^a − 1) ),    (11.2.38)

∀ a < − ln(β) :  Ê[ e^{a·N_−} ] = exp( ∑_{j=2}^{∞} λ^j_− · (e^{aj} − 1) ).    (11.2.39)
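Identity (11.2.38) can be verified numerically by truncating the defining series of the moment generating function of a Poisson random variable; λ and a below are arbitrary toy values:

```python
import math

def mgf_poisson_series(lam, a, kmax=200):
    """Truncated series sum_k e^{-lam} (lam e^a)^k / k! for E[exp(a N)], N ~ POI(lam)."""
    term, total = math.exp(-lam), 0.0
    for k in range(kmax):
        total += term
        term *= lam * math.exp(a) / (k + 1)   # next term of the series
    return total

lam, a = 2.5, 0.7                                 # toy parameters
closed_form = math.exp(lam * (math.exp(a) - 1))   # right-hand side of (11.2.38)
print(mgf_poisson_series(lam, a), closed_form)
```

The two printed values agree to many digits, since the truncated series converges rapidly for these parameters.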
Proof of Lemma 11.10. We begin with noting that

P̂[ N^1_{−,+} < N_− ] ≤ P̂[ N^1_{−,+} ≤ (1/2)·λ^1_{−,+} ] + P̂[ (1/2)·λ^1_{−,+} ≤ N_− ].    (11.2.40)

We estimate the first summand in (11.2.40) by

P̂[ N^1_{−,+} ≤ (1/2)·λ^1_{−,+} ] = P̂[ e^{−N^1_{−,+}} ≥ e^{−(1/2)·λ^1_{−,+}} ]  ≤ (∗)  e^{(1/2)·λ^1_{−,+}} · Ê[ e^{−N^1_{−,+}} ]
  = [by (11.2.38)]  e^{(1/2)·λ^1_{−,+}} · e^{λ^1_{−,+}·(e^{−1} − 1)}  ≤  e^{−(1/10)·λ^1_{−,+}},

where in the inequality marked by (∗) we used the exponential Markov inequality.
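The resulting bound P̂[N ≤ λ/2] ≤ e^{−λ/10} for a Poisson(λ) variable can be checked against the exact lower tail; the values of λ below are arbitrary:

```python
import math

def poisson_lower_tail(lam, m):
    """P[N <= m] for N ~ POI(lam), summing the pmf iteratively."""
    term, total = math.exp(-lam), 0.0
    for k in range(int(m) + 1):
        total += term
        term *= lam / (k + 1)
    return total

checks = []
for lam in (5.0, 20.0, 80.0):
    lhs = poisson_lower_tail(lam, lam / 2)   # exact tail probability
    rhs = math.exp(-lam / 10)                # Chernoff-type bound from the proof
    checks.append(lhs <= rhs)
print(checks)
```

The exact tail is in fact much smaller than e^{−λ/10}; the constant 1/10 is simply a convenient choice below e^{−1} − 1/2 ≈ 0.13.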
Similarly, but now using the exponential Markov inequality and (11.2.39), we bound the second
summand in (11.2.40) by

P̂[ N_− ≥ (1/2)·λ^1_{−,+} ] ≤ exp( −(1/2)·λ^1_{−,+} + ∑_{j=2}^{∞} λ^j_− (e^j − 1) )
  ≤ [by (11.2.37)]  exp( −(1/2)·λ^1_{−,+} + u_− cap(S) · ∑_{j=2}^{∞} β^{j−1} e^j )
  ≤ (∗)  exp( −(1/2)·λ^1_{−,+} + u_− cap(S) · 2e^2 β )
  ≤ (∗∗)  exp( −(1/4)·λ^1_{−,+} ),

where the inequality (∗) holds under the assumption β·e ≤ 1/2, and the inequality (∗∗) holds
under the assumption u_− cap(S) · 2e^2 β ≤ (1/4)·λ^1_{−,+}, which can be rewritten using (9.1.8) and
(11.2.37) as

cap(S) ≤ ( 1 / (2e·D_S) ) · L_{n+1}^{d−2}    and    cap(S) ≤ ( 1 / (16e^2·D_S) ) · D_u · (n+1)^{−3/2} · l_0^{−(d−2)/4} · L_{n+1}^{d−2}.    (11.2.41)
16e2 · DS
By our choice of S in (11.1.4) (namely, the upper bound on cap(S)),
3(d−2)
4
3
cap(S) ≤ 2 · (n + 1)− 2 · l0
· Ld−2
n
(9.1.2)
3
− d−2
4
= 2 · (n + 1)− 2 · l0
· Ld−2
n+1 ,
thus conditions (11.2.41) are fulfilled, if
d−2
3
4e · DS ≤ (n + 1) 2 · l0 4
32e2 · DS ≤ Du .
and
(11.2.42)
Mind that DS = DS (d) was already identified in Lemma 11.5, but we still have the freedom to
choose Du and D′′l0 . We take
Du = Du (d) ≥ 32e2 · DS
and
4
D′′l0 = D′′l0 (d) ≥ (4e · DS ) d−2 ,
(11.2.43)
so that conditions (11.2.42) hold for all l0 ≥ D′′l0 .
Plugging the obtained bounds into (11.2.40), we obtain (using the lower bound on cap(S) in
(11.1.4)) that

P̂[ N^1_{−,+} < N_− ] ≤ 2 · e^{−(1/10)·λ^1_{−,+}}  = [by (11.2.37)]  2 · e^{−(1/20)·(u_+ − u_−)·cap(S)}
  = [by (9.1.8)]  2 · e^{−(1/20)·D_u·(n+1)^{−3/2}·l_0^{−(d−2)/4}·u_− cap(S)}
  ≤ [by (11.1.4)]  2 · e^{−(1/40)·D_u·u_−·(n+1)^{−3}·l_0^{(d−2)/2}·L_n^{d−2}}
  ≤ (∗)  2 · e^{−2·u_−·(n+1)^{−3}·l_0^{(d−2)/2}·L_n^{d−2}}  = [by (9.1.11)]  ǫ(u_−, n),

where in the inequality (∗) we assumed that

D_u ≥ 80.    (11.2.44)

The proof of Lemma 11.10 is complete, with the choice of D″_{l_0} satisfying (11.2.43), and the choice
of D_u as in (11.2.43) and (11.2.44).
11.2.4 Proof of Theorem 11.4

The result of Theorem 11.4 holds with the choice of the constants D_u as in Lemma 11.10 and

D_{l_0} = max(D′_{l_0}, D″_{l_0}),    (11.2.45)

where D′_{l_0} is defined in Lemma 11.5 and D″_{l_0} in Lemma 11.10. Indeed, with this choice of the
constants, the coupling of the point measures ζ^{∗∗}_− and ζ^∗_{−,+} constructed in Lemma 11.8 satisfies
(11.2.1) by Lemma 11.10 and Remark 11.9.
11.3 Proof of Theorem 9.3

By Theorem 8.9, it suffices to show that for some choice of K_1 and K_2 such that G_{T_i} ∈ σ(Ψ_z, z ∈
K_i), U_1, U_2, S_1, S_2 satisfying (8.3.1) and (8.3.2), and u_− and u_+ as in (9.1.8), there exists a
coupling of the point measures ζ^{∗∗}_− and ζ^∗_{−,+} which satisfies (8.4.1) with ǫ = ǫ(u_−, n).
The existence of such a coupling is precisely the statement of Theorem 11.4, where K_i are chosen
as in (11.1.1), U_i as in (11.1.2), and S_i as in (11.1.3) and (11.1.4).
11.4 Notes

The proofs of this chapter are adaptations of the proofs of [S12a, Section 2], which are formulated
in a general setting where the underlying graph is of the form G × Z (with some assumptions on G).
The decoupling inequalities of [S12a] are also more flexible than ours, in the sense that [S12a,
Theorem 2.1] has more parameters that the user can choose freely in order to optimize the trade-off
between the amount of sprinkling u_+ − u_− and the error term ǫ(u_−, n) that we discussed in
Remark 11.11. For simplicity, we have made some arbitrary choice of parameters. Our
Theorem 9.3 follows from [S12a, Theorem 2.1] applied to the graph E = Z^{d−1} × Z (see also
[S12a, Remark 2.7(1)]); therefore the decay of the Green function in our case is governed by the
exponent ν = d − 2. Our choice of parameters from [S12a, Theorem 2.1] is the following:

K = 1,    ν′ = ν/2 = (d − 2)/2.
Index
(Ω, A, P), 41
∗-neighbors, 9
GT , 65
Gx,n , 67
HA , 10
LA , 10
Ln , 66
QK , 36
TA , 10
U , 58, 83
W , 35
W ∗ , 36
∗ , 36
WK
W+ , 35
WK , 36
n , 36
WK
Γ, 85
Λx,n , 66
Ψx , 17
S, 58, 84
∂ext K, 9
∂int K, 9
I u , 41
Ln , 66
P u , 17
T , 65
V u , 41
cap(·), 13
µK , 42
µK,u , 42
ν, 37
π ∗ , 36
S, 17
σ(·), 10
θk , 35
| · |p , 9
e A , 10
H
e
Λx,n , 66
eeK , 13
∗ , 60
ζ−,+
∗∗
ζ+ , 60
∗∗ , 60
ζ−
eK , 12
f ≍ g, 10
f ∼ g, 10
g(·), 11
g(·, ·), 11
−
u+
∞ , u∞ , 68
u∗∗ , 77
range, 41
Bernoulli percolation, 21
coupling, 61
cylinder event, 17
decoupling inequalities, 68
dyadic tree, 65
ergodic transformation, 19
harmonic function, 11
Harnack inequality, 11
lazy random walk, 24
local event, 17
measure-preserving transformation, 19
nearest neighbors, 9
Pólya’s theorem, 12
Peierls argument, 49
percolation threshold u∗ , 47
proper embedding, 66
simple random walk, 10
sweeping identity, 40
Bibliography
[A89]
D. Aldous. Probability approximations via the Poisson clumping heuristic. Applied
Mathematical Sciences, 77, Springer-Verlag, New York, 1989.
[AB92]
D. Aldous and M. Brown. Inequalities for rare events in time-reversible Markov
chains. I. Stochastic inequalities (Seattle, WA, 1991), IMS Lecture Notes Monogr.
Ser., 22, 1–16, 1992.
[AP96]
P. Antal and A. Pisztora. On the chemical distance for supercritical Bernoulli percolation. Ann. Probab. 24(2), 1036–1048, 1996.
[B12]
D. Belius. Cover levels and random interlacements. Ann. Appl. Probab. 22(2), 522–
540, 2012.
[BS08]
I. Benjamini and A.-S. Sznitman. Giant component and vacant set for random walk
on a discrete torus. J. Eur. Math. Soc., 10(1), 133-172, 2008.
[BeBi07]
N. Berger and M. Biskup. Quenched invariance principle for simple random walk on percolation cluster. Probab. Theory Relat. Fields 137, 83–120, 2007.
[B86]
P. Billingsley. Probability and measure. Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics, Second Ed. John Wiley &
Sons Inc., New York, 1986.
[CP12]
J. Černý and S. Popov. On the internal distance in the interlacement set. Electron. J. Probab. 17, no. 29, 1–25, 2012.
[CT12]
J. Černý and A. Teixeira. From random walk trajectories to random interlacements.
Ensaios Matemáticos 23, 1–78, 2012.
[DS84]
P. G. Doyle and J. L. Snell. Random walks and electric networks. Carus Mathematical Monographs 22, Mathematical Association of America, 1984.
[DRS12a]
A. Drewitz, B. Ráth, A. Sapozhnikov. Local percolative properties of the vacant
set of random interlacements with small intensity. To appear in Ann. Inst. Henri Poincaré. arXiv:1206.6635, 2012.
[DRS12b]
A. Drewitz, B. Ráth and A. Sapozhnikov. On chemical distances and shape theorems
in percolation models with long-range correlations. arXiv:1212.2885, 2012.
[G99]
G. R. Grimmett. Percolation. Springer-Verlag, Berlin, second edition, 1999.
[GKZ93]
G. R. Grimmett, H. Kesten, Y. Zhang. Random walk on the infinite cluster of the
percolation model. Probab. Theory Relat. Fields 96, 33–44, 1993.
[K86]
H. Kesten. Aspects of first-passage percolation. In École d'été de probabilités de Saint-Flour XIV, Lecture Notes in Math. 1180, Springer-Verlag, 125–264, 1986.
[K93]
J.F.C. Kingman. Poisson processes. Oxford Studies in Probability, The Clarendon
Press Oxford University Press, 1993.
[L91]
G.F. Lawler. Intersections of random walks. Probability and its Applications,
Birkhäuser Boston Inc., 1991.
[LPW08]
D. A. Levin, Y. Peres and E. L. Wilmer. Markov Chains and Mixing Times, Amer.
Math. Soc., Providence, RI, 2008.
[LSz13a]
X. Li and A.-S. Sznitman. Large deviations for occupation time profiles of random
interlacements. Preprint, arXiv:1304.7477, 2013.
[LSz13b]
X. Li and A.-S. Sznitman. A lower bound for disconnection by random interlacements. Preprint, arXiv:1310.2177, 2013.
[L83]
T. Lyons. A simple criterion for transience of a reversible Markov chain. Ann. Probab. 11, 393–402, 1983.
[LP11]
R. Lyons and Y. Peres. Probability on Trees and Networks. In preparation. Current version is available online at http://mypage.iu.edu/~rdlyons/. Cambridge
University Press.
[MP07]
P. Mathieu and A. L. Piatnitski. Quenched invariance principles for random walks on percolation clusters. Proceedings of the Royal Society A 463, 2287–2307, 2007.
[M56]
E.W. Montroll. Random walks in multidimensional spaces, especially on periodic
lattices. J. Soc. Indust. Appl. Math., vol. 4, pp. 241–260, 1956.
[P36]
R. Peierls. On Ising's model of ferromagnetism. Proc. Camb. Phil. Soc. 32, 477–481, 1936.
[P21]
G. Pólya. Über eine Aufgabe der Wahrscheinlichkeitsrechnung betreffend die Irrfahrt im Straßennetz. Math. Ann. 84(1–2), 149–160, 1921.
[PT13]
S. Popov and A. Teixeira. Soft local times and decoupling of random interlacements.
To appear in J. Eur. Math. Soc. arXiv:1212.1605, 2013.
[PRS13]
E. Procaccia, R. Rosenthal, A. Sapozhnikov. Quenched invariance principle for simple random walk on clusters in correlated percolation models. arXiv:1310.4764, 2013.
[PT11]
E. Procaccia and J. Tykesson. Geometry of the random interlacement. Electron.
Commun. Probab. 16, 528–544, 2011.
[RS12]
B. Ráth, A. Sapozhnikov. Connectivity properties of random interlacement and
intersection of random walks. ALEA, 9: 67-83, 2012.
[RS11a]
B. Ráth and A. Sapozhnikov. On the transience of random interlacements. Electron.
Commun. Probab. 16, 379–391, 2011.
[RS11b]
B. Ráth and A. Sapozhnikov. The effect of small quenched noise on connectivity
properties of random interlacements. Electron. J. Probab. 18, no. 4, 1–20.
[Res87]
S.I. Resnick. Extreme values, regular variation, and point processes. Springer, 2007.
[Rod12]
P.-F. Rodriguez. Level set percolation for random interlacements and the Gaussian
free field. arXiv:1302.7024, 2013.
[R13]
J. Rosen. Intersection local times for interlacements. arXiv:1308.3469, 2013.
[SidSz04]
V. Sidoravicius and A.-S. Sznitman. Quenched invariance principles for walks on clusters of percolation or among random conductances. Probab. Theory Relat. Fields 129, 219–244, 2004.
[SidSz09]
V. Sidoravicius and A.-S. Sznitman. Percolation for the Vacant Set of Random
Interlacements. Comm. Pure Appl. Math. 62(6), 831–858, 2009.
[SidSz10]
V. Sidoravicius and A.-S. Sznitman. Connectivity bounds for the vacant set of random interlacements. Ann. Inst. Henri Poincaré Probab. Stat. 46(4), 976–990, 2010.
[SW2001]
S. Smirnov and W. Werner. Critical exponents for two-dimensional percolation.
Mathematical Research Letters 8, 729–744, 2001.
[S09a]
A.-S. Sznitman. Random walks on discrete cylinders and random interlacements. Probab. Theory Relat. Fields 145, 143–174, 2009.
[S09b]
A.-S. Sznitman. Upper bound on the disconnection time of discrete cylinders and random interlacements. Ann. Probab. 37(5), 1715–1746, 2009.
[S10]
A.-S. Sznitman. Vacant set of random interlacements and percolation. Ann. of Math.
(2), 171, 2039–2087, 2010.
[Sz11a]
A.-S. Sznitman. A lower bound on the critical parameter of interlacement percolation
in high dimension. Probab. Theory Relat. Fields, 150, 575–611, 2011.
[Sz11b]
A.-S. Sznitman. On the critical parameter of interlacement percolation in high dimension. Ann. Probab. 39(1), 70–103, 2011.
[S12a]
A.-S. Sznitman. Decoupling inequalities and interlacement percolation on G × Z. Invent. Math. 187(3), 645–706, 2012.
[S12b]
A.-S. Sznitman. Random interlacements and the Gaussian free field. Ann. Probab. 40(6), 2400–2438, 2012.
[S12c]
A.-S. Sznitman. An isomorphism theorem for random interlacements. Electron.
Commun. Probab., 17, 9, 1–9, 2012.
[S12d]
A.-S. Sznitman. Topics in occupation times and Gaussian free fields. European
Mathematical Society, Zürich, 2012.
[S13]
A.-S. Sznitman. On scaling limits and Brownian interlacements. Bull. Braz. Math.
Soc., New Series 44(4), 555-592, 2013. Special issue of the Bulletin of the Brazilian
Mathematical Society-IMPA 60 years.
[T09a]
A. Teixeira. On the uniqueness of the infinite cluster of the vacant set of random
interlacements. Ann. Appl. Probab. 19(1), 454–466, 2009.
[T09b]
A. Teixeira. Interlacement percolation on transient weighted graphs. Electron. J. Probab. 14, 1604–1627, 2009.
[T11]
A. Teixeira. On the size of a finite vacant cluster of random interlacements with
small intensity. Probab. Theory and Rel. Fields 150(3-4), 529–574, 2011.
[TW11]
A. Teixeira and D. Windisch. On the fragmentation of a torus by random walk.
Communications on Pure and Applied Mathematics, LXIV, 1599-1646, 2011.
[W08]
D. Windisch. Random walk on a discrete torus and random interlacements. Electron.
Commun. Probab., 13, 140–150, 2008.