Uniform spanning forests and the bi-Laplacian Gaussian field

Gregory F. Lawler∗ (University of Chicago)
Xin Sun (MIT)
Wei Wu (New York University)
Abstract
For N = n(log n)^{1/4}, on the ball in Z^4 with radius N we construct
a ±1 spin model coupled with the wired spanning forest. We show
that as n → ∞ the spin field has the bi-Laplacian Gaussian field on R^4 as
its scaling limit. For d ≥ 5, the model can be defined on Z^d directly
without the N -cutoff. The same scaling limit result holds. Our proof
is based on Wilson’s algorithm and a fine analysis of intersections of
simple random walks and loop erased walks. In particular, we improve
results on the intersection of a random walk with a loop-erased walk
in four dimensions. To our knowledge, this is the first natural discrete
model (besides the discrete bi-Laplacian Gaussian field) that has been
proved to converge to the bi-Laplacian Gaussian field.
Acknowledgment: We are grateful to Scott Sheffield for motivating this
problem and helpful discussions. We thank the Isaac Newton Institute for
Mathematical Sciences, Cambridge, for support and hospitality during the
programme Random Geometry where part of this project was undertaken.
1 Introduction
This paper connects two natural objects that have four as the critical dimension: the bi-Laplacian Gaussian field and uniform spanning trees (UST)
(and the associated loop-erased random walk (LERW)). We will construct a
∗Research supported by National Science Foundation grant DMS-1513036.
sequence of random fields on the integer lattice Z^d (d ≥ 4) using LERW and
UST and show that they converge in distribution to the bi-Laplacian field.
We now describe the construction. For each positive integer n, let
N = N_n = n(log n)^{1/4}, and let A_N = {x ∈ Z^d : |x| ≤ N}. We will construct
a ±1 valued random field on AN as follows. Recall that a wired spanning
tree on AN is a tree on the graph AN ∪ {∂AN } where we have viewed the
boundary ∂AN as “wired” to a single point. Such a tree produces a spanning
forest on AN by removing the edges connected to ∂AN . We define the uniform
spanning forest (USF) on AN to be the forest obtained by choosing the wired
spanning tree of AN ∪{∂AN } from the uniform distribution. (Note this is not
the same thing as choosing a spanning forest uniformly among all spanning
forests of AN .) The field is then defined as follows. Let an be a sequence of
positive numbers (we will be more precise later).
• Choose a USF on AN . This partitions AN into (connected) components.
• For each component of the forest, flip a fair coin and assign each vertex in
the component the value 1 or −1 based on the outcome. This gives a field
of spins {Y_{x,n} : x ∈ A_N}. If we wish, we can extend this to a field on
x ∈ Z^d by setting Y_{x,n} = 1 for x ∉ A_N.
• Let φ_n(x) = a_n Y_{nx,n}, which is a field defined on L_n := n^{-1} Z^d.
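In code, the coin-flipping step can be sketched as follows. This is only an illustrative sketch: the input `components` (a map from vertices to the labels of their forest components) and the toy partition below are hypothetical stand-ins for the result of sampling the USF.

```python
import random

def spin_field(components, rng):
    """Flip one fair +-1 coin per forest component and give every
    vertex the sign of the component containing it."""
    sign = {c: rng.choice((1, -1)) for c in set(components.values())}
    return {v: sign[components[v]] for v in components}

rng = random.Random(0)
# hypothetical toy partition of five sites into two components
components = {(0, 0): 0, (1, 0): 0, (0, 1): 1, (1, 1): 1, (2, 1): 1}
Y = spin_field(components, rng)
```

The field φ_n is then a_n times this ±1 field, read on the rescaled lattice n^{-1}Z^d.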
This random field is constructed in a manner similar to the Edwards–Sokal
coupling of the FK–Ising model [7]. The content of our main theorem, which
we now state, is that for d ≥ 4 we can choose a_n so that φ_n converges to the
bi-Laplacian Gaussian field on R^d. If h ∈ C_0^∞(R^d), we write
$$\langle h, \phi_n \rangle = n^{-d} \sum_{x\in L_n} h(x)\,\phi_n(x).$$
Theorem 1.1.
• If d ≥ 5, there exists a such that if a_n = a n^{(d−4)/2}, then for every
h_1, ..., h_m ∈ C_0^∞(R^d), the random variables ⟨h_j, φ_n⟩ converge in
distribution to centered jointly Gaussian random variables with covariance
$$\int\!\!\int h_j(z)\,h_k(w)\,|z-w|^{4-d}\,dz\,dw.$$
• If d = 4 and a_n = 1/\sqrt{3\log n}, then for every h_1, ..., h_m ∈ C_0^∞(R^d) with
$$\int h_j(z)\,dz = 0, \qquad j = 1,\dots,m,$$
the random variables ⟨h_j, φ_n⟩ converge in distribution to centered jointly
Gaussian random variables with covariance
$$-\int\!\!\int h_j(z)\,h_k(w)\,\log|z-w|\,dz\,dw.$$
Remark 1.2.
• For d = 4, we could choose the cutoff N = n(log n)^α for any α > 0.
For d > 4, we could do the same construction with no cutoff (N = ∞)
and get the same result. For convenience, we choose a specific value of α.
• We will prove the result for m = 1, h_1 = h. The general result follows
by applying the m = 1 result to linear combinations of h_1, ..., h_m.
• We will do the proof only for d = 4; the d > 4 case is similar but
easier (see [22] for the d ≥ 5 case done in detail). In the d = 4 proof,
we will have a slightly different definition of a_n, but we will show that
a_n ∼ 1/\sqrt{3\log n}.
Our main result relies strongly on the fact that the scaling limit of the
loop-erased random walk in four or more dimensions is Brownian motion, that
is, that it has Gaussian limits [15, 14]. Although we do not use the results in
those papers explicitly, they are used implicitly in our calculations, where the
probability that a random walk and a loop-erased walk avoid each other is
comparable to that of two simple random walks and can be given by the expected
number of intersections times a non-random quantity. This expected
number of intersections is the discrete biharmonic function that gives the
covariance structure of the bi-Laplacian random field.
Gaussian fluctuations have been observed and studied for numerous physical
systems. Many statistical physics models with long-range correlations are
known to converge to the Gaussian free field; typical examples come from
domino tilings [9], random matrix theory [3, 19], and random growth models
[6]. Our model can be viewed as an analogue of critical statistical mechanics
models in four dimensions, in the sense that the correlation functions are scale
invariant.
The discrete bi-Laplacian Gaussian field (known in the physics literature as
the membrane model) is studied in [20, 10, 11]; its continuous counterpart is
the bi-Laplacian Gaussian field. Our model can be viewed as another natural
discrete object that converges to the bi-Laplacian Gaussian field. In the
one-dimensional case, Hammond and Sheffield constructed a reinforced random
walk with long-range memory [8], which can be associated with a spanning
forest attached to Z. Our construction can also be viewed as a higher-dimensional
analogue of such "forest random walks".
1.1 Uniform spanning trees
Here we review some facts about the uniform spanning forest (that is, wired
spanning trees) on finite subsets of Z^d and about loop-erased random walks
(LERW) on Z^d. Most of the facts extend to general graphs as well. For more
details, see [16, Chapter 9].
Given a finite subset A ⊂ Z^d, the uniform wired spanning tree in A is
a subgraph of the graph A ∪ {∂A} chosen uniformly at random among all
spanning trees of A ∪ {∂A}. (A spanning tree T is a subgraph such that any
two vertices in T are connected by a unique simple path in T.) We define
the uniform spanning forest (USF) on A to be the uniform wired spanning
tree restricted to the edges in A. One can also consider the uniform spanning
forest on all of Z^d [18, 2], but we will not need this construction.
The uniform wired spanning tree, and hence the USF, on A can be generated
by Wilson's algorithm [23], which we recall here:
• Order the elements of A = {x_1, ..., x_k}.
• Start a simple random walk at x_1 and stop it when it reaches ∂A, giving
a nearest neighbor path ω. Erase the loops chronologically to produce
a self-avoiding path η = LE(ω). Add all the edges of η to the tree;
this gives a tree T_1 on a subset of A ∪ {∂A} that includes ∂A.
• Choose the vertex of smallest index that has not yet been included and run
a simple random walk until it reaches a vertex in T_1. Erase the loops
and add the new edges to the tree, giving a tree T_2.
• Continue until all vertices are included in the tree.
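A minimal Python sketch of the algorithm above follows. The small radius and d = 2 are arbitrary choices so the example runs quickly; nothing about the code beyond the algorithm itself is taken from the paper.

```python
import itertools
import random

def loop_erase(path):
    """Chronological loop erasure: when a vertex is revisited, discard
    the loop made since its first visit."""
    erased = []
    for v in path:
        if v in erased:
            erased = erased[:erased.index(v) + 1]
        else:
            erased.append(v)
    return erased

def wilson_wired_tree(radius, d, rng):
    """Wired uniform spanning tree of A = {x in Z^d : |x| <= radius} via
    Wilson's algorithm, with everything outside A wired to a single root."""
    inside = lambda x: sum(c * c for c in x) <= radius * radius
    vertices = [x for x in itertools.product(range(-radius, radius + 1), repeat=d)
                if inside(x)]
    in_tree = set()   # vertices of A already attached to the tree
    edges = []
    for start in vertices:
        if start in in_tree:
            continue
        path = [start]
        # random walk until it exits A (hits the wired root) or hits the tree
        while inside(path[-1]) and path[-1] not in in_tree:
            x = path[-1]
            i, s = rng.randrange(d), rng.choice((-1, 1))
            path.append(tuple(c + s if j == i else c for j, c in enumerate(x)))
        branch = loop_erase(path)
        edges.extend(zip(branch[:-1], branch[1:]))
        in_tree.update(v for v in branch if inside(v))
    return edges

rng = random.Random(1)
edges = wilson_wired_tree(3, 2, rng)
```

Discarding the edges whose far endpoint lies outside the ball leaves the wired spanning forest on A.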
Wilson’s theorem states that the distribution of the tree is independent of
the order in which the vertices were chosen and is uniform chosen among
all spanning trees. In particular we get the following. Suppose that at each
x ∈ A, we have an independent simple random walk path ω x from x stopped
when it reaches ∂A.
• If x, y ∈ A, then the probability that x, y are in the same component
of the USF is the same as
P{LE(ω x ) ∩ ω y = ∅}.
(1)
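This characterization can be checked by direct simulation. The sketch below estimates the probability that LE(ω^x) and ω^y share an interior site, on a small two-dimensional ball; the dimension, radius, and trial count are arbitrary values chosen to keep the example fast, not values from the paper. Intersections are counted only at interior sites, since paths meeting at the wired boundary connect only through the root.

```python
import random

def loop_erase(path):
    """Chronological loop erasure of a path."""
    erased = []
    for v in path:
        if v in erased:
            erased = erased[:erased.index(v) + 1]
        else:
            erased.append(v)
    return erased

def walk_to_exit(start, radius, rng):
    """Simple random walk from start, stopped at the first site outside the ball."""
    path = [start]
    while sum(c * c for c in path[-1]) <= radius * radius:
        x = path[-1]
        i, s = rng.randrange(len(x)), rng.choice((-1, 1))
        path.append(tuple(c + s if j == i else c for j, c in enumerate(x)))
    return path

def same_component_estimate(x, y, radius, trials, rng):
    """Monte Carlo estimate of the probability that LE(w^x) and w^y
    share an interior site of the ball."""
    hits = 0
    for _ in range(trials):
        eta = set(loop_erase(walk_to_exit(x, radius, rng))[:-1])   # drop exit site
        omega = set(walk_to_exit(y, radius, rng)[:-1])
        if eta & omega:
            hits += 1
    return hits / trials

rng = random.Random(3)
p = same_component_estimate((0, 0), (1, 0), 6, 400, rng)
```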
Using this characterization, we can see the three regimes for the dimension
d. Let us first consider the probabilities that neighboring points are in the
same component. Let q_N be the probability that a fixed nearest neighbor of the
origin is in a different component than the origin. Then
$$q_\infty := \lim_{N\to\infty} q_N > 0, \quad d\ge 5; \qquad q_N \asymp (\log N)^{-1/3}, \quad d = 4.$$
For d < 4, q_N decays like a power of N.
• If d > 4 and |x| = n, the probability that 0 and x are in the same
component is comparable to |x|^{4−d}. This is true even if N = ∞.
• If d = 4 and |x| = n, the probability that 0 and x are in the same
component is comparable to 1/log n. However, if we chose N = ∞,
the probability would equal one.
The last fact can be used to show that the uniform spanning forest on all of
Z^4 is, in fact, a tree. For d < 4, the probability that 0 and x are in the
same component is asymptotic to 1 and our construction is not interesting.
This is why we restrict to d ≥ 4.
1.2 Intersection probabilities for loop-erased walk
The hardest part of the analysis of this model is estimation of the intersection
probability (1) for d = 4. We will review what is known and then state the
new results in this paper. We will be somewhat informal here; see Section 2
for precise statements.
Let ω denote a simple random walk path starting at the origin and let
η = LE(ω) denote its loop erasure. We can write
$$\eta = \{0\}\cup\eta^1\cup\eta^2\cup\cdots,$$
where η^j denotes the path from the first visit to {|z| ≥ e^{j−1}} stopped at the
step immediately before the first visit to {|z| ≥ e^j}. Let S be an independent
simple random walk starting at the origin, write S for S[1,∞), and let us
define
$$Z_n = Z_n(\eta) = \mathbf{P}\left\{S\cap(\{0\}\cup\eta^1\cup\cdots\cup\eta^n) = \emptyset \mid \eta\right\},$$
$$\overline Z_n = \overline Z_n(\eta) = \mathbf{P}\left\{S\cap(\eta^1\cup\cdots\cup\eta^n) = \emptyset \mid \eta\right\},$$
$$\Delta_n = \Delta_n(\eta) = \mathbf{P}\{S\cap\eta^n \ne \emptyset \mid \eta\}.$$
With probability one, Z_∞ = 0, and hence E[Z_n] → 0 as n → ∞. In [14],
building on the work on slowly recurrent sets [13], it was shown that for
integers r, s ≥ 0,
$$p_{n,r,s} := \mathbf{E}\left[Z_n^r\,\overline Z_n^{\,s}\right] \asymp n^{-(r+s)/3},$$
where ≍ means that the ratio of the two sides is bounded away from 0 and
∞ (with constants depending on r and s). The proof uses the "slowly recurrent"
nature of the paths to derive the relation
$$p_{n,r,s} \sim p_{n-1,r,s}\left(1 - (r+s)\,\mathbf{E}[\Delta_n]\right). \qquad (2)$$
We improve on this result by establishing the asymptotic result
$$\lim_{n\to\infty} n^{(r+s)/3}\,p_{n,r,s} = c_{r,s}\in(0,\infty).$$
The proof does not compute the constant c_{r,s} except in one special case,
r = 2, s = 1. Let p̂_n = p_{n,2,1}. We show that
$$\hat p_n \sim \frac{\pi^2}{24n}.$$
This particular value of r, s comes from a path decomposition used in trying
to estimate
$$\mathbf{P}\{S\cap\eta^n\ne\emptyset\} = \mathbf{E}[\Delta_n].$$
On the event {S ∩ η^n ≠ ∅} there are many intersections of the paths. By
focusing on a particular one, using a generalization of a first passage decomposition,
it turns out that
$$\mathbf{P}\{S\cap\eta^n\ne\emptyset\} \sim \hat G_n^2\,\hat p_n,$$
where Ĝ_n^2 denotes the expected number of intersections of S with a simple
random walk starting on the sphere of radius e^{n−1} stopped at the sphere of
radius e^n. This expectation can be estimated well, giving Ĝ_n^2 ∼ 8/π^2, and
hence we get the relation
$$\mathbf{P}\{S\cap\eta^n\ne\emptyset\} \sim \frac{8}{\pi^2}\,\hat p_n.$$
Combining this with (2), we get a consistency relation,
$$\hat p_n \sim \hat p_{n-1}\left(1 - \frac{24}{\pi^2}\,\hat p_n\right),$$
which leads to n p̂_n ∼ π²/24 and
$$\mathbf{P}\{S\cap\eta^n\ne\emptyset\} \sim \frac{1}{3n}.$$
From this we also get the asymptotic value of (1),
$$\mathbf{P}\{\mathrm{LE}(\omega^x)\cap\omega^y \ne \emptyset\} \sim \frac{1}{3\log N}.$$

1.3 Estimates for Green's functions
Let G_N(x,y) denote the usual random walk Green's function on A_N, and let
$$G^2(x,y) = \sum_{z\in\mathbb{Z}^d} G(x,z)\,G(z,y), \qquad G_N^2(x,y) = \sum_{z\in A_N} G_N(x,z)\,G_N(z,y), \qquad \tilde G_N^2(x,y) = \sum_{z\in A_N} G_N(x,z)\,G(z,y).$$
Note that
$$G_N^2(x,y) \le \tilde G_N^2(x,y) \le G^2(x,y).$$
Also, G^2(x,y) < ∞ if and only if d > 4.
The Green’s function G(x, y) is well understood for d > 2. We will use
the following asymptotic estimate [16, Theorem 4.3.1]
7
d Γ(d/2)
.
(3)
(d − 2) π d/2
(Here and throughout we use the convention that if we say that a function
on Zd is O(|x|)−r ) with r > 0, we still imply that it is finite at every point.
In other words, for lattice functions, O(|x|−r ) really means O(1 ∧ |x|−r ). We
do not make this assumption for functions on Rd which could blow up at the
origin.) It follows immediately, and will be more convenience for us, that
G(x) = Cd Id (x) 1 + O(|x|−2 ) ,
(4)
G(x) = Cd |x|2−d + O(|x|−d ),
where we write
where Cd =
dd ζ
,
|ζ − x|d−2
V
and V = Vd = {(x1 , . . . , xd ) ∈ Rd : |xj | ≤ 1/2} denotes the d-dimensional
cube of side length one centered at the origin. One aesthetic advantage is
that Id (0) is not equal to infinity.
Z
Id (x) =
Proposition 1.3. If d ≥ 5, there exists c_d such that
$$G^2(x,y) = c_d\,|x-y|^{4-d} + O\left(|x-y|^{2-d}\right).$$
Moreover, there exists c < ∞ such that if |x|, |y| ≤ N/2, then
$$G^2(x,y) - G_N^2(x,y) \le c\,N^{4-d}.$$
We note that it follows immediately that
$$G^2(x,y) = c_d\,J_d(x-y)\left[1 + O\left(|x-y|^{-2}\right)\right],$$
where
$$J_d(x) = \int_V \frac{d\zeta}{|\zeta-x|^{d-4}}.$$
The second assertion shows that it does not matter in the limit whether we
use G^2(x,y) or G_N^2(x,y) as our covariance.
Proof. We may assume y = 0. We use (3) to write
$$G^2(0,x) = C_d^2\sum_{z\in\mathbb{Z}^d} I_d(z)\,I_d(z-x)\left[1 + O(|z|^{-2}) + O(|z-x|^{-2})\right]$$
$$= C_d^2\int\!\!\int \frac{d\zeta_1\,d\zeta_2}{|\zeta_1|^{d-2}\,|\zeta_2-x|^{d-2}} + O(|x|^{2-d})$$
$$= C_d^2\,|x|^{4-d}\int\!\!\int\frac{d\zeta_1\,d\zeta_2}{|\zeta_1|^{d-2}\,|\zeta_2-u|^{d-2}} + O(|x|^{2-d}),$$
where u denotes a unit vector in R^d. This gives the first expression with
$$c_d = C_d^2\int\!\!\int\frac{d\zeta_1\,d\zeta_2}{|\zeta_1|^{d-2}\,|\zeta_2-u|^{d-2}}.$$
For the second assertion, let S^x, S^y be independent simple random walks
starting at x, y, stopped at times T^x, T^y, the first times that they reach ∂A_N.
Then,
$$G^2(x,y) - G_N^2(x,y) \le \mathbf{E}\left[G^2(S^x(T^x), y)\right] + \mathbf{E}\left[G^2(x, S^y(T^y))\right],$$
and we can apply the first result to the right-hand side.
Lemma 1.4. If d = 4, there exists c_0 ∈ R such that if |x|, |y|, |x−y| ≤ N/2,
then
$$G_N^2(x,y) = \frac{8}{\pi^2}\,\log\frac{N}{|x-y|} + c_0 + O\left(\frac{|x|+|y|+1}{N}\right) + O\left(\frac{1}{|x-y|}\right).$$
Proof. Using the martingale M_t = |S_t|^2 − t, we see that
$$\sum_{w\in A_N} G_N(x,w) = N^2 - |x|^2 + O(N). \qquad (5)$$
Using (4), we see that
$$\sum_{w\in A_N} G(0,w) = O(N) + \frac{2}{\pi^2}\int_{|s|\le N}\frac{ds}{|s|^2} = 2N^2 + O(N).$$
Let δ = N^{-1}[1 + |x| + |y|] and note that |x−y| ≤ δN. Since
$$\sum_{|w|<N(1-\delta)} G_N(x-y,w)\,G_N(0,w) \;\le\; \sum_{w\in A_N} G_N(x,w)\,G_N(y,w) \;\le\; \sum_{|w|\le N(1+\delta)} G_N(x-y,w)\,G_N(0,w),$$
it suffices to prove the result when y = 0, in which case δ = (1+|x|)/N.
Recall that
$$\min_{N\le|z|\le N+1} G(x,z) \;\le\; G(x,w) - G_N(x,w) \;\le\; \max_{N\le|z|\le N+1} G(x,z).$$
In particular,
$$G_N(0,w) = G(0,w) - \frac{2}{\pi^2 N^2} + O(N^{-3}), \qquad G_N(x,w) = G(x,w) - \frac{2}{\pi^2 N^2} + O(\delta N^{-2}).$$
Using (5), we see that
$$\sum_{w\in A_N} G_N(x,w)\,G_N(0,w) = \sum_{w\in A_N}\left[G(x,w) - \frac{2}{\pi^2 N^2} + O(\delta N^{-2})\right]G_N(0,w) = O(\delta) - \frac{2}{\pi^2} + \sum_{w\in A_N} G(x,w)\,G_N(0,w).$$
Similarly, we write
$$\sum_{w\in A_N} G(x,w)\,G_N(0,w) = \sum_{w\in A_N} G(x,w)\left[G(0,w) - \frac{2}{\pi^2 N^2} + O(N^{-3})\right],$$
and use (5) to see that
$$\sum_{w\in A_N} G(x,w)\,G_N(0,w) = -\frac{4}{\pi^2} + O(\delta) + \sum_{w\in A_N} G(x,w)\,G(0,w).$$
Hence it suffices to show that there exists ĉ such that
$$\sum_{w\in A_N} G(x,w)\,G(0,w) = \frac{8}{\pi^2}\,\log\frac{N}{|x|} + \hat c + O(|x|^{-1}) + O(\delta).$$
Define ε(x,w) by
$$\epsilon(x,w) = G(x,w)\,G(0,w) - \left(\frac{2}{\pi^2}\right)^2\int_{S_w}\frac{ds}{|x-s|^2\,|s|^2},$$
where S_w = w + V is the unit cube centered at w. The estimate for the
Green's function implies that
$$|\epsilon(x,w)| \le \begin{cases} c\,|w|^{-3}\,|x|^{-2}, & |w|\le|x|/2,\\ c\,|w|^{-2}\,|x-w|^{-3}, & |x-w|\le|x|/2,\\ c\,|w|^{-5}, & \text{other } w,\end{cases}$$
and hence,
$$\sum_{w\in A_N} G(x,w)\,G(0,w) = O(|x|^{-1}) + \left(\frac{2}{\pi^2}\right)^2\sum_{w\in A_N}\int_{S_w}\frac{ds}{|x-s|^2\,|s|^2} = O(|x|^{-1}) + \left(\frac{2}{\pi^2}\right)^2\int_{|s|\le N}\frac{ds}{|x-s|^2\,|s|^2}.$$
(The second equality has an error term of O(N^{-1}), but this is smaller than
the O(|x|^{-1}) term already there.) It is straightforward to check that there
exists c' such that
$$\int_{|s|\le N}\frac{d^4s}{|x-s|^2\,|s|^2} = 2\pi^2\,\log\frac{N}{|x|} + c' + O\left(\frac{|x|}{N}\right).$$
1.4 Proof of Theorem 1.1
Here we give the proof of the theorem, leaving the proof of the most important
lemma to Section 2. The proofs for d = 4 and d > 4 are similar, with a
few added difficulties for d = 4; we will only consider the d = 4 case here.
We fix h ∈ C_0^∞ with ∫ h = 0 and allow implicit constants to depend on h.
We will write just Y_x for Y_{x,n}. Let K be such that h(x) = 0 for |x| ≥ K.
Let us write L_n = n^{-1}Z^4 ∩ {|x| ≤ K} and
$$\langle h,\phi_n\rangle = n^{-4}\,a_n\sum_{x\in L_n} h(x)\,Y_{nx} = n^{-4}\,a_n\sum_{nx\in A_{nK}} h(x)\,Y_{nx},$$
where a_n is still to be specified. We will now give its value. Let ω be a
simple random walk from 0 to ∂A_N in A_N and let η = LE(ω) be its loop
erasure. Let ω_1 be another simple random walk starting at the origin stopped
when it reaches ∂A_N, and let ω_1' be ω_1 with the initial step removed. Let
$$u(\eta) = \mathbf{P}\{\omega_1\cap\eta = \{0\}\mid\eta\}, \qquad \tilde u(\eta) = \mathbf{P}\{\omega_1'\cap\eta = \emptyset\mid\eta\}.$$
We define
$$p_n = \mathbf{E}\left[u(\eta)\,\tilde u(\eta)^2\right].$$
In [] it was shown that p_n ≍ 1/log n if d = 4. One of the main goals of
Section 2 is to give the following improvement.
Lemma 1.5. If d = 4 and p_n is defined as above, then
$$p_n \sim \frac{\pi^2}{24\log n}.$$
We can write the right-hand side as
$$\frac{1}{(8/\pi^2)}\cdot\frac{1}{3\log n},$$
where 8/π² is the nonuniversal constant in Lemma 1.4 and 1/3 is a universal
constant for loop-erased walks. If d ≥ 5, then p_n ∼ c n^{4−d}. We do not prove
this here.
For d = 4, we define
$$a_n = \sqrt{\frac{8}{\pi^2}\,p_n} \sim \frac{1}{\sqrt{3\log n}}.$$
Let δ_n = exp{−(log log n)^2}; this is a function that decays faster than
any power of log n. Let L_{n,∗}^k be the set of x̄ = (x_1,...,x_k) ∈ L_n^k such that
|x_j| ≤ K for all j and |x_i − x_j| ≥ δ_n for each i ≠ j. Note that #(L_n^k) ≍ n^{4k}
and #(L_n^k ∖ L_{n,∗}^k) ≤ c k^2 n^{4k} δ_n. In particular,
$$n^{-8k}\,a_n^{2k}\sum_{\bar x\in L_n^{2k}\setminus L_{n,*}^{2k}} h(x_1)\,h(x_2)\cdots h(x_{2k}) = o_k\left(\sqrt{\delta_n}\right).$$
Let q_N(x,y) be the probability that x, y are in the same component of
the USF on A_N, and note that
$$\mathbf{E}[Y_{x,n}\,Y_{y,n}] = q_N(x,y), \qquad \mathbf{E}\left[\langle h,\phi_n\rangle^2\right] = n^{-8}\sum_{x,y\in L_n} h(x)\,h(y)\,a_n^2\,q_N(nx,ny).$$
An upper bound for q_N(x,y) can be given in terms of the probability that
the paths of two independent random walks starting at x, y, stopped when
they leave A_N, intersect. This gives
$$q_N(x,y) \le \frac{c\,\log[N/|x-y|]}{\log N}.$$
In particular,
$$q_N(x,y) \le c\,\frac{(\log\log n)^2}{\log n}, \qquad |x-y| \ge n\,\delta_n. \qquad (6)$$
To estimate E[⟨h,φ_n⟩²] we will use the following lemma, whose proof
will be delayed.
Lemma 1.6. There exists a sequence r_n with r_n ≤ O(log log n) and a sequence
ε_n ↓ 0 such that if x, y ∈ L_n with |x − y| ≥ 1/√n, then
$$\left|a_n^2\,q_N(nx,ny) - r_n + \log|x-y|\right| \le \epsilon_n.$$
It follows from the lemma, (6), and the trivial inequality q_N ≤ 1 that
$$\mathbf{E}\left[\langle h,\phi_n\rangle^2\right] = o(1) + n^{-2d}\sum_{x,y\in L_n}\left[r_n - \log|x-y|\right]h(x)\,h(y) = o(1) - n^{-2d}\sum_{x,y\in L_n}\log|x-y|\;h(x)\,h(y)$$
$$= o(1) - \int\!\!\int h(x)\,h(y)\,\log|x-y|\,dx\,dy,$$
which shows that the second moment has the correct limit. The second
equality uses ∫ h = 0 to conclude that
$$\frac{r_n}{n^{2d}}\sum_{x,y\in L_n} h(x)\,h(y) = o(1).$$
We now consider the higher moments. It is immediate from the construction
that the odd moments of ⟨h,φ_n⟩ are identically zero, so it suffices to
consider the even moments E[⟨h,φ_n⟩^{2k}]. We fix k ≥ 2 and allow
implicit constants to depend on k as well. Let L_* = L_{n,*}^{2k} be the set of
x̄ = (x_1,...,x_{2k}) ∈ L_n^{2k} such that |x_j| ≤ K for all j and |x_i − x_j| ≥ δ_n for each
i ≠ j. We write h(x̄) = h(x_1)⋯h(x_{2k}). Then we see that
$$\mathbf{E}\left[\langle h,\phi_n\rangle^{2k}\right] = n^{-2kd}\,a_n^{2k}\sum_{\bar x\in L_n^{2k}} h(\bar x)\,\mathbf{E}\left[Y_{nx_1}\cdots Y_{nx_{2k}}\right] = O\left(\sqrt{\delta_n}\right) + n^{-2kd}\,a_n^{2k}\sum_{\bar x\in L_*} h(\bar x)\,\mathbf{E}\left[Y_{nx_1}\cdots Y_{nx_{2k}}\right].$$
Lemma 1.7. For each k, there exists c < ∞ such that the following holds.
Suppose x̄ ∈ L_{n,*}^{2k}, and let ω^1,...,ω^{2k} be independent simple random walks
started at nx_1,...,nx_{2k} and stopped when they reach ∂A_N. Let N denote the
number of integers j ∈ {2,3,...,2k} such that
$$\omega^j\cap\left(\omega^1\cup\cdots\cup\omega^{j-1}\right)\ne\emptyset.$$
Then,
$$\mathbf{P}\{N\ge k+1\} \le c\left(\frac{(\log\log n)^3}{\log n}\right)^{k+1}.$$
We write y_j = nx_j and write Y_j for Y_{y_j}. To calculate E[Y_1⋯Y_{2k}], we first
sample our USF, which gives a random partition P of {y_1,...,y_{2k}}. Note that
E[Y_1⋯Y_{2k} | P] equals 1 if P is an "even" partition in the sense that each
set has an even number of elements; otherwise, E[Y_1⋯Y_{2k} | P] = 0. Any
even partition other than a partition into k sets of cardinality 2 will have
N ≥ k+1. Hence
$$\mathbf{E}[Y_1\cdots Y_{2k}] = O\left(\left(\frac{(\log\log n)^3}{\log n}\right)^{k+1}\right) + \sum \mathbf{P}(\mathcal P_{\bar y}),$$
where the sum is over the (2k−1)!! perfect matchings of {1,2,...,2k} and
P(P_ȳ) denotes the probability of obtaining this matching for the USF for the
vertices y_1,...,y_{2k}.
Let us consider one of these perfect matchings, which for convenience we
will assume is y_1 ↔ y_2, y_3 ↔ y_4, ..., y_{2k−1} ↔ y_{2k}. We claim that
$$\mathbf{P}(y_1\leftrightarrow y_2,\,y_3\leftrightarrow y_4,\dots,y_{2k-1}\leftrightarrow y_{2k}) = O\left(\left(\frac{(\log\log n)^3}{\log n}\right)^{k+1}\right) + \mathbf{P}(y_1\leftrightarrow y_2)\,\mathbf{P}(y_3\leftrightarrow y_4)\cdots\mathbf{P}(y_{2k-1}\leftrightarrow y_{2k}).$$
Indeed, this is just inclusion–exclusion using our estimate on P{N ≥ k+1}.
If we write ε_n = ε_{n,k} = (log log n)^{3(k+1)}/(log n)^{k+1}, we now see from symmetry
that
$$\mathbf{E}\left[\langle h,\phi_n\rangle^{2k}\right] = O(\epsilon_n) + n^{-2kd}\,a_n^{2k}\,(2k-1)!!\sum_{\bar x\in L_*} h(\bar x)\,\mathbf{P}\{nx_1\leftrightarrow nx_2,\dots,nx_{2k-1}\leftrightarrow nx_{2k}\} = O(\epsilon_n) + (2k-1)!!\left(\mathbf{E}\left[\langle h,\phi_n\rangle^2\right]\right)^k.$$
Since (2k−1)!! σ^{2k} are the even moments of a centered Gaussian with variance
σ², the theorem follows by the method of moments.
1.5 Proof of Lemma 1.7
Here we fix k and let y_1,...,y_{2k} be points with |y_j| ≤ Kn and |y_i − y_j| ≥ nδ_n,
where we recall log δ_n = −(log log n)^2. Let ω^1,...,ω^{2k} be independent simple
random walks, ω^j starting at y_j, stopped when they reach ∂A_N. We let E_{i,j} denote
the event that ω^i ∩ ω^j ≠ ∅, and let R_{i,j} = P(E_{i,j} | ω^j).
Lemma 1.8. There exists c < ∞ such that for all i ≠ j and all n sufficiently
large,
$$\mathbf{P}\left\{R_{i,j} \ge c\,\frac{(\log\log n)^3}{\log n}\right\} \le \frac{1}{(\log n)^{4k}}.$$
Proof. We know that there exists c < ∞ such that if |y−z| ≥ nδ_n^2, then
the probability that simple random walks starting at y and z, stopped when they
reach ∂A_N, intersect is at most c(log log n)^2/log n. Hence there exists c_1 such
that
$$\mathbf{P}\left\{R_{i,j} \le \frac{c_1\,(\log\log n)^2}{\log n}\right\} \ge \frac12. \qquad (7)$$
Start a random walk at z and run it until one of three things happens:
• it reaches ∂A_N;
• it gets within distance n δ_n^2 of y;
• the path is such that the probability that a simple random walk
starting at y intersects the path before reaching ∂A_N is greater than
c_1(log log n)^2/log n.
If the third option occurs, then we restart the walk at the current site and
repeat the operation. Eventually one of the first two options occurs.
Suppose it takes r trials of this process until one of the first two events
occurs. Then either R_{i,j} ≤ r c_1 (log log n)^2/log n or the original path starting
at z gets within distance n δ_n^2 of y. The latter event occurs with probability
O(δ_n) = o((log n)^{-4k}). Also, using (7), we can see that the probability that it
takes at least r trials is bounded by (1/2)^r. By choosing r = c_2 log log n, we
can make this probability less than 1/(log n)^{4k}.
Proof of Lemma 1.7. Let R be the maximum of R_{i,j} over all i ≠ j in {1,...,2k}.
Then, at least for n sufficiently large,
$$\mathbf{P}\left\{R \ge c\,\frac{(\log\log n)^3}{\log n}\right\} \le \frac{1}{(\log n)^{3k}}.$$
Let
$$E_{\cdot,j} = \bigcup_{i=1}^{j-1} E_{i,j}.$$
On the event {R < c(log log n)^3/log n}, we have
$$\mathbf{P}\left(E_{\cdot,j}\mid\omega^1,\dots,\omega^{j-1}\right) \le \frac{c\,(j-1)\,(\log\log n)^3}{\log n}.$$
If N denotes the number of j for which E_{·,j} occurs, we see that
$$\mathbf{P}\{N\ge k+1\} \le c\left(\frac{(\log\log n)^3}{\log n}\right)^{k+1}.$$

2 Loop-erased walk in Z^4
The purpose of the remainder of this paper is to improve results about the
escape probability for loop-erased walk in four dimensions.
We start by setting up our notation. Let S be a simple random walk in Z^4
defined on the probability space (Ω, F, P), starting at the origin. Let G(x,y)
be the Green's function for simple random walk, which we recall satisfies
$$G(x) = \frac{2}{\pi^2\,|x|^2} + O(|x|^{-4}), \qquad |x|\to\infty.$$
Using this, we see that there exists a constant λ such that as r → ∞,
$$\sum_{|x|\le r} G(x)^2 = \frac{8}{\pi^2}\,\log r + \lambda + O(r^{-1}). \qquad (8)$$
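The constant 8/π² in (8) can be sanity-checked numerically from the leading term of G: summing (2/(π²|x|²))² over a lattice shell a < |x| ≤ b in Z⁴ should give approximately (8/π²) log(b/a). The radii below are arbitrary small values chosen only so this sketch runs quickly.

```python
import itertools
import math

def shell_sum(a, b):
    """Sum the leading term of G(x)^2, namely (2/(pi^2 |x|^2))^2,
    over lattice points a < |x| <= b in Z^4."""
    total = 0.0
    for x in itertools.product(range(-b, b + 1), repeat=4):
        r2 = x[0]**2 + x[1]**2 + x[2]**2 + x[3]**2
        if a * a < r2 <= b * b:
            total += (2.0 / (math.pi**2 * r2))**2
    return total

approx = shell_sum(5, 10)
predicted = (8.0 / math.pi**2) * math.log(2.0)   # (8/pi^2) log(b/a)
```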
It will be easiest to work on geometric scales, and we let C_n, A_n be the
discrete balls and annuli defined by
$$C_n = \{z\in\mathbb{Z}^4 : |z| < e^n\}, \qquad A_n = C_n\setminus C_{n-1} = \{z\in C_n : |z| \ge e^{n-1}\}.$$
Let
$$\sigma_n = \min\{j\ge 0 : S_j \notin C_n\},$$
and let F_n denote the σ-algebra generated by {S_j : j ≤ σ_n}. If V ⊂ Z^4, we
write
$$H(x,V) = \mathbf{P}^x\{S[0,\infty)\cap V\ne\emptyset\}, \qquad H(V) = H(0,V), \qquad \overline H(x,V) = \mathbf{P}^x\{S[1,\infty)\cap V\ne\emptyset\},$$
$$\mathrm{Es}(V) = 1 - H(V), \qquad \overline{\mathrm{Es}}(V) = 1 - \overline H(0,V).$$
Note that $\mathrm{Es}(V) = \overline{\mathrm{Es}}(V)$ if 0 ∉ V; otherwise, a standard last-exit decomposition
shows that
$$\overline{\mathrm{Es}}(V') = G_{\mathbb{Z}^4\setminus V'}(0,0)\;\mathrm{Es}(V),$$
where V' = V ∖ {0}.
We have to be a little careful about the definition of the loop erasure of
the random walk and the loop erasures of subpaths of the walk. We will use the
following notation.
• Ŝ[0,∞) denotes the (forward) loop erasure of S[0,∞), and
Γ = Ŝ[1,∞) = Ŝ[0,∞) ∖ {0}.
• ω_n denotes the finite random walk path S[σ_{n−1}, σ_n].
• η^n = LE(ω_n) denotes the loop erasure of S[σ_{n−1}, σ_n].
• Γ_n = LE(S[0,σ_n]) ∖ {0}; that is, Γ_n is the loop erasure of S[0,σ_n] with
the origin removed.
Note that S[1,∞) is the concatenation of the paths ω_1, ω_2, .... However, it
is not true that Γ is the concatenation of η^1, η^2, ..., and that is one of the
technical issues that must be addressed in the proof.
Let Y_n, Z_n be the F_n-measurable random variables
$$Y_n = H(\eta^n), \qquad Z_n = \overline{\mathrm{Es}}[\Gamma_n], \qquad G_n = G_{\mathbb{Z}^4\setminus\Gamma_n}(0,0).$$
As noted above,
$$\mathrm{Es}(\Gamma_n\cup\{0\}) = G_n^{-1}\,Z_n.$$
It is easy to see that 1 ≤ G_n ≤ 8, and using transience, we can see that with
probability one,
$$\lim_{n\to\infty} G_n = G_\infty := G_{\mathbb{Z}^4\setminus\Gamma}(0,0).$$
Theorem 2.1. For every 0 ≤ r, s < ∞, there exists c_{r,s} < ∞ such that
$$\mathbf{E}\left[Z_n^r\,G_n^{-s}\right] \sim \frac{c_{r,s}}{n^{r/3}}.$$
Moreover, c_{3,2} = π²/24.
Our methods do not compute the constant c_{r,s} except in the case
r = 3, s = 2. The proof of this theorem requires several steps, which we will
outline now. For the remainder of this paper we fix 0 < r ≤ r_0 and allow
constants to depend on r_0 but not otherwise on r. We write
$$p_n = \mathbf{E}[Z_n^r], \qquad \hat p_n = \mathbf{E}\left[Z_n^3\,G_n^{-2}\right].$$
Here are the steps, in order, with a reference to the section where each argument
appears. Let
$$\phi_n = \prod_{j=1}^n \exp\left\{-\mathbf{E}[H(\eta^j)]\right\}.$$
• Section 2.1. Show that E[H(η^n)] = O(n^{-1}), and hence
$$\phi_n = \phi_{n-1}\,\exp\{-\mathbf{E}[H(\eta^n)]\} = \phi_{n-1}\left[1 + O\left(\frac1n\right)\right].$$
• Section 2.2. Show that
$$p_n = p_{n-1}\left[1 + O\left(\frac{\log^4 n}{n}\right)\right]. \qquad (9)$$
• Section 2.4. Find a function φ̃_n such that there exists c̃_r with
$$p_{n^4} = \tilde c_r\,\tilde\phi_n\left[1 + O\left(\frac{\log^2 n}{n}\right)\right].$$
• Section 2.5. Find u, ĉ > 0 such that
$$\tilde\phi_n = \hat c\,\phi_{n^4}\left[1 + O(n^{-u})\right].$$
• Combining the previous estimates, we see that there exist c^0 = c^0_r and u such
that
$$p_n = c^0\,\phi_n\left[1 + O\left(\frac{\log^4 n}{n^{1/4}}\right)\right].$$
• Section 2.6. Show that there exist c^0_{r,s} and u such that
$$\mathbf{E}\left[Z_n^r\,G_n^{-s}\right] = c^0_{r,s}\,\phi_n\left[1 + O\left(\frac{\log^4 n}{n^{1/4}}\right)\right].$$
• Section 2.7. Use a path decomposition to show that
$$\mathbf{E}[H(\eta^n)] = \frac{8}{\pi^2}\,\hat p_n\left[1 + O(n^{-u})\right],$$
and combine the last two estimates to conclude that
$$\hat p_n \sim \frac{\pi^2}{24n}.$$
The error estimates are probably not optimal, but they suffice for proving
our theorem. We say that a sequence {ε_n} of positive numbers is fast decaying
if it decays faster than every power of n, that is,
$$\lim_{n\to\infty} n^k\,\epsilon_n = 0 \quad \text{for every } k>0.$$
We will write {ε_n} for fast decaying sequences. As is the convention for
constants, the exact value of ε_n may change from line to line. We will use
implicitly the fact that if {ε_n} is fast decaying, then so is {ε'_n}, where
$$\epsilon_n' = \sum_{m\ge n}\epsilon_m.$$
2.1 Preliminaries
In this subsection we prove some necessary lemmas about simple random walk
in Z4 . The reader could skip this section now and refer back as necessary.
We will use the following facts about intersections of random walks in Z4 ,
see [12, 13].
Proposition 2.2. There exist 0 < c_1 < c_2 < ∞ such that the following is
true. Suppose S is a simple random walk starting at the origin in Z^4 and
a > 2. Then
$$\frac{c_1}{\sqrt{\log n}} \le \mathbf{P}\{S[0,n]\cap S[n+1,\infty) = \emptyset\} \le \mathbf{P}\{S[0,n]\cap S[n+1,2n] = \emptyset\} \le \frac{c_2}{\sqrt{\log n}},$$
$$\mathbf{P}\left\{S[0,n]\cap S[n(1+a^{-1}),\infty)\ne\emptyset\right\} \le c_2\,\frac{\log a}{\log n}.$$
Moreover, if S^1 is an independent simple random walk starting at z ∈ Z^4,
$$\mathbf{P}\left\{S[0,n]\cap S^1[0,\infty)\ne\emptyset\right\} \le c_2\,\frac{\log\alpha}{\log n}, \qquad (10)$$
where
$$\alpha = \max\left\{2,\ \frac{\sqrt n}{|z|}\right\}.$$
An important corollary of the proposition is that
$$\sup_n\, n\,\mathbf{E}[H(\eta^n)] \le \sup_n\, n\,\mathbf{E}[H(\omega_n)] < \infty, \qquad (11)$$
and hence
$$\exp\{-\mathbf{E}[H(\eta^n)]\} = 1 - \mathbf{E}[H(\eta^n)] + O(n^{-2}),$$
and if m < n,
$$\phi_n = \phi_m\left[1 + O(m^{-1})\right]\prod_{j=m+1}^n\left(1 - \mathbf{E}[H(\eta^j)]\right).$$
Corollary 2.3. There exists c < ∞ such that if n > 0, α ≥ 2, and we let
$$m = m_{n,\alpha} = (1+\alpha^{-1})\,n,$$
$$Y = Y_{n,\alpha} = \max_{j\ge m} H(S_j, S[0,n]), \qquad Y' = Y'_{n,\alpha} = \max_{0\le j\le n} H(S_j, S[m,2n]),$$
then for every 0 < u < 1,
$$\mathbf{P}\left\{Y \ge \frac{\log\alpha}{(\log n)^u}\right\} \le \frac{c}{(\log n)^{1-u}}, \qquad \mathbf{P}\left\{Y' \ge \frac{\log\alpha}{(\log n)^u}\right\} \le \frac{c}{(\log n)^{1-u}}.$$
Proof. Let
$$\tau = \min\left\{j\ge m : H(S_j, S[0,n]) \ge \frac{\log\alpha}{(\log n)^u}\right\}.$$
The strong Markov property implies that
$$\mathbf{P}\{S[0,n]\cap S[m,\infty)\ne\emptyset \mid \tau<\infty\} \ge \frac{\log\alpha}{(\log n)^u},$$
and hence Proposition 2.2 implies that
$$\mathbf{P}\{\tau<\infty\} \le \frac{c_2}{(\log n)^{1-u}}.$$
This gives the first inequality, and the second can be done similarly by looking
at the reversed path.
Lemma 2.4. There exists c < ∞ such that for all n:
• if m < n,
$$\mathbf{P}\{S[\sigma_n,\infty)\cap C_{n-m}\ne\emptyset \mid \mathcal F_n\} \le c\,e^{-2m}; \qquad (12)$$
• if φ is a positive (discrete) harmonic function on C_n and x ∈ C_{n-1}, then
$$\left|\log[\phi(x)/\phi(0)]\right| \le c\,|x|\,e^{-n}. \qquad (13)$$
Proof. These are standard estimates; see, e.g., [16, Theorem 6.3.8, Proposition 6.4.2].
Lemma 2.5. Let
$$\sigma_n^- = \sigma_n - \lfloor n^{-1/4}e^{2n}\rfloor, \qquad S_n^- = S[0,\sigma_n^-], \qquad \sigma_n^+ = \sigma_n + \lfloor n^{-1/4}e^{2n}\rfloor, \qquad S_n^+ = S[\sigma_n^+,\infty),$$
$$R = R_n = \max_{x\in S_n^-} H(x, S_n^+) + \max_{y\in S_n^+} H(y, S_n^-).$$
Then, for all n sufficiently large,
$$\mathbf{P}\{R_n \ge n^{-1/3}\} \le n^{-1/3}. \qquad (14)$$
Our proof will actually give a stronger estimate, but (14) is all that we
need, and it makes for a somewhat cleaner statement.
Proof. Using the fact that P{σ_n ≤ (k+1)e^{2n} | σ_n ≥ k e^{2n}} is bounded
uniformly away from zero, we can see that there exists c_0 with
$$\mathbf{P}\{\sigma_n \ge c_0\,e^{2n}\log n\} \le O(n^{-1}).$$
Let N = N_n = ⌈c_0 e^{2n} log n⌉ and k = k_n = ⌊n^{-1/4} e^{2n}/4⌋, and let E_j = E_{j,n} be
the event that either
$$\max_{i\le jk} H(S_i, S[j(k+1), N]) \ge \frac{\log n}{n^{1/4}}$$
or
$$\max_{(j+1)k\le i\le N} H(S_i, S[0,jk]) \ge \frac{\log n}{n^{1/4}}.$$
Using the previous corollary, we see that P(E_j) ≤ O(n^{-3/4}), and hence
$$\mathbf{P}\left[\bigcup_{jk\le N} E_j\right] \le O\left(\frac{\log n}{n^{1/2}}\right).$$
Lemma 2.6. Let
$$M_n = \max|S_j - S_k|,$$
where the maximum is over all 0 ≤ j ≤ k ≤ n e^{2n} with |j − k| ≤ n^{-1/4} e^{2n}.
Then
$$\mathbf{P}\left\{M_n \ge n^{-1/5}\,e^n\right\}$$
is fast decaying.
Proof. This is a standard large deviation estimate for random walk; see, e.g.,
[16, Corollary A.2.6].
Lemma 2.7. Let U_n be the event that there exists k ≥ σ_n with
$$\mathrm{LE}(S[0,k])\cap C_{n-\log^2 n} \ne \Gamma\cap C_{n-\log^2 n}.$$
Then P(U_n) is fast decaying.
Proof. By the loop-erasing process, we can see that the event U_n is contained
in the event that either
$$S[\sigma_{n-\frac12\log^2 n},\infty)\cap C_{n-\log^2 n}\ne\emptyset \qquad\text{or}\qquad S[\sigma_n,\infty)\cap C_{n-\frac12\log^2 n}\ne\emptyset.$$
The probability that either of these happens is fast decaying by (12).
Proposition 2.8. If Λ(m,n) = S[0,∞) ∩ A(m,n), then the sequences
$$\mathbf{P}\left\{H[\Lambda(n-1,n)] \ge \frac{\log^2 n}{n}\right\}, \qquad \mathbf{P}\left\{H(\omega_n) \ge \frac{\log^4 n}{n}\right\}$$
are fast decaying.
Proof. Using Proposition 2.2, we can see that there exists c < ∞ such that
for all |z| ≥ e^{n-1},
$$\mathbf{P}^z\left\{H(S[0,\infty)) \ge \frac{c}{n}\right\} \le \frac12.$$
Using this and the strong Markov property, we can see by induction that for
every positive integer k,
$$\mathbf{P}\left\{H[\Lambda(n-1,n)] \ge \frac{ck}{n}\right\} \le 2^{-k}.$$
Setting k = ⌊c^{-1} log^2 n⌋, we see that the first sequence is fast decaying.
For the second, we use (12) to see that
$$\mathbf{P}\left\{\omega_n \not\subset A(n-\log^2 n,\,n)\right\}$$
is fast decaying, and if ω_n ⊂ A(n − log^2 n, n),
$$H(\omega_n) \le \sum_{n-\log^2 n\le j\le n} H[\Lambda(j-1,j)].$$
2.2 Sets in X_n
Simple random walk paths in Z^4 are "slowly recurrent" sets in the terminology
of [13]. In this section we consider a subcollection X_n of the
collection of slowly recurrent sets and give uniform bounds on escape probabilities
for such sets. We then apply these uniform bounds to random
walk paths and their loop erasures.
If V ⊂ Z^4, we write
$$V_n = V\cap A(n-1,n), \qquad h_n = H(V_n).$$
Using the Harnack inequality, we can see that there exist 0 < c_1 < c_2 < ∞
such that
$$c_1\,h_n \le H(z,V_n) \le c_2\,h_n, \qquad z\in C_{n-2}\cup A(n+1,n+2).$$
Let X_n denote the collection of subsets V of Z^4 such that for all integers
m ≥ √n,
$$H(V_m) \le \frac{\log^2 m}{m}.$$
Proposition 2.9. P{S[0,∞) ∉ X_n} is a fast decaying sequence.
Proof. This is an immediate corollary of Proposition 2.8.
Let E_n denote the event
$$E_n = E_{n,V} = \{S[1,\sigma_n]\cap V = \emptyset\}.$$
Proposition 2.10. There exists c < ∞ such that if V ∈ X_n, m ≥ n/10, and
P(E_{m+1} | E_m) ≥ 1/2, then
$$\mathbf{P}\left(E_{m+2}^c \mid E_{m+1}\right) \le \frac{c\,\log^2 n}{n}.$$
Proof. For z ∈ ∂C_{m+1}, let
$$u(z) = \mathbf{P}\{S(\sigma_{m+1}) = z \mid E_{m+1}\}.$$
Then,
$$\mathbf{P}\left(E_{m+2}^c\mid E_{m+1}\right) = \sum_{z\in\partial C_{m+1}} u(z)\,\mathbf{P}^z\{S[0,\sigma_{m+2}]\cap V\ne\emptyset\}.$$
Note that
$$u(z) = \frac{\mathbf{P}\{S(\sigma_{m+1})=z,\,E_{m+1}\}}{\mathbf{P}(E_{m+1})} \le 2\,\frac{\mathbf{P}\{S(\sigma_{m+1})=z,\,E_{m+1}\}}{\mathbf{P}(E_m)} \le 2\,\mathbf{P}\{S(\sigma_{m+1})=z\mid E_m\}.$$
Using the Harnack principle, we see that
$$\mathbf{P}\{S(\sigma_{m+1})=z\mid E_m\} \le \sup_{w\in\partial C_m} \mathbf{P}^w\{S(\sigma_{m+1})=z\} \le c\,\mathbf{P}\{S(\sigma_{m+1})=z\}.$$
Therefore,
$$\mathbf{P}\left(E_{m+2}^c\mid E_{m+1}\right) \le c\,\mathbf{P}\{S[\sigma_{m+1},\sigma_{m+2}]\cap V\ne\emptyset\} \le c\sum_{k=1}^{m+2}\mathbf{P}\{S[\sigma_{m+1},\sigma_{m+2}]\cap V_k\ne\emptyset\}.$$
Note that
$$\mathbf{P}\{S[\sigma_{m+1},\sigma_{m+2}]\cap(V_m\cup V_{m+1}\cup V_{m+2})\ne\emptyset\} \le H(V_m\cup V_{m+1}\cup V_{m+2}) \le c\,\frac{\log^2 n}{n}.$$
Using (12), we see that for λ large enough,
$$\mathbf{P}\{S[\sigma_{m+1},\sigma_{m+2}]\cap C_{m-\lambda\log m}\ne\emptyset\} \le c\,n^{-2}.$$
For m − λ log m ≤ k ≤ m − 1, we estimate
$$\mathbf{P}\{S[\sigma_{m+1},\sigma_{m+2}]\cap V_k\ne\emptyset\} \le \mathbf{P}[E']\;\mathbf{P}\{S[\sigma_{m+1},\sigma_{m+2}]\cap V_k\ne\emptyset\mid E'\},$$
where E' = {S[σ_{m+1},σ_{m+2}] ∩ C_{k+1} ≠ ∅}. The first probability on the right-hand
side is exp{−O(m−k)} and the second is O(log²n/n). Summing over
k, we get the result.
Proposition 2.11. There exist c < ∞ and a fast decaying sequence {ε_n} such that if V ∈ X_n and P(E_n) ≥ ε_n, then
P(E^c_{j+1} | E_j) ≤ c log² n / n,  3n/4 ≤ j ≤ n.
Proof. It suffices to consider n sufficiently large. If P(E_{m+1} | E_m) ≤ 1/2 for all n/4 ≤ m ≤ n/2, then P(E_n) ≤ (1/2)^{n/4}, which is fast decaying. If P(E_{m+1} | E_m) ≥ 1/2 for some n/4 ≤ m ≤ n/2, then (for n sufficiently large) we can use the previous proposition and induction to conclude that P(E_{k+1} | E_k) ≥ 1/2 for m ≤ k ≤ n.
Corollary 2.12. There exists c such that for all n,
|log(p_n / p_{n+1})| ≤ c log⁴ n / n.
In particular, there exists c such that if n⁴ ≤ m ≤ (n + 1)⁴,
|log(p_m / p_{n⁴})| ≤ c log⁴ n / n.
Proof. Except for an event with fast decaying probability, we have
Γ_n △ Γ_{n+1} ⊂ Λ(n − log² n, n + 1)
and
H(Λ(n − log² n, n + 1)) ≤ c log⁴ n / n.
This implies that there exists a fast decaying sequence {ε_n} such that, except for an event of probability at most ε_n, either Z_n ≤ ε_n or
|log Z_n − log Z_{n+1}| ≤ c log⁴ n / n.

2.3 Loop-free times
One of the technical nuisances in the analysis of the loop-erased random walk
is that if j < k, it is not necessarily the case that
LE(S[j, k]) = LE(S[0, ∞)) ∩ S[j, k].
However, this is the case for special times called loop-free times. We say that
j is a (global) loop-free time if
S[0, j] ∩ S[j + 1, ∞) = ∅.
Proposition 2.2 shows that the probability that j is loop-free is comparable to 1/(log j)^{1/2}. From the definition of chronological loop erasing we can see the following:
• If j < k and j, k are loop-free times, then for all m ≤ j < k ≤ n,
LE(S[m, n]) ∩ S[j, k] = LE(S[0, ∞)) ∩ S[j, k] = LE(S[j, k]). (15)
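Since (15) is a deterministic consequence of the definition of chronological loop erasure, it can be checked directly on simulated paths. The following is a small illustration of ours in Python (not code from the paper); it uses two-dimensional walks purely for speed, since the identity does not depend on the dimension:

```python
import random

def loop_erase(path):
    """Chronological loop erasure: scan the path and erase each loop as it closes."""
    gamma, pos = [], {}              # erased path, and position of each site on it
    for v in path:
        if v in pos:                 # a loop closed at v: erase back to the earlier visit
            for u in gamma[pos[v] + 1:]:
                del pos[u]
            del gamma[pos[v] + 1:]
        else:
            pos[v] = len(gamma)
            gamma.append(v)
    return gamma

random.seed(1)
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
checked = 0
for _ in range(200):
    path = [(0, 0)]
    for _ in range(40):
        dx, dy = random.choice(steps)
        path.append((path[-1][0] + dx, path[-1][1] + dy))
    n = len(path) - 1
    # j is loop-free for S[0, n] if S[0, j] and S[j+1, n] are disjoint
    loop_free = [j for j in range(n) if not set(path[:j + 1]) & set(path[j + 1:])]
    full = set(loop_erase(path))
    for a, j in enumerate(loop_free):
        for k in loop_free[a + 1:]:
            # the set identity (15) with m = 0: LE(S[0,n]) ∩ S[j,k] = LE(S[j,k])
            assert full & set(path[j:k + 1]) == set(loop_erase(path[j:k + 1]))
            checked += 1
assert checked > 0                   # the identity was actually exercised
```

Here LE(S[0, n]) ∩ S[j, k] is compared as a set of sites, which is the sense in which (15) is used below.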
It will be important for us to give upper bounds on the probability that
there is no loop-free time in a certain interval of time. If m ≤ j < k ≤ n, let
I(j, k; m, n) denote the event that for all j ≤ i ≤ k − 1,
S[m, i] ∩ S[i + 1, n] ≠ ∅.
Proposition 2.2 gives a lower bound on P[I(n, 2n; 0, 3n)]:
P[I(n, 2n; 0, 3n)] ≥ P{S[0, n] ∩ S[2n, 3n] ≠ ∅} ≍ 1/log n.
The next lemma shows that P[I(n, 2n; 0, 3n)] ≍ 1/log n by giving the upper bound.
Lemma 2.13. There exists c < ∞ such that
P[I(n, 2n; 0, ∞)] ≤ c / log n.
Proof. Let E = En denote the complement of I(n, 2n; 0, ∞). We need to
show that P(E) ≥ 1 − O(1/ log n).
Let k_n = ⌊n/(log n)^{3/4}⌋ and let A_i = A_{i,n} be the event
A_i = {S[n + (2i − 1)k_n, n + 2ik_n] ∩ S[n + 2ik_n + 1, n + (2i + 1)k_n] = ∅},
and consider the events A₁, A₂, ..., A_r where r = ⌊(log n)^{3/4}/4⌋. These are r independent events, each with probability greater than c (log n)^{−1/2}. Hence the probability that none of them occurs is at most exp{−c (log n)^{1/4}} = o((log n)^{−3}), that is,
P(A₁ ∪ ··· ∪ A_r) ≥ 1 − o(1/log³ n).
Let B_i = B_{i,n} be the event
B_i = {S[0, n + (2i − 1)k_n] ∩ S[n + 2ik_n, ∞) = ∅},
and note that
E ⊃ (A₁ ∪ ··· ∪ A_r) ∩ (B₁ ∩ ··· ∩ B_r).
Since P(B_i^c) ≤ c log log n / log n, we see that
P(B₁ ∩ ··· ∩ B_r) ≥ 1 − c r log log n / log n ≥ 1 − O(log log n / (log n)^{1/4}),
and hence,
P(E) ≥ P[(A₁ ∪ ··· ∪ A_r) ∩ (B₁ ∩ ··· ∩ B_r)] ≥ 1 − O(log log n / (log n)^{1/4}). (16)
This is a good estimate, but we need to improve on it. Let C_j, j = 1, ..., 5, denote the independent events (depending on n)
C_j = I( n[1 + (3(j−1)+1)/15], n[1 + (3(j−1)+2)/15]; n[1 + (j−1)/5], n[1 + j/5] ).
By (16) we see that P[C_j] = o(1/(log n)^{1/5}), and hence
P(C₁ ∩ ··· ∩ C₅) = o(1/log n).
Let D = D_n denote the event that at least one of the following ten things happens:
S[0, n(1 + (j−1)/5)] ∩ S[n(1 + (3(j−1)+1)/15), ∞) ≠ ∅,  j = 1, ..., 5,
S[0, n(1 + (3(j−1)+2)/15)] ∩ S[n(1 + j/5), ∞) ≠ ∅,  j = 1, ..., 5.
Each of these events has probability comparable to 1/log n and hence P(D) ≍ 1/log n. Also,
I(n, 2n; 0, ∞) ⊂ (C₁ ∩ ··· ∩ C₅) ∪ D.
Therefore, P[I(n, 2n; 0, ∞)] ≤ c/log n.
Corollary 2.14.
1. There exists c < ∞ such that if 0 ≤ j ≤ j + k ≤ n, then
P[I(j, j + k; 0, n)] ≤ c log(n/k) / log n.
2. There exists c < ∞ such that if 0 < δ < 1 and
I_{δ,n} = ∪_{j=0}^{n−1} I(j, j + δn; 0, n),
then
P[I_{δ,n}] ≤ c log(1/δ) / (δ log n).
Proof.
1. We will assume that k ≥ n^{1/2}, since otherwise log(n/k)/log n ≥ 1/2. Note that
I(j, j + k; 0, n) ⊂ I(j, j + k; j − k, j + 2k) ∪ {S[0, j − k] ∩ S[j + 2k, n] ≠ ∅},
and
P[I(j, j + k; j − k, j + 2k)] ≤ c/log k = O(1/log n),
P{S[0, j − k] ∩ S[j + 2k, n] ≠ ∅} ≤ c log(n/k) / log n.
2. We can cover I_{δ,n} by the union of O(1/δ) events, each of probability O(log(1/δ)/log n).
2.4 Along a subsequence
It will be easier to prove the main result first along the subsequence {n⁴}. We let b_n = n⁴, τ_n = σ_{n⁴}, q_n = p_{n⁴}, G_n = F_{n⁴}. Let η̂_n denote the (forward) loop-erasure of S[σ_{(n−1)⁴+(n−1)}, σ_{n⁴−n}], and
Γ̃_n = η̂_n ∩ A(b_{n−1} + 4(n − 1), b_n − 4n),
Γ*_n = Γ ∩ A(b_{n−1} + 4(n − 1), b_n − 4n).
We state the main result that we will prove in this subsection.
Proposition 2.15. There exists c₀ such that as n → ∞,
q_n = [c₀ + O(log⁴ n / n²)] exp{−r ∑_{j=1}^n h̃_j},
where
h̃_j = E[H(Γ̃_j)].
Lemma 2.16. There exists c < ∞ such that
P{Γ̃_n ≠ Γ*_n} ≤ c/n⁴,
P{∃ m ≥ n with Γ̃_m ≠ Γ*_m} ≤ c/n³.
Proof. The first follows from Corollary 2.14, and the second is obtained by
summing over m ≥ n.
Let S′ be another simple random walk defined on another probability space (Ω′, P′) with corresponding stopping times τ′_n. Let
Q_n = Es[Γ̃₁ ∪ ··· ∪ Γ̃_n],  Y_n = H(Γ̃_n).
Using only the fact that
Γ̃_n ⊂ {exp{b_{n−1} + 4(n − 1)} ≤ |z| ≤ exp{b_n − 4n}},
we can see that
Q_{n+1} = Q_n [1 − Y_n] [1 + O(ε_n)]  (17)
for some fast decaying sequence {ε_n}. On the event that S[0, ∞) ∈ X_n,
Es[Γ_{n+1}] = Es[Γ_n] [1 − Y_n + O(log⁴ n / n³)],
H(η^n) = Y_n + O(log⁴ n / n³),
and for m ≥ n,
Z_{m⁴} = Z_{n⁴} exp{−∑_{j=n+1}^m Y_j} [1 + O(log⁴ n / n²)].  (18)
Lemma 2.17. There exists a fast decaying sequence {ε_n} such that we can define Γ̃₁, Γ̃₂, ... and Γ̃′₁, Γ̃′₂, ... on the same probability space so that Γ̃_j has the same distribution as Γ̃′_j for each j; Γ̃′₁, Γ̃′₂, ... are independent; and
P{Γ̃_n ≠ Γ̃′_n} ≤ ε_n.
In particular, if Y′_n = H(Γ̃′_n),
E[∏_{j=n}^m (1 − Y_j) | G_{n−1}] = [1 + O(ε_n)] ∏_{j=n}^m E[1 − Y′_j] = [1 + O(log⁴ n / n⁴)] exp{−∑_{j=n}^m E(Y_j)}.
Also, using (18), except on an event of fast decaying probability,
E[Z^r_{m⁴} | G_n] = Z^r_{n⁴} exp{−r ∑_{j=n}^m E(Y_j)} [1 + O(log⁴ n / n²)].
Taking expectations, we see that
q_m = q_n exp{−r ∑_{j=n}^m E(Y_j)} [1 + O(log⁴ n / n²)],
which implies the proposition.
2.5 Comparing hitting probabilities
The goal of this section is to prove the following. Recall that b_n = n⁴, h_n ≍ n^{−1}, h̃_n ≍ n^{−1}.
Proposition 2.18. There exist c < ∞ and u > 0 such that
| h̃_n − ∑_{j=b_{n−1}+1}^{b_n} h_j | ≤ c / n^{1+u}.
Proof. For a given n we will define a set
U = U_{b_{n−1}+1} ∪ ··· ∪ U_{b_n}
such that the following four conditions hold:
U ⊂ Γ̃_n,  (19)
U_j ⊂ η^j,  j = b_{n−1} + 1, ..., b_n,  (20)
E[H(Γ̃_n \ U)] + ∑_{j=b_{n−1}+1}^{b_n} E[H(η^j \ U_j)] ≤ O(n^{−(1+u)}),  (21)
max_{b_{n−1} < j ≤ b_n} max_{x∈U_j} H(x, U \ U_j) ≤ n^{−u}.  (22)
We claim that finding such a set proves the result. Indeed,
H(U) ≤ H(Γ̃_n) ≤ H(U) + H(Γ̃_n \ U),
∑_{j=b_{n−1}+1}^{b_n} H(U_j) ≤ ∑_{j=b_{n−1}+1}^{b_n} H(η^j) ≤ ∑_{j=b_{n−1}+1}^{b_n} [H(U_j) + H(η^j \ U_j)].
Also it follows from (22) and the strong Markov property that
H(U) = [1 − O(n^{−u})] ∑_{j=b_{n−1}+1}^{b_n} H(U_j) = −O(n^{−1−u}) + ∑_{j=b_{n−1}+1}^{b_n} H(U_j).
Taking expectations and using (19)–(21) we get
h̃_n = E[H(Γ̃_n)] = O(n^{−1−u}) + ∑_{j=b_{n−1}+1}^{b_n} E[H(η^j)] = O(n^{−1−u}) + ∑_{j=b_{n−1}+1}^{b_n} h_j.
Hence, we need only find the sets U_j.
Recall that σ_j^± = σ_j ± ⌊j^{−1/4} e^{2j}⌋ and let ω̃_j = S[σ_{j−1}^+, σ_j^−]. Using Lemma 2.6, we can see that
P{diam S[σ_{j−1}^+, σ_j^−] ≥ j^{−1/5} e^j}
is fast decaying. We will set U_j equal to η^j ∩ ω̃_j unless one of the following six events occurs, in which case we set U_j = ∅. We assume (n − 1)⁴ < j ≤ n⁴.
1. If j ≤ (n − 1)⁴ + 8n or j ≥ n⁴ − 8n.
2. If H(ω_j) ≥ n^{−4} log² n.
3. If ω_j ∩ C_{j−8 log n} ≠ ∅.
4. If H(ω_j \ ω̃_j) ≥ n^{−4−u}.
5. If it is not true that there exist loop-free times in both [σ_{j−1}, σ_{j−1}^+] and [σ_j^−, σ_j].
6. If
sup_{x∈ω̃_j} H(x, S[0, ∞) \ ω_j) ≥ n^{−u}.
The definition of U_j immediately implies (20). Combining conditions 1 and 3, we see (for n sufficiently large, which we assume throughout) that U_j ⊂ A(b_{n−1} + 6n, b_n − 6n). Moreover, if there exist loop-free times in [σ_{j−1}, σ_{j−1}^+] and [σ_j^−, σ_j], then η̂_n ∩ ω̃_j = η^j ∩ ω̃_j. Therefore, (19) holds. Also, condition 6 immediately gives (22).
In order to establish (21) we first note that
(Γ̃_n ∪ η^{b_{n−1}+1} ∪ ··· ∪ η^{b_n}) \ U ⊂ ∪_{j=b_{n−1}+1}^{b_n} V_j,
where
V_j = ω_j if U_j = ∅,  V_j = ω_j \ ω̃_j if U_j = η^j ∩ ω̃_j.
Hence, it suffices to show that
∑_{j=b_{n−1}+1}^{b_n} E[H(ω_j); U_j = ∅] + ∑_{j=b_{n−1}+1}^{b_n} E[H(ω_j \ ω̃_j)] ≤ c n^{−1−u}.
We write the event {U_j = ∅} as a union of six disjoint events
{U_j = ∅} = E_j¹ ∪ ··· ∪ E_j⁶,
where E_j^i is the event that the i-th condition above holds but none of the previous ones hold. It suffices to show that for b_{n−1} < j ≤ b_n,
E[H(ω_j \ ω̃_j)] + ∑_{i=1}^6 E[H(ω_j) 1_{E_j^i}] ≤ n^{−4−u}.
• In order to estimate E[H(ω_j \ ω̃_j)], we use standard estimates on random walk to show that for δ > 0, except perhaps on an event of probability o(n^{−2}),
Cap[ω_j \ ω̃_j] ≤ O(n^{−1/8+δ}) n^{−1/2} e^n,  diam[ω_j \ ω̃_j] ≤ n^{−1/2+δ} e^n,
and by choosing δ = 1/64, we have off the bad event,
H(ω_j \ ω̃_j) ≤ O(n^{−1−1/16}).  (23)
We now consider the events E_j¹, ..., E_j⁶.
1. Since E[H(ω_j)] ≍ n^{−4} for each j,
∑_{j≤(n−1)⁴+8n or j≥n⁴−8n} E[H(ω_j)] = O(n^{−3}).
2. We have already seen that
P{H(ω_j) ≥ log² n / n⁴}
is fast decaying. We note that given this, for i > 2,
E[H(ω_j) 1_{E_j^i}] ≤ (log² n / n⁴) P(E_j^i).
Hence it suffices for i = 3, ..., 6 to prove that P(E_j^i) ≤ n^{−u} for some u.
3. The standard estimate gives
P{ω_j ∩ C_{j−log j} ≠ ∅} ≤ O(j^{−2}).
4. This we did in (23).
5. This follows from Corollary 2.14 for any u < 1/4.
6. This follows from Lemma 2.5.
2.6 s ≠ 0
The next proposition is easy, but it is important for us.
Proposition 2.19. For every r, s there exists c_{r,s} such that
E[Z_n^r G_n^{−s}] = c_{r,s} φ_n^r [1 + O(log⁴ n / n^{1/4})].
Proof. Using Lemma 2.7, we can see that there exists a fast decaying sequence {ε_n} such that if m ≥ n,
P{|G_n − G_m| ≥ ε_n} ≤ ε_n.
Since 1 ≤ G_n ≤ 8, this implies that
|φ_n^{−r} Z_n^r G_n^{−s} − φ_n^{−r} Z_n^r G_∞^{−s}|
is fast decaying and
|E[φ_n^{−r} Z_n^r G_∞^{−s}] − E[φ_m^{−r} Z_m^r G_∞^{−s}]| ≤ c |E[φ_n^{−r} Z_n^r] − E[φ_m^{−r} Z_m^r]|.
2.7 Determining the constant
The goal of this section is to prove
p̂_n ∼ π² / (24 n).  (24)
We will give the outline of the proof using the lemmas that we will prove afterwards. By combining the results of the previous sections, we see that
E[Z_n^r G_n^{−s}] = c′_{r,s} [1 + O(n^{−u})] exp{−r ∑_{j=1}^n h_j},
where h_j = E[H(η^j)] and the c′_{r,s} are unknown constants. In particular,
p̂_n = c′_{3,2} [1 + O(n^{−u})] exp{−3 ∑_{j=1}^n h_j}.
We show in Section 2.7.2 that
h_n = (8/π²) p̂_n + O(n^{−1−u}).
It follows that the limit
lim_{n→∞} [ log p̂_n + (24/π²) ∑_{j=1}^n p̂_j ]  (25)
exists and is finite. Using this and p̂_{n+1} = p̂_n [1 + O(log⁴ n/n)] (see Corollary 2.12), we can conclude the result from Lemma 2.20, applied with β = 24/π². We note that it follows from this and (25) that there exists c such that
φ_n³ = exp{−3 ∑_{j=1}^n h_j} ∼ c/n.
Let W denote a simple random walk independent of S. Then h_n = P(E), where
E = E_n = {W[0, ∞) ∩ η^n ≠ ∅}.
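For concreteness, the chain of implications just outlined can be written in one display (a sketch using the relations above, with u > 0 as before):

```latex
\log \hat p_n + \frac{24}{\pi^2}\sum_{j=1}^n \hat p_j
  \;=\; \log c'_{3,2}
        \;-\; 3\sum_{j=1}^n \Bigl[\, h_j - \frac{8}{\pi^2}\,\hat p_j \,\Bigr]
        \;+\; O(n^{-u}).
```

Since h_j − (8/π²) p̂_j = O(j^{−1−u}) is summable, the left-hand side converges as n → ∞, which is (25); Lemma 2.20 with β = 24/π² then gives n p̂_n → π²/24, which is (24).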
2.7.1 A lemma about a sequence
Lemma 2.20. Suppose p₁, p₂, ... is a sequence of positive numbers with p_{n+1}/p_n → 1 and such that
lim_{n→∞} [ log p_n + β ∑_{j=1}^n p_j ]
exists and is finite. Then
lim_{n→∞} n p_n = 1/β.
Proof. It suffices to prove the result for β = 1, for otherwise we can consider p̃_n = β p_n. Let
a_n = log p_n + ∑_{j=1}^n p_j.
We will use the fact that {a_n} is a Cauchy sequence.
We first claim that for every δ > 0, there exists N_δ > 0 such that if n ≥ N_δ and p_n = (1 + 2ε)/n with δ ≤ ε, then there does not exist r > n with
p_k ≥ 1/k,  k = n, ..., r − 1,
p_r ≥ (1 + 3ε)/r.
Indeed, suppose these inequalities hold for some n, r. Then,
log(p_r/p_n) ≥ log[(1 + 3ε)/(1 + 2ε)] − log(r/n),
∑_{j=n+1}^r p_j ≥ log(r/n) − O(n^{−1}),
and hence for n sufficiently large,
a_r − a_n ≥ (1/2) log[(1 + 3ε)/(1 + 2ε)] ≥ (1/2) log[(1 + 3δ)/(1 + 2δ)].
Since {a_n} is a Cauchy sequence, this cannot be true for large n.
We next claim that for every δ > 0, there exists N_δ > 0 such that if n ≥ N_δ and p_n = (1 + 2ε)/n with δ ≤ ε, then there exists r such that
(1 + ε)/k ≤ p_k < (1 + 3ε)/k,  k = n, ..., r − 1,  (26)
p_r < (1 + ε)/r.
To see this, we consider the first r such that p_r ≤ (1 + ε)/r. By the previous claim, if such an r exists, then (26) holds for n large enough. If no such r exists, then by the argument above, for all r > n,
a_r − a_n ≥ log[(1 + ε)/(1 + 2ε)] + ε log(r/n) − (1 + ε) O(n^{−1}).
Since the right-hand side goes to infinity as r → ∞, this contradicts the fact that {a_n} is a Cauchy sequence.
By iterating the last assertion, we can see that for every δ > 0, there exists N_δ > 0 such that if n ≥ N_δ and p_n = (1 + 2ε)/n with δ ≤ ε, then there exists r > n such that
p_r < (1 + 2δ)/r,
p_k ≤ (1 + 3ε)/k,  k = n, ..., r − 1.
Let s be the first index greater than r (if it exists) such that either
p_s ≤ 1/s  or  p_s ≥ (1 + 2δ)/s.
Using p_{n+1}/p_n → 1, we can see, perhaps by choosing a larger N_δ if necessary, that
(1 − δ)/s ≤ p_s ≤ (1 + 4δ)/s.
If p_s ≥ (1 + 2δ)/s, then we can iterate this argument with ε ≤ 2δ to see that
lim sup_{n→∞} n p_n ≤ 1 + 6δ.
The lim inf can be done similarly.
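The lemma can be illustrated numerically. The recursion p_{n+1} = p_n e^{−p_n} is a convenient test case of ours (not from the paper): here a_{n+1} − a_n = p_{n+1} − p_n, so {a_n} converges, the hypotheses hold with β = 1, and the lemma predicts n p_n → 1. A quick check in Python:

```python
import math

# p_{n+1} = p_n * exp(-p_n): then a_{n+1} - a_n = log(p_{n+1}/p_n) + p_{n+1}
# telescopes to p_{n+1} - p_n, so a_n converges; Lemma 2.20 with beta = 1
# predicts n * p_n -> 1.
N = 200000
p = 0.5          # p_1; any starting value p_1 > 0 gives the same limit
for n in range(1, N):
    p = p * math.exp(-p)
# p is now p_N, and N * p_N should be close to 1
print(N * p)
```

The convergence is slow (the error is of order log n / n), which is consistent with the logarithmic corrections appearing throughout this section.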
2.7.2 Exact relation
We will prove the following. Let S, W be simple random walks with corresponding stopping times σ_n and let G_n = G_{C_n}. We will assume that S₀ = w, W₀ = 0. Let η = LE(S[0, σ_n]). Let
Ĝ_n²(w) = ∑_{z∈Z⁴} G(0, z) G_n(w, z) = ∑_{z∈C_n} G(0, z) G_n(w, z).
Note that we are stopping the random walk S at time σ_n but we are allowing the random walk W to run for infinite time. If w ∈ ∂C_{n−1}, then
Ĝ_n²(w) = 8/π² + O(e^{−n}).  (27)
This is a special case of the next lemma.
Lemma 2.21. If w ∈ C_n, then
Ĝ_n²(w) = (8/π²)[n − log |w|] + O(e^{−n}) + O(|w|^{−2} log |w|).
Proof. Let f(x) = (8/π²) log |x| and note that
Δf(x) = 2/(π² |x|²) + O(|x|^{−4}) = G(x) + O(|x|^{−4}),
where Δ denotes the discrete Laplacian. Also, we know that for any function f,
f(w) = E^w[f(S_{σ_n})] − ∑_{z∈C_n} G_n(w, z) Δf(z).
Since e^n ≤ |S_{σ_n}| ≤ e^n + 1,
E^w[f(S_{σ_n})] = 8n/π² + O(e^{−n}).
Therefore,
∑_{z∈C_n} G_n(w, z) G(z) = (8/π²)[n − log |w|] + O(e^{−n}) + ε,
where
|ε| ≤ ∑_{z∈C_n} G_n(w, z) O(|z|^{−4}) ≤ ∑_{z∈C_n} O(|w − z|^{−2}) O(|z|^{−4}).
We split the sum into three pieces. First,
∑_{|z|≤|w|/2} O(|w − z|^{−2}) O(|z|^{−4}) ≤ c |w|^{−2} ∑_{|z|≤|w|/2} O(|z|^{−4}) ≤ c |w|^{−2} log |w|.
Second,
∑_{|z−w|≤|w|/2} O(|w − z|^{−2}) O(|z|^{−4}) ≤ c |w|^{−4} ∑_{|x|≤|w|/2} O(|x|^{−2}) ≤ c |w|^{−2}.
Finally, if we let C′_n be the set of z ∈ C_n with |z| > |w|/2 and |z − w| > |w|/2, then
∑_{z∈C′_n} O(|w − z|^{−2}) O(|z|^{−4}) ≤ ∑_{|z|>|w|/2} O(|z|^{−6}) ≤ c |w|^{−2}.
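The expansion Δf(x) = 2/(π² |x|²) + O(|x|^{−4}) for f(x) = (8/π²) log |x| is easy to check numerically. The sketch below is our own illustration; it assumes the discrete Laplacian is normalized as the walk's generator, i.e. carries a factor 1/(2d) = 1/8 in Z⁴:

```python
import math

def discrete_laplacian(f, x):
    # generator normalization in Z^4: (1/8) * sum over the 8 neighbors of f(y) - f(x)
    d = len(x)
    total = 0.0
    for i in range(d):
        for sign in (1, -1):
            y = list(x)
            y[i] += sign
            total += f(tuple(y)) - f(x)
    return total / (2 * d)

# f(z) = (8/pi^2) log|z|, written via 0.5*log(|z|^2) to avoid a square root
f = lambda z: (8 / math.pi ** 2) * 0.5 * math.log(sum(c * c for c in z))
x = (50, 0, 0, 0)
lap = discrete_laplacian(f, x)
leading = 2 / (math.pi ** 2 * 50 ** 2)   # the claimed leading term 2/(pi^2 |x|^2)
print(lap, leading)                      # agreement up to O(|x|^{-4})
```

At |x| = 50 the discrepancy is of order 10⁻⁸, well inside the stated O(|x|^{−4}) correction.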
Proposition 2.22. There exists α < ∞ such that if n^{−1} ≤ e^{−n}|w| ≤ 1 − n^{−1}, then
| log P{W[0, ∞) ∩ η ≠ ∅} − log[Ĝ_n²(w) p̂_n] | ≤ c log^α n / n.
In particular,
E[H(η^n)] = (8 p̂_n / π²) [1 + O(log^α n / n)].
The second assertion follows immediately from the first and (27). We can write the conclusion of the proposition as
P{W[0, ∞) ∩ η ≠ ∅} = Ĝ_n²(w) p̂_n [1 + θ_n],
where θ_n throughout this subsection denotes an error term that decays at least as fast as log^α n / n for some α (with the implicit uniformity of the estimate over all n^{−1} ≤ e^{−n}|w| ≤ 1 − n^{−1}).
We start by giving a sketch of the proof, which is fairly straightforward. On the event {W[0, ∞) ∩ η ≠ ∅} there are typically many points in W[0, ∞) ∩ η. We focus on a particular one. This is analogous to the situation when one is studying the probability that a random walk visits a set. In the latter case, one usually focuses on the first or last visit. In our case with two paths, the notions of “first” and “last” are a little ambiguous, so we have to take some care. We will consider the first point on η that is visited by W and then focus on the last visit by W to this first point on η.
To be precise, we write η = [η₀, ..., η_m] and let
i = min{t : η_t ∈ W[0, ∞)},  ρ = max{t ≥ 0 : S_t = η_i},  λ = max{t : W_t = η_i}.
Then the event {ρ = j; λ = k; S_ρ = W_λ = z} is the event that:
I: S_j = z, W_k = z,
II: LE(S[0, j]) ∩ (S[j + 1, σ_n] ∪ W[0, k] ∪ W[k + 1, ∞)) = {z},
III: z ∉ S[j + 1, σ_n] ∪ W[k + 1, ∞).
Using the slowly recurrent nature of the random walk paths, we expect that as long as z is not too close to 0, w, or ∂C_n, then I is almost independent of (II and III) and
P{II and III} ∼ p̂_n.
This then gives
P{ρ = j; λ = k; S_ρ = W_λ = z} ∼ P{S_j = W_k = z} p̂_n,
and summing over j, k, z gives
P{W[0, ∞) ∩ η ≠ ∅} ∼ p̂_n Ĝ_n²(w).
The rest of this section is devoted to making this precise.
Let V be the event
V = {w ∉ S[1, σ_n], 0 ∉ W[1, ∞)}.
A simple argument, which we omit, shows that there is a fast decaying sequence {ε_n} such that
|P(V) − G(0, 0)^{−2}| ≤ ε_n,
|P{W[0, ∞) ∩ η ≠ ∅ | V} − P{W[0, ∞) ∩ η ≠ ∅}| ≤ ε_n.
Hence it will suffice to show that
P[V ∩ {W[0, ∞) ∩ η ≠ ∅}] = (Ĝ_n²(w) / G(0, 0)²) p̂_n [1 + θ_n].  (28)
Let E(j, k, z), E_z be the events
E(j, k, z) = V ∩ {ρ = j; λ = k; S_ρ = W_λ = z},  E_z = ∪_{j,k=0}^∞ E(j, k, z).
Then,
P[V ∩ {W[0, ∞) ∩ η ≠ ∅}] = ∑_{z∈C_n} P(E_z).
Let
C′_n = C′_{n,w} = {z ∈ C_n : |z| ≥ n^{−4} e^n, |z − w| ≥ n^{−4} e^n, |z| ≤ (1 − n^{−4}) e^n}.
We can use the easy estimate
P(E_z) ≤ G_n(w, z) G(0, z)
to see that
∑_{z∈C_n\C′_n} P(E_z) ≤ O(n^{−6}),
so it suffices to estimate P(E_z) for z ∈ C′_n.
We will translate so that z is the origin and will reverse the paths W[0, λ], S[0, ρ]. Using the fact that reverse loop-erasing has the same distribution as loop-erasing, we see that P[E(j, k, z)] can be given as the probability of the following event, where ω¹, ..., ω⁴ are independent simple random walks starting at the origin and l_i denotes the smallest index l such that |ω_l^i − y| ≥ e^n:
[ω³[1, l₃] ∪ ω⁴[1, ∞)] ∩ LE(ω¹[0, j]) = ∅,  ω²[0, k] ∩ LE(ω¹[0, j]) = {0},
j < l₁,  ω¹(j) = x,  ω²(k) = y,  x ∉ ω¹[0, j − 1],  y ∉ ω²[0, k − 1],
where x = w − z, y = −z. Note that
n^{−1} e^n ≤ |y|, |x − y|, |x| ≤ e^n [1 − n^{−1}].
We now rewrite this. We fix x, y and let C_n^y = y + C_n. Let W¹, W², W³, W⁴ be independent random walks starting at the origin and let T^i = T_n^i = min{j : W_j^i ∉ C_n^y},
τ¹ = min{j : W_j¹ = x},  τ² = min{k : W_k² = y}.
Let Γ̂ = Γ̂_n = LE(W¹[0, τ¹]). Note that
P{τ¹ < T¹} = G_n(0, x) / G_n(x, x) = G_n(0, x) / G(0, 0) + o(e^{−n}),
P{τ² < ∞} = G(0, y) / G(y, y).
Let p′_n(x, y) be the probability of the event
Γ̂ ∩ [W²[1, τ²] ∪ W³[1, T³]] = ∅,  Γ̂ ∩ W⁴[0, T⁴] = {0},
where W¹, W², W³, W⁴ are independent and start at the origin; W³, W⁴ are usual random walks; W¹ has the distribution of a random walk conditioned that τ¹ < T¹, stopped at τ¹; and W² has the distribution of a random walk conditioned that τ² < ∞, stopped at τ². Then in order to prove (28), it suffices to prove that
p′_n(x, y) = p̂_n [1 + θ_n].
We write Q for the conditional distribution. Then for any path ω = [0, ω₁, ..., ω_k] with 0, ω₁, ..., ω_{k−1} ∈ C_n^y \ {x},
Q{[W₀¹, ..., W_k¹] = ω} = P{[S₀, ..., S_k] = ω} φ_x(ω_k) / φ_x(0),  (29)
where
φ_x(ζ) = φ_{x,n}(ζ) = P^ζ{τ¹ < T_n¹}.
Similarly, if φ_y(ζ) = P^ζ{τ² < ∞} and ω = [0, ω₁, ..., ω_k] is a path with y ∉ {0, ω₁, ..., ω_{k−1}},
Q{[W₀², ..., W_k²] = ω} = P{[S₀, ..., S_k] = ω} φ_y(ω_k) / φ_y(0).
Using (13), we can see that if ζ ∈ {x, y} and |z| ≤ e^n e^{−log² n}, then
φ_ζ(z) = φ_ζ(0) [1 + O(ε_n)],
where {ε_n} is fast decaying.
Let σ_k be defined for W¹ as before,
σ_k = min{j : |W_j¹| ≥ e^k}.
Using this we see that we can couple W¹, S on the same probability space such that, except perhaps on an event of probability O(ε_n),
Γ̂ ∩ C_{n−log³ n} = LE(S[0, ∞)) ∩ C_{n−log³ n}.
Using this we see that
q_n ≤ p_{n−log³ n} + O(ε_n) ≤ p_n [1 + θ_n].
Hence,
Γ̂ ∩ C_{n−log³ n} ⊂ Γ̂ ⊂ Θ_n ∪ (Γ̂ ∩ C_{n−log³ n}),
where
Θ_n = W¹[0, τ¹] ∩ A(n − log³ n, 2n).
We consider two events. Let Ŵ = W²[1, τ²] ∪ W³[1, T³] ∪ W⁴[0, T⁴] and let E₀, E₁, E₂ be the events
E₀ = {0 ∉ W²[1, τ²] ∪ W³[1, T³]},
E₁ = E₀ ∩ {Ŵ ∩ Γ̂ ∩ C_{n−log³ n} = {0}},
E₂ = E₁ ∩ {Ŵ ∩ Θ_n = ∅}.
Then we show that
Q(E₁) = p_n [1 + θ_n],  Q(E₁ \ E₂) ≤ n^{−1} θ_n,
in order to show that
Q(E₂) = p_n [1 + θ_n].
Find a fast decaying sequence {ε̄_n} as in Proposition 2.8 such that
P{H(S[0, ∞) ∩ A(m − 1, m)) ≥ log² m / m} ≤ ε̄_m,  ∀ m ≥ √n.
1/10
Let δn = ¯n which is also fast decaying and let ρ = ρn = min{j : |Wj1 −x| ≤
en δn }. Using the Markov property we see that
n
p o
P W 1 [ρ, τ x ] 6∈ {|z − x| ≤ en δn } = O(δn ),
and since |x| ≥ e−n n−1 , we know that
h
p i
n
H {|z − x| ≤ e
δn } ≤ n2 δn .
Hence, we see that there is a fast decaying sequence {δ_n} such that
P{H(S[0, ∞) ∩ A(m − 1, m)) ≥ log² m / √m} ≤ δ_m,  ∀ m ≥ n.
In particular, using (29), we see that
Q{H(W¹[0, τ¹] ∩ A(n − log⁴ n, n + 1)) ≥ log⁷ n / n}
is fast decaying. Then using Proposition 2.11, we see that there exists a fast decaying sequence {ε_n} such that, except for an event of Q-probability at most ε_n, either Es[Γ̂] ≤ ε_n or
Es[Γ̂] = Es[Γ̂ ∩ C_{n−log³ n}] [1 + θ_n].
This suffices for W³ and W⁴, but we need a similar result for W²; that is, if
Es_y[Γ̂] = Q{W²[1, τ²] ∩ Γ̂ = ∅ | Γ̂},
then we need that, except for an event of Q-probability at most ε_n, either Es[Γ̂] ≤ ε_n or
Es_y[Γ̂] = Es[Γ̂ ∩ C_{n−log³ n}] [1 + θ_n].
Since the measure Q is almost the same as P for the paths near zero, we can use Proposition 2.11 to reduce this to showing that
Q{W²[0, τ²] ∩ (Γ̂ \ C_{n−log³ n}) ≠ ∅ | Γ̂} ≤ θ_n.
This can be done with a similar argument, using the fact that the probability that random walks starting at y and x ever intersect is less than θ_n.
Corollary 2.23. If n^{−1} ≤ e^{−n}|w| ≤ 1 − n^{−1}, then
P{W[0, ∞) ∩ η ≠ ∅} ∼ π² Ĝ_n²(w) / (24 n).
More precisely,
lim_{n→∞} max_{n^{−1}≤e^{−n}|w|≤1−n^{−1}} | 24 n P{W[0, ∞) ∩ η ≠ ∅} − π² Ĝ_n²(w) | = 0.
By a very similar proof one can show the following. Let
G_n²(w) = ∑_{z∈Z⁴} G_n(0, z) G_n(w, z) = ∑_{z∈C_n} G_n(0, z) G_n(w, z),
and
σ_n^W = min{j : W_j ∉ C_n}.
We note that
Ĝ_n²(w) − G_n²(w) = ∑_{z∈C_n} [G(0, z) − G_n(0, z)] G_n(w, z) = O(1).
The following can be proved in the same way.
Proposition 2.24. There exists α < ∞ such that if n^{−1} ≤ e^{−n}|w| ≤ 1 − n^{−1}, then
| log P{W[0, σ_n^W] ∩ η ≠ ∅} − log[G_n²(w) p̂_n] | ≤ c log^α n / n.
We note that if |w| = r e^n, then
G_n²(w) ∼ Ĝ_n²(w) ≍ log(1/r),
and hence P{W[0, σ_n^W] ∩ η ≠ ∅} ≍ log(1/r)/n. This gives Lemma 1.6 in the case that x or y equals zero. The general case can be done similarly.
References
[1] Itai Benjamini, Harry Kesten, Yuval Peres, and Oded Schramm. Geometry of the uniform spanning forest: transitions in dimensions 4, 8, 12, . . . .
Ann. of Math. (2), 160(2):465–491, 2004.
[2] Itai Benjamini, Russell Lyons, Yuval Peres, and Oded Schramm. Uniform spanning forests. Ann. Probab., 29(1):1–65, 2001.
[3] Alexei Borodin and Vadim Gorin. General beta Jacobi corners process
and the Gaussian free field. arXiv preprint arXiv:1305.3627, 2013.
[4] Maury Bramson and Rick Durrett, editors. Perplexing problems in probability, volume 44 of Progress in Probability. Birkhäuser Boston, Inc.,
Boston, MA, 1999. Festschrift in honor of Harry Kesten.
[5] Bertrand Duplantier, Rémi Rhodes, Scott Sheffield, and Vincent Vargas. Log-correlated Gaussian fields: an overview. arXiv preprint
arXiv:1407.5605, 2014.
[6] Patrik L. Ferrari and Alexei Borodin. Anisotropic growth of random surfaces in 2+1 dimensions. arXiv preprint arXiv:0804.3035, 2008.
[7] Geoffrey Grimmett. The random-cluster model. Springer, 2004.
[8] Alan Hammond and Scott Sheffield. Power law Pólya's urn and fractional Brownian motion. Probability Theory and Related Fields, pages 1–29, 2009.
[9] Richard Kenyon. Dominos and the Gaussian free field. Annals of probability, pages 1128–1137, 2001.
[10] Noemi Kurt. Entropic repulsion for a class of Gaussian interface models in high dimensions. Stochastic processes and their applications,
117(1):23–34, 2007.
[11] Noemi Kurt. Maximum and entropic repulsion for a Gaussian membrane
model in the critical dimension. The Annals of Probability, 37(2):687–
725, 2009.
[12] Gregory F. Lawler. Intersections of random walks in four dimensions II. Comm. Math. Phys., 97:583–594, 1985.
[13] Gregory F. Lawler. Escape probabilities for slowly recurrent sets. Probability Theory and Related Fields, 94:91–117, 1992.
[14] Gregory F. Lawler. The logarithmic correction for loop-erased walk in
four dimensions. In Proceedings of the Conference in Honor of JeanPierre Kahane (Orsay, 1993), number Special Issue, pages 347–361,
1995.
[15] Gregory F. Lawler. Intersections of random walks. Modern Birkhäuser
Classics. Birkhäuser/Springer, New York, 2013. Reprint of the 1996
edition.
[16] Gregory F. Lawler and Vlada Limic. Random walk: a modern introduction, volume 123 of Cambridge Studies in Advanced Mathematics.
Cambridge University Press, Cambridge, 2010.
[17] Asad Lodhia, Scott Sheffield, Xin Sun, and Samuel S Watson. Fractional
Gaussian fields: a survey. arXiv preprint arXiv:1407.5598, 2014.
[18] Robin Pemantle. Choosing a spanning tree for the integer lattice uniformly. The Annals of Probability, pages 1559–1574, 1991.
[19] Brian Rider and Bálint Virág. The noise in the circular law and the
Gaussian free field. arXiv preprint math/0606663, 2006.
[20] Hironobu Sakagawa. Entropic repulsion for a Gaussian lattice field
with certain finite range interaction. Journal of Mathematical Physics,
44:2939, 2003.
[21] Scott Sheffield. Gaussian free fields for mathematicians. Probability
theory and related fields, 139(3-4):521–541, 2007.
[22] Xin Sun and Wei Wu. Uniform spanning forests and bi-Laplacian Gaussian field. arXiv preprint arXiv:1312.0059, 2013.
[23] David Bruce Wilson. Generating random spanning trees more quickly
than the cover time. In Proceedings of the twenty-eighth annual ACM
symposium on Theory of computing, pages 296–303. ACM, 1996.