SAMPLE PATH LARGE DEVIATIONS FOR LAPLACIAN MODELS IN (1 + 1)-DIMENSIONS
STEFAN ADAMS, ALEXANDER KISTER, AND HENDRIK WEBER
Abstract. For Laplacian models in dimension (1 + 1) we derive sample path large deviations for the profile height function; that is, we study scaling limits of Gaussian integrated random walks and Gaussian integrated random walk bridges perturbed by an attractive force towards the zero level, called pinning. We study in particular the regime in which the rate functions of the corresponding large deviation principles admit more than one minimiser, in our models either two, three, or five minimisers depending on the pinning strength and the boundary conditions. This study complements corresponding large deviation results for gradient systems with pinning for Gaussian random walk bridges in (1 + 1) dimensions ([FS04]) and in (1 + d) dimensions ([BFO09]), and recently in higher dimensions in [BCF14]. In particular it turns out that the Laplacian cases, i.e., integrated random walks, show richer and more complex structures of the minimisers of the rate functions, which are linked to different phases.
1. Introduction and large deviation results
1.1. The models. We are going to study models for a (1 + 1)-dimensional random field. The models depend on the so-called potential, a measurable function V : ℝ → ℝ ∪ {+∞} such that x ↦ exp(−V(x)) is bounded and continuous and that
∫_ℝ e^{−V(x)} dx < ∞,  ∫_ℝ x² e^{−V(x)} dx =: σ² < ∞,  and  ∫_ℝ x e^{−V(x)} dx = 0.
The model will be a probability measure given by a Hamiltonian depending on the Laplacian of the random field. The Hamiltonian H_{[ℓ,r]}(φ) is defined for ℓ, r ∈ ℤ with r − ℓ ≥ 2, and for φ : {ℓ, ℓ+1, …, r−1, r} → ℝ, by
H_{[ℓ,r]}(φ) := Σ_{k=ℓ+1}^{r−1} V(∆φ_k),        (1.1)
where ∆ denotes the discrete Laplacian, ∆φ_k = φ_{k+1} + φ_{k−1} − 2φ_k. Our pinning models are defined by the following probability measures γ^ψ_{N,ε} and γ^{ψ_f}_{N,ε} respectively:
γ^ψ_{N,ε}(dφ) = (1/Z_{N,ε}(ψ)) e^{−H_{[−1,N+1]}(φ)} ∏_{k=1}^{N−1} (ε δ₀(dφ_k) + dφ_k) ∏_{k∈{−1,0,N,N+1}} δ_{ψ_k}(dφ_k),
γ^{ψ_f}_{N,ε}(dφ) = (1/Z_{N,ε}(ψ_f)) e^{−H_{[−1,N+1]}(φ)} ∏_{k=1}^{N−1} (ε δ₀(dφ_k) + dφ_k) ∏_{k∈{−1,0}} δ_{ψ_k}(dφ_k),        (1.2)
2000 Mathematics Subject Classification. Primary: 60K35; Secondary: 60F10; 82B41.
Key words and phrases. Large deviation, Laplacian models, pinning, integrated random walk, scaling limits,
bi-harmonic.
where N ≥ 2 is an integer, ε ≥ 0 is the pinning strength, dφ_k (resp. dφ⁺_k) is the Lebesgue measure on ℝ, δ₀ is the Dirac mass at zero, ψ ∈ ℝ^ℤ is a given boundary condition, and Z_{N,ε}(ψ) (resp. Z_{N,ε}(ψ_f)) is the normalisation, usually called the partition function. The model with Dirichlet boundary conditions on the left hand side and free boundary conditions on the right hand side (terminal times for the corresponding integrated random walk) has only the left hand side (initial times) fixed.
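As a concrete illustration of the discrete setup, the following minimal Python sketch (ours, not part of the paper; all names are illustrative) evaluates the Hamiltonian (1.1) for the Gaussian potential V(η) = η²/2:

    import numpy as np

    def hamiltonian(phi, V=lambda eta: 0.5 * eta**2):
        # illustrative helper: H_{[l,r]}(phi) = sum_{k=l+1}^{r-1} V(Delta phi_k),
        # where `phi` holds (phi_l, ..., phi_r) and Delta phi_k = phi_{k+1} + phi_{k-1} - 2 phi_k
        lap = phi[2:] + phi[:-2] - 2.0 * phi[1:-1]   # discrete Laplacian at interior sites
        return np.sum(V(lap))

    N = 10
    phi = np.zeros(N + 3)                  # sites -1, 0, ..., N+1 (boundary values already zero)
    phi[2:N + 1] = np.random.randn(N - 1)  # interior sites 1, ..., N-1
    print(hamiltonian(phi))                # H_{[-1,N+1]}(phi)

Note that pinning enters only through the reference measure (the factors ε δ₀(dφ_k) + dφ_k in (1.2)), not through the Hamiltonian itself.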
Our measures in (1.2) are (1 + 1)-dimensional models for a linear chain of length N which is attracted to the defect line, the x-axis; the parameter ε ≥ 0 tunes the strength of the attraction, and one wishes to understand its effect on the field in the large-N limit. The models with ε = 0 have no pinning reward at all and are thus free Laplacian models. By "(1 + 1)-dimensional" we mean that the configurations of the chain are described by trajectories {(k, φ_k)}_{0≤k≤N} of the field, so that we are dealing with directed models. Models with Laplacian interaction appear naturally in the physics literature in the context of semiflexible polymers, cf. [BLL00, HV09], or in the context of deforming rods in space, cf. [Ant05].
The basic properties of the models were investigated in the two papers [CD08, CD09], to which we refer for a detailed discussion and for a survey of the literature. In particular, it was shown in [CD08] that there is a critical value ε_c ∈ (0, ∞) that determines a phase transition between a delocalised regime (ε < ε_c), in which the reward is essentially ineffective, and a localised regime (ε > ε_c), in which, on the other hand, the reward has a microscopic effect on the field. For more details see Section 1.2.3 below. All our models are linked to
integrals of corresponding random walks ([Sin92, CD08, CD09]). The present paper derives large deviation principles for the macroscopic empirical profile distributed under the measures in (1.2) (Section 1.2). The corresponding large deviation results for the gradient models, that is, models whose Hamiltonian is a function of the discrete gradient of the field, have been derived in [FS04] for Gaussian random walk bridges in ℝ and for Gaussian random walks and bridges in higher dimensions in [BFO09]. In [FO10] large deviations for general non-Gaussian random walks in ℝ^d, d ≥ 1, have been analysed, and in [BCF14] gradient models in higher (lattice) dimensions have been introduced. A common feature of all these gradient models is the linear scale (in N) for the large deviations (the scaling-limit scale is √N; for more details see [F]), whereas in the Laplacian case the scale for the scaling limits is N^{3/2}, as proved by Caravenna and Deuschel in [CD09]. This scale goes back to the work [Sin92] on integrated random walks. Henceforth the scale for our large deviation principles is N², suggested by the scaling of the Hamiltonian (discrete bi-Laplacian); see Appendix A for further details. Besides the different scale of the large deviations, the proofs are different as well. As all our Laplacian models are versions of integrated random walk measures, our proofs cannot rely on the Markov property as used in the corresponding gradient models. We use standard expansion techniques combined with a correction of so-called single zeros to double zeros. This allows us to write the distribution over disjoint intervals separated by a double zero as the product of independent distributions over the disjoint intervals.
Our second major result is a complete analysis of the rate functions, in particular in the critical situation where more than one minimiser of the rate function exists (Section 2). The macroscopic time, observed after scaling, of our integrated random walks runs over the interval [0, 1]. In the rate function arising in the large deviation result for mixed Dirichlet and free boundary conditions only the starting point and its gradient (speed) at t = 0 are specified, while the rate function for Dirichlet boundary conditions requires us to specify the terminal point (location and speed) at t = 1 as well. The variational problem for our rate functions shows a much richer structure of minimisers than for the corresponding gradient models in [BFO09]. Without pinning there is a unique bi-harmonic function minimising the macroscopic bi-Laplacian energy, see Appendix A. Once the pinning reward is switched on, the integrated random walk (scaled random field) has essentially two different strategies to pick up reward, see Section 2. One strategy is to start picking up the reward early, despite the energy needed to bend to the zero level with speed zero, whereas the other one is to cross the zero level, producing a longer bend, before turning to the zero level and picking up reward.
In Section 1.2 we present all our large deviation results, which are proved in Section 3, whereas our variational analysis results are given in Section 2 and their proofs in Section 2.3. In Appendix A we give some basic details about the bi-harmonic equation and bi-harmonic functions along with convergence statements for the discrete bi-Laplacian. In Appendix B some well-known facts about partition functions of Gaussian integrated random walks are collected.
1.2. Sample path large deviations.
1.2.1. Empirical profile. Let h_N = {h_N(t) : t ∈ [0,1]} be the macroscopic empirical profile determined from the microscopic height function φ under a proper scaling, that is, defined through the polygonal approximation of (h_N(k/N) = φ_k/N²)_{k∈Λ_N}, where Λ_N = {−1, 0, …, N, N+1}, so that
h_N(t) = ((⌊Nt⌋ − Nt + 1)/N²) φ_{⌊Nt⌋} + ((Nt − ⌊Nt⌋)/N²) φ_{⌊Nt⌋+1},   t ∈ [0,1].        (1.3)
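For illustration, a short sketch (ours; the helper name is hypothetical) of the polygonal interpolation (1.3), applied to a Gaussian integrated random walk:

    import numpy as np

    def empirical_profile(phi, t):
        # illustrative: h_N(t) of (1.3), with phi[k] = phi_k for k = 0, ..., N
        N = len(phi) - 1
        t = np.asarray(t, dtype=float)
        k = np.minimum(np.floor(N * t).astype(int), N - 1)  # floor(Nt), capped so t = 1 works
        return ((k - N * t + 1) * phi[k] + (N * t - k) * phi[k + 1]) / N**2

    phi = np.cumsum(np.cumsum(np.random.randn(101)))  # integrated random walk phi_0, ..., phi_100
    print(empirical_profile(phi, np.linspace(0.0, 1.0, 5)))

Typical heights of the integrated walk are of order N^{3/2}, so under the N² scaling the typical profile vanishes; profiles of order one are precisely the large deviation events quantified below.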
We consider h_N distributed under the given measures in (1.2) with the following particular choice of scaled boundary conditions ψ^(N) in the case of Dirichlet boundary conditions, defined for any a, α, b, β ∈ ℝ as
ψ^(N)(x) =  aN² − αN   if x = −1,
            aN²        if x = 0,
            bN²        if x = N,
            bN² − βN   if x = N + 1,
            0          otherwise.        (1.4)
For Dirichlet boundary conditions on the left hand side and free boundary conditions on the right hand side we only specify the boundary at x = −1 and x = 0, and write ψ_f^(N)(−1) = aN² − αN and ψ_f^(N)(0) = aN², see (1.2). Our different macroscopic models are given in terms of the empirical profile h_N sampled via microscopic height functions under the measures in (1.2); these profiles have boundary value a and gradient α on the left hand side and, in the Dirichlet case, boundary value b and gradient β on the right hand side. We write r = (a, α, b, β) to specify our choice of boundary conditions ψ^(N) in the Dirichlet case, and a = (a, α) in the mixed Dirichlet and free boundary case.
We denote the Gibbs distributions with ε = 0 (no pinning) by γ^r_N for Dirichlet boundary conditions and by γ^a_N for Dirichlet boundary conditions on the left hand side and free boundary conditions on the right hand side, and their partition functions by Z_N(r) and Z_N(a), respectively. In Section 1.2.2 we study the large deviation principles without pinning (ε = 0) for general integrated random walks with free boundary conditions on the right hand side and
show that these results apply to Gaussian integrated random walk bridges as well. For the Dirichlet case we consider the space
H²_r = {h ∈ H²([0,1]) : h(0) = a, h(1) = b, ḣ(0) = α, ḣ(1) = β},
and for the left hand side free boundary case the space
H²_a = {h ∈ H²([0,1]) : h(0) = a, ḣ(0) = α}.
Here H²([0,1]) is the usual Sobolev space. We write C([0,1]; ℝ) for the space of continuous
functions on [0, 1] equipped with the supremum norm. In Section 1.2.3 we present our main
large deviation result for the measures with pinning.
1.2.2. Large deviations for integrated random walks and Gaussian integrated random walk bridges. We recall the integrated random walk representation in Proposition 2.2 of [CD08]. Let (X_k)_{k∈ℕ} be a sequence of independent and identically distributed random variables with marginal law X₁ ∼ exp(−V(x)) dx, and (Y_n)_{n∈ℕ₀} the corresponding random walk with initial condition Y₀ = αN and Y_n = αN + X₁ + ⋯ + X_n. The integrated random walk is denoted by (Z_n)_{n∈ℕ₀} with Z₀ = aN² and Z_n = aN² + Y₁ + ⋯ + Y_n. We denote by P^a the probability distribution of the processes defined above. Then the following holds.
Proposition 1.1 ([CD08]). The pinning-free model γ^r_N (ε = 0) is the law of the vector (Z₁, …, Z_{N−1}) under the measure P^r(·) := P^a(· | Z_N = bN², Z_{N+1} = bN² − βN). The partition function Z_{N,ε}(r) is the value at (βN, bN² − βN) of the density of the vector (Y_{N+1}, Z_{N+1}) under the law P^a. The model γ^a_N coincides with the integrated random walk P^a.
The first part of the following result is the generalisation of Mogulskii’s theorem [Mog76]
from random walks to integrated random walks whereas its second part is the generalisation to
Gaussian integrated random walk bridges.
Theorem 1.2.
(a) Let V be any potential of the form above such that Λ(λ) := log E[e^{⟨λ,X₁⟩}] < ∞ for all λ ∈ ℝ. Then the following holds: the large deviation principle (LDP) holds for h_N under γ^a_N on the space C([0,1]; ℝ) as N → ∞ with speed N and the unnormalised good rate function E_f of the form
E_f(h) = ∫₀¹ Λ*(ḧ(t)) dt   if h ∈ H²_a,   and   E_f(h) = +∞ otherwise.        (1.5)
Here Λ* denotes the Fenchel-Legendre transform of Λ.
(b) For V(η) = ½η² the following holds: the LDP holds for h_N under γ^r_N on the space C([0,1]; ℝ) as N → ∞ with speed N and the unnormalised good rate function E of the form
E(h) = ½ ∫₀¹ ḧ²(t) dt   if h ∈ H²_r,   and   E(h) = +∞ otherwise.        (1.6)
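To make part (a) concrete in the Gaussian case: if X₁ is standard Gaussian, then Λ(λ) = λ²/2 and Λ*(x) = x²/2, so (1.5) reduces to (1.6). A quick grid-based check of this Fenchel-Legendre transform (our illustration, not from the paper):

    import numpy as np

    def legendre(Lam, x, lam):
        # Lambda*(x) = sup_lambda (lambda*x - Lambda(lambda)), approximated on a grid
        return np.max(lam[None, :] * x[:, None] - Lam(lam)[None, :], axis=1)

    lam = np.linspace(-50.0, 50.0, 200001)
    x = np.linspace(-3.0, 3.0, 7)
    approx = legendre(lambda l: 0.5 * l**2, x, lam)
    print(np.max(np.abs(approx - 0.5 * x**2)))   # close to 0: Lambda*(x) = x^2/2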
Remark 1.3.
(a) The rate functions in both cases are obtained from the unnormalised rate functions by I⁰_f(h) = E_f(h) − inf_{g∈H²_a} E_f(g) for general integrated random walks with potential V, respectively by I⁰(h) = E(h) − inf_{g∈H²_r} E(g) for Gaussian integrated random walk bridges.
(b) We believe that the large deviation principle in Theorem 1.2(b) holds for general potentials V as in (a) as well. For Gaussian integrated random walk bridges there exist explicit formulae for the distribution, see [GSV05]. Our main result concerns the large deviations for the pinning model for Gaussian integrated random walk bridges. General integrated random walk bridges will require different techniques.
1.2.3. Large deviations for pinning models. The large deviation principle for the pinning models acquires an additional term in the rate functions. Recall that the logarithm of the partition function is the free energy; the difference of the free energies with and without pinning for zero boundary conditions (r = 0) will be an important ingredient in our rate functions. We determine τ(ε) by the thermodynamic limit
τ(ε) = lim_{N→∞} (1/N) log (Z_{N,ε}(0)/Z_N(0)).        (1.7)
The existence of the limit in (1.7) and its properties have been derived by Caravenna and Deuschel in [CD08]; we summarise their result in the following proposition.
Proposition 1.4 ([CD08]). The limit in (1.7) exists for every ε ≥ 0. Furthermore, there exists ε_c ∈ (0, ∞) such that τ(ε) = 0 for ε ∈ [0, ε_c], while 0 < τ(ε) < ∞ for ε ∈ (ε_c, ∞), and, as ε → ∞,
τ(ε) = log ε (1 − o(1)).
Moreover the function τ is real analytic on (ε_c, ∞).
We have the following sample path large deviation principles for h_N under γ^r_{N,ε} and γ^a_{N,ε}, respectively. The unnormalised rate functions are given by Σ^ε and Σ^ε_f, respectively, both of the form
Σ^ε(h) = ½ ∫₀¹ ḧ²(t) dt − τ(ε) |{t ∈ [0,1] : h(t) = 0}|,        (1.8)
for h ∈ H = H²_r and H = H²_a, respectively, where |·| stands for the Lebesgue measure.
Theorem 1.5. The LDP holds for h_N under γ_N = γ^r_{N,ε}, respectively γ_N = γ^a_{N,ε}, on the space C([0,1]; ℝ) as N → ∞ with speed N and the good rate functions I = I^ε and I = I^ε_f of the form
I(h) = Σ(h) − inf_{g∈H} Σ(g)   if h ∈ H,   and   I(h) = +∞ otherwise,        (1.9)
with Σ = Σ^ε and Σ = Σ^ε_f respectively, and H = H²_r respectively H = H²_a. Namely, for every open set O and every closed set K of C([0,1]; ℝ) equipped with the uniform topology, we have
lim inf_{N→∞} (1/N) log γ_N(h_N ∈ O) ≥ − inf_{h∈O} I(h),
lim sup_{N→∞} (1/N) log γ_N(h_N ∈ K) ≤ − inf_{h∈K} I(h),        (1.10)
in each of the two situations.
As the limit τ(ε) of the difference of the free energies appears in our rate functions, it is worth stressing that it has a direct translation in terms of path properties of the field, see [CD08]. This is the microscopic counterpart of the effect of the reward term in our pinning rate functions. Defining the contact number ℓ_N by
ℓ_N := #{k ∈ {1, …, N} : φ_k = 0},
one easily obtains that for ε > 0 (see [CD08]),
D_N(ε) := E_{γ^0_{N,ε}}[ℓ_N/N] = ε τ′(ε).
This gives the following path properties. When ε > ε_c, then D_N(ε) → D(ε) > 0 as N → ∞, and the mean contact density is non-vanishing, leading to localisation of the field (integrated random walk, respectively integrated random walk bridge). In the other case, ε < ε_c, we get D_N(ε) → 0 as N → ∞; the contact density thus vanishes in the thermodynamic limit, leading to delocalisation.
2. Minimisers of the rate functions
We are concerned with the set M^ε of minimisers of the unnormalised rate functions in (1.8) for our pinning LDPs. Any minimiser of (1.8) is a zero of the corresponding rate function in Theorem 1.5. We let h*_r ∈ H²_r be the unique minimiser of the energy E defined in (A.1) (see Proposition A.1), that is, E(h) = ½ ∫₀¹ ḧ²(t) dt is the energy of the bi-Laplacian in dimension one. For any interval I ⊂ [0,1] we let h^{*,I}_r ∈ H²_r(I), where the boundary conditions apply to the boundaries of I, be the unique minimiser of E_I(h) = ½ ∫_I ḧ²(t) dt, and we sometimes write a, b for the boundary condition r with a = (a, α) and b = (b, β). Of major interest are the zero sets
N_h = {t ∈ [0,1] : h(t) = 0} of any minimiser h.
In Section 2.1 we study the minimisers for the case of Dirichlet boundary conditions on the left hand side and free boundary conditions on the right hand side, whereas in Section 2.2 we summarise our findings for the case of Dirichlet boundary conditions on both sides. In Section 2.3 we give the proofs of our statements.
2.1. Free boundary conditions on the right hand side. We consider Dirichlet boundary conditions on the left hand side and the free boundary condition on the right hand side (no terminal condition) only.
Let M^ε_f denote the set of minimisers of Σ^ε_f.
Proposition 2.1. For any boundary condition a = (a, α) on the left hand side, the set M^ε_f of minimisers of Σ^ε_f is a subset of
{h̄} ∪ {h_ℓ : ℓ ∈ (0,1)},        (2.1)
where for any ℓ ∈ (0,1) the functions h_ℓ ∈ H²_{a,f} are given by
h_ℓ(t) = h^{*,(0,ℓ)}_{(a,0)}(t) for t ∈ [0,ℓ),  and  h_ℓ(t) = 0 for t ∈ [ℓ,1],        (2.2)
and h̄ ∈ H²_{a,f} is the linear function h̄(t) = a + αt, t ∈ [0,1].
Note that h̄ does not pick up reward for any boundary condition a ≠ 0, whereas for a = 0 it takes the maximal reward. The function h_ℓ picks up the reward in [ℓ,1], see Figures 1-3. This motivates the following definitions. For any τ ∈ ℝ and a ∈ ℝ² we let
E^τ_{(a,0)}(ℓ) = E(h^{*,(0,ℓ)}_{(a,0)}) + τℓ,        (2.3)
and observe that for τ = τ(ε),
Σ^ε_f(h_ℓ) = E^{τ(ε)}_{(a,0)}(ℓ) − τ(ε).        (2.4)
Henceforth minimisers of Σ^ε_f are given by functions of type h_ℓ only if ℓ is a minimiser of the function E^τ_{(a,0)} in [0,1]. We collect an analysis of the latter function in the next proposition.
Proposition 2.2 (Minimisers of E^τ_{(a,0)}).
(a) For τ = 0 the function E⁰_{(a,0)} is strictly decreasing with lim_{ℓ→∞} E⁰_{(a,0)}(ℓ) = 0.
(b) For τ > 0 the function E^τ_{(a,0,0)}, a ≠ 0, has one local minimum at ℓ = ℓ₁(τ, a, 0) = √|a| (18/τ)^{1/4}, and the function E^τ_{(0,α,0)}, α ≠ 0, has one local minimum at ℓ = ℓ₁(τ, 0, α) = √(2/τ) |α|. In both cases there exists τ₁(a) such that ℓ₁(τ, a) ≤ 1 for all τ ≥ τ₁(a).
(c) For τ > 0 and a = (a, α) ∈ ℝ² with w = |a|/|α| ∈ (0, ∞) and s = sign(aα), the function E^τ_{(a,0)} has one local minimum at
ℓ = ℓ₁(τ, a) = (1/√(2τ)) (|α| + √(α² + 6|a|√(2τ)))
when s = 1, whereas for s = −1 the function E^τ_{(a,0)} has two local minima at
ℓ = ℓ₁(τ, a) = (1/√(2τ)) (−|α| + √(α² + 6|a|√(2τ)))  and  ℓ = ℓ₂(τ, a) = (1/√(2τ)) (|α| + √(α² − 6|a|√(2τ))),
where ℓ₂ is a local minimum only if τ ≤ α⁴/(72a²). In all cases there are τ_i(a) such that ℓ_i(τ, a) ≤ 1 for all τ ≥ τ_i(a), i = 1, 2.
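The closed-form minima in part (c) are easy to check numerically. The following sketch (ours) takes a = 1, α = −12 (the boundary data of Figures 1 and 2 below) and a reward τ = 200 < α⁴/(72a²) = 288, so that both local minima exist, and compares the formulas with a brute-force scan:

    import numpy as np

    def E_tau(ell, a, alpha, tau):
        # E^tau_{(a,0)}(ell) = (6a^2 + 6a*alpha*ell + 2*alpha^2*ell^2)/ell^3 + tau*ell, cf. (2.13)
        return (6*a**2 + 6*a*alpha*ell + 2*alpha**2*ell**2) / ell**3 + tau*ell

    a, alpha, tau = 1.0, -12.0, 200.0
    s2t = np.sqrt(2.0 * tau)
    ell1 = (-abs(alpha) + np.sqrt(alpha**2 + 6*abs(a)*s2t)) / s2t
    ell2 = ( abs(alpha) + np.sqrt(alpha**2 - 6*abs(a)*s2t)) / s2t

    grid = np.linspace(1e-3, 1.0, 200000)
    v = E_tau(grid, a, alpha, tau)
    mins = grid[1:-1][(v[1:-1] < v[:-2]) & (v[1:-1] < v[2:])]
    print(ell1, ell2)   # approx 0.2124 and 0.8450
    print(mins)         # the same two points, up to grid resolution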
We shall study the zero sets of all minimisers; that is, we need to check whether h_ℓ has zeroes in [0,ℓ) before picking up the reward in [ℓ,1].
Lemma 2.3. Let a > 0. Then the functions h^{*,(0,ℓ)}_{(a,α,0)} with α > 0, h^{*,(0,ℓ)}_{(0,α,0)} with α ≠ 0, and h^{*,(0,ℓ)}_{(a,−α,0)} with αℓ/a ∈ [0,3) have no zeroes in (0,ℓ), whereas the functions h^{*,(0,ℓ)}_{(a,−α,0)} with αℓ/a > 3 have exactly one zero in (0,ℓ). Analogous statements hold for a < 0.
There is a qualitative difference between the minimisers h_{ℓ₁} and h_{ℓ₂}, as the latter has a zero before picking up the reward on [ℓ₂, 1], see Figure 2.
In the following we write ε_i(a) for the value of the reward with τ(ε_i(a)) = τ_i(a) such that ℓ_i(τ_i(a), a) ≤ 1, i = 1, 2.
Theorem 2.4 (Minimisers of Σ^ε_f).
(a) If a = (a, 0), a ≠ 0, or a = (0, α), α ≠ 0, or w = |a|/|α| ∈ (0, ∞) with s = sign(aα) = 1, there exists ε*(a) > ε₁(a) such that
M^ε_f = {h̄}   for ε < ε*(a),
M^ε_f = {h̄, h_{ℓ₁}} with Σ^{ε*(a)}_f(h̄) = Σ^{ε*(a)}_f(h_{ℓ₁})   for ε = ε*(a),
M^ε_f = {h_{ℓ₁}}   for ε > ε*(a).
(b) Assume w = |a|/|α| ∈ (0, ∞) and s = sign(aα) = −1. There are τ₀(a) > 0 and τ₁*(a) > 0 such that the following statements hold.
[Figure 1: h_{ℓ₁} for a = 1 and α = −12, τ = 288, ℓ₁ = (1/2)(√2 − 1).]
[Figure 2: h_{ℓ₂} for a = 1 and α = −12, τ = 288, ℓ₂ = 1/2.]
[Figure 3: h_{ℓ₁} for a = α = 1 and τ = 288, ℓ₁ = (6/100)(1 + √101).]
(i) Let a ∈ D₁ := {a ∈ ℝ² : w ∈ (0, ∞) and τ₀(a) > τ₁*(a)}. Then there exist ε*_{1,2}(a) > 0 and ε*₂(a) > ε₂(a) with ε*₂(a) < ε*_{1,2}(a) such that
M^ε_f = {h̄}   for ε < ε*₂(a),
M^ε_f = {h̄, h_{ℓ₂}} with Σ^{ε*₂(a)}_f(h̄) = Σ^{ε*₂(a)}_f(h_{ℓ₂})   for ε = ε*₂(a),
M^ε_f = {h_{ℓ₂}}   for ε ∈ (ε*₂(a), ε*_{1,2}(a)),
M^ε_f = {h_{ℓ₁}, h_{ℓ₂}} with Σ^{ε*_{1,2}(a)}_f(h_{ℓ₁}) = Σ^{ε*_{1,2}(a)}_f(h_{ℓ₂})   for ε = ε*_{1,2}(a),
M^ε_f = {h_{ℓ₁}}   for ε > ε*_{1,2}(a).
(ii) Let a ∈ D₂ := {a ∈ ℝ² : w ∈ (0, ∞) and τ₀(a) = τ₁*(a) = τ₂*(a)}. Then for ε*_c(a) > ε₂(a) with τ(ε*_c(a)) = τ₀(a),
M^ε_f = {h̄}   for ε < ε*_c(a),
M^ε_f = {h̄, h_{ℓ₁}, h_{ℓ₂}} with Σ^{ε*_c(a)}_f(h̄) = Σ^{ε*_c(a)}_f(h_{ℓ₁}) = Σ^{ε*_c(a)}_f(h_{ℓ₂})   for ε = ε*_c(a),
M^ε_f = {h_{ℓ₁}}   for ε > ε*_c(a).
(iii) Let a ∈ D₃ := {a ∈ ℝ² : w ∈ (0, ∞) and τ₀(a) < τ₁*(a)}. Then for ε*₁(a) > 0 with τ(ε*₁(a)) = τ₁*(a),
M^ε_f = {h̄}   for ε < ε*₁(a),
M^ε_f = {h̄, h_{ℓ₁}} with Σ^{ε*₁(a)}_f(h̄) = Σ^{ε*₁(a)}_f(h_{ℓ₁})   for ε = ε*₁(a),
M^ε_f = {h_{ℓ₁}}   for ε > ε*₁(a).
Remark 2.5. We have seen that the rate function Σ^ε_f can have up to three minimisers attaining the same value of the rate function. See Figures 1-2 for examples of these functions. The minimiser in Figure 1 has no single zero before picking up the reward. Note that the existence of the minimiser with a single zero before picking up the reward (see Figure 2) depends on the chosen boundary conditions. This minimiser only exists if the gradient at 0 has the opposite sign to the value at zero. See Figure 3 for an example where the gradient has the same sign as the value of the function at zero. The minimiser h_{ℓ₁} is the global minimiser if the reward is increased unboundedly.
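The profiles in Figures 1 and 2 can be reproduced qualitatively from the explicit form of the minimiser: on [0,ℓ) the function h_ℓ is the clamped cubic solving h⁗ = 0 with h(0) = a, ḣ(0) = α, h(ℓ) = ḣ(ℓ) = 0 (cf. Proposition A.1). A plotting sketch under this assumption (ours; names are illustrative):

    import numpy as np
    import matplotlib.pyplot as plt

    def h_ell(t, a, alpha, ell):
        # clamped cubic on [0, ell), glued to 0 on [ell, 1], as in (2.2)
        u = np.clip(t / ell, 0.0, 1.0)
        cubic = a * (2*u**3 - 3*u**2 + 1) + alpha * ell * (u**3 - 2*u**2 + u)
        return np.where(t < ell, cubic, 0.0)

    t = np.linspace(0.0, 1.0, 400)
    plt.plot(t, h_ell(t, 1.0, -12.0, 0.5*(np.sqrt(2.0) - 1.0)), label="Figure 1: h_ell1")
    plt.plot(t, h_ell(t, 1.0, -12.0, 0.5), label="Figure 2: h_ell2")
    plt.legend()
    plt.show()

The second curve dips below zero before ℓ₂ = 1/2, the single zero described in Lemma 2.3 (here αℓ/a = 6 > 3).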
2.2. Dirichlet boundary. We consider Dirichlet boundary conditions on both sides, given by the vector r = (a, α, b, β) = (a, b). In a similar way to Section 2.1 for free boundary conditions on the right hand side, we define functions h_{ℓ,r} ∈ H²_r for any ℓ, r ≥ 0 with ℓ + r ≤ 1 by
h_{ℓ,r}(t) = h^{*,(0,ℓ)}_{a,0}(t) for t ∈ [0,ℓ),   h_{ℓ,r}(t) = 0 for t ∈ [ℓ, 1−r],   h_{ℓ,r}(t) = h^{*,(1−r,1)}_{0,b}(t) for t ∈ (1−r, 1].        (2.5)
Furthermore, we define the following energy function depending only on ℓ and r,
E(ℓ, r) = E(h^{*,(0,ℓ)}_{a,0}) + E(h^{*,(1−r,1)}_{0,b}) − τ(ε)(1 − ℓ − r),        (2.6)
and using (2.3) we get
E(ℓ, r) = E^{τ(ε)}_{(a,0)}(ℓ) + E^{τ(ε)}_{(0,b,−β)}(r) − τ(ε) = E^{τ(ε)}_{(a,0)}(ℓ) + E^{τ(ε)}_{(b,−β,0)}(r) − τ(ε),        (2.7)
where β is replaced by −β due to symmetry, that is, using that h^{*,(1−r,1)}_{(0,b)}(t) = h^{*,(1−r,1)}_{(b,−β,0)}(2−r−t) = h^{*,(0,r)}_{(b,−β,0)}(1−t) for t ∈ [1−r, 1]. Hence
Σ^ε(h_{ℓ,r}) = E(ℓ, r).
For a given boundary condition r = (a, α, b, β) the function h*_r ∈ H²_r given in Proposition A.1 does not pick up any reward in [0,1].
Proposition 2.6. For any Dirichlet boundary condition r ∈ ℝ⁴ the set M^ε of minimisers of the rate function Σ^ε in H²_r is a subset of
{h_{ℓ,r}, h*_r : ℓ + r ≤ 1},
where ℓ and r are minimisers of E^{τ(ε)} as in Proposition 2.2.
[Figure 4: h_{ℓ₁} for a = b = 1 and α = −12, β = 12, τ = 288, ℓ₁ = (1/2)(√2 − 1).]
[Figure 5: h_{ℓ₂} for a = b = 1 and α = −12, β = 12, τ = 288, ℓ₂ = 1/2.]
Proposition 2.6 allows us to reduce the optimisation of the rate function Σ^ε to the minimisation of the function E defined in (2.7) for 0 ≤ ℓ + r ≤ 1. The general problem involves up to five parameters, including the boundary conditions r ∈ ℝ⁴ and the pinning free energy τ(ε) for the reward ε. It involves studying several different sub-cases; in order to demonstrate the key features of the whole minimisation problem, we study only a special case in the following and only outline how the general case can be approached.
The symmetric case r = (a, α, a, −α). It is straightforward to see that
Σ^ε(h_{ℓ_i,ℓ_j}) = E(ℓ_i, ℓ_j) = E^{τ(ε)}_{(a,α,0)}(ℓ_i) + E^{τ(ε)}_{(a,α,0)}(ℓ_j) − τ(ε),   i, j = 1, 2.        (2.8)
Clearly the unique minimiser h*_r(t) = a + αt − αt² of E has the symmetry h*_r(1/2 − t) = h*_r(1/2 + t) for t ∈ [0, 1/2]. The function E is not convex, and thus we distinguish two different sets of parameters (a, τ(ε)) ∈ ℝ³ according to whether (i) ℓ_i(τ(ε), a) ≤ 1/2 for i = 1, 2, or (ii) ℓ₂(τ(ε), a) > 1/2 > ℓ₁(τ(ε), a). There are no other cases for the parameters, due to the condition ℓ₁ + ℓ₂ ≤ 1 and the fact that ℓ₂(τ(ε), a) > ℓ₁(τ(ε), a).
Parameter regime (i):
D₁ := {(a, τ) ∈ ℝ³ : ℓ₁(τ, a) ≤ 1/2, and ℓ₂(τ, a) ≤ 1/2 if ℓ₂(τ, a) is a local minimum of E^τ_{(a,0)}}.
Parameter regime (ii):
D₂ := {(a, τ) ∈ ℝ³ : 1 ≥ ℓ₂(τ(ε), a) > 1/2 > ℓ₁(τ(ε), a) > 0, τ ≤ α⁴/(72a²)}.
We shall define the following values before stating our results. There are ε_i(a) such that ℓ_i(τ(ε), a) ≤ 1/2 for all ε ≥ ε_i(a), i = 1, 2. We denote by τ₁*(a) = τ(ε₁*(a)) the unique value of τ such that
E^{τ₁*(a)}(ℓ₁(τ₁*(a), a)) − (1/2)τ₁*(a) = (1/2)E(h*_r).        (2.9)
Likewise, we denote by τ₂*(a) the unique value of τ such that E^{τ₂*(a)}(ℓ₂(τ₂*(a), a)) − (1/2)τ₂*(a) = (1/2)E(h*_r) when such a value exists in ℝ; otherwise we put τ₂*(a) = ∞. We denote by τ₀(a) the unique zero from Lemma 2.10(a) of the difference ∆(τ) = E^τ(ℓ₁(τ, a)) − E^τ(ℓ₂(τ, a)).
Theorem 2.7 (Minimisers of Σ^ε, symmetric case). Let r = (a, α, a, −α).
(a) If a = (a, 0), a ≠ 0, or a = (0, α), α ≠ 0, or w = |a|/|α| ∈ (0, ∞) with sign(aα) = 1 and ε ≥ ε₁(a), then (a, τ(ε)) ∈ D₁ and there is ε₁*(a) > ε₁(a) such that
M^ε = {h*_r}   for ε < ε₁*(a),
M^ε = {h*_r, h_{ℓ₁,ℓ₁}} with Σ^{ε₁*(a)}(h*_r) = Σ^{ε₁*(a)}(h_{ℓ₁,ℓ₁})   for ε = ε₁*(a),
M^ε = {h_{ℓ₁,ℓ₁}}   for ε > ε₁*(a).
(b) Assume w = |a|/|α| ∈ (0, ∞) and s = sign(aα) = −1. There are τ₀(a) > 0 and τ₁*(a) > 0 such that the following statements hold.
(i) Let a ∈ D₁ := {a ∈ ℝ² : w ∈ (0, ∞) and τ₀(a) > τ₁*(a)}. Then there exists ε̃_{1,2}(a) > 0 such that (a, τ(ε)) ∈ D₂ for all ε ∈ (ε̃_{1,2}(a), ε₂(a)) and (a, τ(ε)) ∈ D₁ for ε ≥ ε₂(a). Then there exist ε*_{1,2}(a) > 0 and ε*₂(a) > 0 with ε*₂(a) < ε*_{1,2}(a) such that
M^ε = {h*_r}   for ε < ε̃_{1,2}(a),
M^ε = {h*_r, h_{ℓ₂,ℓ₁}} with either Σ^ε(h*_r) ≤ Σ^ε(h_{ℓ₂,ℓ₁}) or Σ^ε(h*_r) > Σ^ε(h_{ℓ₂,ℓ₁})   for ε ∈ (ε̃_{1,2}(a), ε₂(a)),
M^ε = {h*_r, h_{ℓ₂,ℓ₂}} with Σ^{ε*₂(a)}(h*_r) = Σ^{ε*₂(a)}(h_{ℓ₂,ℓ₂})   for ε = ε*₂(a),
M^ε = {h_{ℓ₂,ℓ₂}}   for ε ∈ (ε*₂(a), ε*_{1,2}(a)),
M^ε = {h_{ℓ₁,ℓ₁}, h_{ℓ₂,ℓ₂}} with Σ^{ε*_{1,2}(a)}(h_{ℓ₁,ℓ₁}) = Σ^{ε*_{1,2}(a)}(h_{ℓ₂,ℓ₂})   for ε = ε*_{1,2}(a),
M^ε = {h_{ℓ₁,ℓ₁}}   for ε > ε*_{1,2}(a).
(ii) Let a ∈ D₂ := {a ∈ ℝ² : w ∈ (0, ∞) and τ₀(a) = τ₁*(a) = τ₂*(a)}. Then there exists ε̃_{1,2}(a) > 0 such that (a, τ(ε)) ∈ D₂ for all ε ∈ (ε̃_{1,2}(a), ε₂(a)) and (a, τ(ε)) ∈ D₁ for ε ≥ ε₂(a). Then there exists ε*_c(a) > 0 with τ(ε*_c(a)) = τ₀(a) and ε*_c(a) ≥ ε₂(a) such that
M^ε = {h*_r}   for ε < ε̃_{1,2}(a),
M^ε = {h*_r, h_{ℓ₂,ℓ₁}} with either Σ^ε(h*_r) ≤ Σ^ε(h_{ℓ₂,ℓ₁}) or Σ^ε(h*_r) > Σ^ε(h_{ℓ₂,ℓ₁})   for ε ∈ (ε̃_{1,2}(a), ε₂(a)),
M^ε = {h*_r}   for ε ∈ (ε₂(a), ε*_c(a)),
M^ε = {h*_r, h_{ℓ₁,ℓ₁}, h_{ℓ₂,ℓ₂}, h_{ℓ₁,ℓ₂}, h_{ℓ₂,ℓ₁}} with Σ^{ε*_c(a)}(h*_r) = Σ^{ε*_c(a)}(h_{ℓ₁,ℓ₁}) = Σ^{ε*_c(a)}(h_{ℓ₂,ℓ₂}) = Σ^{ε*_c(a)}(h_{ℓ₁,ℓ₂}) = Σ^{ε*_c(a)}(h_{ℓ₂,ℓ₁})   for ε = ε*_c(a),
M^ε = {h_{ℓ₁,ℓ₁}}   for ε > ε*_c(a).
(iii) Let a ∈ D₃ := {a ∈ ℝ² : w ∈ (0, ∞) and τ₀(a) < τ₁*(a)}. Then there exists ε̃_{1,2}(a) > 0 such that (a, τ(ε)) ∈ D₂ for all ε ∈ (ε̃_{1,2}(a), ε₁(a)) and (a, τ(ε)) ∈ D₁ for ε ≥ ε₁(a). Then there exists ε₁*(a) > ε₁(a) with τ(ε₁*(a)) = τ₁*(a) such that
M^ε = {h*_r}   for ε < ε̃_{1,2}(a),
M^ε = {h*_r, h_{ℓ₂,ℓ₁}} with either Σ^ε(h*_r) ≤ Σ^ε(h_{ℓ₂,ℓ₁}) or Σ^ε(h*_r) > Σ^ε(h_{ℓ₂,ℓ₁})   for ε ∈ (ε̃_{1,2}(a), ε₁*(a)),
M^ε = {h*_r, h_{ℓ₁,ℓ₁}} with Σ^{ε₁*(a)}(h*_r) = Σ^{ε₁*(a)}(h_{ℓ₁,ℓ₁})   for ε = ε₁*(a),
M^ε = {h_{ℓ₁,ℓ₁}}   for ε > ε₁*(a).
Remark 2.8 (General boundary conditions). For general boundary conditions r = (a, α, b, β) one can apply the same techniques as in the symmetric case. Thus minimisers of Σ^ε are elements of
{h*_r, h_{ℓ,ℓ}, h_{r,r}, h_{ℓ,r}, h_{r,ℓ} : ℓ + r ≤ 1}.
Remark 2.9 (Concentration of measure). The large deviation principle in Theorem 1.5 immediately implies the concentration properties for γ_N = γ^r_{N,ε} and γ_N = γ^a_{N,ε}:
lim_{N→∞} γ_N(dist_∞(h_N, M^ε) ≤ δ) = 1,        (2.10)
for every δ > 0, where M^ε = {h* : h* minimiser of I} with I = I^ε and I = I^ε_f, respectively, and dist_∞ denotes the distance under ‖·‖_∞. More precisely, for any δ > 0 there exists c(δ) > 0 such that
γ_N(dist_∞(h_N, M^ε) > δ) ≤ e^{−c(δ)N}
for large enough N. We say that two functions h₁, h₂ ∈ M^ε coexist in the limit N → ∞ under γ_N with probabilities λ₁, λ₂ > 0, λ₁ + λ₂ = 1, when
lim_{N→∞} γ_N(‖h_N − h_i‖_∞ ≤ δ) = λ_i,   i = 1, 2,
holds for small enough δ > 0. The same applies to the free boundary case on the right hand side and its set of minimisers M^ε_f. For gradient models with quadratic interaction (Gaussian), the authors of [BFO09] have investigated this concentration of measure problem and obtained statements depending on the dimension m of the underlying random walk (i.e. (1 + m)-dimensional models). They use finer estimates than one employs for the large deviation principle; in particular they make use of a renewal property of the partition functions. In our setting of Laplacian interaction the renewal structure of the partition functions is different and requires a different type of estimates. In addition, the concentration of measure problem requires studying all cases of possible minimisers. We devote this study to future work.
2.3. Proofs: Variational analysis.
2.3.1. Free boundary condition. Proof of Proposition 2.1. Suppose that h ∈ H²_a is not an element of the set (2.1). It is easy to see that there is then at least one function h* in the set (2.1) with
Σ^ε_f(h*) < Σ^ε_f(h).        (2.11)
For Σ^ε_f(h) < ∞ we distinguish two cases. If |N_h| = 0 we get
Σ^ε_f(h) = E(h) > 0 = E(h̄) = Σ^ε_f(h̄),
by noting that h̄ is the unique function in H²_a with E(h̄) = 0. If |N_h| > 0 we argue as follows. Let ℓ be the infimum and r the supremum of the accumulation points of N_h, and note that ℓ, r ∈ N_h. Since |N_h ∩ [ℓ,r]^c| = 0 we have
Σ^ε_f(h) = E_{[0,ℓ]}(h) + E_{(ℓ,r)}(h) − τ(ε)|{t ∈ (ℓ,r) : h(t) = 0}| + E_{[r,1]}(h).
As ℓ, r ∈ N_h we have ḣ(ℓ) = ḣ(r) = 0, since the difference quotient vanishes because ℓ and r are accumulation points of N_h. Thus the restrictions of h and h* = h_ℓ to [0,ℓ] are elements of H²_{(a,0)}. By the optimality of h^{*,(0,ℓ)}_{(a,0)}, inequality (2.11) is satisfied for h* = h_ℓ. □
Proof of Proposition 2.2. The following scaling relations hold for ℓ > 0 (in our cases ℓ ∈ (0,1)) and a = (a, α):
h^{*,(0,ℓ)}_{(a,0)}(t) = h^{*,(0,1)}_{(a,ℓα,0)}(t/ℓ)   for t ∈ [0,ℓ].        (2.12)
Using this and Proposition A.1 with r = (a, ℓα, 0, 0) we obtain
E^{(0,ℓ)}(h^{*,(0,ℓ)}_{(a,0)}) = (1/(2ℓ³)) ∫₀¹ (ḧ*_r(t))² dt = (1/ℓ³) (6a² + 6aαℓ + 2α²ℓ²),
and thus
E^τ_{(a,0)}(ℓ) = (1/ℓ³) (6a² + 6aαℓ + 2α²ℓ²) + τℓ,
(d/dℓ) E^τ_{(a,0)}(ℓ) = −18a²/ℓ⁴ − 12aα/ℓ³ − 2α²/ℓ² + τ = −(2/ℓ⁴) (3a + αℓ − √(τ/2) ℓ²)(3a + αℓ + √(τ/2) ℓ²),
(d²/dℓ²) E^τ_{(a,0)}(ℓ) = (4/ℓ⁵) (6a + αℓ)(3a + αℓ).        (2.13)
The derivative has the following zeroes:
ℓ_{1/2} = (α ± √(α² + 6a√(2τ)))/√(2τ),   ℓ_{3/4} = (−α ± √(α² − 6a√(2τ)))/√(2τ).
(a) Our calculations (2.13) imply (d/dℓ) E_{(a,0)}(ℓ) < 0 for a ∈ ℝ² \ {0} and lim_{ℓ→∞} E_{(a,0)}(ℓ) = 0. If a = α = 0, then E_{(0,0,0)}(ℓ) = 0 for all ℓ.
(b) If a ≠ 0 and α = 0, our calculations (2.13) imply that the function has a local minimum at ℓ = ℓ₁(τ, a) = √|a| (18/τ)^{1/4}, whereas for a = 0 and α ≠ 0 the function has a local minimum at ℓ = ℓ₁(τ, a) = √(2/τ) |α|.
(c) Let w = |a|/|α| ∈ (0, ∞) and s = 1. Then (2.13) shows that the function has a local minimum at ℓ = ℓ₁(τ, a) = (1/√(2τ)) (|α| + √(α² + 6|a|√(2τ))). If s = −1 we get a local minimum at ℓ = ℓ₁(τ, a) = (1/√(2τ)) (−|α| + √(α² + 6|a|√(2τ))) and, in case τ ≤ α⁴/(72a²), a second local minimum at ℓ = ℓ₂(τ, a) = (1/√(2τ)) (|α| + √(α² − 6|a|√(2τ))). Note that ℓ₁(τ, a) < ℓ₂(τ, a) whenever ℓ₂(τ, a) is a local minimum. This follows immediately from the second derivative, which is positive whenever ℓ_i(τ, a) ≤ 3a/|α| or ℓ_i(τ, a) ≥ 6a/|α| for a > 0 > α and i = 1, 2. □
Proof of Lemma 2.3. We use the scaling property
h^{*,(0,ℓ)}_{(a,0)}(t) = a h^{*,(0,1)}_{(1,sw^{−1}ℓ,0)}(t/ℓ)   for a ≠ 0 and t ∈ [0,ℓ], s = sign(aα), w = |a|/|α|,
h^{*,(0,ℓ)}_{(a,0)}(t) = h^{*,(0,1)}_{(0,αℓ,0)}(t/ℓ)   for a = 0 and t ∈ [0,ℓ],        (2.14)
and show the following equivalent statements: the functions h*_{(1,ℓ,0)} with ℓ > 0, h*_{(0,ℓ,0)} with ℓ ∈ ℝ \ {0}, and h*_{(1,−ℓ,0)} with ℓ ∈ [0,3) have no zeroes in (0,1), whereas the functions h*_{(1,−ℓ,0)} with ℓ > 3 have exactly one zero in (0,1). Thus we study the unique minimiser of E given in Proposition A.1; that is, we consider first the functions h*_{(1,sℓ,0)} for s ∈ {−1,1} and ℓ > 0. The function h*_{(1,sℓ,0)} has a zero in (0,1) if and only if it has a local minimum at which it takes a negative value. Its derivative has at most one zero in (0,1), as by Proposition A.1 the derivative
ḣ*_r(t) = ℓs + 2(−3 − 2ℓs)t + 3(2 + ℓs)t²
is zero at t = 1 for the boundary condition given by r = (1, sℓ, 0, 0). Now for s = 1 the local extremum is a maximum, as the function value at t = 0 is greater than its value at t = 1, and thus the derivative changes sign from positive to negative. For s = −1 and ℓ ≤ 3 there is no local extremum, as the first derivative is zero only at t = 1 and has no second zero in (0,1), and the second derivative ḧ*_r(1) = 6 − 2ℓ at t = 1 is strictly positive. Thus the derivative takes only negative values in [0,1) and is zero at t = 1. For s = −1 and ℓ > 3 there is a local minimum, as the second derivative at t = 1 is now strictly negative, implying that the first derivative changes sign from negative to positive and thus has a zero at which the function value is negative. The functions h*_{(0,ℓ,0)} have no zero in (0,1) for ℓ ≠ 0 by definition, as the only zeroes are t = 0 and t = 1. □
Proof of Theorem 2.4. (a)(i) Let α = 0 and a ≠ 0. Note that ℓ₁(τ, a) ≤ 1 if and only if τ ≥ 18|a|². Let ε₁(a) be the maximum of ε_c and this lower bound. We write τ = τ(ε). Now Σ^ε_f(h_{ℓ₁}) = 0 if and only if τ = τ* with
(6√|a|)/18^{3/4} + 18^{1/4} √|a| = (τ*)^{1/4},
and we easily see that Σ^ε_f(h_{ℓ₁(τ,a)}) < 0 for all τ > τ*.
(ii) Now let a = 0 and α ≠ 0. Note that ℓ₁(τ, a) ≤ 1 if and only if √τ ≥ √2 |α|, and thus let ε₁(a) be the maximum of ε_c and 2|α|². Now Σ^ε_f(h_{ℓ₁}) = 0 if and only if τ = τ*(a) := 8|α|², and Σ^ε_f(h_{ℓ₁(τ,a)}) < 0, as (d/dτ) Σ^τ_f(h_{ℓ₁(τ,a)}) < 0, for all τ > τ*.
(iii) Now let s = sign(aα) = 1 and assume that a, α > 0 (the case a, α < 0 follows analogously). As ℓ₁(τ, a) is decreasing in τ > 0, there is ε₁(a) ≥ ε_c such that ℓ₁(τ, a) ≤ 1 for all τ ≥ τ₁(a). Lemma 2.10(b) shows that there exists ε₁*(a) such that Σ^{ε₁*(a)}_f(h_{ℓ₁(τ₁*,a)}) = 0, and the uniqueness of that zero gives Σ^ε_f(h_{ℓ₁(τ(ε),a)}) < 0 for all ε > ε₁*(a).
(b) Let s = sign(aα) = −1 and assume a > 0 > α (the other case follows analogously). Clearly we have Σ^{ε_i(a)}_f(h_{ℓ_i(τ(ε_i(a)),a)}) > 0, as ℓ_i(τ(ε_i(a)), a) = 1 for i = 1, 2, and for any ε > ε_i(a) we have ℓ_i(τ(ε), a) < 1 and thus
(d/dτ) f_i(τ) = ℓ_i(τ, a) − 1 < 0,   where we write f_i(τ) = Σ^τ_f(h_{ℓ_i(τ,a)}), i = 1, 2.
Furthermore, due to Lemma 2.10 there is a unique τ₀ = τ₀(a) such that
f₁(τ) ≥ f₂(τ) for τ ≤ τ₀, f₁(τ) ≤ f₂(τ) for τ ≥ τ₀, and f₁(τ₀) = f₂(τ₀).
We thus know that f₁ is decreasing and that f₁(τ) → −∞ as τ → ∞. As f₁(τ₁(a)) > 0 there must be at least one zero, which we denote by τ₁*(a) and write as τ(ε₁*(a)). The uniqueness of τ₁*(a) is shown in Lemma 2.10(b). Similarly, we denote by τ₂*(a) the zero of f₂ when this zero exists (otherwise we set it equal to infinity); one can show uniqueness of this zero in the same way as done for τ₁*(a) in Lemma 2.10(b). We can now distinguish three cases according to the sign of the functions f₁ and f₂ at the unique zero τ₀ of the difference ∆ = f₁ − f₂. That is, we distinguish whether τ₀(a) is greater than, equal to, or less than the unique zero τ₁*(a) of f₁.
(i) Let a ∈ D₁ := {a ∈ ℝ² : w ∈ (0, ∞) and τ₀(a) > τ₁*(a)}. Then f₁(τ₀(a)) = f₂(τ₀(a)) < 0, and thus τ₂*(a) exists and satisfies τ₂*(a) < τ₁*(a). This immediately implies the statement by choosing ε*_{1,2}(a) and ε*₂(a) such that τ(ε*_{1,2}(a)) = τ₀(a) and τ(ε*₂(a)) = τ₂*(a).
(ii) Let a ∈ D₂ := {a ∈ ℝ² : w ∈ (0, ∞) and τ₀(a) = τ₁*(a) = τ₂*(a)}. Then f₁(τ₀(a)) = f₂(τ₀(a)) = 0, and thus for ε*_c(a) with τ(ε*_c(a)) = τ₀(a) we get Σ^{ε*_c(a)}_f(h_{ℓ₁}) = Σ^{ε*_c(a)}_f(h_{ℓ₂}) = Σ^{ε*_c(a)}_f(h̄) = 0. Then Lemma 2.10(a) gives Σ^ε_f(h_{ℓ₁}) < Σ^ε_f(h_{ℓ₂}) < 0 for all ε > ε*_c(a).
(iii) Let a ∈ D₃ := {a ∈ ℝ² : w ∈ (0, ∞) and τ₀(a) < τ₁*(a)}. Then f₁(τ₀(a)) = f₂(τ₀(a)) > 0, and for ε₁*(a) with τ(ε₁*(a)) = τ₁*(a) we get Σ^{ε₁*(a)}_f(h_{ℓ₁}) = Σ^{ε₁*(a)}_f(h̄) = 0, as well as Σ^ε_f(h_{ℓ₁}) < 0 and Σ^ε_f(h_{ℓ₁}) < Σ^ε_f(h_{ℓ₂}) for ε > ε₁*(a). □
Lemma 2.10.
(a) For any a ∈ ℝ² with w = |a|/|α| ∈ (0, ∞) and τ ∈ (0, α⁴/(72a²)], the function
∆(τ) := E^τ(ℓ₁(τ, a)) − E^τ(ℓ₂(τ, a))
has a unique zero, called τ₀; it is strictly decreasing and strictly positive for τ < τ₀.
(b) For any a ∈ ℝ² with w ∈ (0, ∞) there is a unique solution of
Σ^ε_f(h_{ℓ₁(τ(ε),a)}) = 0,   τ = τ(ε) ≥ τ₁(a),        (2.15)
which we denote by τ₁* = τ(ε₁*(a)).
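A numerical illustration (ours) of part (a): for a = 1, α = −12, the difference ∆ decreases monotonically on (0, α⁴/(72a²)] and changes sign exactly once.

    import numpy as np

    def E_tau(ell, a, alpha, tau):
        return (6*a**2 + 6*a*alpha*ell + 2*alpha**2*ell**2) / ell**3 + tau*ell

    def ell_12(a, alpha, tau):
        s2t = np.sqrt(2.0 * tau)
        return ((-abs(alpha) + np.sqrt(alpha**2 + 6*abs(a)*s2t)) / s2t,
                ( abs(alpha) + np.sqrt(alpha**2 - 6*abs(a)*s2t)) / s2t)

    a, alpha = 1.0, -12.0
    taus = np.linspace(1.0, alpha**4 / (72.0 * a**2), 2000)
    delta = np.array([E_tau(ell_12(a, alpha, t)[0], a, alpha, t)
                      - E_tau(ell_12(a, alpha, t)[1], a, alpha, t) for t in taus])
    print(np.all(np.diff(delta) < 0), delta[0] > 0, delta[-1] < 0)   # True True True
    print(taus[np.argmin(np.abs(delta))])                            # approx tau_0(a)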
Proof of Lemma 2.10. (a) The sign of the function ∆ is positive as τ → 0, whereas the sign is negative at τ = α⁴/(72a²). Hence the continuous function ∆ changes its sign and must have a zero. We obtain the uniqueness of this zero by showing that the function ∆ is strictly decreasing. For fixed τ we have (Proposition 2.2)
(d/dℓ) E^τ(ℓ) = 0   for ℓ = ℓ₁(τ, a) or ℓ = ℓ₂(τ, a).
The functions E^τ(ℓ_i(τ, a)) are rational functions of ℓ_i(τ, a) and depend explicitly on τ as well. Thus the chain rule gives
(d/dτ) E^τ(ℓ_i(τ)) = ℓ_i(τ),   i = 1, 2 and τ ∈ (0, α⁴/(72a²)].
As ℓ₁(τ, a) < ℓ₂(τ, a), the first derivative of ∆ is negative on (0, α⁴/(72a²)].
(b) We let τ₁* = τ(ε₁*(a)) denote a solution of (2.15). As the rate function is strictly positive for vanishing τ and lim_{τ→∞} Σ^τ_f(h_{ℓ₁(τ,a)}) = −∞, we shall check whether there is a second solution to (2.15). Suppose there are τ(ε) > τ(ε′) solving (2.15) with
Σ^ε_f(h_{ℓ₁(τ(ε),a)}) = Σ^{ε′}_f(h_{ℓ₁(τ(ε′),a)}).        (2.16)
For fixed ℓ the function τ ↦ Σ^ε_f(h_ℓ) = E^τ_{(a,0)}(ℓ) − τ is strictly decreasing, and thus
Σ^{ε′}_f(h_ℓ) > Σ^ε_f(h_ℓ)   for ℓ = ℓ₁(τ(ε′)).        (2.17)
Now Proposition 2.2 gives
Σ^ε_f(h_{ℓ₁(τ(ε′),a)}) ≥ min_{ℓ∈(0,1)} Σ^ε_f(h_ℓ) = Σ^ε_f(h_{ℓ₁(τ(ε),a)}).
Combining (2.16) and (2.17) we arrive at a contradiction, and thus the solution of (2.15) is unique. Hence Σ^ε_f(h_{ℓ₁(τ(ε),a)}) < 0 for all τ(ε) > τ₁* = τ(ε₁*(a)). □
2.3.2. Dirichlet boundary conditions. Proof of Proposition 2.6. We argue as in our proof of Proposition 2.1, using (2.7) and observing that for any h ∈ H²_r, with ℓ being the infimum of the accumulation points of N_h and 1 − r being the corresponding supremum,
Σ^ε(h) = E^{(0,ℓ)}(h) − τ(ε)(1 − ℓ − r) + E^{(1−r,1)}(h) = E^{τ(ε)}_{(a,0)}(ℓ) + E^{τ(ε)}_{(b,−β,0)}(r) − τ(ε) = E(ℓ, r).
The second statement follows from the Hessian of E being the product
(∂²/∂ℓ²) E^{τ(ε)}_{(a,0)}(ℓ) · (∂²/∂r²) E^{τ(ε)}_{(b,−β,0)}(r)
of the second derivatives of the functions E^{τ(ε)} (see Proposition 2.2). □
Proof of Theorem 2.7. (a) We first note that, due to convexity of E, the solutions h*_r for boundary conditions r = (a, α, a, −α) are symmetric with respect to the vertical line t = 1/2. Furthermore, in all three cases of (a), only ℓ₁(τ, a) is a minimiser of E^τ and thus of E, due to the symmetric boundary conditions; the Hessian (see above) of the energy function E in (2.7) is then positive, implying convexity. Henceforth, when ℓ₁(τ, a) is a minimiser of E, the corresponding minimising function (see Proposition 2.6) of the rate function has to be symmetric with respect to the vertical line t = 1/2. These observations immediately give the proofs for all three cases in (a) of Theorem 2.7, because symmetric minimisers exist only if ℓ₁(τ, a) ≤ 1/2. Hence we conclude with Theorem 2.4 and ℓ₁(τ, a) ≤ 1/2 for ε ≥ ε₁(a), using the existence of τ₁*(a) solving (2.9). The existence and uniqueness of τ₁*(a) can be shown using an adaptation of Lemma 2.10(b).
We are left to show the three sub-cases (i)-(iii) of (b) in Theorem 2.7. In all these cases we argue differently depending on the parameter regime. If (a, τ) ∈ D₁ we can argue as follows. If ℓ₁(τ, a) and ℓ₂(τ, a) both exist and are minimisers of the energy function E, we obtain convexity as above (the mixed derivatives vanish due to the fact that E is a sum of functions of the single variables). Then we can argue as above and conclude with our statements for all three sub-cases in parameter regime D₁ with ℓ_i(τ, a) ≤ 1/2, i = 1, 2.
The only other case for the minimisers of E^τ is 1 ≥ ℓ₂(τ(ε), a) > 1/2 > ℓ₁(τ(ε), a) > 0, which gives a candidate minimiser of Σ^ε that is not symmetric with respect to the vertical line t = 1/2. It is clear that at the boundary of D₂, namely ℓ₁ + ℓ₂ = 1, we get E(h*_r) < E(ℓ₂(τ(ε), a), ℓ₁(τ(ε), a)). Depending on the values of the boundary conditions and the value of τ(ε), the minimiser can be either h*_r or the non-symmetric function h_{ℓ₂,ℓ₁}, or both. As outlined in [Ant05] for elastic rods, which pose similar variational problems, there are no general statements about the minimisers in this regime; for any given values of the parameters one can check by computation which function has the lower numerical value.
3. Proofs: Large deviation principles
This section delivers our proofs of the large deviation theorems. In Section 3.1 we prove the extension of Mogulskii's theorem to integrated random walks and integrated random walk bridges, where in the Gaussian bridge case we use Gaussian calculus, respectively an explicit representation of the bridge distribution from [GSV05]. In Section 3.2 we prove our main large deviation results in Theorem 1.5 for our models with pinning, using novel techniques involving precise estimates for double zeros and single zeros, expansion, and Gaussian calculus.
3.1. Sample path large deviation for integrated random walks and integrated random walk bridges. We show Theorem 1.2 by using the contraction principle and an adaptation of Mogulskii’s theorem ([DZ98, Chapter 5.1]).
(a) Recall the integrated random walk representation in Section 1.2.2 and define a family of random variables indexed by t as
Ỹ_N(t) = (1/N) Y_{⌊Nt⌋+1},   0 ≤ t ≤ 1,
and let μ_N be the law of Ỹ_N in L^∞([0,1]). From Mogulskii's theorem [DZ98, Theorem 5.1.2] we obtain that the μ_N satisfy in L^∞([0,1]) the LDP with the good rate function
I^M(h) = ∫₀¹ Λ*(ḣ(t)) dt   if h ∈ AC, h(0) = α,   and   I^M(h) = ∞ otherwise,
where AC denotes the space of absolutely continuous functions. Clearly, the laws μ_N (as well as the laws in Theorem 1.2) are supported on the space of functions continuous from the right and having left limits, of which the domains of all our rate functions are subsets. Thus the LDP also holds in this space when equipped with the supremum norm topology. Furthermore, there are exponentially equivalent laws (in L^∞([0,1])) fully supported on C([0,1]; ℝ) (see [DZ98, Chapter 5.1]). Then the LDPs for the equivalent laws hold in C([0,1]; ℝ) as well, and as C([0,1]; ℝ) is a closed subset of L^∞([0,1]), the same LDP holds also in L^∞([0,1]). The empirical profiles are functions of the integrated random walk (Z_n)_{n∈ℕ₀}, and
h_N(t) = (1/N²) Z_{⌊Nt⌋} + (1/N) ∫_{⌊Nt⌋/N}^{t} (Z_{⌊Ns⌋+1} − Z_{⌊Ns⌋}) ds = ∫₀^t Ỹ_N(s) ds.
The contraction principle for the integral mapping immediately gives the rate function for the LDP for the empirical profiles h_N under the above laws via the following infimum:
J(h) = inf_{g∈S_h} I^M(g),   with S_h = {g ∈ L^∞([0,1]) : ∫₀^t g(s) ds = h(t), t ∈ [0,1]}.
If either ḣ(0) ≠ α or h is not differentiable, then S_h = ∅. In the other cases one obtains S_h = {ḣ}, and therefore J ≡ E_f. This proves (a) of Theorem 1.2.
(b) In the Gaussian case the LDP can be shown by Gaussian calculus (e.g. [DS89]), or by employing the contraction principle for the Gaussian integrated random walk bridge. The explicit distribution of the Gaussian bridge leads to the following mapping; we only sketch this approach for illustration. For simplicity choose the boundary conditions r = 0 and a = (0,0). The cases of non-vanishing boundary conditions follow analogously. Then
P⁰ = P^{(0,0)} ∘ B_N^{−1},
where, for Z = (Z₁, …, Z_{N+1}),
B_N(Z)(x) = Z_x − A_N(x, Z_N, Z_{N+1} − Z_N),   x ∈ {1, 2, …, N+1},
and
A_N(x, u, v) = (1/(N(N+1)(N+2))) (x³(−2u + vN) + x²(3uN + vN − vN²) + x((2 + 3N)u − N²v)).
Clearly, B_N(Z)(N) = B_N(Z)(N+1) = 0.
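A quick numerical sanity check (ours) that the map B_N indeed produces a bridge, i.e. B_N(Z)(N) = B_N(Z)(N+1) = 0:

    import numpy as np

    def A(N, x, u, v):
        # A_N(x, u, v) as displayed above
        return (x**3 * (-2*u + v*N) + x**2 * (3*u*N + v*N - v*N**2)
                + x * ((2 + 3*N)*u - N**2 * v)) / (N * (N + 1) * (N + 2))

    N = 7
    Z = np.cumsum(np.cumsum(np.random.randn(N + 1)))   # Z_1, ..., Z_{N+1}
    x = np.arange(1, N + 2)
    B = Z - A(N, x, Z[N - 1], Z[N] - Z[N - 1])         # u = Z_N, v = Z_{N+1} - Z_N
    print(B[N - 1], B[N])                              # both zero up to rounding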
3.2. Sample path large deviations for pinning models. In Section 3.2.1 we show the large deviation lower bound and in Section 3.2.2 the corresponding upper bound. It will be convenient to work in a slightly different normalisation. Instead of (1.10) we will show that
lim inf_{N→∞} (1/N) log [(Z_{N,ε}(r)/Z_N(0)) γ_N(h_N ∈ O)] ≥ − inf_{h∈O} Σ(h),        (3.1)
lim sup_{N→∞} (1/N) log [(Z_{N,ε}(r)/Z_N(0)) γ_N(h_N ∈ K)] ≤ − inf_{h∈K} Σ(h),        (3.2)
where Z_{N,ε}(r) is the partition function introduced in (1.2) with Dirichlet boundary conditions given in (1.4), and Z_N(0) is the partition function of the same model with pinning strength ε = 0 and zero Dirichlet boundary conditions. Note for later use that precise asymptotics of the Gaussian partition function Z_N(0) are presented in Appendix B. Once these bounds are established, they can be applied to the full space C([0,1]; ℝ), implying
lim_{N→∞} (1/N) log (Z_{N,ε}(r)/Z_N(0)) = − inf_{h∈H} Σ(h),
so that (1.10) follows.
3.2.1. Proof of the lower bound in Theorem 1.5. Fix g ∈ H and δ > 0, and denote by B the set {h ∈ C : ‖h − g‖_∞ < δ}. We establish the lower bound (3.1) in the form
lim inf_{N→∞} (1/N) log [(Z_{N,ε}(r)/Z_N(0)) γ^r_{N,ε}(h_N ∈ B)] ≥ −Σ(g).        (3.3)
Reduction to "well behaved" g. Recall that by Sobolev embedding any g ∈ H is automatically C¹([0,1]) with ½-Hölder continuous first derivative. We can write
{t ∈ [0,1] : g(t) = 0} = N̄ ∪ N,
where N̄ is the set of isolated zeros,
N̄ = {t ∈ [0,1] : g(t) = 0 and g has no further zeros in an open interval around t},
and N is the set of all non-isolated zeros. The set N̄ is at most countable, and therefore |N̄| = 0. These zeros do not contribute to the value of Σ(g). The set N is closed.
Definition 3.1. We say that g ∈ H is well behaved if N is empty or the union of finitely many disjoint closed intervals, i.e.
N = ∪_{j=1}^k [ℓ_j, r_j]   for 0 ≤ ℓ₁ < r₁ < ⋯ < ℓ_k < r_k ≤ 1.
Lemma 3.2. For any g ∈ H and δ > 0 there exists a well behaved function ĝ ∈ H such that ‖g − ĝ‖_∞ < δ and Σ(ĝ) ≤ Σ(g).
Proof. We start by observing that for t ∈ N we have g′(t) = 0. Indeed, by definition there exists a sequence (t_n) in [0,1] \ {t} which converges to t and along which g vanishes. Hence
g′(t) = lim_{n→∞} (g(t_n) − g(t))/(t_n − t) = 0.
By uniform continuity of g there exists a δ′ such that for |t − t′| < δ′ we have |g(t) − g(t′)| < δ. We define recursively
ℓ₁ = inf N,   r₁ = inf{t ∈ N : (t, t + δ′) ∩ N = ∅},
ℓ₂ = inf{t ∈ N : t > r₁},   r₂ = inf{t ∈ N : t > ℓ₂ and (t, t + δ′) ∩ N = ∅},
and so on. Then we set ĝ = 0 on the intervals [ℓ_j, r_j] and ĝ = g elsewhere. The function ĝ constructed in this way satisfies the desired properties. □
Lemma 3.2 implies that it suffices to establish (3.3) for well behaved functions g and from
now on we will assume that g is well behaved. Actually, we will first discuss the notationally
simpler case where N consists of a single interval [`, r] for 0 < ` < r < 1. We explain how to
extend the argument to the general case in the last step.
Expansion and "good pinning sites". From now on we assume that there exist 0 < ℓ < r < 1 such that g = 0 on N = [ℓ, r] and such that all zeros of g outside of N are isolated. Under these assumptions we will show that
lim inf_{N→∞} (1/N) log [(Z_{N,ε}(r)/Z_N(0)) γ^r_{N,ε}(h_N ∈ B)] ≥ −( ½ ∫₀^ℓ g̈²(t) dt − τ(ε)(r − ℓ) + ½ ∫_r^1 g̈²(t) dt ).        (3.4)
The definition (1.2) of γ^r_{N,ε} can be rewritten as
Z_{N,ε}(r) γ^r_{N,ε}(dφ) = Σ_{P⊆{1,…,N−1}} e^{−H_{[−1,N+1]}(φ)} ∏_{k∈P} ε δ₀(dφ_k) ∏_{k∈{1,…,N−1}\P} dφ_k ∏_{k∈{−1,0,N,N+1}} δ_{ψ_k}(dφ_k).        (3.5)
The first crucial observation is that for certain choices of "pinning sites" P the right hand side of this expression becomes a product measure. Indeed, if P contains two adjacent sites p, p+1 we can write
H_{[−1,N+1]}(φ) = H_{[−1,p]}(φ) + ½ (∆φ_p)² + ½ (∆φ_{p+1})² + H_{[p+1,N+1]}(φ),
which turns into H_{[−1,p]}(φ) + ½ φ²_{p−1} + ½ φ²_{p+2} + H_{[p+1,N+1]}(φ) if φ_p = φ_{p+1} = 0. This means that when φ_p and φ_{p+1} are pinned, the Hamiltonian decomposes into two independent contributions: one which depends only on (the left boundary conditions on φ(−1), φ(0) given in (1.4) and) φ₁, …, φ_{p−1}, and one which depends only on φ_{p+2}, …, φ_{N−1} (and the right boundary conditions on φ(N), φ(N+1)). Then the term corresponding to this choice of P in the expansion (3.5) factorises into two independent parts. We will now restrict ourselves to choices of pinning sites P which have this property.
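The decomposition at a pinned pair of sites is easy to confirm numerically; a sketch (ours) for the Gaussian Hamiltonian:

    import numpy as np

    def H(phi):
        # sum of (Delta phi_k)^2 / 2 over the interior sites of `phi`
        lap = phi[2:] + phi[:-2] - 2.0 * phi[1:-1]
        return 0.5 * np.sum(lap**2)

    N, p = 20, 9
    phi = np.random.randn(N + 3)          # sites -1, 0, ..., N+1 (array offset +1)
    phi[p + 1] = phi[p + 2] = 0.0         # pin phi_p = phi_{p+1} = 0

    left  = H(phi[:p + 2])                # H_{[-1,p]}
    right = H(phi[p + 2:])                # H_{[p+1,N+1]}
    cross = 0.5 * phi[p]**2 + 0.5 * phi[p + 3]**2   # (1/2) phi_{p-1}^2 + (1/2) phi_{p+2}^2
    print(np.isclose(H(phi), left + cross + right)) # True: the Hamiltonian decomposes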
Definition 3.3. A subset P ⊆ {1, …, N} is a very good choice of pinning sites if
• {0, …, p_* − 1} ∩ P = ∅ and {p^* + 2, …, N} ∩ P = ∅,
• {p_*, p_*+1, p^*, p^*+1} ⊆ P.
Here we have set p_* := ⌊Nℓ⌋ and p^* := ⌊Nr⌋ (we leave the N-dependence of p_* and p^* implicit).
As all the terms in (3.5) are non-negative, we can obtain a lower bound by reducing the sum to very good P. In this way we get
Z_{N,ε}(r) γ^r_{N,ε}(B) ≥ Σ_{P very good} ε^{|P|} Z_{[−1,p_*+1]}(r) γ^r_{[−1,p_*+1]}( sup_{0≤t≤ℓ} |h_N(t) − g(t)| ≤ δ )
   × Z_{[p_*,p^*+1]\P}(0) γ⁰_{[p_*,p^*+1]\P}( sup_{ℓ≤t≤r} |h_N(t)| ≤ δ )
   × Z_{[p^*−1,N+1]}(r) γ^r_{[p^*−1,N+1]}( sup_{r≤t≤1} |h_N(t) − g(t)| ≤ δ ).        (3.6)
The measures γ^r_{[−1,p_*+1]} and γ^r_{[p^*−1,N+1]} on the right hand side of this expression are defined as
γ^r_{[−1,p_*+1]}(dφ) = (1/Z_{[−1,p_*+1]}(r)) e^{−H_{[−1,p_*+1]}(φ)} ∏_{k∈{1,…,p_*−1}} dφ_k ∏_{k∈{−1,0,p_*,p_*+1}} δ_{ψ_k}(dφ_k),
γ^r_{[p^*−1,N+1]}(dφ) = (1/Z_{[p^*−1,N+1]}(r)) e^{−H_{[p^*,N+1]}(φ)} ∏_{k∈{p^*+2,…,N−1}} dφ_k ∏_{k∈{p^*,p^*+1,N,N+1}} δ_{ψ_k}(dφ_k),
where we have adjusted the boundary conditions to ψ(p_*) = ψ(p_*+1) = ψ(p^*) = ψ(p^*+1) = 0. These measures do not depend on the specific choice P of very good pinning sites. The measure γ⁰_{[p_*,p^*+1]\P} is defined as
γ⁰_{[p_*,p^*+1]\P}(dφ) = (1/Z_{[p_*,p^*+1]\P}(0)) e^{−H_{[p_*,p^*+1]}(φ)} ∏_{k∈P} δ₀(dφ_k) ∏_{k∈{p_*,…,p^*+1}\P} dφ_k.
Note that none of these measures depends on the pinning strength ε, which only appears as a factor ε^{|P|} in each term in (3.6). Note furthermore that all three measures γ^r_{[−1,p_*+1]}, γ^r_{[p^*−1,N+1]} and γ⁰_{[p_*,p^*+1]\P} are Gaussian.
Lemma 3.4. For every ϵ > 0 there exists an N* < ∞ such that for all N ≥ N* we have
γ^r_{[−1,p_*+1]}( sup_{0≤t≤ℓ} |h_N(t) − g(t)| ≤ δ ) ≥ exp( −N [ ½ ∫₀^ℓ g̈(t)² dt − inf_h ½ ∫₀^ℓ ḧ(t)² dt + ϵ ] ),        (3.7)
γ^r_{[p^*−1,N+1]}( sup_{r≤t≤1} |h_N(t) − g(t)| ≤ δ ) ≥ exp( −N [ ½ ∫_r^1 g̈(t)² dt − inf_h ½ ∫_r^1 ḧ(t)² dt + ϵ ] ),        (3.8)
where the infima are taken over all h : [0,ℓ] → ℝ and h : [r,1] → ℝ which satisfy the appropriate boundary conditions, i.e. h(0) = a, ḣ(0) = α, h(ℓ) = 0, ḣ(ℓ) = 0 for (3.7), and h(r) = 0, ḣ(r) = 0, h(1) = b, ḣ(1) = β for (3.8).
Proof. This follows immediately from the Gaussian large deviation principle presented in Theorem 1.2. □
Lemma 3.5. There exists an N* < ∞ such that for N ≥ N* and for all good P ⊆ {1, …, N−1} we have
γ⁰_{[p_*,p^*+1]\P}( sup_{ℓ≤t≤r} |h_N(t)| ≤ δ ) ≥ ½.
Proof. By the definition of h_N we get
γ⁰_{[p_*,p^*+1]\P}( sup_{ℓ≤t≤r} |h_N(t)| > δ ) ≤ γ⁰_{[p_*,p^*+1]\P}( sup_{p_*≤k≤p^*} |φ(k)| > δN² ) ≤ Σ_{p_*≤k≤p^*} γ⁰_{[p_*,p^*+1]\P}( |φ(k)| > δN² ).
Recall that under γ⁰_{[p_*,p^*+1]\P} all φ(k) are centred Gaussian random variables and that the sum on the right hand side runs over at most p^* − p_* ≤ N terms. Hence, in order to conclude, it is sufficient to prove that for all N, all P, and all k ∈ {p_*, …, p^*}, the variance of φ(k) under γ⁰_{[p_*,p^*+1]\P} is bounded by N³.
To see this, we recall a convenient representation of Gaussian variances: let C be the covariance matrix of a centred non-degenerate Gaussian measure on ℝ^N. Then we have, for k = 1, …, N,
C_{k,k} = sup_{y∈ℝ^N\{0}} y_k² / ⟨y, C^{−1} y⟩,
where ⟨·,·⟩ denotes the canonical scalar product on ℝ^N. This identity follows immediately from the Cauchy-Schwarz inequality. In our context this implies that the variance of φ(k) under γ⁰_{[p_*,p^*+1]\P} is given by
sup { η(k)² / (2 H_{[p_*,p^*+1]}(η)) : η : {p_*, …, p^*+1} → ℝ, η(k) = 0 for k ∈ P }
≤ sup { η(k)² / (2 H_{[p_*,p^*+1]}(η)) : η : {p_*, …, p^*+1} → ℝ, η(k) = 0 for k ∈ {p_*, p_*+1, p^*, p^*+1} },
where the inequality follows because the supremum is taken over a larger set.
The quantity on the right hand side can now be bounded easily. By homogeneity we can reduce the supremum to test vectors η that satisfy η(k) = 1. Invoking the homogeneous boundary conditions, for such an η there must exist a j ∈ {p_*, …, p^*} such that η(j+1) − η(j) ≥ 1/N. Invoking the homogeneous boundary conditions once more (this time for the difference η(p_*+1) − η(p_*)) we get
1/N ≤ Σ_{m=p_*+1}^{j} [(η(m+1) − η(m)) − (η(m) − η(m−1))] = Σ_{m=p_*+1}^{j} ∆η(m) ≤ Σ_{m=p_*}^{p^*} |∆η(m)| ≤ (p^* − p_*)^{1/2} ( Σ_{m=p_*}^{p^*} |∆η(m)|² )^{1/2}.
Using the bound p^* − p_* ≤ N we see that η must satisfy
H_{[p_*,p^*]}(η) ≥ 1/(2N³),
which implies the desired bound on the variance. □
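The variance bound in this proof can be probed directly: the following sketch (ours) builds the precision matrix of the bi-Laplacian field pinned at the two leftmost and two rightmost sites and checks that all variances stay below N³.

    import numpy as np

    def pinned_variances(m):
        # field (eta_0, ..., eta_{m-1}) with density ~ exp(-H), H = (1/2) sum_k (Delta eta_k)^2,
        # pinned to zero at sites {0, 1, m-2, m-1} (the minimal 'very good' pinning set)
        D = np.zeros((m - 2, m))
        for k in range(m - 2):
            D[k, k], D[k, k + 1], D[k, k + 2] = 1.0, -2.0, 1.0
        free = np.arange(2, m - 2)
        Q = D[:, free].T @ D[:, free]     # precision matrix of the unpinned sites
        return np.diag(np.linalg.inv(Q))

    for N in (10, 50, 200):
        print(N, pinned_variances(N + 2).max() / N**3)   # the ratio stays well below 1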
The pinning potential. First of all, we observe that the minimal energy terms appearing in (3.7) and (3.8) can be absorbed into the boundary conditions. We obtain from Proposition A.1 in Appendix A, as well as from identity (B.2) in conjunction with Proposition A.3, that for every ϵ > 0 and for N large enough,
exp( N inf_h ½ ∫₀^ℓ ḧ(t)² dt ) Z_{[−1,p_*+1]}(r) ≥ Z_{[−1,p_*+1]}(0) exp(−Nϵ),
exp( N inf_h ½ ∫_r^1 ḧ(t)² dt ) Z_{[p^*−1,N+1]}(r) ≥ Z_{[p^*−1,N+1]}(0) exp(−Nϵ).
Therefore, combining (3.6) with Lemma 3.4 and Lemma 3.5, we obtain for any ϵ > 0 and for N large enough
(Z_{N,ε}(r)/Z_N(0)) γ^r_{N,ε}(B) ≥ exp( −N [ ½ ∫₀^ℓ g̈(t)² dt + ½ ∫_r^1 g̈(t)² dt ] − Nϵ ) × Σ_{P very good} ε^{|P|} (Z_{[−1,p_*+1]}(0) Z_{[p_*,p^*+1]\P}(0) Z_{[p^*−1,N+1]}(0)) / Z_N(0).
Therefore, it remains to treat the sum of the partition functions on the right hand side. First of all, we observe that Z_{[−1,p_*+1]}(0), Z_{[p^*−1,N+1]}(0) and Z_N(0) do not depend on the choice of very good P, so that they can be taken out of the sum, i.e. we can write
Σ_{P very good} ε^{|P|} (Z_{[−1,p_*+1]}(0) Z_{[p_*,p^*+1]\P}(0) Z_{[p^*−1,N+1]}(0)) / Z_N(0)
= (Z_{[−1,p_*+1]}(0) Z_{p^*−p_*}(0) Z_{[p^*−1,N+1]}(0)) / Z_N(0) × Σ_{P very good} ε^{|P|} Z_{[p_*,p^*+1]\P}(0) / Z_{p^*−p_*}(0).
Here we have multiplied and divided by the Gaussian partition function Z_{p^*−p_*}(0) because it allows us to compare the sum on the right hand side to the limit (1.7) which defines τ(ε). More precisely, we get
Σ_{P very good} ε^{|P|} Z_{[p_*,p^*+1]\P}(0) / Z_{p^*−p_*}(0) = Z_{p^*−p_*,ε} / Z_{p^*−p_*} ≥ exp( (r − ℓ)τ(ε) − Nϵ )
for N large enough (depending on ϵ).
Therefore, to conclude it only remains to observe that, according to Appendix B, the quotient
(Z_{[−1,p_*+1]}(0) Z_{p^*−p_*}(0) Z_{[p^*−1,N+1]}(0)) / Z_N(0)
decays at most polynomially in N, which implies that it disappears on the exponential scale. Therefore (3.4) follows. For the case of Dirichlet boundary conditions on the left hand side and free boundary conditions on the right hand side we easily obtain a version of (3.4) when we replace r by a.
We conclude with the lower bound of Theorem 1.5 once we have shown that the proof above can easily be extended to well behaved functions g which have a finite number of intervals I ⊂ (0,1) on which g is zero. All the steps above can be extended to a finite number of zero intervals using the expansion and splitting. As the number of such zero intervals is finite, we finally conclude with our lower bound.
3.2.2. Proof of the upper bound of Theorem 1.5. For the upper bound we need to show that
lim sup_{N→∞} (1/N) log [(Z_{N,ε}(r)/Z_N(0)) γ^r_{N,ε}(K)] ≤ − inf_{h∈K} Σ(h),        (3.9)
for all closed K ⊂ C([0,1]; ℝ).
Reduction to a simpler statement. First of all we observe:
Lemma 3.6. The family {γ^r_{N,ε}}_N is exponentially tight in C([0,1]; ℝ).
The proof of this lemma can be found at the end of this section. Lemma 3.6 implies that it suffices to establish (3.9) for compact sets K. Going further, it suffices to show that for any g ∈ K and any ϵ > 0 there exists a δ = δ(g, ϵ) > 0 such that
lim sup_{N→∞} (1/N) log [(Z_{N,ε}(r)/Z_N(0)) γ^r_{N,ε}(B(g, δ))] ≤ −Σ(g) + ϵ.        (3.10)
Here B(g, r) = {h ∈ C : ‖h − g‖_∞ < r} denotes the L^∞-ball of radius r around g.
We give the simple argument showing that (3.10) implies (3.9): for any compact set K and any ϵ > 0 there exists a finite set {g₁, …, g_M} ⊂ K such that K ⊆ ∪_{j=1}^M B(g_j, δ(g_j, ϵ)). Then (3.10) yields
lim sup_{N→∞} (1/N) log [(Z_{N,ε}(r)/Z_N(0)) γ^r_{N,ε}(K)] ≤ lim sup_{N→∞} (1/N) log Σ_{j=1}^M (Z_{N,ε}(r)/Z_N(0)) γ^r_{N,ε}(B(g_j, δ(g_j, ϵ)))
≤ max_{j=1,…,M} lim sup_{N→∞} (1/N) log [(Z_{N,ε}(r)/Z_N(0)) γ^r_{N,ε}(B(g_j, δ(g_j, ϵ)))]
≤ − min_{j=1,…,M} Σ(g_j) + ϵ ≤ − inf_{h∈K} Σ(h) + ϵ,
so (3.9) follows because ϵ > 0 can be chosen arbitrarily small.
For fixed g and ε the value of δ is determined by the following lemma.
Lemma 3.7. For any g ∈ C([0, 1]; R) and all ε > 0 there exists a δ̄ > 0 and a closed set I ⊂ [0, 1] such that the following hold:
(1) I is the union of finitely many disjoint closed intervals, i.e.
\[
I = \bigcup_{j=1}^{M} [\ell_j, r_j] \tag{3.11}
\]
for some finite M and 0 ≤ ℓ_1 < r_1 < ℓ_2 < r_2 < . . . < r_M ≤ 1.
(2) The level-set {t ∈ [0, 1] : |g(t)| ≤ δ̄} is contained in I.
(3) The measure of I satisfies the bound
|I| ≤ |{t ∈ [0, 1] : |g(t)| = 0}| + ε.
The proof of this lemma is also given at the end of this section.
Expansion and key lemmas. We will now proceed to prove that (3.10) holds for a fixed g and ε and a suitable δ ∈ (0, δ̄), where δ̄ is given by Lemma 3.7. For simplicity, we will assume
that the set I constructed in this lemma consists of a single interval [ℓ, r]. The argument for
the case of a finite union of disjoint intervals is identical, only requiring slightly more complex
notation, and will be omitted.
We write
\[
\gamma^{r}_{N,\varepsilon}\big(B(g,\delta)\big) = \frac{1}{Z_{N,\varepsilon}(r)} \sum_{P} \varepsilon^{|P|}\, Z_{[-1,N+1]\setminus P}\; \gamma^{r}_{[-1,N+1]\setminus P}\big(B(g,\delta)\big), \tag{3.12}
\]
where as above γ^r_{[−1,N+1]\P} denotes the Gaussian measure on {−1, . . . , N + 1} which is pinned at the sites in P, i.e.
\[
\gamma^{r}_{[-1,N+1]\setminus P}(d\phi) = \frac{1}{Z_{[-1,N+1]\setminus P}(r)}\, e^{-H_{[-1,N+1]}(\phi)} \prod_{k\in\{1,\dots,N-1\}\setminus P} d\phi_k \prod_{k\in P\cup\{-1,0,N,N+1\}} \delta_{\psi_k}(d\phi_k),
\]
and Z_{[−1,N+1]\P}(r) is the corresponding Gaussian normalisation constant. By definition |g(t)| > δ̄ > δ for t ∈ [0, 1] \ I, so in (3.12) it suffices to sum over those sets of pinning sites P ⊆ N I ∩ Z.
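For the reader's convenience we recall where the expansion (3.12) comes from: it is nothing but the binomial expansion of the product measure in the definition of the pinned model which, in a form consistent with (1.2), reads
\[
\prod_{k=1}^{N-1}\big(\varepsilon\,\delta_0(d\phi_k) + d\phi_k\big) = \sum_{P\subseteq\{1,\dots,N-1\}} \varepsilon^{|P|} \prod_{k\in P}\delta_0(d\phi_k) \prod_{k\in\{1,\dots,N-1\}\setminus P} d\phi_k ;
\]
integrating e^{-H_{[-1,N+1]}} against each summand and normalising produces exactly the terms ε^{|P|} Z_{[−1,N+1]\P} γ^r_{[−1,N+1]\P}(B(g, δ)).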
The next two lemmas simplify the expressions in the sum (3.12). For the moment we only deal with homogeneous boundary conditions r = 0. We start by introducing some notation which will be used to simplify the partition functions. As in Definition 3.3 above, we will be interested in sets of pinning sites P that allow us to separate the Hamiltonian H into independent parts:
Definition 3.8. Let P ⊆ {1, . . . , N − 1} be non-empty and let p_* = min P and p^* = max P. We will call P a good choice of pinning sites if {p_* + 1, p^* − 1} ⊆ P. We will also call the empty set good.
Note that the very good sets introduced in Definition 3.3 are indeed good. The difference between the two notions is that we do not prescribe the precise values of p_* and p^* for good sets. They will, however, always be confined to the interval [⌊ℓN⌋, ⌊rN⌋ + 1]. We also introduce the following operation of correcting a set to make it good.
Definition 3.9. Let P ⊆ {1, . . . , N − 1} be non-empty with p_* = min P and p^* = max P. Then we define
\[
c(P) = P \cup \{p_* + 1,\; p^* - 1\}.
\]
We also set c(∅) = ∅.
For later use we remark that on the one hand the correction map c adds at most two points to a given set P, and that on the other hand a given good set P has at most 4 distinct preimages P̃ with c(P̃) = P. The following lemma permits us to replace the partition function Z_{[−1,N+1]\P}(0) in (3.12) by the partition function Z_{[−1,N+1]\c(P)}(0) with corrected choice of pinning sites.
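These combinatorial facts are easy to test exhaustively for small N. The following Python sketch is ours, not from the paper (the names is_good and correct are illustrative); it checks Definitions 3.8 and 3.9 and the preimage count by brute force.

from itertools import combinations
from collections import Counter

def is_good(P):
    # non-empty P is good if p_*+1 and p^*-1 both lie in P; the empty set is good
    if not P:
        return True
    return (min(P) + 1 in P) and (max(P) - 1 in P)

def correct(P):
    # the correction map c from Definition 3.9
    if not P:
        return frozenset()
    return frozenset(P) | {min(P) + 1, max(P) - 1}

N = 9
sites = list(range(1, N))  # pinning sites live in {1, ..., N-1}
subsets = [frozenset(s) for m in range(len(sites) + 1)
           for s in combinations(sites, m)]

# c(P) is always good and adds at most two points
assert all(is_good(correct(P)) and len(correct(P)) - len(P) <= 2 for P in subsets)

# every good set has at most 4 preimages under c
preimages = Counter(correct(P) for P in subsets)
assert max(preimages.values()) <= 4
print("checked all", len(subsets), "subsets for N =", N)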
Lemma 3.10. For every non-empty P ⊆ {1, . . . , N − 1} we have
\[
Z_{[-1,N+1]\setminus P}(0) \le (2\pi N)\, Z_{[-1,N+1]\setminus c(P)}(0).
\]
Proof. For any A ⊂ {1, . . . , N − 1} set
\[
\gamma^{0}_{A}(d\phi) = \frac{1}{Z_A(0)}\, e^{-H_{[-1,N+1]}(\phi)} \prod_{k\in A} d\phi_k \prod_{k\in\{-1,\dots,N+1\}\setminus A} \delta_0(d\phi_k).
\]
We derive an identity that links the Gaussian partition function ZA (0) to ZA\{j} (0) for an
arbitrary A ⊆ {1, . . . , N − 1} and j ∈ A. We have
\[
Z_A(0) = \int_{\mathbb R} \Big[ \int e^{-H_{[-1,N+1]}(\phi)} \prod_{k\in A\setminus\{j\}} d\phi_k \prod_{k\in\{-1,\dots,N+1\}\setminus A} \delta_0(d\phi_k) \Big]\, d\phi_j. \tag{3.13}
\]
Denote by φ∗ the unique minimiser of H[−1,N +1] subject to the constraints that φ∗ (k) = 0 for
k ∈ {−1, . . . , N + 1} \ A and φ∗(j) = 1. Then by homogeneity, for any y ∈ R the function yφ∗ is the unique minimiser of H_{[−1,N+1]} constrained to be zero on the same set but satisfying yφ∗(j) = y. This implies that for any φ : {−1, . . . , N + 1} → R satisfying the same pinning constraint we have
\[
H_{[-1,N+1]}(\phi) = H_{[-1,N+1]}\big(\phi - \phi(j)\phi^*\big) + \phi(j)^2\, H_{[-1,N+1]}(\phi^*).
\]
As in the proof of Lemma 3.5 we can see that
\[
H_{[-1,N+1]}(\phi^*) = \frac{1}{2\,\mathrm{var}(\phi(j))},
\]
where var(φ(j)) denotes the variance of φ(j) under γ^0_A. This allows us to rewrite (3.13) as
\[
Z_A(0) = \int_{\mathbb R} \Big[\int e^{-H_{[-1,N+1]}(\phi - \phi(j)\phi^*)} \prod_{k\in A\setminus\{j\}} d\phi_k \prod_{k\in\{-1,\dots,N+1\}\setminus A} \delta_0(d\phi_k)\Big]\, e^{-\frac{y^2}{2\,\mathrm{var}(\phi(j))}}\, dy
\]
\[
= Z_{A\setminus\{j\}}(0) \int_{\mathbb R} e^{-\frac{y^2}{2\,\mathrm{var}(\phi(j))}}\, dy = Z_{A\setminus\{j\}}(0)\, \sqrt{2\pi\,\mathrm{var}(\phi(j))}.
\]
As the correction map c adds at most two points to the pinned set, it only remains to get an upper bound on the variance of φ(j) for j = p_* + 1 or j = p^* − 1 under γ^0_{[−1,N+1]\P}, or equivalently a lower bound on H_{[−1,N+1]}(φ∗); we show the argument for p_* + 1. It is very similar to the upper bound on the variance derived in Lemma 3.5, but this time we obtain a better bound using the fact that p_* + 1 is adjacent to a pinned site. More precisely, using that φ∗(p_*) = 0 and the fact that the homogeneous boundary conditions imply φ∗(0) − φ∗(−1) = 0, we get
\[
1 = \phi^*(p_*+1) - \phi^*(p_*) = \sum_{j=0}^{p_*} \Big[\big(\phi^*(j+1) - \phi^*(j)\big) - \big(\phi^*(j) - \phi^*(j-1)\big)\Big] = \sum_{j=0}^{p_*} \Delta\phi^*(j)
\]
\[
\le (p_*+1)^{\frac12} \Big(\sum_{j=0}^{p_*} \big(\Delta\phi^*(j)\big)^2\Big)^{\frac12} \le N^{\frac12}\, \big(2\, H_{[-1,N+1]}(\phi^*)\big)^{\frac12}.
\]
This finishes the argument.
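Before moving on we note that, for the quadratic potential V(η) = ½η², the identity Z_A(0) = Z_{A\{j}}(0) √(2π var(φ(j))) derived above is a purely Gaussian fact and can be verified numerically. The sketch below is our code (the names and the choice of A are illustrative): it builds the quadratic form of H_{[−1,N+1]} as DᵀD for the discrete Laplacian D, computes the two partition functions as Gaussian determinants and var(φ(j)) as a diagonal entry of the inverse form.

import numpy as np

def bilaplacian_form(N):
    # quadratic form B with H(phi) = 0.5 * phi^T B phi on the sites -1, ..., N+1
    D = np.zeros((N + 1, N + 3))
    for k in range(N + 1):          # Delta phi_k = phi_{k+1} + phi_{k-1} - 2 phi_k
        D[k, k] += 1.0; D[k, k + 1] += -2.0; D[k, k + 2] += 1.0
    return D.T @ D

def log_Z(free, B):
    # log of the Gaussian integral over the free coordinates, the rest pinned to 0
    sign, logdet = np.linalg.slogdet(B[np.ix_(free, free)])
    return 0.5 * (len(free) * np.log(2 * np.pi) - logdet)

N = 12
B = bilaplacian_form(N)
A = [s + 1 for s in range(2, 10)]   # free sites 2, ..., 9 (vector index = site + 1)
j = A[3]

var_j = np.linalg.inv(B[np.ix_(A, A)])[A.index(j), A.index(j)]
lhs = log_Z(A, B)
rhs = log_Z([i for i in A if i != j], B) + 0.5 * np.log(2 * np.pi * var_j)
assert abs(lhs - rhs) < 1e-10
print("log Z_A(0) =", lhs, "= log Z_{A\\{j}}(0) + log sqrt(2 pi var) =", rhs)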
The next Lemma provides an upper bound on the Gaussian probabilities appearing in (3.12)
(still for homogeneous boundary conditions). Its proof makes use of a strong form of Gaussian
large deviation bounds provided by the Gaussian isoperimetric inequality.
Lemma 3.11. Let g ∈ C([0, 1]; R) and ε > 0. Then there exists a δ ∈ (0, δ̄) and an N_0 > 0 such that for all N ≥ N_0 and all P ⊆ {1, . . . , N − 1}
\[
\gamma^{0}_{[-1,N+1]\setminus P}\big(h^N \in B(g,\delta)\big) \le \exp\big(-N(E(g) - \varepsilon)\big).
\]
Proof. For this proof we use the notation
\[
E_N(h) = \frac12 \sum_{j=0}^{N} \frac{1}{N}\, N^4 \Big[ h\big(\tfrac{j+1}{N}\big) + h\big(\tfrac{j-1}{N}\big) - 2\, h\big(\tfrac{j}{N}\big) \Big]^2
\]
for the discrete approximations of E. Note that according to the defining relation (1.3) between h^N and φ we have
\[
H_{[-1,N+1]}(\phi) = N\, E_N(h^N).
\]
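This scaling relation is elementary to check numerically under the rescaling φ_k = N² h^N(k/N) from (1.3) and the quadratic potential; the following short sketch (our code) does so for a random profile.

import numpy as np

N = 50
rng = np.random.default_rng(1)
h = rng.normal(size=N + 3)                    # values h^N(j/N), j = -1, 0, ..., N+1
phi = N**2 * h                                # rescaling phi_k = N^2 h^N(k/N)

lap = lambda v: v[2:] + v[:-2] - 2 * v[1:-1]  # discrete Laplacian, k = 0, ..., N
H = 0.5 * np.sum(lap(phi) ** 2)               # H_{[-1,N+1]}(phi) with V(x) = x^2/2
E_N = 0.5 * np.sum((1.0 / N) * N**4 * lap(h) ** 2)

assert np.isclose(H, N * E_N)
print("H =", H, " N * E_N =", N * E_N)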
We now recall the Gaussian isoperimetric inequality (see e.g. [Led96]) in the following convenient form: let N > 0 and A ⊆ {1, . . . , N − 1}, and assume that δ̃ > 0 is large enough to ensure that
\[
\gamma^{0}_{A}\big(h^N \in B(0, \tilde\delta)\big) \ge \frac12 .
\]
Then for any r ≥ 0 we get
\[
\gamma^{0}_{A}\Big( h^N : \inf_{f \in S_N(r^2/2)} \|h^N - f\|_\infty \ge \tilde\delta \Big) \le \Phi\big(r\sqrt N\big). \tag{3.14}
\]
Here
\[
\Phi\big(\sqrt N r\big) = \frac{1}{\sqrt{2\pi}} \int_{\sqrt N r}^{\infty} e^{-\frac{x^2}{2}}\, dx \le \frac{1}{\sqrt{2\pi}\,\sqrt N r}\, e^{-\frac{N r^2}{2}}
\]
denotes the tail of the standard normal distribution and
\[
S_N(r^2/2) = \Big\{ h : E_N(h) \le \frac{r^2}{2} \Big\}
\]
is the sub-level set of E_N.
The desired statement then follows if we can show that there exists a δ > 0 such that, on the one hand, for N large enough
\[
\gamma^{0}_{[-1,N+1]\setminus P}\big(h^N \in B(0,\delta)\big) \ge \frac12 \tag{3.15}
\]
uniformly over all P, as well as
\[
\inf_{f \in B(g, 2\delta)} E_N(f) \ge E(g) - \varepsilon. \tag{3.16}
\]
Indeed, (3.16) implies that for all h ∈ B(g, δ) we have
\[
\inf_{f \in S_N(E(g)-\varepsilon)} \|f - h\|_\infty \ge \delta
\]
(any f ∈ S_N(E(g) − ε) with ‖f − h‖_∞ < δ would lie in B(g, 2δ), contradicting (3.16)), which permits us to invoke (3.14).
The proof of the first statement (3.15) is identical to the argument in Lemma 3.5. For the
second one we use Lemma A.2.
Conclusion. We now apply these two lemmas to the terms appearing in the sum (3.12). For each P ≠ ∅ we can write
\[
Z_{[-1,N+1]\setminus P}\; \gamma^{r}_{[-1,N+1]\setminus P}\big(B(g,\delta)\big) = e^{-H_{\Lambda_N}(\phi^*_{r,N})}\, Z_{[-1,N+1]\setminus P}(0)\; \gamma^{0}_{[-1,N+1]\setminus P}\big(B(g - h^*_{r,N},\, \delta)\big),
\]
where we have used (B.2) to include the boundary conditions into the Gaussian partition
function. The function φ∗r,N is the minimiser of HΛN subject to the boundary conditions r and
pinned on the sites in P. The profile h∗r,N is the rescaled version of φ∗r,N . Using the previous
two lemmas as well as Lemma A.2, which permits us to rewrite H_{Λ_N}(φ∗_{r,N}) ≥ N(E(h∗_{r,N}) − ε), we bound the last expression by
\[
\le \exp\big(-N(E(h^*_{r,N}) - \varepsilon)\big)\, (2\pi N)\, Z_{[-1,N+1]\setminus c(P)}(0)\, \exp\big(-N(E(g - h^*_{r,N}) - \varepsilon)\big)
\le (2\pi N)\, Z_{[-1,N+1]\setminus c(P)}(0)\, \exp\big(-N(E(g) - 2\varepsilon)\big).
\]
Plugging this into (3.12) we obtain
\[
\frac{Z_{N,\varepsilon}(r)}{Z_N(0)}\, \gamma^{r}_{N,\varepsilon}\big(B(g,\delta)\big) \le (2\pi N)\, \exp\big(-N(E(g) - 2\varepsilon)\big) \sum_{P} \varepsilon^{|P|}\, \frac{Z_{[-1,N+1]\setminus c(P)}(0)}{Z_N(0)}.
\]
The sum appearing in this expression can be rewritten as
\[
\sum_{P} \varepsilon^{|P|}\, \frac{Z_{[-1,N+1]\setminus c(P)}(0)}{Z_N(0)}
\le 4 \sum_{P\,\text{good}} \varepsilon^{|P|}\, \frac{Z_{[-1,N+1]\setminus P}(0)}{Z_N(0)}
\le 4 \sum_{\ell N \le k_1 \le k_2 \le rN}\; \sum_{\substack{P\,\text{good}\\ p_* = k_1,\, p^* = k_2}} \varepsilon^{|P|}\, \frac{Z_{[-1,N+1]\setminus P}(0)}{Z_N(0)}
\]
\[
= 4 \sum_{\ell N \le k_1 \le k_2 \le rN}\; \sum_{\substack{P\,\text{good}\\ p_* = k_1,\, p^* = k_2}} \varepsilon^{|P|}\, \frac{Z_{[-1,p_*+1]}(0)\, Z_{[p_*,p^*]\setminus P}(0)\, Z_{[p^*-1,N+1]}(0)}{Z_N(0)}.
\]
As in the proof of the lower bound, the inner sum can be rewritten as
\[
\sum_{\substack{P\,\text{good}\\ p_* = k_1,\, p^* = k_2}} \varepsilon^{|P|}\, \frac{Z_{[-1,p_*+1]}(0)\, Z_{[p_*,p^*]\setminus P}(0)\, Z_{[p^*-1,N+1]}(0)}{Z_N(0)}
= \frac{Z_{p^*-p_*,\varepsilon}}{Z_{p^*-p_*}}\; \frac{Z_{[-1,p_*+1]}(0)\, Z_{p^*-p_*}(0)\, Z_{[p^*-1,N+1]}(0)}{Z_N(0)}.
\]
As the number of terms in the outer sum is bounded by N², and as the Gaussian partition functions only grow polynomially (see Appendix B), we obtain the desired conclusion from (1.7); the required uniformity follows from the fact that we are summing over different members of the same converging sequence.
Proofs of Lemmas.

Proof of Lemma 3.6. Due to the Arzelà–Ascoli theorem it suffices to show that
\[
\limsup_{M\to\infty}\, \limsup_{N\to\infty} \frac1N \log \gamma^{r}_{N,\varepsilon}\Big( h \in C([0,1];\mathbb R) : \|\dot h\|_\infty \ge M \Big) = -\infty. \tag{3.17}
\]
As the measure γ^r_{N,ε} is a convex combination of the Gaussian measures γ^r_A, we start by establishing that each of these measures satisfies this bound. For any fixed choice of pinning sites P ⊆ {0, . . . , N} we have
\[
\gamma^{r}_{A}\big( h \in C : \|\dot h\|_\infty \ge M \big) = \gamma^{r}_{A}\big( \phi : |\phi(k+1) - \phi(k)| \ge NM \text{ for at least one } k \in \{0,\dots,N\} \big)
\]
\[
\le \sum_{k=0}^{N} \gamma^{r}_{A}\big( \phi : |\phi(k+1) - \phi(k)| \ge NM \big) \le (N+1)\, \sup_{k=0,\dots,N} \exp\Big( -\frac{(NM)^2}{2\,\mathrm{var}\big(\phi(k+1) - \phi(k)\big)} \Big).
\]
As the variances of the φ(k + 1) − φ(k) are bounded by N (as seen in Lemma 3.5) we can
conclude.
Proof of Lemma 3.7. By sigma-additivity of the Lebesgue measure there exists a δ > 0 such that
\[
|\{x \in [0,1] : |g(x)| < 2\delta\}| \le |\{x \in [0,1] : g(x) = 0\}| + \varepsilon.
\]
For any x ∈ {x ∈ [0, 1] : |g(x)| < 2δ} there exists a ρ_x > 0 such that the ball B(x, ρ_x) ∩ [0, 1] is still contained in this set. By compactness there exists a finite set {x_1, . . . , x_M̃} such that
\[
\bigcup_{j=1}^{\tilde M} B(x_j, \rho_{x_j}) \supseteq \{x \in [0,1] : |g(x)| \le \delta\}.
\]
We then set I to be the closure of ∪_{j=1}^{M̃} B(x_j, ρ_{x_j}) ∩ [0, 1] and claim that this set has the desired properties.
Indeed, the union of finitely many open intervals can always be written as the union of a
(potentially smaller number of) disjoint open intervals. The closure of such a set is the union
of a finite (again, potentially smaller) number of disjoint closed intervals. The set I contains
{x ∈ [0, 1] : |g(x)| ≤ δ} by construction. Furthermore
\[
\bigcup_{j=1}^{\tilde M} B(x_j, \rho_{x_j}) \cap [0,1] \subseteq \{x \in [0,1] : |g(x)| < 2\delta\},
\]
which implies that the measure of this set is bounded by |{x ∈ [0, 1] : g(x) = 0}| + ε. Adding
a finite number of boundary points does not change the Lebesgue measure, so that I satisfies
the same bound.
Appendix A. Energy minimiser
We outline the standard solution of the variational problem of minimising the energy functional
\[
E(h) = \frac12 \int_0^1 \ddot h^2(t)\, dt \qquad \text{for } h \in H^2_r, \tag{A.1}
\]
where r = (a, α, b, β).
Proposition A.1. The variational problem, minimise E in H²_r, has a unique solution denoted by h∗_r ∈ H²_r and given as
\[
h^*_r(t) = a + \alpha t + k(r)\, t^2 + c(r)\, t^3, \qquad t \in [0,1],
\]
with
\[
k(r) = 3(b-a) - 2\alpha - \beta, \qquad c(r) = (\alpha + \beta) - 2(b-a).
\]
Furthermore, E(h∗_r) = 2k(r)² + 6k(r)c(r) + 6c(r)².
Proof. Let D_r = {u ∈ L²([0, 1]) : there exists f ∈ H²_r such that f̈ = u}. Then
\[
\langle \ddot h,\, g - \ddot h \rangle_{L^2} = 0 \qquad \text{for all } g \in D_r
\]
holds for every h ∈ H²_r with h⁽⁴⁾ ≡ 0. For f ∈ H²_r let g ∈ D_r be given by g = f̈. Then for any h ∈ H²_r with h⁽⁴⁾ ≡ 0 we obtain
\[
E(f) = \frac12 \langle g, g\rangle_{L^2} \ge \frac12 \langle \ddot h, \ddot h\rangle_{L^2} = E(h).
\]
We obtain uniqueness by convexity and conclude by noting that (h∗_r)⁽⁴⁾ ≡ 0; see [Mit13] for an overview of bi-harmonic solutions.
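The algebra behind Proposition A.1 is quick to verify symbolically. The following sympy sketch (our code) checks that h∗_r matches the boundary data r = (a, α, b, β), is bi-harmonic, and has the stated energy.

import sympy as sp

t, a, alpha, b, beta = sp.symbols('t a alpha b beta')
k = 3 * (b - a) - 2 * alpha - beta
c = (alpha + beta) - 2 * (b - a)
h = a + alpha * t + k * t**2 + c * t**3          # candidate minimiser h_r^*

# boundary data: h(0) = a, h'(0) = alpha, h(1) = b, h'(1) = beta
assert sp.simplify(h.subs(t, 0) - a) == 0
assert sp.simplify(sp.diff(h, t).subs(t, 0) - alpha) == 0
assert sp.simplify(h.subs(t, 1) - b) == 0
assert sp.simplify(sp.diff(h, t).subs(t, 1) - beta) == 0

assert sp.diff(h, t, 4) == 0                     # h is bi-harmonic: h^(4) = 0
E = sp.Rational(1, 2) * sp.integrate(sp.diff(h, t, 2)**2, (t, 0, 1))
assert sp.simplify(E - (2*k**2 + 6*k*c + 6*c**2)) == 0
print("Proposition A.1 verified symbolically")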
We now consider the quadratic potential V(η) = ½η², and we shall minimise the Hamiltonian (1.1) on Λ_N := {−1, 0, . . . , N, N + 1} over the set
\[
\Omega^N_r = \big\{\phi \in \mathbb R^{\Lambda_N} : \phi(-1) = \psi^{(N)}(-1),\ \phi(0) = \psi^{(N)}(0),\ \phi(N) = \psi^{(N)}(N),\ \phi(N+1) = \psi^{(N)}(N+1)\big\}
\]
of configurations with given boundary conditions.
Lemma A.2. For any N ∈ N let h^N : {−1/N, 0, . . . , 1, (N+1)/N} → R be given with boundary values r = (a, α, b, β), i.e.
\[
h^N(0) = a, \quad h^N(1) = b, \quad N\Big(h^N(0) - h^N\big(-\tfrac1N\big)\Big) = \alpha, \quad N\Big(h^N\big(1+\tfrac1N\big) - h^N(1)\Big) = \beta.
\]
We interpolate h^N linearly between the grid-points. Furthermore set
\[
E_N(h^N) = \frac12 \sum_{j=0}^{N} N^3 \Big[ h^N\big(\tfrac{j+1}{N}\big) - 2\, h^N\big(\tfrac{j}{N}\big) + h^N\big(\tfrac{j-1}{N}\big) \Big]^2 .
\]
Then if h^N converges uniformly on [0, 1] to a function h we have
\[
\liminf_{N\to\infty} E_N(h^N) \ge E(h) = \frac12 \int_0^1 \big(h''(t)\big)^2\, dt.
\]
Proof. We fix a subsequence (N_k) along which E_N(h^N) converges to liminf_{N→∞} E_N(h^N), which we may assume to be finite without loss of generality. Along this sequence E_N(h^N) is bounded. We drop the extra index k and assume from now on that
\[
\sup_N E_N(h^N) = \bar C < \infty. \tag{A.2}
\]
We will first consider discrete derivatives of h^N. For j = −1, . . . , N we set
\[
g_N\big(\tfrac jN\big) = N \Big[ h^N\big(\tfrac{j+1}{N}\big) - h^N\big(\tfrac jN\big) \Big], \tag{A.3}
\]
and as before we interpret g_N as a function [−1/N, 1] → R by linear interpolation between the grid-points. Note that h^N corresponds to the configuration φ_{h^N}(x) = N² h^N(x/N) for x ∈ Λ_N. Thus the Hamiltonian H_{Λ_N}(φ_{h^N}), and therefore the functional E_N(h^N), can be re-expressed in terms of g_N as
\[
\frac1N H_{\Lambda_N}(\phi_{h^N}) = E_N(h^N) = \frac12 \sum_{j=0}^{N} N \Big[ g_N\big(\tfrac jN\big) - g_N\big(\tfrac{j-1}{N}\big) \Big]^2 = \frac12 \int_{-\frac1N}^{1} g_N'(t)^2\, dt.
\]
So, (A.2) immediately implies the uniform Hölder bound
\[
|g_N(t) - g_N(s)| \le \int_s^t |g_N'(u)|\, du \le |t-s|^{\frac12} \Big( \int_{-\frac1N}^{1} g_N'(u)^2\, du \Big)^{\frac12} \le |t-s|^{\frac12}\, \sqrt{2\bar C} \tag{A.4}
\]
for −1/N ≤ s < t ≤ 1. We introduce the (slightly) rescaled function g̃_N : [0, 1] → R defined as
\[
\tilde g_N(t) = g_N\Big( \frac{(N+1)\,t - 1}{N} \Big)
\]
and observe that
\[
\frac{N}{N+1} \int_0^1 \tilde g_N'(t)^2\, dt = \int_{-\frac1N}^{1} g_N'(t)^2\, dt \le 2\bar C.
\]
Observing that g̃N (1) = β we can conclude that there is a subsequence Nk along which g̃Nk
converges weakly in H 1 ([0, 1]) to a function g which satisfies
\[
\int_0^1 g'(t)^2\, dt \le \liminf_{k\to\infty} \int_0^1 \tilde g_{N_k}'(t)^2\, dt = \liminf_{N\to\infty} \frac{N+1}{N} \int_{-\frac1N}^{1} g_N'(t)^2\, dt = 2 \liminf_{N\to\infty} E_N(h^N).
\]
Thus, the desired statement follows as soon as we have established that for all x ∈ [0, 1],
\[
h(x) = a + \int_0^x g(t)\, dt,
\]
because then we get $\frac12\int_0^1 h''(t)^2\, dt = \frac12\int_0^1 g'(t)^2\, dt$. To see this we rewrite the defining relation (A.3) of g_N, for any N and any x ∈ [j/N, (j+1)/N], j ≥ 0, as
\[
h^N(x) = a + \frac1N \sum_{k=0}^{j-1} g_N\big(\tfrac kN\big) + \frac{Nx - j}{N}\, g_N\big(\tfrac jN\big) = a + \int_0^x \tilde g_N(t)\, dt + \mathcal E_N, \tag{A.5}
\]
where the error term 𝓔_N satisfies
\[
|\mathcal E_N| \le \Big| \int_0^x g_N(t)\, dt - \int_0^x \tilde g_N(t)\, dt \Big| + \int_0^x \Big| g_N(t) - g_N\Big(\tfrac{\lfloor tN \rfloor}{N}\Big) \Big|\, dt.
\]
The definition of g̃_N together with the uniform boundedness of g_N in L¹([0, 1]) imply that the first term converges to zero as N → ∞, while the second term goes to zero by the uniform Hölder bound (A.4). We can then conclude by going back to (A.5) and noting that on the one hand h^N(x) converges to h(x) by assumption, and that on the other hand the weak convergence of g̃_N in H¹([0, 1]) implies that ∫₀ˣ g̃_N(t) dt converges to ∫₀ˣ g(t) dt.
Proposition A.3. For any N ∈ N, N > 2, the variational problem, minimise H_{Λ_N} in Ω^N_r, has a unique bi-harmonic solution φ∗_{r,N} ∈ Ω^N_r satisfying
\[
\begin{cases}
\Delta^2 \phi^*_{r,N}(x) = 0 & \text{for } x \in \{1, \dots, N-1\},\\
\phi^*_{r,N}(x) = \psi^{(N)}(x) & \text{for } x \in \partial\Lambda_N = \{-1, 0, N, N+1\}.
\end{cases} \tag{A.6}
\]
Let h∗_{r,N}(ξ) := N^{−2} φ∗_{r,N}(ξN) for ξ ∈ {−1/N, 0, 1/N, . . . , 1, (N+1)/N}; then h∗_{r,N} Γ-converges to h∗_r as N → ∞. Moreover,
\[
\frac1N H_{\Lambda_N}(\phi^*_{r,N}) \longrightarrow E(h^*_r) = \frac12 \int_0^1 \ddot h^*_r(t)^2\, dt \qquad \text{as } N \to \infty.
\]
Proof. Clearly, the Euler–Lagrange equations are equivalent to (A.6), and thus
\[
H_{\Lambda_N}(\phi) \ge \big\langle \phi^*_{r,N},\, \phi^*_{r,N} \big\rangle_{\mathbb R^{\Lambda_N}}.
\]
Similar to Proposition A.1 one can show that the unique minimiser φ∗_{r,N} ∈ Ω^N_r is a polynomial of order three such that
\[
h^*_{r,N}(t) = a_N + \alpha_N t + k_N(r)\, t^2 + c_N(r)\, t^3, \qquad t \in \big\{-\tfrac1N, 0, \tfrac1N, \dots, 1, \tfrac{N+1}{N}\big\},
\]
with
\[
a_N = a; \qquad \alpha_N = \frac{2b - a(2+3N) + N\big(3b + \alpha(N+1) - \beta\big)}{(N+1)(N+2)};
\]
\[
k_N(r) = N\, \frac{-\alpha + \beta + N\big(3(b-a) - 2\alpha - \beta\big)}{(N+1)(N+2)}; \qquad
c_N(r) = N^2\, \frac{2(a-b) + \alpha + \beta}{(N+1)(N+2)}.
\]
This implies the convergence of h∗_{r,N} to h∗_r. Thus we have established the limit for a recovery sequence, and the statement on Γ-convergence follows in conjunction with Lemma A.2. To see the convergence of the minimal energies, note that for any N the function h∗_{r,N} is the unique minimiser of the functional E_N; see the proof of Lemma A.2.
Appendix B. Partition function
We collect some known results about the partition function for the case with no pinning (see [Bor10] and [BS99]). The partition function with zero boundary condition r = 0 is
\[
Z_N(0) = \int e^{-H_{[-1,N+1]}(\phi)} \prod_{k=1}^{N-1} d\phi_k \prod_{k\in\{-1,0,N,N+1\}} \delta_0(d\phi_k)
= \int_{\mathbb R^{N-1}} e^{-\frac12 \langle w, B_{N-1} w\rangle} \prod_{i=1}^{N-1} dw_i
= \Big( \frac{(2\pi)^{N-1}}{\det(B_{N-1})} \Big)^{1/2}, \tag{B.1}
\]
with
\[
\det(B_{N-1}) = \frac{1}{12}\, \big(2 + (N-1)\big)^2 \big(3 + 4(N-1) + (N-1)^2\big),
\]
where the matrix B_{N−1} reads
\[
B_{N-1} = \begin{pmatrix}
6 & -4 & 1 & 0 & \cdots & 0\\
-4 & 6 & -4 & 1 & 0 & \cdots\\
1 & -4 & 6 & -4 & 1 & \cdots\\
0 & 1 & -4 & 6 & -4 & \cdots\\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots\\
0 & \cdots & \cdots & 1 & -4 & 6
\end{pmatrix}.
\]
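Both the matrix and the determinant formula are easy to reproduce numerically. The sketch below (our code, not from the paper) assembles B_{N−1} = DᵀD from the discrete Laplacian with zero boundary values at −1, 0, N, N + 1 and compares det(B_{N−1}) with the closed form above, which simplifies to (N+1)²N(N+2)/12.

import numpy as np

def B_matrix(N):
    # D maps the free values phi_1, ..., phi_{N-1} to (Delta phi_k)_{k=0,...,N},
    # with phi_{-1} = phi_0 = phi_N = phi_{N+1} = 0; then B_{N-1} = D^T D
    D = np.zeros((N + 1, N - 1))
    for k in range(N + 1):
        for j in range(1, N):
            if j == k - 1 or j == k + 1:
                D[k, j - 1] += 1.0
            elif j == k:
                D[k, j - 1] += -2.0
    return D.T @ D

for N in range(2, 30):
    det = np.linalg.det(B_matrix(N))
    closed = (2 + (N - 1))**2 * (3 + 4*(N - 1) + (N - 1)**2) / 12.0
    assert np.isclose(det, closed)
print("det(B_{N-1}) matches (B.1) for N = 2, ..., 29")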
We can easily obtain the following relation between the partition function with given boundary condition r (via ψ^{(N)}) and the one with zero boundary condition r = 0 for models without pinning:
\[
Z_N(r) = \exp\big( -H_{\Lambda_N}(\phi^*_{r,N}) \big)\, Z_N(0). \tag{B.2}
\]
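Relation (B.2) is simply completing the square for the Gaussian Hamiltonian: for any φ with boundary data ψ^{(N)} one has H(φ) = H(φ∗_{r,N}) + ½(w − w∗)ᵀB_{N−1}(w − w∗), where w collects the interior values, and integrating out w yields (B.2). A minimal numerical sketch of this decomposition (our code, with a randomly chosen boundary condition) reads:

import numpy as np

N = 10
rng = np.random.default_rng(7)

# Delta-operator on phi over the sites -1, ..., N+1 (vector index = site + 1)
D = np.zeros((N + 1, N + 3))
for k in range(N + 1):
    D[k, k] += 1.0; D[k, k + 1] += -2.0; D[k, k + 2] += 1.0
H = lambda phi: 0.5 * np.sum((D @ phi) ** 2)

boundary = [0, 1, N + 1, N + 2]              # sites -1, 0, N, N+1
interior = [i for i in range(N + 3) if i not in boundary]
B = D[:, interior].T @ D[:, interior]        # the matrix B_{N-1}

# minimiser phi* with boundary data psi: solve the normal equations
phi_star = np.zeros(N + 3)
phi_star[boundary] = rng.normal(size=4)      # some boundary condition psi^(N)
phi_star[interior] = np.linalg.solve(B, -D[:, interior].T @ (D @ phi_star))

# H(phi) = H(phi*) + 0.5 (w - w*)^T B (w - w*) for any phi with the same boundary
phi = phi_star.copy()
phi[interior] += rng.normal(size=len(interior))
d = phi[interior] - phi_star[interior]
assert np.isclose(H(phi), H(phi_star) + 0.5 * d @ B @ d)
print("completing-the-square decomposition behind (B.2) verified")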
Acknowledgments
S. Adams thanks J.-D. Deuschel for discussions on this problem, as well as T. Funaki and H. Sakagawa for discussions on Laplacian models and for the warm hospitality during his visit in July 2015.
References
[Ant05] S. Antman, Nonlinear Problems of Elasticity, Applied Mathematical Sciences 107, 2nd ed., Springer (2005).
[BFO09] E. Bolthausen, T. Funaki and T. Otobe, Concentration under scaling limits for weakly pinned Gaussian random walks, Probab. Theory Relat. Fields 143, 441–480 (2009).
[BCF14] E. Bolthausen, T. Chiyonobu and T. Funaki, Scaling limits for weakly pinned Gaussian random fields under the presence of two possible candidates, arXiv:1406.7766v1 (preprint) (2014).
[Bor10] M. Borecki, Pinning and Wetting Models for Polymers with (∇ + ∆)-Interaction, PhD thesis, TU Berlin (2010).
[BLL00] R. Bundschuh, M. Lässig and R. Lipowsky, Semiflexible polymers with attractive interactions, Eur. Phys. J. E 3, 295–306 (2000).
[BS99] A. Böttcher and B. Silbermann, Introduction to Large Truncated Toeplitz Matrices, Springer-Verlag (1999).
[CD08] F. Caravenna and J.-D. Deuschel, Pinning and wetting transition for (1+1)-dimensional fields with Laplacian interaction, Ann. Probab. 36, No. 6, 2388–2433 (2008).
[DS89] J.-D. Deuschel and D. Stroock, Large Deviations, American Mathematical Society (1989).
[CD09] F. Caravenna and J.-D. Deuschel, Scaling limits of (1+1)-dimensional pinning models with Laplacian interaction, Ann. Probab. 37, No. 3, 903–945 (2009).
[DZ98] A. Dembo and O. Zeitouni, Large Deviations Techniques and Applications, 2nd edition, Springer, Berlin (1998).
[F] T. Funaki, Stochastic Interface Models, in: J. Picard (ed.), Lectures on Probability Theory and Statistics, Lecture Notes in Mathematics, vol. 1869, Springer-Verlag, Berlin (2005).
[FO10] T. Funaki and T. Otobe, Scaling limits for weakly pinned random walks with two large deviation minimizers, J. Math. Soc. Japan 62, No. 3, 1005–1041 (2010).
[FS04] T. Funaki and H. Sakagawa, Large deviations for ∇ϕ interface models and derivation of free boundary problems, Adv. Stud. Pure Math. 39, 173–211 (2004).
[GSV05] D. Gasbarra, T. Sottinen and E. Valkeila, in: Stochastic Analysis and Applications: The Abel Symposium 2005 (F.E. Benth, G. Di Nunno, T. Lindstrøm, B. Øksendal and T. Zhang, eds.), 361–382, Springer (2007).
[HV09] O. Hryniv and Y. Velenik, Some rigorous results about semiflexible polymers I. Free and confined polymers, Stoch. Proc. Appl. 119, 3081–3100 (2009).
[Led96] M. Ledoux, Isoperimetry and Gaussian analysis, in: Lectures on Probability Theory and Statistics, Springer, 165–294 (1996).
[Mit13] D. Mitrea, Distributions, Partial Differential Equations, and Harmonic Analysis, Springer Universitext (2013).
[Mog76] A.A. Mogul'skii, Large deviations for trajectories of multi-dimensional random walks, Theory Probab. Appl. 21, 300–315 (1976).
[Sin92] Y.G. Sinai, Distribution of some functionals of the integral of a random walk, Teoret. Mat. Fiz. 90, 323–353 (1992).
Mathematics Institute, University of Warwick, Coventry CV4 7AL, United Kingdom
E-mail address: S.Adams@warwick.ac.uk