A VIEW OF EXTREMALITY IN TERMS OF PROBABILITY KERNELS:
MIXING PROPERTIES OF µ+ AND µ− FOR ISING MODEL ON Z^2
RAIMUNDO BRICEÑO
In these notes we will study the structure of the set of Gibbs measures G(γ) for a given specification γ.
So far we have been studying particular cases of (Gibbsian) specifications, namely the ones induced
by a nearest-neighbour (n.n.) interaction potential Φ. This is an important class, of which the Ising model on
Z^2 is one of the main examples. By using the idea that the set G(γ) is nothing more than a set of probability
measures preserved by a particular family of probability kernels, we will establish properties of the convex
set G(γ) and characterize its extreme elements, which will ideally represent an equilibrium state (or phase) of
a real system. Using the same formalism, we will also find connections between ergodicity and extremality.
1. Measure kernels
Let (X, X ) and (Y, Y) be two measurable spaces. A function π : X × Y → [0, ∞] is called a (probability)
kernel from Y to X if:
(i) π (·|y) is a (probability) measure on (X, X ) for all y ∈ Y , and
(ii) π (A|·) is Y-measurable for each A ∈ X .
Example 1.1. Given ϕ : Y → X measurable, the function (A, y) ↦ 1A ◦ ϕ(y) is a kernel from Y to X.
A kernel π from Y to X maps measures µ on (Y, Y) to measures µπ on (X, X ), where:
µπ(A) := ∫ dµ π(A|·), for A ∈ X.
Let f : X → R be a measurable function. Then πf : Y → R is a measurable function given by:
πf := π(f|·) = ∫ π(dx|·) f(x).
On the other hand, if f ≥ 0, we let fπ be the kernel from Y to X defined by:
fπ(A|·) := π(f 1A) = ∫A π(dx|·) f(x), for A ∈ X.
If (Z, Z) is a third measurable space, then the composition π1 π2 of a kernel π1 from Z to Y and a kernel
π2 from Y to X is a kernel from Z to X defined by the formula:
π1π2(A|z) = ∫ π1(dy|z) π2(A|y), for A ∈ X, z ∈ Z.
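In the finite-state case all of these operations reduce to matrix algebra: a kernel from Y to X becomes a row-stochastic matrix, µπ a left multiplication, πf a right multiplication, and kernel composition a matrix product. A minimal numerical sketch (the spaces, matrices and function below are illustrative choices, not taken from the notes):

```python
import numpy as np

# Kernel pi from Y (3 states) to X (2 states): row y is the measure pi(.|y).
pi = np.array([[0.2, 0.8],
               [0.5, 0.5],
               [0.9, 0.1]])

mu = np.array([0.3, 0.3, 0.4])   # probability measure on Y
f = np.array([1.0, -1.0])        # measurable function on X

mu_pi = mu @ pi   # measure mu*pi on X: (mu pi)(A) = int dmu pi(A|.)
pi_f = pi @ f     # function pi*f on Y: (pi f)(y) = int pi(dx|y) f(x)

# Composition: a kernel pi1 from Z (2 states) to Y, composed with pi.
pi1 = np.array([[0.6, 0.4, 0.0],
                [0.1, 0.2, 0.7]])
comp = pi1 @ pi   # kernel pi1 pi2 from Z to X, as in the display above

assert np.isclose(mu_pi.sum(), 1.0)        # mu*pi is again a probability measure
assert np.allclose(comp.sum(axis=1), 1.0)  # each comp(.|z) is a probability measure
```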
We will focus on a particular family of probability kernels. Let S be a countably infinite set and (E, E)
any measurable space. (Think of S = Z^2, E = {−1, +1} and E = 2^E.) We consider the product space
(Ω, F), given by Ω = E^S = {ω = (ωi)i∈S : ωi ∈ E} and F = E^S, the product σ-algebra. We denote the set
of probability measures on (Ω, F) by P(Ω, F) (the set of random fields).
Consider (Ω, F) and B ⊆ F a sub-σ-algebra. A probability kernel π : F ×Ω → [0, ∞] is a proper probability
kernel from B to F if:
(i) π (·|ω) is a probability measure on (Ω, F), for all ω ∈ Ω,
(ii) π (A|·) is B-measurable, for each A ∈ F, and
(iii) π (B|·) = 1B , for B ∈ B (whence the name “proper”).
2. Specifications
Let S = {Λ ⊂ S : 0 < |Λ| < ∞} be the set of all non-empty finite subsets of S. We denote by σi : Ω → E
the projection ω ↦ ωi , and by σΛ : Ω → E^Λ its natural extension to subsets Λ ⊆ S. By definition, F is the
smallest σ-algebra on Ω containing the cylinder events {σΛ ∈ A}, for Λ ∈ S and A ∈ E^Λ. For each ∆ ⊆ S,
we consider the σ-algebra F∆ of all events occurring in ∆, and we set TΛ := FS\Λ . The tail σ-algebra is defined as:
T := ⋂Λ∈S TΛ .
Definition 2.1. A specification with parameter set S and state space (E, E) is a family γ = (γΛ )Λ∈S of
proper probability kernels γΛ from TΛ to F which satisfy the consistency condition γ∆ γΛ = γ∆ , when Λ ⊂ ∆.
The random fields in the set:
(2.1)    G(γ) := {µ ∈ P(Ω, F) : µ(A|TΛ ) = γΛ (A|·) µ-a.s. for all A ∈ F and Λ ∈ S}
(2.2)          = {µ ∈ P(Ω, F) : µγΛ = µ, for all Λ ∈ S}
are then said to be specified or to be admitted by γ.
Remark 1. If π is a proper probability kernel from B to X and µ ∈ P(X, X ), then:
µ(A|B) = π(A|·) µ-a.s., for all A ∈ X ⇐⇒ µπ = µ.
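Both directions of Remark 1 are short computations. A sketch, where the converse uses the standard consequence of properness that π(A ∩ B|·) = 1B π(A|·) for B ∈ B:

```latex
(\Rightarrow)\quad \mu\pi(A) = \int \pi(A\mid\cdot)\,d\mu
  = \int \mu(A\mid\mathcal{B})\,d\mu = \mu(A).
\qquad
(\Leftarrow)\quad \mu\bigl(\mathbf{1}_B\,\pi(A\mid\cdot)\bigr)
  = \mu\pi(A\cap B) = \mu(A\cap B) \quad\text{for all } B\in\mathcal{B}.
```

Since π(A|·) is B-measurable, the second identity is precisely the defining property of the conditional probability µ(A|B).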
Definition 2.2. An interaction potential is a family Φ = (ΦA )A∈S of functions ΦA : Ω → R such that:
(i) For each A ∈ S, ΦA is FA -measurable.
(ii) For all Λ ∈ S and ω ∈ Ω, the Hamiltonian H^Φ_Λ(ω) := ∑_{A∈S, A∩Λ≠∅} ΦA (ω) exists.
Idea (Gibbsian specification): γ^Φ_Λ({σΛ = ξΛ }|ω) ∝ exp[−βH^Φ_Λ(ξΛ ωΛᶜ)], where ω represents the frozen
“surrounding world”. In the n.n. case, this is equivalent to asking that the conditional probabilities of µ in finite volumes
be proportional to the measures “µS,δ ” defined in class. The elements of G(γ^Φ) are called Gibbs measures.
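In finite volume the kernel above is directly computable: freeze the spins outside Λ, enumerate the configurations inside, and normalize the Boltzmann weights. The sketch below builds γΛ for the n.n. Ising model on Z^2 (the value of β and the all-plus boundary condition are illustrative) and numerically checks the consistency condition γ∆γΛ = γ∆ of Definition 2.1 for Λ = {(0,0)} ⊂ ∆ = {(0,0),(1,0)}:

```python
import itertools
import math

beta = 0.5                  # illustrative inverse temperature
omega_plus = lambda s: +1   # frozen "surrounding world": the all-plus configuration

def neighbors(s):
    x, y = s
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def gamma(volume, boundary):
    """Gibbs kernel gamma_volume(.|boundary) as a dict over interior configurations."""
    weights = {}
    for xi in itertools.product([-1, +1], repeat=len(volume)):
        spins = dict(zip(volume, xi))
        h = 0.0
        for i in volume:
            for j in neighbors(i):
                if j in spins:
                    if j > i:                  # interior bond, counted once
                        h -= spins[i] * spins[j]
                else:                          # bond to the frozen exterior
                    h -= spins[i] * boundary(j)
        weights[xi] = math.exp(-beta * h)
    z = sum(weights.values())
    return {xi: w / z for xi, w in weights.items()}

lam = [(0, 0)]
delta = [(0, 0), (1, 0)]
g_delta = gamma(delta, omega_plus)

# Composition (gamma_delta gamma_lam)({sigma_lam = +1}|omega): draw xi from
# gamma_delta, then resample inside lam with xi frozen outside lam.
comp = 0.0
for xi, w in g_delta.items():
    frozen = dict(zip(delta, xi))
    comp += w * gamma(lam, lambda s, f=frozen: f.get(s, omega_plus(s)))[(+1,)]

# Marginal of gamma_delta on lam (site (0, 0) is the first coordinate of xi):
marg = sum(w for xi, w in g_delta.items() if xi[0] == +1)

assert abs(comp - marg) < 1e-12   # consistency: gamma_delta gamma_lam = gamma_delta
```

With the plus boundary the all-plus configuration gets the largest weight, as expected from the ferromagnetic interaction.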
3. Extremality
Definition 3.1. An element µ of a convex subset C of any real vector space is said to be extreme in C
(µ ∈ ex C) if µ ≠ sν + (1 − s)ν′ for all 0 < s < 1 and ν, ν′ ∈ C with ν ≠ ν′.
Proposition 3.1. Let (Ω, F) be a measurable space, π a probability kernel from F to F, and µ ∈ P(Ω, F)
with µπ = µ. Then the system:
Iπ (µ) = {A ∈ F : π(A|·) = 1A µ-a.s.}
of all µ-almost surely π-invariant sets is a σ-algebra, and for all measurable f : Ω → [0, ∞[ we have:
(f µ)π = f µ ⇐⇒ f is Iπ (µ)-measurable.
Proof. Suppose f is Iπ (µ)-measurable; we will show that (f µ)π = f µ. Since f is the limit of an increasing
sequence of Iπ (µ)-measurable step functions, it is sufficient to prove that (1A µ)π = 1A µ for all
A ∈ Iπ (µ). We choose any B ∈ F. Then
(1A µ)π(B) = (1A µ)π(A ∩ B) + (1A µ)π(B\A)
≤ µπ(A ∩ B) + µ(1A π(Ω\A|·))
= µ(A ∩ B) + µ(1A 1Ω\A )
= (1A µ)(B).
Similarly, (1A µ)π(Ω\B) ≤ (1A µ)(Ω\B). This implies (1A µ)π(B) = (1A µ)(B), because
(1A µ)π(B) + (1A µ)π(Ω\B) = µ(A) = (1A µ)(B) + (1A µ)(Ω\B).
This proves the “if” part; for the converse implication, see [1].
We will say a probability measure µ is trivial on a σ-algebra A if µ(A) = 0 or 1, for all A ∈ A.
Corollary 1. Let (Ω, F) be a measurable space, Π a non-empty set of probability kernels from F to F, and
PΠ = {µ ∈ P(Ω, F) : µπ = µ for all π ∈ Π}
the convex set of all Π-invariant probability measures on (Ω, F). Let µ ∈ PΠ be given and let IΠ (µ) =
⋂π∈Π Iπ (µ) be the σ-algebra of all µ-almost surely Π-invariant sets. Then µ ∈ ex PΠ if and only if µ
is trivial on IΠ (µ).
Proof. Suppose there exists a set A ∈ IΠ (µ) such that 0 < µ(A) < 1. Then we may look at the conditional
probabilities
ν = µ(·|A) = f µ,  ν′ = µ(·|Ac ) = f′µ,
where f = 1A /µ(A) and f′ = 1Ac /µ(Ac ). Clearly, ν ≠ ν′ and µ = µ(A)ν + (1 − µ(A))ν′. But Proposition
3.1 asserts that ν, ν′ ∈ PΠ , because f and f′ are Iπ (µ)-measurable for all π ∈ Π. Thus µ is not extreme.
We can also define the σ-algebra:
IΠ = {A ∈ F : π(A|·) = 1A for all π ∈ Π} ⊆ IΠ (µ).
Remark 2. Suppose γ is a specification. Then Iγ = T . If µ ∈ G(γ) = Pγ , then Iγ (µ) is the µ-completion
of T .
Proof. If A ∈ T then γΛ (A|·) = 1A for all Λ ∈ S because all γΛ ’s are proper. Conversely, if A ∈ Iγ then
A = {γΛ (A|·) = 1} ∈ TΛ for all Λ ∈ S and therefore A ∈ T .
The preceding remark implies in particular that Iγ (µ) is µ-trivial if and only if T is µ-trivial.
Theorem 3.2. Let γ be a specification. Then the following conclusions hold.
(a) A Gibbs measure µ ∈ G(γ) is extreme in G(γ) if and only if µ is trivial on the tail σ-field T .
(b) If µ ∈ G(γ) and ν ∈ P(Ω, F) is absolutely continuous with respect to µ, then ν ∈ G(γ) if and only if
ν = f µ for some T -measurable function f ≥ 0.
(c) Each µ ∈ G(γ) is uniquely determined (within G(γ)) by its restriction to the tail σ-field T .
(d) Distinct extreme elements µ, ν of G(γ) are mutually singular on T , in that there is some A ∈ T with
µ(A) = 1, ν(A) = 0.
Idea: A system’s state is described by a suitable extreme element (phase) of G(γ).
• Microscopic quantities: rapid fluctuations, random, consistent with observed empirical distribution
of microscopic variables (Gibbs distribution in finite volumes).
• Macroscopic quantities: constant, non-random, tail measurable functions!
Proposition 3.3. For each µ ∈ P(Ω, F) the following statements are equivalent.
(i) µ is trivial on T .
(ii) For all cylinder events A (or, equivalently, for all A ∈ F),
limΛ∈S supB∈TΛ |µ(A ∩ B) − µ(A)µ(B)| = 0.
Proof. Suppose µ is trivial on T . Let A ∈ F be given. The backward martingale convergence theorem
asserts that for each cofinal increasing sequence (Λn )n≥1 in S
µ(A|TΛn ) → µ(A|T )
in the L¹(µ)-sense. As µ is trivial on T , µ(A|T ) = µ(A) µ-a.s. Thus for each ε > 0 there is some ∆ ∈ S
such that:
µ(|µ(A|T∆ ) − µ(A)|) < ε.
For all ∆ ⊂ Λ ∈ S we have:
supB∈TΛ |µ(A ∩ B) − µ(A)µ(B)| ≤ supB∈T∆ |∫B dµ (µ(A|T∆ ) − µ(A))| ≤ µ(|µ(A|T∆ ) − µ(A)|) < ε.
This proves (ii). Conversely, if A ∈ T , then A ∈ TΛ for every Λ ∈ S, so taking B = A in (ii) yields µ(A) = µ(A)², that is, µ(A) ∈ {0, 1}.
4. Ergodic random fields
Fix S = Z^d, the d-dimensional integer lattice, and let Θ = (θi )i∈S denote the shift group. Then:
PΘ (Ω, F) = {µ ∈ P(Ω, F) : θi (µ) = µ for all i ∈ S} ,
is the set of shift-invariant random fields on Z^d (which is always non-empty). We will consider the σ-algebra
IΘ = {A ∈ F : θi A = A for all i ∈ S}
of all shift-invariant events.
Remark 3.
(1) An F-measurable function f : Ω → R is IΘ -measurable if and only if f is invariant, in that f ◦θi = f ,
for all i ∈ S.
(2) For each µ ∈ PΘ (Ω, F), the σ-algebra
IΘ (µ) = {A ∈ F : 1A ◦ θi = 1A µ-a.s. for all i ∈ S}
of all µ-almost surely invariant events is the µ-completion of IΘ .
We can see the parallel between G(γ) and T , and PΘ (Ω, F) and IΘ , by defining the family of probability
kernels Π = {θ̂i : i ∈ S} from F to F given by:
θ̂i (A|ω) = 1A (θi ω), for i ∈ S, A ∈ F, ω ∈ Ω.
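On a finite torus (a computable stand-in for Z^d; the torus size and Bernoulli parameter below are arbitrary illustrative choices) the kernels θ̂i are deterministic kernels in the sense of Example 1.1, and shift-invariance of a measure is the matrix identity µθ̂ = µ. A minimal sketch:

```python
import itertools
import numpy as np

n, p = 4, 0.3   # torus Z/4 as a finite stand-in for S, Bernoulli(p) spins
configs = list(itertools.product([0, 1], repeat=n))
idx = {c: k for k, c in enumerate(configs)}

# i.i.d. product measure mu on {0, 1}^n
mu = np.array([np.prod([p if s else 1 - p for s in c]) for c in configs])

# Deterministic shift kernel: theta_hat(A|omega) = 1_A(theta omega)
theta = np.zeros((len(configs), len(configs)))
for c in configs:
    shifted = c[1:] + c[:1]          # the shift theta rotates the coordinates
    theta[idx[c], idx[shifted]] = 1.0

assert np.allclose(mu @ theta, mu)   # mu theta_hat = mu: mu is shift-invariant
```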
Theorem 4.1.
(1) A probability measure µ ∈ PΘ (Ω, F) is extreme in PΘ (Ω, F) if and only if µ is trivial on the invariant
σ-algebra IΘ .
(2) Suppose µ ∈ PΘ (Ω, F) and ν ∈ P(Ω, F) is absolutely continuous relative to µ. Then ν ∈ PΘ (Ω, F)
if and only if ν = f µ for some IΘ -measurable function f .
(3) Each µ ∈ PΘ (Ω, F) is uniquely determined (within PΘ (Ω, F)) by its restriction to IΘ .
(4) Distinct probability measures µ, ν ∈ ex PΘ (Ω, F) are mutually singular on IΘ , in that there exists
an A ∈ IΘ such that µ(A) = 1 and ν(A) = 0.
Notice the similarity between Theorem 4.1 and Theorem 3.2.
Proposition 4.2. If µ ∈ ex G(γ) ∩ PΘ (Ω, F), then µ ∈ ex PΘ (Ω, F).
Proof. If µ ∈ ex G(γ), then by Theorem 3.2(a) and Proposition 3.3, for all cylinder events A, limΛ∈S supB∈TΛ |µ(A ∩ B) − µ(A)µ(B)| = 0. Let A, C
be two cylinder events. W.l.o.g., consider A, C ∈ FΛN , for some N. It suffices to prove that:
∀ε > 0, ∃n0 ∈ N : ‖i‖∞ ≥ n0 =⇒ |µ(A ∩ θi C) − µ(A)µ(C)| < ε.
Let m be such that |µ(A ∩ B) − µ(A)µ(B)| < ε, for all B ∈ TΛm . Taking n0 = m + (2N + 1), we have
that if ‖i‖∞ ≥ n0 , then θi C ∈ TΛm , so:
ε > |µ(A ∩ θi C) − µ(A)µ(θi C)| = |µ(A ∩ θi C) − µ(A)µ(C)| ,
since µ ∈ PΘ (Ω, F).
Corollary 2. µ+ and µ− are strong mixing.
Proof. From Ben’s talk we know that:
µ−Λ ≼ µηΛ ≼ µ+Λ .
Taking the mean µ(dη) in the previous equation, we obtain that µ−Λ ≼ µ ≼ µ+Λ , and since stochastic
domination is preserved under weak limits, we end up with:
µ− ≼ µ ≼ µ+
when µ is any Ising-model Gibbs measure for fixed β and h. This shows that µ− and µ+ are extremal.
Indeed, if µ+ = sµ1 + (1 − s)µ2 for µ1 , µ2 ∈ G(γ) with µ1 ≠ µ2 and 0 < s < 1, there must exist an increasing event A
such that µ1 (A) ≠ µ2 (A). W.l.o.g., we can assume that µ1 (A) < µ2 (A). Then:
µ+ (A) = sµ1 (A) + (1 − s)µ2 (A) < sµ2 (A) + (1 − s)µ2 (A) = µ2 (A),
which contradicts the fact that µ2 ≼ µ+ (µ2 being itself a Gibbs measure). Since µ− and µ+ are also shift-invariant, both
must be strong mixing (in particular, ergodic), by the proof of Proposition 4.2.
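Strong mixing says precisely that correlations between local events die out as the events are translated apart. This can be seen concretely in the one-dimensional Ising model, whose unique Gibbs measure is extreme and mixing: with h = 0 the spin-spin correlation at distance n is tanh(β)^n, computable from the transfer matrix. A sketch (the value of β is illustrative):

```python
import numpy as np

beta = 0.7   # illustrative inverse temperature, zero external field
T = np.array([[np.exp(beta), np.exp(-beta)],
              [np.exp(-beta), np.exp(beta)]])   # 1D Ising transfer matrix
lam = np.sort(np.linalg.eigvalsh(T))[::-1]      # eigenvalues 2cosh(beta) > 2sinh(beta)

# Infinite-volume spin-spin correlation at distance n (and <sigma_0> = 0):
#   <sigma_0 sigma_n> = (lam_2 / lam_1)^n = tanh(beta)^n
for n in [1, 5, 10]:
    assert np.isclose((lam[1] / lam[0]) ** n, np.tanh(beta) ** n)

# Correlations vanish exponentially as n grows: the measure is mixing.
assert (lam[1] / lam[0]) ** 50 < 1e-5
```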
Theorem 4.3 (Aizenman–Higuchi, 1980). For the Ising model on Z^2 with no external field and inverse
temperature β > βc , µ+ and µ− are the only phases, and any other Gibbs measure is a mixture of these two.
References
[1] H.-O. Georgii. Gibbs Measures and Phase Transitions. de Gruyter, Berlin–New York (1988).
[2] H.-O. Georgii, O. Häggström and C. Maes. The random geometry of equilibrium phases. In: Phase Transitions and Critical Phenomena, Vol. 18, Academic Press (2001).