Towards the Formulation of an Edge Based Random Permutation Model

by Owen Daniel

Dissertation submitted to the University of Warwick for the degree of Master of Science
Supervised by Stefan Adams
Mathematics, September 2011

Contents

Acknowledgments ii
Abstract iii
Chapter 1 Introduction 1
1.1 Prior Remarks 2
1.2 Lattice Percolation 2
1.3 Continuum Percolation 5
Chapter 2 Formulation of the Model and Justification 15
2.1 Formulation of the Model 15
2.2 The Canonical Ensemble 18
2.3 Random Partitions and Permutations 23
2.4 Bose-Einstein Condensation 27
Chapter 3 Analysis of the Canonical Ensemble 30
3.1 Bose Statistics Weight 30
3.2 Uniform and Ewens Weights 41
Chapter 4 Conclusion 44

Acknowledgments

I would like to thank Stefan Adams for his support throughout the writing of this dissertation, and for his friendship over the past year. I also wish to thank my parents, without whom I could not have undertaken this MSc. Finally, thanks to my sister (the family biologist) for explaining enough genetics for me to be able to include two lines of biological insight!

Abstract

We introduce a new model for continuum percolation of spatial permutations, and give a background in both percolation and the theory of random permutations. We give a detailed analysis of the partition function and associated specific free energy in the canonical ensemble, and relate the findings to a model for Bose-Einstein condensation.

Chapter 1 Introduction

Percolation was first developed by Broadbent and Hammersley [BH57], who were motivated by the study of the spread of liquids through porous media. Rather than seeing this as the diffusion of a randomly moving liquid, the percolation models introduced by the authors ascribe the random behaviour to the medium. Since this first defining paper, the notion of percolation has expanded to cover a vast range of models. Simply put, percolation deals with the distributional properties of random graphs under certain defining parameters. The field divides into two sub-areas: lattice and continuum models. In the former, a lattice L acts as a reference graph and a configuration is an element of {0, 1}E(L), where E(L) denotes the edge set of the lattice. For continuum percolation, our configuration is of the form {0, 1}E, where E is the edge set of the complete graph on a vertex set ξ ⊂ Rd, a locally finite collection of points. Throughout we will always work on almost surely infinite graphs (though notions have been developed for defining percolation in a finite setting). We say that percolation occurs if the random graph almost surely contains a connected component (or cluster) with infinitely many vertices. A model which percolates is said to be in the super critical phase, otherwise it is said to be sub critical; whether or not the model is sub or super critical depends on some variable parameters.
One of the fundamental questions of percolation is to determine whether there is a critical value of the parameter at which the model changes from being sub to super critical. A model which has such a critical value is said to exhibit a phase transition. We will introduce both lattice and continuum percolation, outlining some key results in the literature. Emphasis, however, is put on continuum models since this is the setting for the remainder of this work. Before continuing we make a few comments on notation and conventions. 1 1.1 Prior Remarks Throughout we will use (Ω, F, P) to denote a probability space. Unless otherwise defined, (Ω, F) will be a generic measurable space, and P will be a relevant measure induced by the e µ to denote model, which will be apparent from the setting. In a few cases we will also use P, P, probability measures. M1 (A) will denote the set of probability measures on a set A. Maps X : Ω → S, for a state space S, will always be measurable (i.e. random variables). Indicator functions will be denoted by χ : A → {0, 1} χB (a) = 1 if a ∈ B, 0 if a ∈ / B. Graphs are given by the pair G = (V, E) where V is the vertex set, and E is the set of edges. As is common in percolation, we may refer to vertices as points or sites, and edges as bonds. When we refer to a lattice we will mean the graph obtained from the lattice by taking the points of intersection as the vertices, and the line segments between vertices as the edges. A → − directed graph is G = (V, E) where the directed edge from x to y is hx, yi. For countable sets A, the cardinality of A will be denoted #A or |A|. For subsets B ⊆ Rd then |B| will represent the d-dimensional Lebesgue volume. All figures were modeled using Matlab, and (where relevant) are actual stochastic simulations. 1.2 Lattice Percolation For the purpose of this introduction we restrict ourselves to the lattice L = Zd though significant work has been done for other lattices, in particular for the triangular lattice in two dimensions. Let V be the set of sites, V = {(v1 , . . . , vd ) : vi ∈ Z, 1 ≤ i ≤ d}, and let E be the set of all nearest-neighbour line segments between sites. Following the notation of [BR06], a configuration is a map ω : E → {0, 1} and the sample space is Ω = {0, 1}E , the collection of all configurations. Given a configuration ω ∈ Ω, a bond e ∈ E is said to be open if ω(e) = 1, else it is closed. Avoiding technical detail, we can define probability measures P for a suitable σ-algebra of Ω. In particular we are interested in the Bernoulli bond percolation measure which for p ∈ [0, 1] satisfies: Pp [ω(e) = 1|ω] = p. That is each bond is chosen to be either open or closed according 2 to a Bernoulli p-measure, independently of all other bonds. Figure.1.1 shows a section of a typical configuration. Any ω ∈ Ω can readily be seen as an instance of a random graph, and we freely switch between this interpretation, and the technical view as a configuration in the state space. Two sites u, v ∈ V are said to be in the same component if they are connected by a sequence of bonds (for more details of graph theory terminology, see [Bo98]). The component containing v ∈ V is denoted Cv . As hinted before, our fundamental problem is to determine Figure 1.1: A section of a typical Bernoulli bond percolation configuration, with p = 1/2. whether or not, and for what values of the parameter p ∈ [0, 1] it is possible for Cv to contain infinitely many sites. 
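The set-up above is straightforward to simulate on a finite box. The following is a minimal sketch in Python (rather than the Matlab used for the dissertation's figures); the function names and the boundary-crossing proxy for θ(p) are our own illustrative choices, not part of the dissertation. It samples Bernoulli bond percolation on the box {−n, …, n}² of Z², grows the open cluster of the origin by breadth-first search, and estimates the probability that C0 reaches the boundary of the box, a standard finite-volume stand-in for θ(p).

```python
import random
from collections import deque

def origin_cluster(n, p, rng):
    """Bernoulli bond percolation on the box {-n,...,n}^2 of Z^2: every
    nearest-neighbour bond is open with probability p independently;
    returns the set of sites in the open cluster C_0 of the origin."""
    open_bond = {}                       # bond states, sampled lazily

    def is_open(u, v):
        e = (u, v) if u < v else (v, u)  # canonical key for an undirected bond
        if e not in open_bond:
            open_bond[e] = rng.random() < p
        return open_bond[e]

    cluster, queue = {(0, 0)}, deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        for u in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if max(abs(u[0]), abs(u[1])) <= n and u not in cluster and is_open((x, y), u):
                cluster.add(u)
                queue.append(u)
    return cluster

def theta_box_estimate(n, p, samples=500, seed=0):
    """Finite-volume proxy for theta(p): the empirical probability that C_0
    touches the boundary of the box (an over-estimate of P_p[|C_0| = infinity])."""
    rng = random.Random(seed)
    hits = sum(
        any(max(abs(x), abs(y)) == n for (x, y) in origin_cluster(n, p, rng))
        for _ in range(samples)
    )
    return hits / samples
```

On boxes of moderate size this proxy already shows the sharp change of behaviour around p = 1/2 discussed below; it is of course only a heuristic, since on a finite box every cluster is finite.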
By translational invariance of Zd we can concentrate on the case C0 , the cluster containing the origin. Let |C0 | , # {v ∈ V : v ∈ C0 } , θ(p) , Pp [|C0 | = ∞] , ψ(p) , Pp [∃ v ∈ V : |Cv | = ∞] . Clearly we have that θ(0) = ψ(0) = 0, and θ(1) = ψ(1) = 1, and further it is known that θ is monotonically increasing in p, [Gr99]. We define the critical probability pc , sup {p : θ(p) = 0} . (1.2.1) 0≤p≤1 At first this may seem to be the wrong description: after all, for percolation to occur we want there to almost surely be an infinite component, whilst on the face of it pc is just the value at which we have positive probability of any given component being infinite. In fact a 3 simple argument using Kolmogorov’s zero-one law tells us that as soon as there is a positive probability that a given component is infinite, then there must be an infinite component. Proposition 1.2.1. For d ≥ 1 ψ(p) = 0 if p < pc , 1 if p > pc . (1.2.2) See [BR06]. For d = 1, the study of percolation is trivial; it is immediate that for p < 1 then |C0 | < ∞ almost surely, and hence pc = 1. The question becomes much harder for d ≥ 2, and in fact the picture is by no means clear for d ≥ 3. In 1960 progress was made towards identifying pc for d = 2 when Harris proved that θ(1/2) = 0, and hence pc ≥ 1/2. In 1980 the picture was completed by Kesten. Theorem 1.2.2. (Harris-Kesten) For d = 2 1 pc = . 2 (1.2.3) Details of both Harris’s lower bound, and Kesten’s upper bound can be found in [BR06]. Having confirmed the existence of infinite clusters for p > pc in Proposition 1.2.1, it is natural to ask as to their multiplicity. Theorem 1.2.3. (Aizenman-Kesten-Newman) For d ≥ 1, and p > pc then there is almost surely a unique infinite cluster. The result is actually much more powerful than that stated above, and answers the question of multiplicity of infinite clusters in various different percolation models. Details can be found in [BR06]. e = {0, 1}V , We take a moment to note that we can define a similar model by letting Ω ep where P ep [ω̃(v) = 1|ω̃] = p, so that each site is determined with a new probability measure P to be open with probability p. Now two sites u, v ∈ V are connected if there is a path joining them which goes through only open vertices. This model is referred to as Bernoulli site percolation. All the definitions above carry through, though many results which are known for bond percolation are still elusive for the site model. Whilst the full form of Theorem 1.2.3 carries over for site percolation, the analogue of the Harris-Kesten theorem is still unproven. (site) It is, however, known that pc (bond) ≥ pc , [BR06]. 4 1.3 Continuum Percolation Continuum percolation was introduced by Gilbert in 1961, [Gi61], and has grown into a vast area, with as many dissimilarities as similarities to lattice percolation. Continuum percolation models differ from lattice models in that they contend with two levels of randomness: in defining the site set, and the interaction between sites. However the classic questions remain the same between the two. The standard distribution for the site set ξ ⊆ Rd is given by the Poisson point process (P.p.p.). We recall a simple characterisation, Definition 1.3.1. For Λ : Rd → [0, ∞) measurable, a random countable subset ξ ⊂ Rd is said to have a Poisson Λ-distribution if for each bounded Borel set U ∈ B(Rd ) the random variable N (U ) , #(U ∩ ξ), satisfies 1. If U1 , . . . , Un ∈ B(Rd ) are disjoint, bounded Borel sets, then: N (U1 ), . . . , N (Un ) are independent random variables. 2. 
For a bounded Borel set U ∈ B(Rd ), then N (U ) is Poisson distributed with mean R U Λ(x)dx. We write ξ ∼ Poi(Λ). The process is said to be homogeneous if Λ(x) ≡ λ > 0. Unless specifically stated, we will work with homogeneous processes. There are several justifications for choosing to use P.p.p. Importantly, configurations are almost surely locally finite, which is required to make sense of most continuum percolation models. Secondly, the homogeneous P.p.p. is stationary, meaning the distribution is invariant under translations. Also of significance is that the P.p.p. is an ergodic process, though this will not be important for our introduction. In the following we outline four topics from the theory of continuum percolation, in all cases it is assumed that the random point set ξ ⊆ Rd is a homogeneous P.p.p. Voronoi Percolation It is natural to begin by studying Voronoi percolation; although it is an area of more recent development than other models we will introduce, it bares closest resemblance to the lattice model. A Voronoi tessellation of Rd is a way of dividing the space into neighbouring polygonal regions. Formally Definition 1.3.2. Given a locally finite point set P ⊆ Rd , #P ≥ 3, the (closed) Voronoi cell 5 Vp associated to p ∈ P is n o Vp , x ∈ Rd : |x − p| ≤ |x − q|, ∀q ∈ P , where | · | is the Euclidean d-metric. The Voronoi tessellation of P is the set of Voronoi cells about the points in P , V (P ) = {Vp : p ∈ P }. A Voronoi tessellation corresponds to a tiling of Rd into d-dimensional polytopes. We say that two cells Vp , Vq are adjacent if they share a (d − 1)-dimensional face. We can define a graph GP = (P, E) where vertices are adjacent only if their cells are adjacent. If P = {(z1 , . . . , zd ) : zi ∈ Z} then the induced graph is exactly the lattice graph used for d-dimensional square lattice percolation. Voronoi percolation is the study of site percolation on the random graph Gξ generated by the Voronoi tessellation of a P.p.p. ξ ∈ Rd . Remark 1.3.3. It is important to note that general continuum models can be built in two ways, 1. Construct a random point set ξ ⊆ Rd , then build a random structure on this set of points. 2. Construct the random structure and point set simultaneously. The first case equates to the structure on the points being independent of the points, whilst the second calls for the distribution of the points, and the structure to be correlated. In the case of Voronoi percolation we work in the first regime. This helps us to formalise the model, since we can use the fact that the distribution of two superimposed Poisson processes is itself Poisson. We follow the construction described in [BR06]. For fixed density λ > 0, and p ∈ [0, 1], let ξ + ∼ Poi(λp), ξ − ∼ Poi(λ(1 − p)) be independent Poisson point processes in Rd . Then ξ = ξ + ∪ ξ − is Poisson with density λ, and each point x ∈ ξ is in ξ + with probability p, independently of all points y ∈ ξ, y 6= x (note that almost surely no point in ξ is in both ξ + and ξ − ). Points in ξ + (respectively ξ − ) are said to be open (r. closed). An open cluster C, is a maximal connected component of G = Gξ such that C ⊂ ξ + . The left hand image in Figure.1.2 shows a section of a graph G, with open clusters marked in bold. An alternate view is to see this as face percolation on the Voronoi tessellation, where the cell Vx is designated as black if x ∈ ξ + , and white if x ∈ ξ − . 
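The two equivalent constructions just described are easy to sample. Below is a minimal Python/NumPy sketch (our own illustration, with our own function names): a homogeneous Poisson process on a box is generated by drawing a Poisson number of points and placing them uniformly, and each point is then coloured open with probability p, which by the thinning/superposition property is the same as superimposing independent Poi(λp) and Poi(λ(1 − p)) processes. The Voronoi cells of the sampled points could then be computed (for instance with scipy.spatial.Voronoi) to reproduce the face-percolation picture of Figure 1.2.

```python
import numpy as np

def poisson_point_process(lam, L, rng):
    """Homogeneous Poisson point process of intensity lam on the box [0, L]^2:
    the number of points is Poisson(lam * L^2), and given that number the
    points are i.i.d. uniform on the box."""
    n = rng.poisson(lam * L ** 2)
    return rng.uniform(0.0, L, size=(n, 2))

def coloured_process(lam, p, L, seed=0):
    """Splitting used for Voronoi percolation: each point of a Poisson(lam)
    process is declared open with probability p independently, so the open and
    closed points form independent Poisson processes of intensities
    lam * p and lam * (1 - p)."""
    rng = np.random.default_rng(seed)
    xi = poisson_point_process(lam, L, rng)
    open_mask = rng.random(len(xi)) < p
    return xi[open_mask], xi[~open_mask]   # (xi_plus, xi_minus)
```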
The right hand image in Figure.1.2 shows the face percolation diagram for the same point set; it is easily seen that black clusters in this description are one-one with open clusters in the graph setting. 6 Figure 1.2: A section of a typical Voronoi percolation configuration, with p = 1/2. Open sites are marked in black. It is convention to fix λ = 1, and consider the measures Pp for varying p ∈ [0, 1]. With some care, it is possible to reformulate our previous questions for lattice percolation in the Voronoi setting; in particular we can define a critical probability pc ∈ [0, 1] above which we almost surely have an infinite cluster, and a reworking of Aizenmann-Kesten-Newmann confirms that if such an infinite cluster exists then it is almost surely unique. In 2006, Bollobás and Riordan proved the analogue to the Harris-Kesten Theorem. Theorem 1.3.4. (Bollobás-Riordan) For Voronoi percolation in dimension d = 2 1 pc = . 2 (1.3.1) For details, see [BR06]. As with lattice percolation, few results are known for d ≥ 3. Boolean Model In the Boolean model we assign to each point x ∈ ξ a random variable ρx > 0, such that the random variables are independent, identically distributed (i.i.d.), and are independent of the underlying P.p.p. The variable ρx is interpreted as the radius of a ball around the point x. We are interested in analysing the regions in which the balls overlap. A complete rigorous construction of the model requires a deal of measure theoretic technicality which is not enlightening, so it is omitted here. The difficulty arises in giving a construction of the model which allows for independence between the process and the random variables, whilst maintaining the stationarity inherent in the point process. The reader is referred to [MR96] for the construction. 7 Definition 1.3.5. A Poisson-Boolean model in Rd is a triple (ξ, λ, ρ), where ξ is a P.p.p. with density λ > 0, and ρ > 0 is a random variable, independent of ξ. Figure 1.3: A section of a typical Boolean percolation configuration. The occupied component is shaded. As for Voronoi percolation, it is possible to see this new model as simply a random graph model where points x, y ∈ ξ are adjacent when |x − y| < ρx + ρy . However, many of the interesting results in the area come from studying the random occupied region of Rd , O = ∪x∈ξ B(x, ρx ), where B(x, ρx ) denotes the open ball or radius ρx about x, under the Euclidean metric. The vacant region is V = Rd \O. Two points are said to be in the same component, C, if they are connected in the graph theoretical sense, though the component itself is the entire open region surrounding the connected points, C ⊆ O. Figure.1.3 depicts a section of a Boolean model in the special case in which ρ is constant. Rather than talking about infinite components as we did for the previous models, we ask instead whether there is an unbounded component. It is convention to condition the P.p.p. to contain the origin, i.e. to consider the Palm process. Since taking the Palm version of a P.p.p. does not alter the distributional properties of the process, we will always assume 0 ∈ ξ. We use W ⊆ O to denote the connected component containing the origin. Lemma 1.3.6. For a Boolean model (ξ, λ, ρ), where E[ρd ] < ∞ then the number of balls B(x, ρx ) which intersect with the ball B(0, t), t > 0, is Poisson distributed with parameter Z λ Rd P[ρ ≥ |x| − t]dx. 8 (1.3.2) Further, the probability that the origin, 0 ∈ Rd , is not in the occupied component is P[W = ∅] = exp(−λπd Eρd ). 
(1.3.3) Where πd = |B(0, 1)| is the volume of a d-dimensional unit ball. See [MR96]. By stationarity we can replace the origin in (1.3.3) with any point x ∈ Rd . Letting V = V ∩ [0, 1]d , stationarity and Fubini’s theorem give Z E|V | = E Z = [0,1]d [0,1]d χV (x)dx E [χV (x)] dx = exp(−λπd Eρd ). In particular if E[ρd ] < ∞ then E|V | > 0 and P[|V | = 6 0] > 0, heuristically it is apparent that almost surely the vacant region has positive volume. The technical argument requires details concerning the ergodicity of the model. In [MR96] this picture is completed with the following result Theorem 1.3.7. Let (ξ, λ, ρ) be a Boolean model. Then O = Rd if and only if Eρd = ∞. Hence it is only interesting to study those models for which Eρd < ∞. For the Boolean model, the study of criticality revolves around the density of the process, λ > 0. Concentrating on properties of W , there are several ways to define a critical density. One option is to consider the event that W extends infinitely far, that is d(W ) , supx,y∈W |x − y| is infinite, alternatively we can consider the event that the Lebesgue d-volume of W is infinite. This is further complicated by the fact that under certain conditions it is perfectly feasible that the probability that any occupied component contains infinitely many points is 0, whilst the expected number of Poisson points in W is infinite. We define the following different critical intensities. λc , inf {λ : P[d(W ) = ∞] > 0} , λD , inf {λ : Ed(W ) = ∞} , λH , inf {λ : P[|W | = ∞] > 0} , λT , inf {λ : E|W | = ∞} . Fortunately the picture is simplified under the assumption that the radius variable is bounded. 9 Theorem 1.3.8. For a Boolean model (ξ, λ, ρ) with 0 ≤ ρ ≤ R a.s., R > 0 λc = λH , and λD = λT . (1.3.4) Further for d ≥ 2 λc = λH = λD = λT . (1.3.5) Theorem 1.3.9. Let (ξ, λ, ρ) be a Boolean model with d ≥ 2. Suppose λ > λc , and 0 ≤ ρ ≤ R a.s. Then there is almost surely a unique unbounded component. Proofs of both theorems can be found in [MR96]. Of particular interest is the special case in which ρ = r is a constant. In particular supposing r = 1 we can expect to find an exact critical value above which infinite components occur. To date no exact expression has been found, but P. Hall showed the following bounds. Proposition 1.3.10. For a boolean model with unit radius variable, (ξ, λ, 1) then the critical density is bounded by 0.174 ≤ λc ≤ 0.843. (1.3.6) See [MR96]. Alternatively fixing λ = 1,we ask for what value rc > 0 does r > rc imply percolation. In [BR06] it is shown that this is the same model, and that the critical radius above, and the critical density satisfy ac = 4λc πrc2 , the critical degree of the model, where more generally the degree a(λ, r) = 4λπr2 is the expected (graph) degree of each vertex. Simulation has also been used to obtain estimates on λc , the best computational estimate giving λc ≈ 0.359 to three decimal places, [QTZ00]. Work has also been done in considering the model in which we choose a different domain to surround the points (eg. a square or ellipse in R2 ). For a bounded convex domain S ⊂ Rd we define the occupied component for the Poisson Boolean-S model by OS = [ (x + S), x∈ξ where x + S = x + y ∈ Rd : y ∈ S . Each different choice of S has its own corresponding critical density λc (S) > 0, above which we almost surely have an infinite component. One question is what the most efficient shape is to ensure percolation. In [RT02] the authors show that for all d ≥ 2, the d-dimensional simplex has minimal critical density. 
Theorem 1.3.11. For d ≥ 2, let λc = inf λc (S) : S ⊂ Rd convex, |S| = 1 . Then the unit 10 volume d-dimensional simplex, Td , is the unique unit volume domain in Rd satisfying λc = λc (Td ). (1.3.7) Random Connection Model Let g : Rd → [0, 1], we say that g is a connection function if it satisfies g(x) = g(y), whenever |x| = |y|, g(x) ≤ g(y), whenever |x| ≥ |y|. In the random connection model we construct a graph by considering each pair of points x, y ∈ ξ and connect them with an edge with probability g(x − y) ∈ [0, 1] independently of all other pairs of points. As with the Boolean model, the precise technical definition is somewhat cumbersome, and can be found in [MR96]. Definition 1.3.12. A random connection model is a triple (ξ, λ, g), where ξ is a P.p.p. with density λ > 0, and g : Rd → [0, 1] is a connection function. The monotonicity property of g gives the model a strong spatial correlation with the probability of two points being adjacent diminishing with distance. Again we condition the process to contain the origin, and let W ⊆ G = G(ξ, E) denote the connected component containing the origin. Figure.1.4 gives examples of two instances of a random connection model, conditioned to stay inside a box. In one image the probability of two vertices being adjacent reduces exponentially with distance, whilst in the other the probability decreases linearly. In the special case g(x) = χ{|x|≤2r} (x) then the graph obtained is exactly that associated with the Boolean model (ξ, λ, r) with fixed radius r > 0. For any function f : Rd → [0, 1], and a P.p.p. ξ with density λ > 0, then let ξf denote the thinned process obtained by removing each point x ∈ ξ independently with probability 1 − f (x). Then ξf is an inhomogeneous P.p.p. with distribution Poi(λf ). For a random connection model (ξ, λ, g) we see that the degree of the component containing the origin has the same distribution as a g-thinned P.p.p. P[#W = k] = From this we see that if R Rd e−λ R Rd g(x)dx λ k! R Rd g(x)dx k . (1.3.8) g(x)dx = ∞, then P[#W = k] = 0 for all k ∈ N, so #W = ∞ 11 Figure 1.4: Random connection models conditioned to stay in a box. The left hand image shows a configuration with exponential connection function. The right hand image has a linear connection function. almost surely, and we do not observe a phase-transition in λ. It follows that we consider only those connection functions which satisfy Z g(x)dx < ∞. 0< (1.3.9) Rd As with the Boolean model we can define critical densities at which the model exhibits phase transition. Theorem 1.3.13. For a random-connection model (ξ, λ, g) with d ≥ 2, and satisfying (1.3.9), there exist λH = λH (g) and λT = λT (g) such that 0 < λT < λH < ∞ and 1. E[#W ] < ∞ for λ < λT , E[#W ] = ∞ for λ > λT . 2. P[#W = ∞] = 0 for λ < λH , P[#W = ∞] > 0 for λ > λH . For the Boolean model, Theorem 1.3.8 stated that the different critical densities were equivalent under the condition that the radius variable ρ was bounded; in the case of the random connection model we can drop any restrictions. Theorem 1.3.14. For a random connection model (ξ, λ, g), with d ≥ 2, then λT (g) = λH (g). (1.3.10) Proofs of the existence of the critical intensities, and of their equality, can be found in [MR96]. Unlike in lattice and Voronoi percolation, where our variable parameter is the probability p ∈ [0, 1], in the continuum models our parameter λ ∈ (0, ∞) varies over the whole positive real line, so we can study the limiting regime as λ → ∞. 
For a random connection model, we 12 would expect that as the density of the process grows, then the probability that the origin is contained in an infinite component should tend to 1. Perhaps less immediate is that as the density tends to infinity, then the probability that any finite component is an isolated point, converges to 1. These two results are confirmed by the following Proposition 1.3.15. For a connection function g : Rd → [0, ∞) with bounded support then the random-connection model (ξ, λ, g) satisfies P[#W = 1] = 1. λ→∞ P[#W < ∞] lim (1.3.11) Theorem 1.3.16. For a connection function g satisfying (1.3.9) then − log P[#W < ∞] = 1. Rd λ→∞ λ R g(x)dx lim (1.3.12) See [MR96]. Poisson Graphs and Matching The final model that we consider in this introduction is a construction of a spatial random graph with Poisson points as vertices, where the vertex degrees are i.i.d. according to some predetermined law µ ∈ M1 (N), the set of all probability measures over the natural numbers. Further we can consider matching problems, for which the combinatorial (non-stochastic) case was studied by Gale and Shapley. This exposition closely follows [DHH10] and [DHP10]. Throughout we assume that ξ ⊆ Rd is a P.p.p. with density 1. Fixing a law µ ∈ M1 (N), we assign to each x ∈ ξ, a stub variable Dx i.i.d. with law µ. (ξ, µ, Dx ) defines a marked point process, where the marks are the number of stubs at the given vertex. The aim is then to ‘join’ these stubs to create edges, so that the resulting graph has i.i.d. edge distribution with law µ; a graph satisfying this is a matching. A more rigorous definition is given in [DHH10]. Definition 1.3.17. A matching for the marked process (ξ, µ, Dx ) is a point process M on the space of unordered pairs in Rd , such that its distribution satisfies 1. For all {x, y} ∈ M, x, y ∈ ξ almost surely. 2. For all x ∈ ξ, # {y : {x, y} ∈ M} = Dx . The process M is said to be a partial matching if instead of (2), it satisfies: 13 3. For all x ∈ ξ, # {y : {x, y} ∈ M} ≤ Dx . A matching is said to be a factor if it is determined by (ξ, Dx ) and has no additional stochasticity, and is translation-invariant if its distribution is invariant or stationary under translations. As before we condition ξ to contain the origin. Our object of study becomes the graph G = G(ξ, M), with vertex set ξ and edge set M. Additionally we condition on G being a simple graph, containing neither loops or multiple edges. Let W denote the connected component containing the origin. For the matching problem the notion of percolation is the same as before, that percolation occurs if the graph contains an infinite component. With the current set up, the transition exhibited in the model is trivial, and independent of the dimension. Theorem 1.3.18. Let (ξ, µ, Dx ) be as above. For any d ≥ 1 1. There exists a stationary factor matching scheme with P[#W < ∞] = 1. 2. If P[D0 ≥ 2] > 0, there is a stationary factor matching scheme such that G almost surely contains a unique infinite component and P[#W = ∞|D0 ≥ 2] = 1. See [DHH10]. Of interest in graph theory and combinatorial optimization is the notion of stable matching, in which we assign to each vertex a set of preferences, and arrange the graph so as that each vertex neighbours members of its preference set. In our setting we can define such a problem, where the notion of preference is provided by Euclidean distance. Definition 1.3.19. 
A stable multi-matching is a matching scheme M such that almost surely for x, y ∈ ξ, x 6= y, then either {x, y} ∈ M else max(d(x), d(y)) ≤ |x − y|, where d(x) = max {|x − z| : {x, z} ∈ M}. Percolation for stable multi-matching in Rd is more interesting than percolation for the matching problem, as we find that there is a value k(d) = k such that if each vertex has degree at least k almost surely, then percolation occurs, so k defines a critical value for the model. Theorem 1.3.20. Let M be a stable multi-matching for ξ ⊆ Rd , d ≥ 1. Then, 1. If P[D0 ≤ 2] = 1, and P[D = 1] > 0 then P[#W = ∞] = 0. 2. There exists k = k(d) such that if P[D0 ≥ k] = 1, then P[#W = ∞] > 0. See [DHH10]. 14 Chapter 2 Formulation of the Model and Justification In the remainder of this dissertation we aim to formulate a new percolation model for random spatial graphs, under the restriction that the components are directed cycles. The model is of significance for its contribution to the theory of random permutations, and also for its implications in statistical mechanics. We start by giving the formal definitions of our model, before giving an overview on the theory of random partitions and permutations, which will form a mathematical background. We then discuss the relation of the model to the physical world, and in particular justify why it is a suitable model for Bose-Einstein condensation. 2.1 Formulation of the Model In this section we will define the configuration spaces associated with our model for random permutations, and make some remarks concerning variations, and properties. Formally our configuration space is given by X X Ω , η ∈ {0, 1}ξ×ξ : ξ ⊂ Rd locally finite, and ηxy = ηyx = 1, ∀x ∈ ξ . y∈ξ (2.1.1) y∈ξ Recall that a permutation of a set A is a bijection σ : A → A; we denote S(A) for the set of all permutations of A. A configuration η ∈ Ω induces a permutation on the set of points ξ, by the map η 7→ σ η ∈ S(ξ), where for x, y ∈ ξ: σ η (x) = y, if ηxy = 1. Alternatively we can see → − a configuration as a directed graph G = (ξ, E) where the directed edge hx, yi ∈ E if ηxy = 1. 15 Note that Ω contains all permutations over all possible choices of locally finite sets of points ξ ⊆ Rd . Suppose that η (n) ∈ Ω is a sequence of configurations converging to a configuration η, then we note that Fatou’s lemma can only ensure for us that X ηxy ≤ lim y∈ξ n→∞ X (n) ηxy = 1. y∈ξ In fact it is possible to devise sequences η (n) → η for which η ∈ / Ω (consider the configurations η (n) on two points (0, 0) and (0, n), with a cycle joining them, in the limit the vertex at the origin has neither incoming or outgoing edges). So we have Proposition 2.1.1. Ω is not closed. In particular there exist sequences η (n) ∈ Ω such that limn→∞ η (n) ∈ / Ω. An approach to deriving probability measures on Ω is by localisation: we define a suitable measure on the subspace Ωξ(n) , where Ωξ(n) X X ξ×ξ d , η ∈ {0, 1} : ξ ⊂ R , #ξ = n, ηxy = ηyx = 1, ∀x ∈ ξ , y∈ξ (2.1.2) y∈ξ and look to extend such a measure to a limiting measure on Ω. For this reason closure is important, to ensure that we can achieve limit points. We see from Proposition 2.1.1 that we will have to work on a larger space, and then condition measures to concentrate on Ω. Let X X Ω , η ∈ {0, 1}ξ×ξ : ξ ⊂ Rd locally finite, and ηxy ≤ 1, ηyx ≤ 1, ∀x ∈ ξ . y∈ξ (2.1.3) y∈ξ In this dissertation we do not look at the construction of measures on Ω, however we take a moment to define percolation for this model, and to make some observations. 
Throughout we assume that we have a continuous family of measures (Pγ )γ∈Γ ⊆ M1 (Ω|Ω), where M1 (A|B) denotes the set of measures on A with support B ⊆ A, and γ is some varying parameter. If our distribution Pγ is chosen so that the marginal distribution of the underlying point process is Poisson, i.e. the map η 7→ ξη , the point set of η, is a P.p.p., then as we saw in our summary of continuum models, we can condition the process to contain the origin, and define C0 to be the event that the origin is contained in an infinite cycle. One way to define this is C0 , η ∈ Ω : ∃(xi )∞ i=1 ⊆ ξ, xi 6= xj , ∀i 6= j, ηxi ,xi+1 = 1 . 16 (2.1.4) Note that this assumption that the marginal is Poisson is not necessarily a desirable feature for the model, since we wish for there to be strong dependence between the points and the edge set. So we also define the event C ∞ C ∞ (η) , [ Cx , (2.1.5) x∈η where Cx is defined analogously to C0 . In the non-Poissonian setting we can still talk about C ∞ , and drop the conditioning on the origin. However, remaining in the Poisson case, as for lattice percolation, we can define a function θ(γ) , Pγ [C0 ]. (2.1.6) It will be desirable to have that for our family of measures, we have a monotonicity property in γ: Pγ1 [C0 ] ≤ Pγ2 [C0 ] for γ1 ≤ γ2 . This allows us to define a critical value γc , sup {γ : θ(γ) = 0} . (2.1.7) γ∈Γ Note that checking technicalities for these definitions is not simple, and is work for the future, here we are simply outlining a course of action. Again drawing comparison to the lattice model we can define ψ(γ) , Pγ [Cx (η), for some x ∈ ξ] (2.1.8) Intuitively we expect C ∞ to be a tail event, which is to say that it does not depend on the properties of the configuration on any bounded box, and with this in mind we expect to be able to use a result akin to Kolmogorov’s zero-one law, to show that for γ > γc then Pγ [C ∞ ] = 1, as in Proposition 1.2.1. Depending on the measures Pγ , care will be needed here since we cannot guarantee the independence which is needed for Kolmogorov’s theorem. We can also consider the multiplicity of infinite components. Let M (η) give the number of infinite cycles in the configuration η M (η) , # x (k) = (k) (xi )∞ i=1 : x (k) (l) 6= x , ∀k 6= l, (k) xi 6= (k) xj , ∀i 6= j, ηx(k) ,x(k) i =1 . i+1 Again, it is possible to ask for the distribution of M under the measure Pγ ; at the moment there is no consensus as to whether for γ > γc we will find M = 1 almost surely, or whether we could even have non-zero probability of there being countably many infinite components. 17 For the purpose of this dissertation we will not tackle these questions, and we work on a smaller configuration space on which we can more readily define measures. In particular we restrict our attention to the model on a box Λ (necessarily implying that the point clouds are finite), where we do not have any interaction with the outside of Λ, which is to say we have free boundary conditions; our analysis proceeds by considering properties of this model in the limit Λ → Rd . 2.2 The Canonical Ensemble Let Λ ⊂ Rd be a bounded region, we can define an analogous space to (2.1.1), but for configurations restricted to Λ X X ξ×ξ ΩΛ , η ∈ {0, 1} : ξ ⊂ Λ locally finite, and ηxy = ηyx = 1, ∀x ∈ ξ . y∈ξ (2.2.1) y∈ξ The simplest form of the model is the annealed case, in which we use a predetermined point set ξ ⊂ Rd (chosen to be locally finite) ΩΛ,ξ X X , η ∈ {0, 1}ξ×ξ : ηxy = ηyx = 1, ∀x ∈ ξ . 
y∈ξ (2.2.2) y∈ξ In the annealed situation, constructing a suitable σ-algebra is achieved by taking cylinder sets. For points xi , yi ∈ ξ and ai ∈ {0, 1}, 1 ≤ i ≤ k, cylinder sets of length k ≥ 1 are defined by h i (x1 , y1 ) = a1 , . . . , (xk , yk ) = ak , {η ∈ ΩΛ,ξ : ηx1 y1 = a1 , . . . , ηxk yk = ak } . (2.2.3) We define the σ-algebra, Fξ , on ΩΛ,ξ as the σ-algebra generated by all cylinder sets of finite length Fξ = σ h i (x1 , y1 ) = a1 , . . . , (xk , yk ) = ak : xi , yi ∈ ξ, ai ∈ {0, 1} , 1 ≤ i ≤ k, k ∈ N . (2.2.4) For the purposes of this dissertation we will work under a less stringent condition in which we fix the number of points N in ξ, but do not choose any specific point set; this is known as the 18 canonical ensemble ΩΛ,N X X , η ∈ {0, 1}ξ×ξ : ξ ⊂ Λ, #ξ = N, and ηxy = ηyx = 1, ∀x ∈ ξ y∈ξ (2.2.5) y∈ξ We are interested in studying classes of distributions on ΩΛ,N , and in particular analysing their limiting behaviour as N → ∞. So as to observe spatial effects we scale Λ with N , and take a sequence of boxes (ΛN )∞ N =1 such that N/|ΛN | → ρ ∈ (0, ∞). With this in mind we simplify notation and write ΩΛN = ΩΛN ,N . Defining a suitable σ-algebra for ΩΛN is harder than for the annealed case, since we must take into account the fact that our point set ξ is varying. Let Ω0 denote the set of all locally finite point sets n o Ω0 , ξ ⊂ Rd : ξ locally finite . (2.2.6) and let N (Λ) : Ω0 → N be the number of points in the box Λ. Then we construct a σ-algebra for Ω0 by F 0 = σ(N (Λ) : λ ⊂ Rd ). We can now consider a σ-algebra F on the canonical ensemble, where F= [ [ A∈F 0 Fξ (2.2.7) ξ∈A A standard approach to defining a measure over models in statistical mechanics is by way of a Gibbs distribution. In the following, we describe the construction of a partition function, and how this defines a distribution. We define an energy function or Hamiltonian, H : ΩΛN → R; configurations which have large values of H are considered to be high-energy configurations, and should be hard to achieve (i.e. low probability), whilst configurations with smaller values should be preferable (high probability). We introduce a parameter β > 0, the inverse temperature. At high temperatures, there is more thermal energy in the system, which means that the effect of the Hamiltonian is lessened, and high energy configurations are penalised less; at values β 1 we want all configurations to become uniformly likely. For the purposes of our work we will absorb the temperature term into our definition of the Hamiltonian, so given a configuration η ∈ ΩΛN , the Hamiltonian is of the form H(β, η, ξ) where ξ = ξη , the point set for the configuration η.To achieve a distribution we define a partition function, which is a 19 weighted average over the configurations Z e−H(β,η,ξη ) dη ZΛN (β) = (2.2.8) ΩΛN The integral above is somewhat complicated as it depends both on the point set and configuration. So as to find alternative expressions for ZΛN we will work under the strong assumption that the distribution of the points is Poisson, and independent of the configuration; in future work, we will want to be able to move away from this assumption, which currently limits the spatial correlation of the model. Definition 2.2.1. The Gibbs distribution on ΩΛN with Hamiltonian H, and inverse temperature β > 0, is described by the distribution e−HΛN dη. ZΛN (β) γΛβ N (dη) = (2.2.9) In particular, Z P[η ∈ A] = A e−H(β,η̃;ξη ) dη̃, ZΛN (β, ρ) A ∈ F. 
(2.2.10) The partition function contains all the information about the distribution, making it an important object of study. This leads us to look at the limiting behaviour of Z, which we do through the specific free energy f (β, ρ) = lim − N →∞ 1 log ZΛN (β). β|ΛN | (2.2.11) Consider as a special case the Hamiltonian H(β, η, ξη ) = 1 X d (x − y)2 ηxy − log(2πβ), 2β 2 (2.2.12) x,y∈ξη which clearly has a strong spacial correlation, where configurations are high energy if they contain edges joining distant points. The integral representation of the partition function given in (2.2.8) can be seen as a ‘two-step’ integration: we first integrate over all configurations on the annealed space Ωξ , and then integrate over all possible point sets; formally we get the representation ZΛN (β) = E X e−H(β,η,ξ) χ{#(ξ∩Λ)=N } , η∈Ωξ 20 (2.2.13) where the expectation E is with respect to a P.p.p. with density λ = 1. Letting x = {xi }N i=1 , then we can realise the above representation as Z Z X ··· ZΛN (β) = ΛN Z Z 1 ··· = e−H(β,η,x) dx1 · · · dxN (2.2.14) ΛN η∈Ω x ΛN ΛN X (2πβ)N d 2 e 1 (|x1 −xσ(1) |2 +···+|xN −xσ(N ) |2 ) − 2β dx1 · · · dxN , (2.2.15) σ∈S[N ] which is expressed as integrals over a sum of weighted permutations. Recall that the Gaussian kernel is given by d pt (x, y) = (2πt)− 2 e− (x−y)2 2t , x, y ∈ Rd . (2.2.16) In particular this is the transition kernel for d-dimensional Brownian motion. comparing this with the expression above we see that we will be able to use properties of the kernel to simplify the partition function. First, however, given that permutations induce partitions on [n], we can rewrite (2.2.8) in terms of integer partitions. We will introduce the theory of partitions in the following section, but for now we observe ZΛN (β) = n X Y λ∈PN k=1 1 k λk λk ! Z λk Z ··· ΛN pβ (x1 , x2 ) · · · pβ (xk−1 , xk )pβ (xk , x1 )dx1 · · · dxk ΛN (2.2.17) We can use properties of this kernel to simplify our expression. In particular we will use Proposition 2.2.2. For x, z ∈ Rd , 0 ≤ ti−1 < ti < ti+1 < ∞. Then Z Rd pti −ti−1 (x, y)pti+1 −ti (y, z)dy = pti+1 −ti−1 (x, z). (2.2.18) Proof. Without loss of generality, let x = 0 by translation invariance of Brownian motion. Z Rd pti −ti−1 (0, y)pti+1 −ti (y, z)dy − 21 (t = e z2 i+1 −ti ) Z d (4π 2 (ti − ti−1 )(ti+1 − ti )) 2 − 12 = e z2 (ti+1 −ti ) (4π 2 (ti − ti−1 )(ti+1 − ti )) e (y 2 −2yz) y2 − 12 (t (ti −ti−1 ) i+1 −ti ) − 12 (ti+1 −ti−1 ) y2 (ti −ti−1 )(ti+1 −ti ) dy Rd Z d 2 − 12 e 1 yz e (ti+1 −ti ) dy Rd 2 − 21 (t z −t ) i+1 i d (ti −ti−1 ) (ti − ti−1 )(ti+1 − ti ) 2 12 (ti+1 −t z2 i−1 )(ti+1 −ti ) = 2π e d (ti+1 − ti−1 ) (4π 2 (ti − ti−1 )(ti+1 − ti )) 2 e 21 1 = − 21 d e z2 (ti+1 −ti−1 ) (2π(ti+1 − ti−1 )) 2 = pti+1 −ti−1 (0, z) , where we used the property of the Gaussian integral that for a, b ∈ Rd Z e − 12 ay 2 +by dy = Rd 2π a d 2 b2 e 2a . Considering an integral over a block of length k, and under the heuristic assumption that the above proposition holds when we replace Rd with a box ΛN Z Z ··· ΛN ΛN Z = ZΛN = pβ (x1 , x2 ) · · · pβ (xk−1 , xk )pβ (xk , x1 )dx1 · · · dxk Z Z pβ (x1 , x2 )pβ (x2 , x3 )dx2 pβ (x3 , x4 ) · · · pβ (xk , x1 )dx3 · · · dxk dx1 ··· ΛN ZΛN ··· p2β (x1 , x3 )pβ (x3 , x4 ) · · · pβ (xk , x1 )dx3 · · · dx1 ΛN ΛN .. . Z Z = p(k−1)β (x1 , xk )pβ (xk , x1 )dxk dx1 ΛN ΛN Z = pkβ (x1 , x1 ) ΛN = |ΛN | d . (2πkβ) 2 From which, for the Hamiltonian given in (2.2.12) ZΛN (β) = N X Y |ΛN |λk λ∈PN k=1 λk !k λk (4πβ)λk 2 d . 
(2.2.19) At the end of this chapter we will justify this choice of Hamiltonian when we see that the partition function arrived at above coincides with the partition function of Adams et al. which describes Bose-Einstein condensation, [ACK11]. 22 2.3 Random Partitions and Permutations We recall some details from the theory of partitions. Let [n] = {1, 2, . . . , n}, throughout we will describe partitions for [n], though often we will be dealing with more general n-sets, and all the theory carries over. A partition of [n] is a collection {A1 , . . . , Ak } of subsets of [n], where for all 1 ≤ i, j ≤ k we have Ai ⊆ [n], Ai 6= ∅, and if i 6= j then Ai ∩ Aj = ∅. The set of k is the collection of partitions into all partitions of [n] is denoted by P[n] , and the subset P[n] exactly k subsets. Each partition of [n] defines an integer partition λ = (λ1 , . . . , λn ), where P λj = # {i : |Ai | = j} is the number of blocks of size j. Note j jλj = n. There is a surjection between partitions of [n] and integer partitions of n, but for n ≥ 3 this is not a bijection (for example the partitions {{1, 2} , {3}} and {{1, 3} , {2}} both induce the same integer partition of 3). Let Pn denote the set of all integer partitions of n, and as before Pnk ⊆ Pn are those partitions with exactly k summands. In [Pi05] Pitman gives a detailed introduction to the theory of composite structures. Examples of structures on the set [n] include graphs with vertices labeled from [n], the set [n] × [n], or [n][n] . Of most importance to us will be permutations of [n]. We say that a V -structure on the set [n] is an element of V ([n]). A composite structure is an element of (V ◦ W )([n]), which is obtained by first choosing a partition {A1 , . . . , Ak } ∈ P[n] , then forming a V -structure in V ({Ai }ki=1 ) (the macroscopic structure), and for each 1 ≤ i ≤ k choosing a structure from W (Ai ) (the microscopic structure). Proposition 2.3.1. Let vn = #V ([n]), wn = #W ([n]) then the number of V -W composite structures is given by Bn (v, w) , |(V ◦ W )[n]| = n X vk X k {Ai }∈P[n] k=1 k Y w|Ai | . (2.3.1) i=1 By the trivial structure, we mean the unique structure in which elements do not interact, V [n] = [n]. In the special case where both V and W are trivial then, vn = wn = 1 for all n, and (V ◦ W )([n]) = P[n] . The number Bn (1, 1) is known as the n-th Bell number. For two structures V, W , we choose a partition uniformly at random Πn ∈ (V ◦ W )[n], which induces a distribution over the partitions in P[n] P[Πn = {A1 , . . . , Ak }] = 23 vk Qk i=1 w|Ai | Bn (v, w) . (2.3.2) Here Bn (v, w) plays the role of the partition function. This distribution on the set P[n] is called the Gibbs(v, w) distribution for partitions in P[n] . Similarly we can study the distribution induced by V -W structures on the set of integer partitions Pn . Let λ(Πn ) denote the integer P partition obtained from the partition Πn , with a total of k blocks j λj = k, then n n!vk Y wj λj 1 . P[λ(Πn ) = (λ1 , . . . , λn )] = Bn (v, w) j! λj ! (2.3.3) j=1 This is the Gibbs(v, w) distribution for integer partitions in Pn . In the special case where V is the trivial structure, and W corresponds to directed cycles, so wn = (n − 1)!, then we note that Bn (v, w) = n! since this is exactly the number of permutations of [n]. And the resulting Gibbs distribution has P[λ(Πn ) = (λ1 , . . . , λn )] = n Y 1 , λj λ ! j j j=1 (2.3.4) which corresponds to our proposed model for spatial permutations in the case that there is no spatial correlation. 
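The distribution (2.3.4) is Cauchy's formula for the cycle type of a uniformly chosen permutation, and it is easy to check empirically. The following Python sketch (our own illustration; the helper names are not from the dissertation) samples uniform permutations of [n], records their cycle types, and compares the empirical frequencies with the product ∏_j 1/(j^{λ_j} λ_j!).

```python
import random
from collections import Counter
from math import factorial, prod

def cycle_type(sigma):
    """Cycle type (lambda_1, ..., lambda_n) of a permutation sigma of
    {0, ..., n-1}, where lambda_j counts the cycles of length j."""
    n, seen, lengths = len(sigma), [False] * len(sigma), []
    for start in range(n):
        if not seen[start]:
            length, i = 0, start
            while not seen[i]:
                seen[i] = True
                i = sigma[i]
                length += 1
            lengths.append(length)
    counts = Counter(lengths)
    return tuple(counts.get(j, 0) for j in range(1, n + 1))

def gibbs_cycle_weight(lam):
    """P[lambda(Pi_n) = lam] = prod_j 1 / (j^{lambda_j} * lambda_j!), cf. (2.3.4)."""
    return prod(1.0 / (j ** lj * factorial(lj)) for j, lj in enumerate(lam, start=1))

# empirical check for small n: sample uniform permutations and compare
n, samples, rng = 5, 200_000, random.Random(0)
empirical = Counter()
for _ in range(samples):
    sigma = list(range(n))
    rng.shuffle(sigma)
    empirical[cycle_type(sigma)] += 1
for lam, count in sorted(empirical.items()):
    print(lam, count / samples, gibbs_cycle_weight(lam))
```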
In [Fi91], Fichtner considers a model for random spatial permutations where edges are distributed independently. Let X denote a countable point set, and for each x ∈ X define independent random variables taking values in X, ψx : Ω → X, where Ω is a generic sample space. The collection ψ = (ψx )x∈X determines a random mapping ψ(ω) : X → X for ω ∈ Ω. Under the restriction that P[ψx = x] > 0, define the matrix α = (αx,y )x,y∈X which fully determines the law of ψ αx,y = P[ψx = y] . P[ψx = x] We wish to study the distribution for ψ, conditioned on the sampled maps being a bijection (i.e. a permutation). Defining such a conditional probability measure is non-trivial, and is achieved by Fichtner by localisation, we describe his method below. Let X2 ⊆ X1 ⊆ X, for a permutation σ : X1 → X1 denote I(x; σ) for the cycle containing x (i.e. the smallest σ-invariant set containing x). Then define DX1 (X2 ) , {σ ∈ SX1 : I(x; σ) = X2 ∀ x ∈ X2 } , (2.3.5) the set of permutations of X1 which contain X2 as a cycle. Then for a finite subset X1 ⊂ X 24 we define a probability measure of mappings on g : X1 → X1 PX1 (g) , Q 1 Z x∈X1 αx,g(x) 0 where Z = P h∈SX1 Q x∈X1 for g ∈ SX1 , (2.3.6) else. αx,h(x) is the partition function. Proposition 2.3.2. Let α = (αx,y )x,y∈X be as above, and suppose in addition that for all y∈X X X A⊂Z, finite. Y αx,g(x) < ∞. (2.3.7) g∈DA (A) x∈A y∈A Given an increasing sequence of subsets (Xn )∞ n=1 , with Xn ⊆ Xn+1 ⊆ X for all n ∈ N, and limn→∞ Xn = X, then for ε > 0 and y ∈ X there is an n(ε, y) such that PXn (I(y, ·) ⊆ Xn(ε,y) ) ≥ 1 − ε ∀ n > n(ε, y). (2.3.8) See [Fi91]. Using this proposition, Fichtner shows that there is a subsequence of sets Xnk for which the measures PXnk converge to a measure of permutations on an infinite set, in particular this is a candidate for the probability measure of ψ = ψ(α) conditioned to form permutations. ∞ Theorem 2.3.3. With α and (Xn )∞ n=1 as above, then there exists a subsequence (Xnk )k=1 and a probability measure P ∈ M1 (X X ) on the space of mappings g : X → X such that P = lim PXnk , k→∞ (2.3.9) where convergence is in marginal distributions. Further 1. P(SX ) = 1. 2. I(x, ·) is P-almost surely finite, for all x ∈ X. See [Fi91]. It is of course essential that we can confirm (2.3.7) for our choice of ψ; as an example, however, Fichtner is able to derive a condition under which nearest neighbour lattice permutations satisfy this requirement. The theory of random partitions is itself an interesting area, with broad implications. Any partition λ ∈ PN defines a Young Diagram, which can be seen as a step function ϕλ : R → R 25 where X ϕλ (t) = λk . (2.3.10) k≥t So ϕλ is a step function, and has R R ϕλ (t)dt = n. The Young diagram normed by a > 0 is ϕa,λ (t) = a ϕλ (at), n which clearly has integral a. In [Ve96], Vershik considers sequences of laws µn ∈ M1 (Pn ); we Figure 2.1: A Youngs diagram and graph for the partition λ = (1, 2, 3, 1, 2, 1, 0, 1) ∈ P30 . define a space of measures on the positive real line, D = {(α0 , α∞ , p)}, where α0 , α∞ ∈ [0, ∞), R∞ and p : [0, ∞) → [0, ∞) is monotone decreasing, and α0 + α∞ + 0 p(t)dt = 1. So D describes densities on the extended positive real line, where α0 , α∞ are included to allow for measures with atoms at the origin or ∞. τ a Let τa : Pn → D be the map λ 7→ ϕa,λ . 
Vershik is interested in the behavior of µn (τa−1 (·)) on D; in particular for a sequence of measures µn whether there exists a sequence of scalings an for which µn τa−1 converges in distribution (weakly) to a measure µ0 ∈ M1 (D), such that n µ0 is non-trivial, by which we mean that its support is not contained in {α0 , α∞ }; if this convergence occurs we say that we have a limit shape for the laws µn . A measure on Pn is said to be multiplicative if it is of the form n Y µ(λ) = F (λ) sk (λk ), (2.3.11) k=1 where (sk )nk=1 , F are functions of the partition. For instance for the uniform distribution of permutations F ≡ 1, and sk = (k λk λk !)−1 , which equates to (2.3.4). If we choose F ≡ sk ≡ 1 26 for all k, then the measure obtained is uniform over partitions. More generally, any Gibbs distribution on Pn is multiplicative. For certain classes of multiplicative measures {µn }∞ n=1 , Vershik is able to construct explicit sequences an such that we have the desired convergence to a limit shape. Let p(n) be the Euler partition function p(n) = #Pn . (2.3.12) Theorem 2.3.4. Let µn be the uniform distribution over partitions in Pn , µn ≡ p(n)−1 . Let √ an = n, so that 1 X ϕan ,λ (t) = √ λk . n √ k≥t n Then there exists C : [0, ∞) → [0, ∞) such that for > 0 and 0 < a, b < ∞ there exists an n0 such that µn λ : sup |ϕan ,λ (t) − C(t)| < ε >1−ε ∀ n > n0 . (2.3.13) t≥0 And explicitly, √ C(t) = − √ 6 log(1 − eπt/ 6 ). π See [Ve96]. In fact Vershik’s result holds for a larger class of weights called the generalized one-dimension Bose statistics, of which the uniform distribution is a special case. 2.4 Bose-Einstein Condensation Bose-Einstein condensation (BEC) is a natural phenomenon which occurs when a gas of bosons is cooled to a sufficiently low temperature. The area was first developed by Bose and Einstein in the 1920s, after Einstein conjectured that below a certain critical temperature a large proportion of the particles in such a gas occupy the lowest energy state possible. The theory was made more significant after connections were made between between BEC and superfluidity. In 1995 breakthroughs were made after physicists succeeded in cooling gases to low enough temperatures to observe BEC; on the back of this work Cornell, Wieman, and Ketterle were awarded the Nobel Prize in Physics, in 2001. Whilst this phase transition has been observed in the physical world, to date there has been no success in demonstrating the same phase transition in the mathematical model. A common mathematical approach to modeling BEC is by studying random partitions and permutations; 27 in particular Feynman conjectured that BEC manifests itself as an infinite cycle of points, or equivalently the loss of probability mass on finite cycles [Fe53a], [Fe53b]. In [Sü02] and [Sü02], Sütő demonstrated this phenomenon in the case of the ideal Bose gas, in which particles are assumed to be independent. Consider an N -particle (point) system described by the Hamiltionian operator HN = N X −∆i + i=1 X v(|xi − xj |), (2.4.1) 1≤i<j≤N where x1 , . . . , xN ∈ Rd , ∆i denotes the i-th Laplace operator (for which we must define suitable boundary conditions). The map v : [0, ∞) → R describes the pair potential, which is the interaction between two particles. Throughout we will assume that the particles do not interact, v ≡ 0, and study the model for a box Λ ⊂ Rd , with free boundary conditions and Hamiltonian operator HN,Λ = − N X ∆i . 
(2.4.2) i=1 The partition function can be expressed in terms of the symmetric trace as ZN,Λ (β) = TrL2+ (RdN ) e−βHN,Λ , β > 0. (2.4.3) This corresponds to the ideal Bose gas, and through using Fourier transforms, Bose and Einstein were able to find a description for this partition function in the grand canonical ensemble. In [Sc01], Schakel takes as his starting point the expression Z log(Z) = −|Λ| dd k log(1 − e−βE(k)/kB ) (2π)d (2.4.4) where E(k) is the single-particle spectrum, and kB is Boltzmann’s constant, and we denote the partition function by Z to avoid confusion with the transition function in the canonical ensemble, (2.4.3). Schakel compares the above with an expression for a partition function of a general lattice site percolation model, log(Z perc ) ∝ X ls (2.4.5) s where ls = ls (p) is the density of clusters of size s for a lattice percolation model with openprobability p. He comments that the open clusters in a percolation model correspond to cycles in the Bose gas, in particular remarking that the infinite cluster in a percolation model for 28 p > pc corresponds to the Bose-Einstein condensate for β > βc . In [ACK11], the authors take a new approach to calculating (2.4.3), away from the method of Fourier transforms used above. Through the use of the Feynman-Kac formula, Adams et al. demonstrate that the partition function can be rewritten as a sum over the symmetric group of a system of interacting Brownian bridges. For two points x, y ∈ Rd let µβx,y denote the Brownian bridge from x to y, that is Brownian motion starting at x and reaching y at time β. Theorem 2.4.1. For λ ⊂ Rd and β > 0, the partition function (2.4.3) can be expressed as ZN,Λ (β) = Z Z N O 1 X µβxi ,σ(xi ) . dx1 · · · dxN N! Λ Λ (2.4.6) i=1 σ∈SN See [ACK11]. Note that the boundary conditions defined for ∆i carry over to this form for the partition function, and impose further conditions on the Brownian bridges. Using the Markov property for Brownian paths it is possible to concatenate the Brownian bridges along each cycle in the permutation, so that rather than taking each bridge µβx,σ(x) individually, we consider the bridge µkβ x,x starting and ending at the same point over the longer time period kβ, where x is in a cycle of length k. In the case of free boundary conditions, and given the stationarity of µβx,x , this enables us to write the partition function as a weighted sum over PN , [ACK11] ZN,Λ (β) = N X Y λ∈PN k=1 = X N Z 1 O dxµkβ x,x λk !k λk Λ N Y (2.4.7) k=1 |Λ|λk d λk 2 λk λ∈PN k=1 λk !k (4πβk) . (2.4.8) In this final form we see that the Bose-statistics are an example of a multiplicative weight on PN . Whats more, this agrees with the partition function we found for the Hamiltonian (2.2.12). Our work in the next chapter starts with a detailed calculation of the specific free energy for this partition function. 29 Chapter 3 Analysis of the Canonical Ensemble In this section we look in detail at two different choices for partition functions in the canonical ensemble on a box, with no external interactions. We provide a detailed analysis of the specific free energy under Bose statistics, and compare this to that obtained under uniform weightings, and Ewen’s distributions. 3.1 Bose Statistics Weight Let ΛN be a sequence of bounded subsets of Rd with finite volume such that N/|ΛN | → ρ ∈ (0, ∞), where | · | denotes Lebesgue d-measure. 
We consider the Bose statistics for our model on the restricted state space ΩΛN , which is described by the partition function (bose) ZΛN ,β , N X Y λ∈PN |ΛN |λk , λ !k λk (4πβk)λk d/2 k=1 k (3.1.1) where β > 0 denotes inverse temperature and PN is the collection of all integer partitions of N . Recall that an integer partition λ ∈ PN is a vector λ = (λ1 , . . . , λN ) such that λk ∈ N and P k kλk = N . We note that any partition λ ∈ PN defines a measure Qλ ∈ M1 (N), the collection of probability measures on the natural numbers where Qλ (k) , P 1 N l≥k 0 λl if k ≤ N, (3.1.2) else. We refer to Qλ as the empirical shape-measure of λ. Recall that p(N ) denotes the Euler 30 partition number given in (2.3.12). Proposition 3.1.1. Define M1N = Q ∈ M1 (N) : Q(k) ≥ Q(k + 1), N Q(k) ∈ N, ∀ k and Q(k) = 0 k > N , then there is a bijection between PN and M1N , and in particular #M1N = p(N ). Proof. Given λ ∈ PN it follows immediately from the definition that N Qλ (k) ∈ {0, 1, . . . , N }. Since each term is a sub-summation of its predecessor, then Qλ is decreasing and we have the inclusion Qλ : λ ∈ PN ⊆ M1N . For Q ∈ M1 (N) let b Q(k) , Q(k) − Q(k + 1). (3.1.3) Q Q Q b Now given Q ∈ M1N then for k ≤ N set λQ k = N Q(k). Then λ = (λ1 , . . . , λN ) defines a partition in PN since N X kλQ k ∞ X =N k Q(k) − Q(k + 1) k=1 k=1 =N =N ∞ X k=1 ∞ X kQ(k) − N ∞ X (k − 1)Q(k) k=1 Q(k) k=1 = N. Hence we have a bijection. This allows us to recharacterise the partition function as a sum over measures in M1N rather than as a sum over partitions. We are going to need the following sets of measures M= , Q ∈ M1 (N) : Q monotonic decreasing , ( ≤ M , Q ∈ M(N) : Q monotonic decreasing, and: ) X Q(k) ≤ 1 , k the set of all decreasing probability measures on N and, the set of decreasing sub-probability measures on N. Clearly: M1N ⊆ M= ⊆ M≤ . Corollary 3.1.2. Measures Q ∈ M≤ are in bijection with 31 n o b : Q ∈ M≤ . Q b be as in (3.1.3), and suppose for two measures Q, Q0 ∈ M≤ that Q b≡Q b0 . Proof. Let Q 7→ Q Then note Q(1) = ∞ X b Q(k) = k=1 ∞ X b 0 (k) = Q0 (1), Q k=1 b is an injection. For and inductively Q(k) = Q0 (k) for all k ≥ 1. So Q ≡ Q0 , and Q 7→ Q n o b∈ Q b : Q ∈ M≤ the inverse map Q b 7→ Q is given by Q(1) = P∞ Q(k), b Q and inductively k=1 n o b : Q ∈ M≤ then if Q ≡ Q0 then b − 1). Given two measures Q, b Q b0 ∈ Q Q(k) = Q(k − 1) − Q(k immediately if Q ≡ Q0 then for any k ≥ 1 b b 0 (k), Q(k) = Q(k) − Q(k + 1) = Q0 (k) − Q0 (k + 1) = Q b≡Q b 0 . So there is a bijection. and Q Our aim is to find an expression for the specific free energy associated to the partition function f (bose) (β, ρ) , lim − N →∞ 1 (bose) log ZΛN ,β . β|ΛN | (3.1.4) We do this by first acquiring bounds on the partition function, and then use these to bound the free energy. Recalling Stirling’s formula √ 1 n! = e−n nn 2πn 1 + o n . 12 (3.1.5) en 1 1 en −γn √ ≤ e ≤ . nn 2πn n! nn (3.1.6) We obtain the bounds where γn ∈ (0, 1/n). Expressing the partition function as a sum over shape-measures we obtain (bose) ZΛN ,β = N X Y Q∈M1N k=1 1 b (N Q(k))! N kρ(4πβk)d/2 N Q(k) b . For any Q ∈ M1N our upper bound in (3.1.6) gives N Y k=1 1 b (N Q(k))! N kρ(4πβk)d/2 where F (Q) , X k≥1 N Q(k) b ≤ e−N F (Q) , ! b Q(k) b Q(k) log −1 , b ∗ (k) Q 32 (3.1.7) and b ∗ (k) , Q 1 . kρ(4πβk)d/2 (3.1.8) b ∗ (k) > 0 for all k ∈ N, and so F : M1 → R is well defined under the Note here that Q N convention 0 log 0 = 0. Using the lower bound of (3.1.6) we find a similar expression N Y k=1 1 b (N Q(k))! 
Our aim is to find an expression for the specific free energy associated to the partition function,
\[ f^{(\mathrm{bose})}(\beta,\rho) := \lim_{N\to\infty} -\frac{1}{\beta|\Lambda_N|}\log Z^{(\mathrm{bose})}_{\Lambda_N,\beta}. \]  (3.1.4)
We do this by first acquiring bounds on the partition function, and then using these to bound the free energy. Recalling Stirling's formula,
\[ n! = e^{-n} n^{n}\sqrt{2\pi n}\,\Bigl(1 + \frac{1}{12n} + o\bigl(\tfrac{1}{n}\bigr)\Bigr), \]  (3.1.5)
we obtain the bounds
\[ \frac{e^{n}}{n^{n}}\,\frac{1}{\sqrt{2\pi n}}\,e^{-\gamma_n} \;\le\; \frac{1}{n!} \;\le\; \frac{e^{n}}{n^{n}}, \]  (3.1.6)
where $\gamma_n \in (0, 1/n)$. Expressing the partition function as a sum over shape-measures (and writing $|\Lambda_N| = N/\rho$), we obtain
\[ Z^{(\mathrm{bose})}_{\Lambda_N,\beta} = \sum_{Q\in\mathcal{M}^1_N} \prod_{k=1}^{N} \frac{1}{(N\hat{Q}(k))!}\Bigl(\frac{N}{k\rho(4\pi\beta k)^{d/2}}\Bigr)^{N\hat{Q}(k)}. \]
For any $Q \in \mathcal{M}^1_N$ our upper bound in (3.1.6) gives
\[ \prod_{k=1}^{N} \frac{1}{(N\hat{Q}(k))!}\Bigl(\frac{N}{k\rho(4\pi\beta k)^{d/2}}\Bigr)^{N\hat{Q}(k)} \le e^{-NF(Q)}, \]
where
\[ F(Q) := \sum_{k\ge1} \hat{Q}(k)\Bigl(\log\frac{\hat{Q}(k)}{\hat{Q}^*(k)} - 1\Bigr) \]  (3.1.7)
and
\[ \hat{Q}^*(k) := \frac{1}{k\rho(4\pi\beta k)^{d/2}}. \]  (3.1.8)
Note here that $\hat{Q}^*(k) > 0$ for all $k\in\mathbb{N}$, and so $F:\mathcal{M}^1_N\to\mathbb{R}$ is well defined under the convention $0\log0 = 0$. Using the lower bound of (3.1.6) we find a similar expression,
\[ \prod_{k=1}^{N} \frac{1}{(N\hat{Q}(k))!}\Bigl(\frac{N}{k\rho(4\pi\beta k)^{d/2}}\Bigr)^{N\hat{Q}(k)} \ge e^{-NF(Q)} \prod_{k=1}^{N} \frac{e^{-\gamma_{N\hat{Q}(k)}}}{\sqrt{2\pi N\hat{Q}(k)}} \ge e^{-o(N)}\, e^{-NF(Q)}. \]
To justify the second inequality we note that we may take the product over only those $k$ such that $\lambda_k \ge 1$, and that there is an $\alpha\in(1/2,1)$ such that $\#\{k\in\{1,\ldots,N\}: \lambda_k \ge 1\} \le N^\alpha$. So we have the bounds
\[ e^{-o(N)} \sum_{Q\in\mathcal{M}^1_N} e^{-NF(Q)} \;\le\; Z^{(\mathrm{bose})}_{\Lambda_N,\beta} \;\le\; \sum_{Q\in\mathcal{M}^1_N} e^{-NF(Q)}. \]  (3.1.9)
Then by Proposition 3.1.1,
\[ e^{-o(N)}\, e^{-N\inf\{F(Q):\,Q\in\mathcal{M}^1_N\}} \;\le\; Z^{(\mathrm{bose})}_{\Lambda_N,\beta} \;\le\; p(N)\, e^{-N\inf\{F(Q):\,Q\in\mathcal{M}^1_N\}}. \]  (3.1.10)
In [An76] it is shown that asymptotically
\[ p(N) = \frac{1}{4N\sqrt{3}}\, e^{\pi\sqrt{2N/3}}\,(1+o(1)). \]
Using the upper bound in (3.1.10),
\[ f^{(\mathrm{bose})}(\beta,\rho) \le \limsup_{N\to\infty}\Bigl(\frac{\rho}{\beta N}\log p(N) - \frac{\rho}{\beta}\inf\bigl\{F(Q): Q\in\mathcal{M}^1_N\bigr\}\Bigr) \le -\frac{\rho}{\beta}\liminf_{N\to\infty}\inf\bigl\{F(Q): Q\in\mathcal{M}^1_N\bigr\}, \]  (3.1.11)
since $\lim_{N\to\infty} N^{-1}\log p(N) = 0$. Similarly, from the lower bound in (3.1.10),
\[ f^{(\mathrm{bose})}(\beta,\rho) \ge -\frac{\rho}{\beta}\limsup_{N\to\infty}\inf\bigl\{F(Q): Q\in\mathcal{M}^1_N\bigr\}. \]

Theorem 3.1.3. Let $\Lambda_N \subset \mathbb{R}^d$ be a sequence of bounded boxes with $N/|\Lambda_N| \to \rho\in(0,\infty)$. The specific free energy of the partition function $Z^{(\mathrm{bose})}_{\Lambda_N,\beta}$ exists and is given by the variational formula
\[ f^{(\mathrm{bose})}(\beta,\rho) = -\frac{\rho}{\beta}\,\inf\bigl\{F(Q): Q\in\mathcal{M}^{\le}\bigr\}. \]  (3.1.12)

The proof of this theorem is the content of the following, in which we deal in turn with acquiring upper and lower bounds on $\inf\{F(Q): Q\in\mathcal{M}^{\le}\}$.

Proposition 3.1.4.
\[ \liminf_{N\to\infty}\inf\bigl\{F(Q): Q\in\mathcal{M}^1_N\bigr\} \ge \inf\bigl\{F(Q): Q\in\mathcal{M}^{\le}\bigr\}, \]  (3.1.13)
and hence $f^{(\mathrm{bose})}(\beta,\rho) \le -\frac{\rho}{\beta}\inf\{F(Q): Q\in\mathcal{M}^{\le}\}$.

Proof. This is immediate from the inclusion $\mathcal{M}^1_N \subseteq \mathcal{M}^{\le}$.

Using the method of Lagrange multipliers we find necessary conditions for the existence of minimisers of $F$ lying in $\mathcal{M}^1_N$. Recall that we have the constraint
\[ \sum_{k\in\mathbb{N}} Q(k) = \sum_{k\in\mathbb{N}} k\hat{Q}(k) \le 1. \]
So for $\gamma>0$ we have the Lagrange function
\[ L_\gamma(Q) = \sum_{k\in\mathbb{N}} \hat{Q}(k)\Bigl(\log\frac{\hat{Q}(k)}{\hat{Q}^*(k)} - 1 + \gamma k\Bigr). \]
To find the minimiser of this variational expression we use the method of Gâteaux derivatives; we want to solve, for each $k\in\mathbb{N}$,
\[ \frac{\partial L_\gamma}{\partial\hat{Q}(k)} = \log\frac{\hat{Q}(k)}{\hat{Q}^*(k)} + \gamma k = 0. \]
This is achieved by
\[ \hat{Q}_\gamma(k) = e^{-\gamma k}\,\hat{Q}^*(k) = \frac{e^{-\gamma k}}{\rho k(4\pi\beta k)^{d/2}}, \]
from which we can attain the suitable minimising measure $\tilde{Q}_\gamma$. We note in particular that this has total mass
\[ \sum_{k\in\mathbb{N}}\tilde{Q}_\gamma(k) = \sum_{k\in\mathbb{N}} k\hat{Q}_\gamma(k) = \frac{1}{\rho(4\pi\beta)^{d/2}}\sum_{k\in\mathbb{N}}\frac{e^{-\gamma k}}{k^{d/2}} = \frac{1}{\rho(4\pi\beta)^{d/2}}\,\mathrm{Li}\Bigl(\frac{d}{2}, e^{-\gamma}\Bigr), \]
where $\mathrm{Li}$ is the polylogarithm, $\mathrm{Li}(\theta, z) = \sum_k z^k/k^\theta$. The total weight obtained in this form is also referred to as the Bose-Riemann zeta function. We define the value $\rho_c$ by
\[ \rho_c := \frac{1}{(4\pi\beta)^{d/2}}\sum_{k}\frac{1}{k^{d/2}} \]  (3.1.14)
\[ \phantom{\rho_c :} = \frac{1}{(4\pi\beta)^{d/2}}\,\zeta\Bigl(\frac{d}{2}\Bigr), \]  (3.1.15)
where $\zeta$ is the Riemann zeta function, $\zeta(z) = \sum_{k=1}^{\infty} k^{-z}$, $z\in\mathbb{C}$. We refer to $\rho_c$ as the critical density. From results in [Gr25] concerning the Riemann zeta function, we note that for $\rho\le\rho_c$ there exists a unique $\gamma=\gamma(\rho)$ for which
\[ \sum_k \tilde{Q}_\gamma(k) = 1. \]
So in particular when $\rho\le\rho_c$ there is a $\tilde{Q}\in\mathcal{M}^{=}$ such that $F(\tilde{Q}) = \inf\{F(Q): Q\in\mathcal{M}^{\le}\}$. Note that for $d=1,2$ we have $\rho_c = \infty$, and hence we can find a probability measure which minimises the variational problem for any value of $\rho$. When $\rho > \rho_c$ (for $d\ge3$) we do not have a minimiser lying in $\mathcal{M}^{=}$; setting $\gamma=0$ we find that the minimiser in $\mathcal{M}^{\le}$ is given by $Q^*$, the sub-probability measure induced by $\hat{Q}^*$. In [Sü02], Sütő confirms that this phase transition coincides with Bose-Einstein condensation.
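The critical density (3.1.15) and the tilt parameter $\gamma(\rho)$ solving $\sum_k\tilde{Q}_\gamma(k)=1$ are straightforward to evaluate numerically. The sketch below is again only illustrative: the polylogarithm is approximated by a crude truncated series, the bisection tolerance and the parameter values are arbitrary, and the helper names are ours.

```python
from math import pi, exp

def polylog(s, z, terms=100_000):
    """Crude truncated series for Li(s, z) = sum_{k>=1} z^k / k^s; adequate for a rough sketch."""
    return sum(z ** k / k ** s for k in range(1, terms + 1))

def critical_density(beta, d):
    """rho_c = zeta(d/2) / (4*pi*beta)^(d/2), cf. (3.1.15); finite only for d >= 3."""
    return polylog(d / 2, 1.0) / (4 * pi * beta) ** (d / 2)

def gamma_of_rho(rho, beta, d, tol=1e-8):
    """Solve Li(d/2, e^(-gamma)) = rho * (4*pi*beta)^(d/2) for gamma by bisection;
    valid for rho <= rho_c, so that a solution gamma >= 0 exists."""
    target = rho * (4 * pi * beta) ** (d / 2)
    lo, hi = 0.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if polylog(d / 2, exp(-mid)) > target:   # Li(d/2, e^{-gamma}) decreases in gamma
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

beta, d = 1.0, 3
rho_c = critical_density(beta, d)
print("rho_c          =", rho_c)                       # roughly zeta(3/2) / (4*pi)^{3/2}
print("gamma(rho_c/2) =", gamma_of_rho(rho_c / 2, beta, d))
```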
Similarly, we can find minimisers for the variational problem which lie in $\mathcal{M}^1_N$:
\[ \hat{Q}_{\gamma,N}(k) = \begin{cases} e^{-\gamma k}\,\hat{Q}^*(k) & k \le N, \\ 0 & k > N. \end{cases} \]
Assuming $\rho\le\rho_c$, there are unique $\gamma_N = \gamma_N(\rho)$ such that $\sum_k k\hat{Q}_{\gamma_N}(k) = 1$, where we have simplified notation by letting $\hat{Q}_{\gamma_N}(k) = \hat{Q}_{\gamma_N,N}(k)$. Let $\breve{Q}_N$ denote the induced minimising measure. Note that whilst $\breve{Q}_N$ has support $\{1,\ldots,N\}$ and is monotonic, it is not in $\mathcal{M}^1_N$, since we do not necessarily have $N\breve{Q}_N(k)\in\mathbb{N}$ for each $k$. Define
\[ \tilde{Q}_N(k) = \begin{cases} 1 - \sum_{l=2}^{N}\frac{1}{N}\bigl\lfloor N e^{-\gamma_N l}\hat{Q}^*(l)\bigr\rfloor & k = 1, \\[4pt] \frac{1}{N}\bigl\lfloor N e^{-\gamma_N k}\hat{Q}^*(k)\bigr\rfloor & 1 < k \le N, \\[4pt] 0 & k > N, \end{cases} \]
where $\lfloor x\rfloor$ is the greatest integer not exceeding $x$. For $\rho > \rho_c$ let
\[ \tilde{Q}_N(k) = \begin{cases} \frac{1}{N}\bigl\lfloor N\hat{Q}^*(k)\bigr\rfloor & 1 \le k \le N-1, \\[4pt] 1 - \sum_{l=1}^{N-1}\frac{1}{N}\bigl\lfloor N\hat{Q}^*(l)\bigr\rfloor & k = N, \end{cases} \]
where we put the excess mass on the final entry. In both cases we see that $\tilde{Q}_N \to \tilde{Q}$ termwise as $N\to\infty$.

To complete Theorem 3.1.3 we need to show the lower bound. Recall that we wish to show
\[ \limsup_{N\to\infty}\inf\bigl\{F(Q): Q\in\mathcal{M}^1_N\bigr\} \le \inf\bigl\{F(Q): Q\in\mathcal{M}^{\le}\bigr\}. \]
To prove that our lower bound holds we must show the following lemma.

Lemma 3.1.5. For all $Q\in\mathcal{M}^{=}$ we can construct a sequence $Q_N\in\mathcal{M}^1_N$ such that $Q_N\to Q$ pointwise and $\limsup_{N\to\infty} F(Q_N)\le F(Q)$.

Given $Q\in\mathcal{M}^{=}$, and fixing $\alpha\in(0,1)$, define
\[ Q_N(k) = \begin{cases} 1 - \sum_{l=2}^{\lfloor N^\alpha\rfloor}\frac{1}{N}\bigl\lfloor NQ(l)\bigr\rfloor & k = 1, \\[4pt] \frac{1}{N}\bigl\lfloor NQ(k)\bigr\rfloor & 1 < k \le \lfloor N^\alpha\rfloor, \\[4pt] 0 & k \ge \lceil N^\alpha\rceil, \end{cases} \]
where $\lceil x\rceil$ is the least integer greater than $x$.

Proposition 3.1.6. The sequence of measures $Q_N$ defined above satisfies:

1. $Q_N\in\mathcal{M}^1_N$ for all $N\ge1$;
2. $Q_N(k)\to Q(k)$ as $N\to\infty$, for all $k\ge1$.

Proof. It is immediately seen that the $Q_N$ define probability measures, and clearly $NQ_N(k)\in\mathbb{N}_0$ for all $k, N\ge1$. To prove the first statement it remains to confirm that the $Q_N$ are monotonically decreasing in $k$. For $k \ge \lceil N^\alpha\rceil$ this is trivial. For $2\le k\le\lfloor N^\alpha\rfloor$,
\[ Q(k)\ge Q(k+1) \implies \frac{1}{N}\lfloor NQ(k)\rfloor \ge \frac{1}{N}\lfloor NQ(k+1)\rfloor. \]
To show that $Q_N(1)\ge Q_N(2)$ we rewrite
\[ Q_N(1) = \frac{1}{N}\lfloor NQ(1)\rfloor + \underbrace{1 - \sum_{l=1}^{\lfloor N^\alpha\rfloor}\frac{1}{N}\lfloor NQ(l)\rfloor}_{\ge\,0}, \]
and then by monotonicity of $Q$, $\frac{1}{N}\lfloor NQ(1)\rfloor \ge \frac{1}{N}\lfloor NQ(2)\rfloor$, and it follows that $Q_N(1)\ge Q_N(2)$. So $Q_N\in\mathcal{M}^1_N$.

It is an easily seen result in real analysis that for all $a\in\mathbb{R}$,
\[ \frac{1}{n}\lfloor an\rfloor \longrightarrow a \quad \text{as } n\to\infty. \]
Hence $Q_N(k)\to Q(k)$ for $k>1$. For $Q_N(1)\to Q(1)$ it suffices to prove that
\[ 1 - \sum_{l=1}^{\lfloor N^\alpha\rfloor}\frac{1}{N}\lfloor NQ(l)\rfloor \longrightarrow 0 \]
as $N\to\infty$. Since $Q$ is a probability measure,
\[ 1 = \sum_{k=1}^{\infty} Q(k) = \sum_{k=1}^{\lfloor N^\alpha\rfloor}\Bigl(Q(k) - \frac{1}{N}\lfloor NQ(k)\rfloor\Bigr) + \sum_{k=1}^{\lfloor N^\alpha\rfloor}\frac{1}{N}\lfloor NQ(k)\rfloor + \sum_{k=\lceil N^\alpha\rceil}^{\infty} Q(k), \]
where we made use of the fact that $N^{-1}\lfloor NQ(k)\rfloor \le Q(k)$. Rearranging, and using the fact that $|Q(k) - Q_N(k)|\le N^{-1}$,
\[ 1 - \sum_{l=1}^{\lfloor N^\alpha\rfloor}\frac{1}{N}\lfloor NQ(l)\rfloor = \sum_{k=1}^{\lfloor N^\alpha\rfloor}\Bigl(Q(k) - \frac{1}{N}\lfloor NQ(k)\rfloor\Bigr) + \sum_{k=\lceil N^\alpha\rceil}^{\infty} Q(k) \le N^{\alpha-1} + \sum_{k=\lceil N^\alpha\rceil}^{\infty} Q(k) \longrightarrow 0 \]
as $N\to\infty$.
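The discretisation used in Lemma 3.1.5 can be checked directly for a concrete decreasing probability measure. In the sketch below we take, purely as a convenient example of our own choosing, the geometric measure $Q(k)=2^{-k}$; the three properties verified (integrality of $NQ_N(k)$, monotonicity, and pointwise convergence) are those of Proposition 3.1.6.

```python
from math import floor

def discretise(Q, N, alpha=0.5):
    """Build Q_N from a decreasing probability measure Q (a function of k >= 1),
    following Lemma 3.1.5: floor-discretise Q on {2, ..., floor(N^alpha)},
    put the remaining mass on k = 1, and truncate above floor(N^alpha)."""
    K = int(floor(N ** alpha))
    QN = [0.0] * (N + 1)                    # QN[k] for k = 1..N; index 0 unused
    for k in range(2, K + 1):
        QN[k] = floor(N * Q(k)) / N
    QN[1] = 1.0 - sum(QN[2:K + 1])
    return QN

def Q(k):
    return 2.0 ** (-k)                      # an arbitrary decreasing probability measure on N

for N in (10, 100, 1000, 10000):
    QN = discretise(Q, N)
    vals = QN[1:]
    assert abs(sum(vals) - 1.0) < 1e-12                                   # probability measure
    assert all(abs(N * v - round(N * v)) < 1e-6 for v in vals)            # N * Q_N(k) integer
    assert all(vals[i] >= vals[i + 1] for i in range(len(vals) - 1))      # decreasing in k
    print(N, QN[1], abs(QN[1] - Q(1)))      # Q_N(1) -> Q(1) = 0.5 as N grows
```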
We now return to prove the lemma.

Proof (of Lemma 3.1.5). Let $Q_N$ be as defined previously. We show the stronger result that in fact $\lim_{N\to\infty}F(Q_N) = F(Q)$. We have
\[ |F(Q_N) - F(Q)| = \Bigl|\sum_{k=1}^{\infty}\hat{Q}_N(k)\Bigl(\log\frac{\hat{Q}_N(k)}{\hat{Q}^*(k)} - 1\Bigr) - \sum_{k=1}^{\infty}\hat{Q}(k)\Bigl(\log\frac{\hat{Q}(k)}{\hat{Q}^*(k)} - 1\Bigr)\Bigr| \le \sum_{k=1}^{\infty}\bigl|\hat{Q}_N(k) - \hat{Q}(k)\bigr| + \Bigl|\sum_{k=1}^{\infty}\hat{Q}_N(k)\log\frac{\hat{Q}_N(k)}{\hat{Q}^*(k)} - \sum_{k=1}^{\infty}\hat{Q}(k)\log\frac{\hat{Q}(k)}{\hat{Q}^*(k)}\Bigr|. \]
Considering the first term,
\[ \sum_{k=1}^{\infty}\bigl|\hat{Q}(k) - \hat{Q}_N(k)\bigr| = \sum_{k=1}^{\infty}\bigl|Q(k) - Q_N(k) + Q_N(k+1) - Q(k+1)\bigr| \le 2\sum_{k=1}^{\lfloor N^\alpha\rfloor}\bigl|Q(k) - Q_N(k)\bigr| + 2\sum_{k=\lceil N^\alpha\rceil}^{\infty} Q(k) \le 2N^{\alpha-1} + 2\sum_{k=\lceil N^\alpha\rceil}^{\infty}Q(k) \longrightarrow 0 \]
as $N\to\infty$. Rearranging the remaining term we find
\[ \Bigl|\sum_{k=1}^{\infty}\hat{Q}_N(k)\log\frac{\hat{Q}_N(k)}{\hat{Q}^*(k)} - \sum_{k=1}^{\infty}\hat{Q}(k)\log\frac{\hat{Q}(k)}{\hat{Q}^*(k)}\Bigr| \le \underbrace{\Bigl|\sum_{k=1}^{\lfloor N^\alpha\rfloor}\hat{Q}_N(k)\log\frac{\hat{Q}_N(k)}{\hat{Q}(k)}\Bigr|}_{(\mathrm{I})} + \underbrace{\sum_{k=1}^{\infty}\bigl|\hat{Q}_N(k) - \hat{Q}(k)\bigr|\,\bigl|\log\hat{Q}(k)\bigr|}_{(\mathrm{II})} + \underbrace{\sum_{k=1}^{\infty}\bigl|\hat{Q}_N(k) - \hat{Q}(k)\bigr|\,\bigl|\log\hat{Q}^*(k)\bigr|}_{(\mathrm{III})}. \]
We show that each of these terms converges to 0 individually. Considering term (III) first,
\[ (\mathrm{III}) \le \underbrace{\sum_{k=1}^{\lfloor N^\alpha\rfloor}\bigl|\hat{Q}_N(k) - \hat{Q}(k)\bigr|\,\bigl|\log\hat{Q}^*(k)\bigr|}_{(\mathrm{III}')} + \underbrace{\sum_{k=\lceil N^\alpha\rceil}^{\infty}\hat{Q}(k)\,\bigl|\log\hat{Q}^*(k)\bigr|}_{(\mathrm{III}'')}. \]
For $(\mathrm{III}')$ we observe that $|\hat{Q}_N(k) - \hat{Q}(k)|\le N^{-1}$ and that, since $\hat{Q}^*(N^\alpha)\le\hat{Q}^*(k)\le\hat{Q}^*(1)$ on this range, $|\log\hat{Q}^*(k)|\le\max\{|\log\hat{Q}^*(1)|,\,|\log\hat{Q}^*(N^\alpha)|\}$. Since $\hat{Q}^*$ is strictly decreasing to 0, there is an $N_0$ such that for $N^\alpha > N_0$ we have $|\log\hat{Q}^*(1)|\le|\log\hat{Q}^*(N^\alpha)|$, and so $|\log\hat{Q}^*(k)|\le|\log\hat{Q}^*(N^\alpha)|$. Disregarding constant terms involving $\beta$ and $d$, for $\alpha<\gamma<1$,
\[ (\mathrm{III}') \le \sum_{k=1}^{\lfloor N^\alpha\rfloor}\frac{1}{N}\,\alpha\Bigl(1+\frac{d}{2}\Bigr)\log N \le \frac{N^{\alpha}}{N^{\gamma}}\cdot\frac{\alpha(1+\tfrac{d}{2})\log N}{N^{1-\gamma}} \longrightarrow 0 \]
as $N\to\infty$, where we use the fact that $x^{-\gamma}\log x\to0$ as $x\to\infty$ for $\gamma>0$. For $(\mathrm{III}'')$ we note that this is an entropy term and $\sum_{k\ge1}\hat{Q}(k)\log\hat{Q}^*(k) < \infty$, so taking $N\to\infty$ the sum in $(\mathrm{III}'')$ converges to 0.

We can similarly split (II),
\[ (\mathrm{II}) \le \underbrace{\sum_{k=1}^{\lfloor N^\alpha\rfloor}\bigl|\hat{Q}_N(k) - \hat{Q}(k)\bigr|\,\bigl|\log\hat{Q}(k)\bigr|}_{(\mathrm{II}')} + \underbrace{\sum_{k=\lceil N^\alpha\rceil}^{\infty}\hat{Q}(k)\,\bigl|\log\hat{Q}(k)\bigr|}_{(\mathrm{II}'')}. \]
The convergence of $(\mathrm{II}'')$ is as for $(\mathrm{III}'')$. For $(\mathrm{II}')$ we sum separately over those $k\le\lfloor N^\alpha\rfloor$ such that $\hat{Q}(k)<1/N$ and those such that $\hat{Q}(k)\ge1/N$. Note that if $\hat{Q}(k)\ge1/N$ then $|\log\hat{Q}(k)|\le\log N$, and so
\[ \sum_{\substack{k\le\lfloor N^\alpha\rfloor\\ \hat{Q}(k)\ge1/N}}\bigl|\hat{Q}_N(k)-\hat{Q}(k)\bigr|\,\bigl|\log\hat{Q}(k)\bigr| \le \frac{N^{\alpha}}{N}\log N \longrightarrow 0 \quad \text{as } N\to\infty. \]
Alternatively, given $\hat{Q}(k)<1/N$, note first that the $k$ for which $\hat{Q}(k)=0$ do not contribute to the sum, since then $\hat{Q}_N(k)=0$ and $0\log0=0$. Define
\[ a^N_k = \begin{cases} N\hat{Q}(k) & \text{if } k\le\lfloor N^\alpha\rfloor \text{ and } 0<\hat{Q}(k)<1/N, \\ 1 & \text{else.} \end{cases} \]
Then
\[ \sum_{\substack{k\le\lfloor N^\alpha\rfloor\\ \hat{Q}(k)<1/N}}\bigl|\hat{Q}_N(k)-\hat{Q}(k)\bigr|\,\bigl|\log\hat{Q}(k)\bigr| \le \underbrace{\sum_{k=1}^{\lfloor N^\alpha\rfloor}\frac{1}{N}\Bigl|\log\frac{1}{N}\Bigr|}_{\longrightarrow\,0\ \text{as } N\to\infty} + \frac{1}{N}\sum_{k=1}^{\lfloor N^\alpha\rfloor}\bigl|\log a^N_k\bigr| \le o(1) + N^{\alpha-1}\sup_{k\in\mathbb{N}}\bigl|\log a^N_k\bigr| \longrightarrow 0 \]
as $N\to\infty$. Finally we turn to term (I) and observe that
\[ \sum_{k=1}^{\lfloor N^\alpha\rfloor}\hat{Q}_N(k)\log\frac{\hat{Q}_N(k)}{\hat{Q}(k)} = H\bigl(\hat{Q}_N\,\big|\,\hat{Q}\bigr), \]
where $H(\mu|\nu) = \sum_s\mu_s\log(\mu_s/\nu_s)$ denotes the relative entropy function, and since $\hat{Q}_N$ is absolutely continuous with respect to $\hat{Q}$ it is known that $H(\hat{Q}_N|\hat{Q})<\infty$. What is more, $H$ is known to be continuous, and since $\hat{Q}_N\to\hat{Q}$ we have $\lim H(\hat{Q}_N|\hat{Q}) = H(\hat{Q}|\hat{Q}) = 0$, and so (I) goes to 0.

3.2 Uniform and Ewens Weights

As before we consider the state space $\Omega_{\Lambda_N,N}$, but consider a different partition function. We consider the weighting obtained by choosing partitions $\lambda\in\mathcal{P}_N$ uniformly; we observe here that this is different from choosing our weights uniformly in $S_N$, the collection of permutations of $\{1,\ldots,N\}$. Since we still vary over $\xi\subset\mathbb{R}^d$, locally finite with $\#(\xi\cap\Lambda_N)=N$, the partition function retains a weak spatial dependence in the form of a volume term:
\[ Z^{(\mathrm{unif})}_{\Lambda_N,\beta} := \sum_{\lambda\in\mathcal{P}_N}\prod_{k=1}^{N}\frac{|\Lambda_N|^{\lambda_k}}{\lambda_k!\,k^{\lambda_k}}. \]  (3.2.1)
As with the Bose statistics we find:

Theorem 3.2.1. The partition function $Z^{(\mathrm{unif})}_{\Lambda_N,\beta}$ has specific free energy given by
\[ f^{(\mathrm{unif})}(\beta,\rho) = -\frac{\rho}{\beta}\inf\bigl\{F(Q): Q\in\mathcal{M}^{\le}\bigr\}, \]  (3.2.2)
where
\[ F(Q) := \sum_{k\ge1}\hat{Q}(k)\bigl(\log\bigl(k\rho\hat{Q}(k)\bigr) - 1\bigr). \]  (3.2.3)

Using Lagrange multipliers to find the solutions to $\inf\{F(Q): Q\in\mathcal{M}^{\le}\}$, for $\gamma>0$ we find the minimiser
\[ \hat{Q}_\gamma(k) = \frac{e^{-\gamma k}}{\rho k}. \]
Letting $\tilde{Q}_\gamma$ denote the associated minimising measure, this has total mass
\[ \sum_{k=1}^{\infty}\tilde{Q}_\gamma(k) = \sum_{k=1}^{\infty}k\hat{Q}_\gamma(k) = \frac{1}{\rho}\sum_{k=1}^{\infty}e^{-\gamma k} = \frac{1}{\rho(1-e^{-\gamma})} - \frac{1}{\rho}. \]
In particular, for all $\rho>0$ there is a $\gamma(\rho)>0$,
\[ \gamma(\rho) = \log\Bigl(1 + \frac{1}{\rho}\Bigr), \]
such that $\tilde{Q} = \tilde{Q}_{\gamma(\rho)}\in\mathcal{M}^{=}$ and $F(\tilde{Q}) = \inf\{F(Q): Q\in\mathcal{M}^{\le}\}$.

Theorem 3.2.2. For $d\ge1$, the edge based permutation model with uniform weights exhibits no phase transition. In particular, $\rho_c = \infty$.
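The closed form $\gamma(\rho)=\log(1+1/\rho)$ is easy to sanity-check numerically, together with the value of $F$ at the minimising measure. The following sketch simply truncates the relevant series; the helper names and the choice $\rho=2$ are ours and purely illustrative.

```python
from math import log, exp

def gamma_of_rho_uniform(rho):
    """Closed-form tilt for the uniform weights: gamma(rho) = log(1 + 1/rho)."""
    return log(1.0 + 1.0 / rho)

def total_mass(gamma, rho, terms=100_000):
    """sum_k k * Qhat_gamma(k) = (1/rho) * sum_k e^{-gamma k}, truncated."""
    return sum(exp(-gamma * k) / rho for k in range(1, terms + 1))

def F_uniform(gamma, rho, terms=100_000):
    """Truncated evaluation of F(Q_gamma) from (3.2.3) with Qhat_gamma(k) = e^{-gamma k}/(rho k)."""
    total = 0.0
    for k in range(1, terms + 1):
        qh = exp(-gamma * k) / (rho * k)
        total += qh * (log(k * rho * qh) - 1.0)     # log(k rho qh) = -gamma k, so no log(0)
    return total

rho = 2.0
g = gamma_of_rho_uniform(rho)
print("gamma(rho)      =", g)
print("total mass      =", total_mass(g, rho))      # should be ~ 1 for every rho > 0
print("inf F (approx.) =", F_uniform(g, rho))       # candidate value of the infimum
```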
We also mention a third choice of weights, induced by the Ewens sampling formula. For a parameter $\theta>0$ the Ewens sampling formula for partitions $\lambda\in\mathcal{P}_N$ is given by
\[ \mathbb{P}[\lambda = (\lambda_1,\ldots,\lambda_N)] = \frac{N!}{\theta(\theta+1)\cdots(\theta+N-1)}\prod_{k=1}^{N}\frac{\theta^{\lambda_k}}{k^{\lambda_k}\lambda_k!}. \]  (3.2.4)
The Ewens distribution is an important distribution in genetics, where $n$ genes are sampled and $\lambda_k$ is the number of distinct alleles that occur exactly $k$ times. Note that in the case $\theta=1$ we recover the uniform distribution, (2.3.4). Letting $\theta^{\uparrow N} = \theta(\theta+1)\cdots(\theta+N-1)$, we induce the following Ewens partition function for our spatial model:
\[ Z^{(\mathrm{Ewens})}_{\Lambda_N,\beta}(\theta) := \frac{N!}{\theta^{\uparrow N}}\sum_{\lambda\in\mathcal{P}_N}\prod_{k=1}^{N}\frac{(\theta|\Lambda_N|)^{\lambda_k}}{\lambda_k!\,k^{\lambda_k}}. \]  (3.2.5)
The following result shows that the Ewens-$\theta$ distribution can be seen as the uniform distribution on a rescaled box.

Corollary 3.2.3. For $d\ge1$ and parameter $\theta>0$, the edge based permutation model with the Ewens distribution has $\rho_c=\infty$. Further,
\[ f^{(\mathrm{Ewens})}(\theta,\beta,\rho) = f^{(\mathrm{unif})}(\beta,\rho/\theta). \]  (3.2.6)

Proof. Setting $\tilde\rho = \rho/\theta$, and letting $\tilde\Lambda_N$ be a sequence of boxes with $|\tilde\Lambda_N| = \theta|\Lambda_N|$, so that $N/|\tilde\Lambda_N|\to\tilde\rho$, we have
\[ Z^{(\mathrm{Ewens})}_{\Lambda_N,\beta}(\theta) = \frac{N!}{\theta^{\uparrow N}}\sum_{\lambda\in\mathcal{P}_N}\prod_{k=1}^{N}\frac{(\theta|\Lambda_N|)^{\lambda_k}}{\lambda_k!\,k^{\lambda_k}} = \frac{N!}{\theta^{\uparrow N}}\sum_{\lambda\in\mathcal{P}_N}\prod_{k=1}^{N}\frac{|\tilde\Lambda_N|^{\lambda_k}}{\lambda_k!\,k^{\lambda_k}} = \frac{N!}{\theta^{\uparrow N}}\,Z^{(\mathrm{unif})}_{\tilde\Lambda_N,\beta}. \]
To check that $f^{(\mathrm{Ewens})}(\theta,\beta,\rho) = f^{(\mathrm{unif})}(\beta,\tilde\rho)$ for all $\theta>0$ it remains to confirm that
\[ \lim_{N\to\infty}\frac{1}{N}\log\frac{N!}{\theta^{\uparrow N}} = 0. \]
Recall that the Gamma function $\Gamma(z)$ is the extension of the factorial to the complex numbers, and on the positive real line is given by the integral
\[ \Gamma(x) = \int_{0}^{\infty} t^{x-1}e^{-t}\,\mathrm{d}t, \qquad x>0. \]  (3.2.7)
Note that $\Gamma$ is continuous for $x>0$, satisfies $\Gamma(n) = (n-1)!$ for $n\in\mathbb{N}$, and there is a value $x_0<1.5$ such that $\Gamma$ is increasing above $x_0$. [Pi05] observes that
\[ \frac{N!}{\theta^{\uparrow N}} = \frac{N!\,\Gamma(\theta)}{\Gamma(N+\theta)}. \]
Since for $N\ge2$ we have $\Gamma(N+\lfloor\theta\rfloor)\le\Gamma(N+\theta)\le\Gamma(N+\lceil\theta\rceil)$, then
\[ \frac{N!\,\Gamma(\theta)}{(N+\lceil\theta\rceil)!} \le \frac{N!}{\theta^{\uparrow N}} \le \frac{N!\,\Gamma(\theta)}{(N+\lfloor\theta\rfloor)!}. \]
In the limit we can ignore $\Gamma(\theta)\in(0,\infty)$, since it is constant, and given the bounds above it suffices to show convergence for the case $\theta\in\mathbb{N}$. Again we find bounds,
\[ \frac{1}{(N+\theta)^{\theta}} \le \frac{N!}{(N+\theta)!} \le \frac{1}{N+\theta}, \]
and we are done by the sandwich rule, since $\frac{x}{N}\log\frac{1}{N+\theta}$ converges to 0 for all $x>0$ as $N\to\infty$ (in particular for $x=1$ and $x=\theta$).
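As a check on (3.2.4), the sketch below verifies numerically, for a small $N$, that the Ewens sampling formula defines a probability distribution on $\mathcal{P}_N$, and that for $\theta=1$ it assigns to each partition the proportion of permutations in $S_N$ with that cycle type. The partition enumerator repeats the one from the earlier sketch; the names and parameter values are ours.

```python
from math import factorial

def partitions_multiplicity(n, max_part=None):
    """Integer partitions of n in multiplicity form {k: lambda_k} (as in the earlier sketch)."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield {}
        return
    for k in range(min(n, max_part), 0, -1):
        for m in range(n // k, 0, -1):
            for rest in partitions_multiplicity(n - m * k, k - 1):
                yield {**rest, k: m}

def ewens_probability(lam, N, theta):
    """Ewens sampling formula (3.2.4) for a partition lam in multiplicity form."""
    rising = 1.0
    for j in range(N):
        rising *= theta + j                  # theta^{up N} = theta(theta+1)...(theta+N-1)
    p = factorial(N) / rising
    for k, lam_k in lam.items():
        p *= theta ** lam_k / (k ** lam_k * factorial(lam_k))
    return p

N, theta = 8, 2.5
parts = list(partitions_multiplicity(N))
assert abs(sum(ewens_probability(l, N, theta) for l in parts) - 1.0) < 1e-12

# For theta = 1 the Ewens weight of lam equals the proportion of permutations in S_N
# with cycle type lam, namely (N! / prod_k k^{lambda_k} lambda_k!) / N!.
for lam in parts:
    denom = 1
    for k, lam_k in lam.items():
        denom *= k ** lam_k * factorial(lam_k)
    assert abs(ewens_probability(lam, N, 1.0) - (factorial(N) // denom) / factorial(N)) < 1e-12
```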
Chapter 4

Conclusion

As indicated throughout, there is still much work to be done on this model. In the following we highlight some of the limitations of the setting we have worked under in this dissertation, and make some remarks about more general settings. We also introduce the notion of Gibbs measures, and explain how these will be important in confirming that the loss of probability mass on the configuration space does correspond to the existence of infinite cycles.

In our introduction we remarked (Remark 1.3.3) that continuum models can be built in two ways: either by taking a point process and building a form of combinatorial structure on it, or by simultaneously building the point set and the structure. Whilst in both cases the structure itself may depend on the point set (so that there is a spatial correlation), the underlying points are independent of the structure. Throughout this dissertation we have worked under the restrictive assumption that the point set is given by a P.p.p., which gave us the freedom to easily define percolation. As we noted in Chapter 2, dropping the requirement that $\eta\mapsto\xi_\eta$ is Poisson means we can no longer condition the process on containing the origin with no adverse effect. As we saw throughout our introduction, to date approaches to the basic questions of continuum percolation have relied on the properties of the Palm distribution of a P.p.p., so to consider our more general dependent (rather than correlated) edge based permutation model we will need to use different tools.

Likewise, our calculations of the partition function (2.2.8) leant heavily on the fact that we could split the integral as a sum over configurations in the annealed space and an expectation over the P.p.p. In all, we see that this Poissonisation has been key to our ability to work with this model, and is a feature which will need to be worked away from if we are to consider stronger dependences.

Throughout we have also worked in the canonical ensemble; another, altogether more realistic, approach is to work in the grand canonical ensemble, in which we do not fix $\Lambda_N$ to contain exactly $N$ points. We consider the model to have an average number of points $N$, and control how much we deviate from this average by an external parameter $\mu$ called the chemical potential. We define a family of Hamiltonians $(H^{(N)}(\beta,\cdot))_{N=0}^{\infty}$, where $H^{(N)}(\beta,\cdot):\Omega_{\Lambda_N}\to\mathbb{R}$ is a Hamiltonian on $\Lambda$ containing $N$ points. Our partition function is then
\[ Z_{\Lambda}(\beta,\mu) = \sum_{N=0}^{\infty}\int_{\Omega_{\Lambda_N}} e^{-H^{(N)}(\beta,\eta,\xi) + \beta\mu N}\,\mathrm{d}\eta. \]  (4.0.1)
A key question in statistical mechanics is whether the ensembles are equivalent. The work of both Sütő and Schakel dealt with models similar to ours but in the grand canonical ensemble, and so showing this equivalence is strongly motivated.
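For the free-boundary Bose weights of Chapter 2, the grand canonical partition function can be assembled directly from the canonical ones as $\sum_{N\ge0}e^{\beta\mu N}Z_{N,\Lambda}(\beta)$, which is how we read (4.0.1) in that special case. The sketch below truncates this sum, reusing the helpers bose_partition_function and partitions_multiplicity from the sketch at the end of Chapter 2; the closed-form comparison via the exponential formula is our own consistency check, not a formula taken from this dissertation. The parameter values are arbitrary, and $\mu$ must be taken sufficiently negative for the full series to converge.

```python
from math import exp, pi

def grand_canonical_truncated(volume, beta, mu, d, N_max=25):
    """Z_Lambda(beta, mu) ~ sum_{N=0}^{N_max} e^{beta mu N} Z_{N,Lambda}(beta), cf. (4.0.1),
    reusing bose_partition_function from the sketch at the end of Chapter 2."""
    total = 1.0                                   # the empty configuration, with Z_0 := 1
    for N in range(1, N_max + 1):
        total += exp(beta * mu * N) * bose_partition_function(N, volume, beta, d)
    return total

def grand_canonical_closed_form(volume, beta, mu, d, terms=10_000):
    """For these multiplicative cycle weights the sum over N factorises (exponential formula):
    Z(beta, mu) = exp( sum_k e^{beta mu k} |Lambda| / (k (4 pi beta k)^{d/2}) ).
    This closed form is our own check and not quoted from the dissertation."""
    s = sum(exp(beta * mu * k) * volume / (k * (4 * pi * beta * k) ** (d / 2))
            for k in range(1, terms + 1))
    return exp(s)

vol, beta, mu, d = 10.0, 1.0, -1.0, 3
print(grand_canonical_truncated(vol, beta, mu, d))
print(grand_canonical_closed_form(vol, beta, mu, d))   # the two values should agree closely
```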
Throughout the dissertation we have also assumed that we were working under free boundary conditions, in which there is no interaction outside of $\Lambda$. To approach an analysis of the full configuration space we will want to view $\Lambda$ as a box living within a larger system, and to allow for interactions outside of the box; to achieve this we must reconstruct our definition of a partition function. In the following we introduce a more general form of partition function for the canonical ensemble, and define Gibbs measures.

Suppose that $\Omega$ is the configuration space of a spatial model in $\mathbb{R}^d$, that $\mathcal{F}$ is a suitably defined $\sigma$-algebra, and that for boxes $\Lambda\subseteq\mathbb{R}^d$, $\mathcal{F}_\Lambda$ is the $\sigma$-algebra of the model restricted to $\Lambda$, with $\mathcal{T}_\Lambda = \mathcal{F}_{\mathbb{R}^d\setminus\Lambda}$. If $\Lambda_n\to\mathbb{R}^d$ is a sequence of boxes, then we can define a tail $\sigma$-algebra for $\Omega$ by
\[ \mathcal{T} := \bigcap_{n\in\mathbb{N}}\mathcal{T}_{\Lambda_n}. \]
(Technically this needs much more detail.) In what follows we will use $\eta$ to denote configurations on subspaces $\Omega_\Lambda$, and reserve $\omega$ for configurations on $\Omega$. We define the point cloud $\xi_\omega(\eta)$ to be
\[ \xi_\omega(\eta) = (\xi_\omega\cap\Lambda^c)\cup\xi_\eta, \]
that is, the point set which contains all points of $\omega$ not in $\Lambda$, together with the points of $\eta$ in $\Lambda$. For a fixed configuration $\omega\in\Omega$ we define a Hamiltonian
\[ H^{\omega}_{\Lambda}(\eta;\xi_\omega(\eta)) := \sum_{x,y\in\xi_\eta}\varphi(x,y) + \sum_{x\in\xi_\eta,\ y\in\xi_\omega\cap\Lambda^c}\psi(x,y), \]  (4.0.2)
where $\varphi,\psi:\mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}$ are suitably defined functions. Note that in the free case $\psi\equiv0$ and we have no interaction outside of $\Lambda$. Our construction of a partition function on a box $\Lambda$ follows by fixing a configuration outside of the box, and then averaging over all configurations inside the box:
\[ Z_\Lambda(\omega) := \int_{\Omega_\Lambda} e^{-\beta H^{\omega}_{\Lambda}(\eta;\xi_\omega(\eta))}\,\mathrm{d}\eta. \]  (4.0.3)
So our partition function is a function of configurations on $\Omega$; again, returning to the free case, we see that $Z_\Lambda(\omega)\equiv Z_\Lambda$ is a constant, and agrees with our formulation of a partition function in Chapter 2. The Gibbs distribution in $\Lambda$ is then
\[ \gamma^{(\omega,\beta)}_{\Lambda}(\mathrm{d}\eta) := \frac{1}{Z_\Lambda(\omega)}\,e^{-\beta H^{\omega}_{\Lambda}(\eta)}\,\mathrm{d}\eta. \]  (4.0.4)

Definition 4.0.4. A measure $\mu\in\mathcal{M}(\Omega)$ is said to be a Gibbs measure for the family of Hamiltonians $\bigl(H^{\eta}_{\Lambda}\bigr)_{\Lambda\subset\mathbb{R}^d,\,\eta\in\Omega}$ and inverse temperature $\beta>0$ if it satisfies
\[ \mu(A\,|\,\mathcal{T}_\Lambda)(\omega) = \frac{1}{Z_\Lambda(\omega)}\int_{\Omega_\Lambda} e^{-\beta H^{\omega}_{\Lambda}(\eta)}\,\chi_A\,\mathrm{d}\eta \quad \text{almost surely}, \]  (4.0.5)
for all $A\in\mathcal{F}$ and $\Lambda\subset\mathbb{R}^d$, where $\mu(\cdot\,|\,\cdot)$ denotes the conditioned measure. In terms of the Gibbs distribution, (4.0.5) can be rewritten as
\[ \mu(A\,|\,\mathcal{T}_\Lambda)(\cdot) = \gamma^{(\cdot,\beta)}_{\Lambda}(A). \]  (4.0.6)
This is known as the Dobrushin, Lanford, Ruelle (DLR) condition. We write $\mathcal{G}_\Lambda(\beta)$ for the family of measures satisfying the DLR condition. Gibbs measures play an important role in statistical mechanics due to their description of phase transitions: we say that the model exhibits a first order phase transition if $\#\mathcal{G}_\Lambda(\beta)>1$ for some $\beta>0$. This is perhaps the best description of a phase transition, since most definitions are rather heuristic in nature. In future work we hope to be able to find a suitable metric under which we can define convergence of Gibbs measures on $\Lambda$ to those on $\mathbb{R}^d$. The limiting family $\mathcal{G}(\beta)$ will then describe phase transitions for the whole space $\Omega$.

In our formulation of percolation for edge based permutations, we mentioned that we would let our measures depend on a family of parameters $\Gamma$. In practice the varying parameter will be the inverse temperature $\beta>0$, and we look for a critical value $\beta_c$ for which, if $\beta>\beta_c$, then $C^\infty$ occurs with probability 1. Our aim in the future will be to draw a correspondence between this critical value and the existence of a phase transition in the set of Gibbs measures: $\#\mathcal{G}(\beta)>1$ for $\beta>\beta_c$.

Bibliography

[Ad06] Adams, S., Lectures on Mathematical Statistical Mechanics. Dublin Institute for Advanced Studies, 2006.

[ACK11] Adams, S., Collevecchio, A., König, W., A variational formula for the free energy of an interacting many-particle system. Ann. Probab. 39, No. 2, 638-728, 2011.

[Ad11] Adams, S., Large deviations for empirical cycle counts of integer partitions and their relation to systems of Bosons. Analysis and Stochastics of Growth Processes, LMS, Oxford University Press, 148-172, 2008.

[An76] Andrews, G. E., The Theory of Partitions. Cambridge University Press, Cambridge, 1976.

[BCMP05] Benfatto, G., Cassandro, M., Merola, I., Presutti, E., Limit theorems for statistics of combinatorial partitions with applications to mean field Bose gas. Journal of Math. Phys. 46, 033303, 2005.

[Bo98] Bollobás, B., Modern Graph Theory. Springer, New York, 1998.

[BR06] Bollobás, B., Riordan, O., Percolation. Cambridge University Press, Cambridge, 2006.

[BH57] Broadbent, S. R., Hammersley, J. M., Percolation processes I. Crystals and mazes. Proc. Cambridge Philos. Soc. 53, 629-641, 1957.

[DHH10] Deijfen, M., Häggström, O., Holroyd, A. E., Percolation in invariant Poisson graphs with i.i.d. degrees. Arkiv för Matematik, 2010.

[DHP10] Deijfen, M., Holroyd, A. E., Peres, Y., Stable Poisson graphs in one dimension. Elec. J. Probab. 16 (44), 1238-1253, 2011.

[Fe53a] Feynman, R. P., The λ-Transition in Liquid Helium. Physical Review 90 (6), 1116-1117, 1953.

[Fe53b] Feynman, R. P., Atomic Theory of the λ-Transition in Helium. Physical Review 91 (6), 1291-1301, 1953.

[Fi91] Fichtner, K.-H., Random permutations of countable sets. Probab. Th. Rel. Fields 89, 35-60, 1991.

[Gi61] Gilbert, E. N., Random plane networks. Ann. Math. Statist. 30, 1141-1144.

[Gr25] Gram, J. P., Tafeln für die Riemannsche Zetafunction. Kungl. Danske Vid. Selsk. Skr. 10 (8), 313-325, 1925.

[Gr99] Grimmett, G., Percolation. Springer, Berlin, 1999.

[MR96] Meester, R., Roy, R., Continuum Percolation. Cambridge University Press, Cambridge, 1996.

[Pe03] Penrose, M., Random Geometric Graphs. Oxford University Press, Oxford, 2003.

[PS08] Pethick, C. J., Smith, H., Bose-Einstein Condensation in Dilute Gases. Cambridge University Press, Cambridge, 2008.

[PS03] Pitaevskii, L., Stringari, S., Bose-Einstein Condensation. Oxford University Press, Oxford, 2003.
[Pi05] Pitman, J., Combinatorial Stochastic Processes. Springer, 2005.

[QTZ00] Quintanilla, J., Torquato, S., Ziff, R. M., Efficient measurement of the percolation threshold for fully penetrable discs. J. Phys. A 33, L399-L407, 2000.

[RT02] Roy, R., Tanemura, H., Critical intensities of Boolean models with different underlying convex shapes. Adv. in Appl. Probab. 34, 48-57, 2002.

[Sc01] Schakel, A., Percolation, Bose-Einstein condensation, and string proliferation. Phys. Rev. E 63, 026115, 2001.

[Si82] Sinai, Y. G., Theory of Phase Transitions: Rigorous Results. Pergamon Press, Oxford, 1982.

[Sü93] Sütő, A., Percolation transition in the Bose gas. J. Phys. A: Math. Gen. 26, 4689-4710, 1993.

[Sü02] Sütő, A., Percolation transition in the Bose gas II. J. Phys. A: Math. Gen. 35, 6995-7002, 2002.

[Ve96] Vershik, A. M., Statistical mechanics of combinatorial partitions, and their limit shapes. Functional Analysis and Its Applications 30, No. 3, 90-105, 1996.

[ZUK77] Ziff, R. M., Uhlenbeck, G. E., Kac, M., The ideal Bose-Einstein gas, revisited. Physics Reports 32 (4), 169-248, 1977.