>>: So next is the Birnbaum lecture. Bill Birnbaum was a professor at the
University of Washington from 1939 to 1974. He performed distinguished
service to the university as well as to the society. He served as the
president of the IMS as well as the editor of the Annals of Mathematical Statistics,
which is the predecessor of the Annals of Probability and the Annals of
Statistics, the offspring of that journal.
And he passed away in 2000. And in his memory a few years ago a Birnbaum
lecture was established for this Northwest Probability Seminar. And it's a great
pleasure and honor to have Jean-Francois Le Gall from Universite Paris-Sud (Orsay)
to give this year's Birnbaum lecture. He's going to talk about continuous limits of
large random planar maps.
>> Jean-Francois Le Gall: Okay. Thank you for the presentation. I would also
like to thank the organizers for this opportunity. So I want to talk about
scaling limits for random planar maps. So planar maps are just graphs
embedded in the plane, or in fact it will be more convenient to view them as
embedded on the sphere. And the idea is to work with random objects. So to
pick such a graph uniformly at random in a given class, I will be more precise
about this later, to let the size of the graph tend to infinity, and to study the scaling
limit of these objects viewed as metric spaces for the graph distance. Okay?
So essentially if you have a graph, it induces a metric structure, a distance, on the
vertex set, and if you take a graph which is larger and larger you can rescale the
distance and hope to get a limit. And this is what I will try to explain in this
lecture. Okay.
So why is it interesting? Essentially we hope that in this way we get a limiting
universal object, in the sense that it's not going to depend on the particular class
of discrete objects that we start from, okay?
We also hope that we get in this way an interesting continuous limit, what I will
call the Brownian map. Although as we shall see it is not yet completely
identified in some sense. But essentially if we can study this limiting continuous
object, we also hope that we understand better the properties of the large
discrete objects, the large planar maps, okay.
So in a sense, this is very analogous to what we always do in probability: we use
the convergence of rescaled random paths to Brownian motion, and we know that
Brownian motion is a very nice and important object. And we also know that if we
understand Brownian motion well, we can derive information about long random
paths, okay.
So in some sense we want to do something analogous, but instead of dealing
with random paths we are going to deal with random graphs. Okay.
So here is a brief outline of the lecture. So I will insist, in the second part, on the
main technical tool that we are going to use, which is certain bijections between
graphs and trees, okay? And in the last part I will try to present some more recent
work about geodesics in these objects. Okay?
So let me start with a brief introduction to planar maps. So as I said before, a
planar map is just a proper embedding, where proper means that edges do not
cross, okay? So a proper embedding of a connected graph, I should say a finite
connected graph, into the two-dimensional sphere.
And something which is very important is that we look only at the shape of the
graph, so we are not interested in the particular embedding; we identify two
embeddings if they correspond via a direct homeomorphism of the sphere.
Okay.
So it's really a combinatorial object. So here is an example of a planar map. So
something which you can define because your graph is embedded is a notion of
a face, okay? So the faces are simply the connected components of the
complement of the union of edges. So in this particular case you have one, two,
three, four, five, six, seven faces.
Okay. You should not forget the external one, because you are on the sphere; in
fact, there is no external one. And I'm going to be interested in a particular
class of planar maps, namely the so-called P-angulations, okay? So P is an
integer, a fixed integer greater than or equal to three. And a P-angulation is a planar
map such that each face has exactly P adjacent edges. So in the case P equals
three this is the familiar notion of a triangulation, and in the case P equals four, this is
what we call a quadrangulation.
So this one, for instance, here, is a quadrangulation, okay. You can check that
every face has four adjacent edges. It's also true for the external face. It's also
true for this face here, which may seem a little bit surprising. But when you have a
situation like this, where an edge is completely contained in the face and so is
adjacent to only one face, we count it twice, because we imagine that we go
around the face like this and we meet this edge twice, okay?
So these faces are quadrangular. Okay. So perhaps the last thing that I will need, I
will need the notion of a rooted map, okay? So rooting a map means that I
distinguish an edge which is also oriented. So for instance on this picture I show
this edge and I orient it downwards. And then the origin of the root edge is called
the root vertex. Okay?
So this is in a sense a combinatorial trick. It's much easier, for
enumeration purposes, it's much easier to deal with rooted objects. And we
believe that this is not so important -- that it does not make such a big difference for
the problems I'm going to address.
Okay. So this is just a simulation of a large triangulation, okay, drawn on a
surface homeomorphic to the sphere. So essentially the idea now is: here we
have hundreds of edges, hundreds of vertices. Can we choose such an object
at random, let the number of edges tend to infinity, and try to get
some continuous limit?
So what do I mean by continuous limit? So this will be in the sense of
convergence of metric spaces. I will view my graph, my planar map, as a metric
space. So this is very easy to do. Look at the vertex set, V of M. I can equip it
with the usual graph distance, okay. So the distance between two vertices is just the
minimal number of edges on a path from the first vertex to the second one.
And equipped with the graph distance, the vertex set is of course a finite metric
space, okay?
And now the idea is to choose the planar map M uniformly at random, for
instance, here in the set of all rooted P-angulations with N faces. So let me
emphasize that we identify two planar maps if they correspond via a direct
homeomorphism of the sphere.
So thanks to this identification, this set of all triangulations with N faces is a
finite set, so it makes perfect sense to choose one uniformly at
random in this set. And we want to understand how the
associated metric space behaves when we let N tend to infinity, okay. So MN is
chosen uniformly at random, as I said before, in this set.
And we expect that if we rescale the distance properly, so if you multiply the
graph distance by a factor tending to zero as N tends to infinity, so you can imagine
that you assign a length N to the power minus A to each edge instead of having
edges of length one, so if you do this rescaling, you expect that the rescaled metric
spaces will converge, when the number of faces tends to infinity, to some
continuous limit, okay?
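In symbols, and writing d_gr for the graph distance (a notation added here for readability, not taken from the slides), the hoped-for statement is

    \big( V(M_N), \, N^{-a} \, d_{gr} \big) \;\longrightarrow\; (M_\infty, D) \quad \text{in distribution as } N \to \infty,

for a suitable exponent a > 0, in the Gromov-Hausdorff sense recalled below; as we will see in a moment, the right choice is a = 1/4.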
And the meaning of this convergence will be in the sense of the
Gromov-Hausdorff distance, which I will recall in a while. Okay? So before
doing that, two remarks. Here it's important to rescale the graph distance if you
want to remain in the framework of compact spaces, okay? It's also interesting to
study this convergence without rescaling, okay.
If you don't do any rescaling, in the limit you
will get an infinite random lattice, an infinite random graph, and this has been
studied by various people, in particular [inaudible] and [inaudible] and others.
But this is not what I'm doing here. Here I want to have a kind of global
limit and not a local limit.
Okay, and as I said before, we expect some universality of the limit. It should not
depend on the integer P. So it should be the same for P -- for triangulations, for
quadrangulations or for more general random planar maps.
Okay. So let me very briefly remind you of the Gromov-Hausdorff distance. So this is
just the definition of the classical Hausdorff distance between compact subsets of a
metric space. Now, if you have two compact metric spaces which are not a priori
subsets of a bigger space, you cannot of course use the Hausdorff distance to
compare them.
But what you can do -- there is a very simple idea due to Gromov.
You can embed both spaces simultaneously into the same big space, okay? So this
is the meaning of the picture. You have here the red space E1, you have the green
space E2. You find embeddings psi 1 and psi 2 -- you can always do
that -- of E1 and E2 into the same space. It's very important that these are isometric
embeddings, so that they preserve distances. It's really isometric copies of your metric
spaces that you find inside the same big space.
And then once you are in the same big space you can use the Hausdorff
distance to compare your two spaces. Okay? So this is what you do. You
minimize, over all possible isometric embeddings of E1 and E2 into the same
space capital E, the Hausdorff distance between the embeddings. And this
gives you the so-called Gromov-Hausdorff distance. It has nice
properties, and in particular if you look at the set of all isometry classes of
compact metric spaces, you can check that the Gromov-Hausdorff distance is
indeed a distance on this space capital K. This is not completely obvious. And
moreover, if you equip K with this distance you get quite a nice metric
space, a Polish space, that is, separable and complete. Okay.
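For reference, the definition just described can be written as

    d_{GH}(E_1, E_2) = \inf_{\psi_1, \psi_2} \; d_{Haus}\big( \psi_1(E_1), \psi_2(E_2) \big),

where the infimum is over all isometric embeddings \psi_1 : E_1 \to E and \psi_2 : E_2 \to E into a common metric space E, and d_Haus denotes the Hausdorff distance in E.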
So now I come back to the problem I was mentioning in the beginning of the
lecture. So it makes perfect sense to study the convergence in distribution of
the vertex set of my graph, V of MN, equipped with the rescaled graph distance,
viewed as a random variable with values in this space capital K. Okay? So it's the
usual setting of having a sequence of random variables taking values in a Polish
space and you want to study the convergence in distribution. Okay.
So this problem in fact was stated, I think first, in this form for triangulations
by Oded Schramm in his ICM paper. Okay.
So, well, I can immediately tell you what's going to be the right choice of A. Of
course the parameter A is chosen in such a way that the diameter of this space
remains bounded. So here the diameter before rescaling should be
of order N to the power A, and it has been known for some time that the right value of A is
one-fourth. Okay. This is for quadrangulations but, as I said before, it
would be the same for P-angulations in fact.
Okay. So very briefly some motivations for studying planar maps. There are a lot
of motivations coming from combinatorics. Planar maps are important objects in
combinatorics. They have been so, I think, since the work of Tutte in the '60s.
There are motivations from theoretical physics. There are deep connections
between enumeration of maps and expansions for matrix integrals. And more
recently large random planar maps have been used as models of random
geometry, in particular in the setting of two-dimensional quantum gravity, okay?
And as I mentioned, there is recent work by Bertrand Duplantier and Scott
Sheffield which is related to this, except that they don't deal with planar maps;
they use a different approach involving the Gaussian free field, but it is expected
that at some point both approaches should be shown to be equivalent.
Okay. Of course, as I said in the beginning, there are motivations purely from
probability theory in some sense. You want to get a kind of analogue of
Brownian motion, but replacing paths by graphs. So a kind of purely Brownian
surface, okay.
Okay. There are other motivations from metric geometry and also motivations
from algebra and geometry. But, okay, I will not say more about that.
Okay. So let me come perhaps to the description of the main technical tool I'm
going to use. It relies on bijections between maps and trees. So I need
two different notions of a tree. The first notion is the notion of a planar tree or plane tree.
So here you can view such a tree as a genealogical tree for a population that starts
with an ancestor. I represented here the ancestor by the symbol empty set. And
then the individuals in this genealogical tree can be represented by words made
of positive integers in the obvious way, okay; for instance the children of the
ancestor are represented by the symbols 1, 2, and so on, and the children of 12 by
the symbols 121, 122, 123. So it's an obvious definition. So you can simply define
such a tree as a collection of these words of integers. And this collection has to
satisfy certain obvious rules. I don't state them.
So what is important is the notion of a planar tree. The fact is that, okay, first you
have a root, of course, the ancestor here, and you have an order. Okay, the
order is just part of the description of the tree, if you want, when you say that the
children of the root are 1, 2, and so on: 1 is the first child, 2 is the second child,
and so on. So you have a lexicographical order on the vertices.
Okay. So this is a very basic notion of a tree, of course, in combinatorics. Now a
slightly more complicated notion, what I call a well-labeled tree. It's going to be
a pair consisting of a planar tree first, and then a collection of labels assigned to
the vertices.
Okay. So the labels here are the numbers in red on my picture, and they have to
satisfy certain properties. The label of the root is equal to one. Labels are
positive integers. And when you move along an edge, the label can vary by
plus 1 or minus 1 or it can stay the same, okay? So in a sense if you have an
individual here with label 3, its neighbors can have label 2, 3, or 4, but these are the only
possible choices.
Okay. So why is this notion interesting? Because there is a nice bijection
between these objects, well-labeled trees, and quadrangulations. So I consider
the set capital TN of all well-labeled trees with N edges. And there is a bijection
from this set onto the set of rooted quadrangulations with N faces. So stated like
this, it just says that both sets have the same cardinality. But in fact you get more
than that from the bijection. So this is explained below. You
get the fact that if you start from a well-labeled tree you can construct the
associated quadrangulation capital M in such a way that the vertex set of the
map is the same as the vertex set of the tree, plus one extra vertex which I call
partial D here. So vertices of the tree become vertices of the graph.
And moreover, and this will be very important for us, the labels on the tree
become the graph distances from the root vertex in the planar map. Okay, so
you have this property here. Labels become graph distances when you go
from the tree to the graph. Okay?
So before I explain to you the bijection, let me mention that there are similar
bijections for much more general planar maps and for instance for triangulations
and so on. And these have been studied by various people but in particular by
Bouttier, Di Francesco, and Guitter, who are theoretical physicists, in fact.
But I will stick to the case of quadrangulations for explaining this bijection
because it is the simplest case. So here is an example. So you start from the
well-labeled tree, which is here, and you want to construct this quadrangulation
which is here. Okay? It's not yet constructed, but the vertices of the
quadrangulation will be the red vertices here. So they are of course the same, if you want,
as the vertices of the tree, plus one extra vertex which is here, okay?
So I start by adding this extra vertex partial D. And I assign to this extra vertex
the label zero by convention. Okay?
So now what I do, I follow the contour of the tree in the way indicated by the
arrows here, okay? And in this way I will encounter of course every vertex of the
tree, but more precisely I will encounter every corner associated with a vertex
of the tree. Okay? So you see for instance this vertex here has three
associated corners, one here, one here, and one here. And I will visit all three of
them during the contour. Okay? And the rule is as follows. When I do this
contour I start here from the bottom of the tree, from the root corner. What I do, I
connect the current vertex to the last visited vertex with smaller label, okay? So at the first
step of course I'm at the bottom of the tree. I have a vertex with label 1. Okay. I
have to connect it to a 0. There is only one vertex with label 0, which is this extra
vertex partial D here. So I draw this red edge here at the first step, okay?
But then what I'm going to do, I'm going to move this way, okay? So I arrive
here. I connect this vertex labeled 2 to the last visited vertex labeled 1, which is this
one. There is not much choice here, but later there will be some choice. Okay?
Then I arrive at this one. When I have a vertex with label 1, I connect it, in fact,
to the unique vertex labeled 0. Okay? So then I go down like this. So I visit
again this vertex labeled 2, but I visit a different corner now. Okay? So, in fact,
what I do, I connect it by an edge starting from this corner here to the last vertex
labeled 1, which is now the vertex here. Okay? Then, going downwards, arriving at this
vertex, the last vertex labeled 1 which I have visited is now this one, okay?
This is why I drew this edge here, okay?
Okay. And then, okay, the 3 I connect to the last 2, which is here. The 2 again I
connect to the last 1, which is here. The 1, I have no choice, to the 0. And you continue
this way until you have explored every corner of the tree, until you have
finished the contour.
And you can check that the graph that you have constructed via this
algorithm is indeed a quadrangulation. Okay? You can check that all faces have
degree 4. By convention you root this quadrangulation at the first edge constructed
in this algorithm, in such a way that the extra vertex partial D is the root vertex,
okay? So you will root the quadrangulation here.
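The edge-drawing rule just described can be sketched in code. The following is a rough illustration, not the speaker's notation: it takes the contour sequence of corners of a well-labeled tree together with their labels, and outputs the edge set of the quadrangulation, ignoring the planar embedding (the embedding question is addressed just below). The vertex names and the name "boundary" for the extra vertex are hypothetical.

    # Rough sketch of the edge-construction rule of the tree-to-quadrangulation
    # bijection: each corner is connected to the last visited corner with label
    # one less, and label-1 corners are connected to the extra vertex (label 0).
    # Inputs: corners[i] = vertex visited at step i of the contour of the tree,
    #         labels[i]  = its (positive) label.  Output: list of edges.
    def quadrangulation_edges(corners, labels):
        extra = "boundary"            # the extra vertex, labelled 0 by convention
        edges = []
        for i, (v, lab) in enumerate(zip(corners, labels)):
            if lab == 1:
                edges.append((v, extra))       # label 1 connects to the extra vertex
            else:
                j = i - 1                      # scan backwards for label lab - 1;
                while labels[j] != lab - 1:    # such a corner exists because labels
                    j -= 1                     # change by at most 1 along the contour
                edges.append((v, corners[j]))
        return edges

    # Toy example (hypothetical vertex names): the path a - b - c with labels
    # 1, 2, 1 has contour corners a, b, c, b; the output is the 4-cycle
    # a - b - c - boundary, a quadrangulation with two faces of degree 4.
    print(quadrangulation_edges(["a", "b", "c", "b"], [1, 2, 1, 2]))
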
And you can now verify, okay, at least on this example, that the red figures
here, which were the labels of the tree -- okay. Question?
>>: Yeah. How do you make the connections? How do you draw them in the
plane? You said which edges to draw but not the embedding in the plane.
>> Jean-Francois Le Gall: The tree is drawn in the plane.
>>: Right.
>> Jean-Francois Le Gall: The tree is drawn in the plane. What's important is
that -- I didn't want to give too many details, but since you ask me, what's
important is that each time you draw an edge -- I should maybe come back. Okay.
Maybe -- okay. What I'm doing here, I'm going down, I arrive at this vertex
here. I draw an edge of the graph from this vertex to the last one, which is here,
and this edge starts from the corner at which I have arrived, okay? So
I draw this edge here like this. And you can do that in such a way that edges do
not cross. That is a property one has to check, but it's not very difficult. And there is
essentially a unique way of doing that. So the planar map is uniquely defined.
So I don't want to give too many details. Does that answer your question? Okay.
So what I was saying is that the red figures here, which are the labels on the tree --
you can check that they now coincide with distances from the root vertex. This
vertex here had label 3, and 3 is also the distance from this vertex to the root
vertex, which is here. Okay? And this is true for any other vertex.
So now, just to summarize, our strategy will be first to understand continuous
limits of trees, which has already been done in some sense, in order to be able to
understand continuous limits of maps. But there is a difficult point, which is that,
as I explained before, what you will understand in the graph, in the planar map,
is distances from one specified vertex, which is the root vertex,
because they coincide with the labels, okay?
But of course this is not sufficient if you want to get the convergence of metric
spaces. You should be able to understand distances between any two vertices,
not only distances from one specified vertex. Okay. So let me explain the
asymptotics for trees I'm going to need. So this is a theorem which is basically
due to David Aldous. So you look at the set of all planar trees with N edges and
you pick one uniformly at random in this set. I call it tau N. So again you can
view tau N as a metric space for the graph distance, okay -- the same kind of idea
that I had before, but now I apply this idea to a tree. Okay?
So you rescale the graph distance, basically by a factor one over the square
root of N. This converges in distribution, in the Gromov-Hausdorff sense as
before, to a certain limiting random compact metric space, which is, in fact,
the so-called CRT, the Continuum Random Tree. Okay?
So this notation, T sub e, d sub e, comes from the fact that you can view the CRT --
and this is perhaps the best way of looking at the CRT -- you can view it as a tree
coded by a normalized Brownian excursion. Okay?
So let me explain that. Perhaps first, to give a definition of the CRT, I should
introduce the notion of a real tree, okay? So what is a real tree? Well, the formal
definition is here. It's a compact metric space such that between any two points A and
B you have a unique arc, meaning that there is a unique way of going from A to B in
a continuous injective manner, of course up to reparameterization. Okay?
There is a unique path from A to B, if you want, up to reparameterization, if you
require your path to be one-to-one and continuous of course. And moreover, if
you choose the right parameterization, your path is isometric to a line segment. Okay?
So here is an example of a real tree: a finite union of line segments, just as in the
picture which is there. Of course this union has to be connected. It has to be a
tree in the sense that you have no loops. It has to have the shape of a tree if
you want.
And then, if you look at such a union of line segments and you equip it with
the appropriate distance -- the distance between A and B here has to be, for
instance, the length of the red path which is here -- then you get a real tree.
Okay? So really any compact real tree -- I'm going to look only, in fact, at
compact real trees -- any compact real tree can be obtained as a limit, an
increasing limit if you want, of finite unions of segments such as in this picture, of
course. But of course it can be much more complicated than this. You can have
infinitely many branching points and you can have even uncountably
many leaves. But basically you should think of a real tree as an object like this.
Okay?
So in the discrete setting it's well known that you can code the planar trees which
I introduced before by Dyck paths or contour functions. And in fact you can do
the same for real trees. Okay? So this is the next slide.
You can code a real tree -- in fact you can code any compact real tree -- by a
function in the following way. So here I start from a function, okay, the
function g, which is defined on the interval 0, 1, continuous of course,
and which starts and finishes at 0; so this is the red function here. And you can associate
with this function a real tree. So perhaps I should give the intuition behind the
construction. You imagine that the red graph here is a strip of paper and
you put some glue below this strip of paper, so below the graph you put glue
everywhere, and then you imagine that you squeeze the graph like this. You
push the right side onto the left side, if you want. And so what's going to
happen if you do this operation? What's important when you do this
squeezing is that you keep the vertical distances, okay. It's very important that you
don't change the vertical distances.
So what's going to happen is that two points which are at the same level below the
curve, like this point and this point on your strip of paper, are going to be
glued together. Okay? And if you do this gluing it's not hard to imagine that you
get a tree.
In this example it would be a very simple tree, in fact, okay? But the precise
mathematical definition is there. What you can do, you can define a pseudo-distance
on the interval 0, 1: D G of S, T is just G of S plus G of T minus twice
the minimum of G between S and T. Well, it is not a real distance,
only a pseudo-distance, so you can have S different from T with D G of
S, T equal to 0. But what you do is identify T and T prime if D G of T, T
prime is equal to 0. This is just the same as saying that G of T is equal to G of T
prime and is equal to the minimum of G between T and T prime. Okay?
So then, as usual, what you do is you take the quotient space of 0, 1 for this
relation. You equip it with D G, which is now a distance. And you get a real tree,
okay?
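Written as a formula, the construction just described is: for a continuous function g : [0,1] \to [0,\infty) with g(0) = g(1) = 0, set

    d_g(s,t) = g(s) + g(t) - 2 \min_{r \in [s \wedge t, \, s \vee t]} g(r),

identify s and t whenever d_g(s,t) = 0, and let T_g be the quotient space [0,1] / \{d_g = 0\}, equipped with the induced distance and rooted at the class of 0.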
Something which is also important: you can have a convention for rooting
this real tree, you root it at the equivalence class of 0, okay? Something which is
important in this construction is that you also get -- using this construction you
also get a lexicographical order on your tree. You will say that the vertex which
is the equivalence class of S comes before the vertex which is the equivalence class
of T if S is smaller than T.
The same lexicographical order we had for discrete trees before, you also get it
for real trees, in fact, via this coding. Okay.
So now this is just a restatement of Aldous' theorem. In the limit what
you get is the random real tree coded by the Brownian excursion of length 1. And this
is the CRT. Okay? So informally you should imagine that you have your tree, you
define what it means to make the contour of your tree like this, and if you
record the distance from the root in this evolution you get the
Brownian excursion. This is the intuitive idea.
Okay. So this was just about limits of trees. Now I have to speak about limits of labels,
okay? Remember, in the discrete setting we had labels assigned to the vertices
of our plane trees. Okay? So I want to do the same now for my continuous
trees. Okay. So of course this CRT here is a tree coded by a random
function, so it's a random real tree. So I start by considering a
deterministic real tree here, capital T with a distance d.
So something you can do very easily is just to look at Brownian motion indexed by
the tree, okay? So this is -- well, it's the standard definition, if you want, of
Brownian motion indexed by the line, except that you replace the line by a real
tree. But otherwise it is exactly the same. So what does it mean? It just means that
you run independent Brownian motions along the branches of your tree.
Labels evolve like Brownian motion when you move along the tree. Okay?
But of course if you look at two different line segments, disjoint line segments on
your tree, you have to use independent Brownian motions to describe the
evolution of labels. Okay? So you can do that for any real tree. If you
have some [inaudible] conditions on your real tree you can prove that there is a
continuous modification of your Brownian motion indexed by the tree. Okay?
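Concretely, Brownian motion indexed by a rooted compact real tree (T, d) with root \rho can be characterized as the centered Gaussian process (Z_a)_{a \in T} with

    Z_\rho = 0, \qquad E\big[ (Z_a - Z_b)^2 \big] = d(a, b) \quad \text{for all } a, b \in T,

which is the covariance description of running independent Brownian motions along the branches.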
So this is similar to what we had in the discrete setting. Remember, in the
discrete setting the labels could change by plus 1, minus 1,
or zero along each edge. So it was a kind of tree-indexed random walk. So what
I'm doing in the continuous setting, I look at tree-indexed Brownian motion. Okay.
It is the same, except that we have lost the positivity constraint, okay? If I look at
Brownian motion indexed by the tree in this way, of course it's a Gaussian
process, it's not going to be positive; it starts from 0 at the root. Okay.
So we have to do something else to take into account this positivity constraint,
and it turns out that there is something very simple we can do. The first idea
would be just to condition the labels to be non-negative. Okay? And in fact it
works. But there is an even simpler construction which I am going to explain in
the next slide. So I will state, in fact, the scaling limit for well-labeled trees. So
just to remind you, a well-labeled tree was a plane tree with red labels
which were positive and satisfied the rules I explained before.
Okay. So we take one well-labeled tree with N edges uniformly at random, say the
tree theta N with the labels L of v. So we scale the tree by 1 over the square root
of N, just as in Aldous' theorem of course, not a surprise. And we scale the
labels by 1 over the square root of the square root of N. Okay. So why is it so?
Remember what I just said? In some sense the labels evolve a little bit like a
tree-indexed random walk, if you forget about the positivity constraint, okay. So
since the height of the tree is of order the square root of N,
typically, if you look at the value of a label, it will correspond to the value of a
random walk at a time of order the square root of N. Okay? And a random walk --
well, a centered random walk of course -- at time square root of N is of order the
square root of the square root of N. So this is where
we get this N to the power one-fourth, which is very important. Okay?
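Spelled out, the little computation is

    \text{height of the tree} \approx \sqrt{N}, \qquad \text{typical label} \approx \sqrt{\sqrt{N}} = N^{1/4},

since a label behaves like the value of a centered random walk after a number of steps of the order of the height; this is why the labels are rescaled by N^{-1/4}.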
Okay. So now if we do this rescaling we can find explicitly the form of the scaling
limit. And what we get as the limit for the trees is the CRT -- well, not exactly, okay?
I will explain that in a while.
And for the labels, we don't get Brownian motion indexed by the CRT, but we get
a process which I call Z bar. So this Brownian motion Z was defined on the
previous slide for a deterministic tree, but you can take the CRT of course. So
what you get is this process conditioned to stay positive, and there is a simple way of
doing this conditioning.
What you do, you define Z bar as Z minus its minimum value. So in this way, by
construction, it becomes non-negative. I said that the limit was not exactly the
CRT. It is a CRT, but you have to change the root, okay? You have to re-root
the CRT at the vertex, which I call rho star, which minimizes the labels. Simply
because, in the continuous limit, you also want the root to have a
minimal label, which was true in the discrete setting, okay? So you have to
change the root of the CRT. Okay. But otherwise it's the scaling limit.
This already gives you a lot of information about distances in a random planar
map, okay, because remember that these labels corresponded to distances from
the root vertex. Okay. So here is a theorem, for instance, which was proved by
Chassaing and Schaeffer, saying that if you look at the maximal distance from the
root in a random quadrangulation with N faces, you can rescale it by N to the
power minus one-fourth. So if you use the bijection between quadrangulations
and trees, the maximal distance from the root corresponds to the maximal label.
Okay? So this one-fourth here of course is the same as the
one-fourth we were using to rescale the labels. Okay?
And you get a more or less explicit form for the limiting distribution in terms of
Brownian motion indexed by the CRT. It's possible, in fact, to compute, for
instance, a kind of Laplace transform, to have
some information about the limiting distribution. Okay?
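Schematically, and without trying to reproduce the exact constant from the slide, the Chassaing-Schaeffer statement has the form

    N^{-1/4} \, \max_{v \in V(M_N)} d_{gr}(\partial, v) \;\longrightarrow\; c \, \Big( \sup_{a} Z_a - \inf_{a} Z_a \Big) \quad \text{in distribution},

for some explicit constant c > 0, where Z is Brownian motion indexed by the CRT; the right-hand side is just the maximal label, in agreement with the bijection.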
So, in connection with universality, this result has been extended to
much more general planar maps, including triangulations and, in fact, random
planar maps where faces do not all have the same degree, they can have
different degrees -- in particular by Gregory Miermont, among others.
Okay. So in the next section I will come back to the program I started from, the
program of the scaling limit of random planar maps. Okay? So this is just to
remind you of the notation. So I look at all rooted 2P-angulations with N faces.
It's very important that I take here only even integers, 2P, okay? This is the bipartite
case, okay, so it does not include triangulations, although it is very likely that the
results I'm going to state also hold for triangulations. Okay?
So I pick one uniformly at random in this set. I look at its vertex set. I rescale the
graph distance by one over N to the one-fourth. So CP is just a constant
depending on P. And we get the convergence in distribution towards a certain
random compact metric space in the sense of the Gromov-Hausdorff distance. But
unfortunately it's only a sequential limit, okay? There is a compactness argument
there. You get this limit at least along a subsequence. Okay. I will come back to
that in a while.
Still, you can describe the possible limits in a fairly explicit way. You can show
that the limiting space M infinity, which I'm going to call the Brownian map
later, is a quotient space of the CRT, T sub e. Okay? And so here I have another
equivalence relation. This equivalence relation on the CRT is defined in terms of
Brownian labels on the CRT, or Brownian motion indexed by the CRT.
So I use the same notation as before. Z is Brownian motion indexed by the CRT.
Z bar is Z minus its minimal value. And now I define the equivalence relation by
saying that two vertices A and B are equivalent if and only if they have the same
label and if between A and B the labels are at least as large.
This is in fact similar to the equivalence relation we used to define the coding of
trees by functions, okay? But here it's in a different setting. So I should say what
it means here to say that C is between A and B, that C belongs to the interval A, B.
It makes sense because, as I said before, we also have a notion of
lexicographical order on the tree, okay? So we can make sense of this interval
here. Okay.
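In symbols, writing [a, b] for the lexicographic interval from a to b on the CRT, the equivalence relation is

    a \approx b \quad \Longleftrightarrow \quad Z_a = Z_b = \min_{c \in [a,b]} Z_c \ \text{ or } \ Z_a = Z_b = \min_{c \in [b,a]} Z_c,

and M_\infty is the quotient of the CRT by this relation.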
So, now, this M infinity is completely defined: the CRT quotiented by this
equivalence relation. Now, what is capital D? Well, of course D is a
distance on M infinity, and one can prove that it induces the quotient topology. We can
prove several bounds on capital D, upper and lower bounds, and we can also prove that
the distance between any vertex A and the root rho star is the label of A. So it's
just similar to what we had in the discrete setting. But we cannot -- I cannot
completely identify capital D. Okay? That's still an open problem. Okay? But
we have some information.
Okay. So before I discuss this open problem, let me explain to you perhaps why
this equivalence relation comes up, okay? So you have to
remember the discrete setting, okay? In the discrete setting, when did we
draw such an edge between two vertices U and V? We drew such an edge when
U was the last visited vertex before V with smaller label. Okay? So this means that
in the interval between U and V, all the labels of these red vertices here
are at least as large as the label of V.
You imagine that you go backwards on the tree, and the first time you meet a vertex
with a smaller label is this U here. Okay? So this means that we have this
property, if you prefer. And now the definition of the equivalence relation in the
continuous limit is just the analog of this property in the discrete setting. Okay?
So what's happening, if you want, when we pass to the scaling limit is
that from time to time we will have this property between two vertices U and V which
are very far away from each other. They are then connected by an edge in the
quadrangulation or in the planar map, and because distances are rescaled by a
factor tending to 0, this means that in the scaling limit they have to be identified.
So this is the reason for this identification.
Of course what's difficult to prove is the converse, that you don't identify more
than this. And in a sense you don't identify so many points. The typical equivalence
class is a singleton, okay, and you have only countably many equivalence
classes with [inaudible] points in fact, okay? But the typical case is a singleton
because you don't identify much.
Okay. So as I said before, you don't really identify the limit, but you identify its
topology: it is just the quotient topology of this quotient. And the open problem is
to identify capital D. If one is able to identify capital D, this would imply that you
don't need to take subsequences, okay? And of course you would
also want to prove that capital D does not depend on P, that you have this
universality property: the same limit for triangulations, quadrangulations, and so on.
Okay. Although the limiting space is not completely identified, the space M
infinity D, you can prove a lot about it, okay. So this space is called the Brownian
map. The name was given by Marckert and Mokkadem who had a different
approach to the same object, in fact not dealing with a Gromov-Hausdorff
convergence.
And here are two theorems you can prove about the Brownian map. You can
prove that the Hausdorff dimension is equal to four almost surely. And you can
prove that almost surely it is homeomorphic to the 2-sphere. Okay?
So in a sense it is not totally surprising, because you start from graphs drawn on
the sphere and you put more and more vertices, more and more edges, and in the
limit you get something which is homeomorphic to the sphere, okay? Not a big
surprise, maybe, but still it's not obvious, because you could imagine that in your
random graphs you could have bottlenecks like this: cycles such that both
sides of the cycle have a macroscopic size, but such
that the length of the cycle is small in comparison with the diameter of the graph,
okay? And what is proved, in fact, is that this does not occur for large
planar maps.
Okay. So in the last five or seven minutes I want to talk about more recent
results concerning geodesics in the Brownian map. And maybe I can start by
describing what happens in the discrete setting. Okay? There it's very easy to
understand how to construct geodesics, well, at least geodesics to the
root vertex, from any vertex to the root vertex, okay?
So you use the bijection which I explained in the case of quadrangulations. But
there are more general bijections. Okay?
So what you do, you start from V, and as I explained before you go backwards in the
tree, you look for the last visited vertex before V with a smaller label, with label
LV minus 1. Okay? So for instance in this example it's going to be V prime,
okay? And then you start from V prime and you do the same. You go
backwards in the tree like this. And this again is the last visited vertex before V
prime with smaller label. Okay? And so on.
And if you do that, remember that the label corresponds to the distance from the
root vertex. Each time you decrease by one the distance from the root vertex.
So in the end of course you will reach the root vertex and you will have a
geodesic.
Of course geodesics are not unique, and you can see that they are not unique
because when you arrive at a vertex like this, which is not a leaf of the tree,
you can cross the tree in some sense: you can decide to continue this
way or you can, as I did on the picture, explore what is on the other side.
And this is why you don't have uniqueness of geodesics.
But this is the way you get geodesics from the tree, if you want. Well, now you
can do exactly the same construction in the continuous setting. Okay? Now you
have this CRT. It's important that you have the lexicographical order on it. And
then you start from any vertex A here. So it has a certain label, this Brownian
label Z bar A, which coincides with the distance from the root. And you go
backwards in the tree, and for every T between 0 and Z bar A, you look at the first
vertex that you meet going backwards from A on the tree which has label Z bar A
minus T. So it's exactly the same as in the discrete setting, okay? But of course
now it's a continuous curve, okay? It's indexed by T in the interval between zero
and Z bar A, which is the distance between A and the root.
But essentially, by the same formula as in the discrete case, you get a geodesic,
and it's called a simple geodesic. Okay.
Now there is a nice fact which is the key point: the fact that, except for the starting
point, simple geodesics visit only leaves of the CRT, okay? So remember
what I explained in the discrete setting. I explained that the non-uniqueness of
geodesics was linked to the fact that, arriving at a vertex which was not a leaf, you had
several choices. Okay? And here, because you visit only leaves, at least
informally, you can guess, at least for simple geodesics, that you will have a
kind of uniqueness property of geodesics, okay? And this is what I will explain
now.
If you start from a leaf of the CRT you get a unique geodesic -- well, a unique simple
geodesic at least. If you start from a point which is not a leaf, like this one,
you can start either from the left or from the right. This makes sense, okay
-- I should define it in a more proper way, but you can construct two distinct
simple geodesics, and even three simple geodesics when you have a branching
point, okay?
So I'm using the fact that you cannot get more, because three is the maximum
multiplicity. Okay? So the key result now is the fact that in this way you get all
geodesics to the root, okay? So here is the main result about geodesics. So I
need to define the skeleton of the CRT. The skeleton is just the CRT minus its
leaves. It turns out that if you look at the projection, the canonical projection
here from the CRT onto the Brownian map -- remember the Brownian map was
this quotient -- this projection restricted to the skeleton is a homeomorphism, okay?
This means that you are not identifying any point of the skeleton with a different
point, okay? If you want, the only points that can be identified in this quotient are
leaves of the CRT.
So you can prove that this -- okay, I call Skel the image of the skeleton under this
projection; we can prove that Skel has dimension 2. So in a sense it's a
very small subset of the Brownian map. Recall that the Brownian map M infinity
has dimension 4. Okay? So this Skel is a kind of, okay, dense tree embedded
in the Brownian map, okay?
So why is it important? Well, one can prove that this is what in Riemannian
geometry one would call the cut locus of the Brownian map with respect to the root.
It is the set of points where you don't have a unique geodesic to the root. And, in
fact, what you can prove is that if you take a point X in Skel, the number of distinct
geodesics from X to the root is exactly the multiplicity of the point in the skeleton,
okay? Remember that Skel is homeomorphic to the skeleton, so the multiplicity
just makes sense as multiplicity in the tree. Okay?
So this result is very analogous to a classical result of Riemannian geometry
which goes back to Poincare and others: the cut locus is always a tree for a surface
which is homeomorphic to the sphere.
I should mention that the root does not play any special role here.
The same result holds if you replace the root by a typical point in the
Brownian map. Okay? I think I have one minute.
So another thing you can deduce from this result is a confluence property of
geodesics. If you have two points X and Y, suppose that X and Y
are outside a big ball, a ball of radius delta around the root, and you take any
geodesic from X to the root and any geodesic from Y to the root; they have to merge
at some point, say before hitting a smaller ball around the root, okay?
So in some sense there is just one germ of geodesics starting from the root, and
the same holds for a typical point, okay?
So maybe I will go very fast now because my time is over.
You can apply all these results also to uniqueness of geodesics in discrete
planar maps, in discrete graphs, okay? Of course there you don't get
uniqueness, but you get a kind of macroscopic uniqueness. So this corollary tells
you, for instance, that if you take a point uniformly at random in the vertex set of
your graph, of course you will not have unique geodesics, but any two geodesics
will be close to each other, within a distance which is small in comparison with the
diameter of the graph. Okay? So you have this kind of macroscopic uniqueness.
And you can also study exceptional points where you don't have
macroscopic uniqueness. You can show that you have at most three
macroscopically different geodesics from exceptional vertices in your planar map,
okay? Okay. So I think I should stop now. Thank you.
[applause].
>>: Questions or comments? Let's thank the speaker again.
[applause].
>>: So it's a pleasure to introduce the speaker from the north this year. We have
Gordon Slade from UBC, and he's going to be talking about weakly self-avoiding
walks in four dimensions.
>> Gordon Slade: Thank you very much. And I'm very happy to have this
opportunity to talk to you today about this recent and ongoing work with David
Brydges, my colleague at UBC, about 4-dimensional self-avoiding walks.
So let me start just by reminding you a little bit about ordinary self-avoiding walks.
Think about the discrete-time model first, although I'll want to go to continuous
time shortly. So SN of X is the set of all self-avoiding walks in zed D of length N
that start at 0 and end at X and take nearest neighbor steps; that's the
Euclidean distance. And this condition is the condition that the walk not intersect
itself, so that's what makes it into a self-avoiding walk.
So SN of X is the set of all N-step self-avoiding walks that start at 0 and end at X,
and SN is the set of N-step self-avoiding walks that start at 0 and end anywhere.
We'll be interested in the cardinality of these sets. So CN of X is the number of
N-step self-avoiding walks from 0 to X. CN is the number of N-step self-avoiding
walks that start at 0 and end anywhere. And there's an easy subadditivity
argument that tells you that CN is growing exponentially, in the sense that the Nth
root of CN converges to a limit mu, which is called the connective constant. And
the measure that I want to put on the set of self-avoiding walks of length N is just
the uniform measure.
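To spell out the subadditivity remark: concatenating an N-step walk with an M-step walk gives the submultiplicative bound

    c_{N+M} \le c_N \, c_M,

so by Fekete's lemma applied to \log c_N the limit

    \mu = \lim_{N \to \infty} c_N^{1/N} = \inf_{N} c_N^{1/N}

exists; this \mu is the connective constant.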
So look at the uniform measure on SN. So that means that each self-avoiding
walk has probability one over the cardinality of SN which is CN. Another
interesting quantity is the two-point function. You know, when you have a
combinatorial problem like this, it's often useful to go to generating functions. So
the generating function is this power series with coefficients CN of X. I'll call that
G sub zed of X. And this, for every X, turns out to have radius of convergence zed
C, which is one over mu. So CN of X is growing like mu to the N for all X, not
just CN.
Now, what I'm interested in is critical exponents. And these are concerned with
the asymptotic behavior of various quantities that you could ask about in this
problem. Like: CN grows like mu to the N -- we know that -- but what about
corrections to that leading behavior? There's a critical exponent gamma here
which is predicted to exist and has been measured numerically in all dimensions.
And the mean square displacement -- so this is, with respect to that uniform
measure on SN, the expected end-to-end distance squared -- should grow like N
to a power which is called 2 nu. And if we look at the two-point function right at the
critical value zed C, this one over mu, which is the radius of convergence, then it
should be finite and decay as X goes to infinity according to a power, which is
written 1 over X to the power D minus 2 plus eta. So this gamma,
nu, and eta are examples of critical exponents. They're not independent, at
least that's the prediction. There's a relation called Fisher's relation, which comes
from physics arguments, that tells you that they're related by that equation.
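Written out, the predicted asymptotic relations being referred to are

    c_N \sim A \, \mu^N N^{\gamma - 1}, \qquad
    \langle |\omega(N)|^2 \rangle \sim D \, N^{2\nu}, \qquad
    G_{z_c}(x) \sim \frac{\text{const}}{|x|^{d - 2 + \eta}} \quad (|x| \to \infty),

together with Fisher's relation \gamma = (2 - \eta)\nu.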
So I'll show you a picture. This is a random self-avoiding walk on the square
lattice that takes a million steps. This is a figure by Tom Kennedy. And what you
can see is that it doesn't look anything at all like a Brownian motion. It's not what an
ordinary random walk path would look like. An ordinary random walk path would
look like a plate of spaghetti, and this doesn't look like that at all. So it's different.
But in high dimensions they look the same. So you can come up with a rough
argument that tells you that in more than four dimensions self-avoiding walks
should look like Brownian motion like this. There's more than one way to do it.
But maybe the easiest is to say that random walk paths are two dimensional and
two two dimensional objects don't want to intersect generically in more than four
dimensions and so if you tell a random walk not to intersect itself, it won't care in
more than four dimensions and it will just be a Brownian motion.
So that is hard to prove. But there are some theorems many people have worked
on over the years that say that CN is growing purely exponentially to leading
order, that the mean square displacement is linear in the number of steps, that the
critical two-point function has the same behavior as the Green function for
random walk, one over X to the D minus two, and that you have convergence in
distribution to Brownian motion.
So these are old results. And what I want to talk about today is what's happening
in four dimensions. These results were proved by lace expansion techniques,
which do not extend to four dimensions. They cannot be applied.
Now, the prediction from physics is that the upper critical dimension is four -- well,
that matches the little argument that I just gave you -- and that the asymptotic
behavior in four dimensions has log corrections to these relations here,
especially these two: CN should grow like mu to the N with a log
correction, not an N to a power but log N to a power, and the mean square
displacement should be a little bit bigger than N, but not that much bigger, just by a
power of a logarithm.
And, well, for the critical two-point function actually there's no logarithmic
correction; that's what the prediction is. This goes back to non-rigorous
renormalization group methods from almost 40 years ago. And these
logs appear also in the susceptibility, which is just the generating function for the
sequence CN. It will diverge as zed approaches zed C from below, linearly but
with a log correction. And the correlation length, which is related to the
exponential rate of decay of the subcritical two-point function -- this E1 here is the
unit vector in the first coordinate direction, so G zed at N E1 behaves like E to the
minus N over the correlation length for zed less than zed C; you have exponential
decay if you're below the critical point.
That correlation length will diverge as zed approaches zed C from
below, with a square-root divergence and then also a log correction. So one
would like to try to prove these things.
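For the record, the standard four-dimensional predictions being alluded to (the exponents 1/4 and 1/8 are quoted here from the physics literature rather than read off the slide) are

    c_N \sim A \, \mu^N (\log N)^{1/4}, \qquad
    \langle |\omega(N)|^2 \rangle \sim D \, N (\log N)^{1/4},

    \chi(z) \sim C \, (z_c - z)^{-1} \, |\log(z_c - z)|^{1/4}, \qquad
    \xi(z) \sim \text{const} \, (z_c - z)^{-1/2} \, |\log(z_c - z)|^{1/8},

with no logarithmic correction for the critical two-point function.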
We have been looking at a modification of the self-avoiding walk that I've
just been talking about, called the continuous-time weakly self-avoiding walk. So
I'm going to describe it just in high dimensions here, which is what I'm interested
in.
So you start with a continuous-time simple random walk. So what that is: it's a
process that instead of moving at integer times moves at random times which
are separated by independent exponentials. So there are these exponential
holding times for how long you stay at a place until you make a jump, and when
you do make a jump you choose uniformly from your 2D neighbors and move
to one of them.
So that's what this E0 is. It's the expectation for that continuous-time nearest
neighbor simple random walk with exponential holding times. And then we
introduce the local time of that process at a point U in zed D, which is just the total
amount of time spent by the process at U up to time T.
And then we form this intersection local time I of T, which is the L2 norm squared
of this local time at U.
Now, if you write out this L U T as an integral and do it twice, because it's squared,
you have a double integral, and you can actually do the sum over U to eliminate
one of the delta functions -- these are just Kronecker deltas. And you'll end
up with this expression here for the intersection local time. And you can see
what this does: it's measuring the amount of time that the walk spends at the
same place. And so it's a measure of how much intersection is actually
occurring.
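Explicitly, with L^u_T the local time at u,

    L^u_T = \int_0^T \mathbf{1}\{X_s = u\} \, ds, \qquad
    I(T) = \sum_{u \in \mathbb{Z}^d} \big( L^u_T \big)^2 = \int_0^T \!\! \int_0^T \mathbf{1}\{X_s = X_t\} \, ds \, dt,

which is the double-integral expression being described.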
Now we'll define the two-point function of this continuous-time weakly
self-avoiding walk in the following way. First of all, there's a parameter G, positive,
which gives the strength of the self-avoidance; we'll be looking at Gs which are
quite close to zero later on. And it's given by this expression. So what this does
-- let's look at the integrand here. It's taking the expectation with respect to
this continuous-time simple random walk started at 0. We're forcing the walk to
be at X at time T. And then we weight the walk with E to the minus G times this
intersection local time. So the more intersection that takes place, the bigger this
exponent is going to be in the negative sense, and so the less weight the
path will have.
Then we want to sum over T. That's like summing over N when we were
discussing the two-point function for the discrete-time self-avoiding walk. So now
we have to integrate over T, and where we used to have a zed to the N, now there's
an E to the minus nu times T, which is playing that role. So this is the quantity that I
want to be studying. This is the two-point function.
Well, you can apply a subadditivity argument to this expectation here as well to
see that the susceptibility, which is just defined to be the sum over all X in
zed D of this two-point function, will be finite if nu is large enough so that you
have enough exponential damping here, but it will be infinite if nu gets too small.
We want to study what happens right at the critical value.
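In symbols, the two-point function and susceptibility just introduced are

    G_{g,\nu}(x) = \int_0^\infty E_0\Big[ e^{-g I(T)} \, \mathbf{1}\{X(T) = x\} \Big] \, e^{-\nu T} \, dT, \qquad
    \chi(g,\nu) = \sum_{x \in \mathbb{Z}^d} G_{g,\nu}(x),

with \chi finite for \nu large and infinite for \nu small; the critical value \nu_c = \nu_c(g) separates the two regimes.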
And here's the main result. So actually the title said D equals 4, but the method
works more easily when D is greater than 4, so I'm including that here in the
statement. There exists a G nought, which depends on D, such that if G is between
0 and this G nought, then the critical two-point function with that positive value of G
will be asymptotic to a constant over X to the D minus 2 as X goes to infinity, with
some higher-order correction.
In particular, there is -- there's no logarithmic correction in four dimensions here.
So I want to talk a little bit about the method of proof that we use. But before
doing that, I want to say that we think that this method has potential to do
some other things that we're actually actively working on right now. One of them
is to prove the logarithmic corrections for the susceptibility and the
correlation length in the case of D equals four. We haven't achieved any of these
three bullets yet, but they're things that we're interested in working on.
And another one is to prove the same result also with a small nearest neighbor
attraction. So this is a model which is used in modeling polymer collapse, a
polymer in a poor solution. So a polymer in a good solution is modeled by a
self-avoiding walk. But a polymer in a poor solution -- the poor solution
refers to the fact that the polymer cannot stand to be in contact with the solution,
and so its only other option is to be in contact with itself instead.
And so there's some competition between the self-avoidance, which tends to
push the walk apart, make it bigger, and the self-attraction, which is making it want to
be next to itself. And that's a very hard problem actually that has not been solved
very much. And what Roland Bauerschmidt, who is a student at UBC, has
noticed is that the techniques that we use in the proof here can include the case
of nearest neighbor attraction, and that's something that is currently being
investigated.
Also there's a model of self-avoiding walks, strictly self-avoiding walk in discrete
time, with a particular weight associated to each step, and it has long-range steps.
I think I won't talk more about that, but this is something else that we're working
on as well.
The methods that we use are renormalization group methods, rigorous
renormalization group methods that have been developed in the course of work
that David Brydges has been involved in for about 20 years, including a paper
with Evans and Imbrie in 1992 and one with Imbrie in 2003 that solved this kind of
problem and other problems, including the end-to-end distance problem with
those logarithmic corrections that we saw earlier, for the case of a 4-dimensional
hierarchical lattice.
Now, I don't want to say what a hierarchical lattice is here, but it's a kind of
replacement of the hypercubic lattice by something with a sort of tree-like
recursive structure that makes it more easily adaptable to renormalization group
analysis. And so those were important precursors for what we've
been doing.
There's also a totally different approach using different renormalization group
methods by Hara, Takashi Hara, and his student Ohno. All right.
>>: [inaudible].
>> Gordon Slade: Delta's the Laplacian on the lattice. I'll define it in a moment.
All right. So now I want to talk a little bit about the methods that enter into the
proof of the theorem. So we'll fix a G greater than 0 and we'll usually drop it from
the notation. And we need to make a finite volume approximation. And actually
it's quite easy, using a standard technique called the Simon-Lieb inequality, which may
be familiar to people from percolation or the Ising model or other places. This has
been around for a long time now, and it allows you to show that this critical two-point
function on zed D can be written actually as the limit of a two-point function which
is subcritical -- nu greater than nu C corresponds to subcritical here because it's E
to the minus nu.
So there are two limits that take place here. There's an infinite volume limit and
then there's a limit as nu approaches nu C.
So what is this finite volume problem here? This is doing the same two-point
function on a torus. So R is some number which will be going off to infinity here,
an integer. And lambda is the torus of side length R. And what this two-point
function is, it's the same thing, it's the weakly self-avoiding walk two-point
function on the torus. So this is the continuous time simple random walk on
lambda. That's what this expectation is. And this is the self-intersection local
time on lambda. So we only sum over the vertices on lambda here. Those are
the only vertices that exist as far as this finite volume model is concerned.
So I wanted to take this for granted and say that in order to study this critical
two-point function on zed D, it's enough to study it on a finite torus and work a
little bit subcritical provided that we're able to do it with sufficient uniformity both
in the volume and in nu that we can take these limits.
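In symbols, and hedging a bit on the exact names used on the slide, the reduction is:
\[
G_{\nu_c}(x) \;=\; \lim_{\nu \downarrow \nu_c}\; \lim_{R \to \infty}\; G_{\nu,\Lambda_R}(x),
\]
where Lambda_R is the discrete torus of side R and G_{\nu,\Lambda_R} is the weakly self-avoiding walk two-point function on that torus.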
Now, there's another formula for the two-point function. So this is a theorem that
this finite volume two-point function can be written as a certain integral. And in
order to say what that integral is, I need to make several definitions which I think
may look unfamiliar to some of you.
So phi here is a complex field on lambda. So lambda's the torus and phi is like a
spin, phi of X is like a spin sitting at X, and it's a complex spin. So it's not like an
Ising spin which is plus or minus one. It's a complex field associated to the
points in lambda. And phi bar -- so if phi is U plus iV at X, then phi bar is just the
complex conjugate.
This delta is the discrete Laplacian on lambda. So it just compares the value of
phi at X with the values at its neighbors. Usual definition of the Laplacian
there.
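As a concrete sketch (my own illustration, not from the talk; the function name and the example field are hypothetical), the discrete Laplacian on a periodic torus can be written in a few lines:

    import numpy as np

    def lattice_laplacian(phi):
        # (Delta phi)(x) = sum over nearest neighbours y of x of (phi(y) - phi(x)),
        # with periodic boundary conditions, i.e. on a torus.
        out = -2 * phi.ndim * phi
        for axis in range(phi.ndim):
            out = out + np.roll(phi, 1, axis=axis) + np.roll(phi, -1, axis=axis)
        return out

    # Example: a complex field on a two-dimensional torus of side 8.
    rng = np.random.default_rng(0)
    phi = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
    print(lattice_laplacian(phi).shape)  # (8, 8)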
And then we need to go to differential forms actually. So associated to this phi
X which is sitting at a point X, we think of that as a complex variable and
associate to it a differential D phi X. And there's also a differential D phi X bar
and there's some scalar which ends up going in front of them which is just there
so it doesn't show up later.
And then we define this differential form which has a 0 form, just a function here,
phi X, phi X bar, and then there's this two form, psi X, psi X bar, which is
multiplied together using the usual wedge product, and the wedge product is just
a way of multiplying together differentials, and it's anti-symmetric -- anti-commutative,
rather. So that's the important feature of this product: it's anti-commutative, so it
matters what order you write things in; if you change the order of two adjacent
differentials then you change the sign.
And if you want to write this in terms of the real coordinates then you can write it
this way. So in particular psi wedge psi is equal to zero because it's
anti-commuting.
So we have to define this tau X, which is this form. And then there's tau delta X
which is another form which is kind of like tau except the Laplacian, minus the
Laplacian has been inserted on the four possible factors where it could be
inserted here.
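Written out, and hedging on the exact normalizing scalar (which, as mentioned, is chosen so that it never shows up later), the standard choices in this framework are roughly:
\[
\psi_x = \tfrac{1}{\sqrt{2\pi i}}\, d\phi_x,\qquad
\bar\psi_x = \tfrac{1}{\sqrt{2\pi i}}\, d\bar\phi_x,\qquad
\tau_x = \phi_x \bar\phi_x + \psi_x \wedge \bar\psi_x,
\]
\[
\tau_{\Delta,x} = \tfrac12\Big( \phi_x\,(-\Delta\bar\phi)_x + (-\Delta\phi)_x\,\bar\phi_x
+ \psi_x \wedge (-\Delta\bar\psi)_x + (-\Delta\psi)_x \wedge \bar\psi_x \Big),
\]
with the minus Laplacian inserted on each of the four possible factors.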
Then this wonderful formula holds that the two-point function for the weakly
self-avoiding walk with continuous time is equal to this integral. Now, this integral
requires some interpretation. I have to tell you actually what it means. It's an
integral over C to the lambda. So it's a very large dimensional integral. What
you don't see at the moment in this formula is a measure showing up. And
usually you like to see a measure when you do integration so you know how to
do the integral.
And unfortunately or what's very delightful about this is that the measure is
actually up in the exponent. So this tau U and tau U squared -- and by the way, I
put the wedge in red here to emphasize it, and I'm never going to put it again. So
when you see tau U, that's actually tau U wedge tau U.
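To the best of my reading of the framework being described (the talk's own normalization may differ slightly), the formula in question has the shape:
\[
G_{\nu,\Lambda}(x) \;=\; \int_{\mathbb C^\Lambda}
e^{-\sum_{y\in\Lambda}\left(\tau_{\Delta,y} \,+\, \nu\,\tau_y \,+\, g\,\tau_y^2\right)}\;
\bar\phi_0\,\phi_x ,
\]
where tau_y squared is shorthand for tau_y wedge tau_y, and the integral of a differential form is interpreted as explained next.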
Now, the differentials are up in the exponents. And we'll have to interpret that.
And I'll say what that is on the next transparency. But this right-hand side is
something which in physics would be called the two-point function of a super
symmetric field theory with a Bosonic field phi and phi bar so the phi and phi bar
of the Bosonic field. And the differentials are the Fermionic fields. Fermions are
anti-commuting objects, and the differentials are doing that job here.
This has a long history actually, going back 30 years: the work of the physicists Parisi
and Sourlas and McKane in 1980, Luttinger, and from the mathematical side Le Jan,
Brydges, Evans and Imbrie -- Brydges, Imbrie and I have a survey paper from last
year on this topic.
So I'm not going to tell you how to prove this, but I'll try to tell you at least what it
means. What is the meaning of this integral? Okay. So the problem is we want
to interpret it as an ordinary Lebesgue integral but the measure is up in the
exponent and it's sort of all mixed up. So we want to bring it downstairs and
straighten it out.
So what you do is you expand the entire integrand in a formal power series
about the degree-zero part. That's the part that doesn't have any differentials.
So if you have a function of the differentials then what you do is you just expand
that function in a Taylor series. And that Taylor series is going to terminate actually
as a Taylor polynomial because of the anti-commutativity. You can't have -- you
know, you only have a psi and a psi bar at every point in lambda, and you can't
have a form of degree higher than two times the size of lambda.
So, for example, if you expand E to the tau, tau was this, phi, phi bar plus psi, psi
bar. Then you just expand out the psi, psi bar so E to the psi, psi bar when you
expand it, you just get one plus psi, psi bar because if you were to square it, you
would get zero.
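Just to make the nilpotency concrete, here is a tiny sketch of an anti-commuting (Grassmann) algebra in Python -- entirely my own illustration, with hypothetical names, not anything from the talk -- showing that psi wedge psi is zero and that exp(psi psi-bar) truncates to 1 plus psi psi-bar:

    def _merge(a, b):
        # Wedge two basis monomials (tuples of strictly increasing generator
        # indices). Returns (monomial, sign), or (None, 0) if a generator repeats.
        if set(a) & set(b):
            return None, 0
        combined = list(a) + list(b)
        sign = 1
        for i in range(len(combined)):          # bubble sort, counting swaps
            for j in range(len(combined) - 1 - i):
                if combined[j] > combined[j + 1]:
                    combined[j], combined[j + 1] = combined[j + 1], combined[j]
                    sign = -sign
        return tuple(combined), sign

    class Form:
        # A differential form: dict mapping basis monomials to coefficients.
        def __init__(self, terms=None):
            self.terms = {k: v for k, v in (terms or {}).items() if v != 0}
        def __add__(self, other):
            t = dict(self.terms)
            for k, v in other.terms.items():
                t[k] = t.get(k, 0) + v
            return Form(t)
        def __mul__(self, other):
            if not isinstance(other, Form):     # scalar multiplication
                return Form({k: other * v for k, v in self.terms.items()})
            t = {}
            for k1, v1 in self.terms.items():
                for k2, v2 in other.terms.items():
                    k, s = _merge(k1, k2)
                    if s:
                        t[k] = t.get(k, 0) + s * v1 * v2
            return Form(t)

    def exp_form(f):
        # Taylor series for exp(f); terminates when f has no degree-zero part.
        result, term, k = Form({(): 1.0}), Form({(): 1.0}), 0
        while term.terms:
            k += 1
            term = (term * f) * (1.0 / k)
            result = result + term
        return result

    psi, psibar = Form({(0,): 1.0}), Form({(1,): 1.0})
    print((psi * psi).terms)              # {} -- psi wedge psi = 0
    print(exp_form(psi * psibar).terms)   # {(): 1.0, (0, 1): 1.0} -- 1 + psi psibar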
So you keep only the terms with one factor D phi and one D phi bar for each X in
lambda and then write those in terms of their real and imaginary parts U and V
and do the same thing for the D phi and the D phi bar.
You're going to keep only the terms that have one factor D phi and D phi bar
once you've done that big expansion up here for each X in lambda. And then
you'll use anti-commutativity to rearrange the differentials so you can write them
as the product DU1, DV1 times DU2, DV2 and so on. And then you have a
Lebesgue integral and so then you just do the integral.
So I mean in practice, this is not what you would do if you wanted to evaluate
one of these integrals. That's the way that it's defined. But instead what you do
is you prove properties that these integrals have that come out of this definition
and use those properties in the analysis. And those properties tend to turn out to
be really quite nice.
So, for example, if I define S of lambda to be the sum over X in lambda of that
form tau delta, the one that had the minus Laplacian put in plus M squared times
tau, then if I integrate any function of tau that can be integrated, so this tau
should be regarded as a vector whose components are tau X, so it's a vector
whose number of components is equal to the cardinality of lambda, so I have a function of
tau at all of the Xs, that that integral just turns out to be F of 0 no matter what F it
is that you're dealing with as long as the integral exists.
And if you take as a function that you want to integrate against this E to the
minus S of lambda, this is S of lambda here, phi 0 bar phi X, then you will just get
minus Laplacian plus M squared inverse, 0X, which is -- I mean, the minus
Laplacian was in here and the M squared is here. So it's coming out of the -- out
of this S.
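Written out (again hedging on normalization), the two properties being used have the form:
\[
\int_{\mathbb C^\Lambda} e^{-S(\Lambda)}\, F(\tau) \;=\; F(0),
\qquad
\int_{\mathbb C^\Lambda} e^{-S(\Lambda)}\, \bar\phi_0\,\phi_x \;=\; \big(-\Delta + m^2\big)^{-1}(0,x),
\]
with S(Lambda) the sum over x in Lambda of tau_{Delta,x} plus m squared times tau_x, valid whenever the integrals exist.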
So what will happen now is that we're trying to prove the asymptotic behavior of
the limit of this as lambda goes to zed D and nu goes to nu C. We're going to
work with the right-hand side from now on and simply forget about the walks from
this point. We won't see any more walks. We'll just see the integral.
Now, I have to do something a little bit technical here for the moment, but the
idea is simple. It's a change of variables. So in that integral -- it's
just ultimately a Lebesgue integral -- I want to make a change of variables which is
to scale, to introduce a scalar here: square root of one plus zed nought, where
zed nought is just some real number greater than minus one.
Then if you do that, you end up with this formula. So this is just a simple change
of variables. I won't belabor it. But, you know, you had a phi bar and a phi here.
Together those are going to give you a one plus zed nought that's popped out.
And the tau delta and the tau were quadratic in phi and phi bar and D phi and D
phi bar and so they're also going to produce a one plus zed nought. And this tau
U squared is quartic in phi and in D phi and so you'll get a one plus zed nought
square root to the fourth power which is where this comes from.
Now, it's very convenient very often in statistical mechanics to introduce an
external field. And I want to do that here because that will allow us to move this
phi 0 bar and phi X down from below up into the exponent. So -- well, there's
various notation here that is -- it's not so important to follow the details of it, but
essentially what is important is this sigma phi 0 bar plus sigma bar phi X which
has been introduced. So sigma is a complex field here. It's not really a field, it's
just a scalar. It doesn't depend on X. It's a complex variable.
And it's been introduced so that when we write E to the minus S of lambda minus
V nought of lambda here then in that V nought of lambda is this term. And if we
differentiate that with respect to sigma and sigma bar then what will happen is
we'll bring down the phi nought bar and the phi X and recover the formula that we
had here.
So that's sort of the interesting thing which has happened in this line.
Otherwise it's bookkeeping. So there's this S of lambda which has been
isolated here. The reason for introducing this M squared is that the Laplacian is
not invertible on the torus, and so we have to add a little bit of a term here in order
to make minus Laplacian plus M squared invertible. So M squared is a positive
number here. And you can see that actually taking the limit
nu goes to nu C is related to M squared going to zero. So ultimately we need to
take the limit lambda goes to zed D and nu goes to nu C, and
the limit as nu goes to nu C will now be replaced by the limit as M squared
goes to zero.
So essentially what we want to show is that this green part -- or the green and red
part here, V nought -- is like a small perturbation of S of lambda, in the sense that --
well, that it's not playing an important role. If it doesn't play an important role, if
we were just to eliminate it, then we can do this calculation, and it's related to one
of those properties that I showed you on the previous slide, and this limit just
turns out to be minus the Laplacian inverse on all of zed D, which has the decay
that we want.
So our problem really is to show that this V nought is a small perturbation.
Now, I wanted to introduce a kind of expectation. It's not an expectation like
normally in probability theory, but it has many of the properties. These integrals
behave very much like Gaussian integrals, ordinary Gaussian integrals, even
though they have this sort of fermionic aspect to them, with all these
differentials all over the place. So given a positive definite lambda by lambda
matrix C, whose inverse is A, we'll define S A of lambda to be this kind of quadratic
form in phi and in D phi -- so the psi is basically a multiple of D phi -- and so we're
introducing this extra term here.
And then for a form F, like the kinds of things that we've been integrating on the
previous transparencies, the expectation of such an F with respect to this
covariance C will just be defined to be this integral. And it has the property that if
F is equal to one that this integral will be equal to one. Actually for any A. This is
one of these marvelous properties of these integrals is that they're self
normalizing. So the integral of E to the minus SA times one is just one, no matter
what A is, as long as it's positive definite.
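In symbols -- and this is my reconstruction of the standard definition in this framework rather than a transcription of the slide -- the expectation and the self-normalization property read:
\[
\mathbb E_C\, F \;=\; \int_{\mathbb C^\Lambda} e^{-S_A(\Lambda)}\, F,
\qquad
S_A(\Lambda) \;=\; \sum_{x,y\in\Lambda}\Big( \phi_x A_{xy}\,\bar\phi_y + \psi_x \wedge A_{xy}\,\bar\psi_y \Big),
\qquad A = C^{-1},
\]
with E_C of 1 equal to 1 for every positive definite A.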
So it has at least that aspect of an expectation. So the A that we're interested in
is minus Laplacian plus M squared. And then I would rephrase what it is we're
trying to compute in terms of this expectation just by writing that integral that we
had previously which was one of these integrals now as an expectation.
And we really profit by thinking of this as an expectation even though we're
working in much more generality than is usual in probability theory because we
want to do conditional expectations as well, in which case this
expectation will be form valued -- its value will be a differential form, not a number.
One of the ways in which they behave like ordinary Gaussian integrals is with
respect to convolution properties. So I want to use the abbreviation, I'll write phi
for the pair phi, phi bar and D phi for D phi, D phi bar just with the different font.
And just recall the basic fact that we teach our students as undergraduates:
you can take a normal random variable with variance sigma 1 squared plus
sigma 2 squared, and it will have the same distribution as the sum of independent
normal random variables with variances sigma 1 squared and sigma 2 squared.
And that finds an expression also for these kinds of Gaussian integrals that we're
dealing with here. Another way of saying the same thing is that the convolution
of two Gaussian densities is a Gaussian density. And so here's the kind of
expression of that fact for these expectations: I define theta of F, so there's
been a doubling of the field here -- the original fields phi and psi, and now there's
another Boson field xi and a Fermion field eta. So we can add phi and xi,
and psi and eta, and evaluate F on the sum like that. So what this integral does is
it integrates out the xi and the eta, leaving phi and psi fixed -- phi and psi fixed, rather.
And then the second integral integrates out the phi and the psi.
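Schematically, in my notation (following the published framework rather than the slide):
\[
(\theta F)(\phi,\psi) \;=\; F(\phi + \xi,\; \psi + \eta),
\qquad
\mathbb E_{C_1 + C_2}\, F \;=\; \mathbb E_{C_2}\big( \mathbb E_{C_1}\,\theta F \big),
\]
where the inner expectation integrates out xi and eta with covariance C_1, and the outer one integrates out the remaining fields with covariance C_2.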
So this is a version of this fact up here about ordinary Gaussian random
variables which is going to be quite useful for us. Because what we're going to
do is we want to evaluate the expectation with respect to this specific covariance
C, which is minus Laplacian plus M squared, inverse. We're going to decompose
that covariance in an intelligent way into a very large sum of covariances, and
we're going to write the original integral that we're trying to compute as an
iterated integral. And that intelligent decomposition comes from a result of
Brydges, Guadagni and Mitter from 2004, which they used in their hierarchical
work. So this works for any dimension greater than two. You fix some large L and
suppose that lambda, the volume, this torus, has side length L to the N, and N
will be going off to infinity; L will be fixed and large.
Let C be minus Laplacian plus M squared inverse. Actually they worked on zed
D. It works -- you can extend what they've done, also, to the torus. On zed D
you can take M squared to be zero. On the torus you cannot. Anyway, it's
possible to decompose this covariance as a sum of N covariances C1, C2 up to
CN, positive definite that have the property of being finite range. That means
that CJ of XY will be 0 if X and Y are separated by distance of order L to the J.
And what that means is that the fields at points X and Y are going to be
independent as far as CJ is concerned. If they have such a separation. And
moreover, except for the last one which is special, so CN is a bit special, I don't
want to talk about that, but until you get there, the covariances obey very nice
estimates. So there's this thing called the dimension of the field here, which
is one-half of D minus two -- in four dimensions, and you might as well think about four
dimensions, the dimension of the field is just one. Then the covariance is
bounded above, so this covariance sub-J is bounded above by L to the minus
two times J minus one. So the covariances are getting smaller and smaller. And
so the fields distributed according to the Gaussian
measure with that covariance will be getting smaller and smaller, and, well,
more generally, smoother and smoother. So every time you take a derivative, a
discrete derivative with respect -- that should be a Y there, with respect to either
one of the entries of the covariance, you get an L to the minus J, minus one for
every derivative that you take as well.
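Summarizing the decomposition in symbols (the precise constants here are mine, hedged):
\[
(-\Delta + m^2)^{-1} \;=\; \sum_{j=1}^{N} C_j,
\qquad
C_j(x,y) = 0 \ \text{ if } |x-y| \ge c\,L^{\,j},
\qquad
|C_j(x,y)| \le c\, L^{-2(j-1)[\phi]},
\]
with the field dimension [phi] = (d-2)/2 (so [phi] = 1 in four dimensions), and each discrete derivative in x or y bringing down an extra factor of L to the minus (j-1).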
So we'll use that fact. From that decomposition of the covariance you get
a decomposition of the fields. This corresponds to writing your Gaussian
measure -- your Gaussian random variable with variance sigma one squared plus
sigma two squared -- as X1 plus X2. So now both phi and D phi become
decomposed, though, so that's something that you have to prove that it works
also for these kinds of Gaussian expectations that I'm talking about now, but it
does. So you can decompose the field like this in such a way that expectation
that you want to take is actually the convolution or the composition, rather, of
these expectations where expectations of C1 will integrate out xi 1 and D xi 1 and
so on. Second expectation will integrate out xi 2 and D xi 2. By the time you're
done there will be nothing left.
So we want to write phi J as what is left to integrate after you've done J of these
operations, and then you can -- well, just by this definition you can write phi J as
xi J plus one plus what's left after that one. And then we'll define these sort of
partition functions. Zed 0 is E to the minus V0. That's what we initially have to
integrate with respect to C. And here we have a kind of progressive way of doing
the integral. We're -- it's only been partially done. The first J of them have been
done here. And we'll call that zed J. It will depend on phi J, what remains,
what hasn't been integrated out. And the differentials that go with that.
And so what we need to do is to compute zed N, which is the full expectation.
And so we're led to study the so-called renormalization group map, which is the
map that takes zed J to zed J plus one by doing the next expectation. So that's
what we have to study. And this gives rise to a dynamical system. Now, when
you're looking at a dynamical system you want to know which directions are
expanding, which directions are contracting, which directions are marginal and
not doing either one.
And it's here that the dimension four comes in, in fact. So I'd just like to say a
word about that. Let's focus, first of all, on D equals 4. You look at those
covariance estimates for CJ plus one. That suggests that this field that you're
going to be integrating out with that covariance is of size L to the minus J,
because the covariance was L to the minus 2J, and the covariance is just the -- it's
the variance, because these fields are centered. So this field should have size
which is like the square root of the covariance, which is this L to the minus J. And
moreover, because of the smoothness, that field will be roughly constant on a
block of side length L to the J. Because the covariance was -- its derivative
was very small.
So if you look at a block B in lambda of side length L to the J -- this is a cube -- then if
you look at the size of that field raised to the Pth power summed over that block,
the field's roughly constant and it's of size L to the minus J. We're taking the Pth
power. There's this many terms in the sum -- that many means L to the four times
J -- and so this is L to the J times four minus P. And you see that this is an
expanding direction: this is going to be blowing up as J
increases if P is less than 4. It will be marginal -- it will be order one -- if P equals 4.
It will be irrelevant -- it will be getting smaller as J increases -- if P is greater than 4.
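The power counting being described, written out under the assumption d = 4 and [phi] = 1:
\[
\sum_{x\in B} |\xi_j(x)|^p \;\approx\; L^{4j}\cdot\big(L^{-j}\big)^p \;=\; L^{\,j(4-p)} :
\quad \text{relevant if } p<4,\ \text{marginal if } p=4,\ \text{irrelevant if } p>4.
\]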
And well, tau was like phi squared. Tau squared is like phi to the fourth. And so
those were the terms which were showing up in the -- in this V0, in this potential
that has to be -- whose exponential has to be integrated. So this is -- in a way
is explaining why we don't ever see any tau cubed or tau to the fourth or higher
order terms. And in fact, if you take other symmetries into account, including the
super symmetry, which is a sort of symmetry between the phis and the D phis,
then you find that the only relevant and marginal monomials are precisely the
ones that show up in the V0. So this is somehow saying that this V0 has put its
finger on the right things to be looking at.
And the role of D equals four shows up because if you look at tau squared, which
is what multiplies G, the strength of the self-avoidance, you find that the tau
squared term is relevant if D is less than 4 and irrelevant for D greater than 4 --
if you do the calculation, this is how it works out. And so what this means is that D
greater than 4 is an easier problem than D equals 4, and D less than 4 is a harder
problem.
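The same counting in general dimension (my own arithmetic, consistent with the statement above): tau squared is quartic in the field, so on a block it contributes roughly
\[
\sum_{x\in B} |\xi_j(x)|^4 \;\approx\; L^{dj}\cdot\big(L^{-j[\phi]}\big)^4 \;=\; L^{\,j(4-d)},
\]
which grows (relevant) for d < 4, is of order one (marginal) at d = 4, and shrinks (irrelevant) for d > 4.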
All right. So we want to look at this map E sub-CJ, which is taking us from zed J
minus 1 to zed J. Let's just try and talk about the first one before we move on to
the general one.
So what this map does is takes a function of phi which is phi 1 plus xi 1 to a
function of phi 1 by integrating out the xi 1 and also the D xi 1.
So let's write zed nought at a single site now as I nought of X which is E to the
minus V nought of X, V nought was this polynomial in tau X and tau X squared
and tau delta X. And for a subset of lambda define I 0 of that subset to be the
product of the I 0s which because we're talking just about an exponential here
will be E to the minus the sum over little X in capital X of V nought of little X. And
I'm writing that in this way.
So this is a function of phi. V nought depends on tau which is a function of phi
and D phi. Now, suppose I had some other polynomial V1 which is a version of
V nought whose coupling constants are different. So instead of G nought, nu
nought and zed nought, I have some other ones which I just get to pick. I want
to pick them eventually in an intelligent way, but they can be anything for the
sake of the argument that I'm presenting right now.
And I want to regard this V1 as being a function of phi 1. So essentially what I
want to do is I want to think that when I take the expectation of E to the minus V
nought of lambda I'll get E to the minus V1 of lambda. Now, I won't get that.
There'll be some corrections. And I'm now trying to investigate what the nature
of those corrections is, and they all have to be controlled.
So let's suppose that we had some good guess for what V1 should be. If
you like it's just the log of the expectation of E to the minus V0. And we're trying
to approximate it by a polynomial. Let's suppose that we have some good guess
for it and then let delta I be the difference between I0 at phi 1 plus xi 1 and I1 at
phi 1. Then we can take our partition function zed 1 of lambda, which is this
expectation of I0, and write it as the expectation of the product of I0 and write I0
as I1 plus delta I1 because that's what it turns out to be here. Then this product
can be evaluated. So when you take this product basically you decide when do I
take I1, when do I take delta I1, that decision is being recorded by this capital X
here, which tells me where I took the delta I1, and this expectation only acts on xi 1
which shows up only in the delta. And so I can pass it through the sum and
through the I1 and write it in this form.
And we end up with what's called the circle product representation. So this zed
one lambda can be written now in this form so here's the formula that I had a
moment ago. What I'll do is X is represented by these little squares here. And
go to the next scale. So P1 is polymers on the next scale, which consists of
blocks on this large scale here. So P1 is some collection of blocks.
And I'm just essentially performing this sum by conditioning on what is the set of
larger blocks which contain these points? So here U is that set of larger blocks.
This I1 is in the background. And K1 will be what it needs to be in order to get
that result.
So we're kind of moving on to the next scale. And this formula is an instance of a
circle product. So if F and G are forms which are associated to polymers -- an
element of PJ is just a subset of the set of all blocks of side length L to the J -- then
what we've got here is a kind of a convolution which shares lambda among F and
G.
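Here is a toy sketch of that circle product (my own illustration with hypothetical names; in the real setting F and G are form-valued and attached to polymers on a fixed scale):

    from itertools import combinations

    def circle_product(F, G, blocks):
        # (F o G)(Lambda) = sum over X subset of blocks of F(X) * G(blocks \ X):
        # a convolution that shares the volume Lambda between F and G.
        blocks = frozenset(blocks)
        total = 0.0
        for r in range(len(blocks) + 1):
            for subset in combinations(sorted(blocks), r):
                X = frozenset(subset)
                total += F(X) * G(blocks - X)
        return total

    # Toy check: with F(X) = a^|X| and G(Y) = b^|Y| the circle product over n
    # blocks is (a + b)^n -- the binomial theorem in disguise.
    a, b = 2.0, 3.0
    blocks = range(4)
    print(circle_product(lambda X: a ** len(X), lambda Y: b ** len(Y), blocks))  # 625.0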
This defines an associative and commutative product, and it's possible to write
zed nought and zed 1 in terms of this circle product. And I think I'm running short
of time here so I'll have to speed up a little bit.
But here's the theorem. I'm stating it for D equals 4. There's a choice of
coupling constants -- GJ, nu J, zed J and lambda J, QJ -- which determines IJ
according to some formula. So IJ is kind of like the finite-dimensional part of the
dynamical system that we're following in detail. KJ is an error term such that if
we write zed J in terms of a circle product of IJ and KJ, then when we move on and
take the expectation, the new zed is again a circle product of IJ plus 1 and KJ plus
one, where IJ plus 1 is given in terms of IJ by this dynamical system.
And so what we need to do is to study that dynamical system and we prove a
fixed point theorem for that dynamical system which says that there's a choice of
initial conditions zed nought which is something which shows up in the constant
in the 1 over X squared decay of the two-point function in 4 dimensions and nu
nought which is what actually is putting us at the critical point such that the
solution of this dynamical system in the limits is driven to zero. This is what
physicists call infrared asymptotic freedom. They're very good at inventing
names for things.
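To give the flavour of that dynamical system -- this is my paraphrase of the published flow equations, not a formula from the talk -- the recursion for the coupling constant g has the schematic form:
\[
g_{j+1} \;=\; g_j - \beta_j\, g_j^{\,2} + O(g_j^{\,3}),
\]
with positive coefficients beta_j, so that g_j is driven to zero slowly, roughly like 1/j; this slow decay of the effective coupling is what "infrared asymptotic freedom" refers to.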
And from this -- and from, well, other ingredients that I don't have time to
explain -- we can simply do the calculation of the two-point function. Here's the
formula that we had for it initially and from that integral representation. By the
time you get to scale N, when you do the circle product, it's rather easy because
it's a way of dividing up the space between I and K, but on scale N, there's only
one block, and so either I gets it all, or K gets it all. And K we show is essentially
zero in the limit, and I, once all of the integration has been done, has no field left in
it, and it's possible to do this calculation.
And what we find is that the limit is given in terms of the inverse Laplacian on zed
4, which has the decay that we want to prove. Thank you very much.
[applause].
>>: So the critical dimension is really 4, it's not 4 and a half.
>> Gordon Slade: It's 4.
>>: Right. So is there a universal reason why critical [inaudible].
>> Gordon Slade: So they're not. And there was something that I skipped over
here that I really ought to mention if I could find it. Yeah?
So what you can do is take as your underlying random walk, not the nearest
neighbor walk like I did, but a long-range walk -- so that the -- you know,
something that would be converging to a stable law -- so that the weight
associated to a step from here to a point X which is distance R away would be
like one over R to the D plus alpha. Then as you vary alpha, you
have the effect of actually varying the critical dimension. And there's a formula
for what the critical dimension is in terms of alpha. I think it's 2 alpha. And there
is something quite interesting here that I didn't mention, by Mitter and Scoppola
in a 2008 paper. So they did that with the choice alpha equals 3 plus
epsilon over 2. That model has critical dimension 3 plus epsilon. And they
actually work in dimension 3, which means that they're slightly below critical for
that model. And they have taken the first steps in constructing this
renormalization group flow in that context.
So that's quite interesting. It's analogous to studying the nearest neighbor model
in dimension 3.999.
>>: Any other questions?
>>: [inaudible] seems maybe the simplest case would be to figure out [inaudible]
critical dimension one and then [inaudible] or [inaudible].
>> Gordon Slade: I don't think it's easier. So, in fact, I've talked to Mitter about
this and if you play with alpha, then you can make this DC to be essentially
anything less than 4 that you want and so if you choose the right alpha, then I
guess it would be 1 plus epsilon over 2 so that DC would be 1 plus epsilon, then
the problem is equally difficult. So it doesn't get any easier, no.
>>: So your result they have this limiting G0 where [inaudible] like the result
should be true.
>> Gordon Slade: That's right. So the result -- the result should be true for all of
G positive but our method is definitely restricted to small G. Yeah.
>>: Anything else? Okay. Let's thank the speaker.
[applause]