>> David Wilson: Okay. So Bertrand Duplantier is a pioneer in theoretical physics, and currently he's research director at Saclay. And today he's going to talk about some joint work with Scott Sheffield on the KPZ relation. So Bertrand.

>> Bertrand Duplantier: Thank you, David. So I should first thank you all at Microsoft for the kind invitation to be here for a week and to speak here.

Also I would like to apologize to the people like [inaudible] in the back who already heard a former version of that talk. But this one will have movies, especially designed to entertain him.

So it's joint work with Scott Sheffield from MIT. And it's about a rigorous approach to Liouville quantum gravity and KPZ.

So I was told that this is a new approach to talks: one should have half an hour of introduction, a little break of five minutes, and then half an hour of more technical slides. So that fits.

So in principle at the middle I should have reached my goal, which is to introduce to you what Liouville quantum gravity is, what KPZ is, which stands for Knizhnik, Polyakov and Zamolodchikov, and what the problem is that we have to solve. And there we'll take a break.

So first, just Polyakov. So many of you probably -- it depends on your backgrounds -- many of you never heard of Polyakov. He's a theoretical physicist from Russia who is now in Princeton. And in 1981 he wrote: "In my opinion at the present time we have to develop an art of handling sums over random surfaces. These sums replace the old-fashioned and extremely useful sums over random paths."

So the aim -- actually it was linked with string theory and also with gauge theory -- was that you had to handle sums over random surfaces.

So here is one example of a closed random surface, which is actually made of quadrangles which have been folded to make triangles, so it looks like a random triangulation, which is embedded in space.

And since this one is just a bit complicated, I'm going to move to a model which is a bit simpler.

Here is just a tiny random quadrangulation. So it looks like Batman, unwillingly. That was just the map which did it.

And so you have a boundary, which is in yellow here. This is just another view of the same object. So it was actually made by taking a piece of paper in the plane and having quadrangles of equal side length, really square quadrangles in the plane.

And so with a certain topology of this thing, just folding it and gluing things together.

So we show you the map in a minute.

So it's just an embedding in the space of some flat map, topological map. And so this is the way it looks. So in here you have a perspective of your quadrangulation embedded in space.

So all of this -- so here the dotted lines are the folds. Because the squares were just folded to have, you know, an easy embedding in space.

And here, if you look at the way all of these things have been drawn, it comes from this map, which is a planar map. So here I just didn't respect at all the way it was built. This is just the topological map.

So here you have the boundary in red, and each cell here is a quadrangle: you see one quadrangle here, another one here, with some curved edges.

So I just did it topologically. The dotted lines are just the folds, diagonals along which it's going to be folded in space here, just by this choice.

And if you look at these thick blue lines, they are the edges of quadrangles. Except that sometimes you have these indentations, where a segment is simply two sides. So a segment like that counts for two sides, because the two sides of the segment are actually two edges of quadrangles.

And when you have these kind of segments, it corresponds to the [inaudible] here when you glue them together in space.

So I'm not going to describe these objects in detail today. This just serves as a motivation. Because this is just a purely topological map, and all of the connections are in correspondence with this one.

So people understood a long time ago already, say 30 years ago, how to count these objects, how to do statistics of those. A seminal paper was by BIPZ [inaudible], who could do it by using random matrices in the large-N limit, where N is the dimension of the matrix.

And of course there is a long list of people -- I'm not going to describe that -- who worked on using random matrices to describe those kinds of random planar maps.

When you use random matrices, you usually don't embed them; you just look at the statistics of these objects, random planar maps, or triangulations. In that case I took quadrangles, but you can also do triangles and so on.

More recently there was another approach which was done by the combinatorics people. So starting essentially with Schaeffer in '97, so that's really 20 years after.

And so now they have ways to describe these random maps, in particular the quadrangulations here, just by pure combinatorics and by bijections to trees, trees which have complicated decorations.

And these [inaudible] decorations somehow encode the maps. So there are many kinds of bijections, and now that's a blooming field, it's really a field which is under expansion.

And they really often recover results that you could get by random matrices. Sometimes people who are versed in random matrices can somehow hint about the results they should find in combinatorics, and then you find a way in combinatorics to get the same results.

So somehow this really started in physics 30 years ago. This is now a combinatorics field, and theoretical physics also uses the same techniques now; these people here were from Saclay.

But for the time being this will just be a representation, just a topological representation, of some embedded random surface. Even so, I did not count here the degrees of freedom you have when you embed it, the deformations and so on. You just have one representation in space here corresponding to this map.

Now you can equip this structure with a metric, which is inherited from the standard metric on the quadrangles, on triangles.

So you can see it as a kind of manifold. And then try to map it to the plane.

Say, in that case, since there is a boundary, to the disc. And that corresponds to this other picture, which looks very similar to the other one. Because all of these quadrangles, which were folded in space, of course get totally distorted once you map them.

So you have something which looks a bit like that, except that you have mapped your structure to the disc here.

And of course the key question is what happens when you take very large structures like that, very large random surfaces. What kind of metric do you get once you map it to the unit disc, in that case, or to the sphere, or to the plane, the complex plane?

So it has been believed for a long time, since actually Polyakov in '81, who introduced some model from another perspective, not from that perspective, that this should be described by Liouville quantum gravity. I'm going to come to that in a few minutes.

And I just want to mention that in physics, it is a very famous theory, Liouville quantum gravity, and people actually spend a lot of efforts to calculate correlation functions in that theory.

And in particular there are the names of the Zamolodchikov brothers, who probably did the most, and also Teschner, who did the most advanced calculations.

So people look for correlation functions with their techniques. Liouville quantum gravity is a continuum theory, as you will see in a minute.

Well, in between -- so you see, all of this developed in the '90s and up to the 2000s -- there was another idea, which appeared actually in '86 with Kazakov, which was to look at these random surfaces, here just a tiny one, but you can have big ones, and put statistical models on top of them.

So, for instance, the first one which was done was the Ising model, which was solved by Kazakov on the random lattice. I don't remember exactly what the lattice was, probably [inaudible] vertices. And so that was '86.

So he solved that model in the way that Onsager solved it in the plane in the '40s. So there was a solution of this model where the Ising spins are defined on the vertices or on the faces of this random surface.

So of course you can put -- you can look at your map and on the map, you know, put your spins and try to get [inaudible] partition functions.

I should say what "solved" means: you take a given graph, you look at the partition function of the Ising model on that graph at a given temperature, then you sum the partition functions with a weight which is just the exponential of minus a parameter times the area of the graph, which is the Gibbs weight of the area, the area in the discrete case being just the number of vertices.

Right? So you weigh the partition function of the given graph by the exponential of minus the area. So in physics we call these sums partition functions.

So you get an object which depends on two parameters, the temperature of the Ising model and the parameter you have put in front of the area of the random lattice. Okay. So you have a two-parameter space, and actually you are able to calculate that object, in complete detail.
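Written out, the two-parameter object just described is, schematically (a sketch; normalization conventions vary between references):

```latex
Z(\mu, T) \;=\; \sum_{\text{planar maps } G} e^{-\mu\,|G|}\; Z_{\mathrm{Ising}}(G, T),
```

where |G| is the area of the graph, its number of vertices in the discrete case, mu is the parameter in front of the area, and Z_Ising(G, T) is the Ising partition function on G at temperature T.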

And then you can look at the point where the Ising model becomes critical and where the surface becomes very large, essentially infinite. So it's a double critical point, in the jargon of the field.

And then you get some special critical exponents. I'm going to return to that later.

So that was Kazakov. And then after that you could look at self-avoiding walks on the random lattices, O(N) and Potts models and so on. There are many names of people who contributed and solved these [inaudible] models using, again, random matrix models.

In parallel, bijective combinatorics was also able to solve a few models on the same lattices, like the Ising model, I think. And nowadays I guess Bousquet-Mélou and Bernardi are trying to solve Potts models completely on those random lattices.

So, again, that came much later. But essentially they are going to reproduce, in principle, the same kind of results.

So now here I drew a line which you can think of as an example. In that case the example is just a self-avoiding walk, a random walk which is self-avoiding, on the random map, on the random surface. So once you map it to the plane, it's just a random path in this random metric --

>>: [inaudible]

>> Bertrand Duplantier: I'm sorry?

>>: It works on the faces?

>> Bertrand Duplantier: Yes, it works on faces here. There will be many versions: faces, edges, and so on. And of course you can change the word quadrangles here, you can take triangles, you can take duals of triangulations, which have trivalent vertices, which are polygons. There are many versions, and usually people choose the version which is most suitable for them.

But what is believed, the heuristic, is that in the critical limit, when the surface is very big, when you look at the critical point of a model, the whole thing will be universal.

So all the scaling laws, the exponents, will be always the same, independent of the version of the model.

Independent of which version you choose, but depending of course on what kind of model you choose, Ising model or O(N) or Potts model. Of course Ising and O(N) or Potts models won't have the same exponents.

>>: [inaudible] different universe [inaudible] graph than all these models [inaudible]?

>> Bertrand Duplantier: You mean the regular lattice?

>>: Yes.

>> Bertrand Duplantier: Yes. Totally. You will see how they are related. So KPZ is just a relation between both. They are different, of course, because summing over the matrices -- or summing over the possible graphs -- actually changes the properties of the system quite a bit. And as you will understand, it changes the metric in a random way, and this is exactly what I'm going to describe in detail in the second part.

So, again, the continuum limit. Imagine that you have a very large random graph and very long self-avoiding walks, or if you want to work, like [inaudible] is nowadays, on random walks on those structures, then you have the problem of what the statistics of random walks on those random graphs is.

Then if you take the continuum limit, the heuristics coming from physics is that you should have again Liouville quantum gravity plus some matter field theory, in the jargon of physicists, which I'm going to briefly allude to in a minute.

So now it's time for me to introduce what Liouville gravity is, and that is the following.

So I'm sorry, because I'm going to use the formulation of physicists, and some people might not like it very much. But that's really the way you find it in textbooks, and that is the way it is written in the papers. And it comes really from Polyakov; it was written down by him in '81.

So remember, you have mapped your random surface, a kind of infinite limit of very large quadrangulations, to the disc. And then you look at what kind of metric you get on the disc. But you can map it to any background manifold with curvature, which is here represented by G hat, the background metric G hat, having possibly a curvature R hat.

So what is believed is that when you take the continuum limit, you should get something simple, and that thing which is simple is described by a field, H, which for physicists is called a free field, at least for the term which is here. Meaning that the energy, if you want -- we call it the action actually, the Liouville action here -- is really the integral over the background manifold of essentially the square of the gradient of the field, just a representation of the gradient squared on this background manifold.

There is a linear term coupling H to the curvature of the background metric.

So what is the meaning of H here? H gives your random metric, the metric you have inherited from the mapping of your random surface to the plane.

The random metric, GAB, is supposed to be G hat AB, your background metric, times some conformal factor, exponential gamma H, where this H is actually driven by this action.

So for this part here, it's just a free field action. I'm going to return to that later. But there is an extra term here. And that extra term is exactly what people call the Liouville term, the Liouville potential.

So that's the reason for the name of this field theory. So it's now nonlinear, because you have this exponential, but actually the role of this term in the action is just to -- not to fix, but to control the average area of your random manifold.

Because it's actually just the object you put in the Gibbs weight to weight your configurations.

So what you have to do -- of course you have to define things properly, but essentially, heuristically -- is to take the exponential of minus S. That is the Gibbs weight associated with this action, or this energy if you want, and that is the weight of your random manifold for the metric part.

And so you get the exponential of minus all of this object. But if you look simply at this term here, this exponential of gamma H is integrated against d^2z square root of G hat.

And so it means that it is simply the quantum area with respect to the random metric, the quantum metric. This is of course a name which comes from physics again; it's a random metric, but it's also called a quantum metric. I will just comment on it in a minute. The quantum area is simply the integral of this conformal factor.

So the term which is here, once it is integrated, is simply lambda, which is a parameter here, times the quantum area. And when you take the exponential of minus S, that provides a factor which is the exponential of minus lambda times the area, which is this Gibbs weight I was mentioning on the random graphs.
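In formulas, and in one common normalization (a hedged sketch; prefactors differ between references), the action and quantum area being described are:

```latex
S(H) \;=\; \frac{1}{4\pi} \int_D \left( \hat g^{ab}\,\partial_a H\,\partial_b H \;+\; Q\,\hat R\, H \;+\; 4\pi\lambda\, e^{\gamma H} \right) \sqrt{\hat g}\; d^2 z,
\qquad
A \;=\; \int_D e^{\gamma H} \sqrt{\hat g}\; d^2 z,
```

so the Gibbs weight e^{-S} indeed contributes a factor e^{-\lambda A}, the continuum analogue of the discrete weight e^{-\mu |G|}.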

You remember I told you, when you have the partition function of a model, Ising, you multiply it by the exponential of minus the area, the total number of sites of your random lattice.

So that is the continuum version of this Gibbs weight for the area. Because this of course describes the random manifold, right, with a kind of random metric, which is very wild because it is driven by this Gaussian free field.

So is that -- does that make sense at least intuitively?

>>: [inaudible]

>> Bertrand Duplantier: I'm sorry?

>>: [inaudible]

>> Bertrand Duplantier: Huh?

>>: [inaudible]

>> Bertrand Duplantier: Yes. It's [inaudible]. It's just fixed by the background metric.

So if you map it to the disc, it's going to be zero. And there will be a boundary term due to the boundary, which I did not write here.

But, you know, you can simplify your life. This is what we are going to do in a minute.

We are going to just look at the case where you map it to a flat manifold, and you won't have this term.

Actually, this linear coupling comes from the covariance of the theory: once you have this term, you have to add this one if you have curvature.

Remember, in the former slide I said: when you put the model on the random lattice, take the continuum limit, map it to the plane or to the disc, then that should be Liouville quantum gravity plus conformal matter.

What is conformal matter? Matter is the part which is describing the model, say Ising, since I described Ising for the time being. Here you will have some action representing the Ising model. Of course it's a continuum action. [inaudible] know how to do it. It's linked with some [inaudible] degrees of freedom.

So there are ways, at least formal, to represent the Ising model in the continuum limit -- in the plane, of course. Everything is in two dimensions, of course.

So if you wish, there are extra terms, because you have to represent the system: a continuum random manifold which supports an Ising model in the continuum. And that matter part is just formally represented by some conformal field theory, which I'm not going to describe here at all. I'm going to take a much simpler approach in a few minutes.

But I just wanted to say, on the conformal matter side, the following fact: the point is that you have grown your surface, your random lattice, and you have weighted it by the partition function of the Ising model.

So when you take the continuum limit you get something which depends on the fact that you have grown it with a weight which was the Ising model weight. And what the matter actually does is that it simply fixes the value of this factor gamma. You see, all of this is normalized; this H is normalized in a certain way. And then the parameter gamma here is just an important parameter which tells you how wide the fluctuations of the metric are. The bigger the gamma, the wider the fluctuations.

And what is known from theoretical physics -- I'm not going to explain how people could guess it or understand it, without proof of course at this level -- is that the gamma here is related to what is called the central charge of conformal field theory.

But for people who don't know central charges, it's just a parameter which labels which model you have. Say, if your model is the Ising model, C will be one half. If your model is a self-avoiding walk, it's equal to zero. If your model is [inaudible] on the random lattice, it's equal to zero again.

So there is a one-to-one correspondence between the critical model and the value of central charge. And so by this kind of miracle formula, you get the value of gamma.
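The formula in question, in the standard conventions (stated here without proof):

```latex
\gamma \;=\; \frac{\sqrt{25 - c} \;-\; \sqrt{1 - c}}{\sqrt{6}}, \qquad c \le 1 \;\Longrightarrow\; 0 < \gamma \le 2 .
```

As a check against the examples above: c = 1/2 (Ising) gives gamma = sqrt(3), and c = 0 (self-avoiding walk) gives gamma = sqrt(8/3).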

But some people here probably prefer SLE, I guess, Schramm's Stochastic Loewner Evolution. SLEs are usually the continuum limits of all the interfaces of those models I mentioned. So if I go back here: if you have the Ising model, the interfaces are supposed to be -- actually proven now by Smirnov -- SLE 3. The self-avoiding walk is supposed to be SLE [inaudible].

So all of these SLEs are living in those models in the continuum limit. They are just some geometrical objects which are actually describing the boundaries of clusters in those models. So when you take the continuum limit of your system, if you have a geometrical view -- I mean, image -- of your system, you are going to have some kind of random curves, which in the continuum should be the quantum version of SLEs, meaning SLEs in the quantum metric, in the background metric which is fluctuating.

And so beyond that, if the model which you have put on your random lattice converges in the continuum to a certain SLE, well, that SLE will have a certain gamma, and then the gamma will be fixed by essentially being [inaudible].
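For reference, the relation being alluded to, in the usual conventions (again quoted without proof):

```latex
\gamma \;=\; \sqrt{\kappa} \qquad (\kappa \le 4),
```

consistent with the examples above: Ising interfaces, SLE 3, give gamma = sqrt(3); the self-avoiding walk, SLE 8/3, gives gamma = sqrt(8/3).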

So you see, you have the fine tuning of the value of gamma linked with the model you have grown on your random surface. And in the continuum you still have a trace of this.

So that is the [inaudible]. And one remark: C is actually bounded to be smaller than 1, so these gammas here in this approach are always smaller than or equal to 2.

So 2 will be a kind of [inaudible] boundary, which I might return to later.

And Q here is just related to gamma, because by conformal covariance, if you have a certain gamma here, the value of Q will be fixed by gamma. So this is just a remark: it's not a free parameter, it's linked with the gamma parameter.
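The relation fixing Q, in the same conventions as the formula for gamma (a standard formula, quoted for completeness):

```latex
Q \;=\; \frac{2}{\gamma} \;+\; \frac{\gamma}{2}, \qquad c \;=\; 25 - 6\,Q^2 ,
```

so that choosing the matter central charge c indeed fixes Q and gamma together.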

Okay. So that was the physics approach. But I'm going to follow Descartes, and Descartes tells you to start by looking for the simplest things and to follow things step by step.

So that's the Rules for the Direction of the Mind. It's an old thing, but of course it's useful.

So I'm going to concentrate here on the Gaussian free field part of it, which is the part which is here. So I'm going to forget this term which was controlling the area; I'm going to take a flat background metric, so R hat will be zero, and we will have simply the square of the gradient as a weight for our field.

And then I'm going to look at the properties of the metric when you put this exponential of gamma H as a conformal factor.

So the Gaussian free field: this is to say that this weight is a Gaussian weight, the exponential of minus one half the Dirichlet inner product of H with itself. And the Dirichlet inner product in a domain D of two functions F1, F2 is simply the integral of the product of the gradients.

So this is just a definition of this notation, which you have to keep in mind, because it's going to be there all the time at the end, in the second part of the talk.

So you have a Gaussian weight such that H is a distribution, which somehow can be defined by its covariance properties. They are Gaussian covariance properties.

Plus this formula: H is the distribution such that if you project H on a test function F1 with the Dirichlet inner product, and H on F2 with the Dirichlet inner product, the covariance of these two objects is simply the Dirichlet inner product of the two functions.

So it's just a way to realize H: when you calculate covariances, that defines the two-point covariance, and then from the Gaussian properties you get all the other covariances you wish.

Okay. But physicists like to somehow write this weight for the field, and then say that you have to integrate in some way with a flat measure over all possible fields with this Gibbs weight. So that somehow exhibits the weight you had before in the Liouville, in the Polyakov, notation.

So if you take a sample of your field on a small grid, 20 by 20, you have something which looks random -- this is a picture done by Scott -- which looks like that, not too wild. But if you take a much finer grid as a regularization, you get something which fluctuates like this.

So these are just snapshots of configurations of the Gaussian free field with some regularization on some lattice on the disc with zero boundary conditions on the disc. And that was done by Nam-Gyu Kang who was at Caltech.
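Snapshots like these can be reproduced with a small numerical sketch. This is only an illustration, not the construction used in the work; the function name, grid size and gamma are just choices here. One samples independent Gaussian coefficients in the sine eigenbasis of the discrete Dirichlet Laplacian, with variance 1/lambda per mode:

```python
import numpy as np

def sample_dgff(n, rng):
    """One sample of the discrete Gaussian free field on an n x n grid
    with zero boundary conditions, via the sine eigenbasis of the
    five-point discrete Dirichlet Laplacian."""
    k = np.arange(1, n + 1)
    # Orthonormal 1-D sine eigenvectors: phi[j, x] = sqrt(2/(n+1)) sin(pi j x / (n+1))
    phi = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(k, k) / (n + 1))
    # 2-D eigenvalues: lam_{jk} = (2 - 2 cos(pi j/(n+1))) + (2 - 2 cos(pi k/(n+1)))
    lam1 = 2.0 - 2.0 * np.cos(np.pi * k / (n + 1))
    lam = lam1[:, None] + lam1[None, :]
    g = rng.standard_normal((n, n))     # i.i.d. standard Gaussian mode amplitudes
    coeff = g / np.sqrt(lam)            # each mode gets variance 1/lam_{jk}
    return phi.T @ coeff @ phi          # transform back to the spatial grid

rng = np.random.default_rng(0)
h = sample_dgff(64, rng)                # one rough-looking sample, as in the movies
density = np.exp(1.0 * h)               # "quantum density" exp(gamma H), gamma = 1
```

Plotting `density` in perspective gives the spiky pictures described next; refining the grid makes the sample rougher and the spikes sharper.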

So, now, this is your random field. And remember, you want to have a metric which will be the exponential of the field. So in a way the aim of the work will be to somehow define what this random measure is, and the random measure will be just the Lebesgue measure in the plane times the exponential of gamma H, where H is this random Gaussian free field, right?

So as you will see, I've put quotation marks here, because I will have to do some work to give a meaning to this random metric -- or random measure.

Sorry.
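The way the quotation marks are eventually removed, sketched here in standard notation: one regularizes H by its average H_epsilon(z) on the circle of radius epsilon around z and takes a renormalized limit,

```latex
\mu_\gamma \;=\; \lim_{\varepsilon \to 0}\, \varepsilon^{\gamma^2/2}\, e^{\gamma H_\varepsilon(z)}\, dz, \qquad 0 \le \gamma < 2,
```

where the factor epsilon^{gamma^2/2} compensates the divergence of the variance of H_epsilon as epsilon goes to zero.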

So here I'm just going to show you what changes with respect to the Lebesgue measure in the disc when you turn on this quantum metric, when you go from gamma equal to zero to some finite value of gamma.

Okay? So here you had your disc, you had just the Lebesgue measure, and then you see -- this is just an illustration of the exponential of the field, so it's quite spiky. And if you go to values of gamma which are a bit bigger, you get something like that.

And in that case, there is a rescaling. When gamma increases, you have more and more spikes, higher and higher spikes, so you rescale. This is why you have the impression that everything flattens: actually the field has gone down because it has been rescaled constantly. And when you have a gamma high enough, you have these huge spikes which tell you there is a lot of area here, and then the rest is almost very flat, and you have these enormous spikes where you have a lot of the area of the random manifold.

Okay. So that was just for the movies.

>>: And so [inaudible]

>> Bertrand Duplantier: Yes. So the gamma in the exponential of gamma H is increasing. And so it seems that the figure goes down, because you have to rescale it constantly. So gamma, you know, shoots up --

>>: [inaudible] beginning?

>> Bertrand Duplantier: Yes. So this is the former one, where there is no rescaling.

>>: [inaudible]

>> Bertrand Duplantier: This is plotting the exponential of gamma H as a function of the position in the disc, in perspective.

>>: So H is [inaudible]?

>> Bertrand Duplantier: That's right. That's one configuration, one sample of H, just increasing gamma. So this was done without rescaling, because gamma was small enough, but the next movie is rescaled. Of course at some point you take away the grid, the regularization grid, otherwise everything becomes black.

And then it's constantly rescaled, so of course it flattens all the values which are not essentially infinite. And then at the end you have this world where you have enormous fluctuations here, but you have even more enormous [inaudible].

So in physics these spikes are called baby universes in quantum gravity. Because the idea is that all of this random surface has these outbursts: remember, if you think in terms of the triangulation or quadrangulation, imagine a very, very big one.

At some point you can have narrow necks. Out of these necks you have big bubbles.

And on those bubbles you have other bubbles and so on. So there is a kind of branched structure. And each one of these big bubbles is called a baby universe.

But once you map this back to the plane and look at the quantum metric, you have something which looks like that. All of the spikes [inaudible] bubbles.

>>: Why are there negative values?

>> Bertrand Duplantier: They were not negative, actually. It was just -- it was --

>>: It still has a negative value.

>> Bertrand Duplantier: Yeah. That is just because -- sorry, that's the other one. The boundary here is for H equal to zero, so it's value 1 of the conformal factor. But when H is negative, you go below it, right?

So I don't know exactly how the scale has been done here. But in principle nothing is negative. It's just the exponential of gamma H with H negative, which is positive but less than 1. And 1 is on the boundary, not zero. [inaudible] actually.

Okay. Let me move on, probably, because I still have to reach the pause in time. So you can look at it differently, which is probably easier to understand. So now say that was actually a torus, a flat torus with a very fine grid.

And now you take one sample of your random field, and then you make a dyadic decomposition of your torus in such a way that all of these squares have essentially the same quantum area, the same random area.

So you just start with the torus, divide it in four, and calculate the quantum area of one fourth of it. If the area is still bigger than some prescribed tiny value, a tiny fraction delta of the total quantum area, you divide again. You want all the squares to have a similar quantum area, which is delta or slightly less than delta, where delta is a small fraction of the total area.

So you just keep dividing as long as the area in the square is too big, and then you stop. And then you have a sampling, a decomposition of your torus, in terms of squares which all essentially have the same quantum area delta.
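The divide-and-stop rule just described can be sketched in a few lines. This is a toy version on a power-of-two grid, with a synthetic density standing in for the exponential of gamma H; all names and the threshold are just choices here:

```python
import numpy as np

def subdivide(density, x0, y0, size, threshold, out):
    """Recursively split a dyadic square until its 'quantum area'
    (the sum of the density over it) drops below the threshold."""
    mass = density[x0:x0 + size, y0:y0 + size].sum()
    if mass <= threshold or size == 1:
        out.append((x0, y0, size))      # keep this square
        return
    half = size // 2
    for dx in (0, half):
        for dy in (0, half):
            subdivide(density, x0 + dx, y0 + dy, half, threshold, out)

rng = np.random.default_rng(1)
n = 64
h = rng.standard_normal((n, n))         # stand-in for a sampled field
density = np.exp(1.0 * h)               # toy density exp(gamma H), gamma = 1
cells = []
subdivide(density, 0, 0, n, 0.01 * density.sum(), cells)
# 'cells' now tiles the grid; spiky regions get many small squares.
```

Regions where the density spikes end up covered by many tiny squares, which is exactly the red-spot pattern described next.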

So it looks like that. And at some point you have red spots. So that was for gamma equal 1, but that can go to gamma equal 2. It's gamma equal 3/2 here, and here you see it better.

So all of these squares are Euclidean squares, but they essentially have the same quantum area. Where it is light green here, you don't have much: the density of quantum area is not much. And where you have spikes, the red dots here, those are points where you have a lot of quantum mass, a lot of quantum area.

So it's somehow a plane view of what we had seen before in perspective.

Well, the KPZ relation. So that is actually the aim of the work here, just to address that relation, a relation which was discovered more than 20 years ago by Knizhnik, Polyakov and Zamolodchikov, in Russia at the time. And it's a relation between what happens for fractal random curves in the plane and in the quantum metric.

I should say why people speak of quantum metric or quantum gravity: simply because when you have a metric, a metric is gravity, in Einstein's sense. Here it's just two-dimensional, so it's a simple version of that. The metric is gravitation, so this is why I speak of gravity. But you have seen that the conformal factor of the metric here, exponential gamma H, involves a Gaussian free field. So it's really random. So this is why people speak of quantum gravity; that's the only reason for that.

So seen from the point of view of geometry or probability, it's just a random measure with a factor which is just driven -- which is exponential of a Gaussian free field.

>>: So besides the random, any other connection to quantum theory?

>> Bertrand Duplantier: But, you know, quantum -- people think of quantum theory as something really, really, really different from -- quantum seems like a magic word. But because you have this -- all of these tiny -- but quantum --

>>: [inaudible]

>> Bertrand Duplantier: Quantum mechanics? No, no. It's non -- yeah, but quantum mechanics, you know -- the quantum mechanics version of -- what is his name already? -- Feynman, which is somehow that, you know, all of these [inaudible] operators can be represented as summations over random paths. Those paths actually are Brownian motions.

So there is a version, which is heuristic, of quantum mechanics which is just summing exponentials of random phases, driven by random paths which are Brownian motions.

If you do it in the statistical mechanics representation, which is exponential of minus the action instead of i times the action, which is real quantum mechanics, you get statistical mechanics.

So this is really statistical mechanics, to be honest, but it is the statistical mechanics version of quantum mechanics. That's why people call it quantum.

So this is something I should explain to you in detail. It's really a change of -- it's no longer quantum mechanics. It's really statistical mechanics, to be honest. And you can see it's just part of the theory. But, you know --

>>: [inaudible]

>> Bertrand Duplantier: No, no, no. Not at this level. No, not in this version. You have to take the full complex version with phase factors, with exponential i's and so on.

Otherwise there would be no interference, no nothing. This is just [inaudible].

But still people keep the name quantum -- people are very much used to what they call doing a Wick rotation. In quantum mechanics they have all of these i's; they rotate in the complex plane, they no longer have the i's, and they are in statistical mechanics, and they are happy because things converge much better.

So indeed we are -- but still they retain the name quantum. Because, you know, that's how they approach these things; it's simpler to approach this way.

So I agree, it's a bit confusing from outside. I understand. Yeah. It's a bit confusing. But really that's the terminology of the field now.

Okay. So KPZ. So forget about Liouville quantum gravity, forget about everything. Go back to standard probability [inaudible] or standard statistical mechanics and take random paths or random statistical models.

And as an example here I just took an SLE. So I guess many of you know about SLE. So here you just have an SLE drawn in the half plane. In that case actually it's not a real SLE; it's a [inaudible] drawn statistically by Tom Kennedy with one million steps. But it's a good representation of an SLE [inaudible].

So this path is a random path. And to this random geometry are associated, typically, critical exponents. So you have heard about that.

So if I take this path, for instance, you know that you can look at the probability that the path intersects a small ball of radius epsilon. As epsilon goes to zero, typically the intersection probability will scale like some power of the radius, epsilon to the 2x.

So by definition we put a 2, and then x is called a scaling exponent. Actually in conformal field theory it's called a conformal weight. But essentially this intersection probability will be governed by some exponents, and those exponents really characterize the geometry of this random path.

Now, if you want to think in terms of Hausdorff dimensions, the Hausdorff dimension of the path is simply 2 minus 2 x sub 2, where x sub 2 is the exponent corresponding to the intersection of the path with a ball. I put a sub 2 because there are two legs of the path in the ball: one enters, one exits.

But of course you also have configurations which are associated with the tip of the path, and there are other exponents, [inaudible] sub 1, associated with the neighborhood of this tip.

And if you are near the boundary, and you know that your path avoids the boundary -- just here it's [inaudible] -- you have intersection probabilities near the boundary in some domains, and they will be governed by probabilities of intersection with the boundary scaling like the length of your interval on the boundary to some power, which is [inaudible].

So I just want to say that there are collections of exponents in the plane and collections of exponents on the boundary [inaudible]. And you can invent many, with percolation, with Ising models and so on. They will always be in this kind of setting, because here I have just a geometrical approach to it.

Okay. So you have some scaling behaviors of intersection probabilities. That will come back in the second part of the talk. If you think in terms of SLE, the Hausdorff dimension is known to be 1 plus kappa over 8, and that tells you what x sub 2 is in that case.
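The dimension formula just quoted, 1 + kappa/8, determines x sub 2 through dimension = 2 - 2 x_2. A quick illustrative check (code and names are mine, not from the talk):

```python
def sle_dimension(kappa):
    """Hausdorff dimension of the SLE_kappa trace, 1 + kappa/8 (for 0 < kappa < 8)."""
    return 1 + kappa / 8

def x2(kappa):
    """Two-leg exponent x_2 read off from dimension = 2 - 2*x_2."""
    return (2 - sle_dimension(kappa)) / 2

# SLE_{8/3}, the conjectured self-avoiding-walk limit: dimension 4/3, x_2 = 1/3.
print(sle_dimension(8 / 3), x2(8 / 3))
```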

Right? Okay. That is an example.

Now, just as an example: that could be a random [inaudible], that could be a Brownian motion, could be [inaudible], an Ising model, boundaries of Ising clusters. All kinds of random models that you could choose.

Now, imagine -- this of course is in the continuum -- imagine that this system now lives in the quantum metric. So you look at the same thing, and you look at the intersection probabilities of the same path, the same self-avoiding walk in that case, but now with your random grid, the grid which was drawn here from sampling this Gaussian free field on the very fine torus.

So you remember we defined all of these squares to have essentially the same random area, delta. So the intersection probability of squares with the path can now be expected to scale with some power of delta, of little delta.

Little delta is the area of those squares. And the exponent will be capital Delta. And actually, if you look at boundary effects, you will have new exponents, capital Delta tilde, which are the exponents associated with the quantum length along the boundary.

Okay. So just the same thing as before, now you look at the intersection with the quantum squares.

So you have quantum gravity scaling exponents, in the jargon: Delta and Delta tilde. The key thing is that the Deltas and the x's, as well as the Delta tildes and x tildes, are actually related to one another by this famous KPZ relation.

Right. I'm sorry. There was this remark -- let me just give you the KPZ [inaudible].

x and Delta, and x tilde and Delta tilde, are related by the KPZ formula, which is just a quadratic map between the quantum exponents and the Euclidean ones.

It, of course, depends on which kind of random measure you have chosen. And remember that the only parameter we had was the value of the parameter gamma.

Gamma was the parameter in front of H in the exponential of gamma H.

So of course the relationship depends on gamma. The x's here are just in the plane, so they don't depend on gamma. And that relationship tells you what kind of Delta you expect when you turn on quantum gravity with a certain parameter gamma. So of course all of these Deltas will follow gamma in this way, with x fixed.
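In the normalization used here, the quadratic map and its inverse can be written explicitly: x = (gamma^2/4) Delta^2 + (1 - gamma^2/4) Delta. An illustrative sketch (function names are mine):

```python
import math

def kpz_x(delta, gamma):
    """Euclidean exponent from the quantum one:
    x = (gamma^2/4) * Delta^2 + (1 - gamma^2/4) * Delta."""
    g2 = gamma * gamma
    return (g2 / 4) * delta * delta + (1 - g2 / 4) * delta

def kpz_delta(x, gamma):
    """Inverse KPZ map: the nonnegative root of the quadratic above."""
    g2 = gamma * gamma
    b = 1 - g2 / 4
    return (math.sqrt(b * b + g2 * x) - b) / (g2 / 2)

# Round trip at gamma = sqrt(2):
g = math.sqrt(2.0)
d = kpz_delta(0.25, g)
print(d, kpz_x(d, g))   # the second value recovers x = 0.25
```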

So that was understood by KPZ in '88. It was a kind of surprise, because you can see what Polyakov said last year in a paper which he wrote about his approach, "From Quarks to Strings."

"To our surprise we found that the anomalous dimensions coming from our theory" -- the capital Deltas, that is what they called anomalous dimensions -- "coincide with the ones computed from the matrix model" -- actually the Ising model by Kazakov; at that time, in '86, that was the only one -- "that left no doubts that in the case of minimal models, the Liouville description is equivalent to the matrix one."

So they did not believe that the matrix models, which were discretizations of random surfaces, right, in the continuum limit would be represented by Liouville's theory.

Originally Polyakov didn't believe that. He thought that Liouville was something else he wanted to have, but the random matrix models were not [inaudible] for that.

And actually they found that the exponents calculated by Kazakov for the Ising model -- these capital Deltas calculated by Kazakov -- correspond to a certain value of gamma; for the Ising model gamma is the square root of 3, so gamma squared is 3 here.

So [inaudible] then if you plug these values in here, you are going to find as x's the values of [inaudible] for the Ising model. And that fitted. So that was a big surprise.
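That check is one line of arithmetic: with gamma squared = 3, Kazakov's quantum dimension 1/6 for the Ising spin maps under KPZ to the known Ising spin conformal weight 1/16. Illustrative code, not from the slides:

```python
# KPZ with gamma^2 = 3 (Ising on a random lattice):
gamma2 = 3.0
delta_spin = 1 / 6   # Kazakov's matrix-model exponent for the Ising spin
x_spin = (gamma2 / 4) * delta_spin ** 2 + (1 - gamma2 / 4) * delta_spin
print(x_spin)        # 0.0625, i.e. the Ising spin weight h = 1/16
```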

And that --

>>: Those were just single numbers.

>> Bertrand Duplantier: Yes.

>>: There was no formula on there, right? It was just -- I mean, where did the formula come from? They didn't [inaudible]?

>> Bertrand Duplantier: This one?

>>: Did they have a formula [inaudible]?

>> Bertrand Duplantier: Which one? This one?

>>: [inaudible] formula?

>> Bertrand Duplantier: Which one? This?

>>: Yeah.

>> Bertrand Duplantier: Yes, they had that.

>>: Oh, they did.

>> Bertrand Duplantier: Yes. That's the point. In '88 they have this formula in their paper. Not written this way, but essentially this. That's the point. They get this formula from the properties of the Liouville field theory, looked at in some gauge -- they call it the [inaudible] gauge.

And actually they did not follow the approach we have with Scott, because somehow Polyakov says -- I don't have the sentence here, but he says that, you know, there is a problem of making a proper regularization of this theory, which is obvious, so they moved to another gauge.

And then they found a kind of invariance, and they came out with this formula. They had to believe it, but they also had to understand it, actually, at that time.

So I'm not going to describe what happened next at that time, because it was 10, 20 years ago, but actually it started calculations to check that in other models, not only in the Ising model. And you could check that it was working each time.

Each time with the identification of gamma which I gave before -- no, it's too far away now -- but you remember I told you that gamma was always fixed by the [inaudible] growing with your random lattice. So that was fitting too.

So actually there was soon another derivation, but all of this is heuristics for mathematicians. For theoretical physics, that's just fine, but for mathematicians it's not the same thing.

So there was another one based on Liouville theory, and then there is this approach that we developed with Scott, which I'm going to describe soon. And, you know, that was 20 years after.

And then, soon after, Benjamini and Schramm showed that you could actually also get KPZ in one dimension, in some cascade models. And that was followed by Rhodes and Vargas, who went back to the same model as us, so finally also.

And here there was a heuristic rederivation of it, but it's just heuristics here.

So, anyway, you have a 20 years' gap. And actually I see that's a good time to make the pause, except that of course I have eaten quite a lot of my time.

So the next slide is just to start with the mathematical approach to it, which is: take the Gaussian free field on a domain, just a simple one, look at its properties, understand its statistical properties better, and see how the KPZ formula -- which is simply this relationship between exponents that we want -- comes out of the properties of the random conformal factor, where H is the Gaussian free field, right? This one we have here.

But for doing this, we have to regularize the GFF properly and understand how it is -- okay. So I did --

>>: [inaudible]

[applause]

>> Bertrand Duplantier: Now starts the somehow more technical approach. So we have to regularize the GFF. We saw it was a distribution. So now we take a flat domain here in the plane, and the main object will be H subepsilon of Z, which is the average value of the Gaussian free field on the circle of radius epsilon centered at Z.

So you take the point Z, you draw a small circle of radius epsilon, and take the average of H on this circle. So this will be our regularized value, our regularized free field.

Okay. So actually if you want to write it in an explicit way, you are going just to project it on the density, the projection being just a standard projection, the integration over the domain, against the density rho subepsilon of Z.

So some of these notations look a bit complex, but if you make some effort to absorb them, you will be happy with the rest of the talk. Otherwise, it might not be the case.

So F especially will be important, but rho subepsilon of Z is just the uniform distribution of mass 1 on the circle, on the boundary of the ball. So B subepsilon is this little ball here. And let me do this. So you have this little ball, B subepsilon. You look at the circle which is its boundary and you put a uniform distribution of mass 1 on it. So it means you average the field on this circle.

Now, this is just a projection here, but you can rewrite it as a Dirichlet inner product. Remember that the inner product of H and F is really the integral of the product of both gradients. Remember that.

So it is really also the projection of H on the potential F subepsilon of Z in the Dirichlet inner product. And what is the potential?

So you see the difference here: there is a Lebesgue inner product on one side. And of course, what is the potential? It is the potential to which the density is associated. So this is the potential which is created by this density of mass.

So the mass is just located on the ring in two dimensions. That creates a Newtonian potential, which is essentially logarithmic, except that when you enter the ring, you know that the potential is cut, because there is screening inside. So the potential is really given by this.

So sorry for the notation, but F subepsilon of Z is rooted at Z because Z is the center point here. Epsilon is the radius of the ring, okay, and the potential is essentially minus the log of the distance to the center. The running point is here.

But if you are entering the disc, entering the ball, the potential saturates: you have to take the max of the distance to Z and epsilon. So it means, when you enter your ball here, you see in this perspective that your potential saturates at minus log of epsilon.

We want to have Dirichlet boundary conditions, so you have to correct for the boundary effect. So you add here the harmonic extension of the log from the boundary. Right? So you just put this extra term, which is a smooth function of the running point and a smooth function of Z, just correcting for the zero boundary condition here.

So essentially what you have is the Newtonian potential, which is logarithmically singular, except that when you enter the disc it saturates, and you have this nice cutoff here.
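For the unit disk this potential has a closed form, which makes the construction concrete: the harmonic extension of log|y - z| from the boundary is log|1 - conj(z)·y|, and the value at the center reproduces log(conformal radius / epsilon). A sketch specialized to the unit disk (names and normalization are mine, not from the slides):

```python
import math

def f_eps(y, z, eps):
    """Truncated log potential rooted at z in the unit disk:
    -log(max(|y - z|, eps)) plus the harmonic extension of the log
    from the boundary, which for the unit disk is log|1 - conj(z)*y|."""
    return -math.log(max(abs(y - z), eps)) + math.log(abs(1 - z.conjugate() * y))

z, eps = 0.3 + 0.4j, 1e-3
var = f_eps(z, z, eps)            # value at the center = Var H_eps(z) (Lemma 1)
C = 1 - abs(z) ** 2               # conformal radius of the unit disk seen from z
print(var, math.log(C / eps))     # the two agree
```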

I'm going to skip this. So, to address KPZ, we need only three little lemmas.

So the first one is the fact about the variance. I forgot to say, but of course, if you integrate by parts here, if you just transform the gradient into a [inaudible] of F, it will give you the density, and that is the same as the other one, of course. It's just integration by parts.

So we have this H subepsilon of Z, which is really a mean on the small circle, and it is simply the projection of H on this function F, on this potential.

And you remember we had this -- so if you -- well, let me forget this. So I don't think that's a proper time.

So it means that if you compute the variance of your random average, H subepsilon, it will be simply this covariance with itself. And I showed you this formula before: just replace H by the potential. So you end up with the Dirichlet inner product of the potential with itself.

But this is just a self-energy. The Dirichlet inner product of the potential with itself is the self-energy of the field. And by Gauss's theorem, it is also the value of the potential at the center.

So this is a first remarkable result. You take your free field averaged on a small ring, you find a random variable, and that random variable has a variance which is simply the value of the potential at the center.

And we saw what it was: it was just minus log of epsilon plus some correction factor for the boundary effect.

So what you find actually is really the log of 1 over epsilon, which was this divergence we had in the middle, and then the ratio: it is the log of the ratio of the conformal radius of D viewed from Z to epsilon.

So it's like a condenser formula: the log of the ratio of the radii. Just an energy of [inaudible] from, you know, the old days.

So the variance -- you see that if you shrink your ring, the variance is going to blow up, because as epsilon goes to zero you have no limit; you just have your function fluctuating.

So now H subepsilon of Z is a Gaussian random variable. We are going to define, in a few minutes, the random conformal factor regularized by this H subepsilon. Instead of taking exponential gamma H, I'm going to take exponential gamma H subepsilon, because I want something which I can handle as a conformal factor.

So if you look at the expectation of this with respect to the Gaussian free field: because of the simple properties of the Gaussian free field, since it is a Gaussian random variable, it's simply the exponential of half the variance times gamma squared. You don't have a mean here, because you have the Dirichlet boundary conditions, so the mean is zero.

So I just have gamma squared over 2 times the variance. The variance is given by the log of C over epsilon, so you get C over epsilon to the gamma squared over 2.

So clearly the expectation of your conformal factor, if you just regularize it this way, is going to blow up when epsilon goes to zero.

So ultimately, we'll see later, we are going to correct for it and put an extra epsilon to the gamma squared over 2 in front and take the limit, to have a proper continuum limit.
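This normalization is easy to check by Monte Carlo: with H subepsilon distributed as N(0, log(C/eps)), the factor eps^(gamma^2/2) exactly compensates the blow-up of E[exp(gamma H_eps)]. An illustrative sketch with toy parameters (not from the talk):

```python
import math, random

random.seed(0)
gamma, eps, C = 1.0, 1e-2, 1.0       # C plays the conformal radius; C = 1 here
var = math.log(C / eps)              # Lemma 1: Var H_eps(z) = log(C / eps)

n = 200_000
acc = 0.0
for _ in range(n):
    h = random.gauss(0.0, math.sqrt(var))   # one sample of the circle average
    acc += math.exp(gamma * h)
est = eps ** (gamma * gamma / 2) * acc / n  # regularized expectation

print(est, C ** (gamma * gamma / 2))        # est fluctuates around 1.0
```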

So that's the first lemma. You just have to understand what this quantity is once you regularize, and how it blows up.

So you see the potential is simply the variance. All of this [inaudible] thing is really the variance of our regularized field.

Okay, Lemma 1. Lemma 2, something which is linked with this too, is the fact that there is a relationship between the GFF and Brownian motion. This was the definition: H subepsilon is the mean value of H on the circle. So define a time which is minus log of epsilon, and define B sub-T, which is just H subepsilon for epsilon equal e to the minus T.

Right? All centered at Z.

For Z fixed, the law of BT is standard Brownian motion in T. And you see that just by calculating the variance of the difference of these averages on two nested circles. So you take a big one, epsilon prime, and a small one, epsilon, and calculate the variance of the difference; by the same technique you are going to find the log of the ratio of the epsilons, which is just the difference of the Ts. And if you call it BT, this is just the variance of BT minus BT prime, by definition.

But that is a characterization of Brownian motion, if you know that your process is Gaussian. And that is sufficient to say that H subepsilon, seen in time T equal minus log of epsilon, is a Brownian motion.

So it means that when you shrink your ball, you're not going to converge to any value, because your average value is just going to diffuse, like a one-dimensional Brownian motion.

I should insist it's one-dimensional, because H subepsilon is just a single number, so this BT is one-dimensional Brownian motion. Right? It's Brownian motion in terms of the radius of the disc. Just the value of your random field.

Of course that is an important ingredient. And the third lemma is a change of measure which occurs when you look at the conformal factor. So I do it in the physicist's way; I just write the weight like a physicist would actually do here.

So here I'm just recalling what we saw before: the fact that H subepsilon is the projection of H on the potential, and the fact that the variance is just the Dirichlet inner product of the potential with itself, which we saw before.

Now, look at the weight we have. If I write it like the physicists like to write it, they write a Gaussian weight for H, which is the exponential of minus one half the inner product of H with itself.

Then, in front of it, we have the conformal factor of the random metric, which is exponential of gamma H subepsilon, because I chose to regularize it.

Now, I claim that this exponent is just quadratic plus linear. You can rewrite it as a quadratic form of a shifted H: you redefine H as H bar plus gamma F subepsilon of Z.

If you shift and calculate -- just put this into this quadratic form here -- when you expand it, it's going to reproduce the gamma H subepsilon term, because by contraction of H with gamma F you get the H subepsilon which is here. So it's just writing the square plus a linear term as a square, simply.

But of course you have an extra term. The extra term is of this type, and it is a variance. And because it is [inaudible], it is also the expectation. We just saw it.

So it means that you really have this weight: the Gaussian weight modified by the Liouville factor is just a new Gaussian free field -- H bar is a standard Gaussian free field -- shifted, and here you have the marginal of the point Z.

Okay. So it means that in law, with this measure, H subepsilon -- so this is my shift; now I just average this on the circle -- H subepsilon of Z is just the average of the standard GFF on the circle, which we saw was a Brownian motion, plus a drift, which is the average value of the potential on the circle, which is its value at the center, by Gauss's theorem.

So essentially, under this measure, this H subepsilon of Z is simply a Brownian motion, in terms of the time which is minus log of epsilon again, plus the potential [inaudible] the center. Okay? So I'm going to use this for KPZ, because these are the three lemmas we needed.

Okay. So one application of the first lemma is that you guess that, if you want to define a proper continuum measure when you take the regularization off, you are going to define d mu subepsilon as the exponential of gamma H subepsilon of Z times this epsilon to the gamma squared over 2, because we know that factor is going to appear when you take expectations. And then you multiply by the Lebesgue measure here.

And so the first result is that, as long as gamma is smaller than 2, when epsilon goes to zero, this converges to a random measure. And that comes from the fact that essentially the divergence came from the variance, and we have repaired the variance problem with this extra factor here.

>>: [inaudible]

>> Bertrand Duplantier: Well, you don't see it here. It is in the proof. Essentially, intuitively, that's because when gamma is greater than 2 you have these baby universes getting out of the plane. And those carry a finite mass, and each one has other baby universes, so you certainly have to do something to control this measure. And that is a part I'm not going to address, which we have been looking at too: what happens when gamma goes above 2.

Okay. But in the proof of the convergence of this measure, Lemma 2 is crucial.

And actually that was known before, because there is Høegh-Krohn, who in the '70s looked at this model, not from this perspective at all, but somehow also got some results, with other collaborators.

I still have to look at whether he had gamma equal 2 as a boundary or square root of 2, because there is another square root of 2 which appears somewhere, so I still have to look at the paper again.

Anyway, so we have this result. But I want to readdress KPZ here, so I'm going to address KPZ in the following way. So I'm going to introduce for you the quantum ball.

So what is the quantum ball? I'm going to cheat a bit and take the simplest version, accessible to physicists. So in principle it should be okay for you. The real version is a bit more tricky. But the version for physicists is the one I'm going to give.

So I'm going to redefine the measure somehow, in such a way that it's easy to handle. The quantum ball will just be a Euclidean ball in the plane, and I'm going to define its quantum area in the following way.

So the quantum ball is just these dotted things here. You take a small disc of radius epsilon; pi epsilon squared is the Lebesgue measure of the disc. You multiply it by epsilon to the gamma squared over 2, which is the correcting factor. And you don't take the limit of the measure and integrate on the disc; you just take the conformal factor measured on the boundary.

Okay. So it's not the full measure, not exactly. Of course it will be equivalent when epsilon goes to zero, in some sense, but I'm not going to address this. I take this as the definition: a quantum ball has a quantum area which is defined by this.

Okay. So you see you have dressed your Euclidean area, epsilon squared, with some extra factors. You have this extra gamma squared over 2, which is nontrivial -- something which is key here -- and then you have this random factor.

But we know the statistics, we know the law of this guy. We just saw it: it's standard Brownian motion, BT, in time T equal minus log of epsilon, plus a term which was the value of the potential at the center.

But remember, the value of the potential at the center was gamma times minus log of epsilon, plus gamma times the log of the conformal radius, which is just a [inaudible], which I'm going to ignore here.

So essentially minus gamma log of epsilon, which is gamma T. So it means H subepsilon of Z is just BT plus gamma T. Substituting this Brownian motion, you get delta equal to exponential of gamma BT, times some T terms which are logs of epsilon coming from this one, because epsilon is exponential minus T, so it does contribute to this.

And you end up with the drift term A T here, where A is 2 minus gamma squared over 2. You have to pay attention here: you started with 2 plus gamma squared over 2, but there is an extra drift gamma T here, and it comes times gamma, because H is shifted by gamma F and you also multiply by gamma. So you get an extra gamma squared, and the sign changes: 2 plus gamma squared over 2 minus gamma squared is 2 minus gamma squared over 2.

So overall, the quantum area of your quantum ball is simply the exponential of a Brownian motion plus a drift term, and that drift A is simply given by 2 minus gamma squared over 2.

And you see here, again, the role of gamma less than 2: as long as gamma is less than 2, A is going to be positive. That's going to come back in a minute.

Take logarithms, because actually the good variables here are minus the log of the area -- we have tiny areas, so take positive quantities. Minus the log of the quantum area is AT minus gamma BT, okay, a Brownian motion with drift.
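So minus the log of the quantum area is a Brownian motion with positive drift, and a short simulation confirms the drift 2 - gamma^2/2. Illustrative sketch; the discretization and parameters are mine:

```python
import math, random

random.seed(1)
gamma = 1.0
drift = 2 - gamma ** 2 / 2        # positive exactly when gamma < 2

# One Brownian path B_t; track -log(area) = drift*t - gamma*B_t.
dt, T = 1e-3, 50.0
t, b = 0.0, 0.0
while t < T:
    b += random.gauss(0.0, math.sqrt(dt))
    t += dt
neg_log_area = drift * t - gamma * b
print(neg_log_area / t, drift)    # the empirical slope is close to the drift
```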

Okay. So that's the only thing I need to know as a representation of the quantum ball area, and to know what A is.

Now, you are going to define your quantum ball in the following way. Remember, that was the quantum area of a given ball of radius epsilon.

Now, remember in the setting we had before, we wanted to have squares, in this dyadic decomposition, where all the areas were equal to little delta.

So I'm going to do that in this setting here. I fix little delta. Okay. And I run my Brownian motion from a large disc, meaning small T. When T increases, that means epsilon decreases, and I follow the Brownian path. So I start with a big disc; the area is too big; I shrink, epsilon decreases, time goes up, and I stop as soon as I reach the prescribed quantum area.

So this is a stopping time for Brownian motion. I look at the first time T such that AT minus gamma BT equals minus log of delta, where delta is the tiny quantity which is prescribed.

So if you make a drawing: you have your time; this is AT minus gamma BT, which is just AT plus Brownian fluctuations; and you stop at the first time such that you reach the level minus log of delta.

So you have minus log of delta; you stop here. And that defines your radius, epsilon sub-A, where A is just a relabeling of delta: A is the quantum level you want to reach. So that defines your stopping time, and it also defines your stopping radius in terms of A.

Okay. And you see the good thing: delta is small, so the level is positive, and as long as gamma is smaller than 2 the drift is positive, so the drift is in the right sense.

Otherwise, of course, you won't have probability 1 of having a finite stopping time. So that will be another --
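This stopping rule is also easy to simulate: run the drifted walk until it first hits the level minus log of delta; by optional stopping, the mean hitting time is level divided by drift. A discretized sketch with toy parameters of my own choosing:

```python
import math, random

random.seed(2)
gamma = 1.0
drift = 2 - gamma ** 2 / 2       # positive for gamma < 2, so hitting is sure
level = 10.0                     # the target level, i.e. -log(delta)

def hitting_time(dt=1e-3):
    """First time the drifted walk drift*t - gamma*B_t reaches `level`."""
    t, x = 0.0, 0.0
    while x < level:
        x += drift * dt - gamma * random.gauss(0.0, math.sqrt(dt))
        t += dt
    return t

times = [hitting_time() for _ in range(400)]
mean_t = sum(times) / len(times)
print(mean_t, level / drift)     # optional stopping: E[T] = level / drift
```

The stopping radius is then epsilon sub-A = e^{-T}, so its law is read off from the hitting-time law.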

>>: Are you describing to us how to build this discrete Gaussian free field, the [inaudible]?

>> Bertrand Duplantier: Gaussian? This?

>>: [inaudible]

>> Bertrand Duplantier: This is the continuum version of the squares. Because you see we are in a domain of the plane; everything is continuum. But we have defined the quantum area of a small ball, we have fixed the quantum area to be delta, and we see locally how you have to shrink your Euclidean radius in such a way that you get exactly delta.

So that ball, you know, at that point plays the role of one of those small squares we had before.

Before it was a dyadic decomposition; now it's just small balls in the continuum.

So ε_A is the Euclidean radius of a ball which has quantum area δ -- just your continuum version of the dyadic decomposition from before.

Okay. Why am I introducing this? Because there is an explicit probability distribution for these radii, which, you know, I'm probably going to skip here.

You can actually calculate it. And now I'm going to return to KPZ.

So now we want to look at fractal sets with this quantum area, this quantum measure -- fractal sets in quantum gravity.

So you remember -- it will be quite simple now. In only two slides you'll have KPZ, so it's almost done. Everything is ready.

So remember that in the Euclidean setting, when we had a random fractal X of exponent x, we were saying that the probability that a ball of radius ε centered at z intersects X scales locally like ε^{2x}, uniformly. You remember that -- that was just the definition of my exponent. And the Hausdorff dimension, as I said, was 2 - 2x.
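Written out, the Euclidean definition being recalled here is:

```latex
\mathbb{P}\bigl[\, B_\varepsilon(z) \cap X \neq \emptyset \,\bigr]
\;\asymp\; \varepsilon^{2x} \quad (\varepsilon \to 0),
\qquad \dim X = 2 - 2x .
```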

Okay. Now, how are we going to define the quantum exponents --

>>: Because there are students here [inaudible] this doesn't -- you know, this works for these fractals, but it's not like a [inaudible]. Computing one exponent doesn't give you the --

>> Bertrand Duplantier: The Hausdorff dimension.

>>: You need to know a little more to know --

>> Bertrand Duplantier: Yes, absolutely. Yeah, yeah. Yeah, of course. Of course.

You're absolutely right. This is really --

>>: In this case --

>> Bertrand Duplantier: Yes, yes.

>>: [inaudible]

>> Bertrand Duplantier: Yes. [inaudible] fractals, all the Hausdorff dimensions are fine, they are robust, and they can also be [inaudible] dimensions -- they are all the same. But I agree, there are a lot of possible cases. Yeah. Of course. Of course.

That would be somehow -- but, anyway, in our description, forget about the Hausdorff dimension. The point is: if you have intersection properties of this kind, you are going to have KPZ [inaudible]. If you have this. You will see why.

Because suppose you have these kinds of intersection probabilities. Now, to define the quantum exponent, you give yourself a certain quantum area δ -- remember, so a certain level A = -log δ. You sample the radius of the ball B_ε in such a way that its quantum area is exactly δ: you just look at what radius you need to get the exact δ.

Then you sample h, z and X independently, and then you look at the joint probability for the intersection of your Euclidean ball with the fractal.

So you just look at the probability of the intersection of your ball with X, which in the Euclidean setting was of this type. But now of course this ε_A is random, so this holds in the expectation sense. That's the result we have: you just take the expectation of this intersection probability.

So you calculate this expectation. And what you hope is that it should scale à la quantum gravity: like the quantum area, which has been fixed here, to some power, which I call Δ. If you are lucky, if you guessed right, it should first scale like this, and second, Δ and x should be related by KPZ.
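The quantum counterpart just described can be written out like this (this is the claim the remaining slides verify):

```latex
\mathbb{E}\bigl[\, \varepsilon_A^{\,2x} \,\bigr] \;\asymp\; \delta^{\Delta},
\qquad \delta = e^{-A},
\qquad x \ \text{and}\ \Delta \ \text{related by KPZ}.
```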

Okay. So is it true? Yes. That's the last slide. You just use your Brownian motion representation. So remember we had -log ε_A = T_A, the stopping time, which is the infimum of the times t such that a·t - γB_t equals A.

So it just means that ε_A^{2x} is the exponential of -2x·T_A. And it means that the expectation of ε_A^{2x} is just the expectation of that exponential.

This was a stopping time, so this can be calculated -- it's a classical lemma on Brownian motion. It scales like the exponential of minus the level you have to reach.

Remember, it was done this way, right? You have to reach the level: you have a drift term plus the Brownian motion, and you stop the first time you cross the level here.
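As a sanity check on that hitting-time lemma (my own sketch, not from the talk; parameter choices and variable names are mine), one can simulate the drifted process a·t - γB_t and compare the Monte Carlo estimate of E[exp(-2x·T_A)] with exp(-ΔA) = δ^Δ. Here γ = sqrt(8/3) and x = 1, for which the KPZ relation gives Δ = 1:

```python
import numpy as np

# Monte Carlo check of the hitting-time lemma: for
# T_A = inf{t : a*t - gamma*B_t = A} with a = 2 - gamma^2/2,
# E[exp(-2x*T_A)] = exp(-Delta*A), where 2x = a*Delta + (gamma^2/2)*Delta^2.
rng = np.random.default_rng(0)
gamma = np.sqrt(8.0 / 3.0)       # pure-gravity value mentioned in the talk
a = 2.0 - gamma**2 / 2.0         # drift coefficient, here 2/3
A = 1.0                          # level A = -log(delta)
x = 1.0                          # Euclidean exponent; KPZ then gives Delta = 1
dt, n = 1e-3, 20000              # Euler time step and number of paths

pos = np.zeros(n)                # current value of a*t - gamma*B_t per path
T = np.full(n, 50.0)             # hitting times (capped at t = 50)
alive = np.ones(n, dtype=bool)
t = 0.0
while alive.any() and t < 50.0:
    t += dt
    pos[alive] += a * dt - gamma * np.sqrt(dt) * rng.standard_normal(alive.sum())
    hit = alive & (pos >= A)
    T[hit] = t
    alive &= ~hit

est = np.exp(-2.0 * x * T).mean()
# Time discretization biases the estimate slightly low; it should still land
# near exp(-Delta*A) = exp(-1) ~ 0.368.
print(est, np.exp(-A))
```

The cap at t = 50 only affects a handful of straggler paths, whose contribution exp(-2x·T) is negligible anyway.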

And now you want the distribution -- the expectation of the exponential of -2x·T_A. Or you can even calculate the full probability distribution if you wish.

And so let's just calculate the expectation of this. It's given by the exponential of minus the level times a certain factor. That factor is actually Δ -- just admit this for a minute.

So you have exp(-ΔA). The level A was -log δ, so this is just δ^Δ.

And how is Δ determined? This is just the Laplace transform of T_A, so you find that Δ is fixed by 2x = a·Δ + (γ²/2)·Δ². The linear term a·Δ comes from this linear term here, a·t: if you just forget the Brownian motion, you find it directly. And the γ²/2 factor is just the fluctuation of B_t.

So the Δ² term corresponds to B_t. You get this formula from a martingale argument. And a is 2 - γ²/2, and if you substitute a = 2 - γ²/2, this is just KPZ, which we saw before.

So you do have your KPZ relation from this simple calculation.
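As a small numerical sketch (function name and parameter choices are mine, not from the talk), one can invert the quadratic relation just derived and check it against the KPZ formula x = (1 - γ²/4)Δ + (γ²/4)Δ²:

```python
import math

def kpz_delta(x: float, gamma: float) -> float:
    """Invert the relation 2x = a*Delta + (gamma^2/2)*Delta^2 with
    a = 2 - gamma^2/2, taking the positive root. Equivalently, solve
    the KPZ formula x = (1 - gamma^2/4)*Delta + (gamma^2/4)*Delta^2."""
    if gamma == 0.0:
        return x  # no quantum fluctuations: Delta = x
    a = 2.0 - gamma**2 / 2.0
    # Positive root of (gamma^2/2)*Delta^2 + a*Delta - 2x = 0:
    return (-a + math.sqrt(a**2 + 4.0 * gamma**2 * x)) / gamma**2

gamma = math.sqrt(8.0 / 3.0)  # pure-gravity value mentioned in the talk
for x in (0.25, 0.5, 1.0):
    d = kpz_delta(x, gamma)
    # Verify the KPZ formula holds for the recovered Delta:
    assert abs((1 - gamma**2 / 4) * d + (gamma**2 / 4) * d**2 - x) < 1e-12

print(kpz_delta(1.0, gamma))  # close to 1.0: x = 1 maps to Delta = 1 here
```

Note the γ = 0 branch recovers the classical case Δ = x, which is the continuity check one expects from the formula.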

Okay. So, well, the next topic was Liouville quantum gravity when γ is greater than 2. So essentially we have seen this picture.

Actually I have it here. So let's run it again.

So γ is going to shoot above 2 now. Unfortunately the simulation is regularized, so you don't see this drastic transition. You will see -- so you know it keeps going like that. So we have crossed 2 and, you know, it's a very wild world here, because this quantum gravity is very, very strong, and here you have these enormous universes which have been bursting out.

So as I said before, the key thing is that as long as γ is less than 2, you have your random surface, and then you have these baby universes which take a certain fraction of the mass. But they are actually a certain power lower, and for, let's say, γ equal to the square root of 8/3, one knows exactly what they are. Actually, one knows in general it's a certain power of the fraction -- it's 2/3 of the total mass which is in these baby universes -- which we discussed two days ago.

But when γ gets higher than 2, suddenly you have these branches which multiply up to infinity. So you have a different structure: you have your base world and then you have all of these baby universes which have been -- so of course this all, you know, looks like fairy tales.

So if you look at the way they look from this perspective, you essentially see spikes -- very flat, very empty spaces and then these huge spikes -- which we saw somehow in the perspective view too.

But, again, all of this is regularized, so you don't see exactly the structure of it. You know, I use "baby universes" here because this is the way it is addressed in physics.

So there is some literature on that starting in the '90s, where people could extract scaling laws for these things, and with Scott we were able somehow to reconstruct those scaling laws from a Brownian approach -- or maybe a better approach, with a proper probabilistic description of this.

And I don't think there is time to describe that, but essentially there is a duality, meaning that there is a relationship between what happens when γ is greater than 2 and what happens for γ′ equal to 4 over γ.

And then you can define new dimensions, and they are related to the former ones, which were calculated by KPZ. So there is a certain duality which you can work with. And for people who like SLE: in SLE there is the duality between κ and 16/κ, and this is the same duality because γ is the square root of κ.

So this is just the image of the SLE duality in quantum gravity. So you have something similar: you can somehow describe the γ greater than 2 quantum gravity in terms of γ′. You just take the γ′ quantum gravity, and then you have a certain Poisson distribution of baby universes which are sticking out. If you combine both, you should hopefully have a proper description of it.
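A tiny check (my own illustration, not from the talk) that the γ duality of quantum gravity and the SLE κ duality are the same map under γ = sqrt(κ):

```python
import math

def lqg_dual(gamma: float) -> float:
    """Liouville quantum gravity duality: gamma -> gamma' = 4/gamma."""
    return 4.0 / gamma

def sle_dual(kappa: float) -> float:
    """SLE duality: kappa -> 16/kappa."""
    return 16.0 / kappa

# With gamma = sqrt(kappa) for kappa <= 4, dualizing kappa and then taking
# the square root agrees with dualizing gamma directly:
for kappa in (2.0, 8.0 / 3.0, 3.0, 4.0):
    gamma = math.sqrt(kappa)
    assert abs(lqg_dual(gamma) - math.sqrt(sle_dual(kappa))) < 1e-12
print("duality check passed")
```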

And okay. So I think I should stop here. Thanks.

[applause]
