

>> David Wilson: Our next speaker is Stanislav Smirnov, who will talk about the title you can see here: SLE, percolation and scaling limits.

>> Stanislav Smirnov: So it's a great honor to speak here. And I greatly admired Oded and his mathematics. And well, of course, most of my mathematical connections with Oded were through percolation and SLE, though that was almost all of our interaction, at least. Of course, before that I read his circle packing papers and the complex analysis stuff. And I was trying to remember last week when I first heard about SLE.

So actually I realized that I first heard about it from Tatiana, who returned from a conference where Oded was speaking, I think, about high dimensions, but he then said that in dimension two there is this approach with Loewner evolution. So I tried to Google -- actually that was before Google, okay, I tried to search on the web -- and the only thing I found was a research proposal on his web page. So it was still '98. It said that "I intend to try to prove that my construction indeed describes the loop-erased random walk scaling limit."

So, since there was no text, I tried to reverse engineer from this what an SLE is, and I still kind of regret that I wasn't able to do it nicely. So I had to wait until the paper appeared some time later. So that was one year later. And that was actually one of my nicest experiences in reading papers. You kind of feel like you are putting together a jigsaw puzzle. There are these pieces which seem to come from completely different activities, and suddenly everything clicks together.

And this was certainly -- I'm actually very envious. It must have been a very nice experience to invent SLE, the moment you realize how these really different things all come together. And then, shortly thereafter, came the first papers of Greg and Wendelin and Oded, and then I actually understood that this was going to be a very exciting thing.

And actually I remember, because Oded was always sort of a humble person. I remember when I first realized how excited I am about this; this was at the first SLE conference in Strasbourg. I said to Oded: wow, it's so nice, and thank you very much for this great experience. And I was trying to understand what a conformal field theory is at that moment -- well, I'm sort of still trying -- but I said: well, it seems that now we finally have it, but you need to understand it all, and this thing of yours will describe it. Oded said: oh, no, no, I'm not so ambitious; it should give percolation and loop-erased random walk, maybe uniform spanning tree, but not much, much more.

And I said: no, but it sort of should; I mean, your argument applies. He thought for a few minutes and said: oh, yeah, indeed it does. And that was actually very nice. And so people were saying a few things about how he did mathematics.

So my main impression was that he always had these two qualities: he posed the correct questions, and he brought the correct tools. This is very visible with SLE, but it's also in [inaudible] his contributions. So he starts with some question other people posed and then modifies it, and it then makes much more sense. And so it was with SLE. So I'll try to speak about the scaling limits, as it's set in the title. So from my viewpoint, one half of his great contribution was that he posed the correct question: he took this very nice object, a random curve. First, it's not too much; it's not the whole field theory, so it's something you can touch with your hands.

But on the other hand, it apparently already carries all the information. And then he brought in the correct tool, the Loewner evolution. So people were saying that he ran [inaudible]. So I think he told me that he actually knew about it from working in complex analysis, but he never thought about it in this connection.

And then he was thinking about this process as composing random maps, and when he composed the random maps, the first two coefficients are additive, so he immediately came to the fact that if you have this Markov property then the first coefficient will be a Brownian motion.

And so we don't need Loewner's theorem for that. You need Loewner's theorem to say that if you know the behavior of the first coefficient, you know everything.
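To make the Loewner remark concrete: the chordal Loewner equation evolves a conformal map g_t by dg_t(z)/dt = 2/(g_t(z) - W_t), and SLE(kappa) takes the driving function W_t to be sqrt(kappa) times a Brownian motion. Below is a minimal numerical sketch; the Euler scheme and the zero-driving sanity check are illustrative additions, not part of the talk.

```python
import cmath

def loewner_map(z0, driving, dt):
    """Euler scheme for the chordal Loewner ODE dg_t(z)/dt = 2/(g_t(z) - W_t).

    For SLE(kappa) one would feed in driving = sqrt(kappa) * Brownian motion
    sampled on the time grid; here we only check the deterministic case.
    """
    g = z0
    for w in driving:
        g = g + dt * 2.0 / (g - w)
    return g

# Sanity check against the known closed form for zero driving:
# with W_t = 0, the solution is g_t(z) = sqrt(z^2 + 4t).
dt, n = 1e-4, 10_000              # total time t = n * dt = 1
z0 = 2 + 2j                       # a point in the upper half-plane
approx = loewner_map(z0, [0.0] * n, dt)
exact = cmath.sqrt(z0 * z0 + 4.0 * n * dt)
assert abs(approx - exact) < 1e-2
```

The point of the sanity check is exactly the talk's remark: the whole map g_t is recovered from the single driving function W_t.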

And so he said that he observed this, and then he said: I must have seen this somewhere. I think he was still in Jerusalem, so there was a library. So he went and he discovered the papers of Loewner. But I'm sure that without Loewner he would have reinvented it anyway.

And so people were saying that he kind of always liked to invent his own proofs.

So actually I think it wasn't some kind of, I don't know, arrogant thing or something. It's more that he wanted to have a hands-on feeling for the subjects.

And he preferred having a proof which doesn't use some esoteric machinery, where you don't have to read long books, but which instead uses geometric intuition; that he valued much more.

But it didn't mean that they were simple, using simple technology. Usually it was sort of hands-on and maybe down to earth. SLE in some sense is down to earth, but with some interesting twist. And it was not necessarily with the proofs. So I remember that many, many of us used his pictures or his computer programs. I remember the first time I used his program: I needed a picture of percolation, so I used this picture -- let me maybe scroll a little bit. So it's a PostScript file.

So what he did is that he used PostScript as a full-fledged programming language. So this is actually a small program in PostScript; this is just a small part of it. This is how it looks. There are some routines. So in PostScript you have operators, you have loops; you see, for example, for the exploration: rotate left, rotate right. And he draws the random thing, and then the nice twist which he has here is that -- well, if you ever needed a picture for a percolation paper of something which happens almost surely, you know that usually you have to test it ten times before [inaudible] actually happens and you get a picture which looks generic.

So here [laughter] he had this routine which would produce a picture, and then there was a loop.

So what happens if you run this file? It's a one-page file, but if you press page down, it just enters this loop and produces a new picture. Press page down, it produces a new one, and a new one. So I didn't know that. And I needed that picture for a talk, so I sent it to the printer and went off to lunch. [laughter]. And well, the printer is also a full-fledged computer; it has a processor which understands the PostScript language. So when I returned a year -- not a year, sorry, an hour later, I discovered that our system manager was very upset because he several times tried to stop it. But UNIX was resending it back. And there was, you know, this big printer, like a Xerox special, which has a few boxes full of paper.

So what I had afterwards was about 2,000 pictures like that, and I had to do something with them. Actually that was very nice, because I could always easily find a picture to illustrate any phenomenon, well, which has probability more than one in a thousand. And I used them as scrap paper. So that's, yeah, okay.

So this is -- but then actually I deleted this loop thing, because it turned out to be a bit counterproductive.

So the other thing is -- as I remember Oded, he was always very positive and very upbeat, and I just realized while preparing yesterday that of all my memories of him, except for that terrible day a year ago, all are very, very positive. So it's what [inaudible] was saying.

So I wanted to show some pictures, the two last pictures I found on my phone. So this was at the Oberwolfach meeting a year ago, and we had a discussion about the future of SLE and being [inaudible]. So Oded is laughing -- well, I won't tell the complete story, but one of us made the statement that SLE is the most ridiculous thing he has ever seen. And then Oded turned to me looking for help. I said: but it describes the scaling limit of percolation. And the answer was: well, but percolation is even more ridiculous. So this is what he is laughing about. And we actually had a wonderful discussion there.

And then the last time I saw him was a year ago in Montreal, and he was also sort of very upbeat and happy. Actually it's not visible in this picture, but he had this thing on his head, like from a biker's costume, with the flames and crossbones, and he was explaining: you know, a friend presented it to me, and I have to wear something on my head. Oh, well. So that was a very nice conference. And actually Stephen and Di and Coreasta [phonetic] are organizing sort of an SLE conference, which goes a bit wider, this year in May. So we'll probably put together a web page and send an advertisement next week.

So those of you who are interested in SLE or related things with Loewner evolutions, please come; and those who are not yet interested, please get interested. So it's in a Zurich conference center, which is in the Swiss Italian Alps. So this is a view from that place. It's like Oberwolfach, but in the mountains.

So, the title -- I think I'll now switch to the blackboard. So I will be discussing scaling limits of percolation. And we were discussing them with Oded on and off for about eight or nine years. And actually, as [inaudible] mentioned, when Oded was writing papers alone, he would put more personal comments in them. So this is what he thought about the scaling limit. I actually like some touches, like: "I'm not sure if I have the foresight to decide on the definition now, but maybe we should attempt it, for otherwise who knows what will happen."

And the thing is -- well, he writes: I don't think we have any ideal definition for the scaling limit, because in part it depends on what you want to do with it. Maybe you want to connect it to field theories, or maybe you want to do some noise sensitivity things, or the Fourier spectrum, like Christophe and Gabor are doing or were doing with Oded, or maybe you want to calculate the dimensions.

So -- still, he wrote that a few years ago, probably four or five years ago, but I don't think we actually have an optimal definition now. So I will be discussing one possible definition, and it's sort of joint work with Oded, which is specific to one purpose, noise sensitivity. And I want just to add: the one thing I like about SLE is not even that it gives a definition of the scaling limit, but more that it's sort of hands-on. It's very nice to prove things with. So it's really maybe more important not as a definition but as a tool.

And if you want other definitions, it can be used to work with other definitions as well.

So now I think I'll take -- I'll put this thing up. Wow, it has intelligence of its own.

So it starts -- so I will discuss a theorem which can be essentially stated as: percolation in the plane is a black noise. And someone already mentioned the word noise here, but without giving a definition. So I'm not going to give a definition of a black noise either; I will say a bit about what a noise is. So it starts with the work of Tsirelson and Vershik. So there are two or three papers by Tsirelson which I highly recommend. So one of them is his talk at the ICM, which is half about it, and the other one, there are two versions, in Probability Surveys and in the same Saint-Flour volume as Wendelin's course about SLE, where he discusses what a noise is.

And in his language, essentially, a noise is a continuous product of probability spaces. So he mostly works with spaces indexed by the line. So you assume that you have, let's say, a space [inaudible] and some probability measure [inaudible], and then there are sigma-algebras F_{s,t} indexed by pairs of real numbers, with the property that F is the limit of those algebras as s goes to minus infinity and t goes to plus infinity.

Then these algebras F_{s,t} are translation invariant. And also, if you take the algebra F_{t,r}, then it is the sum of the algebras F_{t,s} and F_{s,r}. So one thing you can think about is the white noise; that would be the basic example.

So it's kind of a random object indexed by the line, which is kind of infinitely divisible and translation invariant. So the obvious example is white noise. But what they did, they constructed some more interesting noises, which Tsirelson calls black noises. His motivation is that the white noise spectrum is constant, so you see all frequencies, and that's the white color. In a black noise you don't see any frequencies, so there is no linear response, so you have to have some more difficult machinery to detect it. So I won't go into the exact definition; you can read Tsirelson's papers, it's a very nice experience.
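As a toy illustration of the factorization property of a noise, one can mimic white noise on a discrete grid: one i.i.d. Gaussian per unit cell, with the "information in an interval" being just the cells it contains. This is only a caricature I am adding here; the discretization and the names are mine, not Tsirelson's.

```python
import random

random.seed(0)

# Discrete caricature of a noise on the line: one i.i.d. standard
# Gaussian per unit cell.  The "sigma-algebra" of an interval is
# represented by the tuple of cell values it contains.
cells = {k: random.gauss(0.0, 1.0) for k in range(-5, 5)}

def restrict(t, r):
    """Data of the noise restricted to the interval [t, r)."""
    return tuple(cells[k] for k in range(t, r))

# Factorization F_{t,r} = F_{t,s} v F_{s,r}: the data on the whole
# interval is exactly recovered from the two disjoint pieces.
assert restrict(-5, 5) == restrict(-5, 0) + restrict(0, 5)
```

For white noise this gluing is trivial, which is the point; the whole question of the talk is whether the analogous gluing holds for the far less trivial percolation scaling limit.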

And so they construct some examples using the [inaudible] and the web. And there is an additional paper here [inaudible], and then there are some papers of Tsirelson, and the question they ask there is whether you have some object of this type indexed not by the real line but by the plane. So of course you can have white noise on the plane. But can you have more complicated noises indexed by the plane?

And the canonical question is about other interesting examples on the plane. So what you would want to have there, instead of these properties -- here there were two disjoint intervals with their sigma-algebras, and the sigma-algebra for the bigger interval -- is the following: if you have two domains, say you cut a domain Z into two parts, Z plus and Z minus, you would want the property that F of Z is the sum of F of Z plus and F of Z minus.

And I think that Tsirelson asked this question [inaudible], whether you can get such a thing from percolation. So for percolation, certainly, if you construct some scaling limit of percolation, you would have this translation invariance property, because any reasonable model is translationally invariant, and then you would have the property that the percolation picture in the whole plane can be exhausted by domains.

And so the main question is: if you have a percolation picture in some domain and you cut it in half, can you reconstruct the whole picture? And there are of course potential difficulties. So the potential difficulties are -- I don't know, is it visible, or where is the line below which it's not readable?

Is this okay? Or should I write even higher?

>>: [inaudible].

>> Stanislav Smirnov: That line is okay. Okay. So the potential difficulty is that there is some information stored on this line. So basically there are two questions. One question is how you would define the sigma-algebra F, so what the percolation configuration is, because there were several definitions proposed, in terms of sets of curves: crossings, or sets of interfaces.

You can also do some height functions for percolation.

And the second question is how to prove that there is no information supported on the line. So now -- so I was asked not to get technical. Now I will get technical. No, that was a joke. So now I'll try not to get technical. It's just so that you all look up at the blackboard. So here is some motivation for why there should be no information stored on the line. So maybe two remarks. One remark is that for percolation the dimension of the pivotal points is less than one. So what is it? Three quarters.

So it has to do with the probability of having, from some scale to another scale, two black arms and two arms of the opposite color.

So if you have, let's say, this of size one and this of size epsilon, then no matter what the mesh of the lattice is, the probability is estimated by epsilon to the power five quarters. So the dimension of the set of pivotals is three quarters. So what happens is that if you take a line, then basically the probability that you touch a pivotal is zero, because the line has dimension one, the pivotals have dimension three quarters, three quarters plus one is less than two, and they are two independent sets. So if you throw a line on the percolation configuration, there is chance zero that you hit a pivotal, and vice versa: if you fix a line, there is chance zero that a pivotal will be on it.

So this actually has a corollary: you cannot take an arbitrary curve here. If you take a curve which has dimension bigger than five quarters, then there will be pivotals on it, so you are in trouble. So there will be some [inaudible] on the line. So if we denote it by alpha: if the Hausdorff dimension of alpha is smaller than five quarters, we seem to be okay.

And the other remark -- so this sort of argument goes back to the early work of Oded with Itai Benjamini and Gil Kalai on noise sensitivity of percolation, how much perturbation you can do so that you don't change the crossings. And the second remark is that percolation is about crossings, and we'll look at the crossing. So if I split it into two parts -- well, there is another exponent. If you look at the probability that a cluster touches a given box, so it's the [inaudible] in the half plane, then this will be comparable to epsilon to the power one third.

So the dimension of the cluster in Z plus intersected with the boundary is equal to two thirds. So if you have two big clusters which touch this line from above and below, the dimension is two thirds here and two thirds there. And two thirds plus two thirds is bigger than one. So you have two independent sets of dimension two thirds.

There is a positive chance that they will touch. So it means that if there is a crossing, then there is a positive chance that there is a crossing which intersects this line only once. And in the case where it intersects this line only once, by the first remark there is no sensitivity, there is no information stored on the line. You cannot flip it, because the probability that on this line you will get pivotals is zero.
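The exponent arithmetic behind these two remarks can be recorded explicitly. The heuristic used here is that two independent sets in the plane almost surely miss each other when their dimensions sum to less than 2, and can intersect when the sum exceeds the codimension bound. A quick check of the numbers from the talk:

```python
from fractions import Fraction as F

plane = F(2)

# Remark 1. Pivotal points: the four-arm exponent 5/4 gives
# dimension 2 - 5/4 = 3/4 for the set of pivotals.
dim_pivotal = plane - F(5, 4)
assert dim_pivotal == F(3, 4)

# A straight line (dimension 1) versus the pivotals: 1 + 3/4 < 2,
# so the two independent sets almost surely do not intersect.
assert F(1) + dim_pivotal < plane

# A curve of dimension >= 5/4 is the borderline: 5/4 + 3/4 = 2,
# so such a curve can hit pivotals -- hence the corollary about alpha.
assert F(5, 4) + dim_pivotal == plane

# Remark 2. Half-plane one-arm exponent 1/3: a big cluster meets the
# boundary line (dimension 1) in a set of dimension 1 - 1/3 = 2/3.
dim_touch = F(1) - F(1, 3)
assert dim_touch == F(2, 3)

# Two clusters touching the line from above and below:
# 2/3 + 2/3 > 1, so within the line they have a positive chance to meet.
assert dim_touch + dim_touch > F(1)
```

This is only bookkeeping for the dimension counts stated in the talk, not a proof of the intersection statements themselves.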

So in this case there is a positive chance that there is no sensitivity to this line, and then you could do this. So this is sort of an easy counting argument. And actually I want to ask a question; I still don't know the answer. Oded somehow didn't like this question, so I will ask another question which he liked, which is the question that I liked: is it true that if you have a crossing, then you can always optimize it so that it intersects the line only finitely many times?

So is it true that the probability of having a crossing is the same as the probability of having a crossing which intersects the line only finitely many times, that is, the intersection of this crossing with alpha is finite?

>>: [inaudible] finite [inaudible] just --

>> Stanislav Smirnov: Well, in the finite world you always get a finite number of crossings. No, this -- yeah. So in the finite world, the question is whether it is tight, because here I didn't really specify whether I'm working with the finite world and a uniform estimate, so that was implicit, or whether I work with the scaling limit or some subsequential scaling limit.

>>: It may be that in the finite world you have lots of clusters but if they're --

>> Stanislav Smirnov: Yes, that's -- that is actually what is likely to happen: in the finite world you probably won't have a finite number of crossings, but what you would get is a finite number of groups of infinitesimal crossings. That's possible. But I don't know, I'm not sure. It's kind of a zero-one law question: you have an event which has positive probability. Does it have full probability or not? So can you optimize it?

Because a general percolation crossing has dimension bigger than one. So if you take the left-most crossing, it has dimension four thirds, so its intersection with this line has dimension four thirds minus one, one third, so it's a Cantor-type set.

If you take the shortest crossing, I think there is no physical prediction, but the numerical estimates show that it's 1.15. So the shortest crossing will have an intersection of dimension .15. But here we want to optimize in a different way: not the shortest, not the left-most, but the best with respect to this particular line. Can you optimize so that it will have only finitely many intersections?

So I don't know, but it's a nice question. And actually, if it's true, it will simplify most of what I'll be speaking about, and also it will give a much more straightforward construction of the scaling limit, because it will be very easy to glue the squares together.

>>: I would predict that it's not true, that you can --

>> Stanislav Smirnov: That's what I just said, so it's --

>>: [inaudible].

>>: That you fix the line.

>>: I mean I would conjecture that it's [inaudible] [laughter].

>> Stanislav Smirnov: Let's have a vote. [laughter]. Democracy. So who thinks it is not tight? [laughter].

Who thinks it is tight? Four to one. I mean, that's the problem with democracy, only -- well, whatever. [laughter].

>>: [inaudible].

>>: I'm sorry?

>>: [inaudible] restaurants in Seattle.

>>: No, no, no. [laughter].

>> Stanislav Smirnov: Okay. Let Bertrand -- Bertrand will be our representative for this question.

>>: [inaudible].

>> Stanislav Smirnov: Yes?

>>: You know there's even a positive probability that you can get --

>> Stanislav Smirnov: Yes.

>>: [inaudible] at the top and just cross at one point?

>> Stanislav Smirnov: Yes. Yeah, yeah, that's -- yeah. That's the second remark. Yes.

>>: Yeah, so [inaudible].

>> Stanislav Smirnov: Oh, you know, it's sort of a modified 0-1 law, let's say, in the limit. You can name it after us. So Kolmogorov has a 0-1 law for the tail sigma-algebra; here, for our sigma-algebras, it's true that any reasonable event has either full measure or zero measure. So this has positive measure, so it has to be full.

So the other possibility is that some capacity from complex analysis kicks in: you have an intersection set which is of zero capacity, and then it is okay; if it has positive capacity, then it's not okay to glue things. Because what I was speaking about is that you can glue two things and it determines the configuration. If this were true, the rule for determining the configuration would be very easy: you take the thing above, you take all clusters, all loops, and then you ask whether you can combine them in a finite way. So, for example, this would be okay: 1, 2, 3, 4 jumps and that's it.

If this conjecture is not true, then even if such a procedure exists, there is no constructive proof. A constructive procedure would then have to test this set of jumps for some sort of capacity criterion, which would be interesting for a complex [inaudible], but probably very difficult.

>>: You can prove this, that the two sets intersect?

>> Stanislav Smirnov: Yes. Yeah. Because there are two independent sets of dimension two-thirds.

>>: Well, you need more. There are two -- there are pairs of independent sets.

>> Stanislav Smirnov: Well, okay, okay. Yeah, yeah, yeah, I'm not that thick.

Yeah, I know that I need more, but I have more, yeah. They are such that they're correlated, yeah. At least once I checked it and it seemed to be okay. I don't know. Unless, of course, this problem is time independent.

So the thing which I will be speaking about, let me maybe start here. So the theorem that I get is that for the scaling limit of critical percolation -- critical percolation on the triangular lattice, or any subsequential limit on the square lattice -- you have this property: the sigma-algebra in G is the sum of the two sigma-algebras in G plus and G minus. Well, strictly speaking, you also have to add the sigma-algebra of measure zero events; so this is the formal statement. So here by scaling limit I just mean that we take the sigma-algebra of percolation on the lattice and then take the limit of sigma-algebras, and after the proof I'll say a bit about how we actually construct it, because that will be important for the proof, since in the proof one specific way is used. So one actually has to be careful here: if we do the discrete version, if we just try to use this estimate to show that there are no pivotals on this line, it works for the discrete version. But since we pass to the continuum limit, there is some trouble, because you might have two different configurations which converge to the same configuration in the continuum limit. Say, for the lattice with mesh epsilon, the distance between them is square root of epsilon, so they get closer and closer.

So actually what one has to do, one has to show the following: if you have two configurations, let's say omega one and omega two, so that omega one is approximately omega two off alpha -- epsilon-close off alpha, let's say at distance S from alpha -- and omega two is omega one resampled completely independently in a neighborhood of alpha, sampled independently with the same law, then with high probability there is a crossing in omega one if and only if there is a crossing in omega two. So the discrete version would say something like this: the upper limit, as the mesh goes to zero, of the percolation measure with mesh (so this is percolation on the lattice with step mesh) of the event that the conditional probability of a crossing, given the sigma-algebra of the domain minus the neighborhood of alpha, is between epsilon and one minus epsilon -- that this limit is zero. So if you know this sigma-algebra, then you know, up to epsilon, whether the crossing exists or doesn't exist.
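The resampling operation in this statement can be mimicked on a discrete configuration: replace the cells in a band around alpha by fresh independent Bernoulli(1/2) values and keep everything else. A small sketch; the grid size, the choice of band, and the function names are illustrative assumptions of mine.

```python
import random

def resample_near(config, alpha_cells, rng):
    """Return a copy of `config` with the cells near alpha resampled
    independently with the same Bernoulli(1/2) law; all other cells kept."""
    new = dict(config)
    for cell in alpha_cells:
        new[cell] = rng.random() < 0.5
    return new

rng = random.Random(1)
n = 8
# Site configuration on an n x n grid: True = open, False = closed.
config = {(x, y): rng.random() < 0.5 for x in range(n) for y in range(n)}
# Cells within the band around the "line" alpha (here: one row).
band = {(x, y) for (x, y) in config if y == n // 2}

resampled = resample_near(config, band, rng)

# Off the band the two configurations agree exactly ...
assert all(resampled[c] == config[c] for c in config if c not in band)
# ... and the resampled one lives on the same grid.
assert set(resampled) == set(config)
```

The theorem then says that, with high probability, a macroscopic crossing event has the same outcome in `config` and in `resampled` as the band shrinks; the sketch only shows the operation itself.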

So now, to prove it, there are two parts one has to do. One has to tackle that we can resample near alpha, and one has to tackle that we can move it off alpha. And then, when this is done, one has to say that, well, this takes care of one crossing event, and that the crossing events indeed in a nice sense generate our sigma-algebra. So there are three problems: first, check that this is okay when you perturb off alpha; then, when you resample near alpha; and then say that checking it for one crossing event is enough, because the collection of all crossing events in a nice way generates the sigma-algebra. So maybe -- so what do we have? I will just go very fast over this, and then we'll go to the scaling limit. So here there are two parts.

So one part is -- what one does, one introduces another configuration, let's say omega one prime: this is omega one resampled near alpha. And then we perturb a little bit off alpha.

So this first part is okay. So the first part is okay by this five quarters estimate; this goes back to the noise sensitivity papers. So basically what you do -- let's say you estimate the probability that there is a crossing in omega one but not in omega one prime. And you take this band of width S around the curve alpha.

So what you do, you cut it into squares of size S, and you start resampling the squares one by one. So the total number is one over S. So it's basically at most one over S times the probability that something changes when you resample one square.

So let's say you actually go from omega one to omega one-one to omega one-two, et cetera, et cetera, and then you get to omega one-(one over S), which is omega one prime. So you just resample the squares one by one.

And what happens if you resample one square and the crossing disappears or reappears? Well, it means that in this square we have a pivotal for our configuration. So we'll have a picture like this: there exists a pivotal in this small square Q. And the probability of having a pivotal is S to the power -- what was it -- five quarters. So in total it's S to the power one quarter. So as S gets smaller, this goes to zero. So this is not problematic.
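The square-by-square bound is just a union bound: about 1/S squares, each carrying a pivotal probability of order S^(5/4), for a total of order S^(1/4), which vanishes as S goes to zero. As arithmetic (the function name is mine; constants are suppressed):

```python
# Union bound for resampling the band square by square:
# (number of squares) * (pivotal probability per square)
#   = (1/S) * S^(5/4) = S^(1/4)  -->  0  as  S --> 0.
def failure_bound(S):
    return (1.0 / S) * S ** 1.25   # equals S ** 0.25 up to rounding

for S in (1e-2, 1e-4, 1e-8):
    assert abs(failure_bound(S) - S ** 0.25) < 1e-12

# The bound indeed decreases as the band gets thinner.
assert failure_bound(1e-8) < failure_bound(1e-4) < failure_bound(1e-2)
```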

Now, the second part: you go from omega one prime to omega two. So now you have two configurations which are the same here but are perturbed off the line; so let's say this is omega one prime and this is omega two. So now one has to be careful, because if I just do it in a dumb way like here, then indeed we can easily lose something, because we can, for example, have a crossing which has a small part here which is important for the crossing.

So it goes, for example, like that, and we perturb it by epsilon, and this part disappears. And there is no uniform bound on epsilon, because it can have very small parts. So what one does, one fixes another parameter: let's say there is a scale S and there is a scale S prime. And then what one does, one takes -- well, there are two sigma-algebras. There is the sigma-algebra F of G minus the S-neighborhood of alpha, and there is the sigma-algebra F of G minus the S prime neighborhood of alpha. So this one has less information, because you take away more things, so it's contained in the first.

And one introduces another sigma-algebra, let's say F sub S, S prime, which basically has all the information here (maybe let me put this in blue), plus whatever parts of the clusters are sticking down from this, up to the place where they might touch this boundary. So it doesn't contain this tree completely, but it has whatever can be important for us. And with this one has stability, because if S is much smaller than S prime, we will have good control over these things, and they will become stable, because we no longer care about such small things. Well, there are two types of things: things which don't reach the bottom and things which reach the bottom. The things which reach the bottom are long enough, so they are stable under the epsilon perturbation if epsilon is small enough, smaller than S prime. And the things which don't reach the bottom we don't care about. So if S is much smaller than S prime, and if, let's say, the distance between omega one prime and omega two is much smaller than S (whatever that means in some sort of metric), then there is stability, again by the [inaudible].

So now, this is basically the idea. This is the short part, but the more interesting part is that one still wants to get a proof of the theorem from that. So what I described here is: if you completely know the percolation picture here and there, you can reconstruct whether you have a crossing or not with high probability.

Now, the question is whether this is enough to construct the full percolation picture. So if you know all the crossings, is it enough? Well, if you know all the crossings it's enough, but here every crossing is reconstructed only up to epsilon, and this can accumulate, so you have to have some control over it. And this brings us to this quote from Oded, which also has to do with the definition of the percolation limit. So we discussed it many times, and this quote of his shows that he prefers a definition where you already have some limit without assuming anything difficult. I, in a sense, always preferred that you would -- well, for percolation it's very easy to do something like [inaudible], so you immediately have the crossings as parameterized curves, so you already have a lot of things.

Now Oded, maybe because he was looking forward to other models, said that it's sort of a bad thing to do this. It's better to have something where you have a limit without assuming much. It's more or less the same with constructing Brownian motion. You can first prove that the Brownian motion trajectory will be Hölder continuous and then prove that it exists -- and of course it's easier to prove, but first you have to do some difficult work. Or you can first prove that it exists as a measurable function, and then prove that it is actually continuous.

So Oded preferred this approach. And so the definition which we liked -- and I think he was thinking about it long before SLE -- okay. So that definition. In a sense, the way the percolation model was formulated was as questions about crossing rectangles. So you take the space of all rectangles and the probabilities of crossing them. So I just call them quads: a quad Q is a continuous map from a square into our domain. So continuous now. So you think of such a thing.

And now the percolation configuration -- so let's say the percolation configuration omega is the set of quads Q which are crossed, meaning that between the two horizontal edges there is a crossing. So this works in the discrete setting. And then of course the percolation measure nu -- so nu with some mesh epsilon -- is a probability measure on the space of configurations. I haven't said what the topology is. And then you pass to a limit.

Now this is interesting, because we had big arguments about this. My idea was that again you should use these properties -- well, not of small things, but the continuity of crossings. And Oded was saying that this is overkill.

So he came up with an abstract construction which was only using the ordering of the quadrangles in the plane. So the reason for the ordering is that if I draw two quadrangles like that -- so this is the second one and this is the first -- if there is a crossing of the first, there will be a crossing of the second one.

So one can draw more interesting examples. So it's -- well, this is also a quadrangle, so there are more intricate things. But this gives us an order. So first of all, you define the space Q sub D, the space of all quads in D, with the uniform metric. And then you define that Q1 is at most Q2 if crossing Q2 implies crossing Q1. So here it's the opposite: this is Q2, this is Q1. And then, since you have this uniform metric, just on the embeddings -- yeah, here the same quadrangle is given by several embeddings, so we consider them separately -- this gives another, slightly finer order: Q1 is strictly less than Q2 if a neighborhood of Q1 is below a neighborhood of Q2.

So for example, such a thing won't work, because you can perturb slightly so that one has a crossing and the other does not; but the original one I've drawn is okay.

And what else? So I will just finish with the definition, and then I won't prove anything. Say that a subset S of the space of quads is hereditary if Q in S and Q prime smaller than Q implies Q prime belongs to S. And this is the property which a percolation configuration has, because if you cross the big quad then you cross the small quad.

And then H sub D is the set of S in Q sub D which are hereditary, and T sub D is the minimal topology generated by two types of sets. For an open subset U you take the sets S such that S intersects U -- because that's a stable property: intersecting something open is stable. And for closed sets, well, it's enough to take points: you take the sets S such that a given quad Q doesn't belong to S, which should also be stable.

And then this is the space: H sub D with T sub D is where percolation lives. So the interesting thing is that it's a very abstract setting. The way most things are proved about this -- for example, that it's a compact Hausdorff space -- uses only this monotonicity property of percolation.
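As an editor's summary in symbols of the space just described (notation mine, hedged; this is essentially what was later written up as the quad-crossing space):

```latex
% A quad is a continuous map from the unit square into the domain D.
% Q_1 \le Q_2 when every crossing of Q_2 contains a crossing of Q_1.
\begin{aligned}
\mathcal{Q}_D &= \{\, Q : [0,1]^2 \to D \ \text{continuous} \,\}
  \quad \text{with the uniform metric,}\\
S \ \text{hereditary} &\iff \big( Q \in S \ \text{and}\ Q' \le Q \big)
  \implies Q' \in S,\\
\mathcal{H}_D &= \{\, S \subseteq \mathcal{Q}_D : S \ \text{hereditary} \,\},\\
\mathcal{T}_D &= \text{topology generated by }
  \{S : S \cap U \neq \varnothing\},\ U \subseteq \mathcal{Q}_D \ \text{open,}
  \ \text{and } \{S : Q \notin S\},\ Q \in \mathcal{Q}_D.
\end{aligned}
```

A discrete configuration omega is then the hereditary set of quads it crosses, and each nu with mesh epsilon becomes a probability measure on the compact Hausdorff space (H sub D, T sub D).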

So essentially you start with any topological space which has an ordering, you do this construction, and it works. And I remember -- so we were discussing this, and when Oded first [inaudible], he said that we expect this very general result is well known, but we haven't located the reference. So it again fits with the earlier comments. And actually I have since tried; I found one reference -- it was in my general topology notes from university, but I can't find the book.

>>: [inaudible].

>> Stanislav Smirnov: So maybe it's here. So now this is how it works. If I get [inaudible] it's rather technical. So I wanted then to ask a question which was a favorite question of Oded and also of myself. So what -- yeah? This gives you limits and subsequential limits. And then this is actually enough to set things up so that sampling a finite number of quads is enough to reconstruct the whole picture up to probability epsilon.

So it's the compactness of this space. And then -- so this gives you subsequential limits of percolation easily. If you want to show that there is a unique limit, for the triangular lattice one has to run the branching SLE; one has to do something. So one has to do technical work. We didn't know anything which would be sort of straightforward.

So the nice question is -- there was one question I've written -- yeah, okay. So let me maybe make room on the blackboard. So how much does one need to construct -- well, this space and this sigma-algebra, so let's say F, and the percolation measure nu, and the space H. So what one does for the triangular lattice is one uses Cardy's formula for any given crossing. We have locality. We have monotonicity. So these are the only properties which I used. In principle you can use some continuity properties too; they simplify life, some sort of continuity.

So the question is whether you can take away the [inaudible] formula. So if you have some percolation measure in this sense -- a measure on crossings of the quads -- which has the locality property, which has the monotonicity property, what does one need to ask so that it gives you the critical percolation? Yeah.

[applause].

>> David Wilson: Okay. I've been informed that the question session, if there are any questions, will be after the next talk. So let's take a three-minute break, and then we will have the question session [inaudible].

Our next speaker is Scott Sheffield, SLE scaling limits and the Gaussian free field.

>> Scott Sheffield: I want to say one thing about Oded, and that is that he was unusual in the extent to which he helped other people solve problems. If you, you know, look at the -- you could divide up the problems he worked on. There were the problems where he actually coauthored a paper. Then there were problems where he really contributed something very pivotal to someone else's paper. And maybe he didn't want to be a coauthor, he didn't want to write -- he was happy to help with someone else's paper.

And there were problems where he, you know, wrote up a Mathematica notebook and sent it to people, or wrote e-mails with very detailed arguments. There were problems that he solved and communicated directly to individuals. There were problems that he and his coauthors totally intended to write up one day, knowing, as these things usually go, that other things might end up taking precedence. And then there were problems that he had no intention of ever writing up, and solved just because they were enjoyable and people learned something in the process of working on them. And you know, even when you go down into layers two, three, four, and five, you find there are things that would be in other people's level one. He really had remarkable results at all levels. And the published results with his name on them are only really a small part of what I think he accomplished and gave to us.

Today I'm going to talk about proving things converge to SLE. So SLE, you know, has now been the topic of several hundred papers building on Oded's early work constructing SLE. And the problem of proving discrete models converge to SLE is just one aspect of that, but I think a very important one. And you know, when people visualize SLE, often what they have in mind are these discrete models with scaling limits. It was a very important part of the field. And so, you know, Oded sketched out some ideas in this direction in his first paper, and the real hard work was done by Lawler, Schramm, Werner, by the three of them, in proving the first scaling limit result, for loop-erased random walk and uniform spanning tree.

And the method used in that paper was the same one that was later used in the other papers Oded worked on with me on proving things converge to SLE. And variants of this were used by Stas as well in convergence proofs, with, you know, modifications of steps two and three. But the essential ideas -- well, you want to prove something converges to SLE; what do you need to do? Well, first of all you have to say what convergence means. And you have some discrete paths you can define.

And you would like to show that these discrete paths get close to some sort of continuum random path. And the first thing you need is some metric of closeness. And so the way Oded liked to think was, first we'll look at the Loewner evolution. And so if I pick a point in the interior of my domain to call the center, we can conformally map to a disk, so I have this point in the center; then as I draw this path, I can look at the Loewner driving function, W sub T, that essentially tells me at each time, when I conformally map back, what is the image of the tip of the path. And as we all know by now, just knowing W sub T lets you reconstruct the path. And so if I have a sequence of discrete paths,

W N, converging to some other path, the first thing I might say is, well, this path is close to that one if somehow these functions are close. And so the Loewner evolution perspective gave him a natural metric on the set of paths. You know, these two paths are close if they have about the same capacity time, if the end points occur at about the same capacity time, and if, when I draw the two Loewner driving functions, the two W sub Ts, I find that the supremum distance between them is small. And so that's a definition of close: paths are close, in the Loewner sense he would say, if this is small. Or informally he would say, you know, viewed from this point, these paths are harmonically close together. You can't see the difference between them viewed from this point.
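As a small editor's illustration of "knowing W sub T lets you reconstruct the path" (not from the talk; a standard first-order scheme that freezes the driving function on each small time step, so the inverse Loewner map becomes an explicit slit map):

```python
import cmath

def upper_sqrt(z):
    """Complex square root chosen in the closed upper half-plane."""
    w = cmath.sqrt(z)
    return -w if w.imag < 0 else w

def loewner_trace(driving, dt):
    """Approximate the chordal Loewner trace from sampled driving values.

    driving[k] is W at time k*dt.  Freezing W on each interval makes the
    inverse Loewner map the explicit slit map
        f_k(z) = W_k + sqrt((z - W_k)**2 - 4*dt),
    and the trace point at time n*dt is (f_1 o f_2 o ... o f_n)(W_n).
    """
    trace = []
    for n in range(1, len(driving)):
        z = complex(driving[n])
        for k in range(n, 0, -1):  # compose the slit maps back to time 0
            w = driving[k]
            z = w + upper_sqrt((z - w) ** 2 - 4 * dt)
        trace.append(z)
    return trace
```

With W identically zero this reproduces the vertical slit, trace point 2i times the square root of t; feeding in Brownian increments scaled by the square root of kappa would give a crude SLE sample.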

Okay. So this gives us a metric on the set of curves. And the first thing you might try to do -- well, it's actually the second thing, step two -- is to prove that these random paths converge to this one in law, so in distribution, that means weakly, with respect to this particular metric. And then once you've done that, you might like to strengthen the topology to a more natural metric, where you would say two paths are close to each other if you can parameterize the two in such a way that for all time the two paths don't get far from each other. And so Oded sometimes called that the strong metric on paths. So you could start by proving convergence in this Loewner driving function metric and then proceed to strengthen the topology and prove convergence in this strong sense.

But what Oded always felt was the most important step was this step one, which is what he then used to do steps two and three. And the point was just: find something about the continuum martingale, or the continuum SLE, that looks like something in the discrete picture. So you have some martingale. What would that be? Well, for example, in loop-erased random walk it's something involving the Green's function: you have a point, you can look at a Green's function viewed from the tip, and you have some function that's varying as the path changes.

And if you can show that this function is a martingale for SLE and is approximately a martingale for the discrete version, then there are some magical arguments that let you, just from that information, show that the driving functions converge.
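Schematically, the magic runs through Itô's formula (an editor's sketch in my own notation, not the actual argument): suppose the observable has the form M sub t = m(g sub t(z), W sub t) and is a local martingale for the candidate limit. Then, using the chordal Loewner equation dg/dt = 2/(g - W),

```latex
\[
dM_t \;=\; \frac{2\,\partial_x m}{g_t(z)-W_t}\,dt
\;+\; \partial_w m\, dW_t
\;+\; \tfrac12\,\partial_w^2 m\, d\langle W\rangle_t .
\]
```

Demanding zero drift simultaneously for every z heuristically forces the increments of W to have mean zero and quadratic variation kappa dt, i.e. W sub t equals the square root of kappa times Brownian motion; in the convergence proofs this is run in reverse, with the approximate discrete martingale pinning down the driving process.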

So I think a key observation -- well, more than an observation -- of Lawler, Schramm, Werner is this principle: that having some sort of martingale observable, this one object, lets you then deduce that the driving functions are close. And this is a beautiful argument, and, you know, I don't have time to really prove it and would be the wrong person to do it anyway, but I will at least show you the paper.

So here it is. This is the paper by Lawler, Schramm and Werner. It was in the Annals of Probability. And this was part of a hugely productive string of papers between roughly '99 and 2001 by these three authors. And well, you can at least see from the contents here what you do: you define a Loewner evolution, you define a discrete version, you give some background, you recognize the driving process using these magical tricks, and then you strengthen the convergence to a stronger topology. And that gets you loop-erased random walk converging to SLE.

And a similar thing for the Peano curve, the boundary of the uniform spanning tree. First you get the driving function, then you get uniform continuity, and then you're done. And you need various estimates, and, okay, this gives you at least a rough sense of what's in the paper. You can all go read it, of course.

All right. So here are a few of the SLE scaling limits that one should have based on various ideas from physics and math. So first kappa equals two and eight. The kappas come in pairs, one being 16 over the other, by this duality relationship. So two and eight were handled in this original paper. And the next thing is, you know, eight-thirds and six, which are related to the self-avoiding walk and critical percolation. And so the self-avoiding walk -- well, okay, now that you've seen all these, I'll show you my color-coded version. So these are things that are solved. So critical percolation, this, you know, I guess you'd say is due to Smirnov. And there were some more detailed arguments for the latter steps that were given by Camia and Newman later on, filling in parts that Stas didn't completely describe.
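For reference, the duality pairing mentioned above, with the models matched to kappa as listed in the talk (standard values):

```latex
\[
\kappa' = \frac{16}{\kappa}:\qquad
(2,\,8)\ \text{LERW / UST Peano curve},\qquad
\left(\tfrac{8}{3},\,6\right)\ \text{SAW / percolation},\qquad
(4,\,4)\ \text{level lines (self-dual)}.
\]
```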

But you know, Oded always felt that once you had step one, the rest was kind of easy, you know, at least for him. And he felt that, you know, once Stas had proven the conformal invariance, the extension to SLE was essentially done. And he always referred to it in that way, saying that, you know, it's done, and that proves it. Because, you know, he knew exactly how to go from there.

The critical Ising cluster boundaries and FK clusters are work in progress by Stas. And this kappa equals 4, the harmonic explorer and Gaussian free field level lines, were works that I did jointly with Stas. And the double dimer model I put in green because I --

>>: [inaudible].

>> Scott Sheffield: Oh, with Oded, that's right. And double dimer model I put in green because I -- I have a hunch that Rick is going to solve this some day soon now. But anyhow, it deserves to be in green I think.

Okay. Before I arrived at Microsoft, I was sending Oded e-mails during the summer telling him that I would really like to think about level lines of the Gaussian free field. I thought it was very interesting. During my thesis with Amir, I worked on random surfaces, and I'd read some things about the Gaussian free field, and I knew from talking to Oded and Rick earlier that there were some connections between the Gaussian free field and this conjecture for the double dimer model -- which I had talked about with Rick and Oded, and which they had, you know, known about for a while -- namely that this should look like SLE 4. And again -- well, you know, the problem of course is that we could not quite get to the level of step one. In some sense we had the discrete martingale, but we couldn't estimate it, couldn't show it was approximately the continuum martingale.

But you know, based on this, I was sort of emboldened to think that, well, okay, all I have to do is come up with some model where I can do step one, and Oded has assured me that everything he has done will work, and steps two and three will just fall into place. So this sort of freed my mind. I was a little intimidated by steps two and three, so I could just focus on step one. And so I started to think about this Gaussian free field. And I, you know, sent him all kinds of e-mails during the summer, most of which contained ideas that were half baked and wouldn't work, and he would explain to me why they wouldn't work. You know, it seemed like, well, can't you define the level lines of the Gaussian free field right in the continuum: you take zero boundary conditions, you condition on, say, all the level lines hitting the boundary -- and you've got zero conditions on them too -- and so conditioned on that you repeat the process, and somehow this fills up and gives you all the zero level lines. And he explained this structure just simply cannot be right, because you're just going to repeat and everything's going to be zero in the end. You know, look at the variance of the average height on a disk like this: as you're observing these zero level lines, you're learning information, and it can't be that the conditional expectation is still zero when you're done, no matter what.

And so there was clearly something wrong with the picture. And, you know, despite many tries, finally at some point we decided that we had to do something more along the lines of the double dimer model -- make this model look more like the double dimer model, where we understood it -- and we had in mind Stas's argument on the hexagons, and so we thought, well, let's try something with hexagons.

And that's where we came up with this harmonic explorer. So I guess -- well, no, that was next. We came up with the idea of taking -- so let me first give these references, and then I'll move on with that discussion. So the first reference is Scaling limits of loop-erased random walks and uniform spanning trees by Schramm, and then next this Lawler, Schramm, Werner paper. These two were the scaling limit results that I worked on with Oded. This one is in green because it's not finished yet -- it's part two of this paper that hopefully will be finished. And this one is in green because it's a joint work that I'm writing now with Nike Sun, which was something that sort of started with Oded and was meant to go in this big paper, and it kind of ended up on the chopping block. I think from Oded's perspective maybe it was in category three or four -- you know, he didn't want to finish it. But it was something that I thought was quite nice, and now Nike and I are trying to finish it up. And so I'll mention that as we go.

So okay. So first of all, the discrete Gaussian free field. Well, you have a function on a graph, and you define an energy which is proportional to the sum over all edges of the squared difference of the function between one end point of the edge and the other. That's sort of an L2 norm of the discrete gradient, if you will. And given that energy, the discrete Gaussian free field is a random element of the set of functions on the graph where the probability is proportional to e to the minus this energy over two. And because this energy is a quadratic function, this turns out to be a Gaussian distribution with a particular covariance.
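A minimal editor's sketch of this definition (my code, not from the talk): with zero boundary, the energy E(f), the sum over edges of squared differences, equals f transpose L f for the Dirichlet graph Laplacian L, so the density proportional to exp(-E/2) is a centered Gaussian with covariance L inverse.

```python
import numpy as np

def dirichlet_laplacian(n):
    """Dirichlet Laplacian of the n x n interior of a square grid in Z^2
    (zero boundary conditions: every vertex keeps full degree 4)."""
    m = n * n
    L = np.zeros((m, m))
    for i in range(n):
        for j in range(n):
            v = i * n + j
            L[v, v] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < n and 0 <= b < n:
                    L[v, a * n + b] = -1.0
    return L

def dgff_sample(n, seed=None):
    """Sample the discrete Gaussian free field on the n x n interior:
    density proportional to exp(-E(f)/2), E(f) = f^T L f, so the field
    is centered Gaussian with covariance L^{-1}."""
    L = dirichlet_laplacian(n)
    cov = np.linalg.inv(L)
    rng = np.random.default_rng(seed)
    f = rng.multivariate_normal(np.zeros(n * n), cov).reshape(n, n)
    return f, cov
```

For a single interior vertex with four zero-height neighbors, E = 4 f squared and the variance is 1/4, which the covariance formula reproduces.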

And here's an example of what it looks like if you fix the boundary conditions to be zero and just look at the fluctuations. This is a random Gaussian function. And so we had this idea to take some boundary conditions where we set a zero boundary on one side and a positive boundary on the other side of a lattice -- a triangular lattice -- and then color according to the sign of the function. So black means the function is negative, white means the function is positive. If we did that, then there would be a picture like this -- a dual picture that would look kind of like percolation. And so we'd have this natural path which is sort of an interface between negative on one side and positive on the other. And somehow it's zero in the middle. So this should be something like a level line of the free field.

And that's what we did. We also had variants: if you set the initial boundary heights to be something else, then you get something else besides SLE for it. But the theorem here is that if you take these magic initial boundary conditions, then the interface converges to SLE 4.

And, okay, so here is a picture of this. We take these boundary conditions, minus on one side, plus on the other side. We draw the interface. Here's another view of it, seen as a function. You can't quite follow the interface, but it's there. Here is the expectation given the values on the interface. And you see it's roughly constant on one side and constant on the other side. And we realized that if we could prove this really was roughly one constant on one side and roughly another constant on the other -- so that we really had this kind of constant height gap between the two sides -- then that would be enough to give us the control on the martingale we needed. That would give us step one.

And this machine that Oded had built would then churn through steps two and three.

So we proceeded to prove what's called the height gap lemma. So we used the exact same steps one, two, and three that were in Lawler, Schramm, and Werner, but we took this martingale to be the function which is the harmonic extension of minus lambda on one side of the path and plus lambda on the other side. And that's something that turned out to be a martingale for continuum SLE and approximately a martingale in our case.
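In symbols (an editor's sketch; lambda is the constant from the Schramm-Sheffield paper, and its value depends on the normalization of the field):

```latex
\[
M_t(z) \;=\; \text{harmonic extension to } D \setminus \gamma[0,t] \text{ of }
\begin{cases}
-\lambda & \text{on the left side of } \gamma[0,t] \text{ and the minus part of } \partial D,\\
+\lambda & \text{on the right side and the plus part of } \partial D,
\end{cases}
\qquad \lambda = \sqrt{\pi/8}.
\]
```

For each fixed z, the process t maps to M sub t(z) is a martingale precisely when gamma is SLE 4, and the height gap lemma says the discrete conditional expectation is approximately this harmonic extension.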

Okay. So and here is a picture of the zero level lines from the boundary which is just kind of pretty. We were also able to describe the scaling limit of this.

So this paper -- I started this with Oded in the fall of 2002 while finishing my thesis, as I mentioned. You know, kind of a busy time. I had my first child coming in December and this thesis I wanted to get done. But it was really so exciting, you know. It was like a drug; I was an addict, you know, coming to work on this. And by, you know, February, March of 2003, we had sort of convinced ourselves it was going to work. And as we came up with this sort of fair distribution of labor, I said, look, I'm still intimidated by these steps two and three. I'll just do step one, and you write up steps two and three, and we'll divide it up that way.

And you know, it turned out that step one was more challenging than I'd anticipated. That ended up being the core of the paper -- 80 pages and highly technical -- and at some point I did have to, you know, return for assistance. And, you know, it was really a very collaborative effort finishing that step one, and very enjoyable.

I want to show you some e-mails from while we were finally finishing up step three in 2006. So three years later, after many distractions, including other joint papers with each other -- we ended up with somewhere between six and eight joint papers together, depending whether you count the ones that have not appeared yet. But, you know, these are fairly mundane e-mails as we go through and try to decide how we're going to handle this step three, you know, what are we going to do? You see Rick's and Stas's names here. Oh, you missed it. They came up. And I hope you all could follow this. You know, at some point we decided we would leave out the identification of lambda until the second paper. Which it is.

That second paper is still pending, but should come out soon. Oded says we can decide to say a few words as a trailer, but show how it is done in the next movie -- I mean paper. At some point he, you know, came up with his own version of the topology improvement. And so at some point we sort of had competing versions of this lemma. And, well, we ended up going with his version, but the problem was we wanted to prove this convergence also when you have other boundary conditions.

So, you know, not just plus and minus lambda, but some other boundary conditions. And in that case, it turns out the path hits the boundary. And this step three actually became very subtle in that case. It was supposed to be the easy step, but, you know, we had to do a lot of work to prove it. And we ended up giving up and deciding we would not prove it for the boundary hitting case: we would prove convergence in the strong sense for the general case, and for the boundary hitting case we would only prove convergence in the driving function metric. And so let's see if I can get this to show some of these discussions. So, you know, I planned to do everything in another paper, but he said this would take a long time; maybe we should make the topology upgrade pending some modular result. I'm not so fearful of using our other technology in doing this all directly. We'll have to rely in some sense on these earlier papers, but I think our primary goal should be to release a clean paper proving the contours, in at least one form, converge to SLE 4. If we want, we can also refrain from submitting it before we are happy that it fits well with the rest of our plans, because you [inaudible] stated explicitly in the introduction -- and, you know, we went back and forth on this. I have about 15 pages of e-mails which are just, you know, these mundane decisions about what we're going to do, how much we're going to prove and what we're going to leave out -- and, you know, the paper already ended up being 130 pages, so what are we going to cut out? At some point, you know, I gave in and decided we wouldn't do the most general, powerful part, but do the part we could actually finish.

And you know -- I read over your version; it's shorter, especially since some of the results could be needed as preliminaries anyway, but it's less general. There's a tradeoff, as we knew, between that and how much can be gained by having a more general and more modular proof. Of course we can pledge to include a more general, modular proof in a later paper, which may even be better in that you could get more attention that way, but the utility has to be discounted based on, well, dot, dot, dot. So I rambled on for some time.

>>: [inaudible].

>> Scott Sheffield: Based on the distribution of the amount of time to completion, the discount interest rate.

>>: [inaudible].

>> Scott Sheffield: Okay. Probably it will never get finished, et cetera. We of course [inaudible] arguments both ways. And Oded's response to this was very terse. He just wrote: right. Since the most specific [inaudible] essentially finishes shorter, let's go with that. And he wrote enough to feel comfortable. And I said, well, okay.

So we went with that. But the longer argument is now something that I'm working on with Nike -- and this is an excellent young student you should all get to know, Nike Sun, who just finished at Harvard and is starting her PhD program at Stanford. She's been working with Amir over the summer, and she's going to spend a year in Cambridge.

But the essential thing we're working on, which ended up on the chopping block of the work with Oded, is to ask: is it possible that just knowing the Loewner driving convergence is actually enough? Yes. Okay. All right. So I'm -- well, okay, almost out of time. Yes. So I'll give a quick description of this.

So first of all, there are these known examples where you can have paths that kind of go up and down and then back inside themselves. So this is a continuous simple path, but it's sort of wobbling up and down in such a way that it looks really close to a straight line in the Loewner sense, and yet, in the strong topology sense, is very far from close -- it's wobbling all over the place. And so what we noted with Oded is that, well, if in fact you knew that you had Loewner driving function convergence in both directions, going forward and backward, then you could rule out this sort of funny business. Because even though the Loewner driving function of this in the forward direction looks normal, in the backward direction it doesn't. And it turns out there is something similar that holds -- although the story gets quickly more complicated -- for non-simple paths, which is essentially that if you can show that for a generic point your path converges to SLE, no matter which direction you parameterize it in, with respect to the Loewner driving function metric, this automatically implies that the whole path converges in the strong sense.

So in fact, all you need is steps one and two, once you've written a 30-page paper giving this general result. But basically now, once you have steps one and two, three follows automatically. Okay. I'll leave it at that.

[applause].

>> David Wilson: I guess are there any questions for Scott or comments?

>>: I think we're opening the floor for questions to the last two talks. Especially if you want to ask one speaker about the other -- [laughter] the speakers want to ask each other. [laughter].

>>: So I have an admission to make. I now move to the other party and I believe that it will be tight, so -- so who thinks it's not tight? [inaudible].

>>: [inaudible].

>>: [inaudible].

>>: Let's decide [inaudible] this question.

>> Scott Sheffield: Well, okay. Maybe I should thank again the organizers and Oded, and it's a pleasure to be here and a part of this. And I think, you know, I've often wondered, if you could do an experiment, what would happen if Oded had just finished his first paper on SLE and left the rest of us to work this out on our own -- how long would it have taken us, what would we have achieved? I mean, it certainly would have been, even with all of us in this room, I think a lot harder. And, you know, unfortunately now in some sense we're going to play out that experiment, since we have to go on without him for the next 10 years.

But, you know, we're fortunate to have had him with us and we're fortunate to have been part of this conference. So thanks to everyone.

[applause]
