>>: Okay, very happy that Lionel is about to tell us about Logarithmic
Fluctuations from Circularity. Please.
>> Lionel Levine: Okay. Thanks [indiscernible]. Okay, so I gave
[indiscernible] several options for what I could talk about and this
was his choice. And he insisted that I don’t combine it with any other
topic because he wants to see the full proof. So, well, we will see
how it goes. Okay, so the title of my topic is Logarithmic
Fluctuations from Circularity and that refers to a growth model called
Internal DLA. I will move this hand out to the corner so Russ doesn’t
get too annoyed. I hope it doesn’t bother him too much hanging out up
there.
What is Internal DLA? Many of you know. It is a very simple model. I
start with n particles at the origin in z2 and one at a time each one
of them is going to perform a simple random walk until it finds an
unoccupied site. In other words a site where there are no other
particles. And once it finds such a site it stays there forever.
So you run n such walks in succession and that will give you a random
set of n occupied sites in z2 and we want to understand the shape of
this random set. So here on the bottom of the slide I am writing out
in symbols what I have written in words on the top. So I define this
random set inductively. Once I have A_n, I define A_{n+1} as the union
of A_n with a single additional point, X_n(tau_n), where these X's are
independent simple random walks in Z^2. And tau_n is a stopping time:
it's the first time that this nth walk reaches a site that is not in
the set A_n.
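The inductive definition just given can be sketched in a few lines of code. This is a minimal simulation of internal DLA, not code from the talk; the function name and the use of Python's random module are my own choices.

```python
import random

def internal_dla(n, seed=0):
    """Grow an internal DLA cluster A_n of n occupied sites in Z^2.

    Each new walker starts at the origin and performs a simple random
    walk until the stopping time tau_n, the first time it reaches a
    site not already in the cluster; it occupies that site forever.
    """
    rng = random.Random(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    cluster = {(0, 0)}                # A_1 = {origin}
    for _ in range(n - 1):
        x, y = 0, 0                   # walker X_n starts at the origin
        while (x, y) in cluster:      # walk until first exit from A_n
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
        cluster.add((x, y))           # A_{n+1} = A_n plus the new site
    return cluster
```

Plotting the resulting set for large n shows the nearly circular shape discussed below.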
So let me show you a simulation of what this does. So particles are
starting here at the origin. There is a question, “Why do we have two
colors”? And I will explain what the colors mean in a minute.
Essentially the red points are the ones that join the cluster earlier
than expected and the blue points join later than expected, so that the
set A_n is the union of all the red and blue squares in this picture.
And you are not seeing the individual random walk steps, but you are
seeing the results when they reach an unoccupied site.
>>: I am sorry, I couldn't find the notation. Can you come back to the
cluster?
>> Lionel Levine: Yeah, so okay, you start with the set A_1; that's
deterministic, consisting of just the origin. Then inductively you
define A_{n+1} as the union of A_n with one more point. And to find
that point, what you do is you run a simple random walk, which I call
X_n because it is supposed to be independent of all the previous walks,
and I stop that walk at the time tau_n when it first exits the set A_n.
So that's the unoccupied site, and I just adjoin that point to the
cluster and I have A_{n+1}.
Okay, so if you run that simulation for longer this is what you get.
And you can see it is strikingly circular. So what this talk
is about is measuring exactly how close this set is to circular. And
if you zoom in on the boundary this is what it looks like. Of course
it's a random set so you will have some fluctuations on the boundary,
but they seem to be quite small. So you have to look pretty hard to
even find a single site like this, for example, which is unoccupied but
all four of its neighbors are occupied.
Okay. So the question is, “Why is the limiting shape a disc and what
is the scale of the fluctuation”? Let me tell you a little bit about
the history of this problem. It goes back to chemists Meakin and
Deutch in the 1980s. And they imagined internal DLA as a process where
you have some corrosive particles, those are your random walkers, and
they are diffusing in some medium. And when they reach the boundary of
this medium there is some surface and they are corrosive so they etch
away a little bit of the surface. And the question they were
interested in answering is, “Can you use a diffusion limited process
like internal DLA in order to etch a certain pattern or a certain
design on your surface”? And they concluded the answer was no, you
can’t make any sharply defined pattern using a diffusion limited
process, because the surfaces you get are just too smooth. So, in our
model that’s reflected by the fact that these fluctuations from
circularity are very small.
And they were a little bit surprised by this finding so they wrote, “it
is also of some fundamental significance to know just how smooth a
surface formed by diffusion limited processes may be”. And they have
this quote that really made me smile when I read it, which shows that
they expected like most of us that when you ask for fluctuations of a
process that’s derived from random walk you expect some kind of square
root fluctuations, or at least some kind of power law. And it is
pretty clear from this quote at the bottom that that's what they
expected. In their notation xi is the scale of the fluctuation and L
is the size of the system.
So initially we plotted log xi against log l, but the resulting plots
were quite noticeably curved. Figure two shows the dependence of log
xi on log log l. And then they found a straight line. And so what
that straight line is indicating is that actually the scale of these
fluctuations is only logarithmic in the system size.
>>: So that was a physical experiment?
>> Lionel Levine: Yes, so I should say another interesting thing about
this paper is that it is in the Journal of Chemical Physics; it doesn't
prove any theorems. It was all done with computer simulations, which I
think in 1986 is very impressive, to find this kind of logarithmic
dependence. Computer simulations back in those days couldn't get to
very large scales.
Okay. So parallel to this more applied literature there is an
independent history in the math literature.
>>: [indiscernible] not that long ago, so.
>> Lionel Levine: It's not that long ago, but with Moore's Law the
order --.
>>: You know there was a video where you could throw bananas at
gorillas from across like cityscapes [indiscernible]. There was no way
you could simulate such a process, especially how far. [indiscernible].
>> Lionel Levine: Well there is a good joke here about gorillas and
bananas, but sadly I am not finding it.
Moving on. So mathematicians studied this process, some would say
unaware, I think, for a while of this literature in chemistry, starting
with Diaconis and Fulton, who realized this process as a special case
of an addition law of sets in Z^d. So what they defined was a way to
take two finite subsets of Z^d and add them, and the sum is a random
set whose cardinality is the sum of the cardinalities of A and B. And
internal DLA is the special case of this operation when you take the
singleton set consisting of just the origin and keep adding it to
itself. And so motivated by this idea of Diaconis and Fulton, Lawler,
Bramson and Griffeath the next year proved the limiting shape: that in
fact it is a disc in Z^2 and a ball in higher dimensions.
So this is saying that if the number of particles I start with is, say,
the closest integer to pi r squared, so I would expect it to fill up
a disc of radius r, then with probability 1 for all sufficiently large
r we have that this random set contains a disc of radius one minus
epsilon times r and is contained in a disc of radius one plus epsilon
times r.
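In symbols, my reconstruction of the shape theorem as just described, with B_s denoting the disc of radius s:

```latex
n = \lfloor \pi r^2 \rceil : \qquad
\mathbb{P}\big(\, B_{(1-\epsilon)r} \cap \mathbb{Z}^2
  \,\subset\, A_n \,\subset\, B_{(1+\epsilon)r}
  \ \text{for all sufficiently large } r \,\big) = 1
\quad \text{for every } \epsilon > 0 .
```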
And Lawler improved these bounds a few years later to show the
fluctuations are at most order r to the one-third, up to log factors,
and said a more interesting question is whether the errors are o of r
to the alpha for some alpha less than one-third. So that's the
question we are going to answer today.
Okay. So the theorem which my co-authors and I proved, independently
and at the same time as two French mathematicians, Asselah and
Gaudilliere, is that in fact you can get these fluctuations all the way
down to a constant times log r. And I write log squared here, but I
should really update this slide. So their first paper had log squared,
but then in a subsequent paper they improved it to log. So these are
really two independent proofs of the same theorem. This is in two
dimensions, and you can ask what happens in higher dimensions. It
turns out there the surface is even smoother: there you can get down to
square root of log fluctuations.
>>: It isn’t just [indiscernible], so it’s not clear that it’s
smoother.
>> Lionel Levine: That's it. So okay, so it is a question of whether
there really are fluctuations that are this large. And actually in
higher dimensions we now know the answer is yes: the square root of
log r is the right answer. That's a recent posting on the arXiv by
Asselah and Gaudilliere. And in two dimensions it's still open. It's
at least root log r, but we think it's log r.
Okay, so I am going to try to give you a picture of the proof. So what
goes into the proof? Well there is an initial ingredient which doesn’t
take into account the overall circular shape and it’s just kind of a
local lemma about what happens if you zoom in at a point on the
boundary, which says that certain formations, which we call thin
tentacles, are unlikely to occur.
So a thin tentacle, I think I have a picture on the next slide, but
it’s something that looks like this; where you have a point that
belongs to the cluster an. Can everyone see this? But not too many
nearby belong to it.
So once we have ruled those out as unlikely, then we are going to, for
each direction, define a martingale which
will detect fluctuations from circularity in that direction. And the
fact that you can't have thin tentacles strengthens these
martingales. Basically the martingale is going to be
defined by summing a discrete harmonic function over the cluster, and
we are going to choose a function that blows up at zeta.
So the fact that if zeta is in the cluster that a lot of nearby points
also have to be in the cluster really means that when you sum your
discrete harmonic function that blows up there it really is getting a
large contribution from all these points near zeta. And okay, I will
say more about that.
And then lastly there is a kind of self-improvement character to the
argument, or maybe bootstrapping is another way to put it. So starting
from the fact that your cluster is roughly circular you can deduce that
with high probability it has to be even more circular. And you could
keep going in that way, all the way until the log scale. And that’s
when this bootstrapping breaks down.
Okay, yeah. So here is the picture of the thin tentacle. Okay, so
precisely what is a thin tentacle? It's a point z which belongs to
the cluster A_n, but the number of points in a ball of some
radius m around z that also belong to the cluster is small. So it's
less than a small constant d times the volume of the ball. And we
show that these are exponentially unlikely in m squared in higher
dimensions and m squared over log m in two dimensions.
>>: [indiscernible].
>> Lionel Levine: Well, either one. I mean you can just union bound
it. These are tiny probabilities; imagine d is fixed. It will be
fixed in the proofs.
>>: So the m and n are related how?
>> Lionel Levine: The only requirement is that this ball doesn’t
contain the origin. So, yeah.
So let me maybe comment on why you get this m squared in the exponent
here. So if you think about you know, what’s the most naive way to
form a thin tentacle? Well here’s your point z which you want to
belong to the cluster. Here is the origin and you know your cluster
looks something like this. But you have to get all the way out to z.
Well naively you could just, you know, form a path directly to z and in
order for z to belong to the cluster you have to fill in all these
points of this path.
So imagine you have random walks that just, you know, the first one
visits the beginning of the path and fills it in, and then the next one
walks straight up the path and fills in the next one. So your random
walks just walk straight up and fill in these points. And if this
tentacle has length m, that's going to take, you know,
order m squared random walk steps to get m particles to fill up this
path in succession. So just this most naive way of building a tentacle
already, you know, has this probability. So you certainly are not
going to improve on this m squared.
Okay. So now I’ll explain what the colors meant in my simulation. So
those encoded early and late points in the cluster. So we are going to
measure earliness and lateness in units of the radius of the cluster.
So we will say that a point is m-early, like this point here, if at
the time n equals the closest integer to pi r squared it's already
occupied, even though this point is outside the ball of radius r plus
m. And likewise this point here is l-late because it's still not
occupied even though it's inside the ball of radius r minus l.
Okay. So these are the same definitions in symbols that the picture
showed. Okay, and here are the events that we want to show are
unlikely. So we want to show that if m and l are bigger than a
constant times log n then it's unlikely that any point in A_n is
m-early and it's unlikely that any point in this ball is l-late.
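A restatement of the two definitions in symbols, as I read them off the slide, with n the closest integer to pi r squared:

```latex
n = \lfloor \pi r^2 \rceil : \qquad
z \text{ is } m\text{-early} \;\;\text{if}\;\; z \in A_n \text{ but } |z| > r + m;
\qquad
z \text{ is } \ell\text{-late} \;\;\text{if}\;\; z \notin A_n \text{ but } |z| < r - \ell .
```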
Okay. So here's the overall structure of the argument with this
bootstrapping self-improvement step. So there are two lemmas, and
they are almost symmetric. One is that if there are no l-late
points and m is bigger than a large constant times l, then with high
probability there are no m-early points.
The other one is the other way around. If there are no m-early
points then with high probability there are no l-late points, but now
the arithmetic is different. You get some savings here:
you only need l to be at least the geometric mean of m and log n. So
if you imagine applying these two lemmas in succession and iterating,
if you start with there being no l-late points and you apply both
lemmas, then you get that with high probability there are no square
root of C squared l log n late points. So the lateness scale is
decreasing until you get down to the scale where l is a constant times
log n.
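The bootstrapping can be summarized as a recursion. Applying the two lemmas in succession turns "no l-late points" into "no l'-late points" with l' the geometric mean of a constant times l and log n, so (a sketch, with constants merged into C):

```latex
\ell_{k+1} = \sqrt{C\,\ell_k \log n}
\quad \Longrightarrow \quad
\ell_k \searrow \ell_* \ \text{where}\ \ell_* = \sqrt{C\,\ell_* \log n},
\ \text{i.e.}\ \ell_* = C \log n ,
```

which is exactly the log-scale fixed point where the iteration stops improving.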
So another way to say this, you know, sometimes iterating makes you
suspicious because you think secretly there is some constant blowing
up. Another way to say the same thing is suppose l and m are the
maximum lateness and earliness occurring in any cluster A_1 up through
A_n. So these are lower case, but they are random variables.
So you have this random point (l, m) and we want to show that it lives
close to the origin, so there are no late and early points. So what we
can do is say, "Well, as a first step the cluster has to be path
connected". So there can't be any extremely early points, okay. So
this rectangle in green up here has a probability of 0. You can't
have an n-early point in the cluster A_n.
Okay. And then the two lemmas say that, well, one lemma says that
this kind of half-infinite rectangle is unlikely and the other one says
that this kind of half-infinite rectangle is unlikely. So you can
write down a logarithmic number of rectangles here that are all very
unlikely, and then you get to, you know, this white region where the
maximum earliness and lateness that occurred by time n is at most a
constant times log n.
Okay. So what goes into the proofs of these lemmas? The basic
ingredient is a martingale which detects earliness and lateness near,
or in, a particular direction. So fix a point zeta in Z^2, and then we
are going to take a discrete harmonic function, which I will define
precisely on the next slide, but it approximates this continuous
harmonic function, which is essentially just the real part of 1 over
zeta minus z. So in this picture it has a pole at zeta and its level
lines are circles.
Now, so the pole at zeta is really
important here, because if we take this harmonic
function and sum it over the cluster we want it to detect whether there
are points occupied near zeta. So, I mean, that's the reason we take
it to have a pole at zeta. Of course that also causes problems. When
you try to make a discrete harmonic function out of this thing you
find that you can't make it discrete harmonic everywhere in the whole
of Z^2. Discrete harmonicity will fail at a few points near zeta,
which means that if you want this process to be a martingale then you
need to stop your particles from reaching these bad points. So we are
going to define a certain set omega zeta where our particles are going
to stop.
Okay. So this gets slightly technical, but, I mean, the
details are really important, so I decided to put them in. The
simplest case is when zeta is, say, positive real. So then we define
H zeta as just the difference of two adjacent values of the potential
kernel. The potential kernel is this thing that acts like the Green
function for simple random walk, but because simple random walk is
recurrent you need to take a difference here. In three and higher
dimensions this would be simpler: you could just delete this first
term and take the Green function itself. So here X_n is a simple
random walk and you are just counting the total expected number of
visits to the site z. Okay, but that's infinite in two dimensions, so
to normalize it and keep it finite you take the expected number of
visits to 0 up until time n minus the expected number of visits to z.
Okay. So what are the relevant properties of this a(z)? It's
discrete harmonic except at the origin, and its asymptotics are really
well known. It behaves like a constant times log absolute value of z,
plus a constant, plus a very small error term. And I mean this is
kind of what all these circularity arguments have in common: somewhere
buried in the argument you are going to use this kind of asymptotics
for the potential kernel.
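For reference, the standard asymptotics being invoked here for the potential kernel a(z) of simple random walk on Z^2 (the explicit constant is the well-known value from the literature, not from the slide):

```latex
a(z) = \frac{2}{\pi}\,\log|z| + \kappa + O\!\left(|z|^{-2}\right),
\qquad
\kappa = \frac{2\gamma + \log 8}{\pi},
```

where gamma is Euler's constant.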
So in our case this H zeta we define as a gradient, a difference of
two consecutive values of this shifted potential kernel. And since
log absolute value of z is the real part of log z, this H zeta
is like the real part of 1 over zeta minus z, plus this small error.
>>: You don’t get an even better error?
>> Lionel Levine: Actually you do get an even better error, but we
don't need it. So that was the case that zeta is positive real, and in
general you can do something similar where you take a linear
combination of three different values of the potential kernel. It's
exactly the same, just messier.
Okay. So I defined the harmonic function that we are going to use, but
now I have to define the set where we are going to stop. And the set
where we are going to stop is essentially a ball, but you know there
are lots of things that are very close to a ball and it's important to
take exactly the right one.
So the one we are going to take is essentially a level set of the
harmonic function. What we do is take this function, which is
defined on lattice points, and extend it linearly along edges of the
square grid. So now we are working with this one-dimensional grid
instead of this set of discrete points. And okay.
And now take the connected component of the origin in the set which
is this grid, minus the point zeta, of all the points where h zeta is
at least 1 over 2 norm zeta. So what does this look like? Here's the
particular example zeta = 6 + 4i. Okay, so the boundary of this omega
zeta consists of a whole bunch of points, typically in the
middle of these grid segments, where this function h zeta is exactly
equal to 1 over 2 norm zeta. And then it also has zeta itself as a
boundary point.
So the important arithmetic here is we have cooked up this set so that
h zeta is discrete harmonic on the whole set and all these boundary
values are this 1 over 2 norm zeta, except the boundary value at zeta
itself is of constant order, between 1 and 2. And the value at the
origin will also be important. That's 1 over norm zeta.
Okay. And then because this Green function, or this potential kernel,
has these good asymptotics, you can show this set omega
zeta is really close to a ball. It only differs by a constant
amount in the radius: it contains a ball of radius norm zeta minus a
constant and is contained in a ball of radius norm zeta plus a
constant. Okay.
And the other thing, very closely related to this first lemma, that
you need to know about this set omega zeta is a mean value
property. So if you take your harmonic function and you sum it over
the lattice points in omega zeta, so you sum h zeta of z minus h
zeta of 0; if this
sum were an integral then in the continuum you would get just 0 by the
mean value property of harmonic functions. You can't expect to get
exactly 0 in the discrete case, but you get something very small, only
like log of zeta.
Okay. So we are almost done with the kind of set up and we can
actually get to the proof. So the last piece of set up is that we have
got this discrete time martingale, but we really want to have a
continuous time martingale. So what we can do is instead of running
discrete random walks we can run Brownian motion on this grid.
And we can define our --. So our martingale was defined in discrete
time by just summing the discrete harmonic function over the cluster.
Now to make it a continuous time martingale we just add one more term
which is also the value of the discrete harmonic function at the
currently active --. Wherever the currently active walker is we add
the value of the function there.
Okay. So then we have a continuous time martingale and the reason we
want that is to be able to use the martingale representation theorem
which says, “Well we can represent this martingale m zeta as a random
time change of a standard Brownian motion”. So this time change is the
quadratic variation of the martingale so the limit here is over all
refinements of the interval from 0 to n.
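In symbols, my reconstruction of the last two slides: the continuous-time martingale adds the value of H zeta at the currently active walker beta(t), and the representation theorem writes it as a random time change of a standard Brownian motion B:

```latex
M_\zeta(t) = \sum_{z \in A(t)} \big( H_\zeta(z) - H_\zeta(0) \big)
           + H_\zeta\big(\beta(t)\big),
\qquad
M_\zeta(t) = B\big(S(t)\big),
\quad
S(t) = \lim \sum_j \big( M_\zeta(t_{j+1}) - M_\zeta(t_j) \big)^2,
```

where the limit is over refinements 0 = t_0 < t_1 < ... < t_k = t of the time interval.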
Okay. So now we can see how the proof actually works. We want to
prove this lemma which said that if there are no l-late points and m
is like a large constant times l, then with high probability there are
no m-early points.
So what we are going to do is break up the thing we want to show is
unlikely into a union of a bunch of events Q_{z,k}. So Q_{z,k} is the
event that z joins the cluster at time k, z is m-early (this here is
just a reminder of the definition of m-early), no previous point is
m-early, and finally no point is l-late. Okay. And now given z and k
we are going to pick a martingale to use to show Q_{z,k} is unlikely.
And so that means we have to pick a zeta. Where is the pole going to
be located? It will be located close to z, but slightly further
from the origin.
So the picture is we have this early point z and we have our
martingale M zeta, and we are going to place the pole (here is the
origin) just slightly along the ray from 0 through z, but
slightly further along.
>>: So zeta is not really a lattice point?
>> Lionel Levine: I don’t know. Probably it should be a lattice point
so let’s take the closest lattice point to this.
Okay. So how are we going to show this event is unlikely? Well, okay,
maybe read the bottom line first. The goal, where we are trying
to get, is a large deviation for this Brownian motion; and the Brownian
motion came from the time change in the martingale representation
theorem. So basically we are going to somehow show that this event
would imply that this Brownian motion, if you run it for a certain time
s, exceeds s somewhere in the time interval from 0 to s, and that's an
event that's exponentially unlikely in s. And the time s that we are
going to take is logarithmic in n. So in the end we will get something
that is polynomially unlikely in n.
Okay. So how do we get there? Well observe it would be enough to show
two things. So, on this event we will show that with high probability
this martingale is pretty large and also with high probability not that
much time has elapsed. Right?
Okay. So do people agree that all we need to do is show 1 and 2?
Right? So, just to say it one more time, on this bad event we are
going to show the martingale is likely to be large and the elapsed time
is likely to be small. And when you put those together you get that
this Brownian motion had to become very large in a very small amount of
time.
>>: No late point is used for the fact that the martingale has to be
large?
>> Lionel Levine: Yes. Yeah, so James raised a good question. Where
do we use the no late points? So I will be sure to point that out.
It’s coming up.
So okay. So why does the martingale have to be large? Well, remember
we have this [indiscernible] estimate that says thin tentacles are
unlikely. Okay. So we know that with high probability, so we
have got this point z which is our early point, and not only does z
belong to the cluster, but a large number of particles nearby belong to
the cluster, specifically a constant fraction of the area of this ball.
So this distance here is order m. So a constant fraction of
the area of this ball B(z, m) has to be filled up already by the
cluster. So here we have order m squared points in this ball B(z, m)
that also belong to the cluster A_m, or I mean it's A_k. And okay.
And how much does each contribute to the martingale? So remember our
H zeta was like the real part of 1 over zeta minus z. So it decays
like 1 over the distance. Okay. So each of these points contributes
--.
>>: [indiscernible].
>>Lionel Levine: So all the --. The sign is always positive. So the
negative --. In this half plane our function is positive; in this half
plane it’s negative.
>>: I didn’t know that all these points are --.
>> Lionel Levine: Ah, ah, okay. Because of our stopping rule, so
remember we are stopping our points when they exit this level set,
omega zeta, that’s the level set of a positive number. So everything,
right, it’s a harmonic function. It’s positive everywhere on the
boundary so it’s positive on the inside. So this, this --.
>>: So when you say you stop it, so what does that mean?
>> Lionel Levine: Okay, so it means when we run our [indiscernible]
process we will stop our random walk either when it reaches an
unoccupied site or when it reaches the boundary of this set omega zeta.
>>: [indiscernible].
>> Lionel Levine: It just stays there.
>>: So it could be in between two lattice points?
>> Lionel Levine: It could be in between two lattice points and there
could be many particles accumulated there.
>>: Okay. And so your theorem about thin tentacles are unlikely lemma
applies in this case also, is that what you are using?
>> Lionel Levine: Which lemma applies in this case?
>>: Thin tentacles are unlikely?
>> Lionel Levine: So I think we are using it for the actual set ak, but
it’s also true of this modified set. So yeah, that’s a good question.
So I would have to look at the proof again to know at which set we are
using it for.
Yeah, I mean let me come back to that if there is time. I think
that's true of both sets.
Okay. So Russ's original question is, "How do we know that this
contribution is positive"? So the point is this function h zeta is
constant, equal to 1 over 2 norm zeta, everywhere on the boundary
except at zeta, where it's order 1. And it's discrete harmonic, so
certainly everywhere on the inside it's nonnegative.
Okay. Right. So we are trying to argue that the martingale is large
on this event, and the point is that because of no thin tentacles we
know that a lot of nearby sites are occupied, order m squared of them,
and each contributes order 1 over m to the martingale because this
function decays like 1 over the distance. Okay. So we get a total
contribution of m squared times 1 over m, so of order m.
Okay. So the one thing we need to be careful about is: why do we know
that contribution isn't swamped by the rest of the picture? Because,
you know, this tentacle is tiny, right? m could be as small as log n.
So we have got a huge number of other sites out here that are
also all contributing to the martingale. Why don't they just bury
this contribution? And that's where James's question comes in. This
is where we are going to use no late points.
So the thing to observe is, well, what does no late points mean? It
means that if I go just a little distance inside, I know that all the
sites in a slightly smaller ball were already occupied. So this
smaller ball, B(0, r - l), is completely occupied.
Okay. Now what's the contribution of sites in this ball to the
martingale? Well, it's very small by the mean value property. So this
entire ball B(0, r - l) contributes only a constant times log r,
because its total contribution is the sum of the discrete harmonic
function over the ball.
Okay so finally, well okay that takes care of most of the points. But
there’s still a fair amount left so why can’t they destroy this
contribution of order m? Well what’s the worst that could happen?
>>: The summand function is h?
>> Lionel Levine: h zeta.
>>: This is not a negative?
>> Lionel Levine: Yeah, so the martingale is defined by summing: the
M zeta of n is the sum over the cluster of h zeta of z minus h zeta of
0. And remember that h zeta of 0 is 1 over norm zeta, and h zeta
everywhere else on the boundary is 1 over 2 norm zeta, except at zeta.
Okay. So that means from a single point, you know, the biggest
negative contribution we can get from this sum is minus 1 over norm
zeta. So okay, so we only have --. If we started with pi r squared
points --. Can people see down here? No. Okay. So --.
>>: Remind me, how big is norm zeta?
>> Lionel Levine: Like r.
>>: And how does that relate to m?
>> Lionel Levine: m is tiny. m is like potentially as small as log n.
Okay. So we started with pi r squared particles and most of them get
swallowed up by this ball. Okay. So how many are left? Well, about r
times l, okay. And so these are the leftover particles that could have
a negative contribution, but each contributes, you know, the worst it
could be is if it contributed minus 1 over norm zeta.
Okay. So, in total you will get at most, yeah, so this is basically
minus 1 over r each; so in total you will get at most minus a constant
times l. Okay. And remember our arithmetic here was that m is larger
than a large constant times l. Okay. So this is not enough: this is
smaller in absolute value than that.
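The bookkeeping of the last few slides fits in one line (a rough sketch with constants suppressed, my arrangement of the three contributions just discussed):

```latex
M_\zeta \;\ge\;
\underbrace{c\,m}_{\text{no thin tentacle near } \zeta}
\;-\; \underbrace{C \log r}_{\text{filled ball } B_{r-\ell}\text{, mean value property}}
\;-\; \underbrace{C' r \ell \cdot \tfrac{1}{r}}_{\text{leftover particles}}
\;=\; c\,m - O(\ell + \log r),
```

which is positive of order m once m exceeds a large constant times l (and l is at least log n).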
Okay. All right. So that was one of the things we needed to do. We
know that the martingale is large, so we also need to know that the
elapsed time was pretty small.
>>: [indiscernible].
>> Lionel Levine: Well, I mean we use this, I mean we use this --.
>>: [indiscernible].
>> Lionel Levine: Right, right. So if you are asking why we had to go
to continuous time, we didn’t really have to. It was convenient, but
you could do it without that device.
Okay. So we want to show the elapsed time is small, and so here what
we are going to do is take this quadratic variation time change and
look at its increments. And what we will show is that there are
independent standard Brownian motions such that these increments are
bounded above by the exit times of these Brownian motions from a
certain interval.
>>: Why do the martingales’s [indiscernible].
>>: [indiscernible].
>> Lionel Levine: So what's going on here is pretty simple. What we
are going to see is, "Well, what happens if we look inside this time
interval from i to i plus 1"? That's when we just have a single
particle walking around that hasn't yet found an unoccupied site.
So if we take i an integer time and t in the interval [0, 1] and we
look at the martingale at time i plus t, that's just
the value of this h zeta at the current location of the walker. So I
write beta t for the Brownian motion on the grid.
Okay. And then I use the maximum principle for discrete harmonic
functions. So I don't know where this Brownian motion currently is,
but I know it's somewhere inside the set omega zeta. So this h zeta of
beta t, well this is really beta_{i,t}, it's the ith Brownian
motion; so it's at most this b_i and at least this a_i, where these
are just the min and max over the current cluster. So this is by the
maximum principle.
>>: [indiscernible].
>> Lionel Levine: No I did mean to put boundary because --.
>>: Are all the boundary lines the same except for 1 zeta?
>> Lionel Levine: Except for zeta, yeah.
>>: So you know what these boundaries are?
>> Lionel Levine: Um, well okay. For omega zeta, they are the
same. But this is over the --. Well, this is the boundary of this
random set A i, so we haven't --.
>>: Oh, right.
>> Lionel Levine: Yeah, so the boundary values are all different. But
the point is, because h zeta is harmonic on A i, because A i is always
contained in omega zeta, the maximum and minimum over A i are still the
same as over the boundary, by the maximum principle. Okay.
Okay. So this means that, you know, if you combine this with M of t
equals B of S of t, so our martingale is a random time change of
Brownian motion, it's saying that if I take B of
S of i plus t, minus B of S of i, that's bounded between a i and b i. So
that says, if I want to come up with these independent
Brownian motions, I should take this B i to be, you know, B i of u equals
B of S of i plus u, minus B of S of i. And that will be bounded between
a i and b i, and these are independent for different i by the strong Markov
property. So yeah, so that's all that's going on, basically the
maximum principle. Okay, and right, the --.
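In symbols, the step just described might be written as follows (my own transcription of the argument, with $M$, $S$, $a_i$, $b_i$ as on the slides):

```latex
% For t in [0,1], the maximum principle gives
\[
a_i \;\le\; M(i+t) \;=\; B\bigl(S(i+t)\bigr) \;\le\; b_i ,
\]
% so the path u \mapsto B(S(i)+u), started from M(i), cannot leave
% [a_i, b_i] before u = S(i+1) - S(i).  Hence
\[
S(i+1)-S(i) \;\le\; \tau_i \;:=\;
\inf\bigl\{u \ge 0 : B(S(i)+u)-B(S(i)) \notin (a_i - M(i),\; b_i - M(i))\bigr\},
\]
% and the shifted motions B(S(i)+u) - B(S(i)) are independent over i
% by the strong Markov property.
```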
>>: Sorry, at this point you could ask why are you using it now? It
seems more difficult because you are using Brownian motion. So at this
point you can ask, “What is the benefit of using the Brownian motion”?
>> Lionel Levine: I guess the benefit is that it's easy to control the
large deviations of the first exit time of a Brownian motion from an
interval. So right, I mean we want --.
>>: No, but you are just controlling the increments of the martingale,
I mean --.
>> Lionel Levine: Okay, so to prove this bound you had to work slightly
harder to get to the Brownian motion, but now the payoff is we just
have to do large deviations for this simple one-dimensional thing about
Brownian motion exiting from an interval. And that's something relatively
straightforward. So in particular you can show that the expectation of
e to this exit time is at most 1 plus 10 ab, where minus a and b are
the endpoints. And okay, so then writing the time
change as a sum of its increments, that's bounded by this product of
expectations of e to the Brownian exit times.
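Written out (again my reconstruction, with the constant 10 as quoted): if $\tau$ is the exit time of a standard Brownian motion started at $0$ from the interval $(-a, b)$, then

```latex
\[
\mathbb{E}\bigl[e^{\tau}\bigr] \;\le\; 1 + 10\,ab ,
\]
% and summing the increments of the time change S and using the
% independence of the exit times \tau_i,
\[
\mathbb{E}\bigl[e^{S(t)}\bigr]
\;\le\; \prod_{i<t} \mathbb{E}\bigl[e^{\tau_i}\bigr]
\;\le\; \prod_{i<t} \bigl(1 + 10\,a_i b_i\bigr).
\]
```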
And then okay, it's relatively easy to estimate these a i's and b i's. And
what you get out of this is that on this event Q, this expectation
of e to the quadratic variation times the indicator of Q is at most
[indiscernible].
Okay. And then [indiscernible] you get this. Okay, so I showed you
the proof of one of the two lemmas. The other one superficially looks
very similar; it's all the same ingredients. Maybe I cheated because I
showed you the lemma that doesn't give you the extra savings that makes
the whole thing work. So I can try to go through that, but in case I
run out of time with that, let me just say what changes in higher
dimensions.
Well, you have to choose this function h zeta differently. It looked
like we were using some complex analysis with this 1 over z, but really
what we are using is something like the Poisson kernel. It's like a
discrete Poisson kernel. So essentially the harmonic function we are
using is, you know, 1 at the point zeta, 0 at all the other boundary
points, and take the harmonic extension of that.
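For intuition, here is a small numerical sketch of that harmonic extension (my own illustration, not code from the talk): boundary value 1 at a marked boundary point zeta, 0 at the other boundary points, harmonic inside. I use a square grid region for simplicity and plain Jacobi iteration.

```python
# Discrete harmonic extension on a grid: h = 1 at one boundary point (zeta),
# h = 0 at all other boundary points, and h equals the average of its four
# neighbors at every interior point.  Jacobi iteration converges to this h.
import numpy as np

def harmonic_extension(n=21, zeta=(0, 10), iters=5000):
    h = np.zeros((n, n))
    h[zeta] = 1.0  # boundary condition at the marked point
    for _ in range(iters):
        # replace each interior value by the average of its four neighbors
        h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                                + h[1:-1, :-2] + h[1:-1, 2:])
        h[zeta] = 1.0  # re-impose the boundary value each sweep
    return h

h = harmonic_extension()
print(h.max(), h.min())  # values stay in [0, 1] by the maximum principle
```

The boundary rows and columns are never touched by the interior update, so they keep their prescribed values; the interior converges to the discrete harmonic function, and in particular all its values lie between the boundary minimum 0 and maximum 1.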
So that's what you could use in higher dimensions. And when you go to
estimate the quadratic variation you find that it's actually constant
order in higher dimensions instead of logarithmic. And what this means
about the fluctuations is they are dominated by a different effect. So
rather than having kind of large portions of the boundary that are a
little bit further out or a little further in than you would expect,
it's really these tiny features that are dominating the fluctuations.
The thin tentacles, which as we saw at least heuristically, can grow to
this length square root of log n by this very simple mechanism of traveling
straight up every time.
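As a back-of-the-envelope version of that heuristic (my reconstruction, not a slide from the talk): for a tentacle to reach length $k$, roughly each of $k$ successive particles must walk straight up past the ones already stacked, a straight run of length $j$ having probability $(2d)^{-j}$, so

```latex
\[
\mathbb{P}(\text{tentacle of length } k)
\;\approx\; \prod_{j=1}^{k} (2d)^{-j}
\;=\; (2d)^{-k(k+1)/2},
\]
% and among n particles such an event first becomes likely when
\[
n \,(2d)^{-c k^{2}} \;\asymp\; 1,
\qquad\text{i.e.}\qquad
k \;\asymp\; \sqrt{\log n}.
\]
```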
And so that's formalized in this paper of Asselah and Gaudillière, who
show that you actually do get fluctuations of order square root of log n
in all dimensions. And this leaves a gap only in dimension 2, where we
believe that the fluctuations are log n, not square root of log n, but
the lower bound is still open.
Okay. I will stop there, and if you really want to see lemma 2 I can do
that.
>>: Question. So what is the strategy that the other people use?
>> Lionel Levine: Okay. I can't say I read their paper line for line,
but roughly what they seem to do is define a different process whose
fluctuations are easier to handle and then construct some complicated
coupling between that process and IDLA. So the difficulty of that
paper is in verifying this coupling. And that portion of it I have
no real insight into, so I won't say anything further about it.
>>: For the [indiscernible].
>> Lionel Levine: Yeah, d equals 1, right. So it is a fun exercise
that the fluctuations there are of order square root of n.
>>: I will ask the other question. So is the log n related to the maximum
of [indiscernible]?
>> Lionel Levine: Right, okay, right. So another way that you can try
and analyze the fluctuations: instead of being very picky and asking
for the absolute furthest lattice point that is occupied, or the
absolute nearest that's unoccupied, you could sort of take local
averages, and then we can show that the fluctuations scale to a variant
of the Gaussian free field. So, okay, so the question is, “Is this log
that we are getting related to the maximum of the Gaussian free field?”
I mean, I think morally the answer is yes, but --.
>>: [indiscernible]. When you say the fluctuations converge
[indiscernible].
>> Lionel Levine: So we prove it on the, you know, constant order
scale. So you really are averaging over a constant fraction of the
picture. But maybe it's true on smaller scales and that's --. Maybe
that's what you would need to look at to answer [indiscernible]
question.
So I guess I will mention, related to that, another question which I
think is still open, although the most recent paper of Asselah and
Gaudillière does say something about it, which is fluctuations in a
fixed direction. So what we have bounded here are fluctuations in the
worst case direction. Suppose you just look along the x axis and you
want to know, you know, what's the furthest site along the x axis
that's occupied?
Okay. So we believe the answer there is the square root of log n in
two dimensions, and I think Asselah and Gaudillière have one side of
that, but not the other.
>>: And constant in higher dimensions?
>> Lionel Levine: Yes, constant in higher dimensions.
>>: So is that completely consistent with the picture [indiscernible].
>>: So I just want to make sure I understand what you were just saying
about averaging, and that's what they did. In this picture you have a
positive and negative coloring, but you said that [indiscernible] on
an average instead you would have [indiscernible]. Is that what you
are saying?
>> Lionel Levine: Yes, so the thing that scales to Gaussian free field
is you take this picture, but you shade it red or blue according to how
late or how early the point arrived to the cluster.
>>: Okay, if there are no further questions I think we are done.