>> Yuval Peres: Okay. We're very happy to have Charles Smart from NYU, soon
moving to MIT. He'll tell us about scaling limits of the abelian sandpile.
>> Charles Smart: Great. Thanks for the invitation. All right. So I want to
talk about scaling limits of the abelian sandpile. So before I actually start
on the math, I should mention that most of what I'm going to be talking about
is joint work with Wesley Pegden, who is at NYU, and a little later, if I
actually get to it, it's also joint work with Lionel Levine, who is here now
and moving to Cornell.
All right. So to start, I want to give a little bit of an introduction. I
know that probably most of you are already familiar with the sandpile, but I
guess for our viewers at home, I'm going to try to start from a little more
basic point.
Okay. So I'm going to start by talking very briefly about internal DLA, which
I think of as sort of the natural precursor to the sandpile. You guys may
disagree about this, but somehow this is how I ended up thinking about it.
Okay. So what is internal DLA? So internal DLA or internal diffusion limited
aggregation is sort of a growth process for subsets of the integer lattice.
So this is a diffusion-driven growth process for subsets. Can you all see
this? For subsets of the integer lattice Z to the D. So D dimensional integer
lattice.
What's the idea? Well, you have some subset. Maybe this thing is going to be
in the way now. Can you actually see if I write down here? Probably not.
Hmmm. Okay. Well --
>>: Two more lines we can see.
>> Charles Smart: Two more lines. Okay, I'll try to stick to this little
area. I guess I should try to very quickly get to the part where I want to
actually show the important picture, and then we can get rid of the screen.
Okay.
So what's the idea?
Well, you take some subset of the lattice. I can only draw two-dimensional
pictures, but the idea is you have some subset of the lattice. Some subset A
of Z to the D.
Okay. And you grow this set by sort of dropping a token somewhere on one of
the vertices inside and you run a random walk. You let that token walk
randomly on the lattice until it exits the set. Okay. And then you add that
exit vertex to A. Okay. So the idea is let's say we start with A 0 just being
the set that contains the origin, and then we let the next set be, well, the
first set union the exit vertex of a random walk from the origin.
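The process just described is easy to simulate. Here is a minimal sketch (the
function name and parameters are my own, not from the talk):

```python
import random

def internal_dla(num_tokens, d=2):
    """Internal DLA on Z^d: release tokens from the origin one at a
    time; each token walks randomly until it exits the current set,
    and the exit vertex is then added to the set."""
    origin = (0,) * d
    cluster = {origin}  # A_0 = {origin}
    for _ in range(num_tokens):
        x = origin
        while x in cluster:  # random walk until the token exits
            i = random.randrange(d)
            x = x[:i] + (x[i] + random.choice((-1, 1)),) + x[i + 1:]
        cluster.add(x)  # adjoin the exit vertex to A
    return cluster
```

Each token adds exactly one new vertex, so after `num_tokens` releases the
cluster has `num_tokens + 1` sites; for a few thousand tokens it already looks
quite round.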
Okay. So you iterate this process. Because a random walk on the integer
lattice sort of looks like Brownian motion if you zoom out. We expect the set
will be spherical looking. In fact, that's exactly what happens.
And, well, if you rescale it properly, you know, it really does converge to a
sphere. So what is the proper rescaling? Okay. So we define this sort of
rescaled set A bar sub N to be, well, the set of points X in R to the D such
that N to the 1 over D power times X is in A sub N.
Okay. Well, then for a certain notion of convergence of sets, what we get is
that these rescaled guys converge to, well, the ball of radius -- I think this
is it -- the ball of radius the volume of the unit ball to the minus 1 over D
power. Almost surely. I think this is right. Okay. For some appropriate
notion of convergence here.
Okay. So I want to think of the sandpile as just a deterministic version of
this. Okay. So the idea is, well, instead of actually having a random walk,
okay, what we're going to do is we're going to -- okay. So I can imagine this
has sort of a token arise at a point, then it jumps to a random neighbor.
That keeps happening until that token gets outside the set. Okay. Instead,
what I'm going to do is I'm going to have it be the case that a vertex sort of
just stores tokens until it has at least as many tokens as there are
neighbors in the lattice, okay, and then when it has at least that many tokens
it just distributes them all at once to all of its neighbors.
So the process is -- the way you build a sandpile is you add -- you start with
sort of N tokens or chips at the origin.
And then you iterate this process where if some vertex X and NZ to the D has at
least 2 D tokens, then we topple X, okay, by removing 2 D tokens from X and
adding one to each of its neighbors.
And there are 2 D neighbors on the D-dimensional lattice. Okay. So what
happens? We keep doing this until it stops. We get some final configuration
SN mapping Z to the D into, well, the possible numbers of tokens for a stable
configuration.
So every vertex has to have between 0 and 2 D minus 1 tokens left over. Okay.
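The toppling rule can be sketched directly; this is a hedged, self-contained
version (the function name is mine), not code from the talk:

```python
from collections import defaultdict, deque

def stabilize(n, d=2):
    """Abelian sandpile on Z^d: start with n chips at the origin and
    topple any vertex holding at least 2d chips (remove 2d, give one
    to each neighbor) until every vertex holds at most 2d - 1."""
    chips = defaultdict(int)
    origin = (0,) * d
    chips[origin] = n
    work = deque([origin])
    while work:
        x = work.popleft()
        while chips[x] >= 2 * d:  # topple x as often as needed
            chips[x] -= 2 * d
            for i in range(d):
                for step in (-1, 1):
                    y = x[:i] + (x[i] + step,) + x[i + 1:]
                    chips[y] += 1
                    if chips[y] >= 2 * d:
                        work.append(y)
    return {x: c for x, c in chips.items() if c > 0}
```

Toppling conserves chips, so the final values sum to n, and every final value
is at most 2d - 1.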
So if you did a good job of actually making internal DLA deterministic, then
you'd expect that the result would be a circle. And that maybe which value you
take when you're inside of the circle would be kind of randomly distributed.
Right?
And, of course, this fails spectacularly. So we have this picture of what the
final sandpile looks like. Maybe I'll look at a low res one first which my
laptop doesn't want to display.
So this is just one with a thousand chips. So if you start with a thousand
tokens at the origin, and you run the process -- you get something like this,
okay, but as you get -- as you add more, it gets a little more refined and so
you get sort of a, kind of stranger and stranger looking picture.
>>: Grade of -- shade of gray?
>> Charles Smart: That's right. Yeah, yeah. So here the shade of gray is
just the density. So black in this case is just three chips and the dark gray
is two and the light gray is one, and white is zero. And I guess maybe it's a
little difficult to tell but the light colored specks in here are light gray.
The only 0s are outside.
No, wait a minute. No, no, there are 0s in there. There are 0s in there.
Okay. So then finally, if I just load a really high res one we get a picture
like this. So this is 16 million starting nodes.
We get this picture, which, okay, so I want to point out a couple of things.
First of all, it is definitely not a circle. Right? You can see that there
are sides. So it's flat on the bottom. I guess maybe full screen mode is not
so great.
Okay. So it's actually flat on these sides here. So you get this what looks
to be like a 12-sided figure. Except that these things maybe are slightly
curved.
Okay. And then, of course, on the inside it doesn't look at all randomly
distributed. There are these regions where you just get solid 3s, and then if
you zoom in, okay, to some region where there aren't solid 3s, what you see is
that rather than having sort of solid 3s what you have is some sort of solid
periodic pattern in these other regions.
Okay. So this is not at all what you would like to get if you were actually
just trying to make a deterministic version of internal DLA.
Okay. So what the heck did Wes and I prove? Well -- oh, man, this is just --
this is just not working at all. I apologize. Well, the pictures will soon go
away and --
>>: 18 million.
>> Charles Smart: This was 32 million. Okay. Maybe that's the problem. I
think, actually, the hard drive is failing on this thing. So that's probably
what's going on.
Okay. So I guess I'm done with the screen.
[laughter].
>>: [inaudible] $1 million.
>> Charles Smart: So I don't know how -- great.
[laughter].
>> Charles Smart: I said the wrong thing. So what the heck did Wes and I
prove? I guess actually one thing I didn't point out when they had those
pictures up there is that as you add more chips, the picture gets more and more
refined. It doesn't change that dramatically as you add more chips. In fact,
it seems like the image is actually converging to something.
And this is what Wes and I did. So we figured out that in fact there really
is a limiting image. I want to explain how we did that.
Okay. So let me talk about that. So in order to explain the proof I need to
give a little bit more background about the sandpile.
>>: Before the proof, could you clarify the statement -- what kind of convergence do you mean?
>> Charles Smart: Exactly. That's part of what I need to talk about. Let me
first give a little more background. I guess, you know what, I can tell you
the notion of convergence now, if you want.
Okay. So if you remember, for internal DLA in order to get the set to
[inaudible] we had to rescale. So you can do the same kind of rescaling here.
Okay. So there's this final configuration, after you run the dynamics: when
you add N chips you get some map from Z to the D into 0 through 2 D minus 1.
Okay. But this map sort of the support of this map is expanding. It's blowing
up as N goes to infinity. Okay. So we need, in order to keep it bounded, we
have to rescale it.
So the idea is, okay, well we're going to rescale like this. So we're going
to set S bar sub N of X to be SN of, well, H inverse X. Okay, where H is N to
the minus 1 over D. Okay? So now this guy is SN bar. Well, this is a
function from H Z to the D into 0 through 2 D minus 1. Okay. So they're
defined on sort of ever-finer lattices.
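In one dimension this rescaling is easy to check numerically: with h equal to
n to the minus 1, the rescaled configuration turns out to have total mass 1
and support of bounded width. A hedged sketch (names are mine, not from the
talk):

```python
def stabilize_1d(n):
    """Sandpile on Z (so 2d = 2): n chips at the origin; any site with
    >= 2 chips topples, sending one chip to each of its two neighbors.
    Returns the list of chip counts and the index of the origin."""
    size = 2 * n + 3  # generous: the final support has radius about n/2
    s = [0] * size
    mid = size // 2
    s[mid] = n
    changed = True
    while changed:
        changed = False
        for i in range(1, size - 1):
            if s[i] >= 2:
                t = s[i] // 2  # topple site i as often as possible
                s[i] -= 2 * t
                s[i - 1] += t
                s[i + 1] += t
                changed = True
    return s, mid

def rescaled_mass_and_radius(n):
    """Mass of the rescaled configuration (h times the chip total,
    with h = 1/n) and the radius of its rescaled support."""
    s, mid = stabilize_1d(n)
    h = 1.0 / n
    mass = h * sum(s)
    radius = h * max(abs(i - mid) for i, c in enumerate(s) if c > 0)
    return mass, radius
```

Since toppling conserves chips, the mass is exactly 1; and the chips spread to
roughly n/2 sites on each side, so the rescaled support radius is about 1/2.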
>>: So there's some rounding?
>> Charles Smart: Some rounding -- right?
>>: [inaudible] in the lattice.
>>: No, it is because X is --
>> Charles Smart: So like this argument is supposed to be in this --
>>: I'm sorry. All right.
>> Charles Smart: So you hit it with H inverse, and it goes into here.
So if you do this, actually, well, SN is going to stay bounded. So the
support of this guy is compact. Okay? Sort of independently of N. Okay. And
even better, you can check that the mass is constant. Okay. So the idea is if
I -- here I'm integrating over R to the D. I'm sort of abusing notation a
little bit. The idea is if I have some function on the lattice I'm just going
to make it piecewise constant in a little square around each lattice point.
Okay. So if I rescale like this, the mass stays constant. And the support
stays bounded. So what Wes and I can prove is this. The theorem is that,
well, there exists a function S which is in L infinity on R to the D. It's
bounded. It's a measurable function. Its values stay between 0 and 2 D minus
1. And these guys converge weak-star in L infinity to S as N goes to infinity.
Okay. So what the heck does this mean? Okay. What this means is -- so that
is -- okay, if I integrate against a test function, I get convergence.
So the integral of SN bar times some test function phi converges to the
integral of S against that test function as N goes to infinity, for all test
functions.
So all continuous functions, or you could even just take smooth functions on R
to the D that have compact support. Okay. So this is a weak notion of
convergence. It says nothing about what's actually happening to the values.
Okay. But it really is the correct notion of convergence for this problem.
And the reason is that, well, the values of these SN bars really don't
converge to anything. The reason is if you look in the sandpile, if you
actually look at those images, well, okay, there are regions where SN bar is
piecewise constant. It's like there are these regions where it's solid black,
where it's all 3s, and those regions it does converge point-wise, but in other
regions you have these rapidly oscillating periodic patterns.
And in those regions, as you rescale -- so as SN gets bigger and as N gets
bigger and bigger, okay, well, those patterns stay the same but you're sort of
zooming out on the pattern.
And what happens is in the limit, these patterns are replaced by their average
value rather than any of the individual values. And that's exactly what this
notion of weak star convergence captures.
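Here is a toy illustration of that averaging effect in one dimension (my own
construction, not from the talk): a period-two pattern alternating between 3
and 1 on an ever-finer lattice has no pointwise limit, but integrated against
a test function it converges to the integral of its average value 2.

```python
import math

def pattern_against_test_function(n, phi):
    """Riemann sum of the alternating pattern 3, 1, 3, 1, ... on the
    lattice (1/n) Z restricted to [0, 1], against a test function phi."""
    h = 1.0 / n
    return sum((3 if k % 2 == 0 else 1) * phi(k * h) * h for k in range(n))

def phi(x):
    """A smooth test function supported (up to the endpoints) on [0, 1]."""
    return math.sin(math.pi * x)

# As n grows, the sums approach the integral of 2 * sin(pi x)
# over [0, 1], which is 4 / pi, even though the pattern oscillates forever.
values = [pattern_against_test_function(n, phi) for n in (10, 100, 10000)]
```

The adjacent 3 and 1 terms pair off against twice the test function, leaving
only an error of order 1/n; this is exactly the mechanism by which weak-star
convergence replaces an oscillating pattern by its average.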
>>: Here's a question. So here's a [inaudible] convergence which it seems
ought to be also true, whether you do this or not, that the proportion of 0s
and 1s and so on converges.
>> Charles Smart: Hmm. Okay. So, yes, that's true. But I have no idea how
to prove it. But that's definitely true. On an open set of full measure,
that's true. But I don't know why.
>>: Compared from the pictures or --
>> Charles Smart: Yeah. Yeah. There's a little bit of additional secret
evidence that we have that hopefully I'll get to later.
>>: This is just the average height used?
>> Charles Smart: That's right. So you just get the average height.
So of course I want to try to explain how you prove this.
Okay.
I guess I didn't mention this before, because I got distracted talking about
the statement of the theorem. But so the dynamics, so the reason this thing is
called the abelian sandpile is because the dynamics are abelian. So if you
remember, when I wrote down the definition I said, okay, you iterate this
process where if some vertex has at least 2 D chips you topple it, right? But
that doesn't specify the order in which you're supposed to topple the vertices.
But it turns out that the order doesn't matter at all. So this is what is
meant by us saying the process is abelian. Because all that actually matters
in determining the final configuration is just the number of topplings that
occur at each vertex.
So I just want to mention some more background before giving the proof. So
the final configuration, SN, goes from Z to the D, not into all of Z, but into
the interval 0 through 2 D minus 1. Okay, this guy is determined by the
odometer function.
VN. So this guy is a map from Z to the D into, well, the nonnegative
integers. So what does this guy count? This just counts the number of
topples. So the number of topples at each vertex.
And so from this odometer function, so from this function which counts the
number of topples, you can calculate very easily the final configuration. The
final configuration is it's just the initial configuration, so N chips at the
origin. So I'm using this for the indicator function of the origin. Plus the
redistribution caused by the odometer function. So this is the discrete
Laplacian of the odometer function.
So what's this guy? So the discrete Laplacian is just the sum over all the
neighbors in the grid of the difference between that neighbor and the center.
Like this. Can you guys actually see this past the podium? Okay. Great. So
because the process is abelian all we actually care about is the odometer
function. And you can find the odometer function by essentially just -- you
can run the dynamics and what is that doing? So the idea is you're just taking
the maximum of all of the legal odometer functions. So if you have some legal
odometer functions, so some number of topples you've achieved so far, legally
according to the dynamics, okay, well either the configuration is stable or you
can increase the odometer function somewhere. You can topple again. So you
just kind of build out the odometer function until you can't make it any
larger. And that gives you the final odometer function.
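Both objects can be computed together by running the dynamics, and the
identity just described -- the final configuration equals N chips at the
origin plus the discrete Laplacian of the odometer -- can then be checked
exactly. A self-contained sketch (names are mine):

```python
from collections import defaultdict, deque

def run_dynamics(n, d=2):
    """Stabilize n chips at the origin of Z^d, also recording the
    odometer v(x) = number of topplings at x."""
    chips = defaultdict(int)
    v = defaultdict(int)
    origin = (0,) * d
    chips[origin] = n
    work = deque([origin])
    while work:
        x = work.popleft()
        while chips[x] >= 2 * d:
            chips[x] -= 2 * d
            v[x] += 1  # count the toppling
            for i in range(d):
                for step in (-1, 1):
                    y = x[:i] + (x[i] + step,) + x[i + 1:]
                    chips[y] += 1
                    if chips[y] >= 2 * d:
                        work.append(y)
    return chips, v

def discrete_laplacian(f, x, d=2):
    """Sum over the 2d lattice neighbors y of x of f(y) - f(x)."""
    return sum(
        f[x[:i] + (x[i] + step,) + x[i + 1:]] - f[x]
        for i in range(d) for step in (-1, 1)
    )
```

For every site x, `chips[x]` equals n times the indicator of the origin plus
`discrete_laplacian(v, x)`: the final configuration really is determined by
the odometer alone.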
Okay. So Lionel and Yuval in a paper together -- this one was also with Anne
Fey -- what they did was figure out basically what the dual of that sort of
construction is.
Rather than trying to take the maximum of all the legal odometer functions,
here what you do instead is you take the minimum of all the stabilizing
odometer functions.
So what they figured out is that this final odometer function -- so this is --
this is -- wait. This is Fey-Levine. What they figured out was that you can
write the odometer function as the minimum -- so rather than maximum -- over
all integer-valued functions W on the lattice, such that two things are true:
Well, it has to be nonnegative. And it has to be stabilizing. So what does
that mean? That means if you take the starting configuration, and you
redistribute it according to W, then this better be less than or equal to 2 D
minus 1 everywhere.
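This stabilizing constraint, and the minimality behind the minimum, can be
probed numerically. A hedged, self-contained sketch (the names
`toppling_counts` and `redistributed` are mine): the toppling counts produced
by the dynamics satisfy the constraint, and lowering them at any toppled site
violates it.

```python
from collections import defaultdict, deque

def toppling_counts(n, d=2):
    """Number of topplings at each site when n chips at the origin of
    Z^d are stabilized."""
    chips = defaultdict(int)
    w = defaultdict(int)
    origin = (0,) * d
    chips[origin] = n
    work = deque([origin])
    while work:
        x = work.popleft()
        while chips[x] >= 2 * d:
            chips[x] -= 2 * d
            w[x] += 1
            for i in range(d):
                for step in (-1, 1):
                    y = x[:i] + (x[i] + step,) + x[i + 1:]
                    chips[y] += 1
                    if chips[y] >= 2 * d:
                        work.append(y)
    return w

def redistributed(w, x, n, d=2):
    """(n chips at the origin plus the discrete Laplacian of w),
    evaluated at x: the chip count at x after redistributing
    according to the counts w."""
    val = n if x == (0,) * d else 0
    for i in range(d):
        for step in (-1, 1):
            y = x[:i] + (x[i] + step,) + x[i + 1:]
            val += w[y] - w[x]
    return val
```

Reducing w at a site x by one raises the redistributed value at x by 2d,
pushing it above 2d - 1; so no nonnegative function sitting below the odometer
can be stabilizing, which is the content of the minimum.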
Okay. So they figured this out. And this is basically the starting point for
our proof of convergence.
So what's the idea? So in some sense this is like the dual of the dynamics,
like the linear programming dual of the dynamics. But you can also think of
it as sort of a discrete elliptic obstacle problem. So the idea here is,
well, we want sort of -- we want W to be -- we're looking at the minimum of
all W, which are sort of discrete superharmonic, with a certain right-hand
side, which are also greater than or equal to some obstacle.
So this is a discrete elliptic obstacle problem, because we have an obstacle
here. And here we have sort of an elliptic differential inequality. Albeit a
discrete one.
>>: Planning to say the words least action --
>> Charles Smart: Oh, that's right. This is called the least action
principle, yes. Yeah. The idea being that W is sort of measuring the total
action, right? So the number of topples is in some sense a measure of action
or activity. And then you're trying to do the least amount possible.
>>: Since this is going to be on the Web, do you also want to mention Dhar?
Is it related? Dhar's.
>> Charles Smart: I don't know.
>>: Is it different? Dhar's least action principle?
>> Charles Smart: Okay. So we have this least action principle. So the idea
is, well, if I were just to sort of draw a picture what do these things
actually look like?
So I have 0 and the function sort of comes in and it lifts off 0 and has this
cusp here. So this is sort of what a typical VN looks like. The idea is it's
going to be actually equal to 0 outside of some radius.
And then at the origin it's going to have this cusp because of this N delta 0
sitting here. And everywhere else it's going to be curving up as much as it
possibly can. How much it's allowed to curve up is determined by this 2 D
minus 1 here.
And so it's trying to curve up as much as possible, because if you can curve up
more, you can get lower. So if I were to increase the curvature, the allowed
curvature, I could get a function which stayed in here and just was much more
curved and still had sort of the same angle in the cusp.
Okay. So it's sort of trying to bend up as much as possible here so it can get
as low as possible. Okay. But it's pushing up against this obstacle, the 0
obstacle here. And that's sort of what actually keeps it up.
Okay. Great. So now already -- I guess I already talked about the rescaling
for the sandpile. So there's this way you need to actually rescale the final
configurations in order to get convergence. And along with that rescaling
there's a natural way to rescale the VNs.
So if you remember, we had H being N to the minus 1 over D. SN bar is SN of H
inverse X. Okay. And then the natural rescaling that you get for V, okay,
for the odometer function looks like this. So we do H squared VN of H inverse
X.
Okay. So why is this the natural rescaling? It's the natural rescaling
because it preserves this calculation here. Right? So what it does is it lets
us write SN as, well, N times a direct at the origin, plus the H discrete
Laplacian of VN, where this guy, this is exactly what you would expect.
So this guy is just defined to be 1 over H squared times the sum over all the
neighbors of X of the difference between you and your neighbor.
Okay. Okay. Great. So there's this natural rescaling for V that preserves
this. And we still get the least action principle for this guy. So this guy
is still the minimum over a certain set of functions, but now we have to change
which lattice we're working on, so now W goes from HZ to the D to H squared Z.
And we have the same sort of constraints.
So something like that. So we've got a rescaled least action principle that
goes along with it.
Okay. Great. So I guess I need to hurry up a little bit. So how do we get
convergence? So the idea is what we're going to show is that there's a unique
limit for these rescaled odometer functions.
Okay.
And from that, using this calculation here, okay, we'll get that there's
actually a unique limiting sandpile. Okay. And the idea is well just to
import techniques from PDE, from the theory of elliptic obstacle problems.
Okay. So there's sort of two steps to this process. So the first thing I need
to do is I need to, well, actually check that there is sort of a limit at all
for these guys.
So how does that work? So I first want to talk about convergence along
subsequences. All right. So this has sort of two ingredients. So the first
ingredient is so Lionel and Yuval already did a lot of the work for us in that
they showed that these guys, these functions, are sort of equibounded away from
the origin.
So what they showed is that so VN is bounded independently of N in any compact
set K that doesn't contain the origin. So any subset of R to the D minus the
origin.
Okay. So they showed this in their paper on the strong spherical asymptotics.
I guess they didn't quite state it like this, but what you have there implies
this very easily.
Okay. So we have this and from the least action principle here, we know that,
well, the discrete Laplacian, the H discrete Laplacian of VN is between 0 and 2
D minus 1. Okay. So what does this tell us? Well, it tells us that the value
is bounded, and the discrete Laplacian is bounded.
In any compact set away from the origin. And so from that what we get --
well, okay. So from that we get, basically for free from the standard theory
of finite difference schemes, regularity estimates for VN.
So from this plus some standard theory of finite difference schemes, we get
something like -- so one thing you can prove, for example, is that if you
picked this compact set K, then you have something like this.
You get some kind of Hölder continuity. So, for example, it's easy to prove
this with exponent one-third for the Laplacian. This is for all X, Y in K.
>>: Theory [inaudible].
>> Charles Smart: Well, okay, I mean it's like there are a bunch of textbooks
that you can dig this out of. Or, for example, papers by Trudinger and Kuo, I
think; they have some nice papers on actually much more general finite
difference equations like this. They give you things like this, or if you
want, this is just kind of a fun exercise. You can do it in a few hours.
It's not that bad.
Okay. But you get this. And this is enough to apply the Arzelà-Ascoli
theorem. So this is enough to run Arzelà-Ascoli. Okay. So what you get is
that, well, for every sequence, let's say N J going to infinity, okay, there
is a subsequence N J K going to infinity and a function V which is continuous
on R to the D take away the origin -- if you remember, all of this was away
from the origin -- such that the rescaled guys, V N J K, converge locally
uniformly to V, as J goes to infinity, sorry, as K goes to infinity.
So from these sort of standard finite difference techniques we get at least
convergence along subsequences.
So the question is: is this continuum limit unique? So how do we do that?
Well, sort of as a first attempt at trying to get uniqueness, I feel like sort
of the natural thing to do is to look at this least action principle. We know
the VNs satisfy this.
So some sort of continuum version of this is going to be inherited by this
limit. Okay? And hopefully we can find a version, a continuing version that's
inherited which gives uniqueness.
So sort of a first attempt would be to just set H to 0 in here. So we just
send H to 0. So we could try to show this. So we could try to show that V is
equal to V star, the inf over all W, where this W is going to be continuous on
R to the D take away the origin, such that W is greater than or equal to 0,
and the delta plus the Laplacian of W is less than or equal to 2 D minus 1,
where this time I mean an actual Dirac delta. So maybe I'll put a little hat
over it. So this is a Dirac mass. Not just the indicator function of the
origin.
So it's the limit of this guy. Right? So we get basically for free that the
limiting V satisfies these constraints. Okay? And because V star is computed
by taking the inf over all of those things, we get immediately that V star is
less than or equal to V. So we get this for free. And now we ask, is V star
greater than or equal to V? And the answer is sort of immediately no. Or at
least it better not be if the pictures are correct. And the reason is very
obvious.
The reason is just that, well, this equation here, this constraint is radially
symmetric. Whereas if you take any function W satisfying these two
constraints, you can rotate it any amount around the origin and it's still
going to be in this class. So this infimum is going to be radially symmetric,
but we know the limit isn't. This can't be right. We know from the
pictures --
>>: We don't know rigorously.
>> Charles Smart: Right. We don't know rigorously. That's right. So I don't
know how to prove this. Actually, well, okay, I might know how to prove this
is not true. I'm not totally sure.
Okay. So in any case, this is a bad first attempt. Okay? So how do we make
it better? So what we're going to do is just use a simple idea from viscosity
solution theory. So I think I told the story before at MIT, but, you know, for
me this is kind of a natural thing to do because I learned about this whole
problem from Lionel and Yuval who gave talks at a conference about problems in
viscosity solutions theory. After I saw this, it was quite natural to sort of
go back to NYU and talk to Wes and try to make this work. It's like if you're
taking an algebra class and you just learned about Sylow subgroups this week;
you know that probably the homework exercises require that you use the Sylow
theorems.
So I was, well, I have to use viscosity solutions somewhere. So how do I use
them? Okay? So what do we do? I mean, it's a pretty naive thing to do. So
the idea is: Let's suppose I take a smooth test function. So suppose I have
some smooth function on R to the D. And suppose this smooth function touches
the limiting V at some point X that's not the origin? So the picture is you
know I have this V sitting here, okay, going up to infinity, and I touch it
from below at some point X, which is not the origin, where the vertical
asymptote of V is.
Okay? Okay. So what I want to ask is: Well, sort of what does this tell me?
About the Hessian of V at X? So this is what I want to know. All right.
Well, I don't really know very much about V. But I do know -- well, I know one
thing I know that in sort of a little neighborhood of X, well, I know that
these approximations, okay, are converging uniformly, right? So I can pick a
little finite approximation here.
Some V bar N J K, for some big K, that's sort of sitting close by. By maybe
adjusting phi a little bit I can get it to touch the approximation. I want to
use this to sort of read off information about the Hessian. If you're sort of
careful about -- you blow up at this point and you're careful how you choose
these representatives, what you can prove is the following: Okay. So
what we can prove is that -- right. So here's what we can show. We can show
that for every epsilon greater than 0, there is a function U, a global
function from Z to the D to Z, so a global integer-valued function, such that
two things are true. So the discrete Laplacian of U is less than or equal to
2 D minus 1 everywhere. So for all Y.
And this function majorizes the quadratic form you get from the second
derivative, from the Hessian.
Okay.
So we get something like this.
>>: The epsilon --
>> Charles Smart: Oh, yes. Minus. So I have to subtract off. I have to lose
a little something. I think actually we don't need this. We think we can do
this with a linear factor. But I know this is true at least. So if you
subtract off a little bit, if you make phi a little steeper here, okay, then
you can zoom in and rescale properly and get sort of a global integer function
which beats it everywhere, whose curvature is not too big, whose discrete
curvature is not too big.
So this is for all Y in Z to the D. Okay. So now I'm going to turn this into
a definition. So I want to actually write this as a set. So I want to think
of this -- I'm going to define gamma to be the set of symmetric D by D
matrices such that this is true where I put this matrix here.
And what I get from this calculation is that the Hessian is in this set gamma.
Okay? Okay. So this gives me a new candidate for the obstacle problem. Okay.
So is everyone happy with this definition?
There are two quantifiers, so it takes a little while to read. So the idea is
if you bend the quadratic form down a little bit then you can find this global
integer-valued function that beats it. Sort of like a global odometer
function that beats it.
Okay. So now what's the new candidate? So this is the second attempt. Okay.
So this new candidate is we're going to inf over all of the Ws that are
continuous on R to the D take away the origin such that, well, W is
nonnegative, and we're going to keep this constraint with the Dirac. So we
also know this. And then we're also going to include this thing -- I'm going
to use kind of funny notation. I'm going to write D squared minus of W is
contained in gamma. Okay. And what this minus is supposed to mean is that,
well, you're only supposed to interpret this in the subsolution sense. So let
me just explain what this notation means.
This means that if phi is smooth and touches W from below at some point X, then
the Hessian of phi at X is in gamma. So that's what this funny notation means.
>>: No one can see this.
>> Charles Smart: Oh, really?
>>: Some people cannot see it.
>> Charles Smart: Okay. In the interests of time, they have to ask me
afterwards. Because that's okay. Unless you really -- unless everyone wants
me to rewrite it? Can I just move this?
>>: No. No.
[laughter].
>> Charles Smart: Even if I just hit it really hard?
>>: No, don't --
>> Charles Smart: Don't do that, okay. But you can at least see this
notation, right? So this just means anytime you can touch W with a smooth
function from below, the Hessian of that smooth function at the contact point
has to be in gamma.
Okay. So the last -- I guess, am I allowed to go five minutes over, since I
started five minutes late?
>> Yuval Peres: You have until 4:30.
>> Charles Smart: No one likes talks more than 50 minutes. Okay. So I claim
actually now we get uniqueness. So from the calculation that I just talked
about, we know again that V star has to be less than V, okay. So now the
question is just: what about the other direction?
Okay. So what we're worried about is somehow this fails. So let's suppose
for contradiction that we have some point Y where this guy is strictly
smaller. Okay?
Well, so now at this point I need to, I mean, sort of brush under the rug even
more details. So sort of standard regularity theory for the Laplacian lets us
do the following. So if this happens, then sort of by a theory for the
Laplacian, which I don't want to talk about, okay, what we get is that we may
select a point Z which is not the origin, such that -- what happens? Well,
the second derivatives of V and V star exist. And even better, what I know is
that the second derivative of V star is strictly less than the second
derivative of V at Z in the matrix order.
Okay.
So meaning that this minus this is positive definite.
So the picture is something like this. So maybe I should mention, this is
nontrivial. So why am I saying these things exist? And the reason is, well,
okay, we're just infimizing over continuous functions satisfying some
differential constraint. And sort of a priori we have no idea if this thing
is even differentiable, and we're asserting there's second derivatives.
It takes a fair amount of machinery to make that work. But it's something you
can just read out of a textbook. Well, not a very nice textbook, but still a
textbook.
>>: [inaudible].
>> Charles Smart: So I think actually the only place I know -- you can get
this out of Hörmander, or you can read textbooks on singular operators and
sort of get it from there. At least those are the only two places that I know
of to find it.
Okay.
So what's the picture?
The idea is okay we have our V sitting here and
somehow V star has managed to get below it. So V star is coming in here like
this, right? But in order for this to happen, there has to be some point,
let's say right here, where V star is curving up more than V above it. Right,
because if it weren't, the inequalities, I would have to go the other way.
So there has to be this point where this thing is steeper, is curving up more.
Okay. So the idea now is, well, if I took sort of a piece of this guy around
here, some piece where it has this second derivative, and I translated it up
here, it would sort of cut in like this.
>>: You said --
>> Charles Smart: V star is strictly less than V.
>>: Right. But then in your picture you've got the second derivative of V star
to be bigger than that of V.
>> Charles Smart: That's correct. That's what's supposed to happen.
>>: [inaudible].
>> Charles Smart: Yes, I apologize. Yes. So it's supposed to be like this.
>>: Much better.
>> Charles Smart: So if I took a piece of this guy and moved it up, it would
sort of cut through like that.
Okay. So now if these guys were both somehow -- if these were the finite
difference problems, if these were actually the V bar sub N, then this would be
a contradiction, because what I'd be able to do is sort of patch V with this
piece of V star, which would allow me to make V lower and still be in the same
class, meaning that I've contradicted the least action principle.
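The principle being invoked is, I believe, the least action principle for
sandpiles in the form popularized by Fey, Levine, and Peres; in my own notation
(with $s$ the initial configuration on $\mathbb{Z}^d$ and $v_n$ the odometer),
it says the odometer is the pointwise smallest admissible candidate:

```latex
% Least action principle (hedged paraphrase, my notation): the odometer
% is the pointwise minimum over all stabilizing toppling functions.
\[
  v_n \;=\; \min\Bigl\{\, w : \mathbb{Z}^d \to \mathbb{Z}_{\ge 0}
      \;\Bigm|\; s + \Delta w \le 2d - 1 \,\Bigr\}.
\]
% So exhibiting a strictly smaller admissible w (for instance by patching
% in a translated piece of another admissible function) is a contradiction.
```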
Okay. But we can actually make that happen. So what do we do? Well, we pick
some very close finite approximation of V. So you pick some approximation of V
that's very close here. So this is V bar sub NKJ for large J. Okay. And then
what I do is I take the global approximation of the Hessian of V star at Z.
So if you remember, we had this U which satisfied this. So I have this
integer-valued guy.
And what I do with this guy is I just rescale it down the same way this guy's
rescaled, right? So I just define this U bar sub -- well, okay. So if I
define this guy to be rescaled the same way V is, okay, then I can take some
sort of translated copy of this thing, and it's going to be sort of sitting
here like this, and this is going to be U bar NKJ, plus a little linear
factor.

And I can put these two things together and break the least action principle,
and that gives you a contradiction.
So therefore we actually know that these two things are equal. Now, I'm
totally out of time, but the point is, well, the rest of the convergence just
follows immediately from this, because the limiting image is just the
Laplacian of the limiting odometer function.
>>: So convergence, local uniform convergence of the odometer functions?
>> Charles Smart: Yes.
>>: And then all you can deduce from that is the weak star convergence?
>> Charles Smart: That's right. That's right. So we know that there's this
limiting odometer function and it converges; actually, okay, in a certain
sense it converges locally uniformly, just because of the singularity that's
forming. But actually the singularity is very nice. In fact, if you sort of
subtract off the difference between the continuum fundamental solution for the
Laplacian and the finite difference fundamental solution, then actually the
convergence is uniform.
>>: Then all you can infer --
>> Charles Smart: All you can infer -- you get convergence and you know that
all along the way the discrete Laplacian is bounded and you're converging with
something with bounded Laplacian.
So all you get from that is weak star convergence. So you don't get anything
like convergence of the proportions of the different values.
>>: So it looks like up until fairly late in the proof we don't know that
there exist any points where V star has a well-defined Hessian. And at this
stage you know there's at least one point.
>> Charles Smart: That's right.
>>: Then at the end you're saying --
>> Charles Smart: But the way the theory actually works is, once you have
bounded Laplacian -- if you have some function which is continuous, on R to
the D or a subset of R to the D, and you know its Laplacian is bounded in the
distributional sense, then this guy is twice differentiable almost everywhere.
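The regularity fact alluded to here is, I believe, a Calderón–Zygmund-type
statement; a hedged paraphrase in my own notation:

```latex
% If the distributional Laplacian of a continuous function is bounded,
% the function has two derivatives in every L^p, locally:
\[
  u \in C(\Omega),\ \Omega \subset \mathbb{R}^d,\quad
  \Delta u \in L^\infty(\Omega)
  \;\Longrightarrow\;
  u \in W^{2,p}_{\mathrm{loc}}(\Omega)\ \ \text{for every } p < \infty,
\]
% and hence (taking p > d) u admits a second-order Taylor expansion
% at almost every point of \Omega.
```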
>>: Any sense of the set of non-differentiability for V star?
>> Charles Smart: It would be nice to know. I know it has measure zero. But
what we'd really like to know is that it's just not twice differentiable, like,
on the boundaries of the cells of the picture, of the sandpile. You have
regions where you think the second derivative is constant, and it only fails
to be twice differentiable on the boundaries between those regions.
>>: But there's also the pattern regions, where it probably also fails on
their boundaries.
>> Charles Smart: Which pattern?
>>: Periodic patterns.
>> Charles Smart: But in the periodic pattern region, the second derivative is
actually still constant in there. Because the patterns are so regular, they
weak star converge to a constant.
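The averaging fact being used here is that a bounded periodic function, viewed
at finer and finer scales, weak-* converges to its mean. In symbols (my
notation, not from the talk):

```latex
% Periodic oscillations average out in the weak-* topology:
\[
  f \in L^\infty(\mathbb{R}^d)\ \ \mathbb{Z}^d\text{-periodic}
  \;\Longrightarrow\;
  f(x/\varepsilon) \;\overset{*}{\rightharpoonup}\; \int_{[0,1]^d} f(y)\,dy
  \quad \text{in } L^\infty(\mathbb{R}^d)\ \text{as } \varepsilon \to 0.
\]
% Applied to a periodic sandpile pattern, the limit is the constant
% average, so the second derivative is constant on that region.
```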
>>: Depending on the boundaries of those patterns --
>> Charles Smart: On the boundaries of all those regions. There's all these
regions in the image where it's not differentiable. This seems to be the case.
>>: So it's got some positive fractal dimension, supposedly?
>> Charles Smart: That's right. It should be like a Sierpinski gasket,
basically, the boundaries, naturally.
>>: All right.
So Charles is here until Friday.
>> Charles Smart: Yes, I'm here through Friday. I'm happy to talk more about
this. He didn't get to the part of the new stuff that we figured out. So Wes
and I started working with Lionel after we did this and figured out a bunch
more exciting stuff. You'll have to come by and ask me about it if you really
want to know.
[applause]