>> Yuval Peres: Good afternoon. We're happy to have Shirshendu Ganguly tell
us about competitive erosion.
>> Shirshendu Ganguly: Thank you, Yuval. So, yeah, so I'll be talking about
something called competitive erosion which is a competing particle system on
graphs and [inaudible] based on joint work with Lionel, Yuval, and Jim. And
so yeah. Okay.
So, all right, so for the model, so like I said, it's a competing particle
system, so you have two kinds of particles and you have some interaction. So
to define the process, you need some underlying data. So what do you need?
So you need a finite graph G, say, with vertex set V and edge set E. And you
need two probability measures, mu 1 and mu 2, on the set of vertices.
So you have a graph, you have two measures, mu 1 and mu 2, on the set of
vertices, and you need an integer K between 1 and -- so suppose the graph has
size N, then you need an integer K between 1 and N minus 1. So you have a graph,
you have two measures, and you have a number K.
And formally this process will be a Markov chain on all K-subsets of the
vertex set. So you have this graph with vertex set V, you have this number K, so you look at all
subsets of V of size K. And formally this process is going to be a Markov
chain on this state space.
So to define this -- so ST is my process -- I need to define how to get ST
plus 1 from ST. This is what
we are going to do for the rest of the talk. So think of all the vertices as
having the two colors, red and blue, and so K was my size of ST, so think of
ST being the set of blue vertices. So the K vertices are going to be blue
and the rest of them are going to be red.
And so I have to define how to get ST plus 1 from ST. So what I do is -- so
ST of size K. So what I do is I add one point, which is XT, and I subtract
one point, which is YT. I'll define what those are. Okay. So yeah. So
first you have to add one point, XT. So I have these two measures, mu 1 and
mu 2. So what I do is I start a random walk on the graph with initial
starting distribution mu 1 and then I wait till it hits this complement of
ST.
So ST of size K, I start my random walk and look at the first site, which is
now red, which it hits, and that I include in my set ST. So ST union XT is
now a set of size K plus 1. And now I [inaudible] of one more element to get
the right size. So I have this other measure that I can still work with,
which is mu 2, so I start a random walk with initial distribution mu 2 and
like wait till it hits the existing set of size K plus 1, all the blue
vertices, and at the first site that it hits, I remove it. So I had K,
I added one point, it became K plus 1, and then I removed one more
point. So I'm back to a set of size K.
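The update rule just described -- add the first red site hit by a walk started from mu 1, then delete the first blue site hit by a walk started from mu 2 -- can be sketched in code. This is a minimal simulation sketch, not from the talk; the graph is a hypothetical adjacency dict, and `mu1_start`/`mu2_start` are assumed callables that sample the two starting distributions.

```python
import random

def erosion_step(adj, blue, mu1_start, mu2_start, rng=random):
    """One step of competitive erosion (sketch).

    adj: dict mapping each vertex to its list of neighbors.
    blue: the current set ST of blue vertices (size K).
    mu1_start / mu2_start: callables sampling the starting vertices of
    the blue and red walks (the measures mu 1 and mu 2).
    """
    blue = set(blue)
    # Blue walk: start from mu 1, walk until it first hits a red vertex,
    # then color that vertex blue (the set grows to size K + 1).
    v = mu1_start()
    while v in blue:
        v = rng.choice(adj[v])
    blue.add(v)
    # Red walk: start from mu 2, walk until it first hits a blue vertex,
    # then color that vertex red (the set is back to size K).
    w = mu2_start()
    while w not in blue:
        w = rng.choice(adj[w])
    blue.discard(w)
    return blue
```

On the six-cycle example from the talk, one step always returns a set of the same size K, with one vertex added and one removed.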
And so this was introduced by Jim around 2003, and so I'll come back to some
of the motivations later.
So now an example, which will make this definition clearer. Okay. So here is
a very basic graph of size six and K is 3, so all the blue vertices are the
set ST. And now in this example, mu 1 and mu 2, which are my starting
distributions, are just point masses at these two points. So think of this
as mu 2 and think of this as mu 1.
So first I start a random walk from mu 1, so it does some random walk, so it
picks this edge, so it's already hitting ST complement in the first step, so
I make this blue. So three became four. And then after that I start one more
random walk from here. Suppose it picked this edge, so it's still on the red
cluster, and then in the next step it hits the blue sites, so this will turn
red. And so in one step I went from this configuration to this configuration,
so the number of blue points remains the same, but they moved around a bit.
So is the definition clearer of the process? Okay, good. So this was a very
general example, and now we will sort of see an example of a graph which
we'll really care about during this talk. And so this is a disk of radius 30
and every vertex is colored independently red or blue, blue with probability
one-third, and so the number of reds is a two-thirds fraction of the [inaudible].
So every vertex is colored blue or red with probability one-third and
two-thirds respectively, independently.
>>: Question. There seem to be more than two colors there.
>> Shirshendu Ganguly: So there is some shading it shows, yeah, yeah. So --
>>: Okay, that's just an artifact.
>> Shirshendu Ganguly: Yeah, yeah. It's some pixel thing. And my mu 1 and
mu 2 are, for this example, just point masses at the north pole and the south
pole. So my blue random walk will start from -- so think of this as in the
complex plane, and if this is the unit disk, it will start from minus i, and
the red random walk will start from i.
Okay. So let's see how this looks. So, see, there is some -- the blue
vertices are getting accumulated near the blue source and the red ones are
getting accumulated near the red source. And they run for some time. So let
me go ahead. Okay. So yeah. So you see that they have come close to each
other and, okay, of course there's some particles here which are still yet to
be colored red, but now they're done. And so you have this like two
different regions of blue and red. And then this is boundary or interface
between them.
And it's sort of like not deviating very much. Like it's roughly smooth. Of
course the size is just 30. So if you make it bigger and bigger, then you will
see even smoother pictures. So the interface is not fluctuating very much.
And also you see what the shape of the interface is like. So you can already
guess what this looks like. There are some fluctuations, but roughly it's not
a straight line. Roughly it looks like an orthogonal circular arc. Right? It
might be clear, it might not be. But okay. So good. So this was a general
example.
Then we saw the example on the disk. And now some background and motivation.
So there are some similarities to a very basic, fundamental growth
model, which is known as internal DLA, but there's just
one kind of particle involved. So you have a graph and you just try to see how
a set of particles sort of grows with time.
So we just have this one version. So IT is your cluster at time T, and to
get IT plus 1 from IT you -- so you have just one distribution, mu 1, you
start a random walk with distribution mu 1 and you wait till it exits the
cluster IT. You have one more point that you add to your set. So at time T,
the size of the cluster is exactly T. So you keep growing by one at every
time step.
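The IDLA growth rule can be sketched the same way; here is a hypothetical minimal simulation on the square lattice Z^2 (my sketch, not from the talk), where each new particle starts at the source and walks until it first steps outside the current cluster:

```python
import random

def neighbors(v):
    """Nearest neighbors of a vertex of the square lattice Z^2."""
    x, y = v
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def idla_grow(nbrs, source, steps, rng=random):
    """Grow an IDLA cluster for `steps` time steps (sketch): each new
    particle starts at `source` and random-walks until it first exits
    the current cluster; that exit site is added to the cluster.  After
    T steps the cluster has exactly T vertices."""
    cluster = set()
    for _ in range(steps):
        v = source
        while v in cluster:
            v = rng.choice(nbrs(v))
        cluster.add(v)
    return cluster
```

Running this with a few thousand particles should reproduce the roughly circular cluster shown in the talk.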
Okay. And so this was first proposed by, I think, Meakin and Deutch around
'86 as a model for several chemical processes like --
>>: I think it's pronounced Deutch.
>> Shirshendu Ganguly: Deutch? All right. Thanks. Electropolishing,
corrosion and -- yeah. And later, so there's a notion of addition of
sets which was invented by Diaconis and Fulton, and this turned out to be a
special case of that. And this was done around '91. Okay. So this is like
one particular version of that. And here this is done on the square
lattice.
And mu 1 you should think of in this example as being the point mass at the
origin. So I'm going to always start my particles at the origin and look at
what happens on the square lattice. So the first particle stays at the origin
because there was nothing there. So the cluster at time 1 is just the origin.
Then I start one more random walk, suppose it picks this edge, so you have
these two points at time 2. You keep doing this, you have size 3 and size 4. So
something like this. So at time T you will have exactly T vertices on the
lattice that you are [inaudible]. Okay.
So there's one more. Okay. So this is an IDLA cluster growing on the square
lattice, and this will run for -- this is about like 3,000 particles. So you
see that it's roughly isotropic. So it's almost circular. And there's
some -- okay, still not circular yet, but like if you run this for long
enough it will be and the boundary is something that has some fluctuation but
not much.
Okay. So it's roughly growing like a ball. Okay. So some comments about
this process. So like I said, so IDLA at time T is exactly of size T. So if
your graph is finite then -- and of size N, then IDLA is not interesting
anymore after time N. So it's usually studied on infinite graphs. And
questions that people ask are about what this [inaudible] cluster shape is
like.
And once you understand that, well, then you can ask finer questions about
what the boundary is, what are the fluctuations. Yeah. So you can ask what
the cluster is, whether it's like a ball and what the boundary is once you
understand the first question well.
Whereas erosion makes sense on a finite graph for all times. And it's a
Markov chain, so you can ask -- all the basic questions about Markov chains
are still relevant in this setting, so you can ask what [inaudible] measure
is like, what is the mixing time, et cetera. Okay.
So some references to what has been known about IDLA. So the first rigorous
proof of the fact that this actually looks like a ball in some sense was by
Lawler, Bramson and Griffeath around '92. And they showed that if the cluster
has size N, then the right radius to look at is R such that pi R squared is N,
because you want to conserve area, and they showed that the ball of radius
R times one minus epsilon is going to be within the cluster, and the cluster
within the ball of radius R times one plus epsilon, with high probability.
But then subsequent improvements were made, first by Lawler himself and
then by two groups, one including Lionel [inaudible], and there was another
competing group. And eventually, so there's now a very, very precise --
>>: That's a lot of Ws.
>> Shirshendu Ganguly: Yeah, yeah, yeah.
>>: You said Scott, but there may be more than one Scott who works in this
field.
>> Shirshendu Ganguly: Yeah. Right. So Lionel, [inaudible] in one group;
the other group was [inaudible]. And now they sort of really understand not
just the order of the fluctuations but even what the scaling limit is.
So there's a very precise understanding of this process.
And there's another setting, which is even more relevant to the case that we
are talking about, which is when you have IDLA but there are now multiple
kinds of particles. So you still have, say, the infinite graph, so you
have infinite room, but there are particles of various kinds which are
trying to occupy the same spots.
So, for example, suppose -- so in IDLA I was starting my particles only from
the origin, but now I can, say, pick two points, minus one and one, and
alternately emit red and blue particles from them. They will not
try to kill each other once they meet. If a red meets a blue, then it will
keep walking till it finds an empty site. So you will have this stuff when
you find an empty site.
And so here is a picture of what IDLA with two sources looks like. So the
centers of the disks are the sources and you have these particles, and the
colors represent the points that the particles came from. So you have these
two sources. So the purple and the green are the places occupied only by
particles from the center of their own disk, and then in the intersection and
on the outside you will have particles of both colors. So there is some here.
So there are some rules about -- so the particles are exchangeable. So once
both particles land here, you can choose which one you want to make walk
again. So the red and the sky blue portions are regions
where you will have particles from both sources coming.
>>:
What's the difference between the black line and the red?
>> Shirshendu Ganguly: So here you can -- so potentially these are the two
regions where you will have -- you can have particles from both sources
occupied.
>>:
[inaudible].
>> Shirshendu Ganguly: Because -- so there -- so if you emit ten particles
from here and ten particles from here, they will -- so you can first make them
walk, and they will walk like independent IDLAs. So they will have these two
balls. So the overlap of both sides will be exactly in the intersection, and
then you can choose which one you want to make walk again, because
this [inaudible] typically will have now two particles.
>>:
[inaudible] what's the difference between light blue and red?
>> Shirshendu Ganguly: No, no, so there is no difference -- no, no, no,
there is no difference. In the sense that if we run the IDLAs independently,
they will first -- so it depends on what time you occupy the sites.
>>: [inaudible] generating the two source cluster in a certain order just to
make a one source cluster, then you make another one source cluster, and then
you take care of the overlap.
>> Shirshendu Ganguly: Yeah, so depending on what time you occupy the site,
this will -- yeah. Mm-hmm. But potentially, because the particles are
exchangeable, you can -- it does not matter which particle is occupying which
site.
Okay. So this is still on the infinite graph and you have two sources. And
now we will sort of switch back to erosion. And so we've defined erosion on
general graphs, but now, for the rest of the talk, we will talk about
very special graphs, which are sort of approximations of smooth domains. So
like the disk, you discretize it. And I will define what smooth means in our
talk.
And at this point I should also remark that we studied this process in a
similar but different setting with Jim and [inaudible]. The main result of
this talk will be joint with Yuval, but there's a different setting which
studies the same process, and that was joint with Lionel and Jim as well.
Okay. So what is the setting? So think of a smooth domain. By smooth I
will mean something which I'll state precisely, and I look at the
discretization of the domain using a very fine mesh, say 1 over N times Z
squared. So think of this as being embedded in the complex plane and look at
the intersection with 1 over N times Z squared and look at the part of the
graph inside it, by which I mean all the [inaudible] completely inside it. So
all the points which are in the interior of the domain and the subgraph
induced by them.
And the class of domains that we'll work with will have analytic boundaries,
which means the boundary has local power series expansions, and equivalently,
the Riemann mapping theorem guarantees that these domains will have conformal
maps going to the disk, because these are simply connected domains. And, in
fact, the boundary will be so smooth that the maps sort of will have analytic
extensions across the boundary.
And for the moment, at least, the setting will be still like -- the
sources will be still point masses on the boundary. So you have a
domain, take two points on the boundary, and they will be
roughly your sources. And the graph will be the intersection of this with a
fine mesh. Of course this is still a little informal, because these points
need not be lattice points and need not lie in the graph, but roughly
this is going to be the setting.
>>: When you talk about extending across the boundary, you're saying you
could extend to an open set that contains it.
>> Shirshendu Ganguly: Yeah, yeah.
>>: Okay.
>> Shirshendu Ganguly: So yeah. So you have this domain U and this disk D, so
there are open sets U1 and D1, containing U and D, so that the conformal map
from U1 to D1 restricts to it. Mm-hmm. Okay. Good. So all right. So the
summary of the setup is: the graph is UN, which is U intersected with 1 over N
times Z squared, and so now I have to give a number K, right? So for erosion I
needed a graph, I needed two measures, and I needed the number K, which is the
size of the blue cluster.
So this is going to be a constant fraction of the number of vertices. So I
fix an alpha between 0 and one-half, and the number of blue vertices will be
alpha times the size of the graph. So that's going to be the setting. And,
again, like I said, mu 1 and mu 2 are roughly like [inaudible] measures. But
they still may not be lattice points. So okay.
So the basic question is how does the blue region look once you run this chain
for long enough, with your sources X1 and X2. So on the disk we saw that
the blue vertices [inaudible] the blue source, and similarly the red
vertices, and there was some interface. And the limiting interface was
roughly like an orthogonal circular arc.
And the basic question is what is the truth for general domains, so what
happens for general domains. And so there is some connection of this
process to Reflected Brownian motion, which I'll make more precise
later, but it's sort of known that, informally, Reflected Brownian motion
is conformally invariant. And here we are talking about Reflected
Brownian motion with normal reflection.
And so this led to the conjecture by Jim which is that understanding the
process on the disk sort of suffices; that if you want to understand what the
process looks like on a general domain, so the final picture should be just
the image of the final picture on the disk under some map.
Now, one comment that I want to make here is: remember that the blue was a
constant fraction of the total region, and conformal maps don't preserve area.
So one has to make precise what this means. But roughly a
picture like this should be true, which is the conjecture, and the goal of
this talk is to show that this is true. So are there any questions?
>>:
This has to also hold for the details of the interface or [inaudible]?
>> Shirshendu Ganguly: Yeah, so we will see what we -- yeah, we will see
what we prove and then -- yeah. So the interface is -- so one has to even
prove that there is an interface. So right. So your question is whether the
properties of the interface also carries over, right?
>>:
Yeah.
>> Shirshendu Ganguly: So first one has to prove that there is an interface.
So that is not provable at this point. So we don't know how to prove that.
I mean, like I said, we don't know how to prove that. Yeah, yeah. Yeah, yeah.
Mm-hmm.
>>:
Sorry, you don't know how to prove what?
>> Shirshendu Ganguly: That a picture exactly like this is true. That there
is a -- so everything here is blue and everything -- and so there is a
particular interface, and that carries over.
>>: The initial state could be just like a checkerboard of red and blue, and
there's no well-defined interface.
>> Shirshendu Ganguly:
Right.
>>: You have to show that over time you move towards something where there
does appear to be well-defined [inaudible].
>> Shirshendu Ganguly:
Mm-hmm.
>>:
[inaudible] saying at this time in the talk or the --
>>:
Yeah.
>> Shirshendu Ganguly: No, so we will see -- so we will see what exactly the
main result is, and then it will -- so we will see what exactly the main
result is and what exactly it confirms, and then we will see --
>>: [inaudible] there's more work to be done.
>>:
Keep going.
>> Shirshendu Ganguly: Right. Okay. So the modified setting -- so till now
we were talking about these two sources which are points on the boundary.
But there will be some convergence issues as we sort of take the
mesh size to zero, and so it will be technically convenient -- so the
main result will be sort of in the setting where, instead of having point
masses, the starting measures for the two random walks, mu 1 and mu 2, will be
uniform measures on small disks.
So you have these points X1 and X2. We wanted to start random walks from X1
and X2, but for technical reasons, we will take two disks of radius, say,
delta near X1 and X2, and mu 1 and mu 2 will be uniform measures on all
lattice points inside the disks. So fix a small delta, take two disks and
look at all the lattice points inside them, and the random walks will start
uniformly from those.
>>: I know you said the second clause. And also the distance about
[inaudible].
>> Shirshendu Ganguly: Yeah. So this -- yeah. So this is a smooth domain,
so you can fix a small delta. And then we can choose -- for any delta small
enough you can choose disks which are, say, at distance delta over 2 from the
boundary and also of radius, say, delta over 4 or something.
Okay. So instead of starting from points, we will start from the small
disks. And then the main result will sort of involve sending the mesh size to
zero, followed by sending delta to zero, so
that we asymptotically sort of recover these point sources.
Okay. So we need some notation. So you have the unit disk with these
two points, i and minus i, and you have this domain with arbitrary points
X1 and X2 on the boundary. So the Riemann mapping theorem guarantees that
there exist conformal maps going from the disk to the domain so that i is sent
to X2 and minus i is sent to X1. And [inaudible] are going to be inverses of
each other.
And it's just worth mentioning that there is no uniqueness here, because a map
is uniquely determined only if I specify the values at three points. So here
I'm just specifying the values at two points. And so this is actually a
family of such maps. So I'll just choose something arbitrarily and -- yeah.
Okay. So now, for the disk I had these orthogonal circular arcs, and
they sort of have these sort of equations. So they turn out to be the level
sets of this function, log |z - i| / |z + i|. So every orthogonal
circular arc which is symmetric with respect to minus i and i will be the set
of z inside the disk where this is equal to beta, for some beta on the real
line.
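One way to check that these level sets really are circles orthogonal to the unit circle: the function log |z - i| - log |z + i| is invariant under inversion z -> 1/conj(z), which fixes the unit circle, and a circle mapped to itself by that inversion must meet the unit circle at right angles. Here is a quick numerical sanity check of the invariance (my own sketch, not from the talk):

```python
import math, random

def h(z):
    """log |z - i| / |z + i|; its level sets inside the unit disk are the
    orthogonal circular arcs symmetric with respect to i and -i."""
    return math.log(abs(z - 1j)) - math.log(abs(z + 1j))

rng = random.Random(0)
for _ in range(100):
    z = complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
    if min(abs(z), abs(z - 1j), abs(z + 1j)) < 1e-6:
        continue  # avoid the origin and the two sources
    w = 1 / z.conjugate()  # inversion in the unit circle
    assert abs(h(z) - h(w)) < 1e-9  # h is invariant, so level sets are preserved
```

The invariance follows from |1 - i conj(z)| = |z - i| and |1 + i conj(z)| = |z + i|.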
And now I want to define a corresponding thing for our general domain U.
So I just transfer with the conformal map. So observe this disk: this is the
orthogonal circular arc corresponding to beta, I call this region D beta, and
I just look at the composition with phi, which will give me the
corresponding regions on the general domain U.
And note that this parametrization by beta depends on the conformal map. So
this was the value of this function on the geodesic -- on this curve.
However, we're interested in area, right? So we started with an alpha
fraction of the area being blue, so we want to sort of find the beta which
gives you an alpha fraction of the area. So this we call U alpha: it is U
beta for the value of beta which has fraction alpha of the area.
As beta goes from minus infinity to infinity, you cover the whole
region. So you can find such a beta. And like I said, area is not
preserved. So phi of D alpha is not necessarily U alpha. Of course for a
particular value of alpha, you can choose your phi such that this is
true, but that's not [inaudible] for all alpha.
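Since the area cut off by the level set is continuous and monotone in beta, the beta matching a given area fraction alpha can be found numerically. Here is a hypothetical Monte Carlo sketch on the disk itself (for a general domain one would push forward through the conformal map): sample points uniformly in the unit disk and read off an empirical quantile of h(z) = log |z - i| - log |z + i|, so that the side containing the blue source -i has mass alpha.

```python
import math, random

def h(z):
    return math.log(abs(z - 1j)) - math.log(abs(z + 1j))

def beta_for_alpha(alpha, n_samples=200_000, seed=0):
    """Monte Carlo estimate (sketch) of the beta whose level set of h
    cuts off an alpha fraction of the disk's area on the side of the
    blue source -i, i.e. the region {z : h(z) > beta}."""
    rng = random.Random(seed)
    vals = []
    while len(vals) < n_samples:
        z = complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
        if abs(z) < 1:  # rejection-sample uniformly from the unit disk
            vals.append(h(z))
    vals.sort()
    # we want the fraction of samples with h > beta to be alpha
    return vals[int((1 - alpha) * n_samples)]
```

By the z -> conj(z) symmetry, which flips the sign of h, alpha = 1/2 should give beta close to 0, and larger blue fractions give smaller beta.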
Yeah. So this is going to be my U alpha. So I pick my beta so that the
fraction is alpha. Okay. And I need my last notation. So I'm interested
in the set of blue vertices. I want to pass from the set of vertices to a
region, so I look at the union of all boxes around the blue points. So I have
the set of blue points on my graph.
I look at all boxes of side 1 over N centered at those points. And
this is [inaudible] of the main result, which says: you have the set U
alpha, you have the set of blue vertices, you look at the region version of
that, look at the symmetric difference and take its area, and the expectation
of that under the [inaudible] sort of will go to zero if you send the mesh
size to zero followed by delta going to zero.
So this blue region here, take the box version, which is a region on the
plane, and you have this region U alpha, which we just
defined; take the symmetric difference, look at its expectation, and then
that goes to zero if you send N to infinity followed by delta to zero.
However, there's some technical requirement. So we sort of need N to go to
infinity along powers of two. And this is because some of the proofs here
rely heavily on the fact that random walk on this graph UN [inaudible] to
Brownian motion on the domain U. And the results that are known in the
literature use the fact that that sort of convergence is true only if N is a
power of 2.
Of course, we have spoken to the [inaudible] of this paper, and they sort of
believe that it is true in general, but the results that appear in the
literature use this assumption that N is like 2 to the K.
And so, in words, roughly: if the sources are small enough, which means delta
is very small, then as the mesh size goes to zero, asymptotically the
blue region looks like the set U alpha in the sense of symmetric difference.
So the area of the symmetric difference is small. So that's roughly the
statement of the result. So is this clear, what the theorem is proving?
Okay.
So a remark which might not be immediately clear is that this chain has a
well-defined stationary measure. So it's not irreducible, but still there is
one recurrent class, so you can still well-define the stationary measure.
And also, because it's a finite system, even in equilibrium, most of the time
it will look like there is an interface and the interface is sort of like
this image of the circular arc, which is the geodesic.
There will be a positive fraction of time where it will look like something
very weird. So it does not have to look like this always, because it's a
finite system. So this has positive probability of happening. So it will
sort of happen in a positive fraction of time, even though with very small
probability. Okay.
So these are some of the initial comments. And so now let's see why one can
expect this to be true, why you can expect the interface to be like a
geodesic. So if there was an interface, then it should have some stability
property: the red random walk would start from the top and stop
when it hits a blue site, and similarly the blue random walk would start from
the bottom.
And so if this interface has to stay in place, for every point, roughly, the
push from both sides should cancel each other out. Right? And the push is
nothing but the harmonic measure starting a random walk from here. So you
start a random walk from here, you look at the chance that it hits this point,
and that should sort of agree from both sides.
This is one of the properties that you -- I mean, if there was some
interface, it should have a property like this. And, okay, so now first we
have to sort of find the candidate which has this property.
So, okay, so here is where the conformally invariant nature of Reflected
Brownian motion comes into play. So we will now see that the geodesics have
this property. And a way to see this is -- so you can think of Brownian
motion in the half plane. So think of Reflected Brownian motion on the half
plane, which is very easy to define.
And just by symmetry, you can look at the semicircles, and the semicircles
will have this property: if you start [inaudible] Brownian motion
from the origin and look at the hitting distribution on a semicircle, that
will be uniform just by rotational symmetry.
And, similarly, if you start Brownian motion, say, from infinity, whatever
that means, so from far away, the hitting measure from outside
will also be sort of uniform. So for the half plane this is very easy to
see just by symmetry, and then, because path properties of Brownian motion
don't change under conformal maps, if you map
the half plane to the disk, the images of the semicircles will exactly be
these geodesics.
And so this fact, which is easy to show on the half plane, will sort of imply
that even on the disk the right curves which have this property will be
geodesics.
>>: Doesn't look like a geodesic. It should be perpendicular.
>> Shirshendu Ganguly: It should be perpendicular here, yeah. Yeah. This
is a hand-drawn picture, so yeah.
Okay. So geodesics have this property that the harmonic measures sort of
cancel each other out from both sides. Okay. So this is one candidate for
the interface. And now you have some self-correcting mechanism. So if you
have some other interface which is not like a geodesic, then you can argue
that the part which is below the geodesic, this blue curve, say, will have net
harmonic measure pushing it up.
So the harmonic measure from the bottom will actually exceed the harmonic
measure from the top, because they are equal on the geodesic, so you
can argue that here it will sort of be more from the bottom.
So there will be a net upward push on this part of the curve, and similarly a
net downward push on that part of the curve. So if you didn't have the
geodesic to start with, this sort of heuristic tells you that it will
eventually sort of push itself towards the geodesic. So those are sort of the
right candidates.
Now, again, like I said, the notion of an interface does not exist for all
configurations. So it's only if all the blue points
are near the blue source and all the red points
are near the red source that one can sort of expect to define an interface.
For example, in a configuration like this, near each source you
will have both red and blue particles. So you can still define an
interface, maybe, but it's not like it will separate the two
sources.
So if you have a small disk here, a part of it is red, a part of it is blue,
and similarly on the other side. So you will not have an interface which sort
of separates the two sources. It will sort of -- like it can cut through the
middle of a source. Right?
So the theorem does not capture the separation directly, but in a slightly
different way it sort of tries to quantify this idea. So that will sort of
involve something called a Green's function, which I will define in a bit.
So here's one more sort of heuristic in the same direction. So I'll
define what a Green's function is in a moment, but let's just go with
this. So it's well known that -- okay, so you have this graph here. I'm
still working with the setting that there's an interface, even though I said
that the whole problem of making this formal was that there is no interface.
And so you have these two -- you have this graph and there are two regions,
the blue and the red, and you can define something called the Green's
function on the whole graph and on the two parts individually.
So you can look at the blue part, you can look at the red part, so for the
two Green's functions that you can define -- this is like an informal thing --
the Laplacian of the blue Green's function here, near the interface, is the
harmonic measure at this point. And similarly for the red one it's the
harmonic measure at this point.
And we said that this interface should have the property that the harmonic
measures cancel each other out. So you'll have some condition like this:
that the blue Green's function minus the red Green's function should
have Laplacian zero near the boundary, and of course they're also
harmonic in the blue part and the red part individually, except at the
sources, so ignore the sources for the moment. So what you end up with is
that this difference of Green's functions is sort of harmonic everywhere
except at these two sources.
And the actual Green's function on the whole graph also satisfies the same
property: that is also harmonic everywhere except at the two sources. So this
difference of Green's functions has the same Laplacian condition as the
Green's function of the whole graph. So you have this -- this is the Green's
function of the whole graph, this is the difference of the Green's functions,
they have the same Laplacian condition, which means that their difference is a
harmonic function.
And harmonic functions on bounded graphs are constant. So these are the same
up to a constant. And notice that -- so I didn't mention this, but
these two Green's functions also sort of vanish: the blue Green's
function sort of vanishes on the red region, and the red Green's function
sort of vanishes on the blue region.
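The step "harmonic functions on finite graphs are constant" can be illustrated numerically: harmonic functions are exactly the fixed points of neighbor averaging, and on a finite connected non-bipartite graph that averaging drives every function to a constant. A small sketch on a hypothetical graph (mine, not from the talk):

```python
def average_step(adj, f):
    """Replace f(v) by the average of f over v's neighbors.  Functions
    that are harmonic everywhere (graph Laplacian zero at each vertex)
    are exactly the fixed points of this map."""
    return {v: sum(f[u] for u in adj[v]) / len(adj[v]) for v in adj}

# A small connected, non-bipartite graph and an arbitrary function on it.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
f = {0: 1.0, 1: -2.0, 2: 0.5, 3: 3.0}
for _ in range(500):
    f = average_step(adj, f)

# The iteration converges to a harmonic function, and it is constant:
vals = list(f.values())
assert max(vals) - min(vals) < 1e-9
```

Constant functions are fixed points of the averaging, and since every starting function flows to a constant, they are the only harmonic functions on such a graph.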
So on the interface, both of them are sort of like zero. So you have this
GN. This function is roughly like zero on the interface, so GN is roughly
like a constant on the interface. And the blue region was one side of the
interface. It's reasonable to believe that the set of blue sites is all
points where GNX is bigger than some constant.
So GN was constant on the interface. On one side it was bigger than the
constant. On the other side it was smaller than the constant. So roughly
the blue region should be occupying a level set of the Green function. Okay,
good. So these are heuristics involving the harmonic measure -- here we have a
Green function, and the fact that derivatives of the Green function behave
like the harmonic measure near the boundary. Okay.
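To spell out the arithmetic of this heuristic (my paraphrase; $G$ is the whole-graph Green function, $G_B$ and $G_R$ the blue and red ones, and every statement is only approximate):

```latex
% Away from the two sources, G and G_B - G_R have the same Laplacian, so
\Delta\bigl(G - (G_B - G_R)\bigr)(x) = 0 \quad \text{for all } x,
% and a function harmonic on all of a finite graph is constant:
G = G_B - G_R + c .
% On the interface G_B \approx 0 \approx G_R, hence G \approx c there, and
\text{blue region} \;\approx\; \{\, x : G(x) > c \,\}.
```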
Good. So this sort of gives you an idea of what the blue region is. So it's
a level set of certain function. Okay. All right. So now we'll state a
quantitative version of the main result which will imply this. So we had
this result which said expectation of some area was going to zero if I took
these limits. And now we'll have a quantitative version of that result.
So I need some notation. So okay. So look at all configurations which are
almost like what we want them to look like. So there are only a few blue
particles in the region where we want it to be all red, right? And the
quantifier is -- so you only allow at most epsilon N squared vertices of the
bad type: vertices which are in the complement of U alpha and have blue color,
or are in U alpha and have red color.
So at most epsilon N squared particles are colored opposite to what they
should be. So they're on the wrong side of the geodesic and have the wrong
color, because they're falling in the wrong region. Okay. So okay. Good.
And so for any such configuration, the symmetric difference of the blue region
and U alpha has area at most epsilon, because there are just epsilon N squared
particles on the other side. Okay? So if I show that the measure of this set
is large, then I'm done. Right? Because I want to show the expectation of the
area goes to zero. For the expectation of the area to go to zero, it suffices
to show that this thing goes to zero in probability: the area of the symmetric
difference goes to zero. Mm-hmm. Okay.
So this is the formal statement. So, okay, [inaudible] quantifiers: take a
small delta and large N, depending on epsilon; then the measure of this set
Omega epsilon is at least 1 minus e to the minus some constant times N
squared. Okay. So keep this epsilon fixed. Then for small delta and N large
enough, the [inaudible] measure of the set is 1 minus something exponentially
small in N -- N squared, actually.
And then the earlier statement follows automatically: because epsilon is
[inaudible] arbitrary, I can send epsilon to zero. Okay? So proving this
suffices.
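Schematically, the statement being proved reads as follows (my hedged paraphrase; the constants and quantifiers are only indicative):

```latex
\Omega_\epsilon \;=\; \bigl\{\, \text{colorings with at most } \epsilon N^2
  \text{ vertices on the wrong side of } \partial U_\alpha \,\bigr\},
\qquad
\mu(\Omega_\epsilon) \;\ge\; 1 - e^{-c(\epsilon)\,N^2}
\quad \text{for } N \ge N_0(\epsilon).
```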
Okay. So okay. So the key to the proof is the identification of a certain
function. So there are two properties. I want to show that omega epsilon has
large measure. That is what I want to show. So I identify a function W on the
space of colorings which is maximized on omega epsilon, and when the process
is outside of omega epsilon, W sort of increases on average.
So you have the state space of all colorings, you have the set omega epsilon
that you want to show has large measure, and you identify a function which is
maximized inside that set and which, whenever you're outside, sort of
increases on average, which is what this means.
So if you start from something which is outside omega epsilon, then in one
step the value of the function increases on average. And this implies -- this
should imply that no matter where you start from, you should hit the set
omega epsilon quickly. Because the maximum is inside this set, and you cannot
continue increasing forever, which you would do if you stayed outside the set
for all times. Okay.
And to motivate the construction of this W, recall that the heuristic that I
discussed says that the blue region is roughly GNX bigger than K for some K.
So all the blue particles should be in this region GNX bigger than K for some
K, right? Which means, okay, which then sort of gives you a natural candidate
for what this W should be on the space of colorings. Okay.
First I'll define formally -- or semiformally -- what the Green function is.
So it's harmonic everywhere except at the blue and red sources. We have these
two disks; it's harmonic everywhere else. And the Laplacian is sort of one and
minus one on these two sources. Only I'm suppressing some constants, so I
actually don't want one and minus one, I want it to be something else. But up
to constants, this will be the function.
And there are formulas you sort of [inaudible] the random walk on the graph,
and then you basically look at the time that a random walk spends in one
source minus the time it spends on the other source.
Now, this integral goes from zero to infinity. So individually these two
integrals are infinite, because both sources have positive measure, so the
random [inaudible] spends infinite time in both of them, but the difference is
sort of integrable because these two disks have the same area.
So what I'm saying is that you look at the expected local time in one source
minus the expected local time in the other source. And that will have these
properties. So the Green function that I defined -- that I was talking about
in the heuristic -- is going to be this function. And then the heuristic said
that the blue region was roughly a level set of this function, okay, which
means that a natural candidate for W should be built from this function; that
is, the sum of GN over all the blue sites.
So you have some coloring, you have some function on the graph, and then you
look at the sum of GN over all the blue sites. So I said that the W should
have these two properties: W is sort of maximized on omega epsilon, and it
increases on average when you're sitting outside.
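As a toy illustration of these two objects -- the discrete dipole Green function and the weight W -- here is a minimal sketch on a small grid with reflecting boundary. All names and normalizations here are mine, not the paper's; the real G_N lives on the lattice domain U_N with the disk sources described above.

```python
import numpy as np

# Hypothetical toy version of the dipole Green function on an n x n grid with
# reflecting boundary: solve L g = b, where L is the graph Laplacian and b
# puts mass +1 at a "blue" source vertex and -1 at a "red" source vertex.
def dipole_green(n, blue_src, red_src):
    N = n * n
    idx = lambda i, j: i * n + j
    L = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            u = idx(i, j)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b_ = i + di, j + dj
                if 0 <= a < n and 0 <= b_ < n:  # reflecting: skip edges leaving the grid
                    L[u, u] += 1
                    L[u, idx(a, b_)] -= 1
    rhs = np.zeros(N)
    rhs[idx(*blue_src)] = 1.0
    rhs[idx(*red_src)] = -1.0
    # L is singular (constants in its kernel); rhs sums to zero, so a
    # least-squares solve gives a representative of g, unique up to a constant.
    g = np.linalg.lstsq(L, rhs, rcond=None)[0]
    return g.reshape(n, n)

def W(g, blue_sites):
    # The Lyapunov-type weight: sum of the Green function over blue sites.
    return sum(g[i, j] for (i, j) in blue_sites)
```

By the discrete maximum principle, g peaks at the blue source and dips at the red source, so W is largest when the blue sites cluster near the blue source -- the level-set picture in miniature.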
So these are the two things that are actually like considered the main body
of the work. So these are not -- these are technically challenging things to
prove.
>>: [inaudible].
>> Shirshendu Ganguly: Sorry?
>>: First one is challenging?
>> Shirshendu Ganguly: Yeah, because so I defined my set, so omega epsilon
was defined in terms of U alpha, right, and U alpha was defined in terms of
this Lyapunov function, right, and here I have a discrete function, so you
have to show some convergence.
Okay. So like I said, so you have to show something like this is true, that
roughly GN, if N is large enough, then roughly looks like this function.
Okay. So okay. So I'll -- so I will -- all right.
So do you remember what psi was? So you have these domains D and U, and psi
was the map from U to D. And this being bigger than beta was exactly the
region U beta. And if GN is -- okay. So the proof sort of uses the fact that
random walk on this graph UN converges to Reflected Brownian motion on U, and
you have to sort of use the fact that Reflected Brownian motion has some nice
differential properties.
So it is a solution for [inaudible] with some [inaudible] boundary conditions.
So you use the fact that the Reflected Brownian motion satisfies some PDE and
the fact that the random walk heat kernel converges to the Reflected Brownian
motion heat kernel. And this sort of convergence uses -- the things that we
use are actually very recent results about local convergence. So the fact
that the random walk converges to Reflected Brownian motion is not very old,
so it's pretty new, and the fact --
>>: Wait. The fact that the random walk convergence [inaudible] --
>> Shirshendu Ganguly: Is new?
>>: In this reflected setting.
>> Shirshendu Ganguly: Yeah, yeah. So if you have a domain and do random walk
on it, then it will converge to Reflected Brownian motion.
>>:
I thought it was classical.
>> Shirshendu Ganguly: No, no, no. Random walk converging to Reflected
Brownian motion -- it depends on what your timeline for classical is.
>>: For my intuition back in the --
>> Shirshendu Ganguly: Mm-hmm.
>>: I thought it was conjectured --
>> Shirshendu Ganguly: Yeah, yeah. But -- yeah. Proving it is -- yeah.
>>: [inaudible] but in special cases --
>> Shirshendu Ganguly: I mean, of course -- for example, on the half plane,
it's easy to show.
>>: Okay.
>> Shirshendu Ganguly: Okay. And so there's a recent paper which sort of
proves quantitative bounds for this convergence. So they show local CLT
estimates for this convergence. And this is what one can use to show that
this [inaudible] function, which is defined in terms of the random walk,
actually converges to this continuum function on the whole domain.
And then you can use the fact that this heat kernel for the Brownian motion is
a solution to some PDE. Okay. So what does that imply? So we had this W,
which is the [inaudible] of GNX over the blue vertices. We said that GNX is
roughly like this function.
So which means that this region was roughly the region where this function was
less than beta and this function was bigger than beta, and now the regions
where GN is bigger than beta or less than beta are roughly the same regions.
Right? So this region, U alpha, was defined in terms of this function being
bigger than some beta. But now I've shown that GNX is also roughly like that
same function. So U alpha is roughly the region where GN is bigger than beta.
And so to maximize W, you want to pack all your blue vertices into this
region. Summing GN over the blue vertices, I want to maximize it. This is the
place where GN is bigger than beta. It has an alpha fraction of the whole
area. So I should put all my blue vertices in this region to make W maximum.
Okay. Which means that the maximum is realized in the set omega epsilon,
because you cannot allow too many blue vertices outside this region if W is to
be maximal.
Okay. So this was the first property, that W is sort of maximized on omega
epsilon. I still have to show the positive drift condition: that if you're
outside, then you increase on average.
Okay. So this sort of uses an energy argument. So look at this graph. Look
at this blue region on this side. Look at the red region on this side. So
now you have three graphs: the whole graph and these two colored graphs. And
so now look at the effective resistance, which I'll define in a moment, for
the whole graph and these two other graphs. So you have these three effective
resistances for these three graphs, and it turns out [inaudible] -- it turns
out that this difference is exactly the effective resistance of the whole
graph minus the sum of the effective resistances of these two smaller graphs.
So of this blue region -- sorry?
>>: Exactly that?
>> Shirshendu Ganguly: Okay.
>>: [inaudible].
>> Shirshendu Ganguly: I mean, it's not exact because -- okay, so the
interface is not like a line. So you have these two regions of red. Yeah.
So it's not exact. So there is some small error here, but roughly -- and the
small error is polynomially small in N, so it's 1 over N to some beta or
something. But roughly this is true. Okay? Okay.
So let me -- okay. So let me define what effective resistance is in this
setting. So these are not point sources. You have these two small disks, and
you start your random walk from the disk and you stop it -- okay. So you have
these two small sources, which are not points, and the effective resistance of
the whole graph in this setting is the energy of the unit current from the
blue source to the red source, such that the current through each starting
point is one over the total number of sites.
So roughly you have some current flow going from the blue source to the red
source. That will give you some energy on the whole graph. And the effective
resistance of the whole graph is that energy. And similarly for the blue and
the red regions: the energy of the unit current going from the blue source to
the interface will be your effective resistance of the blue part, and
similarly for the red part.
So you have these three graphs. One of them you have these two sources, so
you send unit current from one source to the other source, compute its
energy. For the other two you have a source and an interface, so send a unit
current, compute its energy.
And Thomson's principle sort of says that if you look at the graph and look at
all flows, then the minimum energy over flows is attained only by the current
flow itself. So the current [inaudible] all flows.
And so then you can see why this thing should be non-negative at least.
Because if you take the flow which is on the whole graph, going from one
source to the other source, and restrict it to the blue part and the red
part, they will give you honest flows on them which has the same divergence
as this current flow. So they have strictly more energy. I mean, they have
at least as much energy as these ones.
So RF should be at least as big as the sum of these two, because RF
restricted to blue is bigger than this. RF restricted to red is bigger than
this. And these are sort of disjoint sets.
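Thomson's principle and the effective-resistance computation can be seen on a toy graph (a 4-cycle with unit conductances; the setup with disk sources and an interface is of course more involved). This is a hedged sketch of my own: it computes the effective resistance from the Laplacian pseudoinverse and checks that the unit current flow has less energy than a lopsided unit flow with the same divergence.

```python
import numpy as np

# Effective resistance between nodes s and t via the Laplacian pseudoinverse.
def effective_resistance(L, s, t):
    Lp = np.linalg.pinv(L)
    return Lp[s, s] - 2 * Lp[s, t] + Lp[t, t]

# 4-cycle 0-1-2-3-0 with unit conductances: two length-2 paths in parallel,
# so R_eff(0, 2) = (1+1) || (1+1) = 1.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
L = np.zeros((4, 4))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

R = effective_resistance(L, 0, 2)

# Thomson's principle: energy of the current flow (1/2 along each path)
# versus a lopsided unit flow (3/4 on one path, 1/4 on the other).
current_energy = 4 * 0.5 ** 2
lopsided_energy = 2 * 0.75 ** 2 + 2 * 0.25 ** 2
```

The current flow attains energy exactly R; any other unit flow from 0 to 2 costs strictly more, which is the inequality driving the drift bound above.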
But then you can actually prove a quantitative lower bound if you started from
something which was away from -- so if you had this initial condition that
there were at least epsilon N squared vertices in the wrong region, then you
can actually prove a quantitative lower bound. It is not just non-negative
but bigger than some constant A depending on epsilon.
So if I start from the complement of omega epsilon, you will have some
positive [inaudible] which is lower bounded by a function of epsilon. Okay?
>>: So I guess I thought the effective resistance will be smaller than the
sum of those two.
>> Shirshendu Ganguly: Why? Is it that these are disjoint graphs? So yes.
So I look at the energy of the flow going from here to here, and I look at the
sum of these two, which is the same as shorting this curve, so I should think
of it as one point. So what is the sum? The sum is the effective -- the
energy of the current going from here to here, if you glue all the points
along the boundary.
>>: [inaudible] series, you're connecting things in a series. You short out
the boundary, you make the resistance less.
>> Shirshendu Ganguly: But so -- and even so --
>>: It's the same potential along the whole boundary --
>> Shirshendu Ganguly: No, no, no, so this is some -- this is some arbitrary
interface.
>>: Oh.
>> Shirshendu Ganguly: And because this --
>> Shirshendu Ganguly: If this was at the same [inaudible] then this would be
zero. But the fact that you are starting from outside omega epsilon tells you
that this interface is not at the same potential, so we actually make some
quantitative gain, which is this epsilon.
Okay. And so now -- so I'll quickly say how these things imply the result
that we wanted to prove. So you have this function W which increases on
average when you're outside. And the maximum is inside this set omega
epsilon. So this should imply that the hitting time of that set should be
small.
So the first result is that the hitting [inaudible] of omega epsilon, no
matter where you start from, is going to be like order N squared. So the
probability of it being bigger than C N squared is exponentially small.
So suppose this is the [inaudible] of all colorings and omega epsilon is a
small set inside; you start from anywhere, and in order N squared time you hit
the set omega epsilon. Right? So this just uses what I just said. But to
show that the stationary measure of this set is close to one, you also have to
prove that once you enter the set you don't escape quickly.
So this is the next step, which is that once you reach -- so you first hit
omega epsilon, then you carry on and hit something smaller, omega epsilon
prime. And once you're in omega epsilon prime, then it will actually take
exponential time to get out of omega epsilon.
Okay. So these are just submartingale [inaudible] bounds. So the function W
of sigma T, as T runs, is a submartingale, and so you have [inaudible] bounds
that will sort of imply this. And this is the same argument that you have for
a biased random walk. Okay. So now we have two -- so we have the hitting
time results, and then we have to translate that to the stationary measure.
Okay.
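The biased-random-walk analogy can be made concrete in a toy simulation (my own illustration, not the paper's argument): a process with uniformly positive drift below a level M reaches M in time of order M, mirroring the order-N-squared hitting-time bound.

```python
import random

# Toy analogue of the drift argument: a walk on {0, ..., M} that steps +1
# with probability 2/3 and -1 otherwise (reflected at 0) until it hits M.
# Drift 1/3 per step makes the hitting time concentrate around 3M.
def hitting_time(M, start, p_up=2/3, rng=None):
    rng = rng or random.Random(0)
    x, t = start, 0
    while x < M:
        x += 1 if rng.random() < p_up else -1
        x = max(x, 0)
        t += 1
    return t

rng = random.Random(42)
times = [hitting_time(100, 0, rng=rng) for _ in range(200)]
avg = sum(times) / len(times)
# avg is close to M / (2*p_up - 1) = 300; escaping back below a level, by the
# same submartingale bounds, instead takes exponentially long.
```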
So this is not very hard. So stationary measure is the amount of time that
is spent asymptotically in the set. So you start from anywhere and you look
at -- and you want to measure the stationary -- you want to measure the
stationary measure of the set, and you look at the proportion of time that it
spends in that set.
And the hitting time results show that no matter where you start from, in
order N squared time it will hit the set omega epsilon, and it will stay there
for an exponentially long time, right? So you typically spend only order N
squared time outside omega epsilon in a time interval of exponentially large
length.
So this tells you that something like this is true; that the mass of omega
epsilon is bigger than one minus some -- this is exactly the proportion that
you spend outside.
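The bookkeeping behind this step is roughly the following renewal-style estimate (schematic; constants mine):

```latex
\pi\bigl(\Omega_\epsilon^{\,c}\bigr)
  \;\lesssim\;
  \frac{\text{typical excursion length outside } \Omega_\epsilon}
       {\text{typical sojourn length inside}}
  \;\approx\; \frac{C N^2}{e^{c N^2}},
\qquad\text{hence}\qquad
\pi(\Omega_\epsilon) \;\ge\; 1 - C N^2 e^{-c N^2}.
```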
Okay. So the proof of the quantitative result is complete, which then implies
the main result just because -- just by sending epsilon to zero. Okay. So
now I'll quickly wrap up just by making some final comments. The main result
was in terms of this area of the symmetric difference, right? And in the
picture, in the simulation, we saw that there were exactly no red particles in
this region UL -- so there was some interface, but there were absolutely no
red particles in this region.
So in general, with Lionel Levine, Yuval, and Jim, we managed to prove such a
result in a similar but simpler geometry. So the setting was, we started this
process on the cylinder, so you have the cylinder and now the sources are the
top and the bottom. So the red random walk starts uniformly from the top and
the blue random walk starts uniformly from the bottom. And there we could
actually show that, in equilibrium, because of symmetry in one coordinate,
this should be the interface, because there is nothing going on in the other
coordinate.
So the blue should -- so if the blue is like an alpha fraction, then the
interface height should be alpha, if the total height is one. And then you
could show that if you look at the region outside alpha plus or minus epsilon,
then with high probability there would be no blue particles in the region
above alpha plus epsilon, and no red particles in the region below alpha minus
epsilon.
But [inaudible] is the fact that some things were very, very precisely known
on this geometry, which is not true for general domains. So, yeah, we use
the fact that IDLA was very well understood on the cylinder.
So first we proved a similar [inaudible] for the cylinder, that there would be
some dust particles but very few, and then we used some precise estimates to
show that those would be wiped out very quickly. Okay.
So a relevant question in this setting would be to prove such a theorem in
this general context: take a general domain and show that a picture like that
is true even in this setting.
And once you do that, then the next question would be to understand the
interface better like people did for IDLA. So once they show that this was
roughly like a ball, what was the interface going to be like. And because
you have forces from both sides, it's believable that the fluctuation should
be even smaller than what happens for IDLA.
So for IDLA we know that the fluctuations of the boundary are like
logarithmic, so it should be at most that much. And then you can ask the
question in higher dimensions for general domains. And the natural prediction
would again be this Green function intuition, which says that the level -- the
separating hypersurface should still be a level set of the green -- of a
suitable dipole potential function on this region.
And then you can ask a question -- so I started with two measures which were
separated from each other, but then you can ask about the other extreme, which
is what happens if both the blue and the red random walks start from the same
source. So now suppose you have the infinite lattice, so you have Z2, and the
model is: you alternately release red and blue particles from the origin, and
each will keep walking until it finds a site which is either empty or of the
opposite color, so a blue walk can hit a red guy or an empty site.
In either case it will turn it blue. So it will treat white and red as
particles of the opposite color. And it turns out that even if you have like
ten to the ten particles, only like a few thousand [inaudible] have been
occupied. So most of the activity would be red trying to eat up blue, blue
trying to eat up red. So it would rarely fill up these empty sites.
But nothing is conjectured in this setting about what the picture looks like.
So this is one. And then you can ask the question about what happens if you
have multiple colors. So this is a simulation where you have three colors,
each of mass one-third, and the sources are at the roots of unity, and then
you run it and this is what it looks like.
But what if you did not start from the setting where every color was equal?
So remember that in the theorem we had these two sources, and we took a
conformal map. So those map to the -- so two sources on one domain would map
to two sources on the other domain. And then we had this one more degree of
freedom, which said that you could map this region D alpha to actually U alpha
using -- but here, once you fix these three sources on the two different
domains, you have no control over what the areas are going to be.
So you don't exactly know what [inaudible] set is true here and what exactly
is the -- what exactly is the truth in this setting. Okay. So I think I'll
stop here. Thanks.
[applause]
>>: You said the -- in IDLA the fluctuations are logarithmic.
>> Shirshendu Ganguly:
Um-hmm.
>>: It's just bounding [inaudible] or is there any understanding of what the
limit is with log scale?
>> Shirshendu Ganguly: Yeah, so it's very well known, and you should probably
ask Lionel because he was one of the -- yeah.
>>: So two dimensions. So the largest fluctuation is logarithmic, but
typically you think they are square root of log. And there's a limit which is
a close relative of the Gaussian free field.
>>: That's known?
>>: Yeah. I mean, you can always ask to strengthen the results. It's known
for a certain class of test functions.
>>: Yes. So the answer is you're getting Gaussian [inaudible].
>>: It's not quite the Gaussian free field because the origin is special --
the origin is a special point. So we called it the augmented Gaussian free
field. But it's very, very close to Gaussian in the space.
>>: [inaudible] normalization.
>>: Right.
>>: Normalization. The fluctuations themselves are tight. Any point is
tight. Only the maximum gives you the [inaudible].
>>:
Two dimensions you could [inaudible].
>>:
So there is a root log normalization.
>>: I think this might be the difference between Gaussian and augmented
Gaussian free field. So...
>> Shirshendu Ganguly:
So you had a question?
>>: Higher dimensions, fixed dimensions, this type.
>>:
Is there anything you had the sources [inaudible].
>> Shirshendu Ganguly: Right. So, first of all, the Green function order is
more. So if a point was -- so if you start the random walk from a point, it
will be hit like more than -- so roughly like log N times, if you take the
random walk starting from one point and kill it at the other point.
But -- and so -- this kind of [inaudible], that's not how, because Brownian
motion does not hit this point, so we could not make that work: take this
discrete function and then work with the limiting behavior of the -- how to
show the direct convergence to something like this log of Z minus A over Z
plus A.
But one approach would be to just bypass all this Brownian motion convergence:
you can start with this function log of Z minus A over Z plus A, which you
know is harmonic away from the singular disks and has a normal boundary
condition in the continuum, and restrict it to the lattice.
So it's not going to be exactly harmonic. So it can have some error. And
then you can show that even with that -- so I had this weight function where I
was summing GNX over blue sites, right? So instead of summing GNX over blue
sites, you can take this actual continuum harmonic function, just sum that
over blue sites, and show that that works.
So I have this function GN, and I show that GN converges to some function, log
of something over something, right? But you can start with this log of
something over something directly instead of the discrete function. Then it
will not be exactly harmonic; you will have some errors. But then you just
show that you will still have this positive drift condition even in that case.
>>:
But that still incurs some technical --
>> Shirshendu Ganguly: Yeah, yeah, of course. Yeah. So this would be a more
robust way, because this sort of convergence of random walk to Brownian motion
is only known for the square lattice. So...
>>:
But this still incurs some technical --
>> Shirshendu Ganguly: Oh, yeah, of course. Yeah, yeah. But like, right,
sure. Mm-hmm.
>>: I just want to mention some of the motivation behind what I was doing in
2003, which was that I was very interested in [inaudible] routers and what
they might be good for, so I was trying to come up with probabilistic
questions where this kind of derandomization would give a close match. And,
in fact, this is an example of that. So you can do this kind of competitive
erosion and derandomize things. And if you do it the right way, you get the
same interfaces, level sets of these functions, conjecturally.
>>: Right, right, but you mean just doing [inaudible]? So you have to do
something else?
>>: Here's the thing I always forget. I could look it up in my notes and find
out which is which. There are two obvious candidates for how to do
[inaudible], depending upon whether the red particles and blue particles use
the same rotors [inaudible]. And one of them gives you the interface you
expect; the other creates microscopic disturbances of the interfaces.
I never understood it and I can't remember which one was which. But if anyone
is interested, it's kind of --
>> Shirshendu Ganguly: I would imagine the independent [inaudible] would sort
of give you -- if you use two different sets of rotors for the two walks,
probably it should be closer to the random walk. And it's not clear.
>>:
[inaudible].
>> Shirshendu Ganguly:
Oh, opposite?
>>: The red should spin the rotors backwards from the blues. Never tried
that. That would be the way to do it -- same direction for both of them.
Basically, in one of them you can definitely see there was a kind of skewing,
a clockwise skewing of the interface, in a very systematic way.
>>: Do you still have pictures?
>>:
I could dig them up.
>>: It'd be nice to [inaudible] -- there are several natural versions, and
only some of them work.
>>: The other thing I'll mention is I did an experiment with three colors
using kind of the same thing that you did on the last slide, but where it was
lopsided, or not symmetrical, and there still appears to be a meeting point
where the angles go around at 120 degrees, which is a sort of a natural
[inaudible] --
>>: But how did you arrange to keep the mass equal?
>>:
The areas were unequal.
>>:
No, but how did you arrange to keep it unequal?
>> Shirshendu Ganguly: So you --
>>: So what was the rule? Each time did you --
>>:
I believe the rules just cyclically alternate between them.
>>:
That should be equal to the [inaudible].
>>:
Sorry?
>>:
That should equal like the mass.
>>: If you introduce a red particle and then the blue particle and then the
yellow particle --
>>: Because each time it captures something of another color. Suppose the
reds are in the minority. Then when -- so when the --
>>: I remember now, yes.
>>: That's right. It's the particle that just captured --
>>: So when red steals from yellow, then yellow gets to go next. Then when
yellow steals from -- I'm pretty sure that was it.
>> Shirshendu Ganguly: Yeah, yeah.
>>: Yeah, that's the way to keep it --
>>: Yeah.
>>: Otherwise it's --
>>: Anyway, the pictures are part of a Mathematica notebook, so we can see
what the code does, but I'm pretty sure that was it.
>>:
We just went through the same realization.
>>:
Thanks.
[applause]