>>: Welcome to the second lecture. Well, for me personally the first time I understood the allure and
the challenge of mixing times for spin systems was in Fabio Martinelli's lectures in Saint-Flour back in
ninety-seven. I'm happy that he's going to tell us something about one aspect of this topic. So, Mixing
Times for Constrained Spin Models, please Fabio.
>> Fabio Martinelli: Thanks, Yuval, and thanks for the invitation to the Pacific Northwest Probability
Seminar. I was here for three months and I really would like to thank Microsoft Research and the
Theory Group, because they were really three fantastic months, from every point of view.
So I will talk about mixing times for some simple-looking Markov chains that arise from a statistical
physics problem. I became aware of these chains actually more than ten years ago when I visited
Berkeley. David Aldous, who did some work with Persi Diaconis, actually the only rigorous work at
that time on this model, wrote to me before I got to Berkeley asking whether I had any idea how to
attack this problem rigorously. I didn't, and I didn't for a few years. Then eventually, with some people
in Rome and in Paris, we started to understand these models. They're interesting but also difficult,
with still many open problems that I will mention at the end.
Okay, so this is a simple example that is due to Aldous-Diaconis in their original paper in two thousand.
They conclude with it, asking what happens in this situation. So we have a graph that is a binary
rooted tree with L levels. Attached to each vertex there is a spin, namely a variable that is zero or
one. Then Pi is a product Bernoulli(p) measure, so the probability that you see a one at a given vertex is p;
it's just a product of i.i.d. variables on the vertex set of this finite tree.
So the Aldous-Diaconis chain is the following. We are in continuous time, so with rate one any vertex
does the following (independently for every vertex, of course): it tosses a p-coin, so a coin whose
probability of one is p, and samples accordingly a value in zero or one. Then the vertex checks the
state of its two children. If they are both zero at that moment, then the vertex updates the
current variable omega_x to the sampled value, okay.
So, I guess that the leaves are here. Okay, yeah, so without the constraint that both children should be zero
the chain would really be a product chain, a completely trivial chain, okay. So the question of Aldous
and Diaconis was: what is the effect on, for example, the mixing time and other properties of the chain
produced by the constraint? Of course, once you have this constraint you should
check irreducibility of the chain, so one has to put a boundary condition that ensures that the chain is
able to reach any state. By convention, usually this is done by saying that the leaves of the tree
are frozen equal to zero, so that any vertex whose two children are leaves is always unconstrained.
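To make the dynamics just described concrete, here is a minimal simulation sketch in Python; the heap-style indexing, the treatment of the frozen layer of zeros below the last level, and the function name are my own illustrative choices, not something from the talk.

```python
import random

def simulate_ad_chain(L, p, t_max, seed=0):
    """Continuous-time sketch of the Aldous-Diaconis chain on a binary tree
    with L levels of spins; the (frozen) leaves below the last level are 0.
    Starting from all ones, return the hitting time of a zero at the root,
    or None if it is not reached before t_max.  Illustrative only."""
    rng = random.Random(seed)
    n = 2 ** L - 1                    # heap-indexed vertices, root = 0
    spins = [1] * n                   # start from the all-ones configuration
    t = 0.0
    while t < t_max:
        t += rng.expovariate(n)       # n independent rate-1 clocks in total
        x = rng.randrange(n)          # the vertex whose clock rings
        l, r = 2 * x + 1, 2 * x + 2   # children; indices >= n are frozen zeros
        left = spins[l] if l < n else 0
        right = spins[r] if r < n else 0
        if left == 0 and right == 0:  # constraint: both children must be zero
            spins[x] = 1 if rng.random() < p else 0   # resample from Bernoulli(p)
        if spins[0] == 0:             # a zero has reached the root
            return t
    return None
```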
Okay, so let us examine the main features of the chain. First of all, a key feature is the
following: the chain is reversible with respect to the product measure. The reason is that the
constraint at a given vertex doesn't look at the state of the vertex itself, but just at the state of
the neighboring vertices, in particular the two children.
Okay, so if a move is legal, namely the two children are zero, then we can always undo the move
by reverting the coin. Therefore the chain is clearly reversible with respect to the stationary
measure. In particular, at the level of the stationary measure these chains don't
show any phase transition or change of behavior. Okay, they are always trivial, okay.
Now the chain is ergodic because the leaves are always zero, okay. Now there is the bad news that
actually the chain is not attractive or monotone. You could
imagine that if you inject more zeros into the system then you could sort of speed up the chain.
However, more zeros in the system allow more moves. Now more moves can produce
more ones, but also kill other zeros, so the total outcome is not clear, okay.
Now the fact that the chain is not monotone or attractive means that a number of powerful
tools like FKG inequalities, monotone coupling, and Peres-Winkler censoring, which have been really
powerful for many other models, like the Ising model, monotone surfaces, or other relevant models,
are no longer available, okay. So in a sense there are very, very few tools that are available for a
rigorous analysis of the chain.
Okay, so in spite of the fact that the reversible measure is trivial, just a product measure, the
following happens. Start the chain from all ones, okay. Define the hitting time tau_L to be the
time that it takes to put a zero at the root. Of course, if you are able to bring a zero to
the root, the zero must start somewhere at the leaves, okay, and then sort of find its way, climb
the tree up to the root, okay. Okay, so, sorry?
>>: Today you put two and one, now we are two.
>> Fabio Martinelli: Oh, sorry, okay.
[laughter]
Sorry about that sir. That’s the result of a midnight revision.
[laughter]
Okay, so here is the theorem, and I will later quote the proper authors. The result can be
summarized like this: if p is less than one half then the expected hitting time grows linearly in the
height of the tree. If p is bigger than one half then it grows exponentially fast. If p is equal to one half
then it grows polynomially: at least like L cubed, and at most like L cubed plus an extra power, okay,
an extra constant in the exponent.
Okay, so there is a phase transition in the behavior, at least of this hitting time, according to the value
of p, and one half is the critical value. So before going to more technical results let me
explain why this critical behavior. Well, imagine that you run the chain, which is well defined, on the
infinite binary tree, okay. Then on the infinite binary tree you can look at standard percolation, okay. If
p is bigger than one half then the chance that the root percolates to infinity along an infinite path of ones is
positive, okay.
Now an infinite path of ones can never be unblocked, because every vertex on that path
always has a child that is not zero and is therefore blocked forever. So in the system there are infinite blocked
configurations, okay, at least if we sample the initial configuration from the stationary distribution Pi,
okay. So it should be clear that this property of having or not an infinite cluster
should play a role in the dynamical behavior of the chain, okay. The role is expressed by this law, okay.
So…
>>: What does that find?
>> Fabio Martinelli: We don’t know it just, we have some bound, yeah.
>>: Okay.
>> Fabio Martinelli: So of course this is an amusing model, but why trees? Okay, that
looks very special. Actually the above chain is just one example of a general class of Markov
processes that are known, at least in the physics literature, as Kinetically Constrained Spin Models. In
this name, "kinetically" refers to the fact that the constraint is inside the dynamics,
okay. So that's the explanation of this word.
The main motivation comes from the dynamics of real glasses, so just a few words about that. One
characteristic of a real glass is that it is a liquid that is cooled very, very rapidly, so that
when you cross the crystallization temperature the system does not become a crystal but continues to
behave like a liquid, okay. A characteristic is that if you do equilibrium experiments, like
measuring correlations in the system, you don't see anything particular when you are at low
temperature. But the dynamics of the glass becomes slower and slower as the temperature
is lowered. There is some experimental temperature where essentially the dynamics of the system
occurs on a timescale that is like a day, okay, so a timescale characteristic of an experiment.
So physicists tried to understand whether there are models that mimic at least some of
the features of the dynamics of these glasses. These are the kinetically constrained spin models. There is
a large class, but maybe the three main models are the following. First, the East model: in this case
the graph is just the line, okay, and the constraint is: look at your East neighbor, and if it is zero then
toss a coin and reset your value according to the coin, okay.
Then there is the North-East model. The graph is, for example, Z^2, okay. Now the constraint is more
demanding because you ask that the North and the East neighbors are zero at the same time. So this
looks more like the Aldous-Diaconis chain on the binary tree, because you ask that both neighbors,
like both children in that case, are zero, okay. Then there is the Fredrickson-Andersen
two-facilitated model, named after two physicists. The graph is Z^2 and the constraint is that at least two out of the
four neighbors are zero, okay. Then the parameter p is related to the inverse temperature by this
formula.
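For concreteness, here is a minimal sketch of the three constraints just listed, written as Python predicates; the dictionary representation of a configuration and the convention for sites outside it are illustrative choices of mine, not from the talk.

```python
def east_constraint(omega, x):
    """East model on Z: vertex x may update iff its East neighbour is zero.
    omega maps sites to 0/1; missing sites are treated as 0 here, purely
    for illustration."""
    return omega.get(x + 1, 0) == 0

def north_east_constraint(omega, xy):
    """North-East model on Z^2: both the North and the East neighbour must be zero."""
    x, y = xy
    return omega.get((x, y + 1), 0) == 0 and omega.get((x + 1, y), 0) == 0

def fa2f_constraint(omega, xy):
    """Fredrickson-Andersen 2-facilitated model on Z^2: at least two of the
    four nearest neighbours must be zero."""
    x, y = xy
    nbrs = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return sum(omega.get(v, 0) == 0 for v in nbrs) >= 2
```

In each case the single-site dynamics is the same as before: with rate one, if the constraint holds, resample the spin from Bernoulli(p).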
Okay, so the constraint tries to mimic what physicists call the cage effect, namely the fact that a
mesoscopic part of the glass is not able to move unless the surrounding mesoscopic region has a
certain property, okay. The common feature of all the above models is the following: either there is an
ergodicity breakdown at some critical point, as for example for the Aldous-Diaconis chain at one half,
which shows exactly this behavior, okay, so the chain on the infinite graph is no longer ergodic; or the
process on the infinite graph is always ergodic, but the relaxation time becomes huge when p goes
to one. The relaxation time here is the inverse spectral gap of the generator of the process, okay.
An example of the first class, namely when there is an ergodicity breakdown, is the North-East model. In this case the critical value is the critical value for oriented percolation in two dimensions.
So if p is above the critical point for oriented percolation, then with positive probability
there are infinite oriented paths of ones in Z^2. These paths are always blocked, okay.
Examples instead where the process is always ergodic but the relaxation time becomes huge when p
is close to one are the East model and the two-facilitated model. In the East case the relaxation
time is essentially as displayed; this asymptotic sign here means that we don't control exactly the
constant c, okay, so there will be an upper and a lower bound with different constants. But apart from
that the expression is this, and n is log base two of one over one minus p. Surprisingly it's possible to
have such an explicit formula, but this is actually related to some deep facts about the East model that
maybe I will be able to mention at the end. Instead, for the two-facilitated model, the relaxation time
diverges exponentially fast in one over one minus p, with a specific constant, okay. Maybe I will explain why
this constant arises, okay.
Okay, so the general results that are available were obtained in collaboration with Cancrini,
Roberto, and Cristina Toninelli, not to be confused with her brother Fabio; Cristina is
in Paris. They say that, under very general conditions, the critical point where the ergodicity breakdown
occurs coincides with the critical point of the corresponding bootstrap percolation. Consider for
example the model here where you require at least two neighbors, okay, and do bootstrap
percolation with this constraint: namely, generate i.i.d. Bernoulli variables on the lattice according to Pi.
Then, whenever a vertex has at least two neighbors that are zero, set that vertex equal to zero and
continue the operation, okay. This is called bootstrap percolation.
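Here is a minimal, unoptimised sketch of the bootstrap map just described, on a finite box of Z^2 with the two-neighbour rule; the boundary convention (no help from outside the box) and the function name are my own choices.

```python
import random

def bootstrap_2neighbour(p, n, seed=0):
    """Start from i.i.d. Bernoulli(p) ones on an n x n box, then repeatedly
    set to 0 every vertex with at least two neighbours equal to 0, until
    nothing changes; return the final configuration.  The final result does
    not depend on the sweep order."""
    rng = random.Random(seed)
    grid = [[1 if rng.random() < p else 0 for _ in range(n)] for _ in range(n)]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                if grid[i][j] == 1:
                    zeros = sum(
                        1
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= i + di < n and 0 <= j + dj < n
                        and grid[i + di][j + dj] == 0
                    )
                    if zeros >= 2:
                        grid[i][j] = 0
                        changed = True
    return grid
```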
The question that you ask is: after infinitely many steps of this map, what is the chance that,
say, the origin is still one? Okay, so it was proved long ago, and Holroyd here has a beautiful work about
that, that this probability is zero. But it was also shown that if p is close to one, then in order
to see that this probability really becomes zero after infinitely many steps of the bootstrap map, you
need to observe the system on a scale that is exponentially large in one over one minus p, with a precise
constant that is exactly this pi squared over eighteen that I mentioned before.
For that reason, when people were doing simulations, they were doing them on too small systems,
and they were actually claiming that there was a phase transition when instead the phase transition was
absent. Okay, so this is the bootstrap map. What we prove is that the ergodicity threshold always
coincides with the threshold for the bootstrap map, and that if p is below this threshold the process on the
infinite graph is always ergodic with a finite relaxation time, so a positive spectral gap, okay.
Now, out of equilibrium means, for example, that you start from an i.i.d. distribution with a density
different from that of Pi and ask whether the distribution of, say, the spin at the origin, or at some given vertex,
converges in time to p. Okay, this is a very basic question, and it has
actually been answered only in a few models, namely the East model and models on trees. And mixing times,
so not relaxation times but mixing times, are much harder, because you really have to look at what the dynamics
does. There is no monotone coupling; all the monotonicity arguments are missing, so coupling is
very hard.
Also the critical behavior, namely what happens exactly when you are at the critical point, is
particularly hard. The Aldous-Diaconis chain on the binary tree is so far the only model for which a
recent answer was provided. Okay, so the main result. I recall that the mixing time is the
minimum time such that, for any starting point, the total variation distance from Pi of the distribution
at that time is less than, say, one quarter. So with Cristina Toninelli we proved that the mixing time grows
like L, the depth of the tree, times the relaxation time; that's general. Then, below
criticality the relaxation time is of order one, and above criticality the relaxation time is exponentially large.
Therefore below criticality the mixing time is linear, okay, and above it is exponential.
Then, with other coauthors, at criticality we proved that the relaxation time instead grows at least like L
squared and at most like L squared plus a bit, L to the two plus alpha. Still the mixing time
grows like L, the height, times the relaxation time, so there will be an L cubed factor. We also proved that if
you are right below criticality, so p is a little bit below p_c, then on the infinite binary tree the relaxation
time is finite. That's a general result, but it actually diverges, as p grows to p_c, like the inverse of a
polynomial in the difference between p_c and p, okay.
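Putting the last two statements together in symbols, as I read them from the prose (the constants and the exponent alpha are not made explicit here):

\[
T_{\mathrm{mix}}(L)\;\asymp\;L\cdot T_{\mathrm{rel}}(L),
\qquad
T_{\mathrm{rel}}(L)\;=\;
\begin{cases}
O(1), & p<\tfrac12,\\[2pt]
\text{exponentially large in } L, & p>\tfrac12,\\[2pt]
\text{between } cL^{2} \text{ and } CL^{2+\alpha}, & p=\tfrac12,
\end{cases}
\]

so the mixing time is linear in L below criticality, exponential above, and of order L cubed (up to the L^alpha correction) at criticality.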
So let me describe maybe the easy part, okay. Now the constraint looks at the children, so it is
oriented towards the leaves, okay. Because of that, if you project the chain on, say, the two branches that
originate from the root, then each projected chain is still a Markov process. So it is easy
to deduce that if the two projected chains at a certain time are at L^2 distance from Pi that is, say, one,
then after an extra time that is a constant times the relaxation time, the inverse of the gap, the big chain has
the same property.
>>: Can you attempt to [indiscernible] can you go back one slide?
>> Fabio Martinelli: Yes.
>>: So the fact that T mix is L times T rel. So that's true even, I guess, in the previous case, one
more slide back, when you had exponential relaxation time. So even there it's [indiscernible].
>> Fabio Martinelli: Yeah.
>>: Now the [indiscernible] is very different, because in the case [indiscernible] when you have exponential
relaxation you don't lose the L. Here you still lose the L even in the supercritical case?
>> Fabio Martinelli: I would say so, but, okay, I really have to check. I think that
this was, okay, sorry, I have to go forward. Okay, so in any case the L^2 mixing time, namely
the first time at which you reach, say, distance one from Pi in L^2, satisfies the following relation: the one
on depth L is less than the one on depth L minus one plus a constant times the relaxation time. So T_2 grows at most like L
times the relaxation time. The mixing time, which is the mixing time in L^1, is less than the
mixing time in L^2, so you have this bound.
Now the lower bound. This is similar to a result that Ding, Lubetzky, and Peres obtained for Ising models on
trees. The key ingredient is that the mixing time of a product chain with n identical factors is at least of
order log n, roughly half log n, times the minimum of the inverse gaps of the individual chains,
okay. So in our case, if you for example cut the tree in two and you look at all the
branches of depth L over two down below, they are all independent and identical chains. Therefore we
get immediately that, since the number of these depth L over two trees is
exponentially large, this log gives you an L, and then you have the relaxation time of the depth L over two
chain, okay.
Now the lower bound at criticality. Okay, so the typical strategy here is to use a test function in the
variational characterization of the spectral gap. A good test function, at least one that worked, is the
cardinality of the cluster of ones attached to the root, okay. A computation, using the
fact that p is one half, shows that the variance goes like L cubed, while the Dirichlet form of
this test function grows like L, so the relaxation time grows at least like L squared.
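Written out, the variational bound just used reads as follows, with f the size of the cluster of ones containing the root:

\[
T_{\mathrm{rel}}\;=\;\mathrm{gap}^{-1}\;\ge\;\frac{\operatorname{Var}_{\pi}(f)}{\mathcal{D}(f)},
\qquad
\operatorname{Var}_{\pi}(f)\asymp L^{3},\quad \mathcal{D}(f)\asymp L
\quad\Longrightarrow\quad
T_{\mathrm{rel}}\;\gtrsim\;L^{2}
\]

at p = 1/2, which is the lower bound just stated.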
For p bigger than one half, for example, the indicator that the root is connected to the leaves by a path of
ones gives a good cut set. The reason is that such a path is connected by many other paths to the
leaves, so it's very hard to detach the root from the leaves. In order to detach the root from the
leaves you need the root to be connected to the leaves by just a single path, which is a very unlikely
event, okay. So you do this computation and you get an exponentially small cut set, okay, and therefore
the relaxation time is at least exponential, okay.
The hard part is instead to get the upper bound on the relaxation time. Here we use a martingale
decomposition of the variance, which was actually inspired by work I did ten years ago with Sinclair
and Weitz for the Ising model on trees, and then a comparison with an auxiliary chain in which the
constraint is not just one level below, checking the state of the two children, but checks
little-ell levels below, so you have a long-range constraint. The constraint is the following: a vertex
is free to flip if, within little-ell steps of the bootstrap map, it can be made zero. Roughly this means
that if you look at the little-ell levels below x, you find some pattern of zeros that is able to climb up and
free the vertex, okay.
If p is less than one half, it is enough to choose this free parameter to be a large constant depending
on p to get that the relaxation time of this auxiliary chain is less than, say, two, and then you do
a little comparison between the chains. At criticality the situation is more complicated, because one has to
choose this free parameter as a function of the total length of the chain, and one is forced to do a
multiscale analysis that was again inspired by the work of Ding, Lubetzky, and Peres for the critical
Ising model on trees.
So here is an open problem that we have struggled with for a while, without coming up with even a
realistic guess in either direction. Consider the Aldous-Diaconis chain but on
a ternary tree. So now each vertex has three children, and the constraint is now that at least two
children out of three are zero, okay.
It's possible to compute the critical point just by computing the bootstrap map, and the critical point
is eight over nine. However, although this looks maybe similar to the previous chain, there is a key
difference: at the critical point the probability that the root has an infinite blocked cluster, which
must necessarily contain a binary tree, is positive. So the probability that the root is connected to infinity on
the infinite ternary tree by a binary tree of ones is positive, equal to three over four, okay.
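One way to recover the numbers eight over nine and three over four, at least heuristically, is through the fixed-point equation of the bootstrap map on the infinite ternary tree: write q for the probability that a given vertex is eventually emptied, and condition on its own spin and its three independent subtrees (a vertex empties iff it starts at zero, or it is one and at least two of its three children eventually empty):

\[
q\;=\;(1-p)\;+\;p\bigl(3q^{2}(1-q)+q^{3}\bigr),
\qquad\text{equivalently}\qquad
p\;=\;\frac{1}{(1-q)(1+2q)} .
\]

The right-hand side is minimised over q in (0,1) at q = 1/4, where it equals 8/9, so a non-trivial fixed point first appears exactly at p_c = 8/9, with q = 1/4 and hence a blocked probability of 1 - q = 3/4: the infinite blocked cluster appears discontinuously, as just described.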
Instead, for standard percolation on a tree, the probability that you have an infinite cluster
at the critical point, one over the number of children, is zero, okay. That was heavily used in
the proof, okay. So regarding the phase transition, people would say that this is a first-order phase transition
because suddenly, at criticality, an infinite cluster appears. Now the question is: when p is equal to this
eight over nine, is the mixing time still polynomial in L, like for the Aldous-Diaconis chain? We tried
various cut sets to prove that it's not, and we failed; you always get a polynomial. We did some
simulations and they seem to indicate polynomial, so I don't know, it's very hard, okay, this is very
hard. This is a very interesting…
>>: But below and above it's the same criticality?
>> Fabio Martinelli: Yeah, yeah, yeah, below and above it's the same criticality. This is related to some
property of this infinite cluster that physicists call fragility. They say the cluster is fragile; maybe it's
possible to break this infinite cluster in a polynomial number of steps, okay, and simulations indicate that's
true, okay. But we're not really able to say anything more than that.
Okay, so cutoff for the mixing time on the binary tree. Of course it's pretty reasonable to ask whether
the chain shows cutoff, okay. Let me recall that T mix of a parameter epsilon, with epsilon between zero and
one, is the mixing time to reach total variation distance not one fourth but, say, epsilon, okay. If
epsilon is small you need more time, because you want to be closer to stationarity; if epsilon is close
to one you allow a more relaxed variation distance. Okay, so let me recall the definition of
cutoff. A collection of Markov chains shows total variation cutoff around some
cutoff point t_N with window w_N if, for all N and all epsilon between zero and one,
the mixing time of the N-th chain is t_N plus a correction of order w_N, where this O epsilon means
that you have to put a constant in front of w_N that depends on epsilon, okay.
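In formulas, the definition just given reads as follows (for the word cutoff to be meaningful one also wants the window w_N to be of smaller order than t_N):

\[
t^{(N)}_{\mathrm{mix}}(\varepsilon)\;=\;\min\Bigl\{t:\ \max_{x}\bigl\|P^{(N)}_{t}(x,\cdot)-\pi_{N}\bigr\|_{\mathrm{TV}}\le\varepsilon\Bigr\},
\qquad
t^{(N)}_{\mathrm{mix}}(\varepsilon)\;=\;t_{N}+O_{\varepsilon}(w_{N})
\quad\text{for all }\varepsilon\in(0,1).
\]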
Okay, so here is the result proved with Eyal Lubetzky, Shirshendu Ganguly, and [indiscernible].
Suppose that you are non-critical; then for most scales (most scales means not for all L but for
many L, namely there are many diverging sequences of L for which what I am about to state holds;
maybe I'll say more about what "most" means) the Aldous-Diaconis chain has cutoff. The center of the cutoff
is the expectation of the hitting time of the root, and the cutoff window is of order one. So it's very
concentrated, okay. So mixing is really achieved within order one of this time.
At criticality, so p equal to one half, again for at least a diverging sequence of scales, okay, there
is a cutoff with the same center. Now the window is the relaxation time, which at criticality
diverges polynomially in L, okay. So let me just give a sketch of the proof for the non-critical case.
Starting from all ones, when the system has managed to put a zero at the root, this
means that every vertex has been visited and has updated its state. Okay, it's not possible to reach the root
without updating all the vertices at some point, because the constraint requires that both children are
zero, okay.
Now if you do just the basic coupling, namely choose the same vertex to update and flip the same coin to
update it, then whenever a vertex is updated it forgets its starting
configuration and becomes a completely fresh Bernoulli variable. Therefore at this time all starting
configurations have coupled, okay, under the basic coupling, okay.
Now if, starting from all ones, the chain didn't manage to bring a zero
close to the root, then close to the root the configuration is still all ones, okay. That's very unlikely under Pi,
so the variation distance from Pi is still large, okay. Therefore, up to some small additional argument,
it's enough to prove order-one concentration for this hitting time, okay.
Now here come the good scales. Using the L^2 mixing time that is linear in L, it's possible to prove that the
average of tau_L grows linearly in L, okay. Therefore for some scales the increment between L
and L plus one must be bounded by a constant, okay; it cannot be more than a certain constant for all
scales, okay. So for any N there exists a scale between N and (1 plus delta) N such that this increment is
bounded by a constant that will depend on this delta, okay. So you can
construct a good sequence by choosing, for each N, the corresponding L, okay.
Now, because of the constraint, the hitting time for a tree of L plus one levels is at least the maximum
between the hitting times of the left and right branches, because you need to clean up both the left and right
branch in order to update the root, okay. Therefore there is a distributional inequality, okay, where
tau prime and tau double prime are two independent copies of the same variable.
So if L is a good scale, namely this difference between averages is of order one, then a simple
analysis using this distributional inequality shows that the average of the absolute value of
tau_L minus its mean, so the first moment, is bounded by the average of the
absolute difference of these two copies, okay, which, using these inequalities,
can be shown to be bounded by twice the difference of averages, which is of order one. Therefore the Markov
inequality shows that the concentration is of order one, okay. This is actually similar to the proof of
tightness of the maximum of branching random walk, a rather simple argument.
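One way to write out the chain of inequalities just sketched, with tau'_L and tau''_L independent copies of tau_L and the first relation denoting stochastic domination:

\[
\tau_{L+1}\;\succeq\;\max(\tau'_{L},\tau''_{L})
\;\Longrightarrow\;
\mathbb{E}\,\tau_{L+1}\;\ge\;\mathbb{E}\,\tau_{L}+\tfrac12\,\mathbb{E}\bigl|\tau'_{L}-\tau''_{L}\bigr|,
\]
\[
\mathbb{E}\bigl|\tau_{L}-\mathbb{E}\,\tau_{L}\bigr|\;\le\;\mathbb{E}\bigl|\tau'_{L}-\tau''_{L}\bigr|\;\le\;2\bigl(\mathbb{E}\,\tau_{L+1}-\mathbb{E}\,\tau_{L}\bigr)\;\le\;2C
\quad\text{at a good scale }L,
\]

and Markov's inequality then gives concentration on an order-one window.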
Then we also went on to consider cutoff for another model that is in some sense more complex,
and in other respects less complex, and this is the East model. So consider the East
model; remember that in the East model you look at the East neighbor, and if it is zero the
vertex is free to update, okay. Actually, I used to look at the East model from right to left, but then I got
convinced that it was better to look at it from left to right, so we decided to change to a West
constraint, okay, but of course you cannot change the name of a model, okay.
[laughter]
And, and…
>>: You say that East is where the process is moving.
>> Fabio Martinelli: Okay.
[laughter]
Okay, yeah, the process is going to the East, yeah. Okay, so you consider a finite interval
and say that, in order to get irreducibility, the first vertex is always unconstrained, okay. Then of course
again the mixing time is related to the hitting time of the right boundary of the interval, starting from all
ones, okay. So the same group proved that there exists a positive number, let's call it a velocity
v, such that the East model has cutoff at L over v, okay, with a window that is big O of root L, so at most of order root L.
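In a formula, the cutoff statement for the East model on an interval of length L reads as follows (the constant implicit in the O depends on epsilon):

\[
T^{(L)}_{\mathrm{mix}}(\varepsilon)\;=\;\frac{L}{v}\;+\;O_{\varepsilon}\bigl(\sqrt{L}\bigr)
\qquad\text{for every fixed }\varepsilon\in(0,1).
\]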
So why, why is that?
Well, say you start the chain from all ones, okay, and look at the process from left to right. As the
process evolves, it will first create a zero, then that zero may create another one, may die,
and so on and so forth. At any later time you will see a rightmost zero in the system that you would
like to call the front; so there is something like a wave, okay. Call X_t the location of the front; of course it is
random, okay. Recently Oriane Blondel in Paris, a student of Cristina Toninelli, proved
the following result: there is an asymptotic velocity for the East model on the half line. Namely,
there is a positive v, okay, such that X_t over t converges to v in probability as t goes to
infinity, okay. Then she also proved that if you look at the law of the process behind the front (so you
sit on the front, look behind you, and ask which law you see), then
as t goes to infinity this law converges to a unique invariant measure, which I call nu, okay. Actually
she first proved that, and then using that she was able to prove the law of large numbers, of course, okay.
Here, in order to prove cutoff, we had to prove a more quantitative statement, okay. What we prove
is the following: uniformly in the initial configuration, if you look at time t at the law of
the process behind the front, call it nu_t, then its total variation distance from the invariant
measure is bounded by a stretched exponential, exponential of minus t to some
power alpha less than one, okay. This we do via a coupling method that involves also a maximal
coupling, so it is not Markovian, yeah, okay.
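As I read them, the two statements about the front are, with X_t the front position, nu_t the law of the configuration seen from the front at time t (started from an arbitrary initial configuration), and nu the invariant measure behind the front:

\[
\frac{X_{t}}{t}\;\xrightarrow[t\to\infty]{\ \mathbb{P}\ }\;v\;>\;0,
\qquad
\sup_{\text{initial config.}}\ \bigl\|\nu_{t}-\nu\bigr\|_{\mathrm{TV}}\;\le\;C\,e^{-t^{\alpha}}
\quad\text{for some }\alpha\in(0,1).
\]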
Now, once you have a result like that, it's rather simple to deduce the cutoff result. What you do is
break the front into increments, okay: you call xi the increments over some fixed time width, okay.
The increments then tend to behave like a stationary sequence of weakly dependent variables,
because as soon as two increments are far apart in time you
can apply this result and get that the later one has forgotten where it came from, okay.
Once we have that, we get a law of large numbers and a CLT. Therefore the front has a big O of root t
concentration around its mean, the velocity times t, and as a consequence you get
concentration for the hitting time of the right boundary of the interval starting from all ones, okay.
So, yeah, I would like to conclude with three open problems. They are not in the open problem
session, but maybe, okay, anyway. The first problem is: on the tree, prove order-one concentration, say
below criticality, for all scales, not just for some subsequence, okay. That is highly non-trivial,
okay. We tried, but we decided that it can be quoted as an open problem.
[laughter]
Okay, now the next two problems concern the North-East model, and they are really interesting; I
think there is some very interesting probability behind them. The first of these concerns the mixing
time for the North-East model. Maybe I can just draw a picture here. In the North-East model you
require the North and East neighbors to be zero in order to update the vertex x. Of course, in order to
have ergodicity you need a boundary condition, say a bunch of zeros here, otherwise
nothing would ever move, okay.
Now you ask: what is the mixing time in this box if the side is L, okay? Below criticality we
know that the spectral gap is of order one, so independent of L, or has bounds independent of L, okay. If
you apply the trivial bound that relates relaxation time to mixing time, then you get L squared,
okay. But we have every reason to believe that the mixing time is linear in L with
[indiscernible]. We prove L times some poly-log, okay. But in a sense our argument really misses
some key feature of the North-East model, okay. So I think that this would be a really very interesting step
forward.
The third problem is even more ambitious, okay. It's a shape theorem, and it was actually
suggested by Steve Lalley. Okay, so consider the North-East model in the third quadrant, okay, put
say all zeros here, and start from all ones, okay. Then what you expect is that the first vertex to
update will be this one here, okay; once this is maybe zero then you can update also this one, and so on
and so forth.
So what you expect is that there is some random set, okay, at time t, of vertices that have been reset
at least once, okay. Call this set V_t, which is here, okay. Then, if you run simulations it's very
clear that this set actually has a very clear asymptotic shape, namely if you divide the set by
t and then send t to infinity this converges to a nice curve that looks like this, okay, which looks
very, very similar to the curve that you would get by running a symmetric simple exclusion process
for the corner problem, okay.
>>: [inaudible] you make it look like that's so easy. Can you move more slowly on the
[indiscernible]?
>> Fabio Martinelli: Yeah, yeah, well, on this line it's just the East model, so you know
that in time t you reach distance of order t, okay. On the next line you already need some help from the
previous line in order to move, and more help is needed as you go inside, okay. It will be like that,
okay. So this is really very, very challenging. But again, because of the absence of this monotonicity
property, in particular, a lot of useful tools are no longer there, and that's open, okay.
Thank you.
[applause]
>>: Any additional questions?
>>: So a related question to this third one is to understand, and to combine this with Blondel's result
to understand, stationary measures for a front with a given slope.
>> Fabio Martinelli: Yeah.
>>: So there should be one for any slope and…
>> Fabio Martinelli: Yeah, yeah, in fact I think that the way to attack that would really be to prove a
generalization of Blondel's result, namely to assume that there is something that brings it back to the
origin and prove exactly what you said, that there is a, you know…
>>: [inaudible]
>> Fabio Martinelli: An invariant measure behind [indiscernible], yeah.
>>: [inaudible]
>> Fabio Martinelli: Yeah, but there is a key tool that is missing here that you have in East, and it is the
following: for the North-East model, we are not able to use the fact that a nearby path of zeros
to a given point implies that at this point you reach equilibrium, say, very
fast. This is missing, okay. In East we have this result, and it's very important; it's essentially the
key, okay. Here it is missing; there are several preliminary results that would be needed for the
proof, okay.
>>: Thank you again.
[applause]