>> Yuval Peres: Good afternoon. There is a classic paper that
David Wilson wrote in 2004 on random adjacent transpositions, and
that paper had some open problems that several of us have looked
at over the years, and we're delighted to have Hubert Lacoin tell
us about the resolution of some of these problems. Please.
>> Hubert Lacoin: Thank you very much for the presentation and
for the invitation to come and speak here. So what I'm going to
talk about is the adjacent transposition shuffle and the simple
exclusion process on one-dimensional graphs, meaning both the
segment and the circle. Okay. So here is the adjacent
transposition shuffle on the segment. It's a Markov chain on the
symmetric group. We start with the identity, and we define it in
discrete time: at each step, to get the state of your Markov
chain from the state you had at the previous step, what you do is
you choose an adjacent transposition uniformly at random, which
is the transposition of (x, x+1), where x belongs to the set
{1, ..., N-1}, and you compose with it. And okay, by symmetry,
the unique invariant measure for the dynamics is the uniform
measure on the symmetric group, so after many steps you should be
close to a uniform permutation.
So okay, just for the sake of it, here's a graphical
representation that might be clearer than the formal definition.
You start from the identity, which I represent graphically on the
segment. What you do at each step is you choose a pair of
adjacent locations and you exchange their contents.
This is what you get at step one, and you do it over and over;
this is what you get after six steps. And okay, after many
steps, you should get something close to uniform: if you imagine
this is a deck of cards, after many transpositions your deck of
cards should be shuffled.
So the question is: how many steps do you need to shuffle the
deck of cards? Here there is a small technical problem, which is
that after n steps you have composed n transpositions, so by
looking at the signature of the permutation you have obtained,
you can tell the parity of the number of steps you've performed,
and you will always keep some memory of the initial condition.
There are two ways to bypass this small technicality. You can
either work with a lazy chain, where at each step you do nothing
with probability one half; or you can work in continuous time,
meaning that you put a Poisson clock on each pair of adjacent
sites, and when a clock rings, you compose with the corresponding
transposition. So if you put Poisson clocks with rate one on
each edge, you speed up the Markov chain by a factor N-1 compared
to the discrete-time one, so keep that in mind when comparing
results in the literature.
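As an illustration (my own sketch, not from the talk; the function names are hypothetical), the lazy discrete-time version of the shuffle can be simulated in a few lines:

```python
import random

def lazy_shuffle_step(sigma):
    """One step of the lazy adjacent transposition shuffle: with
    probability 1/2 do nothing, otherwise swap a uniformly chosen
    adjacent pair (x, x+1)."""
    sigma = list(sigma)
    if random.random() >= 0.5:
        x = random.randrange(len(sigma) - 1)
        sigma[x], sigma[x + 1] = sigma[x + 1], sigma[x]
    return sigma

def run_shuffle(n, steps, seed=0):
    """Run the chain from the identity permutation for a given
    number of steps and return the resulting permutation."""
    random.seed(seed)
    sigma = list(range(n))
    for _ in range(steps):
        sigma = lazy_shuffle_step(sigma)
    return sigma
```

The continuous-time version mentioned above would instead attach an independent rate-one Poisson clock to each of the N-1 adjacent pairs.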
Okay. So I will be interested in the distance to equilibrium in
terms of two distances, which are the total variation distance
and the separation distance, so I guess all of you know what they
are. The total variation distance measures how well you can
couple two measures, that is, define them on the same probability
space. The separation distance is also something that appears
very often in the context of mixing of Markov chains. And okay,
we denote by d(t) and d_sep(t) the total variation and separation
distances to equilibrium, respectively. And here, I said that I
start from the identity, but in fact, by symmetry, it doesn't
really matter which initial condition I start from: the distance
does not depend on the initial condition.
Okay. So what I call the mixing time is the first time at which
your distance to equilibrium drops below a threshold epsilon;
epsilon is just a fixed parameter. And what we want to know is
the asymptotic behavior of this mixing time when the size of your
system N tends to infinity.
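To make the definition concrete, for very small N one can compute d(t) and the epsilon-mixing time exactly by evolving the full distribution of the lazy chain. This is only a brute-force illustration of the definition, nothing from the talk; all names are mine:

```python
from itertools import permutations

import numpy as np

def tv_mixing_time(n, eps=0.25):
    """Smallest t with d(t) <= eps for the lazy adjacent
    transposition shuffle on S_n, by brute force over all n! states."""
    states = list(permutations(range(n)))
    index = {s: i for i, s in enumerate(states)}
    m = len(states)
    P = np.zeros((m, m))
    for s in states:
        i = index[s]
        P[i, i] += 0.5                       # lazy half
        for x in range(n - 1):
            u = list(s)
            u[x], u[x + 1] = u[x + 1], u[x]  # adjacent transposition
            P[i, index[tuple(u)]] += 0.5 / (n - 1)
    mu = np.zeros(m)
    mu[index[tuple(range(n))]] = 1.0         # start at the identity
    pi = np.full(m, 1.0 / m)                 # uniform equilibrium
    t = 0
    while 0.5 * np.abs(mu - pi).sum() > eps:
        mu = mu @ P
        t += 1
    return t
```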
Okay. So I will recall some results from the literature
concerning this process. So okay, very often, Markov chains on
the symmetric group are used as a benchmark to test new
techniques in the study of mixing times, so this is a central
object in the area. A first result of historical importance is
by Diaconis and Shahshahani. It's a study of the discrete-time
process for the complete graph, with uniform random
transpositions. And what they showed is that the mixing time was
of order one half N log N. So okay, there is no dependence on
epsilon on the right-hand side; this just means that the leading
order does not depend on epsilon. I mean, around this time the
distance to equilibrium drops abruptly from one to zero.
Since this paper, there has been a lot of research concerning
random or semi-random transposition shuffles. Concerning the
adjacent transposition shuffle, the first reference I found was
by David Aldous in '93. And what he proved is that the mixing
time was at least of order N squared and at most of order
N squared log N.
>>: [indiscernible].
>> Hubert Lacoin: Yes, now I switch to continuous time. And
many people were convinced that N squared log N was the right
order, but some technicalities prevented a proof. In 2004, David
Wilson showed that, asymptotically, the mixing time was somewhere
between (1/(2 pi squared)) N squared log N and
(1/pi squared) N squared log N, and he conjectured that the lower
bound was sharp, with some convincing heuristics.
Okay. So now just a tiny bit of motivation. So let's imagine
that we want to study a very complicated system, like a system of
particles with positions and velocities, which can bounce in
random directions. And you want to know: if you start from
something like that, how much time do you need to forget about
the initial condition? And somehow, if you want to simplify this
picture, the feature that you really want to keep is that the
interactions are local: the particles, at least for short times,
only care about what their neighbors are doing. And okay, the
simplest way to have a particle system with particles interacting
with their neighbors is really the adjacent transposition shuffle
on the segment, if you want to distinguish between the particles,
or the simple exclusion process, if you want to have unlabeled
particles and forget about the labels.
Okay. So now I can present my result concerning the adjacent
transposition shuffle. Specifically, what I showed is that the
conjecture of David Wilson was right: the mixing time is indeed
asymptotically equivalent to (1/(2 pi squared)) N squared log N,
and the separation mixing time is just twice as large. So if you
look at the distance to equilibrium on the time scale I mentioned
before, it drops abruptly from one to zero around
(2 pi squared)^(-1) N squared log N. And a natural question is
on what kind of window you should zoom in order to see a smooth
transition.
So this phenomenon, the drop from one to zero, is called cutoff,
and the term was coined by Aldous and Diaconis, I think. And
this window is just called the cutoff window; it's still an open
problem for the adjacent transposition shuffle, and the natural
conjecture is that it should be [indiscernible].
>>: [indiscernible].
>> Hubert Lacoin: I don't get good quantitative bounds. I guess
maybe some improvements... the best you can hope for is really
log log N, that is, N squared times log log N. I don't know if
you can really get there. Okay. So now... yeah?
>>: [indiscernible].
>> Hubert Lacoin: So this should be [indiscernible].
>>: [indiscernible].
>> Hubert Lacoin: So the lower bound, the lower bound N squared
is [indiscernible].
>> Hubert Lacoin: Okay. So now let us look at a kind of
simplified process, which is, okay, instead of looking at the
positions of all your cards, you decide to paint K of them black
and N minus K white, to consider K cards as particles and the
other cards as empty sites, and you look only at the motion of
the particles. And okay, it's not true that the projection of a
Markov chain is always a Markov chain, but here it's the case,
and it's an almost obvious statement. And if you want to
describe the dynamics directly, what happens is that the
particles just jump to their neighboring sites with rate one, and
if a particle tries to jump onto a site which is already
occupied, the jump is cancelled.
So here's how you can define the process formally. So the
particles are the ones and the empty sites the zeros. The state
space is the set of 0-1 configurations with exactly K ones, and
an alternative way to define the process is to say that, okay,
you put Poisson clocks on pairs of adjacent sites, and when a
clock rings, you exchange the contents of the two sites.
Basically, I mean, if you have two particles, or two empty sites,
you won't see this exchange because you don't distinguish between
the particles.
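A minimal simulation of this exclusion dynamics (my own sketch, with hypothetical names): exchanging the contents of a uniformly chosen adjacent pair implements the blocked jumps automatically, since exchanging two particles or two empty sites is invisible.

```python
import random

def exclusion_step(eta):
    """One update of the simple exclusion process on the segment:
    exchange the contents of a uniformly chosen adjacent pair."""
    eta = list(eta)
    x = random.randrange(len(eta) - 1)
    eta[x], eta[x + 1] = eta[x + 1], eta[x]
    return eta

def run_exclusion(n, k, steps, seed=1):
    """Run the chain from the configuration with k particles
    packed on the left of a segment of length n."""
    random.seed(seed)
    eta = [1] * k + [0] * (n - k)
    for _ in range(steps):
        eta = exclusion_step(eta)
    return eta
```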
And for this process as well, the equilibrium measure is the
uniform measure on this set of configurations. So now, when you
define the distance to equilibrium, and the same goes for the
separation distance, for this process you have no symmetry
anymore, so the configuration you start with does matter: when
you define the distance to equilibrium, what you do is you take
the maximum over all initial conditions, both for the total
variation distance and for the separation distance.
And then you define the mixing time and the separation mixing
time in the same manner as before. So the result I obtained is
that, okay, when you have K particles, and the number of
particles grows with N, the mixing time is asymptotically
equivalent to (1/(2 pi squared)) N squared log K, and the
separation mixing time is twice as large. And okay, I take the
number of particles smaller than N over 2, but this is no
restriction because there is a duality between the particles and
the empty sites. So here you need a lower bound on the growth of
K if you want to get the lower bound for the separation mixing
time; that's a technicality, so the second line is not exactly
proven for small K. Okay. And
finally, a last result is when you consider the process on the
circle instead of the segment. So in that case, okay, again you
get cutoff in total variation. In fact, I use a different method
there, and you can push things further and get the exact cutoff
window. So these three lines just say that, okay, if you zoom on
a window of width N squared, you will see something, and if you
zoom on something smaller, you won't see anything. Okay. And
here, I wrote some limits, but you should think about the
statement as one for the limsup and the liminf: we have no limit.
I guess the limit exists, but I can't prove it.
Okay. So as I told you, the methods I use for the segment and
for the circle are quite different. So, in fact, when I say that
the methods are different, I'm talking about the upper bound,
because the proof of the lower bound is basically the same and is
essentially taken from Wilson's paper. And so the advantage of
the method that I will develop for the segment is that it relies
on monotonicity; in particular, I use the censoring inequality of
Peres and Winkler and also the FKG inequality. On the positive
side, you can use it to derive results both for the adjacent
transposition shuffle and for both distances, total variation and
separation. On the down side, I mean, it does not export to more
general graphs because, in fact, you really need to be on the
segment to have a natural order on the space of configurations.
Okay. For the circle, I used a comparison result developed by
Liggett, which compares the exclusion process on the circle with
a system of independent random walks. And so on the positive
side, you can get a sharp result for the cutoff window, and in
fact that half of the proof would work if you were in higher
dimension, say on a two-dimensional grid. On the down side, you
don't get the result for the adjacent transposition shuffle, and
I don't see how you can get a result for the separation distance
with this method.
Okay. So during the rest of the talk, I will mainly stick to the
segment. So first, what I will do is provide a small review of
the proof of the lower bound on the mixing time, which is the
same for the circle and the segment, and then I will focus on the
case of the simple exclusion process on the segment with N over 2
particles and, okay, first give a proof of an upper bound that
matches within a constant factor, which was already known, and
then explain how you can improve this bound and get the sharp
result by using censoring arguments.
>>: Isn't the lower bound just [indiscernible]?
>> Hubert Lacoin: [indiscernible].
>>: [indiscernible].
>> Hubert Lacoin: Yeah. Yeah. I will do it quickly. Okay.
>> Hubert Lacoin: And also, it gives some idea of the heuristics
of why it should be true. Okay. So finally, I will come back to
the circle and give a brief overview of the proof for the circle.
So okay, let me set up some notation. Let's say that I start
from an initial condition k. And okay, I call u(x, t) the
probability of having a particle at site x at time t when
starting from k. And a simple observation you can make is that,
if you look at this expectation, in fact, it is a solution of the
discrete heat equation.
So a simple way to see that is to note that the expected density
of particles is the same as for independent random walks, and the
discrete Laplacian is just the generator of a simple random walk.
So a simple corollary of this fact is that if you look at the
first Fourier coefficient, so let's define A1(eta) to be the sum
over x from 1 to N of eta(x) cos((x - 1/2) pi / N), what you have
is that the expected value of A1(eta_t) decays exponentially
fast, with rate lambda_1 = 2(1 - cos(pi/N)), which is equivalent
to pi squared over N squared. And okay, this rate is just the
eigenvalue of this function for the generator of the random walk.
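A quick numerical sanity check (my own, with hypothetical names) that f(x) = cos((x - 1/2) pi / N) is indeed an eigenfunction of the generator of the simple random walk on {1, ..., N} with reflecting ends, with eigenvalue -lambda_1 = -2(1 - cos(pi/N)), which is about -pi^2/N^2:

```python
import math

def is_first_eigenfunction(n, tol=1e-9):
    """Check (Lf)(x) = -lambda_1 * f(x) for f(x) = cos((x - 1/2)*pi/n),
    where (Lf)(x) = f(x+1) + f(x-1) - 2 f(x) and jumps off the two
    ends are suppressed (reflecting boundary)."""
    theta = math.pi / n
    f = [math.cos((x - 0.5) * theta) for x in range(1, n + 1)]
    lam = 2.0 - 2.0 * math.cos(theta)
    for x in range(n):
        left = f[x - 1] if x > 0 else f[x]       # reflected at 1
        right = f[x + 1] if x < n - 1 else f[x]  # reflected at n
        if abs((left + right - 2.0 * f[x]) + lam * f[x]) > tol:
            return False
    return True
```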
Okay. So suppose one starts from the configuration that has all
of its particles packed on one side. Whether you work with the
particle system or with the height function, the function you
consider is cos((x - 1/2) pi / N). So if you start with all the
particles on the left half of the segment, what you have is that,
initially, A1 is of order N. And now, if you sample from
equilibrium, basically your eta(x) will be almost i.i.d.
variables, so A1 is just a sum of independent variables of finite
variance. So typically, the variance of A1 is of order N, and A1
itself is of order the square root of N.
Okay.
>>: So [indiscernible].
>> Hubert Lacoin: [indiscernible], yeah. So in order for your
system to come to equilibrium, what you could think is that,
okay, you need the expectation of A1 to drop at least from this
value of order N to the equilibrium scale of order the square
root of N, so that the expectation gets kind of lost in the
fluctuations. If you want to turn that into a rigorous proof,
what you have to control is the variance of A1(eta_t) for all
times. So what you can use is that exp(lambda_1 t) A1(eta_t) is
a martingale, and then, bounding the quadratic variation of this
martingale, what you get is that the variance of A1(eta_t) is
bounded by N. So the variance is never larger than the
equilibrium variance.
And as a consequence of that, if t is smaller than
(1/2 - delta) lambda_1^(-1) log N, what you have is that, with
high probability, A1(eta_t) is larger than some constant times
N^(1/2 + delta). And then from this, you can conclude that the
distance to equilibrium is close to one, because this is not the
equilibrium value.
So then, if you replace lambda_1 by its asymptotic equivalent
pi squared over N squared, what you get is that the lower bound
for the mixing time is (1/(2 pi squared)) N squared log N. So a
reason to believe that this gives you the right mixing time is
that, okay, not only does this Fourier coefficient decay
exponentially fast, but all the others do too, and the others
decay with a larger rate, okay. So that basically, I mean, this
function should be the one that converges to equilibrium in the
slowest manner. But you cannot turn this argument into a real
proof because, okay, this thing provides you only N minus one
eigenfunctions, and your Markov chain lives in a space of much
larger dimension, so you can't do the full spectral analysis. So
in order to find an upper bound, you need to be smarter and to
use monotonicity. So right now, I need a picture from the
screen.
Okay. So the first thing we do is basically, we kind of change
the Markov chain we study to an isomorphic one. So we choose to
study the height function of the particle system, which is
defined as follows: each time you see a particle, you make your
height function climb by one, and each time you see an empty
site, you go down by one. And because you have N over 2
particles, you end up at zero. And if you look at what the
transitions for the dynamics of the height function are, they
correspond to flips of corners in your height function: when you
see a local maximum, you can turn it into a local minimum, and a
local minimum into a local maximum. And okay, we equip this set
of functions with the natural order, which is that we say that
one height function is larger than another if its graph lies
above the other.
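The mapping just described, and the corner moves, can be written down directly (my own sketch, with hypothetical names):

```python
def height_function(eta):
    """Height function of a 0-1 configuration: climb by one at each
    particle, go down by one at each empty site; with n/2 particles
    the path ends at zero."""
    h = [0]
    for site in eta:
        h.append(h[-1] + (1 if site == 1 else -1))
    return h

def flippable_corners(h):
    """Coordinates x where the height function has a local extremum,
    i.e. where one exclusion move (a corner flip) is possible."""
    return [x for x in range(1, len(h) - 1) if h[x - 1] == h[x + 1]]
```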
So we call this the corresponding partial order. So I will
continue on the board now. And so the first remark to make is
that this order is somehow conserved by the dynamics: the
dynamics is monotone, meaning that we can find a grand coupling,
that is, a coupling of the Markov chains starting from all
initial conditions, in a manner such that if two initial
conditions are ordered, then they remain ordered for all times.
Okay. And so if you try to construct the coupling in the most
naive way, what you would do is to say: okay, I put a Poisson
clock on each coordinate, and when it rings, if I see a corner, I
will flip it. But this would not provide you with a monotone
coupling.
What you're going to do instead is to say: for each site, I will
take not one clock process, but two clock processes. For each x,
the two clock processes are T_x^up and T_x^down. If T_x^up rings
and x is a local minimum, I turn it into a local maximum, and if
T_x^down rings and x is a local maximum, I turn it into a local
minimum. So you use two different processes for up-flips and
down-flips; this is the right way to do things, and checking that
this coupling is monotone is a kind of straightforward proof.
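The two-clock construction can be sketched as follows (my own code, hypothetical names); the point is that the same site and the same direction are applied to every copy, which is what preserves the order:

```python
import random

def grand_coupling_step(paths, rng):
    """One clock ring of the grand coupling, applied simultaneously
    to every height function in `paths`: an 'up' ring turns a local
    minimum at x into a local maximum, a 'down' ring the reverse."""
    n = len(paths[0]) - 1
    x = rng.randrange(1, n)        # interior coordinate
    up = rng.random() < 0.5        # which of the two clocks rang
    for h in paths:
        if h[x - 1] == h[x + 1]:                    # corner at x
            if up and h[x] == h[x - 1] - 1:         # local minimum
                h[x] = h[x - 1] + 1
            elif not up and h[x] == h[x - 1] + 1:   # local maximum
                h[x] = h[x - 1] - 1
```

Running the highest and the lowest paths together, one can check that the order between them is never violated.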
So, in fact, using this idea, you can get an upper bound for the
mixing time that matches the lower bound up to a constant factor.
Okay. So what you want is to bound the total variation distance
to equilibrium, and you know that this is smaller than the
maximum, over pairs of initial conditions, of the distance
between the distributions at time t started from those
conditions; this is the triangle inequality. And then you want
to bound this quantity.
So the total variation distance is obtained by picking the best
coupling between these two probabilities and seeing with what
probability they differ; so in particular, if you take any given
coupling, it will give you an upper bound on the total variation
distance. So if we take our grand coupling, this is smaller than
the probability that, under the grand coupling, the two copies
have not coalesced. And now, because we have a dynamics that
conserves the order, we know that the two paths that will be the
last ones to couple are the highest one and the lowest one. So
now you can write the bound in terms of these two, and this is
because, well, I mean, basically, once these two have coalesced,
all the other initial conditions are sandwiched between them, so
they will have coalesced too. Yeah. And okay. So this is
>>: Wouldn't you upper bound [indiscernible]?
>> Hubert Lacoin: Yeah. Okay. So now what you get is that this
is the probability that A1 of the difference between the two
copies is positive; by definition of the order, this difference
is nonnegative. And okay, when this quantity is positive, it's
larger than 1 over N, so you can say that this is smaller than N
times the expectation of the difference of the two A1
coefficients; okay, just symmetry here. You can say that this is
smaller than N cubed times exp(-lambda_1 t), just by using the
exponential decay of this quantity. And then, taking t equal to
lambda_1^(-1) times 4 log N, we get that this is smaller than N
to the minus one.
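Written out, the chain of inequalities on the board is (my reconstruction, with h^v and h^w denoting the highest and lowest paths under the grand coupling):

```latex
d(t) \le \mathbb{P}\bigl(A_1(h^{\vee}_t) - A_1(h^{\wedge}_t) > 0\bigr)
     \le N \,\mathbb{E}\bigl[A_1(h^{\vee}_t) - A_1(h^{\wedge}_t)\bigr]
     \le N^{3} e^{-\lambda_1 t},
\qquad\text{so } t = 4\lambda_1^{-1}\log N
\;\Rightarrow\; d(t) \le N^{3} \cdot N^{-4} = N^{-1}.
```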
So here we've lost a constant 8 in the game, and the reason is
that, basically, for the lower bound we wanted A1 to drop only to
N^(3/2) (the threshold, which was N^(1/2) in the particle
notation, becomes N^(3/2) for the height function), while here we
require it to be smaller than 1 over N; this explains the
discrepancy. And okay, so basically, what we need is some other
idea in order to improve this upper bound. Now we use a picture
again. So the reason why it's difficult to control the mixing
time of this object is that it's very high dimensional. So one
thing we can try to do is to project onto a lower dimensional
process, which I call the skeleton: instead of caring about the
whole height function, we only care about the K minus one points
that are equally spaced on the segment. Okay. So this is what I
will care about. And what we want to say is that, basically,
once this skeleton is at equilibrium, then shortly afterwards,
the whole system is at equilibrium. And the tool that we want to
use to say that is the censoring inequality. Basically, what we
will do is, after some time, freeze the dynamics at these points,
so that after the skeleton has reached equilibrium, what we need
is only to mix smaller systems. So I only need this picture; I
will continue on the board. And okay. So the idea is, first, we
want to show that at time
T_0 = (1/2 + delta) pi^(-2) N squared log N, the skeleton is at
equilibrium, okay, and that then things become okay. And second,
I will give you ideas of how you prove this. So what we define
is some censored dynamics eta'(t), and here I assume that I start
from the very top initial condition. And, in fact, it's not true
that it has to be the worst
>>: [indiscernible].
>> Hubert Lacoin: Sorry?
>>: What is the coefficient [indiscernible]?
>> Hubert Lacoin: One half plus delta, okay.
>>: [indiscernible].
>> Hubert Lacoin: 1 over pi squared, yeah.
>>: [indiscernible].
>> Hubert Lacoin: I think of... okay. I first select delta and
then I take K equal to one over delta. Thank you. So I define
eta'(t), which is a modified dynamics, which just coincides with
eta(t) up to time T_0, and for which the skeleton is frozen for t
larger than T_0. Freezing the skeleton means that, okay, if I
call x_i the coordinates of my skeleton, which are just
x_i = i times N over K, what I will do is ignore the clocks at
the x_i for i equals one to K minus one.
And the result, which was, yeah, stated around 2001 and appeared
in a paper around 2011 by Peres and Winkler, is called the
censoring inequality, and a way to read it in our particular
setup is to say that if you look at the distance to equilibrium
for your original dynamics, then it's smaller than the distance
to equilibrium for your modified, censored dynamics. And this is
valid provided you start from the maximal initial condition. So
here I'm kind of hiding something about how one proves the
general result, but if I have time to explain how things work for
the adjacent transposition shuffle, you'll see that the main
interest of this method is for the adjacent transposition
shuffle, where you can start from any condition you want.
So now, if you look at times after T_0, your dynamics just
becomes, conditionally on the skeleton, K independent dynamics on
systems of size L, which is N over K. So each of these systems
does not have exactly L over two particles, but what I said
before still applies. So the small systems are mixed within a
time of order L squared log L; by mixed, I mean something that
does not depend on K: they will be really close to equilibrium,
so that the product of the conditional laws will also be close to
equilibrium.
And this time is just N squared log N divided by K squared. So
what this tells you is that for T equal to T_0 plus
delta N squared log N, I am well after this mixing time, and
therefore, conditionally on my skeleton, my system is at
equilibrium. So if I know that at T_0 my skeleton was at
equilibrium, the full system will be at equilibrium.
Okay. So now what remains to prove is that the skeleton is
indeed at equilibrium at time T_0. So here, I told you that if
you want to get a sharp bound, you have to take K large. I will
do the proof for K equal to one, and I hope this will be
sufficiently convincing to show you that the proof is also right
for K as large as you want. So okay. Another consequence of
monotonicity is that if you look, at any time, at the density of
your distribution with respect to equilibrium, it's an increasing
function, in the sense that if you evaluate this function at a
configuration which is above another, the value of the function
will be larger. And again, this relies on the fact that you
start from the maximal configuration, and this property is also
stable under projection. So what I will do now is to look at the
distribution of the middle point. So I will look at the height
function at coordinate N over two, and I call this quantity
eta bar. I call nu bar the law of eta bar at time T_0, and I
call nu bar_eq the equilibrium law of eta bar. And what I have
is that d(nu bar)/d(nu bar_eq) is an increasing function; it's
defined on a subset of Z. And okay. So what I know is
that if I look at the expectation of eta bar under the measure
nu bar, it's equal to u(N/2, T_0), okay, which is just the
solution of the heat equation started from the maximal condition.
And because of the time T_0 I've chosen, and since you can
explicitly solve the heat equation, this expectation is small o
of the square root of N.
So what you have is that the expectation is close to the
equilibrium expectation, so this kind of indicates that the two
measures should be close, but you can't turn that into a proof
directly: it's easy to find counterexamples. But you have the
additional information that the density is an increasing
function, and you can use that to conclude. So I will do that
now. What we prove in general is that the total variation
distance between nu bar and nu bar_eq is smaller than some
constant times the expectation of eta bar divided by the square
root of N.
Okay. If you prove that, you're done. So okay, I will start
from this term, the expectation, and then I will show that it's
larger than the other one. So if I look at this quantity, okay,
I can always subtract the equilibrium expectation, which is equal
to zero, and I write it as an integral: the integral of the
product of two increasing functions, one of them being the
density minus one. Okay. So when you have two increasing
functions, what you can use is the positive correlation
inequality. But if you use it right away, you won't get
anything: you will just get that this thing is larger than zero.
>>: [indiscernible].
>> Hubert Lacoin: Oh, yeah. So what we will do first is split
this integral over two events. So as the density is increasing,
what you know is that the total variation distance is equal to
nu bar(A) minus nu bar_eq(A), where A is the event that eta bar
is at least a; so you can write it in this manner, okay? This is
true for some value of small a. So I choose to split the
integral into these two parts, and then I use the correlation
inequality on both parts; I have to be careful because, okay,
these things aren't probabilities anymore. But what you obtain
in the end is that the expectation of eta bar is larger than: the
expectation of eta bar conditioned on A, times nu bar(A) minus
nu bar_eq(A), plus the expectation of eta bar conditioned on the
complement of A, multiplied by nu bar of the complement of A
minus nu bar_eq of the complement of A. And then this is equal
to, okay, your total variation distance times the difference
between the expectation of eta bar conditioned on A and the
expectation of eta bar conditioned on the complement of A. Okay.
So what you need to do to conclude is prove that this difference
is larger than some constant times the square root of N. So you
can discuss according to the value of small a. If small a is
larger than zero, the first term is larger than the equilibrium
expectation of eta bar conditioned on eta bar being larger than
zero, okay, and this is of order the square root of N. If it's
smaller than zero, then you don't know anything about the first
term, but you know the same kind of bound for the second one.
Okay. And this
is enough to conclude. So if you want to do it for the whole
skeleton, basically, instead of taking just one expectation, you
will take the sum of the expectations over all of the skeleton
coordinates, and basically, instead of using this simple
correlation inequality, you use the FKG inequality for your K
coordinates, and it reads the right way, basically. That works.
So this is what I had to say about the simple exclusion process
on the segment; maybe I can tell you a bit more about how you can
make this proof work for our adjacent
transposition shuffle. Okay. So for adjacent transpositions,
the first thing is, when you have a permutation, you want to map
it, just as you mapped particle configurations to height
functions, you want to map the permutation to a discrete surface:
a function from {0, ..., N} squared to R, okay, defined as
follows. The height at coordinate (x, y) is the sum over z from
1 to x of the indicator that sigma(z) is at most y, minus xy over
N; this shift is just to make it centered at equilibrium. And
okay, if you have this height function, I mean, you can easily
define an order on the set of permutations, and again, this order
is preserved by the dynamics, so that you have monotonicity and
you can use the censoring inequality. Then the censoring you use
is different in this case: you will do censoring two times. So
the first thing you will do, so again you have a parameter K,
which is one over delta, is you run the dynamics during a time
delta N squared log N, okay. And what you do is that during this
time, yeah, you censor the K minus one transpositions that
correspond to the x_i, and basically this has the effect of
shuffling the cards within each segment: it makes the cards
numbered, say, from one to N over K indistinguishable. So
basically, while to describe the permutation fully you need N
height functions, with this first step, you reduce to K height
functions, okay? Then you uncensor the dynamics and you run it
until time T_0.
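The two-dimensional height function of a permutation described above can be computed directly (my sketch; the centring term is as stated in the talk, and indices here are 0-based):

```python
def permutation_height(sigma):
    """h(x, y) = #{z <= x : sigma(z) <= y} - (x+1)(y+1)/n, the
    discrete surface attached to a permutation, centred so that its
    mean under a uniform permutation is zero."""
    n = len(sigma)
    h = [[0.0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            count = sum(1 for z in range(x + 1) if sigma[z] <= y)
            h[x][y] = count - (x + 1) * (y + 1) / n
    return h
```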
And after time T zero, basically what you know is that if you
look just at this coordinate, XI is just IN over K. So if you
just look at the skeleton, you can put to it equilibrium just
using what I told you there. Okay? So if you looked at your T
line functions, you can couple them with an equilibrium like
that. Okay. So like you can couple with [indiscernible] so that
all the [indiscernible] coincide. And once you've done that,
basically, you're done because you can say for the remaining
time, for a time delta N square again, what I will do is I will
censor the [indiscernible] corresponding to the X size again and
this will allow me to kind of squeeze the kind of [indiscernible]
on the board. Okay. So okay. Basically [indiscernible]
anything, but this is the idea of the proof. So if I have some
time left now, I can talk about the technique which is used for
the second.
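To make the first censoring stage concrete, here is a toy Python sketch (my
own illustration; the values N = 12, K = 3 and the cut points are invented for
the example). Censoring the K minus one transpositions at the cut points means
no card ever crosses a cut, so each block's set of cards is conserved while
the blocks shuffle internally.

```python
# Toy sketch of the first censoring stage of the proof (values invented).
import random

def censored_shuffle(deck, cuts, steps, rng):
    """Adjacent-transposition shuffle where transposition (x, x+1) is skipped
    (censored) whenever x is a cut point; positions x are 1-based."""
    deck = deck[:]
    for _ in range(steps):
        x = rng.randrange(1, len(deck))   # uniform transposition (x, x+1)
        if x in cuts:
            continue                      # censored update: simply dropped
        deck[x - 1], deck[x] = deck[x], deck[x - 1]
    return deck

rng = random.Random(0)
out = censored_shuffle(list(range(1, 13)), cuts={4, 8}, steps=500, rng=rng)
```

Whatever the randomness does, the three blocks keep their card sets, which is
exactly the "indistinguishable within each segment" effect described above.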
>>: I'd like to hear it.
>> Hubert Lacoin: So for the circle, basically, the first
eigenvalue of the operator is four times as large. The time you
need, for instance, for the [indiscernible] to become comparable
to the equilibrium situation is now one over 8 pi squared times N
squared log N. And the first thing you would like to show is,
you know, by this time (I assume that we have N over two
particles), by this time, what I have is that, starting from any
initial configuration, the expected value of the presence of a
particle somewhere is one half plus big O of N to the minus one
half. What I would like to show first is that, okay, if you look
at a segment and you count the number of particles you have in
it, and you compare that to the expectation, you will be off by
at most square root of N. So let's say that I call S_T(x) (I
will just forget about the initial condition) the sum from z
equal one to x of eta_T(z), minus its expectation. Then for all
s and T larger than zero, the probability that there is an x such
that S_T(x) is larger than square root of N times s is smaller
than two times exponential of minus c s squared, for some
constant c that I can find. So basically, my deviations are
sub-Gaussian.
>>: [indiscernible].
>> Hubert Lacoin: So basically, what you want is to replace this
line by, what? Okay. The corollary is that for T equal to the
value that I want, I can replace this guy by one half, and this
won't affect this. So, the proof of the statement: let's say
that, okay, here I'm going to have to work to prove it uniformly
for all x, and this is important, but let's say that we just want
to prove the statement for the middle point. What we want is the
Laplace transform of, yeah, of S_T(N over 2). And I would like
to show that, okay, for all lambda smaller than one, this guy is
smaller than exponential of N lambda squared times some constant.
If you have a bound of this type, you can prove this for N over
2, and then basically, by dichotomy, you can prove it also for N
over 4 and 3N over 4, and you will get better bounds because your
intervals are smaller, and then if you [indiscernible] you will
be able to sum everything. And okay. To prove this, in fact,
you just have to
look in the literature. So you can see that this is a function
of the particle configuration, okay. And the particle
configuration you can describe by the positions of the N over 2
particles; you don't know which particle is at which place, but
as the function is symmetric, this is not a problem. And there
are, in fact, two papers by Liggett in the '70s which tell you
that if you have a function which is positive definite, in the
sense that if you take any two-dimensional marginal of this
function it's positive definite, then its expectation for the
simple exclusion process is smaller than its expectation for
independent particles. Okay. And if you want to evaluate this
guy for independent particles at a given time, it's just a sum of
independent Bernoulli variables with different parameters, and,
therefore, I mean, you have subtracted the means, so the Laplace
transform is just [indiscernible] and you're done. So now we'll
come back to the screen for the last time.
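The last step can be checked numerically. This is my own sketch, not from the
talk: for B distributed as Bernoulli(p), Hoeffding's lemma gives that the
Laplace transform of B minus p is at most exp(lambda squared over 8), so a sum
of N independent centered Bernoullis, whatever their parameters, has Laplace
transform at most exp(N lambda squared over 8), which is the kind of
sub-Gaussian bound needed once the comparison reduces to independent
particles.

```python
# Numerical check of Hoeffding's lemma for centered Bernoulli variables:
#     E[exp(lam * (B - p))] <= exp(lam**2 / 8)   for B ~ Bernoulli(p).
import math

def bernoulli_mgf(p, lam):
    """Exact Laplace transform E[exp(lam * (B - p))] for B ~ Bernoulli(p)."""
    return p * math.exp(lam * (1 - p)) + (1 - p) * math.exp(-lam * p)

# Check the bound on a grid of p in [0, 1] and lam in [-2, 2] (with a tiny
# epsilon for floating-point ties).
checks = [bernoulli_mgf(p / 10, lam / 4) <= math.exp((lam / 4) ** 2 / 8) + 1e-12
          for p in range(11) for lam in range(-8, 9)]
```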
Okay. And so now you want to say that once you have these
deviations of square root of N around the mean density of
particles, this is enough to mix within a time N squared. So
what you start doing is you draw the height function of your
system as before. So let's say that you start from zero,
[indiscernible] decide to draw your height function. Let's say
that now you start from a deterministic configuration with
deviations of square root of N. So it can be as bad as you want:
it can just go up for square root of N steps and then go down.
And you want to couple it with equilibrium, but, I mean, now your
height functions, they're not attached anymore; they can wander
as high and as low as they want. So what you will do is, okay,
let's say that you take a particle configuration at equilibrium;
from it, you can define the height function, up to a vertical
translation, and what you do is you will take two copies of this
height function, you will place one below the configuration and
one above, and then, okay, you can run a dynamic on these height
functions that conserves the order, and what you will do is just
to make the blue guy and the red guy
coalesce. Okay. And we'll take this up; you remember the
picture. So the screen up, please. So I call xi one of T the
blue guy, which is somebody starting from equilibrium and
[indiscernible], and xi two of T is the red guy, which was on the
bottom. And what I'm interested in is the volume between the
two, which is the sum over all sites of the difference between
these two guys. And, okay, it's random, because the amount by
which I had to shift the height function to make things ordered
is random, but it's of order N to the three halves. Okay. And
what I want is to find a coupling of the dynamics of the height
functions such that this guy fluctuates as much as it can, so
that it squeezes rapidly to zero. Okay. But if you look at this
guy, it's a martingale, so it has no drift. You can only count
on the fluctuations to go down
to zero. So the way you will define your coupling is to say
that, okay, as soon as the corners are distant, they flip
independently, so you maximize the fluctuations, and when they
are together, they flip together, so that you can conserve the
order. So at any fixed time you're at equilibrium; at the
beginning they have no contact with one another. Okay. Maybe I
have to say something before that. So if you look at the area
with this coupling, it only makes plus one and minus one jumps
and it's a martingale, so if you make a time change, you can,
well, it's just a simple random walk. It starts from N to the
three halves, so the time it needs to reach zero is of order N
cubed. So what you would need to show, naively, is that your
time change has a rate of N, so that you really need time N
squared. So initially, this is true.
Like, your two guys have no contact, and typically along the path
you have N available corners that can flip independently. But as
soon as the area starts to be small, this is not true anymore.
So what you have to do to [indiscernible] is to [indiscernible]
in time to show that, well, okay, the time during which you have
too few corners is, well, you have many corners as soon as the
area is large enough, basically, is the idea. And then you do
this, yeah, multi-scale analysis in order to say that, basically,
I mean, you have a bound on the time change which depends on A(t)
and which is good enough to bring you to zero in [indiscernible]
times N squared.
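A toy version of the monotone coupling (my own sketch; the sizes and seed are
invented): two ordered height functions see the same site and the same
up/down coin, so corners flip independently where the interfaces are apart
and move together where they touch, and the order is preserved surely.

```python
# Toy sketch of the monotone corner-flip coupling of two ordered interfaces.
import random

def try_flip(h, x, up):
    """Corner flip for a +-1-step height function pinned at its endpoints:
    a local minimum may flip up, a local maximum may flip down."""
    if up and h[x - 1] > h[x] < h[x + 1]:
        h[x] += 2
    elif not up and h[x - 1] < h[x] > h[x + 1]:
        h[x] -= 2

N = 20
top = [min(x, N - x) for x in range(N + 1)]     # highest interface
bot = [-min(x, N - x) for x in range(N + 1)]    # lowest interface
rng = random.Random(2)
ordered = True
for _ in range(5000):
    x = rng.randrange(1, N)
    up = rng.random() < 0.5
    try_flip(top, x, up)                        # same move offered to both:
    try_flip(bot, x, up)                        # at a contact point, either
    ordered = ordered and all(t >= b for t, b in zip(top, bot))
area = sum(t - b for t, b in zip(top, bot))     # the "volume" between the two
```

With fluctuations alone, this area behaves like a time-changed simple random
walk, which is why controlling the number of available corners, and hence the
rate of the time change, is the heart of the argument sketched above.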
Okay. So I guess I will stop here, and thank you for your
attention.
>> Yuval Peres: Any comments or questions? Thank you.