
>> Eyal Lubetzky: Hi everyone. Welcome, thanks for coming. We’re happy to have Yuval Peres talk
today about Random Walks on Groups and the Kaimanovich-Vershik Conjecture for Lamplighter Groups.
>> Yuval Peres: Thank you. So this work arose from writing the relevant chapter in the book with Russ
Lyons, Probability on Trees and Networks. We were getting to the section on random walks on groups,
and one of the well-known conjectures in that area is due to Kaimanovich and Vershik from eighty-three.
So I want to tell you the history of the conjecture. As we were writing the section we realized we actually
knew how to solve that question, so I also want to tell you the solution, which is a nice application of
entropy. But, let's see, is this working? Okay. The main purpose is that this is an opportunity to share
what I think is really a beautiful area, random walks on groups. As I mentioned, what I'm talking about
today is joint with Russ Lyons. Now, I'm going to be writing on the board, so if people could move
forward you would see much better; there are no slides. So the basic setting is we have a group G which
is finitely generated by some set of generators S. For the purposes of this talk really three types of
groups will be enough: the usual lattices Z^d, free groups, whose Cayley graphs are trees, and the
lamplighter groups. But there's a much more general theory. So we have a group G, and S is a finite set
of generators which is symmetric.
Okay, there is the usual Cayley graph: x is a neighbor of y if and only if x is in yS. So this is the right
Cayley graph. We'll be interested in the simple random walk on this Cayley graph, and it's often
convenient to avoid periodicity issues and talk about the lazy simple random walk, where p(x, y) is one
half if x equals y, one over twice the size of S if x is a neighbor of y, and zero otherwise. So this
transition matrix is just the average of the transition matrix of the simple random walk and the
identity.
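In symbols, the lazy kernel just described is

$$
P_{\mathrm{lazy}}(x,y) \;=\; \begin{cases} \tfrac12 & \text{if } x = y,\\ \dfrac{1}{2|S|} & \text{if } x^{-1}y \in S,\\ 0 & \text{otherwise,} \end{cases}
\qquad\text{that is,}\qquad P_{\mathrm{lazy}} = \tfrac12\,(I + P).
$$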
So the basic questions involve the asymptotics of the random walk. One particular asymptotic is the
asymptotic distance, or the speed. We're going to write ρ for the graph distance, and if e is the identity
then the distance from e to x we'll sometimes just abbreviate |x|. This is just the length of x when you
write it as a word in the generators: the shortest word that represents x. Then the speed of the random
walk is just the limit of the expectation of |X_n| divided by n. The reason this limit exists is the triangle
inequality: the distance from X_{n+m} to the identity is at most the distance from e to X_n plus the
distance from X_n to X_{n+m}. Now you just take expectations and use group invariance: the
expectation of |X_{n+m}| is at most the expectation of |X_n| plus the expectation of |X_m|, because the
expected distance from X_n to X_{n+m} is the same as the expected distance from e to X_m.

So you have subadditivity, which implies that this limit exists. In fact subadditivity actually implies (we
won't need this) that this equals the almost sure limit of |X_n|/n, so you don't need to take the
expectation; that's Kingman's subadditive ergodic theorem. But the elementary definition is with the
expectation, and then you just use the fact about subadditive sequences of numbers: when you divide
by n, the limit exists.
So one basic question is: when is this speed positive? Let's recall the basic examples. Of course, for the
random walk on the usual lattice Z^d the speed is zero. Next, trees that are Cayley graphs: you can take
the 3-regular tree, the Cayley graph of the free product of three two-element groups, or the 4-regular
tree, which is the Cayley graph of the usual free group on two generators. Anyway, on such a tree the
walk is transient, and at every vertex, after we have left the root, with probability two thirds we move
further from the root and with probability one third we move closer. So here the speed is one third, or
in general it will be (d - 2)/d where d is the degree: on a d-regular tree there are d - 1 edges going
further and one edge going back, so you get (d - 2)/d for the speed. That's just elementary from the
usual law of large numbers.
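This is easy to check numerically. Here is a minimal Python sketch (not part of the talk): the distance from the root on a d-regular tree is a birth-death chain that steps away with probability (d - 1)/d once the walk has left the root, so simulating that chain recovers the speed (d - 2)/d.

```python
import random

def tree_speed(d, n_steps, seed=0):
    """Estimate the speed of simple random walk on the d-regular tree.

    The distance from the root is a birth-death chain: from 0 the walk
    must step away, and from k > 0 it steps away with probability
    (d - 1)/d and steps back with probability 1/d.
    """
    rng = random.Random(seed)
    dist = 0
    for _ in range(n_steps):
        if dist == 0 or rng.random() < (d - 1) / d:
            dist += 1
        else:
            dist -= 1
    return dist / n_steps

# The law of large numbers gives speed (d - 2)/d.
print(tree_speed(3, 10**6))  # ~0.333 on the 3-regular tree
print(tree_speed(4, 10**6))  # ~0.5 on the 4-regular tree
```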
So let's come to our third example, the lamplighter groups. On the lamplighter group, rather than draw
the Cayley graph, let me draw one element of the group, one vertex in the Cayley graph, and then just
tell you which are the neighbors of this vertex; that's all we need to understand the random walk. One
can describe the group operation, but it is not really needed. So here is an element of the group. I'm
starting with the one-dimensional lamplighter group, which is called G_1; this is the lamplighter group
over Z. We have a collection of lamps, all of which are off except for finitely many that are on. We also
have a marker, or lamplighter (I'll just draw an arrow): think of it as the current location of the
lamplighter. So this is just one vertex in the group. It's given by the location of this arrow, which is an
integer, and a finite set of integers, which is where the lamps are on.
Okay, now what are the neighbors of this configuration? One neighbor is obtained by flipping the lamp
where the marker is: this zero could be changed to a one, so this lamp could be turned on, or if the
lamplighter were at a place where the lamp is on he could flip it off. So that would be one neighbor.
The other neighbors are obtained by moving the lamplighter, the marker, either right or left. You could
do this over any base graph. So if the base graph is the two-dimensional lattice Z^2, then we can
construct the lamplighter group G_2, where again we have a finite collection of lamps that are on
(maybe I'll draw them in red), all the other lamps are off, and we also have a marker at some lattice
location.
Okay, and again the legal moves are: we can flip the lamp where the marker is, or move the marker to
one of the lattice neighbors. So here in G_2 the degree is five. Similarly one can define the groups G_d.
Now all these groups have exponential growth. When we measure the growth of a group we look at the
ball of radius R and ask how many points are in there: does it grow polynomially, exponentially? Of
course in the lattice Z^d it grows like R^d, and in the tree it grows exponentially. What about here? It's
also exponential.
We can easily see this: within distance R from the identity of the group (I didn't tell you what that is; it's
just when the marker is at the origin and all the lamps are off) you certainly have more than 2^{R/2}
configurations, because you can just go to the right a distance R/2 and turn on any subset of the lamps
along the way, and that will take fewer than R steps. If you do this more carefully you get a
Fibonacci-like recursion. So it turns out that the size of the ball of radius R in G_1 asymptotically grows
like the golden mean to the R, up to constants; I'll just say it here. But that's not important; it's just easy
to see that there is exponential growth. Of course in all these groups there's exponential growth.
So naively we would think that the difference in speed arises from the fact that here the growth is
polynomial and here it's exponential. But the lamplighter groups are, you know, a sobering example. If
you look at G_1, then clearly the speed is zero, because by time n the marker itself is just doing a
delayed simple random walk. So it will be at distance about root n, and all the lamps that are on will be
in the set that the marker has visited. So the distance from the identity will grow like about root n,
certainly less than root n times log n. So the speed will be zero.
In two dimensions it's also true that the speed is zero, and basically it's due to the recurrence of the
simple random walk in the plane, in Z^2. Recurrence implies that the range of the simple random walk
in Z^2, the number of vertices the marker has visited, grows sublinearly; in fact we know exactly that it
grows like n over log n. So the lamps that are on will only be on some connected subset of size n over
log n. At any time that connected subset has a spanning tree, and you can at any time go back to the
identity by walking along this spanning tree and turning off the lamps as you go. So the distance from
the identity grows sublinearly, just barely.
Once we go to three dimensions and higher, the speed is positive, because the random walk on the base
graph, the three-dimensional lattice, is transient. It's moving at zero speed, but it's transient, which
means the range of the walk is linear in time: transience implies that every vertex is visited in
expectation a finite number of times, so by time n you'll visit order n different vertices. At each vertex
you visit, with constant probability you turn the lamp on before you leave. So it's easy to see that the
number of lamps that have been turned on by time n is linear in n, and the speed is positive. So you
see, exponential growth doesn't determine the answer: G_1 has speed zero and G_3 has positive speed.
Both of these groups are actually amenable, which means there are sets whose surface-to-volume ratio
goes to zero, so that doesn't determine things either.
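A small simulation makes the contrast concrete. The sketch below is not from the talk; it represents a group element as a marker plus the set of on-lamps, following the description above, runs the lazy walk on G_d, and reports the number of lamps left on after n steps. That count is a lower bound on the word distance |X_n|, since each on lamp costs at least one flip, and it comes out sublinear for d = 1, 2 and linear for d = 3.

```python
import random

def lamps_on_after(d, n_steps, seed=0):
    """Run the lazy lamplighter walk over Z^d and count lamps left on.

    Each step: with probability 1/2 stay put (lazy); otherwise pick
    uniformly among the 2d + 1 generators: flip the lamp at the marker,
    or move the marker to one of its 2d lattice neighbours.
    """
    rng = random.Random(seed)
    marker = (0,) * d
    lamps = set()  # positions of lamps that are currently on
    moves = [tuple(s * (i == j) for j in range(d))
             for i in range(d) for s in (1, -1)]
    for _ in range(n_steps):
        if rng.random() < 0.5:
            continue  # lazy step: stay put
        k = rng.randrange(2 * d + 1)
        if k == 2 * d:
            lamps ^= {marker}  # toggle the lamp at the marker
        else:
            marker = tuple(a + b for a, b in zip(marker, moves[k]))
    return len(lamps)

n = 10**5
for d in (1, 2, 3):
    print(d, lamps_on_after(d, n) / n)  # small for d = 1, 2; a positive fraction for d = 3
```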
So what does determine things is something else, which in general governs the asymptotics of the walk:
harmonic functions. A harmonic function on the group just satisfies that u(x) is the average of u(y) over
the neighbors y of x. This can be expressed by saying that if we take the transition matrix of the simple
random walk and multiply it by u (so u is a function on the group, but we can think of it as a column
vector and multiply it by the transition matrix) the result should be identically u. It's the same as saying
that if I take P, which I define to be this averaging operator, then Pu equals u. Of course these
equations are equivalent.
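On the board, the definition reads

$$
u(x) \;=\; \frac{1}{|S|} \sum_{s \in S} u(xs) \quad \text{for all } x \in G,
\qquad\text{equivalently}\qquad Pu = u.
$$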
Okay, so these are harmonic functions, and harmonic functions are really the key to understanding the
asymptotics of the walk. It turns out that in the first examples all bounded harmonic functions are
constant, and on the tree they're not; that's part of a general equivalence. To see the importance of
harmonic functions for asymptotics, you want to define the tail sigma-field of the walk: whatever you
can see when you only see the far future of the walk. Formally, we intersect over all n the sigma-field
determined by X_n, X_{n+1}, and so on. So formally you could think of any event where the indicator of
the event can be written as a function of the variables from time n on, and that's true for every n.
An example of a tail event would be: is the speed taking some particular value? But because the speed
is almost surely constant, that's not going to be a very interesting event. On the tree we have natural
tail events. The random walk on the tree is transient, so it will go off to infinity. We can ask: does it go
off in this part of the tree? Right, this edge determines a partition of the tree, and we can ask whether
the random walk goes to infinity in this part of the tree. So let u(x) be the probability, starting from x,
that the random walk ends up in this part of the tree; I'll just say the left part of the tree, it will be more
formal in a minute. Then this is a harmonic function of the starting point, because it must satisfy this
identity just by conditioning on the first step of the walk.
And this function is an interesting non-constant function. If x is already deep in this part of the tree, say
here, then this function is very close to one, because the walk is likely to end up here; it's not too likely
to come back to this edge at all. This was the key edge that separates things. If instead x is here, then
the walk is unlikely to end up on the left side of the tree, because it would have to go and reach this
edge, cross it, and never come back. So it's easy to see that u(x) is not a constant.
This is part of a general connection. For any tail function (a tail function is a function of the path;
formally it's measurable with respect to this tail sigma-field, or equivalently it just doesn't change if you
change the first few steps of the path) you can define u_F(x) to be the expectation, starting from x, of F.
So we start the random walk at x.
Okay, now I claim that this function is always harmonic, and this is easy for the lazy random walk. It's
true for any random walk on a group, and also for any lazy Markov chain, but not for every Markov
chain; you need some condition. If it's a group it's fine, and if it's lazy it's fine. Here I'm going to make
my life simple and just assume that we have a lazy walk on the group. The reason it's true is because…
>> Eyal Lubetzky: The mic, maybe you want the mic…
>> Yuval Peres: I don't know how to. Thanks, thank you. The key is, so let me explain this for a lazy
walk. For a lazy walk we can write, if Y_n is the simple random walk and X_n is the lazy walk, then X_n is
Y evaluated at a Binomial(n, 1/2) time. That's because we can obtain the lazy walk by tossing a fair coin
at each step to decide whether we're moving or not, so the number of actual steps performed by the
simple random walk by time n is Binomial(n, 1/2). Then the fact that Binomial(n, 1/2) is very close to
Binomial(n + 1, 1/2) in total variation (it's just a simple calculation that I distributed in the notes) means
that we can define another copy of the walk, X̃_0, X̃_1, and so on, started at x, and couple the two
walks. This is another copy of the lazy walk, not an independent copy; it can be coupled so that X̃_n
equals X_{n+1} eventually.
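The total variation calculation from the notes is not reproduced here, but it is easy to check numerically; a quick sketch using only the standard library:

```python
from math import comb

def tv_binomials(n):
    """Total variation distance between Binomial(n, 1/2) and Binomial(n+1, 1/2)."""
    # math.comb(n, k) is 0 for k > n, so padding p out to length n + 2 is automatic.
    p = [comb(n, k) / 2 ** n for k in range(n + 2)]
    q = [comb(n + 1, k) / 2 ** (n + 1) for k in range(n + 2)]
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

for n in (10, 100, 1000):
    print(n, tv_binomials(n))  # decays like a constant over sqrt(n)
```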
Okay, so when you run these two walks they start with a shift of one. But the fact that the total
variation distance goes to zero implies that you can do this coupling. Then the way we use it: to check
harmonicity we apply P to u_F at x. By definition of P we're just moving time one step forward, so this is
the expectation, starting from X_0 = x, of F(X_1, X_2, X_3, ...). But because of this coupling (this is called
a shift coupling) it's the same as the expectation of F(X̃_0, X̃_1, ...). This is another sequence which
eventually agrees with X_1, X_2, ..., and F is a tail function, so these two expectations are just equal.
But the latter was our definition of u_F(x).
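Written out, the chain of equalities is

$$
(P u_F)(x) \;=\; \mathbb{E}_x\bigl[F(X_1, X_2, \dots)\bigr]
\;=\; \mathbb{E}_x\bigl[F(\tilde X_0, \tilde X_1, \dots)\bigr]
\;=\; u_F(x),
$$

the middle step because the two sequences eventually coincide and F is a tail function.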
Okay, so there's a correspondence going from tail functions to harmonic functions. This is invertible,
because given any harmonic function (and I'm going to focus now on bounded functions; observe that if
F is bounded then u_F is also bounded), given a bounded harmonic function u, we can define the
corresponding F. I want F(x_0, x_1, x_2, ...) to be just the limit of u along the path; let me write it as a
limsup so that it's defined everywhere: the limsup of u(x_n).
With this definition F is clearly a tail function, and if I take this function F and form u_F, then u_F(x) is
E_x of the limsup of u(X_n). Now the key is that u(X_n) is a martingale: for a harmonic function u,
u(X_n) is a martingale, and it's a bounded martingale. So with probability one this limsup is a limit, and
the expectation stays the same for a martingale, so this expectation is just equal to u(x).
So there is a one-to-one correspondence between tail functions on the sequence space and bounded
harmonic functions on the group. This really means harmonic functions describe the asymptotic
behavior of the walk. The important thing for us is that you immediately get from this that the tail is
trivial (which just means that for any set A in the tail, the probability starting from a point x of A is
either zero or one) if and only if all bounded harmonic functions are constant. That's an easy
consequence of this correspondence: if there were a non-trivial tail event A, we could take F to be the
indicator of A and define the corresponding harmonic function, and that function would be
non-constant.
Okay, so that's the basic correspondence. And now the key theorem, going back to Kaimanovich-Vershik
eighty-three and Varopoulos eighty-five, is that the following three things are equivalent. One: the
speed of the walk is positive. Two: there are bounded non-constant harmonic functions. Three: the
entropy of the walk is positive. I'll say what that is: the entropy, call it h, is the limit of H(X_n) over n.
The entropy of a random variable is just the entropy of its probability distribution; this random variable
takes finitely many values, and I recall the definition of entropy in the handout. Again there is
subadditivity: the entropy of X_{n+m} is at most the entropy of the pair (X_n, X_{n+m}), which equals
the entropy of X_n plus the conditional entropy of X_{n+m} given X_n. In fact here there is equality,
because once you condition on X_n, the distribution of X_{n+m} is just a translate of the original
distribution of X_m. So this is the basic inequality (conditional entropy, again, I recalled there), and you
have subadditivity, so this limit exists. Okay, so these three things are equivalent, which is a very
powerful result. The equivalence of two and three was proved by Kaimanovich and Vershik; Varopoulos
added the equivalence with the speed.
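In symbols, the subadditivity just described is

$$
H(X_{n+m}) \;\le\; H(X_n, X_{n+m}) \;=\; H(X_n) + H(X_{n+m} \mid X_n) \;=\; H(X_n) + H(X_m),
$$

so the limit h = lim H(X_n)/n exists by the same fact about subadditive sequences as before.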
So let me quickly indicate where these come from. I'm going to start with the equivalence of two and
three. Let's write: H(X_k | X_n) plus H(X_n) equals the entropy of the pair (X_k, X_n). We can also write
it the other way: it equals H(X_k) plus the entropy of X_n given X_k, and, as we already said, that
conditional entropy is the same as the entropy of X_{n-k}. So, using this a couple of times: H(X_k | X_n)
equals H(X_k) plus H(X_{n-k}) minus H(X_n). For the case k equal to one I want to emphasize this: it's
H(X_1) plus H(X_{n-1}) minus H(X_n).
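Side by side, the two expansions of the joint entropy give

$$
H(X_k, X_n) = H(X_n) + H(X_k \mid X_n) = H(X_k) + H(X_{n-k}),
\qquad\text{so}\qquad
H(X_k \mid X_n) = H(X_k) + H(X_{n-k}) - H(X_n).
$$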
Now, this is increasing in n: when we have a Markov chain, the entropy of X_1 given X_n is the same as
the entropy of X_1 given the whole future from X_n on, because if I know X_n, the further future is not
relevant for determining X_1. And these form a decreasing sequence of sigma-fields, so as I condition
on less and less information, the entropy of X_1 grows. So this is increasing in n, which means, since
H(X_1) is not changing, that the difference H(X_n) minus H(X_{n-1}) must be decreasing.
Now what will the limit be? The limit must be exactly h, because once we know the differences
converge, H(X_n)/n is just their Cesàro average, so it must converge to the same limit. Okay, so the limit
is exactly h. This tells us that H(X_1 | X_n) goes to H(X_1) minus h. On the other hand, H(X_1 | X_n)
converges to the entropy of X_1 given the tail. Okay, now we're set to prove the equivalence of two and
three. Let's see which direction I want.
Actually, let's start with three implies two. Again we know that H(X_1 | tail) equals H(X_1) minus h. So if
h is positive then conditioning on the tail matters, so the tail is non-trivial, because conditioning on
something trivial wouldn't change anything. Now if h equals zero, then we can use the same identity for
general k: H(X_k | tail) equals H(X_k) minus kh. So if h equals zero, H(X_k | tail) is just H(X_k), which
means the tail is independent of X_1, X_2, up to X_k, for every k. But if the tail is independent of all of
these, then it's independent of what generates it, so it's independent of itself, and the tail must be
trivial. This is the equivalence of two and three.
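The identity used for general k, written out:

$$
H(X_k \mid \mathcal{T}) \;=\; \lim_{n\to\infty} H(X_k \mid X_n)
\;=\; H(X_k) + \lim_{n\to\infty}\bigl(H(X_{n-k}) - H(X_n)\bigr)
\;=\; H(X_k) - k\,h,
$$

since the increments H(X_n) - H(X_{n-1}) decrease to h.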
I'll quickly explain one and three. So let's write the entropy of X_n, where I'm abbreviating the n-step
probability from the identity to x as p_n(x): it's the expectation of log(1/p_n(X_n)). Okay, now I want to
use the Varopoulos-Carne inequality; I gave handouts to some of you with the proof of that. It really just
takes a page, but it's a general inequality which in this setup gives p_n(x) at most 2 e^{-|x|^2 / 2n}. This
is the Varopoulos-Carne inequality, and in fact at this point it's true for any Markov chain whose
stationary measure is uniform; with a small change it works for any reversible Markov chain.
Reversibility is important.
So one can apply it here, and the key to the short proof (this is summing over x) is that we use
Varopoulos-Carne on only one of the two factors, only on this one. Why? Because log(1/p_n(x)) is going
to be at least log(1/2) plus |x|^2 / 2n. So, plugging that in, we get that this is at least the sum over x of
p_n(x) times (log(1/2) + |x|^2 / 2n). Then we just have to see what this means. If we move the log(1/2)
to the other side, we get that log 2 plus H(X_n) is at least what's left, which is just the second moment
of the walk: the expectation of |X_n|^2 normalized by 2n.
Okay, and now divide both sides by n; by Cauchy-Schwarz the second moment E|X_n|^2 / 2n is at least
(E|X_n|)^2 / 2n. You see immediately that if the entropy is sublinear, if that limit is zero, then the speed
is zero. For the converse we have to use a fact about relative entropy. So I guess this was one direction:
entropy equals zero implies speed equals zero. For the other direction, define a measure Q which is
uniformly spread over spheres: Q(x) is 2^{-k-1} over |S_k| for x in the sphere S_k of radius k, where the
sphere is just all the points y at distance k from the identity.
Observe that this is a probability measure, because the total measure of the sphere of radius k is
2^{-k-1}; when we sum these up we get a half plus a quarter plus an eighth and so on, so we get one. So
this is a probability measure. The basic relative entropy inequality (relative entropy is non-negative)
tells us that zero is at most the sum over x of p_n(x) log(p_n(x)/Q(x)). And what can we say about this?
You see, log(1/Q(x)) is certainly at most (|x| + 1) log 2 plus the log of the sphere size, and the size of the
sphere is certainly at most D^{|x|}, where I'm using D for the number of generators.
So plugging that in, we get the sum over x of p_n(x) times (|x| + 1) times log 2, plus the expectation of
|X_n| times log D, and then minus H(X_n), because the p_n log p_n term gives minus the entropy. So
what we have here is essentially the expectation of |X_n| plus one, up to the log 2D factor, minus the
entropy; that's all we need. So this bounds the entropy from above using the expected distance. Now
divide by n, pass to the limit, and you see that if the speed is zero then the entropy is zero. It's the
opposite of the inequality we got there, with slightly different constants.
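Summarizing both directions of the speed-entropy equivalence in one display (with D the number of generators):

$$
\log 2 + H(X_n) \;\ge\; \frac{\mathbb{E}\,|X_n|^2}{2n} \;\ge\; \frac{\bigl(\mathbb{E}\,|X_n|\bigr)^2}{2n}
\qquad\text{and}\qquad
H(X_n) \;\le\; \log 2 + \mathbb{E}\,|X_n|\,\log(2D).
$$

Dividing by n, the first inequality shows that zero entropy forces zero speed, and the second shows that zero speed forces zero entropy.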
Okay, so that's the classical theorem. Then the next question is: we know when there are bounded
harmonic functions that are non-constant and when there are not. But when there are, what are they?
How can they be described? This is equivalent to describing the tail events of the walk.
Now, one easy case is the tree. There the tail of the walk can be described by the infinite ray that the
walk converges to. It's quite classical and elementary to verify that all bounded harmonic functions can
be represented as the expected value of some function on the boundary, where the boundary of the
tree corresponds to the infinite rays; this time we won't go through this. But the Kaimanovich-Vershik
question, or conjecture: in the nineteen eighty-three paper they asked, what are the bounded harmonic
functions on the groups G_d for d greater than or equal to three? As we discussed, for d equal to one or
two the speed is zero, so there are no bounded harmonic functions except the constants. The question
becomes interesting for d at least three.
They identified a class of harmonic functions and asked if that's all of them. These are determined by
the asymptotic configuration of the lamps. If I take any vertex and look at the lamp there, the vertex
will only be visited finitely many times, because the walk is transient. So the lamp will have an eventual
setting: it will be on or off, and this is random. So we can define u(x) to be the probability, starting from
x, that a particular lamp, say the lamp at some vertex w, is eventually on. This is a function of x; here x
is in the lamplighter group and w is a vertex in the lattice Z^d.
So we start at the configuration x: we have some lamps, in particular the lamp at w could be on or off,
and the marker could be somewhere. We ask for the probability that the lamp at w is on, and here I
mean at time infinity, after the walk has traveled off to infinity. Initially we know whether the lamp at w
is on or off in the configuration x, but what happens at infinity of course varies from one x to another. If
in x the marker is very far from w, then it mostly depends on the setting of the lamp at w in x at time
zero; but if the marker is, say, at w, then it's reasonably likely that the lamp will be flipped.
This is a non-constant harmonic function, and more generally you could take any function of the final
states of the lamps. We look at the final lamps, at time infinity: we have some configuration of lamps,
the marker has disappeared, and we can look at this kind of function. These are all harmonic functions,
very easy to check. Are these all of them?
>>: Are these for finite sets?
>> Yuval Peres: No, not just for finite sets. With an infinite set I can't determine lamp by lamp, you
know, is this lamp on, is this lamp off, and so on. But I can ask for an event involving infinitely many
lamps. Although at any finite time only finitely many lamps can be on, in the limit, when the particle has
gone off to infinity, it will leave a trail of infinitely many lamps that are on. So I have a configuration of
bits on the whole lattice, of which infinitely many will be on. I can take any bounded measurable
function of this bit configuration and take its expectation, and all of these yield harmonic functions.
The question is: is that it? Or equivalently, is the tail sigma-field of the walk generated by the final
lamps? This was the Kaimanovich-Vershik eighty-three conjecture. I spent some months on it in the
nineties; it didn't work. Then Anna Erschler (the paper appeared in two thousand eleven, but she did
the work in two thousand five) showed that the answer is yes, these are all the harmonic functions, but
only for d greater than or equal to five.
The new result is that it's true in dimension three; dimensions three and four are really the most
delicate. So let me explain where this comes from. The key is an extension by Kaimanovich of this
entropy criterion. So here is, basically, the Kaimanovich criterion. Suppose F is some sigma-field
contained in the tail and invariant under the group: if an event is in F and you shift it, it's still in F. For
instance, all events measurable with respect to the final configuration of the lamps form such a
sigma-field. We know it's contained in the tail, and we want to determine: is it the whole tail?
So he gives an entropy criterion: F equals the tail (again almost surely, so up to sets of measure zero) if
and only if the conditional entropy of the walk given F, H(X_n | F), is sublinear. Just as we saw that the
tail is trivial if the entropy of X_n is sublinear, what this says is: if you have a candidate sub-sigma-field,
condition on it, look again at the conditional entropy, and see whether it's sublinear or not. If it is, then
the candidate is actually the whole tail. It's a natural extension, and the proof is very similar to what I
showed you, so I'm not going to go through it.
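In display form, for a group-invariant sub-sigma-field $\mathcal{F}$ contained in the tail $\mathcal{T}$:

$$
\mathcal{F} \;=\; \mathcal{T} \ (\text{mod null sets})
\qquad\Longleftrightarrow\qquad
H(X_n \mid \mathcal{F}) \;=\; o(n).
$$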
So this is the criterion, and the problem is how to apply it. How did Anna apply it in dimension five? Let
me also mention that if you take the conditional probability p_n(X_n | F), then minus one over n times
the log of this actually converges almost surely to the limit of the normalized conditional entropy.
That's just a technical point you can ignore. The point is: in order to establish that F equals the tail, how
do you apply this criterion? The idea is to find a set D_n which is not a deterministic set; D_n should be
a function of our candidate F, a set determined by F, such that the probability that X_n is in D_n is large,
say ten to one odds. In fact it's enough that this probability is uniformly positive, and that the size of
D_n is sub-exponential, e^{o(n)}.
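Schematically, the recipe reads

$$
\mathbb{P}\bigl(X_n \in D_n \,\big|\, \mathcal{F}\bigr) \;\ge\; c \;>\; 0
\quad\text{and}\quad
|D_n| \;=\; e^{o(n)}
\qquad\Longrightarrow\qquad
H(X_n \mid \mathcal{F}) \;=\; o(n),
$$

with D_n measurable with respect to $\mathcal{F}$; the almost sure convergence of $-\tfrac1n \log p_n(X_n \mid \mathcal{F})$ mentioned above is what lets a uniformly positive probability suffice.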
Okay, and that guarantees that the normalized conditional entropy is zero. So we want, given our
candidate F, to be able to predict where the walk is, with a sub-exponential number of possibilities. So
how does Anna Erschler apply this? We want to describe the set D_n. First we want to know where the
marker is. For the marker there are on the order of n^d possibilities, because the marker is somewhere
in the lattice at distance at most n from the origin; so, being generous, about n^d possibilities.
So let's just enumerate over these possibilities; n^d is a factor we can easily afford to lose. Suppose,
then, that we know where the marker is. Her idea is: take a ball around the marker of radius n^ε, where
ε is just going to be small; its exact value won't matter. Let's enumerate over all the possible settings of
the lamps in that ball. We want to understand what X_n is: X_n is the configuration at time n, and we
see the final configuration of the lamps, so we have some uncertainty near the marker. Enumerating
over all possible settings of the lamps in this ball contributes a factor of 2 to the n^{εd}, and that's still
something we can afford.
Now what about the other lamps? The intuition is that there are lamps from before and after the ball.
Those from before should all be in their final setting, because we're not going back there, and those
from after should be in their zero setting, because we haven't gotten there yet. Okay, so that's all true if
you can tell what's before and after. And in dimension five and higher we can tell, because once we
have the separation of n^ε, basic properties of random walk tell us (so this is the origin, where we
started) that this time separation is enough for the probability that the future of the random walk
intersects the past to go to zero. That's all we need; these are very standard estimates for random walk.
So that's it. Well, almost. We don't actually see the trail of the walk: all we see are some lamps, some
on, some off. But the key is that these lamps are telling us where the walk is, with an error of only log n.
That is, if you take the lamps that are on and look at the log n neighborhood of them, that
neighborhood will contain the trajectory of the walk up to time n.
And the key is more: I mentioned that the future of the walk and the past of the walk don't intersect,
but there's something better. The same estimates tell you that they don't come within log n of each
other. So it's really easy to separate the lamps of the past from the lamps of the future, just by this log
n separation. So we take the lamps that are on, look at the log n neighborhood of that, and it will have
one component containing the origin, and then other components. So the lamps before the ball will all
be in their final setting, yes?
>>: Isn't the corridor leading to this ball [indiscernible] too costly? Its length is n and you are taking…
>> Yuval Peres: But we’re not enumerating there.
>>: So what are you doing in order to actually understand…
>> Yuval Peres: I'm just, the corridor is just used to explain how we tell apart the past from the future.
Because we don't see a connected path, we just see occasional lamps that are on. There can be gaps
between them, but the gaps are at most logarithmic. So we look at the path from the origin, with these
logarithmic gaps.
>>: Yeah.
>> Yuval Peres: Then all those lamps will be in their final setting, and the lamps on the other side will all
be in their off setting. So we can tell the past from the future. Now, in the last five minutes, let me
explain the new part.
So all this is fine in dimension five and higher. In dimension three and four this argument fails, because
the future will come back and intersect if we have any separation like this. We cannot afford a
separation of order n here; it has to be smaller than that, otherwise there are too many lamps in the
ball to enumerate. So the future will come back and intersect the past, and we get a mess of lamps
from the past and the future, and we can't tell which lamps should be off and which should be in their
final state.
Okay, so again it's too many to enumerate over. The key is to understand the combinatorial structure of
these intersections. Although the future will come back and intersect, and intersect many times, it's not
order n times: if I look at any individual point here in the past, the chance that it's hit by the future of
the walk is not too high; it can be bounded by n^{-ε/2}. Basically this is just from the fact that for simple
random walk in d dimensions, the probability of being back at the origin at time t is about t^{-d/2}, so
the probability of returning to a vertex after more than t steps is about t^{1 - d/2}. If I want exactly
time t I get t^{-d/2}; if I want more than t I get t^{1 - d/2}. Take d equal to three: this is one over the
square root of t. That's the source of this n^{-ε/2}, because we have a time separation of at least n^ε
between the past and the future here. So of these points, only about n^{1 - ε/2} can be intersection
points. Okay, but how do we use that? We're going to enumerate over where these intersection points
are. Remember we have a factor of n^d for where the walker is, so that's not the problem; let's take d
equal to three, so n^d is n cubed.
Dimension four is just the same, even slightly easier than three. So we have n cubed for where the
walker is. We're going to take ε less than one over d, so we can still enumerate over the lamps in this
ball, the factor 2 to the n^{εd}. Now we're going to enumerate over where the future intersected the
past. How many options is that? Well, in the past there are only n points, and the number of
intersections is at most n^{1-ε/2}. Okay, actually there's some log factor here, because I don't know
exactly where the walk is; there are some logs, but they don't matter. The point is that this binomial
coefficient, n choose n^{1-ε/2} (again, you can ignore the logs), is still sub-exponential: you can bound it
by n to this power, and that's still e^{o(n)}. So we can enumerate over where these intersections are.
In fact, around each of these n^{1-ε/2} intersections we're going to draw a little ball of radius log
squared n, so we'll have maybe n^{1-ε/2} times log^6 n points in all, and we're going to enumerate over
all the lamps in these little balls. That will cost us 2 to this power, which we can still afford.
So we've enumerated over where the intersections are and what the lamps near these intersections
are. Now what is left? What is left is still a lot of points, still a linear number of points, where we have
no hope of knowing which belong to the past and which to the future; they're mixed together. But we
can separate them into components. Because once we remove these little balls (we have n^{1-ε/2} of
them, corresponding to the intersections) they separate things. So if we look at the component of the
origin, that's order n points that we want to separate into future and past.
Each time you remove one of these balls, the component can break up into more components, but only
into a logarithmic number of them, because the ball is of logarithmic size. So the total number of
components after I take out all these balls is still about n^{1-ε/2} times a logarithmic factor. And each
such component is pure: either it's all future or it's all past. I don't know which, but whatever it is, it's
not a mixture, because any mixture must come from the future and the past coming together, and
we've already enumerated over that.
So I have this jumble before the ball, a combination of past and future that I don't know. But when I
remove the balls around the possible intersections, I get components, and the number of components
is just n^{1-ε/2} times some logarithmic factor. So I'm going to have another factor like 2 to the
n^{1-ε/2} times some power of log. One such factor came from enumerating over the lamps in the little
balls; this one, and this is really the new feature, comes from enumerating not over the exact lamps,
but just over whether each of these big connected components is past or future. Then each component
gets either all zeros or the final setting of the lamps. So we still have just this number of possibilities,
and all these factors multiplied together are still sub-exponential. So the Kaimanovich criterion holds.
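Tallying the enumeration factors for d = 3 (a schematic count; constants and exact powers of the logarithm are not meant precisely):

$$
\underbrace{n^{3}}_{\text{marker}}
\;\times\;
\underbrace{2^{\,c\,n^{3\varepsilon}}}_{\text{lamps in the ball}}
\;\times\;
\underbrace{\binom{n}{\,n^{1-\varepsilon/2}}}_{\text{intersection points}}
\;\times\;
\underbrace{2^{\,O(n^{1-\varepsilon/2}\log^{6} n)}}_{\text{lamps in little balls}}
\;\times\;
\underbrace{2^{\,O(n^{1-\varepsilon/2}\log n)}}_{\text{past/future labels}}
\;=\; e^{o(n)}
$$

for any small fixed ε > 0, which is exactly the sub-exponential bound the Kaimanovich criterion needs.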
>>: Right, so we can say that you enumerate over the order in which you intersected this line, right?
This will dictate everything, just this factor of which you can take…
>> Yuval Peres: The ordering but you have to use the fact…
>>: This will tell which of these, this will tell you the partition: which are past and which are on the
other side, the past and the history.
>> Yuval Peres: You have to use the fact that the number of intersections is small.
>>: Yeah, yeah of course. You stop…
>> Yuval Peres: Once you do that it’s just a…
>>: I just want a recap of what you said at the end about the components, after you said…
>> Yuval Peres: Okay.
>>: [inaudible] if there’s a classical point, right?
>> Yuval Peres: Yes.
>>: And then am I going to have to multiply the on and off functions?
>> Yuval Peres: Because a past component is just going to get the final setting of the lamps. Being a
past component means we're not revisiting it.
>>: Okay.
>> Yuval Peres: So the lamps at time N are going to be the same as the final lamps.
>>: [inaudible] okay.
>>: I do want to ask more questions. So for trees, [indiscernible] these objects?
>> Yuval Peres: Yeah for trees it’s…
>>: [indiscernible] functions are.
>> Yuval Peres: Yeah, yeah, like I said, it's just in terms of the rays: infinite self-avoiding paths to the
boundary. The random walk will go to one of these infinite paths, and that characterizes the tail. The
harmonic functions are just given by functions on this kind of boundary; that goes back to nineteen
sixty.
>>: Like [indiscernible] Cayley graph like what are these functions [indiscernible]?
>> Yuval Peres: So there are conjectures for various classes of matrix groups. There are many open
problems: if you go beyond bounded harmonic functions to positive harmonic functions, that's a much
harder topic. We don't know what they are on these lamplighter groups.
>>: So you can do everything, just take the lamplighter over any graph which is transitive?
>> Yuval Peres: Yes.
>>: So since there is no abstract [indiscernible] any transitive graph…
>> Yuval Peres: There’s no abstract but there is…
>>: You believe that it will be true.
>> Yuval Peres: No, it follows now. So Anna's argument works for all transitive graphs that have growth
at least five; it's not just Z^d for d greater than or equal to five, as I explained it. We know enough
about groups, and about Green functions on groups, to see that her argument works the moment the
growth is at least five. So the only open cases were growth three and four, and there are very few
options: basically lattices, the Heisenberg group, and finite extensions of them. In all of these, the
random walk estimates we are using hold. So basically these were the only cases needed to understand,
in the case of any group or transitive graph.
>> Eyal Lubetzky: Okay let’s thank Yuval.
[applause]