
>> Yuval Peres: Welcome everyone. We're very happy to have Jonathan
Hermon from Berkeley who will tell us about social connectivity.
>> Jonathan Hermon: Okay. So thank you for having me. And thank you
for coming. So the model I'm going to talk about, we call it the
social network model. So the first part of the talk, the main part,
will be about my master's thesis. This is joint work with Itai
Benjamini, my advisor, and Gady Kozma from the Weizmann Institute.
Later I hope to get to talk about work still in progress with Ben
Morris, Allan Sly and one of Ben Morris's students, Chuan Qin [phonetic].
So we start several independent random walks on some given graph. For
instance, we may assume we start with one walker on each vertex at time
zero. I have a very nice picture for that, and it is totally worth it.
Okay. And when two walkers reach the same vertex at the same time,
they are declared to be acquaintances. They don't stick together, they
don't coalesce; they continue independently, and we just keep track of
the fact that they met each other.
So this induces an equivalence relation: having a path of
acquaintances. We say that two walkers, call them A and B, have a path
of acquaintances between them if A met somebody who met somebody who
met somebody who met B.
So we study the evolution of the equivalence classes of this relation.
In particular we care about the first time at which there is only one
equivalence class. Okay. And as I mentioned, the walkers don't
coalesce, but we can view this as a coalescence process of the
equivalence classes: as time increases, there are fewer and fewer
equivalence classes.
So it is perhaps plausible to argue that this model captures some
aspects of the evolution of real social networks, at least of the sizes
of the equivalence classes. Not so much of the internal structure,
because there's no reason to see the degree sequence you would expect.
But anyway, we don't make any such claim, and we didn't try to prove
such a claim.
Okay. So a bit more precisely. The underlying graph in which the
walkers walk, we call it G, and we always assume it's connected, so I'm
not going to mention it every time. And we have no problem dealing
with loops and multiple edges, so we allow them.
And in this part of the talk we are only going to consider the case
that G is finite. We denote the size of the graph by N. We're
actually interested in asymptotic behavior, so implicitly we have some
sequence of graphs with sizes tending to infinity, and all the results
are stated in terms of "with high probability".
And, I didn't say that yet: this presentation is quite old, and I
didn't improve it or update it, so I'm sorry for that. So I'm going to
discuss only two starting conditions; the third one is less
interesting. Either we start with exactly one walker in each vertex,
or we start at each vertex V with a Poisson number of walkers,
independently over vertices, with mean equal to N times the value of
the stationary distribution at V, where the stationary distribution is
with respect to simple random walk. We call this the Poisson starting
condition. It is normalized in a way that the expected total number of
walkers is N, and for a regular graph it is simply Poisson(1) at every
vertex.
And okay. So the walkers perform independent lazy simple random
walks, because we want to avoid parity problems, say in a bipartite
graph, where some walkers might never be able to meet each other. By
lazy simple random walk I mean the usual thing: either you stay in your
current position with probability 1/2 or you move to one of your
neighbors with equal probability. Right. Unless there are loops or
multiple edges, and then there are the obvious adaptations.
So the acquaintances graph at time T, we denote it by G_T, but it
should not be confused with G, the underlying graph in which the
walkers walk. It's a graph in which each vertex corresponds to one of
the walkers, in a bijective way, and we draw an edge between two
walkers if they have met up to and including time T. This enables us
to use the language of random graphs instead of equivalence classes.
So obviously the connected components of the acquaintances graph at
time T are exactly the equivalence classes of the
having-a-path-of-acquaintances relation. As I said, we care about the
time at which there is only one class. We call it the social
connectivity time. We can now describe it simply as the minimal T for
which G_T is connected.
Okay. So here is one example to illustrate what I said so far, but
also to illustrate another important point. Let's say we give the
walkers numbers as names; so these are the names of the walkers. And
let's say these are all the acquaintances that were made up to time
three. Then we say that walker one and walker four have a path of
acquaintances between them. We allow such a path, but notice that the
edges were not formed in a manner monotone with respect to time, right?
But we allow that.
Okay. But monotonicity in time is required for other previously
studied models, if you want to model the spread of information or an
epidemic, because obviously one cannot infect another person with a
deadly disease today if he only gets exposed to that disease the next
day.
Okay. So, comparing our work to previously studied models: apart from
the monotonicity in time, we don't restrict ourselves to nice graphs,
like Z^d or trees or expanders. But the more important difference is
the monotonicity in time. For instance, consider the cycle of size N.
If we start with one infected individual, the time we need until all
individuals get infected is deterministically at least of order N; in
fact, it is of order theta of N squared over log N. Okay. This is not
trivial, but this is the order. And now we can ask ourselves what's
going on in the social network model on the cycle. It turns out that
the answer is much, much faster than this, and you can start thinking
about it while I'm explaining the solution.
So for simplicity, I will work with the starting condition where in
each vertex we have one walker at time zero, and in continuous time,
because it avoids some technicalities; but the answer remains the same
for any other combination of walk type and starting conditions.
Okay. So let V and U be two neighbors, and we look at the C log N
nearest neighbors from each side; C will be determined shortly. Let's
call the edge between them E.
>>: N cycle?
>> Jonathan Hermon: Yeah.
>>: What do you mean by nearest neighbor?
>> Jonathan Hermon: Okay. I'm sorry. I meant that -- okay. So this
is the neighbor of U, then we have U_1, U_2, and here we have
U_{C log N}.
>>: Log N.
>> Jonathan Hermon: Yeah. So okay, we think about these two groups as
sets, as armies that are trying to force these two guys to be in the
same cluster by doing a tongs [phonetic] motion.
And because we do continuous-time simple random walk, we can't have the
case that two walkers are here and here at some time and then at
exactly the same time they switch positions by hopping over each other.
Yeah. But in the lazy simple random walk case you have to deal with
that, and you get the same answer, maybe with different constants.
So it's enough that at least one walker from this army will land on
the other side, will cross here and stay there, say, and the same thing
for one walker from the other army.
And if you run for D log squared N steps, where D depends on C,
obviously, then you can make sure that the probability that this
doesn't happen is at most, say, N to the power minus 2. And then you
can just do a union bound over all, at most N, edges. So with high
probability we get that the social connectivity time is at most of
order log squared N. And it turns out that this is tight: if we do
epsilon log squared N steps for a small constant epsilon, then with
high probability there will be a lot of edges that no walker has
crossed, and once you have two such edges you know that social
connectivity has not been achieved yet.
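[A back-of-the-envelope version of the union bound just described; the event and constants are schematic rather than the exact ones from the talk.]

$$\Pr\big[\text{the armies around a fixed edge } e \text{ fail to merge its two sides within } D\log^2 N \text{ steps}\big] \;\le\; N^{-2}$$

for $D = D(C)$ large enough, and a union bound over the $N$ edges of the cycle gives failure probability at most $N \cdot N^{-2} = 1/N$. So with high probability every edge has been bridged by time $D \log^2 N$, and the social connectivity time is $O(\log^2 N)$.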
Okay. So the main results are polylogarithmic upper and lower bounds
on the social connectivity time, assuming that the family of graphs we
work with is of bounded degree, which might be a bit surprising. But
as you saw, even for the cycle, intuitively you don't expect it to be
log squared, but that's the answer. Before discussing it in more
detail, and I'm going to present only the proof of the upper bound,
which is more interesting, I want to mention some other secondary
results.
>>: Was that the time or the expected time?
>> Jonathan Hermon: The time. So this is the result in terms of --
>>: I see.
>>: [inaudible].
>> Jonathan Hermon: Yeah. We believe the cycle is the worst example,
at least up to constants. And we have good reason for that. I will
get to that. So some other results.
>>: With high probability or for the expectation?
>>: No, I did say with high probability.
>>: That's what I mean.
[laughter].
We're quite sure about the expectation --
>> Jonathan Hermon: Okay. So the first thing we considered, just
because it's easiest, was the complete graph. It turns out that the
answer is log N, and it's very concentrated around it. So with high
probability it's log N times 1 plus little o of 1. And if you run for
log N plus C steps, then -- okay, so this converges with a double
exponential rate in C. After three steps we already have a giant
component with high probability, but two steps are not enough.
Okay. So there's some analogy between these results and known results
about G(N, p). The threshold for connectivity for G(N, p) is log N
over N, and also there, if you add a constant C, then you have this
double exponential convergence in C. And in both cases the bottleneck
for achieving connectivity is the isolated vertices, respectively the
isolated walkers. When you think about it, at least this result makes
a lot of sense, because after log N steps the marginals in both cases
are similar, and in the acquaintances graph after log N steps the edges
are in a sense almost independent. So heuristically it's not very
surprising.
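[A rough isolated-walker heuristic behind the log N threshold on the complete graph, reconstructed from the analogy just described; it is only a heuristic, not the proof.]

On the complete graph with one walker per vertex, two given walkers sit on the same vertex at a given step with probability $1/N$, so a given walker has met nobody by time $t$ with probability roughly

$$\Big(1 - \frac{1}{N}\Big)^{(N-1)t} \approx e^{-t}.$$

The expected number of isolated walkers at time $t$ is then about $N e^{-t}$, which drops below $1$ around $t = \log N$, mirroring the isolated-vertex bottleneck at $p = \log N / N$ in $G(N, p)$.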
Okay. So one question, which I will return to: it makes sense to
guess that perhaps the complete graph has the lowest social
connectivity time, up to negligible additive terms. This turns out to
be true in the case that the graph is regular, not necessarily of
bounded degree. But in general this turns out to be false. You can
make it c log N with a constant c that you can push arbitrarily close
to 0, and there's even one pathological example that we found in which
it's asymptotically smaller than log N.
>>: Constant you can push up to 0 then it has to be possible to have
like, [inaudible] and get something [inaudible].
>> Jonathan Hermon: So this is actually some log to a power smaller
than one. Okay. So in the spirit of the analogy to G(N, p), G_3
behaves very similarly to G(N, 2/N), and I don't have enough time to
explain why. And G_2 is very similar to G(N, 1/N). To turn this
heuristic into a real proof, it was very useful for us to use the paper
of [inaudible] and Yuval Peres about the martingale approach for the
critical random graph.
>>: I'm sorry, what is G_3?
>> Jonathan Hermon: The acquaintances graph after three steps. And
for the complete graph we started with one walker in each vertex, and
that's the only example where, instead of lazy simple random walk, we
define the walk so that you just pick the next position from the
uniform distribution. It's cleaner in that case.
So perhaps, without giving more details, I can explain more about this
analogy if somebody's really interested. Otherwise I'll just continue.
I think I'll just continue. Okay. So now it's not very surprising
that for expanders you get that the answer is theta of log N. This is
an old presentation, so something else is written here. The idea is
that, just as after three steps on the complete graph you get a giant
component, after order of log N steps you get a giant component for
expanders. This can actually be improved, and you can get it after a
constant number of steps; this is due to the work that is now in
progress with Morris, Sly and Qin. Once you get the giant component,
you can run the process for another constant times log N steps.
And you get a picture which looks like this. So you already have some
giant component, and you think of its walkers as predators, trying to
catch the rest of the walkers. So any walker which is not in the giant
component we think of as prey. So now let's condition on the path that
a walker not from the giant component performs. The events that the
walkers from the giant component meet this guy are conditionally
independent, once we condition on the path that the prey performs. A
second moment calculation using spectral analysis shows that after
C log N steps, the marginal probabilities that they catch him are some
C-tilde log N over N. So the probability that none of them catches him
is 1 minus that, to the power of the number of walkers, and by choosing
C to be arbitrarily large, you can make this become 1 over N squared.
Okay. Now you can do a union bound.
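[The probability estimate just described, written out; the constants are schematic.]

If, after conditioning on the prey's path, each of the $\Theta(N)$ walkers in the giant component catches the prey independently with probability at least $\tilde{C} \log N / N$, then

$$\Pr[\text{nobody catches the prey}] \;\le\; \Big(1 - \frac{\tilde{C} \log N}{N}\Big)^{\Theta(N)} \;\le\; e^{-c \tilde{C} \log N} \;\le\; N^{-2}$$

for $C$ (hence $\tilde{C}$) large enough, and a union bound over the at most $N$ walkers outside the giant component finishes this step.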
And another result, which in some cases improves on the general bound
that we have for bounded degree graphs, is in the case that the graph
is regular; then we can show that this bound holds, right, I'm talking
about the second theorem, and the idea is to do a coupling between the
situation on the graph and the situation on the complete graph.
So basically we perform log N samples, but we don't look at consecutive
steps; we wait between samples, and the waiting time is always six
times the mixing time times log N to the base 2. Okay. And by waiting
that long, we make sure that each sample looks, in total variation
distance, very nearly like one step on the complete graph.
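[The arithmetic behind this coupling bound, as I read it from the description above; it is a reconstruction, not a verbatim statement of the theorem.]

Taking about $\log N$ samples spaced $6\, t_{\mathrm{mix}} \log_2 N$ steps apart uses

$$O\big(\log N \cdot t_{\mathrm{mix}} \log N\big) \;=\; O\big(t_{\mathrm{mix}} \log^2 N\big)$$

steps in total, and in total variation these samples look like $\log N$ steps of the process on the complete graph, where social connectivity already holds with high probability. This beats the general $O(\log^7 N)$ bound exactly when $t_{\mathrm{mix}} = o(\log^5 N)$, as noted below.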
>>: What did the S --
>> Jonathan Hermon: Yeah, so this is for the different starting
positions. So this is for starting with one walker in each vertex, and
this is for the starting condition I didn't tell you about, but
actually both of them -- so this is --
>>: Let's focus on the one.
>> Jonathan Hermon: But it's also true for the Poisson starting
condition, actually. But you have to work harder.
>>: But surely you don't need to go there.
>> Jonathan Hermon: This is a very naive argument.
>>: But you don't have any example where you need more than order of
the mixing time times a single log.
>> Jonathan Hermon: No, I don't have such an example.
>> Jonathan Hermon: So, right, so now we get log N samples that
together, in total variation distance, just look like log N steps on
the complete graph. And now we just use the result that on the
complete graph, after this many steps, social connectivity has already
been achieved with high probability.
Okay. So in some cases, right, if the mixing time is asymptotically
smaller than log to the 5, then this is an improvement.
Okay. So as I said, the presentation is not updated. In the lower
bound, the lowercase c does not depend on D. I already told you that
if the graph is regular, then we can just take 1 minus little o of 1,
because the complete graph is the best, up to negligible terms. And in
general we can relax the bounded degree condition very far in the lower
bound: as long as the minimal degree, sorry, the minimal value of the
stationary distribution, is at least some constant times N to the alpha
minus 2 for some alpha between 0 and 1, we have such a lower bound, for
some lowercase c that depends on alpha. And for the upper bound, the
uppercase C depends on D, if I'm not mistaken; I'm sorry for not being
sure. I think it's of order D to the fourth, and for regular graphs
the proof right now gives D squared. And you cannot remove the
dependency on D; for sure it should depend on D, at least to order D, I
have an example that shows that. So you can't improve the dependency
much more. And actually the proof as it is written works for any D,
right, when you change this to be some universal constant times D to
the fourth. So whenever D is polylogarithmic, the upper bound is
polylogarithmic.
Okay. So I'm only going to talk about the proof of the upper bound.
So the Y here is redundant, I'm sorry for that. We define the
probabilistic ball around X, with probability parameter P and time
parameter T, to be the set of all vertices such that the lazy simple
random walk starting from X has probability at least P to hit them up
to time T. These are, in a sense, the vertices that we are likely to
hit in the near future.
The fact that is important is that whenever the maximal degree is D and
we look at some set of vertices A, which is not all of the graph, then
the expected exit time from A is at most eight times D times the size
of the set squared. And in the regular case you don't even need the
D.
So the corollary is that the probabilistic ball around X with
probability parameter 1 over M and time parameter 4 M squared can't be
too small: it has to be of size at least M over 2 D. And to see that
-- okay, start the lazy simple random walk from X and run it for twice
the exit time, okay, twice the expected exit time. So we run it for
4 M squared steps, which is at least twice the expectation, and we exit
with probability at least 1/2. But the outer boundary, because of the
maximal degree condition, is at most of size D times the size of the
probabilistic ball, right? By outer boundary, I mean the outer vertex
boundary. So it turns out, by averaging, that there needs to be some
vertex on the outer boundary that we hit with probability at least
1 over M. The 1/2 is because we exit with probability 1/2, and the
size of the outer boundary is at most M over 2 when the ball is smaller
than M over 2 D; we have this D, right, from here, and it cancels.
>>: Is that an example, or is it the statement that you're proving on
this slide?
>> Jonathan Hermon: Yeah. Okay. So what I'm saying is that we exit
from the probabilistic ball with probability 1/2 if we run the walk for
4 M squared steps. And then, by averaging, we must have some vertex on
the outer boundary that we hit with probability --
>>: I think -- can you write down what statement you have?
>> Jonathan Hermon: That's the statement.
>>: Could you write it on the board? Because when you switch, it
would --
>> Jonathan Hermon: Okay. I got you. Sorry. Okay. So we start by
assuming, by contradiction, that this is not the case. And if this is
not the case, okay, so we assume that, then the expected exit time from
the probabilistic ball, to its complement, is at most 2 M squared,
okay, using the previous fact. And now we run the walk for 4 M squared
steps, so we exit with probability at least 1/2, and then there must be
some vertex at the outer boundary which we hit with probability at
least one half, this is for exiting, divided by the size of the outer
boundary, which we upper bound by D times the size of the set, and this
turns out to be at least 1 over M, by the assumption that the
probabilistic ball is small. So it turns out that this vertex of the
outer boundary should have been inside the probabilistic ball. That's
a contradiction.
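[A cleaned-up version of the statement and the contradiction argument from the board, writing $PB_x(p, t)$ for the probabilistic ball around $x$ with probability parameter $p$ and time parameter $t$; the constants are the ones quoted above.]

Fact: if the maximal degree is $D$ and $A \subsetneq V$, then $\mathbb{E}_x[T_{A^c}] \le 8 D |A|^2$ for every $x$.

Corollary: $|PB_x(1/M,\, 4M^2)| \ge M / (2D)$.

Sketch: suppose $|A| < M/(2D)$, where $A = PB_x(1/M, 4M^2)$. By the fact, $\mathbb{E}_x[T_{A^c}] \le 8 D |A|^2 \le 2M^2$, so by Markov's inequality the walk exits $A$ within $4M^2$ steps with probability at least $1/2$. The exit goes through the outer vertex boundary $\partial A$, and $|\partial A| \le D|A| < M/2$. By averaging, some $v \in \partial A$ satisfies

$$\Pr_x\big[\text{hit } v \text{ by time } 4M^2\big] \;\ge\; \frac{1/2}{|\partial A|} \;>\; \frac{1}{M},$$

so $v \in PB_x(1/M, 4M^2) = A$, contradicting $v \in \partial A$.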
Okay. So just ignore this. It's a complete mess. I will rewrite
everything myself.
>>: What's the [inaudible] [laughter].
>> Jonathan Hermon: Okay. So H stands for hitting: H_{u,v} is the
minimal t such that X^u_t equals v. M_{u,v} is the minimal t such that
X^u_t is equal to Y^v_t, where X and Y are independent lazy simple
random walks, one starting from u and one starting from v.
And M stands for meeting, and we want to relate the distributions of
these two quantities. Now denote by N_w(t) the expected number of
returns to vertex w up to time t; it's simply the sum of P^i(w, w) over
1 <= i <= t. Okay. So now we can say that the probability that they
meet before time t over 2 is at least the probability to hit v up to
time t, divided by something. And that something should be, not
exactly what is written there, but close enough: roughly the supremum
over w of N_w(t), plus 2. Okay. So these are the results.
But to understand these quantities: this quantity is always at most
some universal constant times D, I guess, times the square root of t,
okay, when the maximal degree is D. And this can be improved in many
cases; you can prove that it's actually of constant order whenever,
say, you have a uniform upper bound on the effective resistance between
pairs of vertices. Okay. But for our purposes, this was enough to get
the upper bound of the theorem. Okay. So all of this, once we take
the inverse, is in the worst case of order 1 over D square root t.
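[The shape of the relation being used, in symbols; the speaker notes that the exact expression on the slide needs a small correction, so the display below should be read as the form of the bound rather than its precise statement.]

With $N_w(t) = \sum_{i=1}^{t} P^i(w, w)$,

$$\Pr\big[M_{u,v} \le t/2\big] \;\gtrsim\; \frac{\Pr\big[H_{u,v} \le t\big]}{\sup_w N_w(t) + 2},$$

and for maximal degree $D$ one has $\sup_w N_w(t) \le c\, D \sqrt{t}$ for a universal constant $c$, so the loss is at worst of order $D\sqrt{t}$.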
Okay. So for the proof, I'm just going to give a sketch. The idea is
very simple. Whenever we look at some joint path such that they meet
after K steps, we can always reverse this path, right? And that's
where we get the D term. And then we get some path from U to V. Okay.
So instead of running over all paths here, we run over all paths that
don't form a cycle here, but that only works in our advantage for
proving such an inequality.
Okay. But why do we get this term? Because when we reverse the path,
we don't necessarily get all the paths from U to V that we need to
consider here, because we can add a lot of cycles here, right? So
denote the cycle by C: as long as 2K, sorry, 2K plus the length of the
cycle, is smaller than or equal to T, then this is another path we
should consider. But when we sum over all cycles, the contribution
turns out to be exactly this, right, if this is W.
Perhaps there's one delicate point that I didn't mention. So now I
can explain the idea of the proof. Using the probabilistic ball, we
fix the parameters. Can I erase?
We fix the parameters: we always look at PB_x(1 over M, 4 M squared),
and we fix M to be something of order, say, log N. So we know that the
size is also at least of order log N; that's by the first fact, right,
the first corollary. And then we know that if we have a picture like
this, two walkers inside of some probabilistic ball around X, then the
probability to start here and hit V is at least, by reversibility, of
order 1 over M squared times 1 over D, just by going through X. Okay,
so now we do it for 8 M squared steps: the first 4 M squared steps are
to get to X, and then another 4 M squared steps to get from X to V.
Okay, this is very wasteful, since we are only considering paths
through X, but okay, this is just to start at U and hit V. But now we
want to say, let's say we have two independent lazy simple random
walks; what's the probability for them to meet each other within, now,
4 M squared steps? By the previous lemma, it turns out that this is of
order, instead of 1 over M squared, 1 over M cubed. Okay. We have
this squared here, but we use it with the square root in the previous
lemma. Okay.
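[The arithmetic behind the 1 over M cubed, spelled out under the bounded degree assumption; constants are suppressed.]

If $u$ and $v$ both lie in $PB_x(1/M, 4M^2)$, then forcing the walk from $u$ to pass through $x$ gives

$$\Pr_u\big[\text{hit } v \text{ within } 8M^2 \text{ steps}\big] \;\gtrsim\; \frac{1}{D} \cdot \frac{1}{M} \cdot \frac{1}{M} \;=\; \frac{1}{D M^2},$$

the $1/D$ coming from the reversibility step, and plugging this into the hitting-to-meeting lemma with $t \asymp M^2$, so $\sqrt{t} \asymp M$, gives

$$\Pr\big[\text{the two walkers meet within } O(M^2) \text{ steps}\big] \;\gtrsim\; \frac{1}{D M^2} \cdot \frac{1}{c D M} \;\asymp\; \frac{1}{M^3}$$

for bounded degree. With $M \asymp \log N$, this is the "likely", of order $1/\log^3 N$, used below.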
So okay, so we fix this M. Okay. So from now on, when I say "near
future", the near future is log squared N steps, and "likely" is with
probability 1 over log cubed N. Okay. And so we know that the
probabilistic balls are never empty, because they're of size at least
of order log N. Let's do, for instance, the Poisson starting
condition; then the number of walkers in a ball is distributed like
Poisson of theta of log N. This is because the Poisson starting
conditions are stationary over time. So by the concentration of a
Poisson random variable around its mean, we know that with high
probability all the probabilistic balls are occupied at all times.
So we can do a union bound over at most N probabilistic balls and,
say, log to the power 10 of N times, and say that all the probabilistic
balls are always occupied. From this we can learn, deterministically,
that the picture must look like this, but I'm not going to show this;
it would take me a few minutes. Namely, for any class A, up to the
social connectivity time there must be some probabilistic ball such
that we have a walker from A, so U is in A, and some walker not from A
inside the same probabilistic ball. Okay. So this turns out to hold
deterministically once the probabilistic balls are occupied at all
times. Now, to conclude the proof: let's assume, for the sake of
simplicity, that we may condition on the probabilistic balls being
always occupied without disturbing the distribution of the walks. This
is just a technicality, because anyway what we condition on happens
with high probability.
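[A sketch of the occupancy estimate under the Poisson starting condition; the exponent 10 is the one quoted above, and the rest is standard Poisson concentration.]

At any fixed time, the number of walkers in a probabilistic ball $B$ with $|B| \gtrsim \log N$ is Poisson with mean $\Theta(\log N)$, by stationarity, so

$$\Pr[B \text{ is empty}] \;=\; e^{-\Theta(\log N)} \;=\; N^{-\Theta(1)},$$

and with the constants chosen appropriately, a union bound over at most $N$ balls and $\mathrm{polylog}(N)$ (say $\log^{10} N$) relevant times shows that with high probability every ball is occupied at every relevant time.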
So we think of every 4 M squared steps as a small trial to make our
social class unite with some other class. Okay. And we always have a
probability of success in a small trial which is at least P, where P is
of order 1 over log cubed N. Okay. But now we can look at, say,
2 log N over P consecutive trials.
Since we always have success probability at least P, the probability
that all of the trials are failures is N to the power minus 2, okay, by
setting C to be 2. And at every time we have at most N classes. So by
a union bound over the number of classes, it turns out that if we look
at this many consecutive small trials as one big trial, then in any big
trial we can say that with high probability all the classes unite with
at least one other class. So the probability of failure in one big
trial is at most 1 over N.
But once we have this, we are done, because all we need is log N
consecutive successful big trials. Deterministically, if we have K
consecutive successful big trials, then all classes are of size at
least 2 to the K. All right. Okay. So we just need to do a union
bound over log N big trials, and each time we fail with probability
1 over N, so we can do this union bound. Okay. So this gives you a
bound which is: we have log N iterations, each iteration uses order
log N over P small trials, where P was log to the minus 3 of N, and
each small trial uses order of log squared N steps. What you get is of
order log to the power 7. I'm sorry for being too technical and petty
about the details.
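[The bookkeeping for the log to the 7 bound, just multiplying out the quantities mentioned.]

$$\underbrace{\log N}_{\text{big trials}} \;\times\; \underbrace{\frac{2 \log N}{P}}_{\text{small trials per big trial}} \;\times\; \underbrace{O(\log^2 N)}_{\text{steps per small trial}} \;=\; O(\log^7 N),$$

using $P \asymp \log^{-3} N$ and $4M^2 \asymp \log^2 N$.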
Okay. That's the proof of the upper bound. Let me see, how much time
do I have? I have at least ten minutes. Okay. So now I can talk a
bit -- so, first of all, we only proved this theorem for bounded degree
graphs, and there's a reason for that. You can consider the case where
you have two complete graphs connected by a single edge. Obviously you
need order of N steps for this edge to be crossed, so the social
connectivity time is of order N; you cannot improve the theorem. Okay.
So how do I -- I don't need the presentation anymore. How do I --
>>: Just ask for it.
>> Jonathan Hermon: I have a few conjectures here to show. So the
first one is that, for bounded degree graphs, the social connectivity
time for any starting condition is of order at most log squared N, so
the cycle is the worst case. And what we do know how to show is that
in the bounded degree case, after log squared N steps, any walker has
met order of log N other walkers, with high probability. Okay. And we
have some ideas of how to use that to show that social connectivity
occurs, but we still haven't done so. So we're fairly certain that
this is true.
>>: [inaudible].
>> Jonathan Hermon: Yeah. But I don't have time to discuss it. So
another conjecture -- okay. So we can start with K random walkers,
each one of them started independently from the stationary
distribution. We conjecture that in the case that the underlying graph
is vertex transitive, the expectation will be monotonically decreasing
in the number of walkers. And similarly, you can start with Poisson
lambda walkers, and we also believe this would be monotonically
decreasing in lambda. And it is important that the graph is vertex
transitive; otherwise this is not true.
Okay. So can we lift the curtain, the screen? Now I'm going to
discuss some of the results of the work with Ben Morris, Allan Sly and
Chuan Qin [phonetic].
>>: Can I ask about the monotonicity? I guess if I start with one at
every guy and compare it with starting with two at every guy, is it
just --
>> Jonathan Hermon: No, but this is trivial, if you include the
acquaintances at time zero, which by our convention we do.
>>: Okay.
>> Jonathan Hermon: So when you do it with, say, K walkers, and you
change it between K and K plus 1, there's a trade-off between how much
this extra guy is going to help you, compared to the chance that this
guy will be the last guy which is isolated, in a sense.
If he's not the last one which is isolated, he can't make things any
worse; he can only help you. So there's the trade-off. And actually
we thought about doing the Poisson lambda thing and taking derivatives,
and maybe getting something similar to percolation, where you have
Russo's formula, but here you have a positive contribution and a
negative contribution, according to what I just said. Okay. So now
about the infinite setting.
So we considered -- so if G is regular, just do Poisson(lambda),
Poisson(lambda) in every vertex. And if it's not regular, then do
Poisson of lambda times the degree in each vertex, independently. As
in the finite setting, this is stationary over time. And now the
questions we need to ask are different. Okay. I mean, in the finite
setting we ask when the social network becomes connected, but here, at
any finite time, there will always be infinitely many isolated walkers.
So instead of asking when it becomes connected, we need to ask: is it
true that eventually any two walkers will have a path of acquaintances
between them? Okay.
So, is it true almost surely? That was our conjecture, at least when
the underlying graph is of bounded degree. But it turns out that the
picture is more complicated than that.
Okay. So an easy fact is that, like in percolation, for any given T,
including T equal to infinity, the existence of at least one infinite
cluster is a 0-1 event. If the underlying graph G is vertex
transitive, then saying that you have exactly K infinite clusters is a
0-1 event; so we have this ergodicity. And moreover we have some weak
form of insertion tolerance, and for the continuous-time walk we
actually have insertion tolerance, so we can rule out K which is finite
and bigger than 1. So this is exactly as in percolation.
Okay. So in the amenable case, it turns out that the answer is
positive: if G is amenable, then for any lambda, G_infinity is
connected almost surely. It's like saying that eventually any two
walkers meet. For this we use results from percolation, and you have
to do some work in some cases, but this is fairly easy. But the
nonamenable case turns out to be interesting. So denote by rho the
spectral radius of the walk. This can be defined, once you prove that
the limit exists, as the limit of P to the n of (x, y), raised to the
power 1 over n, for any x and y, for instance; once the limit exists,
you can say it equals the supremum, by supermultiplicativity. And in
the nonamenable case, this is strictly smaller than 1, okay? So it
turns out that if 2 lambda plus 1, times the spectral radius, is
smaller than 1, we have more than one class at time infinity. Okay?
And, again, if G is vertex transitive, then you can say that almost
surely you have infinitely many clusters at time infinity.
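[The definition and the criterion in symbols, as I read them from the description above; the link with nonamenability is the standard one for bounded degree graphs.]

$$\rho \;=\; \lim_{n \to \infty} \big(P^n(x, y)\big)^{1/n} \;=\; \sup_{n} \big(P^{2n}(x, x)\big)^{1/2n},$$

independently of $x$ and $y$; nonamenability of $G$ (with bounded degree) forces $\rho < 1$, and for transitive graphs this is Kesten's theorem. The result stated here is that if

$$(2\lambda + 1)\,\rho \;<\; 1,$$

then $G_\infty$ has more than one cluster almost surely, and, when $G$ is vertex transitive, infinitely many.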
Okay. So I want to explain this result. We also showed that if
lambda is large enough then you have uniqueness, and there's a
monotonicity property, so there's a critical value, some lambda
critical, for uniqueness at time infinity. But I'm only going to
discuss this result, because it is the most surprising one. So it
turns out that you can look at the process in some slow-motion manner
and then get the cluster, at least in this slow-motion manner, to be
dominated by a branching random walk. The idea is that we start at
some vertex V; that's stage 0. Then at stage 1 we look at the walkers
that the guys from stage 0 met. Then at stage 2, we look at the
walkers that, say, the walkers that were added at stage 1 met, where we
go one step into the past and one step into the future and see which
guys they met; and for the walkers from stage 0 we only go one step
into the future. And in general, at stage T of the process, the
exploration process, let's look at walkers that were added at stage I
at time J; I mean, this is the stage with respect to the exploration
process, and this is the time with respect to the actual lazy simple
random walks. What we explore is the walkers that they met at time J
plus (T minus I), this is the future step, and we also explore the
walkers that they met at time J minus (T minus I), as long as this is
non-negative. Okay. And when we run this exploration process up to
time infinity, what we get is exactly the cluster, and we follow the
trajectories of all the walkers in the cluster at every time. That's
the information we get.
So the contribution that we get here, at least unconditionally, should
be Poisson(lambda). Okay. But we are only interested in the new
walkers they meet, walkers which are not already part of the
exploration process. For this we notice that this has to be dominated
by Poisson(lambda), because we only care about the new walkers.
Basically you can divide the paths: let's say here we've reached a
vertex, call it W. Instead of looking over all paths that reach W at
that time, we look only at a subset of the paths, right? And we use
the decomposition of the Poisson random variable, so it's clear that
this still has to be dominated by Poisson(lambda). Then we get that
this is stochastically dominated by a branching random walk with number
of children Poisson(2 lambda), but we have the plus one because we
don't die after we give birth.
Okay. And then if the expectation -- if this is smaller than 1, we
get that the expected number of visits to the original vertex V is
finite, which implies transience. And if the branching random walk is
transient, it means there are some vertices that it never visits. But
because this branching random walk actually dominates the exploration,
in which we follow the walkers that we meet and follow their
trajectories including time 0, if this process never reaches some
vertices, it means that the guys that started at those vertices were
never part of our cluster. That's the proof. I'm done. Sorry for
keeping you longer.
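[A minimal version of the first moment computation behind this last step, under the stated condition; the domination by the branching random walk is the one described above.]

For a branching random walk started at $v$ with mean offspring $m = 2\lambda + 1$ and steps given by the lazy walk, which is reversible and so satisfies $P^t(v, v) \le \rho^t$,

$$\mathbb{E}\big[\#\text{ visits to } v\big] \;=\; \sum_{t \ge 0} m^t P^t(v, v) \;\le\; \sum_{t \ge 0} \big((2\lambda + 1)\rho\big)^t \;<\; \infty \quad \text{when } (2\lambda + 1)\rho < 1,$$

so the branching random walk visits any fixed vertex only finitely often; since it dominates the exploration of the cluster of $v$, walkers started at vertices it never reaches are never joined to that cluster.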
>> Yuval Peres: Just in time. Thank you.
>> Yuval Peres: Any further questions?
[applause].
You have a question?
>> Jonathan Hermon: Yes, so we have a lot of questions about infinite
clusters, but the main conjecture we still don't know how to approach
is: is it true that whenever p_c, with respect to percolation,
independent bond percolation, whenever p_c of G is smaller than 1, then
G -- sorry, then there exists T such that G_T has an infinite cluster?
So we know that to be the case for Z^d for any d bigger than one, and
we know it for any lambda; just the critical T depends on lambda. We
know it in the non-amenable case, for any lambda bigger than 0. But
that's it.
>> Yuval Peres: Any questions? Thank you again.
[applause]