>> Yuval Peres: Good afternoon everyone. We're delighted to have Matthew from the University
of Washington. I see his co-authors Toby and Chris are also here, and he'll be telling us about
the frog model on trees.
>> Matthew Junge: Thanks everyone for coming and I will be talking about the frog model on
trees, and as Yuval just pointed out this is joint work with my advisor Chris Hoffman as well as
Toby Johnson. First let's start with the definition, and we start with fixing a graph, G. I was
introduced to this problem on the binary tree, so specifically I think of a binary tree when I
think of the frog model, so I'll start with that. The model earned its name because we are going
to put frogs on this graph. In particular, we'll place an awake frog at the root, which is a
distinguished vertex I'll call v_0, and we'll place a sleeping frog at every other vertex.
Let's do that on this graph. A natural place for the root on a binary tree is the top vertex, so
I'll put the awake frog there and place sleeping frogs everywhere else. To set this model in
motion, what we're going to do is have awake frogs perform a simple nearest neighbor random
walk and awaken the sleeping frogs that they visit. Awake frogs do a simple random walk and
they wake any sleepers that they visit. By visit, I mean, land on the vertex that a sleeping frog is
occupying. It couldn't hurt to just look at a couple of steps of the evolution of this model. At
time zero we have one awake frog and now in discrete time he's going to choose one of his
neighbors uniformly and jump to it. Let's say he jumps to this neighbor here and he's going to
wake that sleeping frog up that he has landed on. Now we have two awake frogs and
independently they'll do their own simple random walk, choose a neighbor and jump to it with
uniform probability. Say the frog that just woke up jumps back and our initial frog jumps
forward to here. We have three frogs and they'll continue doing this and that's giving our
random model its dynamics. For concision I'll refer to this as FM(G), the frog model on the
graph G. This was introduced in the mid-'90s by Ravishankar, and the first paper came
out in 1999 from Telcs and Wormald, and I'll say something about that result in a minute.
First I want to give this some context by talking about some similar models
that have come out in the last decade. Are there any questions about how I'm defining this
model? The main ones that I get are: are frogs allowed to move backwards? Yes, because
they choose a neighbor uniformly. And is there any rule about multiple frogs on a site? No,
there is no rule about that. You can have as many frogs on one vertex as you want, because
once they're awake they don't interact at all.
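Since the dynamics are easier to pin down in code than in words, here is a minimal simulation sketch (my own illustration, not code from the talk; the tree is truncated at a finite depth, which only approximates the infinite model):

```python
import random

def simulate_frog_model(depth=10, steps=500, seed=1):
    """Frog model on a binary tree truncated at `depth`.

    Vertices are tuples of 0/1 child-choices from the root ().  One
    sleeping frog sits on every non-root vertex; awake frogs do simple
    random walk and wake any sleeper they land on.  Truncation is an
    approximation: the real model lives on the infinite tree.
    """
    rng = random.Random(seed)
    root = ()
    sleeping = set()
    frontier = [root]
    for _ in range(depth):                 # enumerate vertices level by level
        frontier = [v + (b,) for v in frontier for b in (0, 1)]
        sleeping.update(frontier)
    awake = [root]                         # the one awake frog starts at the root
    root_visits = 0
    for _ in range(steps):
        moved = []
        for v in awake:
            nbrs = []
            if v:                          # parent edge, unless at the root
                nbrs.append(v[:-1])
            if len(v) < depth:             # two child edges, unless at a leaf
                nbrs += [v + (0,), v + (1,)]
            w = rng.choice(nbrs)           # uniform neighbor: frogs may step back
            if w == root:
                root_visits += 1
            if w in sleeping:              # landing on a sleeper wakes it
                sleeping.discard(w)
                moved.append(w)            # the woken frog joins the walkers
            moved.append(w)
        awake = moved
    return root_visits, len(awake)
```

Note that the very first step always wakes a second frog, since both neighbors of the root start asleep.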
>>: [indiscernible]
>> Matthew Junge: Oh yeah. We'll be talking about an infinite graph, but it makes sense on a
finite graph too; different questions, though. Activated random walk is a model that was, I
believe, first studied by Dickman, Rolla, and Sidoravicius in 2009, and it resembles the frog
model because we have infinitely many particles and they move like simple random walks on the
graph, except we allow the possibility that they fall asleep. Then they can be re-awoken by
other particles. They actually look at a random number of particles at each site, and as they
vary that parameter they see a phase transition between always having some awake particles
and every particle eventually falling asleep.
Something that came up in 2003 was known as excited random walk and this was
studied by Benjamini and David Wilson. An excited random walk is actually just a single random
walker. There's only one excited particle and why it's excited is upon its first visit to each site
on the graph it has an extra drift in a certain direction. Say you are on Z, the first visit to each
site you get an extra push, or extra bias is the right word, in the positive direction. They
actually do study this on Z^d, and they prove that for every d greater than or equal to 2 the
model is transient with this drift. The parallels with activated random walk are pretty obvious:
there are infinitely many particles, they're moving, and they're interacting in this way. And
excited random walk is also similar to our model because, if we got rid of all of the frogs and
started with a single particle at the root, we could think of the frog model as an excited
branching random walk, where upon the first visit to each site the walk branches into two
particles. Similarly to how they study recurrence and transience, that's the question that we
also look at. Let me get
the definition down before we get started. We will say that FM(G) is recurrent (abbreviated
REC in the future) if the root is visited infinitely often with positive probability, and
otherwise we say it's transient. Is that okay, that definition? With positive probability is the
main thing about it. Are there any questions? This has been studied in previous papers.
In particular, the very first paper about a frog model was about transience and recurrence and
that was on ZD. Does anyone want to guess what they found on ZD for the frog model? They
found it is recurrent for every d. Another variation that was studied was also on Z^d, or in
particular just on the integers Z, and this was Gantert and Schmidt in 2008. They look at FM(Z)
with a drift and they also let a random number of frogs be at each site, say N_k. Then they
classify what conditions on N_k guarantee recurrence or transience of the frog model. For
FM(Z) with this drift, does anyone want to guess what the condition on N_k is? That might
not be as obvious. They find that it's transient if log N has finite expectation, and
otherwise it's recurrent. In this paper, this is 2008, they leave as an open question
what's the transience and recurrence behavior on a binary tree. In an earlier paper, Alves,
Machado, and Popov asked what the behavior is on any d-ary tree in terms of recurrence
and transience, with just the one-frog-per-site model. The reason this is an interesting
and not a trivial question is that on the binary tree there are two edges leading away from the
root and one leading back, so there's a drift which says that any single frog is going to escape
to infinity. The question is whether the collective effort of waking frogs can overcome this
drift and visit the root infinitely often. That's the subject of our main
theorem.
>>: Question. [indiscernible] because if N is 1…
>> Matthew Junge: Then it would be transient, so this is with drift.
>>: Oh, with drift, I'm sorry.
>> Matthew Junge: Does that make sense? What's interesting is that this condition is
independent of the drift. What we consider is the frog model on T_d, the rooted d-ary tree,
and we ended up showing that -- does anyone want to guess what the behavior is? You didn't
read the abstract? How about d equals 2?
>>: There's no drift now?
>> Matthew Junge: There's no drift except the tree structure is creating this natural drift.
>>: And there's one frog?
>> Matthew Junge: One frog per site, yes. We found that d equals 2 is recurrent, and we found
that as we increase d there is a phase transition where it switches to being transient. Our proof
gets down to 5, so we know that for d greater than or equal to 5 the model is transient, and from
simulations we have a conjecture for the remaining two degrees: in particular, that d equals 3
remains recurrent and d equals 4 is where the switch to transience happens. No one was
conjecturing any phase transition here, so we were excited to see that. This recurrence result
came after the transience result and is what I think I'll spend most of this talk on. I found
out I have 10 minutes less than what I was planning, so I'm going to say a little bit less about
the transience case, but I will still give a sketch.
>>: Is the, do you have any idea whether the phase transition is actually at 3 or at 4 or at 3 and
a half or something?
>> Matthew Junge: Only from looking at our simulations. I think it's very close to 3, and I'll
say a little bit more about why we think that at the end of the talk, but we don't know exactly.
It would be neat if it were exactly 3. Yeah?
>>: What if you attached the tree to a line and then put one frog and then two frogs and then
four frogs and then eight frogs? Is it something algebraic to the [indiscernible] frogs
[indiscernible]
>> Matthew Junge: That one will be recurrent because you…
>>: Oh because of the…
>> Matthew Junge: That's going to be this model where every frog gets woken up as you go,
so that would be recurrent. You could actually add the same drift and it would still be
recurrent. I think it would be deterministic as to how you are placing them, but that is
actually just the right amount of frogs to overcome the drift. If you look at the quotient of
these two drifts, you see there is a bias of 2 to the negative k of getting back, so you need
some constant fraction of the frogs at each distance to wake up in order to have recurrence.
For d greater than or equal to 5 the idea is that we need to find something simpler that
dominates the model, and we'll prove that that is transient. We
want an upper bound and then we want to make use of the self similarity in the tree to do
some sort of recursion on that upper bound. Does anyone have a suggestion for a process that
might naturally upper bound the frog model?
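The 2 to the negative k bias mentioned a moment ago is gambler's-ruin arithmetic: on the binary tree a single frog steps toward the root with probability 1/3 and away with probability 2/3, so its distance from the root is a biased walk, and the chance of ever reaching the root from distance k is 2^-k. A quick sketch of that computation (my own check, not from the talk), using the closed-form ruin probability with an absorbing cap standing in for infinity:

```python
def root_hitting_probability(k, cap=60):
    """P(hit distance 0 before distance `cap`) for a walk started at
    distance k that steps toward 0 w.p. 1/3 and away w.p. 2/3.

    Gambler's-ruin closed form: h_k = (2**-k - 2**-cap) / (1 - 2**-cap),
    which tends to 2**-k as cap -> infinity.
    """
    return (2.0 ** -k - 2.0 ** -cap) / (1.0 - 2.0 ** -cap)

# a frog woken at distance k returns to the root with probability ~ 2^-k,
# so a constant fraction of frogs per level must wake for recurrence
probs = [root_hitting_probability(k) for k in (1, 2, 3)]
```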
>>: [indiscernible]
>> Matthew Junge: Everything away? That would do it, yeah, and that's recurrent or that
would be recurrent.
>>: Percolation?
>> Matthew Junge: It ends up coupling really naturally to a branching random walk, and then
we can use martingale techniques to prove things about that branching random walk. Does
anyone see maybe an obvious branching random walk that would dominate the frog model?
Obvious isn't the right word, but a crude estimate.
>>: Every step each frog gives birth to another…
>> Matthew Junge: Yeah, what if at every step each frog spawns a new frog, so every step the
population doubles? If we analyze that branching random walk, you can already prove that for d
greater than or equal to 14 the frog model on the tree is transient. What we end up doing to
create a good upper bound is refine this, to get as close to modeling the real behavior of the
frog model as we can, and to do that we use a multi-type branching random walk that actually
has 27 types of particles. You get exponentially decreasing gains as you add more and more
particle types for lowering d, but this gets us all the way to d greater than or equal to 5.
That's how our proof of transience goes, and we like it because we always know what's going on
and we have a good grip on the probabilistic objects involved. I'm going to go into a lot more
detail for d equals 2, because at a certain point we lose our probabilistic interpretation and
I'm curious if anyone can help us recover it. That's going to be one of our questions at the end.
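A back-of-envelope first-moment computation (my reconstruction, not spelled out in the talk) shows where 14 comes from for the doubling walk: at time 2n there are 2^(2n) particles, each back at the root with probability roughly binom(2n, n) p^n q^n, which is about (4pq)^n up to polynomial factors, where p = 1/(d+1) is the chance of stepping toward the root. The expected number of particles at the root is then about (16pq)^n = (16d/(d+1)^2)^n, which decays geometrically exactly when 16d/(d+1)^2 < 1, and d = 14 is the first integer where that holds:

```python
def doubling_rate(d):
    """Per-two-steps growth factor of the expected number of particles at
    the root for the 'double every step' branching random walk on the
    d-ary tree: 2^2 * 4*p*q with p = 1/(d+1), i.e. 16d/(d+1)^2."""
    p = 1.0 / (d + 1)
    return 16.0 * p * (1.0 - p)

# expected returns to the root are summable once this rate drops below 1
threshold = min(d for d in range(2, 50) if doubling_rate(d) < 1.0)
print(threshold)  # -> 14
```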
>>: So could you just give a sort of the vaguest idea about what these types of [indiscernible]
>> Matthew Junge: Yes. I'm glad you asked. For instance, we treat two frogs at the same site,
like these two frogs right here, as a single particle type, because when these frogs jump down
there is a chance that they jump to the same site, and so they won't wake as many frogs as if
you were taking them in isolation. We get up to 27 types by simulating: say we let these jump
two steps and look at what they could do. And 27 ended up being the nice number, about as far
as we could go realistically.
>>: And you don't expect to get it down to d equals 4 or a little bit more?
>> Matthew Junge: It's not just a little bit more to get there. There's no reason this couldn't
work, though; if 4 were where the transition occurred, it probably couldn't work, but we
think there is some wiggle room where maybe it would get you there. The idea for d equals 2 is
the same, except we are going to look for a good lower bound now and then use a recursion.
Good lower bounds are harder to come by than a branching random walk. One that we did try
was a Galton-Watson process. You can embed several different types of those in the growth
of the frog model, but none of them showed the growth that we needed, so I'm not going to
grill you on a good lower bound. I'm just going to tell you what we used. It came in two steps.
The first was that we're going to restrict all of our frogs to non-backtracking paths. What I
mean by that is, since any frog is going to escape to infinity along the tree, we can break its
path. Say that a frog woke up here. We can break its path into a direct path to infinity plus
excursions away from that line where it comes back to it. Is that okay? The first thing that we
throw out is any frog that a frog wakes outside of that direct path, so as far as we are
concerned frogs are just going to be jumping backward for a while until they turn around, and
then they will continue away to infinity on the tree. This already gives us a lot of control
over the model, but we need to do a little more. The second step will be a little more detailed
to explain, but it roughly goes like this: only allow one outsider into any given subtree.
>>: [indiscernible]
>> Matthew Junge: Let me draw a diagram to explain what I mean by this. Say we have the
root here and we're going to look at the right subtree of the root. I'm going to label these
vertices x, x_L, and x_R, and you can picture the subtree rooted at x_L and the subtree rooted
at x_R. We want to get a recursion, and that's what's motivating this rule, because in order to
do that we need to make subtrees act the same in some sense. The problem that we run into is:
what if 10 frogs move into one subtree? Then it's going to be distributed differently than one
that just had two frogs enter it. But if we cap the number of frogs that can enter a given
subtree, they will all look the same. What I mean by only allowing one outsider to enter is:
after the first outside frog goes into a subtree, we just ignore all subsequent visits, and this
is done via a coupling. Can you picture what that means, at least? It may not be so obvious why
that makes them identically distributed, but that is the rule that we're going to enforce. And
these are frogs following non-backtracking paths, so we can ensure that once a frog on such a
path enters a subtree, it's committed: it is going to stay in that subtree. I'll say a little
bit more as I define things, so hopefully it will become more clear.
>>: Don't you run out of subtrees? I mean there are too many frogs?
>> Matthew Junge: For subtrees?
>>: What?
>> Matthew Junge: There's always another; this will recurse down, because there's a subtree
rooted at x and a subtree rooted at x_R.
>>: But these non-backtracking paths are not going to get you back to the origin, so to get
recurrence as you defined it, you need somehow to use more than these non-backtracking
paths.
>> Matthew Junge: Non-backtracking paths first start by walking backwards towards the
root. For instance, the frog at x_R could step back, and then step back and visit the root, and
from there it has to walk forward.
>>: I see.
>> Matthew Junge: So stepping backwards and backtracking are different notions on the tree.
>>: Can we see again the definition of non-backtracking?
>>: Not the usual definition.
>> Matthew Junge: It's just, I think it might be the usual. Say a non-backtracking path from
here could go like this and then forward, so you just never revisit a vertex on the route that
you take.
>>: Okay.
>>: So it's [indiscernible]?
>>: No, no. This is non-backtracking. I thought you meant going back along the same edge.
>> Matthew Junge: No. So you can't visit a vertex and then revisit it. They still are able to
visit the root, and they also have the condition that once they step away from the root they
will continue to step away from the root. Looking towards actually obtaining a recursion, let's
assume that the frog at x_R is visited by the frog that started at the root, just so we can fix
our notation. I'm going to define V to be the number of frogs that visit x and then jump to
v_0. Similarly, I can define V_R and V_L as the number of frogs that come from the subtrees
rooted at x_R and x_L, respectively, and visit x. The advantage of these tools, I mean
non-backtracking paths and this one-outsider rule, is that V, given this assumption, is
identical in distribution to V_R, and if we assume that x_L is visited, V is equal in
distribution to V_L conditioned on that event. And these are independent, because there is no
interaction between the subtrees. This is what motivated this coupling: we can get something
about visits to the root that's going to look the same down the tree. If we actually try to
write this down as a formula -- do you have a question?
>>: Yes. So VR is the number of visits just from the children of the root?
>> Matthew Junge: VR is the total number of frogs from this subtree that visit X.
>>: When you say from you mean the ones that were born in that subtree?
>> Matthew Junge: Frogs that were born in that subtree, yeah. One visited from above, so V_R
is the number that came from below, from that subtree. And V is the number that came from the
subtree rooted at x. What this all adds up to in terms of a formula: we can express V in terms of
V_R and V_L, and I'll write it down and then explain where it's coming from. And then I think we
can call it good. We claim that V is equal in distribution to Binomial(V_R, 1/2) +
1{x_L visited} Binomial(V_L, 1/2) + Bernoulli(1/3), so we can express the number of visits to
the root in this reduced model in terms of these three terms. Let's look at them one at a time.
I'm claiming we have a Binomial(V_R, 1/2) because V_R tells us how many frogs land right
here at x, and since they are following non-backtracking paths, they can either jump to x_L
or to the root with equal probability; so for each frog it's like a Bernoulli 1/2 event, and
we're going to get a binomial distribution of returns. Is that okay? And similarly, if we know
that x_L is visited, we're going to see the same number of returns to the root from V_L; they
are going to percolate in the same way. And can anyone see where this Bernoulli 1/3 might be
coming from? There's one thing unaccounted for. Yeah. We haven't said anything about this frog
here, who has yet to take a step, and so with equal probability it can jump to each of its
neighbors and will contribute a Bernoulli 1/3.
>>: I'm so confused about the -- if he's not backtracking, does he really have a one-third
probability of going in either direction?
>> Matthew Junge: That's a good question. Because we're on a tree, the structure of it
guarantees that when we actually look at how this ray is selected, it's also selected
uniformly. So it is true, but it's not obvious that it has those transitions.
>>: But it is a binary tree, not a three regular tree, right?
>> Matthew Junge: Everywhere except the root it's going to look the same. In our coupling you
actually have to do one other thing: if you visit the root itself you get frozen there and
you no longer do anything. Otherwise, it would have a different distribution for [indiscernible].
>>: So if you visit the root then you are not allowed to go back off to infinity?
>> Matthew Junge: Yeah, we just ignore you, and that's okay; it's still a lower bound on our
initial process. We're trying to prove recurrence for the binary tree, and we have found a
lower bounding process that satisfies this. Now what we're going to do is transform this into
a question about functions: whether V is infinite almost surely or not. In particular, we'll
look at the probability generating function of V. So I'm going to let f(x) be the probability
generating function, and this transforms the question of V being infinite almost surely into
proving that its generating function is identically 0. What we can work with is this expression
here, to write f in a different way. At this point our proof turns into one about
functional analysis and, I guess, the appropriate way to get that started is by introducing an
operator. It will seem unmotivated for a minute, but then I'll bring it back to what we are
talking about here. Consider the operator A that takes increasing functions from [0,1] to [0,1]
into functions from [0,1] to R, defined as follows. Given a function g, to evaluate Ag at a
point x: (Ag)(x) = ((x+1)/2) ((x+2)/3) g((x+1)/2)^2 + ((x+1)/3) (1 - g(x)/2) (1 - g((x+1)/2)).
This may seem like it comes out of thin air, but is the definition clear? And let's collect
some facts about it. I think the first fact will
explain its origin, and that's that our generating function for V is a fixed point of this
operator: we have f(x) = (Af)(x) for x in the appropriate domain. Where is this coming from?
If you are comfortable with generating functions, you might recognize some things. For
instance, (x+2)/3 is the generating function of a Bernoulli(1/3), and when you think about a
binomial on a random variable, that's really a random sum, and that translates to function
composition with probability generating functions; that's why we see g((x+1)/2), which looks
like a binomial on some random variable V.
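The composition rule being used here, that if V has generating function f then Binomial(V, 1/2) has generating function f((x+1)/2), is easy to sanity-check numerically. A small Monte Carlo sketch (my own example with V ~ Poisson(2), not a distribution from the talk): thinning Poisson(2) by 1/2 gives Poisson(1), and indeed f((x+1)/2) = e^{2((x+1)/2 - 1)} = e^{x-1}, the Poisson(1) generating function.

```python
import math
import random

def sample_poisson(rng, mean):
    """Sample Poisson(mean) by counting unit-rate arrivals before time `mean`."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(1.0)
        if t > mean:
            return n
        n += 1

def thinned_gf_estimate(x, mean=2.0, p=0.5, trials=200_000, seed=7):
    """Monte Carlo estimate of E[x^K], K ~ Binomial(V, p), V ~ Poisson(mean)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        v = sample_poisson(rng, mean)
        k = sum(rng.random() < p for _ in range(v))  # thin V down to Binomial(v, p)
        total += x ** k
    return total / trials

x = 0.5
estimate = thinned_gf_estimate(x)
exact = math.exp(2.0 * ((x + 1.0) / 2.0 - 1.0))  # f((x+1)/2) with f(s) = e^{2(s-1)}
```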
>>: 2x+1 over V?
>> Matthew Junge: I'm not sure, but there's a lot of simplification going on here; this formula
is coming from looking at this and then computing the appropriate probabilities. It's important
to know that there are some dependencies in here, and that's why we get this kind of noisy
equation right here, why it's not extremely clean and doesn't look exactly the same as that.
Remember, our goal is to prove that f is identically 0 now, and the key property of A that lets
us do that is that it's monotone: if you take g less than or equal to h, then A preserves that
domination. If we combine that with the fact that f, as a probability generating function, is
bounded by 1, we can string together the following bound: f(x) is equal to (A^n f)(x) for any
n, because it's a fixed point, and then by monotonicity and this bound, we can just look at
A^n applied to the constant function 1. So we reduce this question
of recurrence and transience to studying: if you plug 1 into this operator and start iterating,
what do we get? With a technical lemma that actually uses some single-variable calculus, we can
bound this by e^(c_n (x-1)), for some sequence c_n increasing to infinity. To remind you what
these graphs look like: if this is 1 and this is 1, then as c_n increases they get progressively
flatter and then steeper at the end, and they converge to 0. Just like that, and it seemed all
of a sudden to us: when we prove this, we have f identically 0, which going back to our original
random variables says that V is infinite almost surely, and because we were lower bounding the
process, we have recurrence for the frog model. And we're still a little
mystified about how this is working and with that let me jump into our further questions
because the first one is about a few steps in this proof. Are there any questions from the
audience before I do that? We really have two things we would like to know.
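On that point: the bounding functions e^{c_n(x-1)} are the probability generating functions of Poisson(c_n) random variables, and the pointwise collapse to 0 that drives the proof is easy to see numerically (a small check of my own): as c grows, e^{c(x-1)} goes to 0 at every x < 1 while staying equal to 1 at x = 1, which is exactly the flat-then-steep picture described a moment ago.

```python
import math

def poisson_gf(c, x):
    """Probability generating function of a Poisson(c) random variable."""
    return math.exp(c * (x - 1.0))

# pointwise collapse on [0, 1): a fixed point below all of these must be 0 there
vals_at_09 = [poisson_gf(c, 0.9) for c in (1.0, 10.0, 100.0)]
vals_at_1 = [poisson_gf(c, 1.0) for c in (1.0, 10.0, 100.0)]
```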
>>: Actually, I do have a question. I mean you have this indicator that XL is visited in your blue
equation, so how does that manifest itself in the calculation?
>> Matthew Junge: That's actually the event that we condition on when we're computing,
when we're writing out the generating function, so what we do is we look at this event and we
ask how can XL be visited. It's either going to be visited by the frog here or the frog here and
that's basically where it comes up.
>>: I see. And you still sort of have enough independence of appropriate things rather than
[indiscernible]
>> Matthew Junge: The key independence is this one. Everything else is related in some way,
but this is the key independence. Our proof took a turn into analysis right when I introduced
the generating function, and we would really like a probabilistic interpretation of what we're
doing in that argument. For instance, the functions that we ultimately bound our generating
function by are the generating functions of Poisson(c_n) random variables, so it seems like
there must be some probabilistic interpretation of what's going on here, but we don't see it.
Secondly, our idea was that we wanted a good lower bound, and I told you a lower bound, but
when we defined it we had no intuition as to why it would be good or bad. We can't tell how
many frogs we are throwing out with that second condition. The non-backtracking condition is
pretty harmless, sure, but for that second, only-one-outsider rule, we would like to know why
it's not cutting out too much, whereas other things we tried were. In terms of actual further
questions, we're interested in placing Poisson(lambda) frogs at each site and then finding the
phase transition in higher d. For instance, it's open whether, if you put a thousand frogs on
each site, the 5-ary tree is recurrent. We don't have a technique that addresses that question,
and it really ought to become recurrent at some point. In regards to our conjecture, we actually
give a sharper conjecture in our paper: we think it's possible that there is a three-phase
transition. What I mean by that is we think that binary and ternary trees exhibit different
types of recurrence, namely strong recurrence and weak recurrence. By strong recurrence I mean
the fraction of time that the root is occupied stays away from 0 on the binary tree, while the
fraction of time that it's occupied on the ternary tree goes to 0. We think it would be really
neat if, when we find this phase transition for placing Poisson(lambda) frogs, we can exhibit
all three phases by varying that parameter; proving this with one frog per site, say for d
equals 2 and 3, would also be really nice. That's where
it stands and I think, one reason I'm talking is we want opinions about this proof, if people
recognize this technique coming up before or see something that we are missing about it,
because we really hope that it could cover both of these cases, that the ideas could be
extended. Thanks for having me and are there any questions? [applause]
>>: So what breaks down exactly at d equals three?
>> Matthew Junge: Nothing breaks down except this: we've done the same thing for d equals
three and we get an operator, but it has like seven or eight summands in it, and you have to
actually start looking at derivatives of your function too when you work with it, so we just
can't push through the same analysis.
>>: But do you think that this kind of bound just using one outsider can work for a ternary tree?
>> Matthew Junge: That goes to the question that we don't know how much we're throwing
away, so we're not sure.
>>: But this is something you can try through simulations, if you allow me one…
>> Matthew Junge: Three was already very sensitive in simulations, the hardest to simulate
even with just one frog per site, so we haven't tried it with this rule, but I suspect it won't
be clear, because it wasn't very clear with one per site already.
>>: But that might mean that it is clear with rule that there are…
>> Matthew Junge: I think that's a good suggestion to look at simulations for this.
>>: And this operator, even if it is hard to analyze, ultimately something easier than
simulation is just to apply it numerically.
>> Matthew Junge: Just apply the operator to one, Uh-huh.
>>: Apply it to one many times and see where that's going. The monotonicity that you mention,
is that supposed to be obvious? Because you do have a 1 minus term there.
>> Matthew Junge: It's not obvious because of the minus term, but you can make comparisons
that this is not contributing enough to change it.
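The iterate-it-numerically suggestion is the same scheme classically used for Galton-Watson extinction probabilities, which makes a clean toy illustration of iterating a monotone map (my analogy, using a Binomial(2, 0.6) offspring law, not the talk's operator A): iterating the offspring generating function from 0 climbs monotonically to its smallest fixed point in [0, 1], the extinction probability, which for this offspring law is 4/9.

```python
def offspring_gf(s):
    """Generating function of a Binomial(2, 0.6) offspring distribution."""
    return (0.4 + 0.6 * s) ** 2

s = 0.0
for _ in range(200):        # monotone iteration: s_n increases to the
    s = offspring_gf(s)     # smallest fixed point of the map in [0, 1]
extinction = s              # exact value is 4/9
```

The contrast with the talk is instructive: for a probability generating function, 1 is always a fixed point, so iterating from 1 goes nowhere, whereas the talk's operator A drives the constant function 1 all the way down to 0.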
>>: There is another model called the lily pad model.
>> Matthew Junge: Oh really?
>>: Like you should combine the two. [laughter]
>>: That one is in R^d though, R^2.
>>: Or you could pick [indiscernible]
>> Matthew Junge: I thought about a lily pad model actually where you picture frogs moving
Brownian motions and actually they are on a lily pad and they are moving around, so a little bit,
but it's best to avoid too many frog references when you are talking about it. [laughter]
>>: How many is too many? [laughter]. The general method of bounding by a branching process
or branching random walk has been used many times, multiple times, but, of course, the devil
is in the details: finding the right bounding process. But why do you say that 27 is some kind
of limit? Can you somehow automate the process and do, you know, a thousand [indiscernible]?
All you have to do is find an eigenvalue for that matrix.
>> Matthew Junge: We actually did optimize the process to get to 27, and this is a computer-assisted
proof for transience; we were unable to optimize it any further than that. We were
running into computational runtime problems.
>>: In what part, calculating eigenvalue [indiscernible]
>> Matthew Junge: In calculating the matrix, because we need to actually simulate the
transition probabilities between particles and that gets complicated quickly if you are talking
about several frogs taking several steps.
>> Yuval Peres: Any other comments?
>> Matthew Junge: Does anyone recognize proving a generating function is 0 to prove infinite
visits or some random variable is infinite?
>>: Yes.
>> Matthew Junge: A little? Yeah, so we are curious about that.
>> Yuval Peres: Anything else? All right. Let's thank the speaker. [applause]