>> Yuval Peres: Good afternoon. So some folks at [indiscernible] already heard this, but we
were envious, so we asked Alexander to also tell us about random games.
>> Alexander Holroyd: Okay, thank you. Hello everyone. So, random games: imagine 2
players competing in a game of skill. How might you model this mathematically? You have
a graph, a directed graph, of positions. So the vertices of the graph are positions of the game and
the edges represent moves and you have 2 players. So maybe you start at this position, then
Alice moves first and these are all the positions that she could move to on her first move, the
ones that you can get to. Then it is Bob’s turn, so he has a move, then Alice has a move and so
on.
So I am going to assume that Alice and Bob take turns, they alternate moves and there’s not
really much loss of generality here because, say for instance you had some strange game where
Alice has 3 moves and then Bob has one, then you can think of Alice’s 3 moves as just 1. It
would just be a different graph. However, I am certainly not going to be considering
simultaneous moves. So game theory, Nash equilibrium type stuff, I am not going in that
direction. This is just taking turns.
Well how does the game end? I am just going to say that some terminal positions, which are simply
vertices that have out-degree 0, are designated wins for Alice or wins for Bob. So maybe this
node is a win for Alice and this position is a win for Bob. Okay, so it’s very, very simple. And if you
want to, you might want to assume, and again there is not really any loss of generality in any of
these things, you could assume the graph is directed acyclic, meaning it doesn’t have any
directed cycles where you can come back to the same position because if you did have a game
like that you could always just declare it to be a different position when you come back to it the
second time, the third time and so on, if you wanted to. It would be a different graph again, and if
you want, you can label a position by whose move it is next as well; that can be part of the
position.
Also if you want to you can assume that the graph is a tree, because if you had 2 possible ways
that you could come into the same position then again, you could just call it a different position
depending on how you got there. If you want to you can assume it is a tree. I mean I won’t
always assume these things, but you can assume those if you want to. So finally, and this is kind
of important to just note, if you want to, you can assume that the way you decide whether it’s a
win or a loss is by what’s called the “normal play rule” in combinatorial game theory. So the
rule is simply if you have no possible moves when it’s your turn then you lose and the other
player wins.
So I say this is without loss of generality, because if you had a game that wasn’t like that then
again you could just always modify the graph. If it is Alice’s turn, but it’s a win for Alice, then
you just give Alice one extra move, after which Bob cannot move and loses. So there is no real loss of
generality. So I will assume this most of the time. It is kind of convenient.
Okay, so I want to think about optimal play. So that means just the players are very, very clever.
So a strategy for Alice, say, is simply a map which assigns a legal move to every position in which she
might find herself. It’s a rule book for how to play the game. So you imagine Alice uses one
strategy and Bob uses another. You call Alice’s strategy a winning strategy if it guarantees that
Alice will win no matter what strategy Bob uses. And it is pretty easy to see that if you have a
finite directed acyclic graph then exactly 1 player does have a winning strategy. The game has to
end so someone has to win. It may be difficult in particular cases to decide who does win of
course.
But, on an infinite graph there is a third possibility and sort of by far the most interesting
possibility, at least in the sort of directions I am going to be discussing, which is that it’s a draw.
Neither player can force a win. Neither player has a winning strategy. So a very stupid simple
example, if the game graph is simply a directed path off to the right then the game never ends.
You just keep on moving to the right. So that’s a draw. And for that I am going to assume that
the players know the entire graph and they are infinitely skilled. So I am not going to consider
computational questions of how hard it is to decide the outcome or find a strategy, although those might be
interesting as well.
So you can do some very easy things. You can classify positions, vertices of the graph, as N
positions, meaning positions from which the next player to move, the first player, wins; P
positions, from which the previous player, i.e. the player who just moved, the second player,
has a winning strategy; or D, draws. So this is the outcome of the game with optimal
play. And it’s not very difficult to convince yourself that there is a sort of recurrence for
computing these things. If this is a node and these are all the nodes that you can get to from it and
you know the outcomes of them, then you can compute the outcome of this one.
So for instance if you can get to a P position from here then that’s an excellent move to make
because that means you are leaving your opponent in a position where they will lose. So that’s a
very good move. So that means this is a winning position, an N position. And if all the positions
you can get to are N positions then too bad for you, you are going to lose. So that’s a P position.
And then you bring in draws: if there are no P’s, but there is at least one D, then this is a D. It’s not
too difficult.
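This N/P recurrence is easy to sketch in code. Here is a minimal illustration in Python (the little game graph is invented for the example; on a finite DAG there are no draws, so every position comes out N or P):

```python
# Hypothetical game DAG: each vertex maps to its legal moves.
moves = {
    "start": ["a", "b"],
    "a": ["c"],
    "b": [],      # terminal: the player to move here loses (normal play)
    "c": [],
}

def outcome(v, moves):
    """Classify v by the recurrence: a position is N (next player
    wins) if some successor is P, and P otherwise.  On a finite DAG
    this plain recursion terminates, so there are no draws."""
    return "N" if any(outcome(w, moves) == "P" for w in moves[v]) else "P"
```

So for the made-up graph above, "b" and "c" are P positions (the player to move is stuck), which makes "a" and "start" N positions.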
But, note already that this recurrence in itself is not sufficient to determine game outcomes,
because consider the stupid example where the game graph is just an infinite line: then every
vertex being D is a solution to that recurrence, and it’s the correct one, that every vertex is a draw.
But these two alternating labellings, P, N or N, P, are both solutions to the recurrence. So on an
infinite graph you need to know some more. And note that if you were to cut this path off at some
point, then either one or the other of these would be the truth, depending on whether you cut it
off at an even or an odd distance. So saying that it’s a draw is something to do with saying that
if you cut the graph off a long way away, it depends how you do it what happens at the beginning.
Okay, so again, I am assuming everything is known to both players and there is no randomness
at all. You just have a deterministic graph and you play deterministically. So what are
interesting games to look at? Well, that’s kind of a tricky question, because on the one hand
you have some simple mathematically idealized games where there is some interesting theory of
them, but they are not really the kind of games people play in practice. On the other hand, you
have games like chess and go where people are very interested in them in practice and there are
definitely interesting computational questions like: How do you make a computer play go? But
for theorems about chess and go, you don’t expect there to be many interesting ones because
they are just too specific.
So what is a good thing to ask? Well here is one perhaps interesting middle ground you could
try. You could ask: What happens in a typical game? What does a typical game look like? So
that is what I am going to focus on. So I am going to choose the game itself at random. So
there is still not going to be any randomness in the game play. I bring in 2 players and I tell
them, “Here is a game I cooked up for you to play. I’ll let you play and I chose it at random.”
So that is what I am going to do.
I am going to talk about 3 examples. So the way I have set it up, a game is just a directed graph.
So I just want to think about some interesting models of directed graphs. I am going to talk
about 3 cases. All these are joint work with James Martin who is now here for a few months. I
am very happy to have him here. And depending on which one it is, it is also joint with Marcovici,
Basu and Wästlund. So I will consider Galton-Watson trees, and I will consider directed
percolation clusters and I will consider undirected percolation clusters. And bear in mind, the
way I described it, it was a directed graph. So I haven’t told you what it means to play this game
on an undirected percolation cluster, but I will come back to that and I will give a sensible
definition.
And this isn’t going to be one of those talks where I just try to impress you with how clever we
are having solved all the problems. It’s not going to be like that. I am trying to open the door to
some interesting circles of questions. We managed to prove some things, but there is a lot more
to do. We have just kind of scratched the surface. And it’s already interesting that in trying to
answer essentially the same question in these 3 settings we ran into all sorts of interesting old
friends from the world of probability theory, continuous and discontinuous phase-transitions,
probabilistic C.A., hard-core models, bootstrap percolation and maximum matching, all from
considering the same question essentially.
>>: [inaudible].
>> Alexander Holroyd: And there will kind of be an increasing difficulty gradient as we go
down. So here, most of the proofs are kind of exercises, although the results are quite interesting
and striking and will give you some clues about what you should be asking lower down. And
here it is just on the cusp where somehow, magically the theory is just strong enough to be able
to prove some interesting things, but it is sort of very fragile. If anything broke it would be much
harder. And down here, by far the most interesting questions are completely open; we have no
idea how to attack them. I think they are very, very interesting and we managed to find some
consolation prizes, some things we could prove. But, the main question is open.
All right. So let’s get started with Galton-Watson trees. So let’s suppose that the graph, this
directed graph that I choose to play my game on, is a Galton-Watson branching process with
some offspring distribution mu. So you just have one vertex at the top, and that has a
random number of children and each of those has a random number of children and so on, all
with this same distribution. And I am going to play the normal game on this graph. So you start
at the top, you take turns to play and if you find you are at a vertex with 0 offspring then you lose
and the other player wins.
So of course if mu has mean less than 1, then the branching process dies out. So the tree is finite
and that means certainly there are no draws. Draws can only happen on an infinite graph, but if
it has mean greater than 1 then the tree is infinite with positive probability, so you can ask,
“Can there be a draw?” So you can have an infinite random tree and draws can take place, but do they happen with optimal play?
So maybe there is this infinite path, but somehow it’s not thick enough –. So to have a draw you
somehow have to have a thick path, not really a path, but a cluster, so that neither player can
force the other one off it and cause a win. So it’s a question.
>>: But you are assuming P0 is [inaudible]?
>> Alexander Holroyd: That’s right [inaudible]. Okay, so not so surprisingly, if you have
thought about questions like this one can do really an exact analysis on a Galton-Watson tree. So
first of all, and this is true much more generally, there is sort of a compactness argument that says
if Alice can win say, then she can guarantee to do so within some finite number of moves and
that number depends on the tree. So it’s random. All right. So somehow wins are local in some
sense and therefore the sensible thing to do is truncate the tree at level N and declare all the
vertices at level N to be draws. So you just play the game on this top finite piece, but with
different rules. You say, if the game ever gets to the bottom we call it a draw, and then you see
what happens.
So you can just apply the recursion upwards to figure out what all these other vertices are and
then figure out what the root is. And because of this, it is pretty easy to see that, say, the
probability that the root is an N position when you truncate converges to the probability that it’s an N
position in the infinite tree. And similarly for P, and therefore it’s true for D as well, because
that’s 1 minus the other 2. And on the other hand you can compute all these things using
generating functions. If you take the generating function of the offspring distribution then you
can just translate this recursion into a recursion involving that: if you know N_n you can compute
P_{n+1}, etc.
So the up shot of all this is you look at 1 minus the generating function of the offspring
distribution and it’s all about the fixed points of that. So D, the probability that the root is a
draw, that the game is a draw with optimal plays, is the difference between the biggest fixed
point and the smallest fixed point, not of F, but of the iterate F of F, in the interval [0,1]. So, easy
computations and N and P are given in terms of that as well. So there are draws if and only if
this function, the iterate F of F of X has multiple fixed points in the interval.
So, here is an example, just a simple example, your binary branching, your offspring distribution
is you have 2 offspring with probability P and 0 offspring otherwise. So of course we know
what the mean of this is. So the tree is infinite with positive probability if and only if P is greater than a half. So here’s what
happens with the game: here is P along the bottom, increasing this P, which is the probability of
having 2 offspring and in any vertical line. The width of the red part is the probability of an N
position. The width of the blue part is the probability of P position. And the width of the green
part is the probability of a draw. So up to root 3 over 2, which, you will notice, is not equal to a half.
So the phase transition for the tree to be infinite is at P equals a half, but you don’t have any
draws until you get to P equals root 3 over 2 and then you start having draws with positive
probability. And the probability is the width of this green bit. So it’s a continuous
phase transition. So right at the critical point it’s still 0 and then it grows.
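To make the fixed-point recipe concrete, here is a small numerical sketch for this binary-branching example (my own illustration, not the speakers’ code): take f(x) = 1 − G(x) = p(1 − x²), find the fixed points of f∘f on [0,1] by scanning for sign changes and bisecting, and return the largest minus the smallest.

```python
def draw_probability(p, tol=1e-12):
    """Draw probability at the root for binary branching: offspring
    is 2 with probability p, else 0, so G(x) = (1-p) + p*x**2 and
    f(x) = 1 - G(x) = p*(1 - x*x).  D is the largest minus the
    smallest fixed point of f(f(x)) on [0, 1]."""
    def f(x):
        return p * (1.0 - x * x)

    def g(x):
        return f(f(x)) - x          # roots of g are fixed points of f∘f

    roots = []
    n = 10000
    xs = [i / n for i in range(n + 1)]
    for a, b in zip(xs, xs[1:]):
        if g(a) == 0.0:
            roots.append(a)
        elif g(a) * g(b) < 0:       # sign change: bisect to the root
            lo, hi = a, b
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if g(lo) * g(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append((lo + hi) / 2)
    if g(1.0) == 0.0:
        roots.append(1.0)
    return max(roots) - min(roots)
```

This reproduces the picture: the result is 0 below p = √3/2 and positive above. A short calculation with the 2-cycle of f also gives the closed form √(4p² − 3)/p for p ≥ √3/2 in this example, which the code matches.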
So that’s what happens. All these colors are computable in this simple example. So basically
what’s going on is you have this F of F of X and you are interested in fixed points, so you had
better subtract X from it. And below the critical point it’s just some nice curve with a root
somewhere and then as you cross the critical point it starts to have a point of inflection here.
And after the critical point it has 3 fixed points and the outer 2 move apart and the distance
between them is D.
Okay, so you can do lots of nice examples. You can look at a binomial distribution, which is
percolation on a regular tree and you have a similar picture as P varies basically. You have a
continuous phase transition where you start having draws not at the critical point of percolation,
but strictly higher. And with a Poisson distribution it turns out similarly; the critical point for
draws is lambda equals e. And interestingly, with the geometric distribution you never have
any draws no matter what the parameter. And once you understand this picture with fixed points
you can concoct weird examples.
So there are examples where you vary the offspring distribution continuously, but D has a
discontinuous jump from 0 to positive. And if you concoct it carefully enough you can even make
it jump from one positive value to a different positive value, and you do this just by figuring out
what you want this curve to do to make the fixed points behave strangely. So the short answer
is: Anything is possible if you concoct a strange distribution.
So a couple of other things you can do; this was the normal game, which is where you lose if you
cannot move. There is also what’s called the misere game where you just change the rules for
winning. You say, “If you cannot move you win.” And then there is also this sort of
intermediate one which we call escape game where you have one player called the “stopper”
who is trying to make the game end with someone not able to move. And the escaper is trying to
make it continue forever.
So you can do similar things with these. For the misere game things come out similarly to the
normal one. For the escape game it’s actually different. For the escape game it seems that
discontinuous phase transitions are the norm, from no probability of the escaper winning to
suddenly jumping to a positive probability. So again, for binary branching the picture ends up
looking like this. You have one fixed point and then at some point this curve dips below the
axis and you suddenly have a fixed point much further to the left. You have a jump and you
can compute it in this example. But, again, you can concoct examples with continuous transitions,
it turns out. So it’s not as nice as you might think.
All right. So we wanted to salvage some sort of rigorous statements that are true in general. So
for the normal game, P and N are lower semi-continuous functions of the offspring distribution.
So basically the only way you can get jumps is by D jumping and similarly for the other games.
And for mu, where the support is too small, all these probabilities are continuous. And finally
here is something that says for the escape game, which is the one where sort of discontinuous
phase transitions are the norm there is this special way that the escaper can win, which James
actually mentioned at lunch yesterday. If the mean times the probability of having 1
offspring is positive then the escaper can just keep giving the other player no choice at all.
So that’s sort of a very special way to escape. And the set of distributions where the escaper
wins contains the set where this quantity is greater than 1, and it’s a closed subset of the set where it’s
at least 1. So it’s not quite saying that this is a closed set. If I could say that this was a closed
set that would mean that you only get discontinuous phase transitions and it’s not as strong as
that, but the only way you can get a continuous phase transition is by sort of carefully going
across this boundary. So that’s what we have.
So, how about some inequalities? You can compare these three games, and we have 10 inequalities
between, say, the probability the next player wins in the normal game and the misere game, and they
sort of come into 3 categories. These ones just hold point wise for any directed acyclic graph or
they come from arguments like if you can win in the normal game then you can win as the
stopper, because that’s just an easier thing to do. Then there are these ones that are a little bit
more subtle. They use strategy-stealing arguments. They say that because it’s a Galton-Watson
tree, after 1 move you are sort of still in the same situation in distribution. And then these are
kind of interesting. They are not super hard to prove, but we don’t really have intuitive
explanations for them. So maybe the most interesting is that draws are more likely in the
misere game than in the normal game. So if anyone has any
intuition as to why that would be true I am kind of interested. And no other inequalities hold in
general.
So let’s talk about another random graph. So let’s do site percolation on the directed square
lattice. So you have Z2, the square lattice and make it directed. So every edge is directed to the
East or to the North. And we do site percolation on it. So each site is closed independently with
some probability P. So I am just deleting some vertices at random, and the players play the normal
game on the open subgraph, the vertices that remain. So you are not allowed to move to a
closed site. So again, if P is big there are a lot of closed sites, then the set of vertices you can
get to is finite, so definitely no draws. (For some reason our P is 1 minus the usual P of
percolation, and the threshold here should be the P_c for directed site percolation.) But if not,
then the graph is infinite, so maybe there could be draws.
So here is a simulation. So P equals .2, so 20 percent of vertices are closed or deleted. And
again, what you want to do is consider a diagonal, right. If you know the game on a
diagonal like this then you can compute what happens on the next diagonal down. So the thing
you want to do is put draws on the boundary. So here is what happens if you declare the game a
draw when you get to this boundary and then propagate backwards. So red denotes draw
positions and blue and green are wins and losses. So you can see that draws certainly persist
down quite a long way and if you want to know what happens on the infinite lattice that’s
equivalent to asking: What happens if you just propagate further and further down? Do you ever
see any reds surviving? And you ask that question for each P of course.
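The backward propagation in these simulations can be sketched as follows (an illustration, not the speakers’ code): sites on a far diagonal are declared draws, closed sites are treated as N positions (moving to one loses, which comes to the same thing), and outcomes are propagated diagonal by diagonal back towards the origin.

```python
import random

def percolation_game(n, p_closed, seed=0):
    """Game outcomes on a corner of the directed square lattice
    (moves go North or East).  Sites on the diagonal i + j == n are
    declared draws; closed sites count as N positions.  Returns a
    dict mapping each (i, j) with i + j <= n to 'N', 'P' or 'D'."""
    rng = random.Random(seed)
    cells = [(i, j) for i in range(n + 1) for j in range(n + 1) if i + j <= n]
    closed = {c: rng.random() < p_closed for c in cells}
    out = {(i, j): "D" for (i, j) in cells if i + j == n}
    for s in range(n - 1, -1, -1):      # propagate back towards the origin
        for i in range(s + 1):
            j = s - i
            if closed[(i, j)]:
                out[(i, j)] = "N"       # moving here loses immediately
                continue
            succ = (out[(i + 1, j)], out[(i, j + 1)])
            if "P" in succ:
                out[(i, j)] = "N"       # a move to a P position wins
            elif "D" in succ:
                out[(i, j)] = "D"
            else:
                out[(i, j)] = "P"
    return out
```

With no closed sites the draws trivially fill the whole triangle; the interesting question in the talk is whether any red (D) survives near the origin for positive P as n grows.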
So this is P equals .2 and this is P equals .1. So it seems pretty clear that the draws die out and
then maybe if you go back to this you sort of see that the red doesn’t seem to grow very often
and then it gradually seems to get eaten away. But the question is: Is there any P for which the
red survives? Okay, so here’s a way of looking at that question. If you want to, you can change
the rules of the game. You can say, rather than it being forbidden to move to a closed site, if
you move to a closed site then you immediately lose. That’s equivalent, because no one will
ever want to do it. So it’s the same game. It’s just convenient to regard
closed sites as N positions. And again, we can imagine turning the lattice this way
around. So if you know the game outcomes on one diagonal then you can compute the next
diagonal down. And one way to look at it is as a probabilistic cellular automaton: you know
the 2 states above you, there is just some rule for computing the one below, and sometimes
you have to toss a coin.
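Dropping the D’s, the automaton just described can be written down directly; a sketch (wrapping the diagonal into a cycle for convenience, which is my own simplification): each new cell looks at its two parents; seeing a P makes it an N, and two N parents make it a P unless the site is closed, which happens with probability p.

```python
import random

def pca_step(row, p, rng):
    """One step of the P/N automaton, on a cycle for convenience.
    Each new cell has two parents: seeing a P makes it an N; with
    two N parents it is N if the site is closed (probability p)
    and P otherwise."""
    n = len(row)
    out = []
    for i in range(n):
        parents = (row[i], row[(i + 1) % n])
        if "P" in parents or rng.random() < p:
            out.append("N")
        else:
            out.append("P")
    return out
```

Iterating this step and asking whether the law of the row converges to a single limit is exactly the uniqueness-of-stationary-distribution question in the lemma.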
So here is a lemma, which I am not going to prove. So draws occur if, and only if, this
probabilistic cellular automaton has multiple stationary distributions. A stationary distribution
means you have some distribution on the configuration of a diagonal and then you propagate it 1
down and it still has the same law. And furthermore, that’s also equivalent to it having multiple
stationary distributions without the D’s. So if you consider configurations with no D’s, just P’s
and N’s, and you use this rule, that gives you a simpler cellular automaton, and it’s equivalent
to that simpler automaton having multiple stationary laws.
And I am not going to prove this, but you should think of it as being a bit like these pictures we
started with: if one player has a win in the game then they can win locally. It is not a
global property, it’s a local property. So if there are no draws then there ought to be one
stationary distribution. On the other hand, if there are draws, then it’s somehow telling you that
even with just P’s and N’s there are multiple solutions, like in that picture at the beginning with
Z alternating N’s and P’s. So if you can have draws, it’s telling you there are multiple
possibilities. So in any case this is true and not too difficult to prove. So we want to know: Does
this have multiple stationary distributions?
Now this brings us to some difficult territory. It is widely believed that all simple 1-dimensional
probabilistic cellular automata with positive rates have unique stationary distributions. What
do positive rates mean? It means you have some cellular automaton rule like this, and no matter
what symbols you see above you, you always have positive probability of producing any symbol
below. And this isn’t actually true here, because here you sometimes definitely produce an N,
for example, but it’s kind of almost true, in the sense that if you, for instance, take 2 steps then
locally you could produce anything.
So morally, well not just morally, this cellular automaton is certainly in the category where it’s
widely agreed that you would expect it to have an unique stationary distribution. However,
although this is widely believed, it’s proved only in some very specific cases and it’s hard to
prove it in many cases and it requires different techniques in different cases. So it’s potentially a
hard thing to prove.
So what does simple mean? Of course there is no precise meaning, but –. Well, back up: for a
long time it was believed that you can remove the word simple, so that any 1-dimensional
probabilistic cellular automaton with this positive rates condition has a unique stationary
distribution. It was widely believed; it’s called the positive rates conjecture. But then Gacs in
2001 gave an extremely complicated counterexample to that. However, it is an exceptionally
difficult, 200-page proof, and I think it’s fair to say that no one, other than the author, really
understands it. And maybe it is not universally accepted to be correct, but it –.
>>: It is definitely not universally accepted.
>> Alexander Holroyd: There you go. But in any case, this is extremely complicated and the
construction is extremely complicated. So, one certainly doesn’t expect multiple stationary
distributions for a nice simple rule like this. It would be extraordinarily surprising. Okay, so we
expect unique stationary distributions, therefore no draws, but can we prove it? Well, here is a
very interesting picture, so consider just two successive diagonals and let’s just look at the
picture where there are no D’s. We just have P’s and N’s. So you have some configuration and
of course you can compute the configuration on the next line and actually the rule is: If you can
see a P above you then you are an N and if you see two N’s then you toss a coin.
So now you can imagine keeping going of course you compute the next diagonal. And to save
space I could actually write the next diagonal above instead. I could write it up here because this
is the same shaped zigzag path, and again the rule is: if I see 2 P’s then I write an N, otherwise I
toss a coin, and I compute the diagonal above. And I could keep going like this. Now here is
something else you could do on Z: look at the hardcore lattice gas, which is a standard
statistical physics model. The hardcore model is a distribution on configurations of 0's and 1's on
the vertices of a graph, with no adjacent 1's allowed, and weighted by some parameter to the power
of the number of 1's; at least that’s how you do it on a finite graph. Then if you wanted to do it
on an infinite graph there are ways of taking limits.
And there is a Glauber dynamics for which this measure is a stationary distribution, which is you
choose a vertex to update and then when you update a vertex, if any of its neighbors is 1, then it
has to be a 0, because you are not allowed to have 2 adjacent 1's. If all the neighbors are 0's then
you toss a coin. There are some appropriate probabilities that depend on this parameter lambda.
And if you think about the hardcore model on Z, one thing you could do is apply Glauber
dynamics alternately on the odd and even vertices of it, and the hardcore measure will be
stationary under that dynamics. So you have Z, I draw it in this zig-zag fashion, and you
alternately apply that rule on all the odd sites simultaneously and all the even sites
simultaneously. And of course that’s the same thing as the picture I just showed you, where
P is some function of lambda and so on.
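As an illustration of this alternating dynamics, here is a sketch on a cycle of even length standing in for Z; the update probability lambda/(1+lambda) is the standard single-site choice, assumed here rather than taken from the talk.

```python
import random

def glauber_sweep(config, lam, parity, rng):
    """Resample every site of one parity on a cycle, for the hardcore
    model: a site with an occupied neighbour must be 0; otherwise it
    becomes 1 with probability lam / (1 + lam)."""
    n = len(config)
    new = list(config)
    for i in range(parity, n, 2):
        if config[i - 1] == 1 or config[(i + 1) % n] == 1:
            new[i] = 0
        else:
            new[i] = 1 if rng.random() < lam / (1 + lam) else 0
    return new

# Alternate the odd and even sweeps, as in the zig-zag picture.
rng = random.Random(0)
cfg = [0] * 20
for t in range(100):
    cfg = glauber_sweep(cfg, 2.0, t % 2, rng)
```

Note that on an even cycle the sites of one parity form an independent set, so updating them simultaneously preserves the hardcore constraint at every step.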
So because of this correspondence it’s a fact that Gibbs distributions for the hardcore model on
the line are in bijective correspondence with reversible stationary distributions for the
probabilistic cellular automaton with no D’s. And why reversible? Well, it is reversible because
these pictures are reversible. I won’t go into the proof, but one can prove that. And it
is rather well known, and not surprising, that the hardcore model on Z, because it’s a 1
dimensional graph, has a unique Gibbs distribution. So that means that my PCA has a unique
reversible stationary distribution. But, of course, that’s not enough for what I wanted. That’s not
enough to prove no draws because maybe there are some other non-reversible stationary
distributions. So unfortunately that doesn’t help.
However, using different methods, kind of new methods, we did manage to prove what we
wanted on the square lattice. No matter what P there is a unique stationary distribution and there
are no draws. And moreover if you are interested in the game you can even compute the
probability that the origin is in N position and it’s some function of P. And just to sort of show
how fragile all this is –. So how did we prove this, first of all? We proved it by a very different
method, by assigning a rather subtle weight function to configurations and proving that this weight
function is monotone. And it is not so obvious how to come up with that weight function.
And just to show sort of how fragile this is, there is another game called the target game where
you win if you ever move to a closed site, for which we can still prove that there are no draws,
but we cannot compute the probability of a first player win like this and it’s really a fundamental
obstacle, it’s not just that we didn’t work hard enough. And also for the misere version of the game it
is unknown whether draws occur. So this technique doesn’t seem to work. So why did I
bother telling you all that business about the hardcore model if it’s no use? Well, you can still try
to use it in the other direction, in other settings: if there were multiple hardcore Gibbs measures
then you would have draws.
And one place you can try to use that is in higher dimensions. So here is a case where it works.
So take a D-dimensional lattice, so not Z^d, but the even sub-lattice of Z^d, the vertices
whose coordinates have even sum. And consider the game where the moves are: you have to move
up in the vertical direction, and you can move in one other coordinate anywhere you want.
anywhere you want. That’s a graph, a directed graph and this corresponds to a hardcore model
on Z^(d-1), one dimension lower, by a similar correspondence to the one I showed you.
And this, the hardcore model, is known to have multiple Gibbs measures in dimensions 2 or
more, with this d minus 1 being the 2 or more. So that means in dimensions 3 or more this
game does have draws. For P sufficiently small the probability of a draw is positive.
Well you might ask: What about the usual lattice, Z^d? If I want to consider 3 dimensions
then maybe the most natural directed graph is just Z^d and you are allowed to move north, east
or up. You are allowed to increase any coordinate by 1. That’s the directed graph and those are
the allowed moves. Well here this hardcore correspondence really breaks down. And again, it’s
not just because we were too lazy; there is a fundamental problem. So basically the kind of thing
you would want to do is: if you want Z3, you could consider 3 successive layers of Z3 and sort of
imagine how they connect to each other and you get a triangular lattice, but then what you want
to do is there are these 3 classes of vertices that they naturally fall into and you want to consider
hardcore dynamics where you update each of these sets of vertices in turn.
But then, for example, you update the green vertices conditional only on the blue vertices, and
that's no good because they are adjacent to the red vertices as well, so it's just not the
hardcore model. So it really doesn't work there, but we conjecture nevertheless. Again: in 2
dimensions you do not have draws, that is a theorem, and in 3 dimensions we conjecture that you
do, for P sufficiently small.
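To make the backward induction behind these directed games concrete, here is a minimal sketch (a toy of my own, not code from the talk) that classifies positions of the directed game on a finite chunk of Z^2 as win, lose, or draw for the player to move, with positions off the board declared draws. The board size, density p, and random seed are illustrative assumptions.

```python
import random

# Solve the directed game on a finite n x n box of Z^2 by backward induction.
# From (x, y) you may increase either coordinate by 1, only open sites may be
# entered, and a player who cannot move loses. Off-board positions are draws.

WIN, LOSE, DRAW = "win", "lose", "draw"

def outcomes(n, closed):
    """Outcome for the player to move, for every position in the box."""
    out = {}
    # sweep anti-diagonals from the far corner, so successors are solved first
    for s in range(2 * (n - 1), -1, -1):
        for x in range(max(0, s - n + 1), min(n, s + 1)):
            y = s - x
            results = []
            for w in ((x + 1, y), (x, y + 1)):
                if w[0] >= n or w[1] >= n:
                    results.append(DRAW)    # stepping off the board: a draw
                elif w not in closed:
                    results.append(out[w])  # opponent's outcome from w
            if not results:
                out[(x, y)] = LOSE          # no legal move: mover loses
            elif LOSE in results:
                out[(x, y)] = WIN           # can move opponent into a loss
            elif DRAW in results:
                out[(x, y)] = DRAW
            else:
                out[(x, y)] = LOSE          # every move hands opponent a win
    return out

# Illustrative random configuration: each site closed with probability p.
random.seed(0)
n, p = 8, 0.3
closed = {(x, y) for x in range(n) for y in range(n) if random.random() < p}
closed.discard((0, 0))
print(outcomes(n, closed)[(0, 0)])  # win, lose, or draw at the origin
```

With no closed sites at all, every position can reach the boundary, so everything is a draw, mirroring the intuition that draws come from escaping to infinity.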
All right. So finally the third example, which as I said is in some ways the most interesting,
but also the hardest. Let's consider an undirected lattice, and again I will make it random by
making each site independently closed with probability P: toss a coin for every vertex. Now, the
normal game is very boring on an undirected graph. The normal game being: there is a single
token on some vertex, when it's your turn you move it along an edge, and you lose if you can't
move. Most of the time no one is going to lose, because anyone who feels they are in a losing
position can just, I don't know why I drew a cycle like this, make the token shuffle backwards
and forwards between 2 adjacent vertices. So it is not a very interesting game.
So here is a way we can make it more interesting. It's a game that we call "trap". Alice and Bob
take turns to move a single token located on some vertex. You move it along an edge, you may
move to any open site provided that site has never been visited before, and you are not allowed
to move to a closed site. Again, if you can't move you lose. So it is kind of a fun game: you
are trying to trap your opponent in the path where you have been already, but using the closed
sites as well.
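To make the rules concrete, here is a minimal sketch of trap on a tiny grid (a toy solver of my own, not the talk's code); the board size and closed set below are illustrative assumptions.

```python
# Trap on a small n x n grid: players alternate moving the token to an open,
# never-visited neighbour, and whoever cannot move loses.

def neighbours(v, n):
    x, y = v
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < n and 0 <= y + dy < n:
            yield (x + dx, y + dy)

def mover_wins(v, visited, closed, n):
    """True if the player to move from v can force a win at trap."""
    for w in neighbours(v, n):
        if w not in closed and w not in visited:
            if not mover_wins(w, visited | {w}, closed, n):
                return True   # this move leaves the opponent losing
    return False              # no legal move, or every move lets the opponent win

# Example: 3 x 3 board with the centre closed. The open sites form an
# 8-cycle, so the play is forced: 7 moves happen and the second player
# is the one left stuck, so the first player wins.
closed = {(1, 1)}
start = (0, 0)
print(mover_wins(start, frozenset({start}), closed, 3))  # True
```

The exhaustive recursion is exponential in the number of open sites, so this is only for toy boards; the simulations shown in the talk presumably use something far more efficient.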
This is an interesting game, and since we are concentrating on Z^D, and Z^D is bipartite, I am
going to rename the players: one player always moves to even sites, and I will call that player
Eve; the other player is Odin. So here are some similar simulations. This is of course not Z^2
but a square, and as before the good thing to do is declare the boundary to be a draw: we play on
a square board, but if the token ever gets outside the boundary we just call it a draw. The sites
outlined in black are closed; white marks starting vertices for which the game is a draw with
optimal play; red is Odin wins and blue is Eve wins.
Okay, so you see some very interesting things. If there are not many closed sites then basically
it's all draws, which is not very surprising because the closed sites aren't enough for anyone to
trap the other person. You increase P a little bit and there are some little regions where maybe
Odin can force a win locally, but globally it's mostly draws. Then P gets a little bit bigger and
suddenly there's a huge region where Eve can force a win from anywhere inside, and there are
these interesting checkerboard regions as well that are half blue and half white, and here's a
little region that favors Odin. Then you make P a little bit bigger and there are sort of more of
these smaller regions. So that's a square of size 50, I think. Here is a bigger square of size
200.
So there is a big difference here: the P equals .1 picture was mostly draws before, and here it's
not. If you take a bigger square, suddenly there are these regions where one or another player
can win again, but they are very big, and it seems you somehow need this whole region; the
previous square wasn't big enough to contain one. So the picture I have in mind from these
simulations is the following: for each P there is some typical size of region within which one
player can force a win, and this size diverges as P gets smaller.
And if your square isn't big enough to contain one of these regions then it's mostly draws.
That's sort of the picture it seems to be showing, because here there are big regions, and they
get grainier as P gets bigger. And furthermore it kind of appears that, if you take the square
big enough, oh, what happened? There we go. If you take the square big enough, maybe even with P
equals .05 someone wins and you don't get so many draws.
So it's hard to tell, but you might guess that the critical probability is 0: on the whole
infinite lattice Z^2, you don't have any draws. And there is a lot more you can say looking at
the pictures. It seems that these red and blue regions abut each other, so there are no corridors
of draws in between. And again, you have these interesting checkerboard regions. You probably
can't see it, but that region is checkerboarded red and blue for a while, so the first player
wins from there. It is very, very interesting. We can't prove any of that, but a reasonable
conjecture is that on Z^2, maybe for all P, you have no draws. And again we don't really have
much to back this up, but if I had to guess I would say in 3 or more dimensions you have a phase
transition.
In 3 or more dimensions, if I had to guess, I would say perhaps you do have a [indiscernible],
you do have draws for P small enough. Okay, we can't prove any of that, so we backed off to see
if there was something we could prove. Yeah?
>>: [inaudible].
>> Alexander Holroyd: Right. I mean I just don’t know, but by analogy with the directed case I
would say.
>>: [inaudible].
>> Alexander Holroyd: Yeah, I just don’t know. But, I think these are very interesting questions
and it’s obviously very hard.
Okay, so how can we save face and prove something? Because Z^D is bipartite, you can give one
player a big advantage: take the odd and even sites of Z^D and assign them different
probabilities of being closed. The extreme case is that odd vertices are closed with some
probability P and no even vertex is closed at all; all even vertices are open. This gives a
massive advantage to Eve, because Eve moves to even vertices and she never has a problem there,
they are all open. So it definitely gives Eve an advantage, but you can ask how big, trying to
make it quantitative.
So here is kind of how this game tends to go. Suppose Odin moves to this site; then Eve is just
going to move there, and she has won immediately, because Odin has no moves. So that means that
site there is effectively closed: it is forbidden to Odin, and if he ever moves there he is dead.
And you can see where this is going; you can iterate the argument. Anytime you have a site
[indiscernible] with 3 sites that are closed, or effectively closed, like this, you can make the
fourth one effectively closed. So that means that one is effectively closed as well, and
therefore that one, and so on. This model, on a rotated version of the lattice, is a variant of
bootstrap percolation called "modified bootstrap percolation".
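The filling rule itself is easy to iterate. Here is a toy sketch of my own (the actual model in the talk lives on a rotated lattice, so this square-lattice version only illustrates the rule, with an assumed seed):

```python
from itertools import product

# Whenever three corners of a unit square are (effectively) closed, the
# fourth becomes effectively closed too; iterate to a fixed point on a
# finite n x n box.

def fill(occupied, n):
    occupied = set(occupied)
    changed = True
    while changed:
        changed = False
        for x, y in product(range(n - 1), repeat=2):
            square = [(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)]
            missing = [v for v in square if v not in occupied]
            if len(missing) == 1:          # three corners occupied
                occupied.add(missing[0])   # so fill in the fourth
                changed = True
    return occupied

# Seeding one full row and one full column is enough to fill the whole box.
n = 4
seed = {(i, 0) for i in range(n)} | {(0, j) for j in range(n)}
print(len(fill(seed, n)) == n * n)  # True
```

A sparse random seed on a small box will usually not fill it; the surprising theorem quoted in the talk is that on the infinite lattice Z^2, any positive density fills everything.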
So the model is: whenever you see 3 sites in a configuration like that, you fill in the fourth
one in the square. And a standard but rather surprising result is that on Z^2, for every P, all
the sites end up blue: if you start with random sites, no matter how small P is, they all end up
filled. So you can deduce that Eve wins from every initial vertex. And this extends to various
D-dimensional lattices. It could likely be extended to Z^D, but it is a little bit harder because
the lattice is not really the standard one.
Okay, so let's try to make it more interesting again. This made it too easy for Eve, so let's
give Odin some advantage back. Let's play it on a finite square, and we will choose the parity of
the square so as to give the boundary to Odin: we choose the parity so that the external boundary
vertices are ones that Odin moves to. So if Odin moves to here, he has won. Basically, Eve tries
to win by using the closed vertices and Odin tries to win by forcing Eve to the boundary; that is
roughly what happens. So you can play around with this.
So let's play it on a size-N diamond. Now we have 2 parameters: N, the size of the square, and P,
the density of closed sites. And this bootstrap argument, I mean bootstrap percolation, gives you
bounds involving finite squares. It tells you that Eve wins with high probability if N is big
enough compared with P, namely bigger than exponential of a constant over P, which is pretty big
of course. But that's not tight; the bootstrap argument is not tight for the game. Here is a
place where Eve can win even though the argument of effectively closed sites from 2 slides ago
doesn't tell you that. You can work it out in this example; I won't do it right now, but
certainly this vertex doesn't become occupied in the bootstrap model, and yet Eve wins starting
from here. That is one way to see it.
So the correct scaling is actually not this exponential one that you get from bootstrap
percolation, but N approximately a constant over P. More precisely, we know that if you do this
on a diamond of size N, and N is large compared with 1 over P, then Eve wins almost everywhere.
Yes?
>>: [inaudible].
>> Alexander Holroyd: Well, it comes up on the screen here; I haven't got long to go anyway. So
if N is big with respect to 1 over P then Eve wins, and if N is small then Odin wins. It's not
quite for every site, but for most of them, and there are some logs that we couldn't get rid of,
so it's not quite a tight result. And you can deduce things about what happens if odd sites are
closed with probability P and even sites with probability Q as well, though not very tight
results.
So where all this comes from is a little bit of magic. This game of trap, on any graph actually,
is related to maximum cardinality matching. Here's a fact: on any finite connected graph, the
first player wins starting from a vertex V if, and only if, V is included in every maximum
matching of the graph.
So it's all about maximum matchings, and this is easy enough to prove by induction: if the
condition holds you find a good first move, and if it doesn't you find that all first moves are
bad. There is another interpretation of all these questions, which is at least as interesting as
the game one: they are questions about the sensitivity of maximum matchings to boundary
conditions. You take a big chunk of your graph, cut it off somewhere, and ask what a maximum
matching looks like in the middle, near the origin. Does it depend on exactly how you cut it off
or not? That's what draws are all about.
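That fact is easy to test on toy graphs using a standard observation: V lies in every maximum matching if and only if deleting V strictly decreases the maximum matching size. Here is a brute-force sketch of my own (exponential in the number of edges, so toy examples only):

```python
from itertools import combinations

# The first player wins trap from v iff v is covered by every maximum
# matching, i.e. iff deleting v strictly shrinks the maximum matching.

def max_matching_size(edges):
    """Largest set of pairwise disjoint edges, by brute force."""
    for k in range(len(edges), 0, -1):
        for subset in combinations(edges, k):
            endpoints = [v for e in subset for v in e]
            if len(endpoints) == len(set(endpoints)):  # disjoint edges
                return k
    return 0

def first_player_wins(edges, v):
    without_v = [e for e in edges if v not in e]
    return max_matching_size(without_v) < max_matching_size(edges)

# Path a-b-c: every maximum matching uses exactly one of the two edges, so
# it always covers b but can avoid a. The first player wins trap from b
# (move to either end and the opponent is stuck) but not from a.
edges = [("a", "b"), ("b", "c")]
print(first_player_wins(edges, "b"), first_player_wins(edges, "a"))  # True False
```

For large grids one would of course use an efficient matching algorithm instead; the point here is only the criterion itself.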
So yeah, I guess I should wrap up, but the way we prove that theorem is by finding matchings and
finding ways to modify them, etc. So there are a bunch of open questions. As I said, the most
interesting one: play trap on an undirected percolation cluster, on Z^2 or Z^3; are there any
draws? We don't know. And the directed game on Z^3: remember there is this hardcore
correspondence, but it doesn't work on Z^3, so we don't know what happens there.
One thing that I didn't really talk about is whether you have phase transitions in any of these
cases. Take, for instance, this 3-dimensional directed even lattice that I showed you towards the
end. We know that you have draws if P is small enough and not if P is big enough, but we don't
know whether the probability of a draw is monotone in P, so we don't know whether there is just a
single phase transition point.
So you can try to improve these bounds that differ by a logarithm, or you can ask what actually
happens in between. And by the way, if you want to understand the first question, what happens on
the undirected lattice, then it wasn't just a face-saving exercise to look at this finite
diamond: you probably want to understand how the game works on finite regions first, because
supposedly there are these red and blue regions that abut each other, so you had better
understand how they work. So what happens within this window? Well, there are many, many
questions. You can look at the trap game: can you compute the winning probability back on Z? And
there are misère versions, non-bipartite graphs, etc., etc. So I will stop there.
[Applause]
>> Yuval Peres: Any questions?
>>: Did you somehow incorporate random turn games into [inaudible]?
>> Alexander Holroyd: Well I was wondering about that. It seems as if it’s a different question,
because you don’t know the randomness ahead of time. What we are doing is we are saying that
the randomness is all in the graph and then you know it. So it seems like it doesn’t, but maybe
there is some possible [indiscernible].
>>: So in these bounds, log 1 or [inaudible].
>> Alexander Holroyd: Yes, yes.
>>: So when you [inaudible].
>> Alexander Holroyd: These ones, right?
>>: I see, so on a technical level those show up [inaudible].
>> Alexander Holroyd: Yes, you might expect that. We really don't know. I wouldn't be surprised
if you could improve one or another bound, but it's perfectly possible that there is a real
interval in between as well where something different happens, like Eve wins in certain places
and Odin wins in other places. Yeah, it's an interesting question and I don't know. For pictures
like this, we explicitly construct matchings. I mean, for the one bound we explicitly construct
good matchings and then somehow modify them along alternating paths to produce different ones
that correspond to different vertices. There is probably some slack in our constructions; and the
other way, we somehow prove nonexistence.
>>: [inaudible].
>> Alexander Holroyd: Almost. Yeah, so we find a –.
>>: [inaudible].
>> Alexander Holroyd: Right, so basically on this side of the bound you want to prove that there
is a matching that matches all odd sites but leaves any particular even site you are interested
in unmatched. That's basically what you have to do. We start by constructing one matching and
then modify it along an alternating path to get the other ones we want. And then this one is the
other way around. But yeah, behind these there are fairly concrete constructions and it is
perfectly possible one could improve them.
>> Yuval Peres: If there are no other questions, let's thank Alexander again.
[Applause]