
>> Yuval Peres: Welcome, everyone. Good morning. And we're very happy to
have Alexander back here, and he will tell us about randomized broadcast.
>> Alexandre Stauffer: Thank you, Yuval. It's so nice to see so many familiar
faces here. The talk is slightly different from what I wrote in the abstract; it's a
more ambitious talk. This talk will have two acts but no intermission between
them. In the first act I'll talk about randomized broadcast, a paper I presented at
SODA, at a conference about a month ago.
And in the second act I'll talk about mobile geometric graphs, a model for moving
nodes. I will talk about some small extensions I have been doing on this model
since I worked on it here last summer, and in this part I will especially focus on
some open problems, very ambitious, speculative open problems; I reserve the
more concrete open problems for offline conversations.
So let's start with the randomized broadcast. The setup is very simple: we have
a graph, and there exists a node of the graph that contains a piece of
information. Say this one. And it wants to spread the information throughout the
nodes of the graph.
We're going to analyze an algorithm called the push algorithm. It occurs in
discrete time steps. At each time step, the node that contains the information
picks a neighbor uniformly at random and informs that neighbor; it pushes the
information to that neighbor. That's where the name comes from.
Then you just iterate this procedure, so the two informed nodes will each pick a
neighbor uniformly at random, and so on, but all these choices are made
independently, independent of one another and independent of the past. So we
can end up picking the same neighbor as in previous steps. You just continue
this procedure until all nodes have received the message.
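As a minimal sketch (in Python; the adjacency-list representation and the
complete-graph example are illustrative, not from the talk), the push dynamics
just described look like this:

    import random

    def push_broadcast(adj, source):
        # adj: dict mapping each node to the list of its neighbors
        informed = {source}
        rounds = 0
        while len(informed) < len(adj):
            # every informed node pushes to one uniformly random neighbor,
            # independently of everything else; repeated choices are allowed
            pushed = {random.choice(adj[v]) for v in informed}
            informed |= pushed
            rounds += 1
        return rounds

    # e.g. on the complete graph K_100, the result concentrates near log2(n) + ln(n)
    K100 = {v: [u for u in range(100) if u != v] for v in range(100)}
    print(push_broadcast(K100, source=0))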
This model has been analyzed for a long time, and it has been suggested for
applications like replicated databases: you have a database on many different
computers and have to guarantee the integrity of the data when there's some
update.
It also appears in distributed computing, for achieving consensus in a network or
simply broadcasting over a network. And if you look at it from a different
perspective, you realize that it's just a first passage percolation problem.
Each node is choosing edges. You can take each oriented edge, from here to
here, and label it with the time at which this node picks this edge. Then it
becomes a first passage percolation problem, and the broadcast time is just the
first time at which all nodes are reached, in the sense of first passage
percolation.
>>: But it's slightly dependent?
>> Alexandre Stauffer: No, no. You have -- you take the edges oriented.
>>: Time -- one outgoing edge recursive?
>> Alexandre Stauffer: Yes, because the times are like exponential times on
each edge. But it's still -- if you Poissonize this problem, then instead of each
vertex picking an edge at each step, each vertex rings at some rate and picks an
edge; you normalize the rates, and you can couple the two processes and get
the same result.
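A hedged reconstruction of that Poissonization, in symbols: give each vertex u a
rate-1 Poisson clock and, at each ring, a uniform choice among its d(u)
neighbors. By thinning, the pushes from u to a fixed neighbor v form a Poisson
process of rate 1/d(u), so the passage time of the oriented edge (u, v), measured
from the moment u is informed, satisfies

    T_{(u,v)} \sim \mathrm{Exp}\big(\text{rate } 1/d(u)\big), \text{ independently over oriented edges},

and the broadcast time becomes a first passage percolation distance from the
source s:

    R(G) \stackrel{d}{=} \max_{v \in V} \; \min_{\text{paths } \pi : s \to v} \; \sum_{e \in \pi} T_e.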
That's actually how we analyze this process. We call R(G) the running time of
this procedure, the time at which all nodes receive the information, and we want
bounds on R(G) that hold with probability going to one as the size of the graph
goes to infinity.
So first let me start with very simple lower bounds. The first lower bound is the
diameter; no one doubts the diameter is a lower bound, right? And the second
lower bound is log N. The reason is that each node informs at most one
neighbor at each step. This means that at each step the size of the informed set
at most doubles, and if it at most doubles at each step, you require log N steps
to inform N nodes. It's very simple. The process was first analyzed by Frieze
and Grimmett in '85, and their analysis was later refined by Pittel, who got a
precise expression for the runtime on complete graphs: it's log base 2 of N plus
the natural log of N.
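Spelled out, Pittel's form for the complete graph K_n is

    R(K_n) = \log_2 n + \ln n + O(1) \quad \text{with high probability}.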
>>: Log, that's very good.
>> Alexandre Stauffer: It's very precise. And then, three years later, Feige,
Peleg, Raghavan and Upfal analyzed this for different types of graphs, like
hypercubes and Erdős–Rényi random graphs, and they gave order log N upper
bounds that are tight against the lower bounds here.
And they especially analyzed this for general graphs. They show that --
>>: Random graphs, what do you mean? In what range?
>> Alexandre Stauffer: For p sufficiently large, so that at least a giant
component would not [inaudible].
>>: Is there any difference between that and the [inaudible] essentially nothing?
>> Alexandre Stauffer: No. Actually, later --
>>: If you bring it down, the execution.
>> Alexandre Stauffer: But later Fountoulakis and Huber [phonetic] actually
showed that for random graphs that are sufficiently dense you get the exact
same asymptotics. The exact same answer.
And for any graph, we have a general lower bound of N log N. And this lower
bound is achieved by this graph.
>>: The upper.
>> Alexandre Stauffer: The upper bound, yes. It's easy to see: if you start the
message at the central node, then it's just a coupon collector problem -- the time
at which that node chooses all of its neighbors -- so it takes N log N time. And
there's another upper bound: the maximum degree delta times the diameter of
the graph, plus some log N in case the diameter is too small. It's simple to see
as well: pick two nodes, take the shortest path between them, and compute the
time at which the message goes through this path. It's just a sum of geometric
random variables whose means are at most delta, at most the maximum degree.
Then, as long as the diameter is large enough, the sum is concentrated, and
otherwise the log N term covers it; either way you get the upper bound.
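Writing out the two bounds just sketched (the precise form of the second is the
standard statement of this path argument, not verbatim from the talk):

    \mathbb{E}[R(\mathrm{star}_n)] = (n-1)\,H_{n-1} \approx n \ln n \quad \text{(coupon collector at the center)},

    R(G) = O\big(\Delta \cdot (\mathrm{diam}(G) + \log n)\big) \quad \text{w.h.p., with } \Delta \text{ the maximum degree}.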
Now, the paper that's most relevant to our result is a paper of Mosk-Aoyama
and Shah from 2006, where they studied this for expanders. They proved that
for any almost regular graph -- where almost regular means the maximum and
minimum degrees differ by a constant factor -- the runtime is upper bounded by
one over the conductance, which I'll define soon, times log N, in general. This
means that for almost regular graphs that are expanders, which have constant
conductance, this bound is tight.
So let me just -- okay, just quickly review what conductance is. You take a
graph and you take a set of nodes of the graph. The conductance of the set is
the number of edges crossing the boundary of the set divided by the sum of the
degrees of the nodes in the set. So it's the edges crossing the boundary over
the number of edge endpoints touching nodes inside the set. And the
conductance of the graph is just the minimum, over all sets with at most half of
the nodes, of the conductance of the set -- the set with minimum conductance.
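In symbols, the definition just given:

    \Phi(S) = \frac{|E(S, S^c)|}{\sum_{v \in S} \deg(v)}, \qquad \Phi(G) = \min_{0 < |S| \le n/2} \Phi(S).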
And in this talk I'm going to look at a different notion of expansion, vertex
expansion, which is very similar: you take a set, and instead of counting edges
you count vertices. You count the outside neighbors of the set and divide by the
number of nodes in the set.
For the graph this is always a constant between 0 and 1, and it measures how
sets grow in terms of vertices instead of edges.
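In symbols, with \partial S the set of vertices outside S that have a neighbor in S:

    \alpha(S) = \frac{|\partial S|}{|S|}, \qquad \alpha(G) = \min_{0 < |S| \le n/2} \alpha(S).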
>>: [inaudible].
>> Alexandre Stauffer: It is, because if you take a set with half of the nodes, its
vertex expansion is at most one, and the vertex expansion of the graph, being
the minimum over sets, can only be smaller.
>>: But for a second --
>> Alexandre Stauffer: For the whole graph it's between 0 and 1, but for a
single set it can be much larger than 1.
And for regular graphs, we can easily relate the two: the vertex expansion is
larger than the conductance and at most D times the conductance. Very easy to
see. So in this talk I'm going to look at regular graphs, and at the case where D
is not constant. The general question is: is there any difference between vertex
expanders and edge expanders, the natural expanders in the conductance
sense, for large D on regular graphs?
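The comparison just stated, spelled out for a D-regular graph: every vertex of
\partial S absorbs between 1 and D of the crossing edges, so

    |\partial S| \le |E(S, S^c)| \le D\,|\partial S| \quad\Longrightarrow\quad \Phi(S) \le \alpha(S) \le D\,\Phi(S),

using \Phi(S) = |E(S, S^c)|/(D|S|) and \alpha(S) = |\partial S|/|S|.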
So the statement looks similar: we prove that the runtime of the push algorithm
for regular graphs with vertex expansion alpha is bounded by an expression
similar to the one for the conductance -- 1 over the measure of expansion, 1
over alpha, times log to the 5 of N. That's an upper bound. What we get is an
exponent of 5 there -- embarrassing, but it's there.
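In symbols, the upper bound just stated:

    R(G) = O\big(\alpha^{-1} \log^5 n\big) \quad \text{w.h.p., for any regular } G \text{ with vertex expansion } \alpha.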
Before I address the 5, let me just say that this result cannot be extended to
nonregular graphs. If the graph is not regular, then all sorts of bad things can
happen. One example is this one: you have a clique, a complete graph, and you
attach an edge to one vertex at the end. This graph is a vertex expander -- it
has constant vertex expansion -- but it takes time of order N for this node of the
clique to choose this edge, just because it has order N neighbors in the clique.
So even for a vertex expander you can get linear time for the push algorithm,
while for regular graphs the theorem says it should be at most log to the 5.
But let's address the 5. Can we reduce the log to the 5? Probably, yes; we
were not very careful -- we had no real goal in getting log to the 4 or 3. But the
question is: can we get all the way down to log N and get almost the same result
as for the conductance? And the answer to that is no.
So we construct a regular graph -- not only regular, the graph is actually vertex
transitive -- that has constant vertex expansion, and the runtime of push on this
graph is at least log squared. I'm not going to show it to you; it would take me
five minutes to describe the graph, but it's based on [inaudible] graphs: an
expander in a Cartesian product with another graph.
For the remainder of this part of the talk, I want to give you some ideas of how
to prove the upper bound, which is not hard to prove. The proof goes like this:
when you look at things from the right perspective, you just follow your nose and
you get a proof.
>>: But then can 5 be replaced by 2?
>> Alexandre Stauffer: Yeah, but the following your nose is the trouble.
>>: What?
>> Alexandre Stauffer: When you follow your nose, that's when you lose all the
log factors.
>>: Do you know which is tight?
>> Alexandre Stauffer: No, we don't know which is tight. There's still this gap.
>>: Would you have a guess?
>> Alexandre Stauffer: We have a brave guess. By the way, this is joint work
with Thomas Sauerwald, who is now at the Max Planck Institute.
>>: Who?
>> Alexandre Stauffer: Sauerwald. We have a guess that it should look like 1
over alpha times log N times log D. In our example D is polynomial in N, so this
is consistent with the log squared we get there. But that's a brave guess.
So let me tell you some ideas of the proof. The basic strategy is simple: at any
time there is a set of informed nodes, call it I sub T at time T. You just wait from
that time until this set gets twice as large, you compute how much time each
such wait takes, and then you iterate this procedure log N times.
In our proof we show that each waiting time is 1 over alpha times log to the 4,
and you get the extra log from the iteration, which gives the log to the 5. But the
problem is that computing this waiting time requires different strategies
depending on the size of the set of informed nodes. We have three stages. I'll
go very smoothly through the stages in the talk -- you may not even feel it -- but
in the paper, especially in the third stage, you cannot avoid it; it becomes huge.
That's where the log factors enter the game. It's not going to be clear in the talk
where they come from; it's just that we lose logs as we go through the analysis.
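The bookkeeping behind the iteration, spelled out:

    R(G) \;\le\; \underbrace{O\big(\alpha^{-1} \log^4 n\big)}_{\text{time per doubling}} \cdot \underbrace{\log_2 n}_{\text{doublings}} \;=\; O\big(\alpha^{-1} \log^5 n\big).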
So let's start with the first stage, when the set of informed nodes is very small.
What happens? The set of informed nodes is very small, say smaller than D --
say D over 2, or even smaller. There are few nodes inside, and each node has
degree D. So even if there's a clique on the set of informed nodes, there are still
sufficiently many edges going out of the set.
You can use this idea to show that initially the set of informed nodes increases
exponentially fast. In some sense there's a local expansion for small sets, just
because the graph is regular.
So after log D steps, you get a set of order D informed nodes. And we're going
to use this idea many times.
>>: We should be able to --
>> Alexandre Stauffer: Sorry.
>>: We should be [inaudible].
>> Alexandre Stauffer: Does that make sense?
>>: Most of the time you need much less.
>> Alexandre Stauffer: No.
>>: No?
>> Alexandre Stauffer: No, at most by half.
>>: [inaudible].
>> Alexandre Stauffer: The constant is not very important. Actually, we also
need upper bounds -- we use upper and lower bounds; we need the constant to
be comparable to 1. This notion of local expansion is going to be very useful,
because it allows us to define, with this funny name, a friend of the set. If you
have a set X and a node S, this node will, with constant probability, do the local
dissemination, the local expansion I was just talking about; if at least half of this
local dissemination goes inside the set X, we call S a friend of X.
This notion is actually deterministic: I'm saying that, with some constant
probability, half of the node's local dissemination reaches the set X -- so being a
friend is a deterministic property of the node and the set, a deterministic
definition. The nice thing is that if S is not a friend of X -- if S is an enemy of X,
in some sense -- then S is a friend of the complement, just by definition,
because I talk about half: half must land either in X or in the complement. So
either it's a friend of the set or a friend of the others -- not all the others, but
some of the others.
That's the idea. So initially this local expansion along the edges grows the size
of the informed set quickly. But when the size of the informed set gets larger, we
can't apply this idea anymore: now we have too many nodes, and we may have
consumed all the edges inside the set. So then you have to start looking at the
neighbors.
>>: Spreading at least as fast as infinite D[inaudible].
>> Alexandre Stauffer: But D over 2 regular graphs expanders.
>>: Infinite.
>> Alexandre Stauffer: Oh, infinite.
>>: Not graph. I'm so sorry. The beta -- that's trivially lower than what you want.
No? Not here.
>>: In the very beginning.
>>: First stage. So you're saying that it's done.
>> Alexandre Stauffer: But then the size gets too large.
>>: No, just for the first lemma. What I would say it's at least there's a process
that data distance is I think what you have to do too [phonetic].
>> Alexandre Stauffer: So now let's look at the informed set and at the
neighbors of the informed set, and let's look at one particular neighbor, V, and
assume V is a friend of the informed set. That means that if it does its local
dissemination, it sends the message inside the informed set. But now we use
again that the graph is regular: any path from V, from this neighbor here, can be
reversed and occurs with the same probability. So the probability that V informs
a given node using a path is the same as the probability that the node informs V
along the reversed path. And since V is a friend, there are many such nodes
inside the set, so V will get informed with constant probability.
Then a constant fraction of the neighbors will get informed. And at this stage
the set is large but not too large, so this fraction of the neighbors getting
informed is sufficient for the analysis: the set grows by some constant times
alpha times the size of the set.
The problem comes when, for example, there are no friends -- the informed set
has no friend. Then it may take a long time for a given node: look at this node; it
may take time D for this node to get the message, because this may be the only
edge that links it to the set of informed nodes.
Then what happens? Okay, you wait until this first neighbor gets the message.
But once this guy gets the message, he's a friend of the complement. So once it
gets the message, it informs many people after just a few more steps.
And that is the basic idea; these are the two interplays in the analysis. Either
you have many friends and you just spread the message quickly, or you wait
some time to cross a kind of bottleneck -- but once you cross it, you spread the
message quickly after that.
So that's the main idea. And then there's a take-home message -- every proof
should have a take-home message, and this is one of those proofs. The
take-home message is: if you want to spread some information, don't tell your
friends; tell your enemies. Right? Because your friends will get it anyway, but
your enemies will spread it out. That's the take-home message.
The problems start when the size of the set gets too large, because when the
size of the informed set becomes very large, looking at only one neighbor is not
enough. This neighbor will indeed inform many people, but these many people
are still not as many as the informed set, so the informed set is not going to
grow by a substantial amount.
In this case -- this is not important -- what you have to do is wait some more
time, until, say, half of the neighbors get the message, and when this happens
you look at the local dissemination from all of them: all of them do the local
dissemination. The problem is that these local disseminations may collide, and
that's where you lose all the logs. When local disseminations collide -- for
example, look at V -- V is one of the nodes that receives the message from more
than one dissemination.
Then not only do you have to count how many times it receives the message,
but -- let me put this a different way. First you consider these local
disseminations as being independent; think of them as just being carried out
independently.
What happens when V receives the message, say, from this guy first and then
from this guy? Once it has received the message from these two nodes, it
should start choosing two edges per step: since the disseminations are run
independently, it should send one push according to this dissemination and one
according to the other.
So when we analyze them independently, not only do we have to count how
many of these propagations a node is in, but there's also the problem of
conflicts: the nodes are making too many choices. The way we get around this
is to first analyze the disseminations independently and count how many of them
a node can be in -- we bound this uniformly over all nodes -- and then we do a
coupling between the independent analysis and the real process, where V is not
allowed to make more than one choice of neighbor per step.
And then we lose more logs, and we get the log to the 5. So just one quick
remark before -- this is the last slide of this sketch of the proof. After doing all of
these calculations -- when I say follow your nose, I mean standard arguments,
Chernoff bounds all over, and union bounds -- that's where we lose some log
factors. But even if we were to do the analysis very, very carefully, we would
probably only get down to log to the 3. Not all the way.
But that's all I want to say about the proof. Before I go to the second act of the
talk, let me just quickly mention a straightforward corollary of our result for the
cover time -- there are straightforward corollaries for many things, random
subgraphs and so on, but let me mention the one for cover time. The main point
is that this process is only picking edges, so it's similar to many other processes
-- it's similar to percolation, for example -- and it has relations to other types of
processes.
And here, for example, Chandra et al., in the paper relating the cover time to
the effective resistance of a graph, also show that for any graph with vertex
expansion alpha, the cover time is upper bounded by 1 over alpha squared
times N log N.
And then they say: wait, why squared? It's even more embarrassing to see a
square there than a 5 in the log. And they asked whether the quadratic
dependence on the vertex expansion was necessary or not. It actually follows
directly from our result, using a relation -- not even a tight one -- between the
cover time and the performance of the push algorithm, due to Elsässer and
Sauerwald [phonetic], that for any regular graph G with vertex expansion alpha,
the cover time is 1 over alpha times N times some weird log factor -- log to the 6.
So even though there is an extra log here, which should be improvable, we know
the dependence on alpha is linear, genuinely linear. Like this.
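Side by side, the two cover-time bounds as quoted (the polylog exponent 6 is as
transcribed, so treat it with care):

    t_{\mathrm{cov}}(G) = O\big(\alpha^{-2}\, n \log n\big) \ \text{(Chandra et al.)} \qquad\text{vs.}\qquad t_{\mathrm{cov}}(G) = O\big(\alpha^{-1}\, n \log^6 n\big) \ \text{(via the push bound)}.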
>>: But you don't know if [inaudible] minimum of these two.
>> Alexandre Stauffer: No, no. That's yet to be done. So a very speculative,
ambitious question is whether there is any fundamental relation between the
performance of the push algorithm and other quantities, like the cover time or
random subgraph generation and so on.
But that's all I want to say for this part of the talk. Are there any questions? No?
Okay. So let's move on, without intermission.
In the second part of the talk, I want to talk about mobile geometric graphs. Let
me define them right away. Many of you have seen me talk about mobile
graphs, so you have seen these pictures. We want to construct a random graph.
We take a Poisson point process with intensity lambda over the whole plane;
those will be the nodes of the graph. And between any pair of nodes that are
within distance R of each other you put an edge. The way one should draw the
edges: I'm going to put these red balls of radius R over 2 centered at each node,
so nodes are adjacent when their balls intersect, like in this case, and they are
not adjacent when the balls do not intersect. So that's the starting point. It's a
very well known model, called the random geometric graph, or the Boolean
model, and other names.
>>: They all attribute it to [inaudible], the ones you mentioned earlier, and
almost no one ever mentions Gilbert.
>> Alexandre Stauffer: It was first defined by --
>>: Gilbert. And then for decades nothing happened.
>> Alexandre Stauffer: Yeah. Mentioned back in the '60s.
>>: Yes. Exactly the same. A little bit different effort.
>> Alexandre Stauffer: Yeah. And then we take this model at time 0, and from
that time on we let the nodes move in space according to independent Brownian
motions. So the picture you should have in your mind is these red balls moving
around by Brownian motion, and whenever two balls intersect -- while they are
intersecting -- the two nodes are adjacent in the graph.
So we have a graph for each time, in continuous time. Oh, I do have -- I am not
prepared to show it, but I do have some simulations of this.
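A minimal Python sketch of the model just described: sample a Poisson number
of points in a finite box, connect pairs within distance r, and evolve the positions
by independent Brownian increments (the parameter names and the box
truncation are illustrative assumptions, since the talk works on the whole plane):

    import numpy as np

    def mobile_geometric_graph(lam=1.0, r=1.0, box=20.0, dt=0.1, steps=50):
        n = np.random.poisson(lam * box * box)      # Poisson(intensity * area) points
        pos = np.random.uniform(0.0, box, (n, 2))   # uniform locations given the count
        snapshots = []
        for _ in range(steps):
            # adjacency now: balls of radius r/2 intersect iff centers are within r
            diff = pos[:, None, :] - pos[None, :, :]
            adj = (diff ** 2).sum(axis=-1) <= r * r
            np.fill_diagonal(adj, False)
            snapshots.append(adj)
            pos = pos + np.sqrt(dt) * np.random.randn(n, 2)  # Brownian increments
        return snapshots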
And some cute facts. First, it's stationary: the Brownian motion is a measure
preserving transformation of the Poisson point process, or, in other words, if you
stop the process at any time T, the distribution is the same as the one at time 0.
But, of course, there are dependencies between times. And thankfully there are
dependencies, because this allows us to write papers about it; if there were no
dependencies it would be too simple.
And something that we know is that there exists a critical intensity lambda C --
this was proved by van den Berg, Meester and White in '97 -- such that if
lambda is smaller than lambda C, then at all times all connected components
are finite; there are no exceptional times.
And this is the picture I'm showing here: if I show you the largest component in
the box, it would be this green component here, which is relatively small
compared to the box I'm showing. But if lambda is larger than lambda C, then
there are more nodes in the graph, and there will be, at all times, an infinite
component, which looks more like this: a component going from top to bottom
and left to right, spanning the whole region.
So back in the summer, with Yuval Peres, Alistair Sinclair, Perla Sousi -- and
the last S is me -- we studied large deviation results for some stopping times:
the probability that a stopping time is larger than T. We did this for the stopping
time being the detection time, the first time some node of the graph is able to
detect the presence of a target, which can be either fixed or moving.
And the percolation time, the first time at which a given node belongs to the
infinite component. And also the coverage time and the broadcast time.
And related to broadcast, now I'm going to show you, I think, the only concrete
open problem of this talk. It's the following. Assume the following situation: the
nodes are moving in continuous time, but they are able to send a message to
their immediate neighbors in discrete time. So a node that has the message at
time 0 looks at its current neighbors and sends the message to all of them, and
it does this at each discrete time step.
This process, we know, was analyzed in a similar model by Harircast [phonetic]
and [inaudible] earlier this century -- not sure if that's 2004 or '03 -- and they
showed that the size of the informed set grows linearly with time.
Now, the question is: okay, but what if the nodes have an alarm, a Poisson
alarm? When they receive the message for the first time they set their alarm,
and at each discrete time they spread the message to their current neighbors;
but after the alarm rings they stop, and never participate in the dissemination
again.
This can be a model for the scenario where the nodes are moving sensors in a
network and want to save power, so after some steps they just stop sending the
message. Or the case where it's the spread of an infection -- that's the SIR
case -- and then it just dies out.
And the question is: is there a value of mu, sufficiently large, for which this
process survives or not?
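A hedged simulation sketch of this open problem's dynamics (here the alarm is
modeled as an exponential clock with mean mu, set on first receipt; that
parametrization is my assumption, not stated in the talk):

    import numpy as np

    def alarmed_broadcast(lam=1.0, r=1.0, box=20.0, mu=5.0, steps=100):
        n = max(np.random.poisson(lam * box * box), 1)
        pos = np.random.uniform(0.0, box, (n, 2))
        informed = np.zeros(n, dtype=bool)
        active = np.zeros(n, dtype=bool)
        alarm = np.full(n, np.inf)                 # time at which each node retires
        informed[0] = active[0] = True
        alarm[0] = np.random.exponential(mu)       # alarm set on first receipt
        for t in range(steps):
            pos = pos + np.random.randn(n, 2)      # Brownian step between rounds
            d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(axis=-1)
            # at each discrete time, active nodes inform all current neighbors
            newly = (d2[active] <= r * r).any(axis=0) & ~informed
            informed |= newly
            active |= newly
            alarm[newly] = t + np.random.exponential(mu, newly.sum())
            active &= alarm > t                    # rung alarms retire for good
        return informed.sum(), active.sum()

    # the question: for which mu does the informed set keep growing?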
>>: Not? It's more --
>> Alexandre Stauffer: Rates, yes.
>>: Discrete time -- is the discrete time important, or just --
>> Alexandre Stauffer: To simplify, because otherwise I would have two
Poisson clocks and my explanation would be weird.
>>: But you could just infect whatever you touch, instantaneously, or would that
be --
>> Alexandre Stauffer: No, no, no, because then you would touch the infinite
component. And that -- that would be a difference with the [inaudible] case.
Good point.
But that's my only concrete problem for the talk. And then let me just talk about
a technical thing, some developments on this technical thing, and an open
problem on this technical thing.
So the technical thing is a coupling argument that was very handy in our
analysis. It first appeared in a joint work of myself, Rogers and Sinclair, and
then in the work we did here in the summer. Just to quickly remind you: there
are nodes moving around. You look at a finite box and you tessellate it like this,
and it turns out that at some time all the cells have sufficiently many nodes.
For example, here in the picture all the cells have two nodes -- but pick a
number, and say the cells have that many nodes. Accordingly we say the cells
are dense, and then we can show that we can add to the picture an independent
Poisson process with a different intensity -- say with a different color -- and
couple the evolution of these nodes with the independent Poisson process, so
that after some steps, depending on the size of the cells, they couple in a way
that --
>>: You choose a grid of any size, or --
>> Alexandre Stauffer: You can fix the side of the grid.
>>: But the number of cells -- N by N, or what have you got here? You can't
have it infinite.
>> Alexandre Stauffer: No, you cannot have it infinite.
>>: So what's your choice here?
>> Alexandre Stauffer: It can be -- it can be anything very large. It cannot be
exponentially larger than T, but it can be polynomially larger than T.
>>: Just asking what you were choosing.
>> Alexandre Stauffer: It depends on the application. Usually the size depends
on the time for which you will be observing the process, so that arguing about
the finite case is the same as arguing about the infinite case -- that's essentially
it. For example, if you run the process --
>>: Your choice, your choice.
>> Alexandre Stauffer: Because I applied this to different problems; it's up to
your flavor. But, for example, just to give a concrete example, we used a box of
side T squared in our analysis of the percolation time. Size T squared -- just to
give a concrete example.
So then you can do this coupling so that, after you skip some steps, if you look
at a subset of this region -- a box of smaller size -- the nodes there will contain
an independent Poisson process. So this allows you to forget the past: you
recover independence by skipping some steps.
Okay. That was very handy, and there are two simple extensions. The first one
is the simplest; it's ongoing work with Alistair Sinclair. We need the same thing
to happen while only allowing the nodes to move within a ball around their
starting positions: a node cannot escape the ball around it.
The same thing works -- it's not as easy as you might think, because the
coupling has some details to be sorted out. And the reason we need this is that
we sometimes need to analyze this process in different areas of space at the
same time, as in multi-scale arguments. When you do this, you want the
coupling to be applied in the different areas without interfering with one another;
you should be able to apply the couplings independently. So that's why you
force the nodes not to move too much, so they can't go to another region and
come back: they have to stay around their positions. And we are able to get the
same results assuming the nodes cannot move too much.
And the second is a joint work with Tajameni [phonetic]; it's also an ongoing
work. And we are able to give the other side of the coin, which says that we are
able to couple the nodes so that the nodes are contained in an independent
Poisson process, assuming that all the cells are not too dense. That's one thing;
that's the same analysis as before.
But the interesting thing is that we are able to work, in our case, with unbounded
regions. You can take the whole Euclidean space -- I'll explain what that means
-- you can take the whole Euclidean space, the whole plane, and tessellate it.
You assume that all the cells are dense. Then you are able to apply this
coupling so that, not everywhere but almost everywhere in the region, the
coupling succeeds. "Almost everywhere" means on a percolating cluster of the
tessellation: you can show that there will be a percolating cluster, like this
hatched area here, on which the coupling succeeds; in the other areas the
coupling may not succeed, but those areas are bounded, and in our case this is
sufficient.
And this brings us to two speculative questions. The first one: in the result I just
mentioned, we actually show that on this percolating cluster the nodes are
sandwiched between two Poisson processes -- they contain a Poisson process
and they are contained in another one.
So, in some sense, they converge to a Poisson point process. And the question
is whether we can do a similar analysis for other point processes. If you take a
point process -- assume it's translation invariant -- and let it evolve by Brownian
motion, then of course you have dependencies, and it need not be dense
everywhere, which does not help. But can you somehow argue that it
approaches -- converges, in this sense of the sandwiching argument -- an
independent Poisson process?
That's a very speculative question. And the other one is whether you can apply
the coupling arguments, in the same way I was describing before, to other types
of motion. Always when I give this talk, someone who works in networks asks
me: what about other types of motion? Because no one moves as a Brownian
motion; no one can perform a motion that is nowhere differentiable. And that's
the point. So can you do the coupling for other types of motion or not?
And the main thing here is that you need a coupling time for each type of motion.
If you could make something general out of that, it would be important for people
who study these things in practice.
So let me just sum up -- and I just realized that talking about sandwiching right
before lunch is not a good thing, so let me finish the talk quickly. I showed you
my work on mobile geometric graphs very quickly, and on randomized
broadcast. I didn't have time to show you my work on mixing times of Markov
chains -- I also have ongoing work on that -- and on the performance of
sampling algorithms, especially importance sampling. But I would be very happy
to talk with you offline about any of these things, and other things in life, and the
weather, which is nice today, and everything. Thank you for your attention.
[applause].
>> Yuval Peres: Any questions?
>>: I was hoping to hear about the sausages as well.
[laughter].
>>: Is there going to be more?
>> Alexandre Stauffer: Tomorrow.
>>: Okay.
>> Alexandre Stauffer: We have some gaps in the schedule.
>>: Yes, there are gaps.
>> Yuval Peres: Okay.
>>: Just one question. So Brownian motion -- what about other motions? If you
sort of discretize it, you couple it and [inaudible].
>> Alexandre Stauffer: You mean for Brownian motion or for --
>>: You want to avoid the totally --
>> Alexandre Stauffer: If perhaps --
>>: Just a random walk.
>> Alexandre Stauffer: Yes, that works.
>>: That works too?
>> Alexandre Stauffer: Yeah. Essentially anything that -- not anything, I won't
say that. But a random walk, yes, for sure. Anything that looks like a normal
distribution over time will work, because that's the only thing we use.
>> Yuval Peres: Thank you.
[applause]