>>: First before we go to the next speaker, I just wanted to mention that most of that is on work related to noise and noise sensitivity, work started by Oded, so if you are more interested in that, you might want to have a look. And now our next speaker is Russ Lyons, who will give a [inaudible] talk.
>> Russell Lyons: Thanks, Christian. So I'm told that I have five minutes less than is scheduled; hopefully I'll end on time. I'm talking about hyperfinite graph limits, work of Oded. This is one of his last topics. Omar this morning introduced what it was about, although he was mainly concerned with a particular case. And Oded's last solo paper was called Hyperfinite Graph Limits, and that's what I want to talk about.
The notion of this convergence, of distributional limits of graphs, is I believe due to Itai and Oded, although it was discovered independently by some others shortly afterwards.
So let me just remind you briefly what it is. Here is just one example. We might have the square lattice as a limit graph. Now, these graphs actually have to be rooted, although for this graph it doesn't matter which vertex you take as root. And this would be a limit of, for example, big square portions -- finite subgraphs of the square lattice -- where you take a uniformly chosen root and you look at the neighborhoods of that root. What you'll see is exactly the neighborhoods here, unless you're close to the boundary. But very few vertices are close to the boundary.
You would also get the same limit if you take square graphs on [inaudible] and finer and finer grids. Now, by contrast, if you take a regular tree, and you take a big portion of that and take a uniform root, you will not get the rooted tree as a limit, because most vertices are near the boundary -- opposite to this case -- and things look very different there. On the other hand, if you take a random regular graph, or expander graphs, or graphs with large girth, and take a random root, you will get in the limit a rooted regular tree.
Now, the limit will not necessarily be a fixed graph, it can be a random graph with
a root, okay. And I'm going to give a general theorem of Oded's, but if you are
not familiar with this concept, you probably just want to think about the case
where you have a fixed graph. And I'm also going to assume that all the degrees everywhere are bounded by some number M.
Now, the question that Oded looked at in this paper is: when is the limit graph, or limit measure, concentrated on small graphs like this, as opposed to big graphs like the tree? In other words, amenable versus non-amenable. Now, it turns out that another word for amenable in this context is hyperfinite, a notion that is about a decade old. And before I get into the definitions, let me just mention that the result, a fairly abstract result that Oded proved in his paper, although it's kind of interesting in itself, was actually used by him, Itai, and Asaf Shapira in a paper where they applied a slightly more quantitative version of the result to what's called property testing in computer science. I won't go into that. But it actually had a fairly concrete application.
So it turns out that the key notion was invented by Gabor Elek, about a sequence of graphs -- or more generally a family, but for us a sequence will be enough. So he defined when a family or sequence of finite graphs, which we're thinking of as what we're interested in taking a limit of, is called hyperfinite. They are already finite, so what does he mean by hyperfinite? There's some sort of uniformity about how they're put together: you can break them up into finite pieces in a uniform way.
In other words, for all epsilon greater than zero there exists a finite K such that, uniformly in N, the graph GN is (K, epsilon)-hyperfinite. And what does that mean? There exists a set of edges S in GN such that if I delete those edges and take them away, then all components have size [inaudible] K, but I haven't deleted very much. In other words, the number of edges in S, normalized by the number of vertices of GN, is less than epsilon. So I'm not deleting very much. And all the connected components, which I'm calling clusters, of GN minus S have at most K vertices. Okay?
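In symbols, Elek's definition as just described (a reconstruction of what's on the slide, not a quotation): a finite graph $G$ is $(K, \varepsilon)$-hyperfinite if there is a set of edges $S \subseteq E(G)$ with
$$|S| \;<\; \varepsilon\,|V(G)|$$
such that every connected component (cluster) of $G - S$ has at most $K$ vertices. A family of finite graphs is then hyperfinite if for every $\varepsilon > 0$ there is a finite $K$ such that every graph in the family is $(K, \varepsilon)$-hyperfinite.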
Now, it's not so clear from that somewhat complicated definition what graphs are hyperfinite. Well, actually one example that you can give is the family of planar graphs: the whole family of finite planar graphs is hyperfinite. There's a theorem in computer science called the planar separator theorem which tells you this.
And in fact, in the property testing paper that I mentioned, the primary example is planarity -- testing planarity -- but they mention other properties as well.
>>: [inaudible].
>> Russell Lyons: Yes?
>>: [inaudible].
>> Russell Lyons: No. That's the uniformity, okay.
>>: [inaudible].
>> Russell Lyons: What?
>>: The planar what is [inaudible].
>> Russell Lyons: Well, it depends on epsilon. For each epsilon -- the smaller the epsilon, the bigger the K you're going to have to take. Now, so we're interested in taking limits of such things. Now, it's easy to see that if we take a limit of a hyperfinite sequence -- if the limit exists, the random weak limit exists -- then the limiting measure or graph also inherits a similar property. In other words, remember we're taking GN with a uniform root. And S is sitting somewhere; there's a fixed S. For a fixed epsilon there is a fixed S that witnesses this. Okay.
And take a uniform root. I get a random rooted graph with this deleted or marked
edge set S somewhere, right? And it breaks up my random rooted graph which
is the same graph GN into random pieces.
Now, if I take a limit of this, what I see locally is how the limit is defined, right?
Now, if GN has a limit, that doesn't mean that GN with S also has a limit, but I could take a subsequence that would still have a limit point -- a limit. And then I'll see a random S inside my random rooted graph. It won't be very much of the graph, because what does this statement mean? It says that if I take a random vertex, the chance that it's near S is small.
Also, all the clusters have size at most K; that's certainly preserved in the limit, since K is fixed. So what I'll get in the limit is also some random deletion of a small density of edges, leaving only components of size at most K.
Now, so what Oded did is formalize that notion and prove that it gives a characterization: those are exactly the limits of hyperfinite sequences. In other words, any such thing is a limit of hyperfinite sequences -- that's the converse. So let me say what Oded's definition is. So now we have a measure mu, a probability measure on rooted graphs. And the definition will be similar to this one. What we need to define is (K, epsilon)-hyperfinite, and then mu is called hyperfinite if for all epsilon there exists a K such that it's (K, epsilon)-hyperfinite.
Now, even if mu is just a fixed graph with a fixed root, the S might not be fixed; I will still let S be random. Yes?
>>: [inaudible].
>> Russell Lyons: In this definition here?
>>: Yes.
>> Russell Lyons: It's uniform in N.
>>: Is it [inaudible].
>> Russell Lyons: Same epsilon and K [inaudible].
>>: I mean a set can be just [inaudible].
>> Russell Lyons: S. Well, S sits inside of GN.
>>: [inaudible].
>>: It says for, er --
>>: It's SN.
>> Russell Lyons: Well, if you want, yeah. SN if you want.
>>: That's [inaudible] I mean it's just.
>> Russell Lyons: It's GN --
>>: [inaudible] sequence of the index doesn't matter. I don't see the index N [inaudible] it doesn't say --
>> Russell Lyons: It shows it now.
>>: Oh. [laughter].
>> Russell Lyons: Okay. So S might be random; it doesn't have to be fixed. So that's going to be a probability measure, which I'll call nu. And it's going to get an adjective, which I'll postpone for the moment. So there's a probability measure nu. Now, mu was on rooted graphs G, and I'll call O the root. So now I have S, so nu is on triples G, O, and S. Well, the nu-law of (G, O) is mu -- it's what I'm starting with -- and S is a set of edges in G. So I'm just going to go through the conditions in turn. Now, this next condition is really about how often S touches the root. So I look at the expected degree of the root in G with S deleted. Okay? Now, that's less than two epsilon. There's a two because an edge can be counted from either endpoint; I won't go into the details, but it's really equivalent to what you'd get if you just took a random root there. And finally, the last condition: almost surely all clusters of G minus S have at most K vertices.
Okay. Now, there's enough space for an adjective here. And that is -- if you remember from, I forget which talk it was, Olle's talk yesterday, the condition of unimodularity came up for percolation. Well, it turns out that unimodularity is also defined in this context; finite graphs are in a sense unimodular already, and that's preserved. Unimodularity is a mass transport concept, and I don't think I have time to go into it. If I do, I'll say what it is at the end. And so my assumption will be that mu is unimodular, and nu also is required to be unimodular. Yes?
>>: Are you sure that you want the degree of O in G minus [inaudible]?
>> Russell Lyons: As opposed to?
>>: Degree in S or something?
>> Russell Lyons: S is what I'm deleting.
>>: Oh, but you want that S [inaudible].
>> Russell Lyons: You're right. Yes. Thank you. Yes. Thanks.
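So, with that correction, the definition can be put in symbols as follows (my reconstruction, following the description of Oded's paper): $\mu$, a probability measure on rooted graphs $(G, o)$, is $(K, \varepsilon)$-hyperfinite if there is a probability measure $\nu$ on triples $(G, o, S)$, with $S$ a set of edges of $G$, such that the $\nu$-law of $(G, o)$ is $\mu$,
$$\mathbb{E}_\nu\big[\deg_S(o)\big] \;<\; 2\varepsilon,$$
and $\nu$-almost surely every cluster of $G - S$ has at most $K$ vertices; here, as was just said, both $\mu$ and $\nu$ are required to be unimodular.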
This graph is definitely unimodular; Cayley graphs are unimodular. So if you're not familiar with the concept, it's just going to be automatically satisfied, and you don't have to worry about it.
But it's always true that limits of finite graph sequences are unimodular. So if we want to characterize what we get as limits, we have to put in unimodularity anyway. And so, Oded's theorem: first of all, it's easy that a limit of a hyperfinite graph sequence is hyperfinite with this definition, and Oded's theorem is the converse. Any time you have a measure which satisfies this -- that is, for all epsilon there exists a K such that it's (K, epsilon)-hyperfinite -- and you have a graph sequence which tends to it, then actually that graph sequence itself is already hyperfinite.
You cannot get a small limit from big graphs. So let me write down the theorem. And in fact, a somewhat quantitative version of it already appears in this paper, just with a K and an epsilon.
>>: When you say hyperfinite [inaudible].
>> Russell Lyons: Oh, yes. I wanted to say that. Thank you. Okay. So I haven't defined what I mean by amenable, but there are various ways to define it. You don't really need to be explicit about how big things are; one way to define it is that there is a unimodular way of breaking it up into finite pieces, not with bounded size, but taking away just a small amount. In the case where you have a fixed transitive graph, it just means there are big pieces with small boundary. So you can just think about that context. Okay.
>>: It's equivalent to being a [inaudible] condition --
>> Russell Lyons: There's a tricky, subtle point that I don't want to get into about that. But it's not that condition, no; it's different. Okay. If mu is (K, epsilon)-hyperfinite, and GN has random weak limit mu, then [inaudible] GN is (K, epsilon tilde)-hyperfinite, where epsilon tilde is equal to 4M epsilon plus 6(M+1) epsilon log(1/epsilon).
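Written out, the theorem as stated on the board reads (the constants as I heard them; the precise statement is in Oded's paper): with $M$ the uniform degree bound,
$$\tilde\varepsilon \;=\; 4M\varepsilon \;+\; 6(M+1)\,\varepsilon\,\log(1/\varepsilon),$$
so if $\mu$ is $(K, \varepsilon)$-hyperfinite and the finite graphs $G_N$ have random weak limit $\mu$, then (eventually in $N$, as I heard it) $G_N$ is $(K, \tilde\varepsilon)$-hyperfinite.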
So how would you show that, given any graph sequence with this limit -- the fact that you can break up the limit in a nice way means that you can break up the sequence, the finite graphs, in a similar way? That's the issue.
And so Oded had a very nice idea. The definition of being (K, epsilon)-hyperfinite means there is some witness, that measure nu. Well, that nu you may not be able to use on the GN. He wants to find a different witness, created from nu, that you will be able to apply or emulate on the GN. So what kind of measure -- just think of a fixed graph -- what kind of measure here would be so good that you could essentially apply it to finite graphs which look like it? Well, what does it mean for a finite graph to look like it? It means that it looks like it on big neighborhoods. This means that what you're doing here should really only depend on a big neighborhood. So it should be a local decision about what you remove. So what he's going to do is take the given nu and use it to create another one which is just local. And then he can do it. Okay, so that's the first very nice idea. And then of course the second thing is how do you carry it out.
That's hard, I think; that's maybe the first idea. But anyway -- oh, and let me just mention, before giving the idea, that this is similar to what's used in the theory of random regular graphs. People often want to count various things in random regular graphs, the asymptotics, and they sometimes want to create things to provide lower bounds, the existence of certain things. And the way they very, very often do it is by starting with IID numbers on all the vertices and using those locally to decide what to do, after a few steps. In fact, you can also do that on the tree: on a regular tree you start with those same labels and do the same kind of thing.
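To make "IID labels plus a local rule" concrete, here is a minimal Python sketch of one classic rule of this kind (an illustration only, not the construction used in Oded's proof; the function name and the use of networkx are my own choices):

import random
import networkx as nx

def local_independent_set(G, seed=None):
    # Each vertex draws an IID Uniform[0,1] label; a vertex joins the set
    # iff its label beats all of its neighbors' labels.  The decision at v
    # depends only on the radius-1 neighborhood of v, so the same rule makes
    # sense on a finite graph, on the infinite tree, or on a limit.
    rng = random.Random(seed)
    label = {v: rng.random() for v in G.nodes}
    return {v for v in G.nodes
            if all(label[v] > label[u] for u in G.neighbors(v))}

G = nx.random_regular_graph(3, 1000, seed=0)
I = local_independent_set(G, seed=0)
print(len(I) / G.number_of_nodes())   # density of the resulting independent set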
And essentially -- and they even mention that sometimes -- essentially it's a similar idea, where you're using local information on a limit in a way that you could also have carried out on the finite graphs. Okay. So let me give the proof. I need a little bit of notation.
>>: [inaudible].
>> Russell Lyons: How much?
>>: Five, plus or minus.
>> Russell Lyons: No. After all the -- see how far we get.
Okay. So K of O is the cluster of the root in G minus S. Now, here are some of the key definitions. So: the probability that the cluster of the root is a given fixed finite set K, given the rooted graph (G, O) -- actually we're just looking at vertex sets, so instead of the cluster it's really just the vertices in the cluster. And we're only going to look at big K of size at most little k, because all the clusters have size at most little k. Now, that's a little tricky to define in the general setting, but if you just think of G as a fixed graph, then it's perfectly fine: I just have some vertices -- those four, say -- and this is the chance that when I take away S, that's the cluster of the root. Okay?
Now, the problem is that this of course depends on S -- the randomness here is on S, right? And that is not local, or certainly not given as local at all. So the idea is to transform this into a local thing. So it's going to depend on some radius R: p tilde sub R of K and N. So we're going to have some large R, and N is all the things within distance R of my given K -- that's probably not exactly accurate, but anyway: we're given a K and an N, and we're not given what the graph is outside, although for our fixed graph it doesn't vary. But we're not given what S is outside N either, only what's inside N. Okay?
And so N is going to be the R-neighborhood of K. So now, as R tends to infinity, what happens? We're getting more and more information; we're getting all the information in the end, and so this converges -- I won't show you that. And that's what we're going to use for our approximation: we'll take an R large enough, depending on the epsilon we're given at the beginning, to make that close.
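In symbols, roughly (my reconstruction; the precise general definition is in the paper): writing $\mathcal K(o)$ for the cluster of the root,
$$p(K \mid G, o) \;=\; \nu\big(V(\mathcal K(o)) = K \;\big|\; (G, o)\big), \qquad |K| \le k,$$
and the localized version $\tilde p_R(K, N)$ is the corresponding probability given only $N$, the $R$-neighborhood of $K$, and the configuration inside it; as $R \to \infty$, $\tilde p_R$ converges to $p$.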
So I think I'll have time at least to give the definitions of how it works, even though I won't be able to give the proof that it works; that's then an exercise to work out. Okay. So for big K of size at most little k, I'm going to delete -- I'm going to choose these sets at random, or delete their boundaries at random. So I'm going to let X of K be a random variable that takes values zero and one. They're going to be independent of each other given (G, O), if in fact we have a random root and graph. And the probability that it's one given (G, O) will be -- well, I'm going to stop writing the R; we're going to have to choose R later, but we're not going to write it. So really what we want is the probability p tilde of K -- [inaudible] is determined by K, so I'm not going to write it -- times log of one over epsilon. Well, that might be bigger than one, and so we'll have to take the min with one. And there's a multiplication by two, just for convenience. Okay. So that's the definition. And now, what's the new S?
So: every time X of K turned out to be one, consider all the edges in the boundary of such a K. That's the edge boundary, d sub E of K, all the edges that go from K to the outside of K. Those are going to be in our new set F that we're going to remove. And that's good, because whenever we remove such edges, what's inside has size at most little k. But of course we have to worry about what happens elsewhere, and we're just going to take everything else -- okay, I need another notation first. These chosen insides together form W. And then, for all the other things, we're just going to remove everything that we need to in order to make everything have size at most little k. So F tilde is all edges incident to any vertex not in W. And then finally S prime is F union F tilde. So that's the definition.
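Assembled in symbols (again a sketch of what was written on the board): the indicators $X_K$ are independent given $(G, o)$ with
$$\Pr\big[X_K = 1 \mid (G, o)\big] \;=\; \min\big(1,\; 2\,\tilde p(K)\,\log(1/\varepsilon)\big),$$
and then
$$F = \bigcup_{X_K = 1} \partial_E K, \qquad W = \bigcup_{X_K = 1} K, \qquad \tilde F = \{\text{edges incident to a vertex outside } W\}, \qquad S' = F \cup \tilde F.$$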
So it's clear that when we remove S prime, all the components have size at most little k: either they have size at most little k because we removed the boundaries here, or, if they weren't covered that way, we removed all the edges around them, so they have size at most one. What has to be shown is that the expected degree here is at most this two epsilon tilde. And while you have to do some calculations -- you know, these are independent, so a lot of the calculations are not that hard; you have to be a little clever, but it's not so hard. But anyway, I'm out of time. So thanks for listening.
[applause].
>>: [inaudible].
>>: You mentioned property testing as [inaudible] some other motivation.
>> Russell Lyons: Oh. Oded's original motivation, I believe, did not have anything to do with property testing. Gabor Elek was visiting and I think asked the question, although Christian and Jennifer probably know a little bit more about how it arose.
>>: Well, I think Gabor has a different version, a different formulation of the limit of this graph sequence, which involved some abstract sort of notion from [inaudible] logic or -- and in his version, [inaudible] direction Oded had to prove; it's actually easy in the other direction. But it's -- I mean, it's obviously an interesting question which of these properties carry over to the infinite limit.
>>: [inaudible] questions? Okay. So we have a three minute break.
>>: All right. I'll just keep going.
>>: You have to turn the mike on.
>> Steffen Rohde: Okay. It's on? I didn't do anything to it. Anyway, so then I'd like to thank Wendelin for advertising the little note that I wrote about that work. And so I just wanted to give a very quick -- because people asked -- a little sneak preview, just to show that it exists. So there's lots of pictures, and you can give it to your children to read, and so just lots of --
>>: Is it on your web page already?
>> Steffen Rohde: It is going to be -- it's sort of -- you know, I'm in the last stage of revising it, so it has many -- some sections on circle packing. And I'm going to put it on the arXiv in about a week or so. So you have to play Sokoban on the flight. [laughter].
Okay. So that's just -- so there are many topics that I thought about, that I had to choose from, and it would have been a lot of fun to talk about them. So I should like to say that doing mathematics with Oded was like playing tennis with a champion -- while meanwhile not being a champion yourself. So there are these balls coming, and you know, you try to hit them back, but in the end all the balls are in your court. And every once in a while you get one back, but then it comes back at you full blast. So there are actually a lot of unfinished projects that I'm now scrambling to, you know, try to finish. And so essentially I picked the oldest one to talk about today, and that's joint work with Kari and of course Oded.
And there is another reason why I chose this topic: it shows one aspect that I think did not come out so well so far, but which was an important aspect of his work, which is his computing skills. So he wasn't only a gifted mathematician; he was also a very good programmer. And he loved playing with the computer, and that was an essential part. I mean, we all know the pictures that he drew, but maybe we don't know how much brain power actually went into these pictures.
And so before starting on that, I wanted to show one photo. I think everybody expects me to give some anecdotes, but I'm not sure if I really want to -- I wanted to show at least this picture. This is Oded on the top of -- anybody recognize this? Mike, maybe. It's Little Tahoma Peak. And that was a climb that I think we both enjoyed. And Oded was such an unconventional person. He had such an amazing ability to question things -- I mean, things that we totally take on faith and for granted, where we don't think twice, and Oded would question them and try a new approach.
And so what he did on this trip -- we had to camp at the foot of a glacier, and Oded tried on that hike to not -- I mean, it wasn't that he forgot: he didn't bring a sleeping bag. He said he wanted to try how it is, you know -- he had the theory that if he just wraps himself in all the stuff that we brought, he's going to be warm enough. And everybody who worked with him probably knows he had this amazing number of ideas, and many of them did not quite work out. But you know [laughter] this was not a good idea. He was miserably cold that night. [laughter]. But still we had a very good time.
So now I'd like to do some math. And this is almost on the level. So first I want to give the motivation -- but I think it needs a lot of motivation, so eventually we'll come to it; some of you might not understand the motivation, and then don't worry, in the end it's going to be very simple and almost like recreational math. But I should like to draw one picture, if you wonder what a quasicircle is. So I'll give definitions that you might not understand. But a quasicircle is just a topological circle in the plane, and these objects -- so basically the geometric definition is: whenever you take two points, then the smaller arc -- these curves don't need to be rectifiable, so I don't want to talk about length, I just want to speak about diameter -- the smaller arc shouldn't go too far away. So the diameter of this arc should not be much more than the distance of the end points.
So what is not a quasicircle would be a thing like this, something that has a cusp. Here you can take two points that are very nearby, but the curve sort of escapes: it has a large diameter as compared to the distance of the two points. And so those of you working with these random curves know that these random curves are not quasicircles, but they sort of are -- they're like quasicircles with a small probability of seeing bad cusps. So some of Oded's insights -- actually, what goes into his proof of the loop-erased random walk scaling limit being a simple curve is somewhat motivated by, at least, the insights that he had from working with quasicircles.
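In symbols, the geometric condition being drawn is the usual three-point characterization: a Jordan curve $\gamma$ is a quasicircle if there is a constant $C$ such that for any two points $z_1, z_2 \in \gamma$,
$$\operatorname{diam}\,\gamma(z_1, z_2) \;\le\; C\,|z_1 - z_2|,$$
where $\gamma(z_1, z_2)$ is the smaller of the two arcs of $\gamma$ between $z_1$ and $z_2$.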
Okay. So they appear as limit sets of quasi-Fuchsian groups, and in many other ways. But let's just give a definition here. So there is a conjecture from [inaudible]'s celebrated paper on area distortion, and all it says -- don't read it too carefully -- is that the Hausdorff dimension of a quasicircle, I'll just write it this way, should be at most one plus k squared. Somewhere we need to measure the non-roundness, how bad these bottlenecks are; there's a parameter called k, and I'll explain in a second what that is. So that's a technical definition, and don't worry about it -- if you don't know quasiconformal maps, you will not understand this.
But there's a simpler way to express this, and that's the point that we can all understand much better: it's about holomorphic motions. And the equivalent conjecture is this. Take the interval from negative 1 to 1, and move each point holomorphically. So think of a point here, and we have a time parameter. But time is actually, you know -- think of it as complex-valued, time in the unit disk. So we have a time parameter that varies in the unit disk, and the motion of each point is an analytic function of time. And it should be a motion: if I take two different points, they will not collide -- they will never collide.
So the definition of a holomorphic motion is very simple. We think of it as: each point moves; the first coordinate is the point, the second is the time coordinate, so we have a function from space cross time into the plane. And the axioms are: in the time parameter it should be analytic -- each point moves analytically; in the space parameter it is injective -- two points never collide; and of course the motion starts at time zero with the line segment. So we just move points. Okay.
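The axioms in symbols (a standard formulation matching this description): a holomorphic motion of $A = [-1, 1]$ is a map $f : A \times \mathbb{D} \to \mathbb{C}$ such that (i) for each fixed $z \in A$, the function $t \mapsto f(z, t)$ is analytic on the unit disk $\mathbb{D}$; (ii) for each fixed $t \in \mathbb{D}$, the map $z \mapsto f(z, t)$ is injective; and (iii) $f(z, 0) = z$ for all $z \in A$.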
So then, in that phrasing, the conjecture is equivalent to saying: whenever you have such a holomorphic motion, then at time k, what I see -- the image of the interval at time k -- has Hausdorff dimension at most one plus k squared. We'll see in a second what the relevant sets are. So this project -- the motivation comes from studying this conjecture. And we had some upper bounds, and then there were some improvements, and then other people found better ones.
At the end of this line of results, which I won't go through, Smirnov actually proved the sharp theorem, the upper bound. So Stas proved that whenever you move a line segment holomorphically, the Hausdorff dimension is never more than one plus time squared. But that leaves the question: what is the lower bound? Is it really sharp? And to date, the best known estimate is still what we get. We don't get one plus k squared, but we get one plus essentially point six k squared. So we [inaudible] one holomorphic motion of a line segment that gives a dimension close to one plus k squared.
So maybe I should first show a picture of a holomorphic motion. So this is the motivation. If you don't like it, you can forget everything, because you will see the new problem is going to be totally independent, recreational math, when I talk about examples of holomorphic motions. Okay. So one example actually comes from Julia sets of quadratic polynomials. And so here is a little picture -- and actually here I should like to say, as you can see, I'm not too experienced with this. Okay. So when Isman Prassa [phonetic] from Helsinki saw the announcement of this talk, he said you might be interested in this program that a student, Alex [inaudible], wrote. And this program illustrates very nicely some holomorphic motions. Here is the program again. So the parameter is the red point, and you can move it; that's the time parameter. And the set is actually the Julia set of the polynomial. The red point at the moment is the point zero in the complex plane. We move it around to some parameter C, and the white set is the Julia set of Z squared plus C. So when I move this around -- this is what we get. So you see the Julia set. And as long as you stay inside the main cardioid, you know, it's a simple curve. And when you go out, then something funny happens -- some parabolic implosion or such -- and then you see different sets. So towards the boundary of the main cardioid, things go bad. But anyway, it's a theorem -- basically it's [inaudible] that holomorphic motions were invented -- that if you stay inside this cardioid, the sets stay nice curves and they move holomorphically.
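For concreteness, here is a small Python sketch of how such Julia set pictures can be drawn by inverse iteration (an illustration of the general technique; this is not the Helsinki program, and the sample parameter is just one value inside the main cardioid):

import cmath
import random

def julia_sample(c, n_points=20000, burn_in=100, seed=0):
    # Approximate the Julia set of z^2 + c by inverse iteration: repeatedly
    # apply a random branch of the inverse map z -> +/- sqrt(z - c).
    # Backward orbits are attracted to the Julia set.
    rng = random.Random(seed)
    z = 1.0 + 0.0j
    pts = []
    for i in range(n_points + burn_in):
        z = cmath.sqrt(z - c)
        if rng.random() < 0.5:
            z = -z
        if i >= burn_in:        # discard the initial transient
            pts.append(z)
    return pts

pts = julia_sample(-0.12 + 0.25j)   # c inside the main cardioid: a simple closed curve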
Okay. So another example is actually snowflakes, and that's what the talk is going to be about eventually. So for a snowflake, think of a line segment: you replace it like this, and then each line segment gets replaced by a smaller-scale copy of it, and you proceed. The limit set of that is going to be the snowflake. But we want to make the snowflake depend holomorphically on one parameter P. And that one parameter P -- how does that come about, how do we create it? So now I want to explain how to make a holomorphic motion out of a snowflake. And I'll show an animation, so you'll see right away how it looks. But here's how it goes at first. Start with a line segment, and instead of doing this picture of the snowflake right away, I break it down into two parts. So in the first step I replace the line segment by a triangle, and the tip is going to be the parameter. And then in the next step, I replace each of these two line segments by again a triangle like this, and then each of these four line segments gets replaced by another triangle. So in each step you double the number of edges. But it's probably best not to worry too much about it and just to look at these pictures: we have a simple way of generating one generation of the snowflake, and if you just look at every even one, you see sort of the standard iteration.
All right. So that is best encoded by two maps, two contractions, F0 and F1. And these are complex-linear maps of the plane that map the segment from negative 1 to 1 to the segment from negative 1 to P, and to the segment from P to 1, respectively. Two contractions, and they generate a limit set. Okay. So the main thing I want to talk about is Oded's program, which he called Jordan, that analyzes the limit set. And for instance, if you put in Jordan -l .2 .25 -- there's an L that you can't really see here -- that gives the limit set of, well, this dynamical system, where the parameter P is .2 plus i times .25. And the program -- of course, everybody can write a program like that; that's not the point.
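To make the two-map encoding concrete, here is a small Python sketch (my reconstruction of the normalization just described; Oded's Jordan program is a C++ program that does far more than draw the set):

def snowflake(p, depth):
    # F0 maps the segment [-1, 1] onto [-1, p]; F1 maps it onto [p, 1].
    # Their attractor is the snowflake with tip parameter p.
    f0 = lambda z: (p + 1) / 2 * z + (p - 1) / 2   # -1 -> -1,  1 -> p
    f1 = lambda z: (1 - p) / 2 * z + (1 + p) / 2   # -1 ->  p,  1 -> 1
    pts = [-1 + 0j, 1 + 0j]
    for _ in range(depth):
        # One refinement step: image under F0, then image under F1,
        # dropping the duplicated common point F0(1) = F1(-1) = p.
        pts = [f0(z) for z in pts] + [f1(z) for z in pts[1:]]
    return pts

curve = snowflake(0.2 + 0.25j, 12)   # the parameter from the example run above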
But it tells you more. So first, before I tell you what the program actually does, I want to argue that the snowflake is a holomorphic motion in the sense that I described. You start with a line segment, which is P equals zero, and then the snowflake actually moves holomorphically. And this we can see best if we look at a few generations. So the dotted one is the second approximation; this one is now the fourth one. And we look at these corner points, and they do not change under subsequent refinements. So once they are set, in the next step I just change things here. But these guys are definitely points on that snowflake.
So then it's very easy to see. I mean, the only parameter is this tip here, this P of the very first triangle. And all these corners are actually polynomials in P, and polynomials of course are holomorphic. And the rest of the curve consists of limit points of these corner points; they are limits of polynomials, so they are analytic, too. So we do get a set that is moving, where each point moves analytically in P.
Now the question, of course, is: is it a holomorphic motion? And the answer is: well, yes if these points don't collide, if this thing doesn't self-intersect. So the problem we should ask ourselves is: find the set of parameters where this limit set is non-self-intersecting, a simple curve. And at this point, I guess, everybody can join again -- you know, some of you might have fallen asleep; this has nothing to do with probability. And that's like Oded and me: we also spoke sort of different languages, had different sorts of interests. And I do remember the very first time I met him; he came to a talk that I gave in San Diego in '91. And I asked him afterwards -- I knew that he wasn't exactly a complex analyst -- so I asked him if it made sense to him, and he said, yeah, he could understand everything. And he didn't say anything else. And so I asked him, you know, if he liked it. And he said, well, he doesn't find it particularly interesting. [laughter].
So he was very honest. He was very nice; he wouldn't have said it himself, but once pressed for a comment, he was also honest. And maybe the original problem was not the most interesting one. But this problem he actually liked. So this one really interested him, and so then he started working on trying to understand that set. And at first -- it's very easy to see -- maybe at this point I should show a little computer animation, and you can see better what the set looks like. Okay. So let's go -- this is not Oded's program, this is the Finnish program. So here the disk is the set of interesting parameters. The red dot is P, and what you see here is an approximation to the limit set, to the curve. And so we draw more than we see. See, this is a curve. And you see, you have to go quite close to the boundary to see that it starts self-intersecting. But as long as this curve is not self-intersecting, we have a holomorphic motion. Okay. So --
>>: So it is [inaudible].
>> Steffen Rohde: No, that's not true. So that's exactly the point. So this is what I want to do now: when is it self-intersecting, for what parameters? So it's sort of easy to see that outside the disk it will be self-intersecting. You can see just by drawing [inaudible] computing the Hausdorff dimension has to be at least two [inaudible] there are many ways to see that the outside cannot give simple curves. But sort of the computer-aided theorem says: on the disk of radius .91 -- so a large part of the set -- you actually have a simple curve. And this is exactly the set: the black set is the set of parameters for which the curve is non-self-intersecting. The gray set is the set of points where, well, it's probably intersecting. Okay?
So what the program actually does is: if you take a point, it returns -- it tells you, if the curve is simple, it says yes, it's a simple curve. So it gives a test that tells you that you have a simple curve. If you're outside, it only says it's probably not. So the test works one way. And I want to say a few words about how the test works. And it was a bit of a surprise that you actually do get such a large set of parameters for which you have a simple curve.
There are other models with two generators, sort of semigroups -- I think it's called the Barnsley [phonetic] set -- where you just ask yourself whether you get connected sets. So there's a big industry; this is a very special, special problem.
>>: [inaudible].
>> Steffen Rohde: Yeah, very good question. Maybe I should -- how much time
do I have?
>>: You have nine or ten minutes left.
>> Steffen Rohde: Nine or ten minutes. Okay. So I do have enough time to -- maybe I'll skip forward, and then I'll come back here.
The answer to your question is: that's the only probability that's going to appear here -- probably not. So [laughter]. Okay. So let's go forward. So the program actually was able to -- and I actually don't quite know how Oded did it, because the program has many virtues, but it didn't really scan for islands. But he found this island here. But unfortunately -- the program is able to prove, given that you assume enough accuracy, that for instance this point gives a simple curve, okay? The program at present does not prove that a certain parameter does not give a simple curve. But theoretically it's easier to prove non-simple, so if you really wanted to, you could write a program that actually shows that this is an island. But this we haven't really -- Oded hasn't really done. And you see, it's quite fine. Maybe I'll show you the picture; I'll skip forward first.
This is what the program claims to be a simple curve. Okay? And it doesn't look very simple. But then, it's a limit of non-simple curves, and so if you look at a certain approximation to it -- so now this should be, sort of, before the limit, the curve that is supposed to be simple -- at least the beginning looks good, but then it isn't so clear what this is going to be. But here you start seeing that there is at least a good chance that it is simple. So it goes like -- I think it goes like this, and then it goes around like this. And you see at least that it's very delicate. This is like the pre-, prestochastic phase, where you have -- well, like an SLE four, where you don't quite know whether it's going to self-touch or not. But that's what it is.
Okay. So let's go backwards a little bit. And so I guess what I want to say is that one of Oded's strong traits was also, on hearing such a question -- I think it's not really theoretically possible -- I mean, it's probably not easy to come up with a formula, even for the question of connectedness; who knows? -- he just sat down and started to work on a computer program. And before I come to that, maybe I should answer -- or I should say quickly how this actually gives us the dimension estimate. But that's sort of very easy for the experts.
As I said, in the black set, the Jordan set, every curve is a simple curve. So if I restrict my motion to that set, then I do have a holomorphic motion. And how do I restrict to it? Just by conformally mapping a disk to this set and changing time by that conformal map -- call it phi. So if I conformally map the disk onto that thing, I have a holomorphic motion over the disk, and then it's very easy to compute the Hausdorff dimension of that self-similar set; it's just computed from the similarities that I mentioned. And when I put it all together, out comes a lower bound for the Hausdorff dimension: the inradius of that set gives you control over the conformal map, and then that's an exercise for an undergraduate complex analysis course, essentially.
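In symbols, the dimension computation alluded to (a sketch, assuming the normalization from before and that the pieces of a simple limit curve overlap only in endpoints): the two similarities contract by
$$r_0 = \frac{|p + 1|}{2}, \qquad r_1 = \frac{|1 - p|}{2},$$
and the Hausdorff dimension of the limit curve is the number $d$ solving
$$r_0^{\,d} + r_1^{\,d} = 1.$$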
But so this is not very difficult. So the main part is really to find the radius, the inradius, of that set. All right. Maybe we should stare at this for a little longer and ask ourselves: how would we test if a curve is simple? And I don't know -- I mean, when I was a student and saw the snowflake and tried to prove for myself that just the symmetric snowflake is a simple curve, I think this is not trivial. I mean, it's easy, it's a simple exercise, but it took me a few minutes even then, using the symmetry.
But here we have a non-symmetric situation, so here it's very different. So how can you possibly attack this problem? And here's a little lemma that Oded found. It's -- it's very, very easy, too; it just looks a bit longer. I should say where this comes from: Oded worked quite a lot on this -- that's in the mid '90s, actually. So he wrote documentation for that program, and it's with this documentation that I put the program also on the web. I had Andre, a graduate student from Michigan, help me get the program running again, because with C++ some standards have changed and our libraries were not supported. So there's a long documentation that explains how it works.
And so I was working through this in order to write it up -- well, okay. So somewhere here are some of the little observations that he had. Actually, I mean, all of them are sort of easy to prove once you know them, but you wonder how he actually got them. And so here is one of them; this one is easy to find. So this is the test, actually. How can we test if a given curve is simple -- I mean, one of these self-similar curves? And so here I haven't drawn the full curve, but imagine -- you remember the process that comes out of these self-similarities? So we refine each of these sets, and then we see a copy of the curve here, here, here, and here, in these four sets. And I labeled them just 01, 00, 11, 10, just because I have compositions of these maps F0, F1. And so, if I take not the line segments but the actual limit sets -- I want to call this one S01 and this one S00, just these guys here -- then the test only requires you to see, in these two pictures, whether this part intersects that one. If this one intersects that one, of course it's not simple. So test that these two do not intersect, and test that this one does not intersect that one, and that one doesn't intersect that one. That gives you the three tests of that one generation. Okay.
And here you have the same for the third generation. This part and that one are already dealt with; just test that this one doesn't intersect this, this pair, and this pair. So three more tests -- you have six tests, and you test those. And then, if you decide that they are not intersecting, you are done.
All right. Okay. That requires a little argument. And when I tried to work this out, I had a teeny little bit of a problem. And it's sort of easy: how would you prove that, if you have a curve with certain self-similarities, this test works? At the end of the day that's a nice little exercise to do. But the solution, at least my way of seeing it, is to say that here at that tip, what you see if you take these four arcs is actually similar to the whole set -- what you sort of expect by self-similarity. But it's not so clear, because what you see here is sort of the right part of the curve, and this is the left part. So it's like when you chop up the curve in the middle: you put the right part to the left and the left part to the right, and back together they still give the old thing, and the scaling factors are different. So actually I would like to ask Oded how he saw that. I can just compute it very easily, and this is true -- you can draw pictures and verify it. You can just compute these points --
>>: [inaudible].
>> Steffen Rohde: Say again.
>>: [inaudible].
>> Steffen Rohde: Okay. So how do you actually execute the test? So, using that similarity, it's very easy to see here that if the curve were -- so you take any two points on the curve and you ask yourself -- okay, so the best way is probably to -- I haven't quite explained how the curve should be parameterized. So the tip, I make that the point of the curve at time one-half. And this one is a -- oh, sorry, if I want to parameterize it by negative 1, 1 -- well, okay. So maybe let's answer your question first: how many tests do I have to perform? I mean, here I would have to know the limit set, and this I don't know -- I don't know this set at the beginning; it's a limiting object. But what I can do is take the next idea of Oded's. He calls it the super ball. So you take the super ball, a disk that is mapped by both similarities into itself. So you find the smallest disk that is mapped into itself by both similarities. Maybe I should just show the proof by picture.
So then you apply all these maps F00, F01 and so on, and you get these disks. If you know that the corresponding disks are disjoint, then you are done, because the arcs are inside those disks, sort of by definition of the disk. And if the test fails at the first level, then you go again: you do a few more iterates, and so on. And when the disks become very small, then at some point, when you reach a threshold -- an accuracy threshold -- you stop and you don't know. But if the test terminates and the disks are disjoint, then you know that you are fine. So that's why the test is only sort of one-way. And the number of steps depends, you know, on how far you are from the boundary of that set. Okay.
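Here is a rough Python sketch of the super-ball test as I understand it (a crude one-way check in the spirit of the description, not Oded's actual lemma; the invariant disk chosen below is convenient rather than smallest):

from itertools import product

def invariant_disk_radius(p):
    # A radius R such that the disk {|z| <= R} is mapped into itself by both
    # F0(z) = a0*z + b0 and F1(z) = a1*z + b1: we need |a|*R + |b| <= R,
    # i.e. R >= |b| / (1 - |a|), for each map.
    a0, b0 = (p + 1) / 2, (p - 1) / 2
    a1, b1 = (1 - p) / 2, (1 + p) / 2
    return max(abs(b0) / (1 - abs(a0)), abs(b1) / (1 - abs(a1)))

def disks_disjoint_test(p, depth=8):
    # Compute the image of the invariant disk under every length-`depth`
    # composition F_w.  Consecutive arcs share an endpoint, so only
    # non-consecutive pairs are required to have disjoint disks.  Returns
    # True if they all do (supporting a simple curve), False if inconclusive.
    a = [(p + 1) / 2, (1 - p) / 2]
    b = [(p - 1) / 2, (1 + p) / 2]
    R = invariant_disk_radius(p)
    disks = []
    for w in product((0, 1), repeat=depth):    # lexicographic = left to right
        A, B = 1 + 0j, 0 + 0j
        for i in w:                            # build the composition F_w
            A, B = A * a[i], A * b[i] + B
        disks.append((B, abs(A) * R))          # F_w({|z| <= R}) = {|z - B| <= |A|*R}
    for i in range(len(disks)):
        for j in range(i + 2, len(disks)):     # skip consecutive, touching arcs
            (c1, r1), (c2, r2) = disks[i], disks[j]
            if abs(c1 - c2) <= r1 + r2:
                return False                   # disks overlap: inconclusive
    return True

print(disks_disjoint_test(0.2 + 0.25j))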
So I should come to the end here. But we already discussed that that set is quite complicated; it looks quite interesting. It seems to have islands, and the corresponding sets, at least in these islands, would be interesting to study. And I think that's all I wanted to say about that set.
I asked myself also -- I went back to -- okay, so this was one project where Oded used, sort of -- I mean, some nice ideas and some heavy programming to at least make some progress on some question. But he made so many pictures -- why not pictures of SLE? And he actually once [inaudible] that statement, that he did attempt some mathematical pictures of SLE, but he didn't trust them, and they looked very unstable. And then I actually tried to do some pictures using something related to the Loewner equation. And I explained that to Oded, and he was very gentle and nice. He said -- I hope so, you know, that the pictures -- he said: I'm excited to see the pictures; I don't want to ruin your optimism, but looking at the simulation, blah, blah -- and so he described his experience with the loop-erased random walk: we don't know if there are loops, before they could prove it. And so I think later that actually trashed the pictures to some extent. So I think he was happy that he thought he had seen some honest SLE pictures.
probably know that at least in the SLE world he was extremely good also with
Mathematica, and he just wrote these amazing notebooks where he did all sorts
of computations very quickly. And then of course then you can check them by
hand. But here he describes something interesting. Here he said -- okay. So he
learned from -- so in SLE sometimes they have this PDEs, and then you need to
find a solution and at least you get subsolutions on, so first he used the computer
a lot to find these solutions but Mathematica often doesn't let you solve
equations. But so then he says he learned from playing with it and seeing
solutions that it's actually better to guess it. And the last sentence is interesting.
He says after he saw a few of these, he says I got bold and instead of trying to
coax math mat canoe solving, I decided I'd just guess the solution up to one or
two free parameters and it so far always worked.
So he was sort of excited that he was actually sort of beating the computer and doing better by just guessing, and it worked out fine. So I guess this is all I want to say. I would just like to conclude by saying that when I first met Oded in San Diego in '91, he introduced himself as a circle packer, and I thought to myself -- I knew a little bit of [inaudible] -- I thought to myself, this guy is a little bit narrow. Which I didn't tell him. [laughter] And that was certainly the biggest misjudgment of my life. And so, looking at this picture, I think he showed us a direction in many ways. And I trusted him, both in mathematics and also in other questions. So in mathematics, sometimes I did not check his computations -- most of the time I did, but once or twice I just trusted him.
And also on this hike, actually, on the way back down, you know, I just trusted him. I was wiped out, and the glaciers were -- crevasses were open, snow bridges. He was leading, and I was just thinking, okay. And actually it was good: he was paying attention. I went over one snow bridge which then gave way, and I think my feet should thank him for teaching Oded so much about -- Oded was very aware; he went into self-arrest, and everything was just fine. It was just me trusting Oded, and it was a great thing. He was a rock in my world in mathematics, and I miss him. Okay. Thank you.
[applause]