>> Yuval Peres: All right. Good afternoon everyone. So just yesterday, or this morning, walking
through the building I saw many people in unexpected places at MSR with copies of James'
book on Markov chains. So not just in the theory group. It's extremely influential. But today
he'll talk about something else, a variant of DLA: Hastings-Levitov aggregation in the small
particle limit. James Norris, please.
>> James Norris: So thanks Yuval for the chance to talk. And more generally, thanks for the
opportunity to spend some time in this beautiful place and very stimulating environment.
Before I go any further, so I've not written it on the slide, this is joint work with Amanda Turner,
who is a PhD student at Cambridge and is now a lecturer in Lancaster University in the UK. So
DLA is a famously hard problem and I'm not really going to talk about DLA, but what I am going
to talk about is, I suppose, motivated by it. So let me explain in what respects the model I'm
going to talk about shares features with DLA and in what ways it's different.
But let's think generally about, well, let’s kind of describe what DLA is and what we're trying to
do with DLA. It's a model for the growth of a cluster. So in mathematical terms you could think
about clusters being represented by a compact set in some Euclidean space. And it's growing,
so we've got a parameterized family of compact sets which are getting bigger. Okay? So K sub
T is how big the cluster is at the time T. The mechanism by which the sets grow, this is sort of
physics rather than mathematics, we are imagining lots of sticky particles wandering around
and every so often a particle bumps into the cluster and sticks there. So the particles the same
moving, I mean the simplest, one mechanism by which you might think of these particles
moving is that you set them off a long ways from the cluster and they just wander around
randomly like Brownian motions and then the first point that they hit, the cluster, they stick
there and the cluster grows a little bit. Okay?
So the distribution of the point which the cluster will grow at is given by the hitting distribution
of Brownian motion from infinity, otherwise known as the harmonic measure from infinity.
Okay? So the idea is that we should make the clusters grow so that the growth rate at any
point on the boundary is proportional to the amount of the harmonic measure which sits there.
And immediately you can see that if you have some boundary which has fingers pointing out
then those fingers are more likely to be hit by this Brownian motion than some point deep in a
[inaudible] of the boundary of the cluster. So you expect to see that the tips of the fingers will
grow preferentially, and that's already giving an idea that, I mean, suppose you compare the
growth from a nice smooth ball or disc with, on the other hand, a nice smooth ball or disc with
a little pinprick poking out. Very small. That pinprick will get preferentially grown, so that
these two domains, which are very close to each other, may evolve very differently, because
all that's required is this little pinprick on the boundary in order for it to have a completely
different evolution. This is not mathematics, this is just some sort of heuristics.
But if you try to make sense of a PDE story for the growth by harmonic measure it doesn't
work. You try writing down a PDE, it's not well posed. You can't do anything. And indeed,
when you look at physical instances where the growth might reasonably be thought of as
caused by this sort of mechanism, they look random. So you wouldn't expect it to come out of
a PDE. They exhibit sort of fractal and apparently random features. So maybe PDEs aren't the
way to go here.
I mean, this is interesting for probability, because often you have a sort of particle picture of
what's happening, you can move over to a PDE picture, and the essential aspects of the
dynamics are captured by the PDE. But here it seems it's not going to be that kind of setup. So
instead, we might start with a particle model. After all, that's where the physical picture
began, and see what happens as the size of the particles gets small. Is there any kind of
of limit object?
Now I'm going to work exclusively in two dimensions which causes already a big
disappointment because we would like to be able to understand this in three dimensions. But
two dimensions is great because you have all the mechanisms of complex analysis at your
disposal. You know, you can play lots of games which you can't play in three dimensions. This
is the era of two-dimensional probability. So we're going to be working in two dimensions.
And I'm going to move from thinking about a continuous evolution towards thinking about
something which moves in discrete steps. My time parameter is going to be discrete and I'm
thinking about a sequence of compact sets which grow. K_0 is always going to be the unit disc
and I'm going to progressively make it bigger by adding bits to it. I'll assume that the
exterior domain of my sets is always simply connected, and then a first exploitation of the fact
that we are working in the plane is that I can encode the cluster, through its complementary
domain, in terms of a conformal map. Because there's a conformal map which takes the
exterior domain of the unit disc, so here's my disc K_0, the unit disc, and D_0 is the domain
complementary to [inaudible], and it gets mapped by some conformal map to the
complementary domain D_n of the cluster at time n. There's a three-parameter family of maps
which do this, but there's only one which has the property that it fixes infinity and doesn't
rotate the plane at infinity at all. So you come up with a unique conformal map associated
with this cluster.
You know about the conformal invariance of Brownian motion? If I take a Brownian motion in
two dimensions and I look at its image under a conformal map, then, up to a change of
timescale, I'm still looking at a Brownian motion. So in particular, consider hitting
probabilities: if I start a Brownian motion from infinity here, or if you like uniformly on a sort of
large circle around here, well, by the symmetries of Brownian motion I know that the hitting
distribution on the unit circle is the uniform distribution. If I map this over here then I'm,
again, looking at a time change of Brownian motion, so the hitting distribution of the image
process here will just be the harmonic measure from infinity. And so this shows us that if I
take the uniform distribution here and push it over, assuming the boundary of this domain is
nice, if I push the measure over here using Phi_n, then I will get exactly the harmonic measure
on the boundary of this cluster. Okay?
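In symbols, the identity just described reads roughly as follows (my notation, not the speaker's slide; Phi_n is the conformal map from the exterior D_0 of the unit disc to the exterior domain D_n, with the stated normalization at infinity):

```latex
% Harmonic measure from infinity on the cluster boundary is the
% pushforward of the uniform distribution on the unit circle:
\operatorname{hm}_{\infty}(\partial K_n, \,\cdot\,)
  \;=\; (\Phi_n)_{*}\,\mathrm{Unif}\bigl(\partial D(0,1)\bigr),
\qquad \Phi_n(\infty) = \infty, \quad \Phi_n'(\infty) > 0 .
```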
All right. So this slide describes, well, actually a big family of models that we were discussing
just before the talk, and in some sense there is prior work on this family, not all of it
published, by Carleson and Makarov. But the names which have become associated with the
family of models are Hastings and Levitov. So these are models for aggregation, for the
growth of clusters in the plane, which are encoded using conformal maps. There's a
one-parameter family of these models, with parameter alpha, and I'm actually going to talk
exclusively about alpha equals zero, which is the easiest one in this family to study; it's work in
progress to deal with the other alphas. How far we'll ever get I'll never know. There's a recent
preprint of Amanda with Fredrik Johansson and Alan Sola where they've begun to make a bit of
progress with positive values of alpha, just posted on the arXiv. But for today, apart from this
slide, it will be entirely about alpha equals zero. But let me describe the whole family so you
can see what it is.
Okay. So I'm going to consider adding a certain sort of particle to the unit disc and two simple
examples you might want to keep in mind are, on the one hand I could think of adding a slit like
that of length Delta, or I might think about adding a little ball of diameter Delta. So that's a
family of particles, because I could think about varying delta. Okay. Now if I exaggerate this
picture a little bit, suppose that K_n is like that. Now I really want to attach a particle here of
some given size, say delta, where the attachment point is chosen according to harmonic
measure. So I might want to attach a little ball here. What I'm actually going to do is to attach
a particle at some random point on the circle and then I'm going to map it over using Phi_n.
Now the amount of harmonic measure on this little bit of boundary here is quite small. It's
hard for Brownian motion to get in here. So when I map it back to the circle, it maps to a small
interval. So in order to have a particle of size delta here, it has to be much smaller than delta
over here, because there's a scale factor: if the particle is attached at angle theta, it's the
derivative of Phi_n at e to the i theta.
That's the scale factor in moving this direction. So I want to add a small particle there and if I
was adding a particle of size Delta here, well, this little bit of boundary here has got lots of
harmonic measures. Easy to hit. So over here I’d want to attach something which was
relatively, let's get this right, relatively large over there. So that actually motivates the formula
given here. My Delta ends over on this side. I take a bowl or a slit of size Delta N plus one. This
is the pointer, right? Anyone know which button is supposed to activate it?
>>: Are you sure this is the pen? Don't you have a silver pen [inaudible]?
>> James Norris: Okay. So I take my basic particle size and then, if I take alpha equals two, it
does just what I was describing: alpha equals two scales down delta so that when the particle
maps back to K_n it comes out at the right size. So we should get all the particles the same
size over here when alpha equals two. When alpha equals zero you don't bother to do that:
the particles are the same size over here, so we're going to end up adding different-sized
particles over there. And you might want to explore how things vary as you let alpha go
between zero and two. Obviously, alpha equals two is your best hope for something which
looks like DLA, but it's believed that actually the whole family of models is of interest, and
possibly alpha equals zero has some sort of connection with the Eden model, which is another
famous growth model in probability.
>>: [inaudible]?
>> James Norris: Alpha equals one. Sorry. Yes. Alpha equals zero. You can see it's going to be
much simpler, because I'm not doing anything here, and that's the one we could do something
with. All right. And a little bit of light relief: why you might be interested in this sort of thing.
Here are some pictures of instances, either physical or computational, where a cluster has
grown. And they show interesting features that don't look like balls. They look random and
they look kind of fractal. Mathematics has not really had much success yet in explaining what's
going on here. But I think we ought to, right? There's some math to be done here.
So these are caused by some sort of electrical effect like lightning, Lichtenberg figures, a
famous source of paperweights. There's a different sort of growth model coming from biology;
you can see sort of fingers coming out, and maybe this one is growing more roughly like a ball.
This is a simulation of DLA, the colors in these pictures encoding the times at which particles
are added. And this is a simulation which Amanda did of our model, HL(0). It's not too
different from this simulation of DLA. Again, the colors are encoding the different epochs at
which particles are added.
This slide just reviews the model which is going to be the subject for the rest of the talk, so if
you want me to explain anything then I'm happy to take questions. We start with the unit disc
and fix a parameter delta; delta in the end is going to go to zero, as we're going to be thinking
about scaling limits as the size of the particle goes to zero. The particle added at each stage
will either be a [inaudible] slit or a disc; the picture here was the slit model. Then there will be
some unique conformal map which encodes adding that particle to the unit disc, subject to
these normalizations, and the way the model is constructed is the following: the probability
comes from a sequence of independent uniform random variables on the circle. Theta_n is
going to be the angle at which the particle is attached to the circle. Probably the best formula
to look at to understand what's going on is this one. P is my basic particle, which could be a
little disc attached at the point 1 on the unit circle; then you rotate the disc by a random
angle, and then you map it over to the cluster using the conformal map Phi_n. So you end up
with a new bit added to the cluster. As the cluster evolves you add more and more particles.
The challenge is to understand the limiting shape of these clusters, and to describe the
stochastic structure of the clusters which form in this way.
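As a concrete illustration, this iteration can be sketched in a few lines of Python. All names here are my own; the building block is the half-plane slit map w -> sqrt(w^2 - d^2), conjugated through a Möbius map to the exterior of the unit disc. This conjugated map only approximately satisfies the normalization at infinity for small d, so it is a rough sketch of the mechanism, not the exact normalized particle map.

```python
import numpy as np

rng = np.random.default_rng(0)

def mobius(z):
    # Exterior of the unit disc -> upper half plane, sending 1 -> 0, infinity -> i.
    return 1j * (z - 1) / (z + 1)

def mobius_inv(w):
    return (1 - 1j * w) / (1 + 1j * w)

def slit_map(z, d):
    """Attach a radial slit of length of order d at the point 1 of the circle.

    Conjugates the half-plane slit map w -> sqrt(w^2 - d^2) through the
    Mobius map above.  NB: this only approximately fixes infinity."""
    w = mobius(z)
    s = np.sqrt(w * w - d * d)
    s = np.where(s.imag < 0, -s, s)   # choose the branch landing in the half plane
    return mobius_inv(s)

def hl0_boundary(n_particles, delta, pts_per_slit=20):
    """Boundary points of an HL(0)-style cluster: Phi_n = f_1 o f_2 o ... o f_n,
    each f a rotated copy of slit_map at an independent uniform angle."""
    attached = []                      # (theta, delta) for f_1, ..., f_n
    drawn = []
    for _ in range(n_particles):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        rot = np.exp(1j * theta)
        # the new slit, sitting on the unit circle in the "mathematical" plane
        pts = rot * np.linspace(1.0, 1.0 + delta, pts_per_slit)
        # push it out to the physical plane through Phi_{n-1} = f_1 o ... o f_{n-1}
        for th, d in reversed(attached):
            r = np.exp(1j * th)
            pts = r * slit_map(pts / r, d)
        drawn.append(pts)
        attached.append((theta, delta))
    return np.concatenate(drawn)
```

Plotting the returned points for a few thousand particles gives pictures qualitatively like the simulations shown later in the talk.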
Oh, yeah. So maybe one thing to say is this: there's a parallel here with SLE, right? If I look at
the inverse maps, then I get the composition in the right order for a stochastic flow. Maybe I
should have taken the inverse on that slide. This is not too visible, but the inverse maps of
these conformal maps form a stochastic flow: I can put a point into the flow and see it evolve,
and this will follow some Markov evolution. And, I mean, you know, in SLE you want to study
either the trace or the hull. What you actually get access to things through is the Loewner
flow, and much of the hard work is to translate stuff you can prove for this into properties of
that. Okay? And that's exactly paralleled in the math we do here. We can understand this map
pretty well, and then we work hard at it and are able to understand the inverse.
So this is the theorem on the shape of the cluster. Let's see what it says. This stuff at the
beginning isn't so important; you should look at these three bullets. So this describes how the
cluster has evolved after a certain number of iterations of adding particles, and we are
considering the limit as delta goes to zero, so the sizes of the particles become small. When
you've added a number of particles of order one over delta squared, you've grown
approximately a ball of some macroscopic radius. So the interesting values of n in this theorem
are really when n is like one over delta squared, because if you scale down the size of the
particles you're adding, then you need to scale up the number of particles that you add in
order to see anything appreciable happen, and the way that you need to do the scaling is this.
So what do the three things say? Well, the first one says that when we add a particle, it in fact
ends up close to the point at which it was attached on the unit circle, scaled up by the current
size of the cluster. So we attached at a point on the unit circle, we mapped it over to the
cluster, and it turns out it really just gets moved out and scaled up. I mean, this is despite the
fact that the cluster has an extremely complicated boundary and in principle the particle could
be attached anywhere. The particles don't get attached just anywhere; they get attached on a
scaled-up version of the unit circle, and they don't get distorted hugely: they all end up close
to e to the c_n plus i theta_n.
The second thing says there aren't any big holes in the cluster. So any point which is within the
kind of approximate shape occupied by the cluster, there is some bit of the cluster K, N which is
close to it. So we fill out a disc. And the second one is simply saying, this accounts for where all
the particles go up to a certain maximum value of N, which can be taken to be pretty large.
Remember it’s Delta minus two which is really of interest. So this is much larger, but this is just
saying that none of the other particles penetrate in two, the particles are all attached in the
boundary layer. It's a growing disc and you never get a particle which is attached kind of
further and then it should be beyond an epsilon.
So the limit shape is a ball, the cluster fills out the ball, and we have some good control over
where the particles are attached in the final cluster. So that's one level of description.
Disappointing in a way, because it's a deterministic limit and we were hoping to see some sort
of random effects as delta goes to zero. So the rest of the talk is about discovering some
random effects in the cluster, which you might anticipate by looking at the picture here. So
we're trying to describe this by a theorem. What is there to say? Well, you can see bits of it
are picked up by the theorem I just stated: it fills out the disc, and you can see that as time
goes by it accretes in layers. But what we are able to do is to say something about the
structure of these fingers. So that's really what the rest of the talk is devoted to:
understanding the structure of the fingers.
And to get access to that we have to think about this thing we call the harmonic measure flow.
It's helpful, in understanding these things, to take logs. So the boundary of the cluster
becomes this: instead of e to the c_n plus i theta, we're looking at the points c_n plus i theta.
So that's the boundary of the cluster there. If that's the boundary of the cluster K_n, and then
we add another particle, then that's the boundary of the cluster K_{n+1}. And this is realized
by some intertwining of the conformal map with the exponential function, a map from this
line, since the harmonic measure parameterizes the boundary here by theta going from
nought to two pi.
Okay. So we can map this by phi, let's call it little phi_n, to distinguish it from the big Phi_n,
which was the exponentiated version. So this is parameterized by theta from nought to two pi.
On the other hand, so is the other boundary: the boundary of K_{n+1} is also parameterized by
theta. And the boundary of K_n is a subset of the new boundary. So there is a map on the
interval from nought to two pi which takes each point of the boundary here, parameterized by
harmonic measure, so suppose that's the parameter value there, to some parameter value on
the new boundary of K_{n+1}, only there has to be a jump. The map is essentially the identity,
away from some region here around the particle, but we have to force in a new bit of
boundary here. So you end up with this map, G_{n+1}, or, what did I call it, G theta_{n+1}.
So one gets a flow of the harmonic measure as you add more and more particles, and it's by
looking at this flow that we can understand the behavior of the fingers. So the way it goes is
that we take a limit of these flows in some weak sense and understand the limit measure. In
order to do that you have to have some space in which the flows live: to do weak limits in
probability you have to understand the space in which the flows live. So this is the space in
which the flows live.
So there's a picture here of a non-decreasing right-continuous function which sort of repeats
itself periodically. If I consider the set of all such functions D, here without the periodic
condition, then I'm going to say two such functions are close if, when you rotate the axes, so
instead of thinking of it as a non-decreasing right-continuous function, I could draw axes at 45
degrees, and then it becomes a contraction. Okay? And if I consider two such functions, I can
look at the uniform distance between those two contractions. You've got to tilt your head by
45 degrees. So the distance of this function from the identity is just the maximum amount by
which it gets away from that diagonal, for example. But instead of looking at the uniform
norm we have to modify it in the usual way, [inaudible], so it's a locally uniform norm.
Okay. So the harmonic measure flow is a flow, in the sense that it has the kind of normal
property you'd expect of flows: if you act over one time interval by the flow and then you flow
on by the next time interval, then you get what you'd expect by taking the flow over the larger
time interval. However, this flow property turns out not to be robust under the sorts of
topologies which it is possible to put on this flow space. There's a weaker property, a weaker
flow property, which is robust, and that is written here. So these increasing functions have
right and left limits, and the right way to state the flow property to make it robust is to say
that when you compose the left limits over two time intervals then you get something which is
less than or equal to the left limit over the longer time interval, and then something
analogous, with the inequality the other way around, for the right limits.
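Written out schematically (my notation, reconstructed from the verbal description; phi^- and phi^+ denote left and right limits of the flow map over a time interval):

```latex
% Weak flow property: for s < t < u, composing the limits over the two
% subintervals sandwiches the limits over the whole interval.
\phi^{-}_{(t,u)} \circ \phi^{-}_{(s,t)} \;\le\; \phi^{-}_{(s,u)}
\;\le\; \phi^{+}_{(s,u)} \;\le\; \phi^{+}_{(t,u)} \circ \phi^{+}_{(s,t)} .
```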
One property that a flow might have is that it be continuous, in the sense that if you look at
how far things move over short time intervals, it's not very far. So I want to consider weak
flows which are continuous in this sense, and on that space there's a way of defining a metric
which turns the space of these flows into a complete separable metric space. So there you
have an object where you can start thinking about weak convergence. We've looked at that
picture before; that's how you define the distance between two flows.
Now there's a famous object in probability, the coalescing Brownian flow, which I now want to
introduce because it turns out to be the limit object for the harmonic measure flows. So think
about Brownian motions on a circle: take a collection of starting points and run Brownian
motions forwards from these points, with the rule that, going through the starting points in
turn, if I hit a Brownian motion already there, I just join up with it. Then it's easy to see that
you get a distribution on coalescing Brownian paths which is independent of the order in
which you set things up, and you can put more and more points in here, so eventually you
could have Brownian motions going forwards from every rational point, and then there's a
good way to complete this so that you can say, for every point on this line here, where it goes
to on this line. And the picture you end up with is that there are only finitely many images:
every point here has coalesced with others by any given later time, and so that's what the
function looks like. If you draw a graph of the function then it looks like a staircase. The steps
are not all the same size, but the graph of this function is a staircase like that.
>>: [inaudible]?
>> James Norris: Oh. So there's a theta here and it gets mapped to F of theta over there; all of
those go to that one. Okay. So this is theta and that's F of theta, like that. So, in fact, you can
think of the coalescing flow, Arratia's flow, as a probability measure on this space of
continuous weak flows. Sometimes it's necessary to complete spaces in order to support
measures: it doesn't live on the space of perfect flows, but it does live nicely on the space of
these continuous weak flows.
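The construction just described, running Brownian motions forwards and merging on contact, can be sketched for finitely many starting points. This is a minimal discretization of my own; the step size and the adjacent-merge rule are illustrative choices, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

def coalescing_bm(thetas, t_max=1.0, dt=1e-3):
    """Coalescing Brownian motions started from the sorted angles `thetas`.

    Walkers in the same coalesced class share Brownian increments; whenever
    two neighbouring classes cross within a step they merge for good.
    Returns final (unwrapped) positions and the class label of each walker."""
    pos = np.sort(np.asarray(thetas, dtype=float))
    group = np.arange(len(pos))              # class label of each walker
    for _ in range(int(t_max / dt)):
        dW = np.sqrt(dt) * rng.standard_normal(len(pos))
        pos = pos + dW[group]                # one shared increment per class
        for i in range(len(pos) - 1):        # merge neighbours that crossed
            if group[i] != group[i + 1] and pos[i] >= pos[i + 1]:
                old = group[i + 1]
                pos[group == old] = pos[i]
                group[group == old] = group[i]
    return pos, group
```

Running this from many starting points and plotting the paths produces exactly the staircase effect described above: by any fixed positive time only finitely many distinct positions survive.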
So this is basically a result of Arratia, just reformulated in the language of these continuous
weak flows. There's a unique probability measure on the space with the property that, so this
means I can drop a point into the flow and see where it goes, and it does a Brownian motion,
because this martingale characterization implies, by Lévy's characterization, that this is a
Brownian motion. And the second condition says that if I drop two different points into the
flow, then they perform independent Brownian motions, again by Lévy's characterization, up
until the time T when they first hit each other, and thereafter they perform a single Brownian
motion, because as soon as we get past this time we have to compensate the thing by the
usual Brownian drift. Okay? So this is a neat way to give a martingale characterization of the
Arratia flow.
So, more propaganda: this is a good way to think about the Arratia flow. You can invert all
these continuous weak flows; just look at the inverse maps. The time reversal map, which
takes this inverse, turns out to give an isometry of this space, and it also preserves the law of
the coalescing Brownian flow.
All right. So the harmonic measure flow is going to converge to the coalescing Brownian flow,
and that's because the harmonic measure flow turns out to be one of these things we've
called disturbance flows. So I need to describe what a disturbance flow looks like in general.
Suppose I take some basic disturbance: I call it a disturbance because it's not quite the
identity, it's a little disturbance of the identity. So it looks like the identity for most values of
theta between nought and two pi, but there is some little region where it's not quite the
identity. Okay? And then suppose that I randomize this theta here: I can move this up and
down the line to vary the value of theta. So there's a particular function G, so this might be a
picture of G_theta: I specify G nought and then G_theta is obtained from it by translation. So I
fix the disturbance G nought, and then I form a flow just by iterating with random values of
theta. So these are random functions; I choose these thetas uniformly at random.
How will a point move under this flow? Well, for most of the time it doesn't move very much,
because you're away from the disturbance. Every now and again the disturbance lands close
to where you are and you get moved a little bit. And it's symmetric, so you're just as likely to
go up as you are to go down. Okay? So just think about the motion of one point. What's the
scaling limit going to be? Well, by symmetry it's going to be Brownian motion. Now what
about putting two points into this flow? How are they going to move? Well, it's going to be at
different times that you move the two points, provided they're separated. Okay? So they'll
move as independent Brownian motions until they're close, and then when they're close
together they'll tend to get moved in the same way by the disturbance, and so, no surprise, it's
really not very difficult to show, at least for finitely many points, that the limit object is
coalescing Brownian motions. Okay? But in order to get the results we wanted, it was
necessary to consider this limit at the level of the flow, not just at the level of finitely many
points.
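The one-point heuristic above can be checked numerically. Here is a minimal Python sketch, with an illustrative odd bump as the basic disturbance g_0; the bump shape and parameters are my own choices, not the actual harmonic-measure disturbance from the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

WIDTH, HEIGHT = 0.1, 0.02   # illustrative bump parameters; HEIGHT*pi/WIDTH < 1
                            # keeps g_0 strictly increasing

def g0(x):
    """A basic disturbance: the identity away from 0 (mod 2*pi), plus a small
    odd bump near 0, so points just above 0 move up and just below move down."""
    d = (x + np.pi) % (2.0 * np.pi) - np.pi      # signed circular distance to 0
    bump = np.where(np.abs(d) < WIDTH,
                    HEIGHT * np.sin(np.pi * d / WIDTH), 0.0)
    return x + bump

def flow_one_point(x0, n_steps):
    """Iterate x -> g_theta(x) = theta + g_0(x - theta) for i.i.d. uniform
    thetas; since g_0(y) = y + bump(y), each step is x -> x + bump(x - theta)."""
    x = float(x0)
    for _ in range(n_steps):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        x = x + (g0(x - theta) - (x - theta))    # add the bump at x - theta
    return x
```

Because the bump is odd and theta is uniform, each step has mean zero and a small variance, so after n steps the tracked point has moved a distance of order sqrt(n) times the step size, consistent with the Brownian scaling limit.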
So back to the slide. So you specify a disturbance and get a disturbance flow, and you can do a
sort of diffusive rescaling of the flow, which is described here, I don't expect you to absorb it
all, but you look at the flow on various scales, just as when you prove convergence of random
walks to Brownian motion. You can work with a sort of Skorokhod-like topology to understand
that as the right topology: there's a sort of Skorokhod-like metric on the flow space, and in
that metric the rescaled disturbance flows converge weakly to the coalescing Brownian flow.
Okay?
So, as the limit of the disturbance flows, this is kind of a nice way to realize the coalescing
Brownian flow. In fact, I guess I should explain what this criterion expresses: the size of the
disturbance is becoming small, smaller than the scale at which we are trying to look at the
flow. Now the harmonic measure flow, the flow of the harmonic measure on the boundary of
the cluster, turns out to be exactly one of these disturbance flows, and so that flow converges
to the coalescing Brownian flow. That, in fact, we knew a long time ago. But then, by
combining that with the sort of precise location of the clusters which is given by the other
theorem, we are able to transfer that information about the harmonic measure flow to the
shapes of the fingers in the cluster. So this slide is just saying that the harmonic measure flow
is converging to the coalescing Brownian flow.
Now I want to talk about the fingers. If I take any point in the cluster, there's a notion of
ancestry, because each particle attached to another particle and you can trace back in time
and watch where your ancestors were. Okay? And they won't go sort of straight [inaudible]
into the unit circle; they'll move around a little bit. So the finger of a point is its ancestral line
in space, and I've drawn this picture in there; it's on a logarithmic scale. There's also an escape
route associated with a point. If I take some point and attach a piece of thread to it, lead that
thread out of the cluster through the gaps between the particles, and then pull the thread
tight, then I get a unique path from that point out to infinity, outside the cluster. So for any
given point, there's a finger going in and there's an escape route going out.
>>: [inaudible] line segments and circle [inaudible]?
>> James Norris: This picture is misleading because these have all been distorted by the
conformal map. But essentially, yes: the escape routes will have line segments and also bits
which go around particles. Yeah. So because we know essentially where all the particles are,
by the earlier theorem, we can establish the limiting shapes of the fingers and the gaps by
referring the particles back to their thetas, which come from the harmonic measure. So in the
end we can, for example, show that if I take a finite collection of space-time starting points, so
those will be points in this diagram, and consider for each point the finger going in and the
escape route going out, then there are two paths associated to each starting point, one going
forwards in time, one going backwards in time. So that gives a probability measure on a finite
collection of paths which coalesce both when you go forwards in time and when you go
backwards in time, because the fingers tend to coalesce, but the escape routes also tend to
coalesce. Right?
>>: So the fingers go one way, the gaps go the other way?
>> James Norris: Yeah. Let me draw a picture. Theta goes from nought to two pi; take a
collection of points to start from. We trace back the finger here, and this finger and this finger
join up with that one. Okay? And then you trace the escape routes. The cluster is all over the
place, right? So your escape route is very much constrained by the cluster; there are no big
holes in the cluster. Here's the escape route for this one, like that. So this escape route is
coming out here. It's more constrained than it looks from the picture, because I'm only
considering finitely many points here, but here are the escape routes coalescing. Is that all of
them?
>>: [inaudible]? [inaudible] obtained in some version as a scaling image of [inaudible]?
>> James Norris: So this is a sketch of something which is read off
the cluster. You sort of take logs of the cluster, okay? But then this converges, in a weak sense,
to the backwards and forwards lines in the coalescing Brownian flow. That's the theorem.
Okay? So we do understand kind of the stochastic structure. So we simulated the limit law,
though we didn't succeed in making it look very much like the cluster. So this is a simulation of
backwards and forwards lines in the coalescing Brownian flow, but exponentiated, so the light
blue ones go out and the dark blue ones go in. So if you could see, at each point there is a light
blue line going outwards and there's a dark blue line going inwards, and they all coalesce, right?
So that's supposed to look like this one, which is the Hastings-Levitov simulation.
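[For readers who want to experiment: the following is a minimal sketch of forward lines in a coalescing Brownian flow on the circle, my own illustration rather than the code behind the pictures shown in the talk. Each angle performs a Brownian motion, and any two angles that come within a small tolerance are merged and share one driving noise from then on. The function name, the merging tolerance `tol`, and the Euler time step are all my choices.

```python
import numpy as np

def coalescing_circle_flow(n_paths=20, n_steps=500, dt=1e-4, tol=1e-2, seed=1):
    """Sketch of forward lines of a coalescing Brownian flow on the circle.

    Each angle performs a Brownian motion; once two angles come within
    `tol` of each other (distance measured around the circle) they are
    merged and share a single driving noise ever after."""
    rng = np.random.default_rng(seed)
    theta = np.linspace(0.0, 2 * np.pi, n_paths, endpoint=False)
    label = np.arange(n_paths)            # equal labels = coalesced paths
    traj = np.empty((n_steps + 1, n_paths))
    traj[0] = theta
    for t in range(1, n_steps + 1):
        noise = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        theta = theta + noise[label]      # merged paths move together
        order = np.argsort(theta % (2 * np.pi))
        for a, b in zip(order, np.roll(order, -1)):   # circularly adjacent pairs
            gap = (theta[b] - theta[a]) % (2 * np.pi)
            if min(gap, 2 * np.pi - gap) < tol and label[a] != label[b]:
                label[label == label[b]] = label[a]   # union the two groups
                theta[label == label[a]] = theta[a]   # snap to one position
        traj[t] = theta
    return traj, label
```

Drawing each trajectory radially, one family outwards and the time-reversed family inwards, gives pictures of the kind described above.]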
>>: [inaudible] fingers [inaudible]?
>> James Norris: That's correct. You have to imagine the gaps.
>>: [inaudible] match the pictures [inaudible]?
>> James Norris: Sure. Okay. But, okay. So you can just look at the dark blue lines in there,
right? And then that's supposed to look like this one considered in monochrome. Okay. So I'll
stop there.
>>: Questions or comments?
>>: Do you know how they simulated that?
>> James Norris: I know roughly. I'm not the one who did it. So this is Amanda's simulation.
>>: But, I mean-
>> James Norris: Okay. So you basically work out where each pixel goes. The slit map, the
conformal map, is something explicit, right?
>>: Oh, okay.
>> James Norris: Okay? You can just work it out. The known slit map is for the upper half-plane;
you just move it around and you get the slit map for the disc. So that's something you can
ask your computer to do, and you can also get your computer to generate your random thetas,
and then you compose and you see where each point goes. I didn't get a picture.
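[The recipe just described, an explicit slit map composed at random angles, can be written down in a few lines. The following is my own reconstruction under stated assumptions (a radial slit on the exterior of the unit disc built from the Joukowski map, i.i.d. uniform angles, a fixed slit length `L`), not the code that produced the simulations shown.

```python
import numpy as np

def slit_map(z, L):
    """Conformal map of {|z| > 1} onto {|z| > 1} minus the radial slit
    [1, 1 + L], fixing infinity.  The Joukowski map J(z) = (z + 1/z)/2
    sends the exterior disc onto C minus [-1, 1] and the slit domain onto
    C minus [-1, b] with b = J(1 + L); so apply J, stretch [-1, 1]
    affinely onto [-1, b], then invert J on the branch outside the disc."""
    b = ((1 + L) + 1.0 / (1 + L)) / 2.0
    w = (z + 1.0 / z) / 2.0              # Joukowski map
    w = ((b + 1) * w + (b - 1)) / 2.0    # affine: [-1, 1] -> [-1, b]
    s = np.sqrt(w - 1) * np.sqrt(w + 1)  # sqrt(w^2 - 1), cut on [-1, 1], ~ w at infinity
    return w + s                         # inverse Joukowski, exterior branch

def hl0_cluster(n_particles, L=0.05, n_plot=2000, seed=0):
    """HL(0) sketch: compose rotated slit maps at i.i.d. uniform angles
    and return the image of a circle just outside the unit disc (the
    cluster boundary).  Phi_n = F_{theta_1} o ... o F_{theta_n}, so the
    newest map is applied to the points first."""
    rng = np.random.default_rng(seed)
    thetas = rng.uniform(0.0, 2 * np.pi, size=n_particles)
    z = 1.0001 * np.exp(2j * np.pi * np.arange(n_plot) / n_plot)
    for theta in thetas[::-1]:           # innermost (latest) map first
        rot = np.exp(1j * theta)
        z = rot * slit_map(z / rot, L)   # F_theta(z) = e^{i theta} F(e^{-i theta} z)
    return z
```

Plotting the real against the imaginary parts of `hl0_cluster(10000, L=0.02)` should give a cluster of the kind shown; note every map here carries the same slit length, which is the alpha equal to zero case of the talk.]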
>>: I think it would be interesting to offer a little bit of historical perspective, because this
picture was not first made in this high resolution [inaudible]. It was created first in the late 80s
actually, so 10 years before Hastings and Levitov. The only trace left in the literature that
[inaudible] worked on this, actually the only trace in the literature, is that a guy named
Richard Rockburg [phonetic] gave a talk at [inaudible] conference. The title was actually
Stochastic Movement, so that's supposed to be, no actually it's different. And in many ways
the generation is just composing, you fix a conformal map [inaudible]. Now, of course, the
computers were very slow and [inaudible].
>> James Norris: So if I could add a sort of personal [inaudible] to that. I was just thinking: what
happens if you take SLE and drive it by a Poisson random measure? Of course, if you increase
the intensity of the Poisson random measure, you'd expect to get back to Lebesgue measure.
And if you drive SLE by Lebesgue measure you just get an expanding disc. Nothing interesting.
So do you get anything interesting when you drive it by a Poisson random measure? Well,
actually you do. Even when the particle size goes to zero, even when the effect of each little
atom in the measure goes to zero, there's still something stochastic left if you do the proper
rescaling. Of course, we know this, don't we? If we look at the law of large numbers and do the
proper rescaling, there are Brownian fluctuations; they're not on the scale of the cluster,
they're on a small scale. So, okay, I arrived at this from SLE, but then fell upon this older idea.
[inaudible] Makarov, I guess.
>>: I guess I have one more question. So do you think one could take this and construct a
model closer to SLE by building this but still adding some artificial randomness on the size of the
fingers? In other words, if you look at an individual finger, do you expect it to look locally like a
DLA finger, or just with different structure? I mean, I don't expect it to construct real
DLA, but maybe something that will be a little more like it, by building this and then kind of just
run-
>> James Norris: Yeah. So there's other work by Johansson, Sola, and Turner, where they
investigate a rotationally inhomogeneous version of this story. So they add particles of
different sizes at different angles. And that develops in a rather interesting way. I think, so you
could specify in advance some sort of random way, or-
>>: That's what I was-
>> James Norris: One might even try to think how you could drive that randomness by, I mean,
the nice thing to do perhaps would be to use the stochastic fluctuations which are already
present in this to drive it; in a sense, putting alpha positive is doing exactly this, right? So
that's the hard way to do it. So you're saying, well, do it another way and make it look like DLA.
Yes. That's a really good idea.
>>: [inaudible].
>>: Okay. So thanks.