>> David Wilson: Why don't we get started? So today we have Adrien Kassel from ENS.
He is graduating this year supervised by Rick Kenyon who is at Brown. So he's going to
talk to us about random curves, Laplacians and determinants.
>> Adrien Kassel: Okay. Thank you, David. Hi, everybody. Okay, so in this talk I will be
talking mainly about loop-erased random walk and different aspects of it. So let me -- so
that everybody is aware of what we're talking about -- just recall the definition of loop-erased random walk.
So it's a random simple path that was introduced by Greg Lawler in the eighties. And this
is a simple path on Z2 with a very small mesh size. And the construction goes as
follows: you take a random walk started from some vertex, and each time you create a
loop in the path, that is, a self-intersection, you erase this loop, start again from the point you
just erased the loop from, continue the random walk, and so on.
So of course in order to finish this procedure you need to specify some kind of boundary.
So you can have a target vertex and you will kill the random walk when it touches,
eventually, this vertex. So we can do this on any finite graph. Or you can imagine, for
example, here in Z2 having a very large box starting at the origin and stopping the walk
when it eventually touches the boundary. So that's a loop-erased random walk.
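The loop-erasure procedure just described can be sketched in a few lines. This is my own illustration, not code from the talk; the names `neighbors` and `is_boundary` are hypothetical helpers:

```python
import random

def loop_erased_walk(start, is_boundary, neighbors, rng=random):
    """Run a nearest-neighbor random walk from `start`, erasing each loop
    as soon as it closes, until the walk reaches the boundary."""
    path = [start]
    pos = {start: 0}                     # position of each vertex on the path
    while not is_boundary(path[-1]):
        v = rng.choice(neighbors(path[-1]))
        if v in pos:                     # the walk closed a loop: erase it
            for u in path[pos[v] + 1:]:
                del pos[u]
            del path[pos[v] + 1:]
        else:
            pos[v] = len(path)
            path.append(v)
    return path

# Loop-erased walk from the origin, killed on the boundary of a box of size 8
n = 8
walk = loop_erased_walk(
    (0, 0),
    is_boundary=lambda p: max(abs(p[0]), abs(p[1])) == n,
    neighbors=lambda p: [(p[0] + 1, p[1]), (p[0] - 1, p[1]),
                         (p[0], p[1] + 1), (p[0], p[1] - 1)],
    rng=random.Random(0),
)
```

The result is, by construction, a simple path from the origin to the boundary.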
So in fact, loop-erased random walks are closely related to uniform spanning trees of
graphs, so I will just recall that right now. This is a sample of a uniform spanning tree on
a finite square grid, and it was shown by David Wilson how to sample such a uniform
spanning tree from a loop-erased random walk. And the way it goes is as follows: well,
suppose you pick some vertex in the graph and you want to sample a uniform spanning
tree rooted at that vertex and oriented towards it.
Well the way it goes -- This is David Wilson's algorithm -- you pick any vertex which is
not the root. You start a loop-erased random walk which ends at this root. And now you
keep this first simple path you just created. And now you pick some vertex that is not
already on this branch just created. You do a loop-erased random walk killed when it
touches the path you kept at the first step, and you go on until every vertex is covered.
So by construction this will yield a spanning graph which is a tree; there are no cycles.
And David has shown that this is the uniform measure on all spanning trees -- well,
oriented but then you can forget the root, forget the orientation, and you get a uniform
spanning tree.
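A compact way to implement Wilson's algorithm is the standard "last-exit" trick: for each visited vertex, keep only the last direction the walk left it in; overwriting this on every visit implicitly performs the loop erasure. A sketch of my own, assuming a small grid graph:

```python
import random

def wilson_spanning_tree(vertices, neighbors, root, rng=random):
    """Wilson's algorithm: sample a uniform spanning tree oriented towards
    `root`.  Returns parent[v] = next vertex on the path from v to the root."""
    parent = {root: None}
    for start in vertices:
        v = start
        last_exit = {}
        # Random walk until we hit the tree built so far; overwriting
        # last_exit[v] on every visit is equivalent to erasing loops.
        while v not in parent:
            last_exit[v] = rng.choice(neighbors(v))
            v = last_exit[v]
        # Retrace the loop-erased branch and attach it to the tree.
        v = start
        while v not in parent:
            parent[v] = last_exit[v]
            v = last_exit[v]
    return parent

# Uniform spanning tree of a 4x4 grid, rooted at a corner
n = 4
verts = [(i, j) for i in range(n) for j in range(n)]
vset = set(verts)

def nbrs(p):
    return [q for q in [(p[0] + 1, p[1]), (p[0] - 1, p[1]),
                        (p[0], p[1] + 1), (p[0], p[1] - 1)] if q in vset]

tree = wilson_spanning_tree(verts, nbrs, root=(0, 0), rng=random.Random(1))
```

Following `parent` pointers from any vertex leads to the root, and forgetting root and orientation gives a uniform spanning tree, as stated in the talk.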
Now, okay, so I want to introduce just now a slight variation on Wilson's algorithm which
is that we're not going to erase all loops that are created but we are going to keep them
with certain probabilities. So let me -- Well, let me just use this notation. So we pick
some finite graph, and I denote alpha to be a function with values in [0, 1] which to any cycle
in the graph associates some weight.
And now we can also put some -- Of course I didn't say, but Wilson's algorithm works for
any conductances, that is, for a biased random walk on a graph. So, imagine you have some
positive weights on the edges; you perform a nearest-neighbor random walk with transition
probabilities proportional to these weights. If you do this, you will obtain a spanning tree
with weight proportional to the product of the edge weights.
Now in this slightly different version, what I want to introduce is a probability measure on
cycle-rooted spanning forests which will have a weight proportional to the product over
the edges of the conductances times the product over the cycles of their weights.
So if it's not familiar to everybody what a cycle-rooted forest is -- spanning forest, sorry.
So it's this object here. So you have an underlying square grid. A cycle-rooted spanning
forest is a spanning sub-graph. And it contains cycles but in a very specific way, namely
that each connected component contains a unique cycle. So you can see them pictured
here. On the right you have a particular case of cycle-rooted spanning forest where
there's only one -- I mean, it's connected. So we call this a cycle-rooted spanning tree.
Now the algorithm of Wilson I just presented for uniform spanning trees generalizes
really quite easily to this setting. In fact in that case you don't even need to specify any
root. And the algorithm goes as follows: So suppose you want to sample a cycle-rooted
spanning forest which has a weight proportional to this expression: the product over the
edges of the conductances times the product over the cycles in the cycle-rooted
spanning forest of their weights.
>>: I'm sorry. How is the union of cycles -- So what...
>> Adrien Kassel: Yeah, this is a gamma I think.
>>: Oh, gamma.
>> Adrien Kassel: Gamma is a cycle-rooted spanning forest, so it's that.
>>: Okay.
>> Adrien Kassel: So it's a union of disjoint cycles, plus each cycle has a tree rooted
on it. And the algorithm goes as follows -- I mean, I wrote it here but I will explain it out
loud. So you don't fix any root. We're going to sample an oriented CRSF -- I will say
CRSF for cycle-rooted spanning forest -- oriented in the sense that the branches are
oriented towards the cycle and the cycle has some orientation. Now how does it go? You
pick any vertex from the graph and perform a loop-erased random walk -- no, no, sorry -- perform a random walk biased by the conductances.
Now when you construct a self-intersected loop, in the usual algorithm you would just
erase it. But here we're going to flip a coin with winning probability alpha of gamma. And
if it turns out that the outcome is positive, we keep the loop.
So let me just -- So you pick some random vertex. You do the walk. You make some
loop; this is the loop gamma. And you flip a coin with bias alpha of gamma, and you
keep this loop with probability alpha of gamma.
Now suppose the outcome was that you keep it, so this is your first step of the algorithm.
Now you pick some vertex which is not already on this path, and you do the same thing.
So maybe it will create some other loop. You will flip coin and it will say that you have to
erase it. Okay? So you continue and maybe eventually you will touch this first part. So
now you just keep that like that. It will be rooted on this cycle. And so you go on until all
vertices are covered. So sometimes you keep cycles, sometimes you erase them. And
you will touch something which was already created. And what you see is at the end you
have exactly a cycle-rooted spanning forest which is, if you will, oriented but this is not
important.
And if you follow the proof of Wilson's algorithm, basically you will see that this samples
exactly the measure I had on the previous slide, so this measure.
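The cycle-keeping variant can be sketched by modifying the walk so that, when a loop closes, a coin with success probability alpha(loop) decides whether to keep it. This is my own hypothetical sketch, with all conductances equal to 1, and for simplicity a degenerate back-and-forth "loop" of length 2 is treated like any other loop:

```python
import random

def wilson_crsf(vertices, neighbors, alpha, rng=random):
    """Variant of Wilson's algorithm: sample an oriented cycle-rooted
    spanning forest.  When the walk closes a loop, keep it with probability
    alpha(loop); otherwise erase it as in the usual algorithm.  No root."""
    succ = {}                            # v -> next vertex (towards its cycle)
    for start in vertices:
        if start in succ:
            continue
        path, pos = [start], {start: 0}
        while True:
            w = rng.choice(neighbors(path[-1]))
            if w in succ:                # hit a previous component: attach
                for a, b in zip(path, path[1:] + [w]):
                    succ[a] = b
                break
            if w in pos:                 # closed a loop on the current path
                if rng.random() < alpha(path[pos[w]:]):   # coin: keep cycle
                    for a, b in zip(path, path[1:] + [w]):
                        succ[a] = b
                    break
                for u in path[pos[w] + 1:]:   # erase the loop, walk on
                    del pos[u]
                del path[pos[w] + 1:]
            else:
                pos[w] = len(path)
                path.append(w)
    return succ

# Sample on a 4x4 grid, keeping each closed loop with probability 1/2
n = 4
vset = {(i, j) for i in range(n) for j in range(n)}

def nbrs(p):
    return [q for q in [(p[0] + 1, p[1]), (p[0] - 1, p[1]),
                        (p[0], p[1] + 1), (p[0], p[1] - 1)] if q in vset]

crsf = wilson_crsf(sorted(vset), nbrs, alpha=lambda loop: 0.5,
                   rng=random.Random(2))
```

Since `succ` ends up defined on every vertex, the result is a functional graph, and each component of a functional graph contains exactly one cycle, so the output is automatically a cycle-rooted spanning forest.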
Okay, so now I want to talk -- So what we have just seen so far is that a loop-erased
random walk is related to uniform spanning trees as was already known and of course,
with this modified version, also related to the cycle-rooted spanning forests.
So in a way -- But the difference between the two approaches is that in the first case you
only have simple paths; I mean, they're not closed. And here the point is that we really
construct certain loops, and we may see them, if we want, as excursions of the
loop-erased random walk. Right? Because whenever a loop is kept, of course it's not just a
simple random walk along this curve, but each time you may have created some loops
that you have erased, etcetera. So it's a different kind of random path.
So in a way this correspondence will enable us to study either the trees using the
loop-erased random walk, or the loop-erased loops using these trees. And this will be the topic
of my talk right now.
So I want first to concentrate on this special case where you have only one connected
component. Well, a way to achieve such a measure is to take -- So let's forget for the
moment about conductances; we'll just take them all to be one. And now take for any
loop the same weight, which is some very small parameter alpha that will tend to
zero. Well, if you are on a finite graph, then if alpha is very small you're likely to keep only
one loop, basically because if you had K loops, the probability of having K loops would be
proportional to alpha to the K. So if alpha is very small, you will keep only one loop. And in
fact you will have the uniform measure on cycle-rooted trees with only one loop.
So I want to talk now just a little bit about this uniform cycle-rooted spanning tree. So: a
finite graph, the uniform measure on cycle-rooted trees, which -- I mean, we can also call it
the unicycle, the uniform unicycle. Okay, so that's one. So this is quite a dense slide but it
recalls different results that I will now explain.
So this is joint work with Richard Kenyon and Wei Wu from Brown University. Let me just
explain briefly the notation. So here we have a square grid and a triangular grid. Suppose you
have some infinite periodic grid; we will just suppose for the moment it's a square
grid. The two other grids would work too.
And we take some large box of size N. So we will look at the uniform measure on cycle-rooted
spanning trees of this graph. I can call this graph Gn, and we have the measure
Pn uniform on CRSTs. So L is a random variable equal to the length. So, when I say
length I just mean the combinatorial length, the number of edges of the cycle. I have somewhere this
random cycle. So L is the length. A is the area. So area just means for me the number of
faces. We have a planar graph, so the number of faces inside the loop. And so the results
that I present on this slide concern the expected length and expected area and expected
area squared. And I will talk afterwards about the higher moments of the area.
So let me start by explaining the first two lines which are not due to me. Okay, so let me
just first explain the second line. So in a paper by Yuval Peres and Lionel Levine, they
have shown that the expected length of the uniform unicycle is related to some
constant xi that they call the looping constant. At least -- Okay, I should explain what
this means. This is actually an equality asymptotically. So they have shown, and I will
explain in a second how, that for Z2, if you take now N to go to infinity, the expected
length tends to 8, the integer number 8, which is a bit surprising. And I don't think we
have a good explanation yet as to why this is rational or even an integer, but it's
that.
And roughly to explain how the argument goes: So, what is this xi? Well, xi is some
constant that depends on the lattice but is given by the loop-erased random walk. So
suppose you fix the origin here as zero. You're in this large box of size N. And you take a
loop-erased random walk killed on the boundary. Now you can ask the question for a
fixed N: what is the expected number of neighbors of zero on this path? So it's a simple
path but, okay, you can go to this neighboring vertex like that. I mean you may have only
one neighboring vertex, but you can go to it directly like that, or by doing some loop and
going to it afterwards. So it's not obvious what the expected number of neighbors of the
origin is in this loop-erased path. But it turns out that as N goes to infinity, this has a limit
which is xi the looping constant.
And David Wilson and Richard Kenyon have shown that for these different grids, xi is
equal to five-fourths, five-thirds and thirteen over twelve.
Now how did Yuval Peres and Lionel Levine relate this to that? Well, I don't have time
now to explain it all, but it goes through the burning bijection of Dhar which relates
uniform spanning trees to the Abelian sandpile and then uses some convergence for the
Abelian sandpile by [inaudible]. And...
>>: I'm sorry. What was delta there?
>> Adrien Kassel: Oh, I'm sorry, yes. Delta is the degree of these regular graphs.
[laughing] So, yeah. Okay. So this proof of convergence was done; at least the paper
can be found for the square grid. I think David Wilson has work in progress showing that it works
for the other grids and, therefore, using the same argument as in Peres and Levine's paper, you
derive these quantities. So it's kind of interesting that also in these other grids, you get
integers which are actually equal to twice the degree of the dual graph. Well, okay. That
doesn't say that much but --. I mean, this definitely requires some explanation but we don't
[inaudible].
Okay, so that was for the expected length. So you might think that if this random loop
has a finite expected length, then it could be the same for the area, or not. And it turns out
not to be. So in fact the area somehow has a heavier tail. So let me just explain what
we have shown. I will just talk first of all about the square grid; I mean the other
cases are similar. The expected area in the box of size N grows as log N. And we have
the explicit factor; it's 4 over pi log N. And now the expected area squared grows as N
squared and the constant is some [inaudible] constant which involves this number. I will
talk about it in a second. But just notice that -- So the constant in front of the leading
term is something that is lattice dependent, so these numbers, times something that just
depends on the domain.
Ah. Okay, I understand why you might be surprised. I didn't explain at all what is D. I'm
sorry. Actually, to compute exactly these constants, what we have used is some -- Of
course we're looking at an infinite volume limit. But since we wanted to convert some kind of
discrete Riemann sum to an integral, we actually looked at the rescaled version. So the
way we did it is you take some domain in the plane, and now you put in the square
lattice with mesh 1 over N. So each time this will define some graph Gn which just lies
as a discrete approximation of this domain. And we may define the same measure, the
uniform measure on the unicycles, and compute the length and the area.
When I say length and area it's always the combinatorial one; I'm not talking about the
Euclidean one which of course will tend to zero.
But if we have sort of this sequence of graphs that have this boundary at infinity in the
sense that when they are rescaled they approximate this domain then the constant in
front of the leading term of the area squared is given by this quantity. So G of z, w is
the Green's function on the domain with Dirichlet boundary conditions. So you can just
think of this quantity as being the mean expected exit time of Brownian motion from this
domain. That is, you take a Brownian motion started at some point z, then you integrate
over w the time it has spent at points w of the domain. This will give you the expected exit time
from the domain, so for Brownian motion killed on the boundary. And now you average this over
all z's. So this will give the mean expected exit time.
So here are a few examples. So for the disk -- So this quantity, at least with this
rescaling, is something which is scale-invariant, rotationally invariant, translationally invariant,
and it is maximized for the disk. So for the disk it is worth 1 over pi.
And now for a rectangle with aspect ratio tau, it's given by this, for example, so this
series. Just a small comment is, I mean, one way to see that it is maximized for the disk
is actually to relate this constant to this, what I call, [inaudible] of D. So there's a beautiful
short paper of [inaudible] from the fifties where he computed this quantity, which has a
mechanical interpretation as being the torsional rigidity of some metal beam of cross-section D.
So you take a cylinder over the domain D and you ask about the torsional rigidity of
this. This turns out to be the quantity, expressed as sort of a minimization PDE problem.
And by using some symmetrization he shows that it is maximized for the disk. But it's
kind of interesting.
Now just to talk a little bit about the proof of this result. So I explained these previous
works. Now, how about these two. What I call here critical weight is something that
comes from the scaling limit approach. So in the scaling limit approach what we're going
to do is that we're going to express these quantities, expected area and expected area
squared, in terms of Green's functions on the graph. And now here I have used the
continuous Green's function on the domain. So what we have actually used is some
convergence of the discrete object when rescaled to the actually continuous object.
And this requires that discrete harmonic functions converge to continuous harmonic functions,
and then you have to weight your graph with certain isoradial critical weights. And it
turns out that for these grids they are given by these numbers. Okay, so this is why I wrote down
these numbers.
Now I will explain a little bit about the proof. So let me start with the area. So I'm just
going to consider a case of a large box of size N. And actually the simple observation
you can make is that if you're looking at a cycle-rooted spanning tree on this graph with
free boundary conditions -- and now you take the dual planar graph -- well, this is going
to be a two-component forest which is wired on the boundary.
Let me just draw: if I have here a cycle, some branches free on the boundary, and I
take the planar dual, this will be some tree inside and some tree outside which is rooted
on the boundary. So in fact a way to answer -- Okay, and now, for example, if we want to
compute the expected area, we just have to sum, over all faces, the probability that this
face lies inside the cycle of the uniform CRST, right? This is just the expected
number of faces that are inside. This is the area.
So in fact we need to know these probabilities. So we want to know, given a face, what is
the probability that it lies inside the cycle. Well, if we take the dual approach, it's just
asking, "Well, if we pick some face, so it's a point in the dual graph, what is the
probability that it lies in this floating component?" -- so the component of this bi-tree which
is not rooted on the boundary.
And then we can use some electrical network theory or some results already from
Kirchhoff about the number of spanning trees and we can show that the probability that
F lies in this uniform spanning tree is equal to -- So I will denote kappa -- Well, okay, I
will just write: the number of spanning trees of the graph divided by the number of
cycle-rooted spanning trees, times the Green's function on the dual graph -- so I will put a star
-- with [inaudible] boundary condition on this outer vertex, evaluated at the face F.
This is in fact the electrical resistance in a network between one point, the point F here,
and -- I mean, if you put unit resistances on all edges and ask what is the resistance between
this point and the outer vertex. So we obtain that. And actually when we consider -- So,
okay, basically what is the asymptotic of this?
This Green's function evaluated at the same point for almost all faces will be something
like 1 over 2 pi log N. Now this ratio, number of spanning trees divided by number of
cycle-rooted spanning trees, is actually related to the expected length of the cycle-rooted
spanning tree. And this was shown to converge. So actually this will be roughly -- Let me
get it right -- 8 over N squared.
Okay, the reason for that, I will just say it in a few words, is that if you take a uniform
spanning tree and you add some random edge, you will obtain a cycle-rooted spanning
tree. But it will not be the uniform measure but it will be biased by the length of the cycle.
And, therefore, if you compute the number of spanning trees times the number of edges
which are not on the tree, and divide it by the number of cycle-rooted spanning trees, you
will get exactly the expected length, which is 8.
Okay. So using this result, you obtain that roughly this will be 8 over N squared times 1 over 2
pi log N. And so this is now 4 over pi, times 1 over N squared, times log N. And now you sum
over all the N squared faces and you will get that the expected area is 4 over pi log N.
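Assembling the steps just described into one line (my own reconstruction of the blackboard computation, writing kappa for the number of spanning trees and kappa_CRST for the number of cycle-rooted spanning trees):

```latex
\mathbb{E}[A] \;=\; \sum_{f}\mathbb{P}(f\ \text{inside the cycle})
\;=\; \sum_{f}\frac{\kappa}{\kappa_{\mathrm{CRST}}}\,G^{*}(f,f)
\;\approx\; N^{2}\cdot\frac{8}{N^{2}}\cdot\frac{1}{2\pi}\log N
\;=\;\frac{4}{\pi}\log N .
```

Here 8 over N squared is the asymptotics of the tree-to-CRST ratio, and 1 over 2 pi log N that of the dual Green's function at coinciding points.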
Now for the case of the second moment of the area, it's a somewhat similar approach. We just
write that the expected area squared is a sum over all pairs of faces of the probability
that both faces lie inside the cycle of this CRST. And now the result is that --.
The probability of two faces being inside the CRST: well, how can we compute that?
We're going to condition on one face to be inside. So let's take F and F prime. Let's
suppose F is inside the cycle. Now what's the probability that F prime will be inside? Well
using either Wilson's algorithm or some other technique you can show that it's the
probability that a random walk started at F prime touches F before it hits the boundary.
And this is just a harmonic function which is worth zero on the boundary and one at F
which you evaluate at F prime.
So the conditional probability here will just be this harmonic function which you can write
in terms of the Green's function as this ratio: G star evaluated at F, F prime divided by G
star at F, F. Now using the result we previously had for the probability of F, which is this ratio
times that, there's a simplification and you obtain the number of spanning trees over the number of
cycle-rooted spanning trees, times G star of F, F prime.
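The identity being used here, that a hitting probability is a ratio of Green's function values, is easy to check numerically: with L the Laplacian killed at the boundary and G its inverse, the probability that a walk from F prime hits F before the boundary equals G(F', F) / G(F, F). A small sanity check of my own, in Python with NumPy:

```python
import numpy as np

# Interior of a 6x6 box; the boundary is absorbing (Dirichlet).
n = 6
interior = [(i, j) for i in range(1, n) for j in range(1, n)]
idx = {v: k for k, v in enumerate(interior)}

# Killed combinatorial Laplacian: degree 4 on the diagonal, -1 towards
# interior neighbors only (steps outside the box are killed).
L = np.zeros((len(interior), len(interior)))
for (i, j), k in idx.items():
    L[k, k] = 4.0
    for q in [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]:
        if q in idx:
            L[k, idx[q]] = -1.0
G = np.linalg.inv(L)                     # discrete Green's function

f, fp = idx[(3, 3)], idx[(1, 1)]

# Hitting probability of f before the boundary, from each start, by solving
# the discrete Dirichlet problem: h harmonic off {f}, h(f)=1, h=0 on boundary.
A = L.copy()
b = np.zeros(len(interior))
A[f, :] = 0.0
A[f, f] = 1.0
b[f] = 1.0
h = np.linalg.solve(A, b)
```

The value `h[fp]` agrees with `G[fp, f] / G[f, f]` to machine precision, which is exactly the ratio used on the board.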
Now, using the same result as before, this will sort of be that. And now, using some
care and the convergence I explained earlier -- I mean, taking care of these critical weights
-- you can show that the Green's function will be an approximation of the continuous one. And
since you are now summing over all pairs of faces, this will be like a Riemann sum for
the integral which is written here.
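In the same notation as before, the second-moment computation reads (again my own reconstruction; C denotes the lattice-dependent constant from the slide):

```latex
\mathbb{E}[A^{2}] \;=\; \sum_{f,f'}\mathbb{P}(f,f'\ \text{inside})
\;=\; \sum_{f,f'}\frac{\kappa}{\kappa_{\mathrm{CRST}}}\,G^{*}(f,f')
\;\approx\; \frac{8}{N^{2}}\sum_{f,f'}G^{*}(f,f')
\;\sim\; C\,N^{2}\iint_{D\times D}G_{D}(z,w)\,dz\,dw ,
```

using that the probability of f and f prime both being inside is the probability for f times G*(f, f') / G*(f, f), after which the double sum becomes a Riemann sum for the integral of the continuum Green's function.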
Okay. So now let me just go a bit more quickly. It turns out that for higher moments,
this kind of simple bijection -- relating the cycle-rooted spanning tree to a dual, which is
just this forest with two components, and then using these simple electrical properties -- turns
out to be much harder if you want to take more points into account. For example, if I
wanted to know what's the probability that three specified faces are inside the cycle, which
would be a way to compute the third moment, at least for us it is intractable; we don't know
how to do it. But via some other method that I will now introduce, we were able to show
that the k-th moment grows like that. And this is just a constant that depends on k.
So these techniques I will now introduce are more related to a scaling limit approach.
We will now not only consider this uniform cycle-rooted tree on the graph, but rather try to
compute some scaling limits of CRSFs, cycle-rooted spanning forests. Okay, let me
show some samples, and then I will explain what these are and how we can compute
them.
Okay, so there are different surfaces. The first one is a sphere with its usual metric, then
some other curved surface. This is a hyperbolic ball, and this is a flat torus with these sides
identified. And I represented here only the loops of certain cycle-rooted spanning forests,
which will be weighted in order that they are macroscopic, and then we forget about all the
branches that cover the whole graph.
I didn't say, but we will consider some graphs embedded on these surfaces which
approximate them in a certain way. Okay, so let me now go to this topic.
For this presentation I will consider only one case, which is: we take some domain in the
plane which is topologically non-simply connected, so there are some holes. And we
approximate it by the square grid, 1 over N Z2. So this will be the graph Gn, and this is
the surface sigma. Now for each N we will consider Pn to be the uniform measure on
incompressible cycle-rooted forests.
What is incompressible? It means that all the cycles of the cycle-rooted spanning forest,
so like on the left, must be non-contractible. So they have to enclose at least some of the
holes. This will ensure that they are macroscopic when we go to the limit of mesh size
going to zero.
So this is the measure we consider, and we can view it as a measure on the space of
multicurves of the surface, which are just finite collections of continuous curves, which I
will suppose in this case incompressible as well, so non-contractible. So maybe
something like that and something like that. And so we consider, for the surface sigma,
omega to be -- So omega k means multicurves with k curves. So this is naturally
endowed with a structure of metric space, and we can consider F to be its Borel sigma-field.
And I will define the two next notations in a second.
Okay, so that's for example what would be an incompressible CRSF on such a surface.
And this is a theorem I want to prove. So I want to prove -- So this is joint work with
Richard Kenyon. There exists a probability measure, P -- So on this space of multicurves
of this surface sigma -- such that the sequence Pn converges to P.
The idea of the proof uses a classical paradigm: we will show tightness of the
sequence of measures and then show convergence for some specified events that form a
determining class for the sigma-field. And then, by using Prokhorov's theorem, this will
ensure convergence to an actual probability measure.
>>: Sorry. Can I just make sure...
>> Adrien Kassel: Yes?
>>: ...[inaudible] that I understand? So the condition is every cycle is non-contractible.
>> Adrien Kassel: Yes, yes.
>>: So...
>> Adrien Kassel: Yes. Yeah, I'm sorry. I didn't...
>>: So you could have many cycles -- [inaudible] so you could have just one cycle
[inaudible]?
>> Adrien Kassel: Yes, certainly. Yes, the number of cycles is not a condition. I mean, it
could be one. It could be more.
>>: [inaudible] have to prove [inaudible]?
>> Adrien Kassel: Yes, yes. Certainly, yes. Yes. The only condition is that it's uniform
and on the space where the cycles are non-contractible.
>>: You said it clearly I just wanted to make sure it had sunken into...
>> Adrien Kassel: Okay. Yeah.
>>: ...my brain correctly.
>> Adrien Kassel: So I will explain the second point in a bit of detail, and let me explain
what all these cylindrical events are, which form a determining class and which we want to
compute.
Well, basically they are events like this one, so let me explain what this means. Well, we
take the surface and now we will -- Okay, one approach to the topology of a curve is to
say, "Well, what if we know the topology of the curve with respect to --" Okay, I will add some
blue holes in the surface and then not ask precisely how the random curve behaves on
the surface, but just what its homotopy class is with respect to these additional
punctures.
And the idea is that this will generate the Borel sigma-field, because if you knew the
probabilities for all these events -- and let's say you take the blue squares to be on a dense
countable subset of the surface -- you would actually get the precise information on the
topology of the curve.
So basically we're interested in what I called here a lamination, and lamination is just a
word to say isotopy or homotopy class of multicurves with respect to this topology. So I fixed
now some blue squares in addition to the black squares, and the cylindrical event is -- So E,
which depends on the blue squares and depends on some lamination, so some
homotopy type with respect to this topology -- is just the event that the CRSF
belongs to that homotopy type.
Now what we want to show is convergence of Pn of E. Oh, this is E. Okay, so we want to
show that this converges. Now, okay, in order to do that, this is where the Laplacian
comes into the picture. I will just recall technology that was introduced recently by
Richard Kenyon which enables us to compute exactly the probability of this event I have
just shown. So the idea is that you take a graph and you define a connection on the graph
-- so I will say an SU(2) connection -- to be the data, for each oriented edge V, V prime, of
some matrix phi V, V prime belonging to the SU(2) matrices.
And now the only condition you impose is that on the reverse orientation it's the inverse
of this matrix, so phi V prime, V is phi V, V prime inverse.
And given this data what you can compute is around any simple closed curve you can
compute the product of these matrices. And this is what he coined, he termed in this
context holonomy of the connection on this curve. So this is just mimicking some
geometry language. So holonomy...
>>: [inaudible]
>> Adrien Kassel: So this is on any finite graph.
>>: Oh, [inaudible]
>> Adrien Kassel: Yes, this is on any finite graph. Holonomy of phi around cycle gamma
is the composition of the phi Vi, Vi plus 1 if the cycle is indexed by vertices Vi. Now,
okay, Richard Kenyon has proved the theorem which is that -- Where should I write it?
Okay, you can define a twisted version of the usual combinatorial Laplacian on the
graph using this connection. How does it work? Well, usually the Laplacian acts on
functions over the graph just like that. But in this case the Laplacian will act on -- You
can imagine that the graph is endowed with a two-dimensional bundle, so over each
vertex you have a two-dimensional complex vector space. And F is now a section of this
bundle. And the twisted version is that Laplacian phi of a section, at a vertex V, is the sum
over the neighbors V prime of F of V minus phi V prime, V, F of V prime.
And the theorem shown here is that you can define the determinant of Laplacian phi. It
turns out that it's the sum over cycle-rooted spanning forests of the product over the
cycles of the forest of 2 minus the trace of the holonomy around gamma. When you
have matrices that are in SU(2), their trace is a real number between minus 2 and 2, so this is
nonnegative. And this is actually a sum over the cycle-rooted spanning forests [inaudible]. So
note that, in contrast with the usual combinatorial Laplacian, which is not invertible on a
graph, well, this operator has a non-zero determinant whenever there is at least one cycle
whose holonomy is non-trivial, not equal to the identity; otherwise it would be
zero.
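As a tiny sanity check of my own (not from the talk): on the triangle graph, whose only CRSF is the triangle itself, the ordinary complex determinant of the 6x6 twisted Laplacian comes out as the square of 2 minus the trace of the holonomy; for SU(2) connections the determinant in the theorem is naturally a quaternionic one, and the complex determinant is its square.

```python
import numpy as np

# Triangle graph with an SU(2) connection: identity on edges (0,1), (1,2)
# and a rotation U on edge (2,0).  The holonomy around the triangle is U.
t = 0.7
U = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])          # a rotation lies in SU(2)
I2 = np.eye(2)
phi = {(0, 1): I2, (1, 2): I2, (2, 0): U}
# Reverse orientation carries the inverse (= conjugate transpose in SU(2)).
phi.update({(b, a): M.conj().T for (a, b), M in list(phi.items())})

# Twisted Laplacian as a 6x6 block matrix:
# (Delta f)(v) = sum over neighbors v' of f(v) - phi[v', v] f(v').
D = np.zeros((6, 6))
for v in range(3):
    D[2 * v:2 * v + 2, 2 * v:2 * v + 2] = 2.0 * I2   # each vertex has degree 2
for (a, b), M in phi.items():
    D[2 * a:2 * a + 2, 2 * b:2 * b + 2] = -M

hol = phi[(0, 1)] @ phi[(1, 2)] @ phi[(2, 0)]    # holonomy around the cycle
crsf_sum = 2.0 - np.trace(hol)                   # only CRSF: the triangle
det = np.linalg.det(D)
```

With the trivial connection (U the identity) the holonomy is trivial, the weight 2 minus trace is 0, and the determinant vanishes, matching the remark that the operator is invertible only when some cycle has non-trivial holonomy.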
Now let me call this function Z of phi. Now it turns out that if we impose some condition --
there's a natural condition to impose on the connection. This was for any finite graph.
But now, let me come back to the case of our graph Gn. It's embedded in the surface, so
there's some topology. Well, you can ask that the connection be flat, which means that the
holonomy around any contractible curve is one, I mean, is the identity matrix, which means
that no contractible curve can count in this sum for a flat connection: its trace will be equal
to two, the trace of the identity, and 2 minus 2 will be zero.
So if you consider a flat connection, Z of phi is a sum over incompressible CRSFs in the same
way: it's the product over the cycles of 2 minus the trace of the holonomy of gamma. Now
there is something even more which is true about flat connections, which is that if you take
two curves that lie in the same homotopy class, and which therefore define the same
lamination with respect to this topology, the trace of the holonomy will be the same. This
is just because when you deform, you can write it as a product of one cycle which is
contractible and the other cycle. And since the contractible cycles have holonomy one,
it's easy to see that they will define the same trace of holonomies.
So in fact more is true. You can write that this is a sum over laminations and a sum over
incompressible CRSFs lying in the lamination. So: sum over laminations L, sum over
CRSFs lying in this particular lamination L, of a certain function that I will call T L of phi,
which is the product over -- for any representative of this lamination, it's the product
over the cycles of 2 minus the trace of the holonomy. So this just depends on the lamination,
not on the particular multicurve which represents it.
And now, given what I just said, it means that this can be factored out. Right? It doesn't
depend on any particular CRSF. It's just T L. And now, what is this number? Well, this
is the number of CRSF's lying in L. But this is the numerator, if you want, of the
probability we're trying to compute, because Pn of E is exactly the number of CRSFs lying
in the lamination we have chosen, L, divided by the total number of incompressible
CRSFs.
So I will write A of L for this number, and just Z zero for this partition function.
Now how to compute: so...
>>: [inaudible] done with the [inaudible]?
>> Adrien Kassel: I think I want to show you -- Well, okay maybe it's --. I don't know. It
would be easier if we remove it. I mean the last slides are not that important. I mean if
it's easier for you to follow if I write in the middle?
>>: [inaudible]
>> Adrien Kassel: Okay. So the question is really to show that this ratio converges. Using the technology we've just set up, basically A of L is the coefficient of T L of phi in Z of phi: this is Z of phi, and we want the coefficient A of L in front of that function. I say function because now we're not prescribing any particular flat connection; we're talking about this as a function over the space of flat connections on the graph. And there is a theorem of Fock and Goncharov that these functions over the space of flat connections are actually linearly independent. So asking for the coefficient A of L is just asking for the coefficient in this basis expansion.
So if I just use the usual dual, we're asking for the coefficient of T L in Z of phi, and this is the dual of T L, taken in the vector space of functions over the space of flat connections, applied to Z of phi divided by Z zero. By linearity, this is T L star of Z of phi, over Z zero. Okay.
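As a toy analogy for this coefficient extraction (purely illustrative: monomials stand in for the linearly independent functions T L of phi, and made-up coefficients stand in for the numbers A of L over Z zero), linear independence is exactly what lets you read off a coefficient in the basis expansion, here by solving a linear system:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy basis of linearly independent functions (stand-ins for T_L(phi)).
basis = [lambda x: np.ones_like(x),
         lambda x: x,
         lambda x: x**2]

coeffs_true = np.array([2.0, -1.0, 0.5])   # hypothetical A(L)/Z_0 values
Z = lambda x: sum(c * f(x) for c, f in zip(coeffs_true, basis))

# Evaluate at enough sample points; linear independence of the basis
# makes the linear system uniquely solvable, recovering each coefficient.
x = rng.normal(size=10)
M = np.column_stack([f(x) for f in basis])
coeffs, *_ = np.linalg.lstsq(M, Z(x), rcond=None)
print(np.allclose(coeffs, coeffs_true))  # True
```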
So this is an exact computation; it is done on the finite graph Gn, but now we want to show the convergence. In fact it was proved by Richard Kenyon that this ratio converges; he showed this using some discrete complex analysis techniques. So we know that what's inside converges, and we need to show some continuity of this operator. It is not obvious, but we were able to show it. By combining these two facts you obtain that this has a certain limit.
So this is how you construct this limit. Now let me say just a word about the first point, tightness. Tightness essentially means showing that the curves remain simple and that they don't have, for example, double points. And this relies on previous work about the loop-erased random walk because, as I explained in the beginning, these CRSF's can be sampled by means of Wilson's algorithm. Much work was done by [inaudible], [inaudible], Newman and Wilson, then Schramm, then Lawler, Schramm and Werner to show that loop-erased random walk converges to SLE2. And we use those tightness results here. So this is for the convergence.
Okay, so this example just shows an incompressible CRSF on an annulus, and we can hope it has some limit. We do not know of a direct description in the [inaudible], like SLE2 for loop-erased random walk, but I just want to point out in this example that the dual of this incompressible CRSF, which is conditioned to have only one loop, is exactly a tree which is rooted on this and this. So in this example it appears that there should be some close link to SLE, but it's just a heuristic for the moment.
Okay, so there are some other weights corresponding to these pictures I have shown you that we can define. And it actually requires some more technicalities to show the convergence; we don't know of any easier approaches.
So one of them is to look at a measure on cycle-rooted spanning trees, so only one component. But if you take the uniform measure, of course, it will not have a scaling limit: we have seen before that the expected area will be very small if you look at the Euclidean...
>>: I missed something. I thought the construction of these measures was partly
motivated by computing the moments of [inaudible] K?
>> Adrien Kassel: Yes. So that's exactly what I'm about to say. So in this particular case, it's not the uniform measure on CRST's that has a scaling limit but the one weighted by the square of the area. It turns out that if you favor large loops by weighting them by the square of the area, you can show that it converges. And now, in order to get the moments of the area, basically this is just a change of measure, and you will see that the (K plus 2)-nd moment of the area divided by the second moment of the area is exactly the K-th moment of the area for this biased measure.
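This change-of-measure identity can be checked mechanically. In the sketch below the list of "areas" is random placeholder data, not actual CRST areas; the point is that E_uniform[A^(K+2)] / E_uniform[A^2] = E_biased[A^K] holds exactly for any such data when the biased measure reweights each configuration by the square of its area.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: "areas" A(omega) of finitely many configurations,
# each uniformly likely (placeholder values, not actual CRST areas).
areas = rng.integers(1, 50, size=1000).astype(float)

def uniform_moment(k):
    return np.mean(areas ** k)

# Measure biased by the square of the area: P_biased(omega) ∝ A(omega)^2.
weights = areas ** 2
weights /= weights.sum()

def biased_moment(k):
    return np.sum(weights * areas ** k)

# The identity from the talk:
# E_uniform[A^(K+2)] / E_uniform[A^2] = E_biased[A^K].
K = 3
lhs = uniform_moment(K + 2) / uniform_moment(2)
rhs = biased_moment(K)
print(np.isclose(lhs, rhs))  # True
```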
And so showing that this has a scaling limit shows that rescaling by the appropriate factor has a limit, and that's how we get the moments. So compared to the first cases it's more intricate; there might be some simplifications, but we are not aware of them.
>>: [inaudible] argument comes to mind.
>> Adrien Kassel: Actually we didn't introduce this to prove this. We just realized that it
was a consequence.
>>: So you get the moments [inaudible]...
>> Adrien Kassel: Actually you just write it directly. So this is for the uniform measure: E of A to the K plus 2, divided by E of A squared. So this one, we know it. And this is the expectation, for the weighted measure, of A to the K. Okay, but A to the K here is a combinatorial area; if I rescale by N squared, it becomes a Euclidean area, so I should multiply here. And this is equal to N squared... So you get that it's this constant times N to the [inaudible], so it's direct.
>>: [inaudible]?
>> Adrien Kassel: No, it's just like -- the expected average -- Yeah. That's it, yes, exactly. Okay, and there are some other measures we can define. For example, on the sphere I showed you there were two loops. That's because we're not only looking at measures on CRST's: there can be several loops, and they're also weighted as a function of the area they enclose, basically.
Now how much time do I have left?
>>: Why don't you wrap it up?
>> Adrien Kassel: This is my last slide; I just want to make a small comment. So the uniform spanning tree -- okay, this is not a uniform spanning tree, but think of a uniform spanning tree -- you can view it as a point process on the set of edges of the graph. Right? It's just a bunch of edges of the graph. And it was proven by [inaudible] that the uniform spanning tree is a determinantal process, in the sense that the marginals can be computed: the probability that certain given edges, say K edges, are all inside is a K by K minor of some kernel.
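The determinantal structure for the uniform spanning tree is what is usually called the transfer current theorem (due to Burton and Pemantle): the probability that k given edges all belong to the tree is the determinant of the corresponding k by k minor of the transfer current matrix Y = B L^+ B^T, where B is the signed edge-vertex incidence matrix and L^+ the pseudoinverse of the Laplacian. Here is a small sketch checking the one-edge marginals on a made-up graph (the example graph is mine, not from the talk):

```python
import numpy as np
from itertools import combinations

# Small test graph: vertices 0..3, a 4-cycle plus one diagonal.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

# Signed incidence matrix B (rows: edges, columns: vertices).
B = np.zeros((len(edges), n))
for i, (u, v) in enumerate(edges):
    B[i, u], B[i, v] = 1.0, -1.0

L = B.T @ B                      # graph Laplacian
Y = B @ np.linalg.pinv(L) @ B.T  # transfer current matrix

# Brute force: a subset of n-1 edges is a spanning tree iff it is
# acyclic, checked with a union-find structure.
def is_spanning_tree(subset):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in subset:
        ru, rv = find(edges[i][0]), find(edges[i][1])
        if ru == rv:
            return False          # creates a cycle
        parent[ru] = rv
    return True                   # n-1 acyclic edges => spanning tree

trees = [s for s in combinations(range(len(edges)), n - 1)
         if is_spanning_tree(s)]

# The marginal P(e in T) under the uniform measure equals Y[e, e].
for e in range(len(edges)):
    p_brute = sum(e in t for t in trees) / len(trees)
    print(np.isclose(p_brute, Y[e, e]))  # True for every edge
```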
Well, it turns out that these measures on CRSF's are also determinantal processes, viewed as [inaudible] processes on the edges. However, it seems that they are in a slightly more general class. For example, take this particular picture: a measure on CRSF's which is conditioned, say, not to enclose any single hole, so that it has to enclose a more complicated topology. Well, I can construct such a measure basically using some connection, as here, and then the determinant of the Laplacian will be a partition function for these. So I take a connection phi such that the holonomy around any single hole here is trivial, but the holonomy around a more complicated path is not; this relies on the non-commutativity of the SU(2) matrices. And it turns out that this process can still be thought of as a determinantal process.
But its kernel now takes SU(2) values, so it is [inaudible] non-commutative. In fact I have looked a little bit at these processes, and they share most of the known properties of determinantal processes, because those are basically algebraic: for example, the number of points in a domain is a sum of [inaudible], etcetera. But I haven't been able to make use of this. Still, they seem to belong to a slightly more general class because, for example, this process cannot be written as a usual symmetric determinantal process. Okay, I think I'll stop there.
[applause]
>> David Wilson: Any more questions?
>>: So even though this is not, you said, a determinantal process in the usual sense,
you still get things like negative association between variables?
>> Adrien Kassel: Yes.
>>: [inaudible]...
>> Adrien Kassel: Yes, yes. Yes.
>> David Wilson: Okay. Well, let's thank Adrien again.
[applause]