>>: Okay. Well, today we're pleased to have Rick Kenyon here, who many of us know already. So today he's going to be talking about the double dimer model, and how to [indiscernible] using quaternions.
>> Rick Kenyon: Thanks. Thanks, David. So I want to start by talking about the dimer
model. I mean before I talk about the double dimer model. Just make sure everybody
knows where we're starting. And I hope you'll all feel free to ask questions about
anything.
So the dimer model is a model on a -- it's a probability measure on the space of perfect matchings of a graph. Okay, so we're going to start with a graph: a set of vertices and edges. And in general, we're going to have a function on the edges, a weight function on the edges. So ν, and it's going to take positive real values.
And from this data -- so the space of perfect matchings, so a perfect matching of the graph is just a set of edges which covers every vertex exactly once. So M(G) is the set of perfect matchings, or dimer covers. Okay. So. All right. So there's a perfect matching of my graph.
And associated to the weight function, we can put a measure on the set of perfect matchings. G is a finite graph, so M(G) is a finite set. So let's define the measure μ, a probability measure on M(G). So we want to define a random matching, and a random matching is -- each matching is weighted by the product of the edge weights it uses.
Okay, so the probability of a matching M is going to be, you know, proportional -- some constant of proportionality times the product over the edges in the matching of the weight of that edge. So the weight of a matching is just the product of its edge weights, and the probability is proportional to that. And the constant of proportionality is the partition function Z. So Z is just going to be the sum over M in M(G) of the weight of M.
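As a quick illustration (not from the talk), this definition can be brute-forced in a few lines of Python. The graph, weights, and function names here are my own hypothetical choices, and I restrict to a bipartite graph for simplicity:

```python
from itertools import permutations

def perfect_matchings(whites, blacks, weight):
    """Enumerate perfect matchings of a bipartite graph.
    weight: dict mapping (white, black) pairs to positive edge weights;
    a missing pair means there is no edge."""
    for perm in permutations(blacks):
        m = tuple(zip(whites, perm))
        if all(e in weight for e in m):
            yield m

def matching_weight(m, weight):
    """Weight of a matching = product of its edge weights."""
    w = 1
    for e in m:
        w *= weight[e]
    return w

def dimer_measure(whites, blacks, weight):
    """Return the partition function Z and the probability of each matching."""
    ms = list(perfect_matchings(whites, blacks, weight))
    Z = sum(matching_weight(m, weight) for m in ms)
    return Z, {m: matching_weight(m, weight) / Z for m in ms}

# a 4-cycle: two perfect matchings, with weights a*c and b*d
weight = {('w1', 'b1'): 2, ('w2', 'b1'): 3,
          ('w2', 'b2'): 5, ('w1', 'b2'): 7}
Z, mu = dimer_measure(['w1', 'w2'], ['b1', 'b2'], weight)
# Z = 2*5 + 3*7 = 31, probabilities 10/31 and 21/31
```

This only works for tiny graphs, of course; the point of the Kasteleyn theory discussed below is to avoid exactly this exhaustive enumeration.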
So that's the dimer model. And just one -- it's of course a very well-studied model, which I've been working on personally for 15 years. There's lots of interesting questions which come up in the -- [indiscernible] about the dimer model on large graphs. But, okay.
One thing I wanted to say right here -- this is a good place for it -- is that if you change the weight function, you don't necessarily change the measure. In fact, if you take a vertex and you multiply all the edge weights there by a constant, it's not going to affect the underlying measure. So if I take these weights, you know, a, b, and c, and I multiply them all by some constant lambda, the measure doesn't change. Why is that true?
>>: [Indiscernible].
>> Rick Kenyon: That's right. That's right. And so this is called a gauge transformation.
And I'll explain why it's called that, transformation, shortly.
>>: Did you see [indiscernible].
>>: Yes, yes, yes. It's called a gauge transformation. Maybe I'll -- and, well, as we've all said, every perfect matching uses exactly one of these edges, so all matching weights get multiplied by lambda when you do this. And so you get a lambda in the numerator and a lambda in the denominator. Those cancel, and you get the same measure.
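That cancellation argument can be checked mechanically. A minimal sketch (my own toy graph and names, not from the talk), using exact rational arithmetic so the two measures can be compared exactly:

```python
from itertools import permutations
from fractions import Fraction

def measure(whites, blacks, weight):
    """Brute-force dimer measure: P(M) proportional to the product of edge weights."""
    ms = []
    for p in permutations(blacks):
        m = tuple(zip(whites, p))
        if all(e in weight for e in m):
            ms.append(m)
    def wt(m):
        w = Fraction(1)
        for e in m:
            w *= weight[e]
        return w
    Z = sum(wt(m) for m in ms)
    return {m: wt(m) / Z for m in ms}

def gauge(weight, v, lam):
    """Gauge transformation: multiply every edge weight incident to vertex v by lam."""
    return {e: (w * lam if v in e else w) for e, w in weight.items()}

whites, blacks = ['w1', 'w2'], ['b1', 'b2']
weight = {('w1', 'b1'): 2, ('w2', 'b1'): 3, ('w2', 'b2'): 5, ('w1', 'b2'): 7}
mu1 = measure(whites, blacks, weight)
mu2 = measure(whites, blacks, gauge(weight, 'w1', 11))
# every matching uses exactly one edge at w1, so lambda = 11 appears once in
# every matching weight and once in Z, and the two measures coincide
```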
And of course you can do this at more than one edge, more than one vertex. All right. Well, the most important -- I mean, the root of all computations in the dimer model is the theorem of Kasteleyn, which was -- I mean, the basic case was for the square grid, which was also proved by Temperley and Fisher in the '60s. So, I mean, they get credit also. But Kasteleyn proved the more general case: if you have a planar graph, it's possible to actually compute the partition function using a determinant. So.
I'm just going to -- let me just state it in the bipartite case, since it's a little bit easier. So G is bipartite and planar, and then Z is [indiscernible] the determinant of a certain matrix K, called the Kasteleyn matrix, which you construct from the graph. It's essentially a weighted adjacency matrix. So K, you know -- here the graph is bipartite, so there are two types of vertices, blacks and whites. And K is sort of the adjacency between the white vertices and the black vertices. So K(w_i, b_j) is going to be -- well, it's going to be the weight on the edge from w_i to b_j, if there is an edge, right, and with a sign. Otherwise it's going to be 0.
>>: Sign?
>> Rick Kenyon: What? Ah, the sign? Yeah. So the trick is to figure out what the signs have to be. Well, so the rule, one rule, concerns the faces: you want to sprinkle some minus signs down on the edges so that around each face, if the face length is a multiple of four, you have an odd number of minus signs; otherwise you have an even number of minus signs.
So for example, this face is a square, so it's going to have to have an odd number of minus signs. So I can put one there. This one also has length a multiple of four, so I can put one there. And this one has length six, so it's not a multiple of four, so it has an even number of minus signs. So that would be an example of a way to put in the signs so that this theorem is true. Well, I should put in [indiscernible] values there.
So that's called a Kasteleyn weighting of the graph, when you put in these signs.
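In the smallest bipartite planar case this is easy to check by hand; here is a sketch (my own labels and weights, not from the talk) for the 4-cycle:

```python
# Kasteleyn matrix for a 4-cycle with white vertices w1, w2 and black b1, b2,
# edge weights a = w1-b1, b = w2-b1, c = w2-b2, d = w1-b2.
# The single face has length 4 (a multiple of 4), so it needs an odd number
# of minus signs: put one minus sign on the edge w1-b2.
a, b, c, d = 2, 3, 5, 7
K = [[a, -d],   # row w1: columns b1, b2
     [b,  c]]   # row w2
det_K = K[0][0] * K[1][1] - K[0][1] * K[1][0]   # = a*c + b*d

# brute force: the two perfect matchings are {w1b1, w2b2} and {w1b2, w2b1}
Z = a * c + b * d
```

Without the minus sign the determinant would be a*c - b*d, which is why the sign rule matters.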
Okay. I'm not going to talk about that theorem today. Just to get the statements straight, that's a [indiscernible].
But the point is that because you can compute this partition function in a fairly simple way using determinants, even for very large graphs, large regular graphs, you have a good way to get a handle on this set of matchings, and that's why you can really work with this model in fairly large generality.
On the other hand, two important provisos. Well, the most important one is planarity: if the graph is not planar, then you're out of luck; there's no related theorem. If it's not bipartite, there is a more general version of this theorem which uses Pfaffians. I'm not going to talk about that either. But bipartiteness is not really an important constraint in this theory.
Okay. Well, now let's talk about the double dimer model. I think I'll do it here. And this is the double dimer model. The definition is very simple now: we're just going to take two independent dimer covers of a given graph, and that's it.
So we're going to -- but you can of course draw these two dimer covers on top of one another. Right. So there's a black dimer cover and a red dimer cover. When I draw them together, I get some loops -- every vertex has degree two now -- and some doubled edges. So this is just -- let me just say the probability measure. It's the product probability measure on two independent dimer covers of the same graph.
Alternatively, you can think of it as a probability measure on edge configurations of the graph, where the configuration consists of loops and doubled edges. Or configurations of --
>>: [Indiscernible] cycles.
>>: Cycles, yeah. Is there a difference?
>>: [Indiscernible].
>> Rick Kenyon: Oh, okay, yeah.
>>: [Indiscernible].
>> Rick Kenyon: Well, you're right. So I'll call them cycles then. I'll probably call them loops later on, but cycles and doubled edges.
>>: [Indiscernible].
>> Rick Kenyon: That's right. In the bipartite case, you don't have to say that, but in the
nonbipartite case, yes that's more -- that's important. So even length.
>>: [Indiscernible].
>> Rick Kenyon: It will still have even length, right. Even if the graph is nonbipartite, in a given cycle, every other edge is in one matching and every other edge is in the other matching. All right.
>>: [Indiscernible] sample wasn't two.
>> Rick Kenyon: You can certainly sample more than two, but this is the one we're going
to consider here. All right.
And in this case, of course, the measure is just the usual measure, but if you want to think about it in terms of loops and doubled edges, how should you assign a measure -- I mean, what's the corresponding probability measure? Well, you take the product of the edge weights that you see, but then there's an extra factor of two for each loop. So the probability of a configuration C is -- it's the product over all the edges in C as before, but then you get an extra factor of two to the number of cycles, where a doubled edge does not count as a cycle, but every other cycle gives you a factor [indiscernible].
>>: [Indiscernible].
>> Rick Kenyon: No, because there's only one way, you know, to cover a doubled edge with two dimers, whereas there are two ways to cover every other cycle. Okay.
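The superposition of two matchings can be sketched in code. This helper (my own, not from the talk) counts the cycles and doubled edges; shared edges are doubled, and the remaining edges alternate between the two matchings and close up into cycles of even length:

```python
def superposition(m1, m2):
    """Superpose two perfect matchings (lists of edges, each edge a vertex
    pair written in a consistent order) and return (#cycles, #doubled edges)."""
    shared = set(m1) & set(m2)
    edges = [e for e in list(m1) + list(m2) if e not in shared]
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, cycles = set(), 0
    for start in adj:           # each connected component of the rest is one cycle
        if start in seen:
            continue
        cycles += 1
        stack = [start]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(adj[u])
    return cycles, len(shared)

# 4-cycle example: the two distinct matchings together form one cycle,
# while a matching superposed with itself gives two doubled edges
m1 = [('w1', 'b1'), ('w2', 'b2')]
m2 = [('w1', 'b2'), ('w2', 'b1')]
# weight of a configuration: product of edge weights used, times 2^(#cycles)
```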
So that's the double dimer model, and there's not much else to say there. Well, so what are the questions we want to ask about these models? Well, we want to consider, you know, in probability or statistical mechanics, [indiscernible] on large graphs, like [indiscernible] graphs with structure like the plane, the grid: what does a typical dimer cover look like? What can we say about dimer models on a large graph? Can we define the limit of these measures in the infinite setting? Does that limit, you know, depend on our sequence of approximations? Questions of this type, many of which can be answered. I mean, if you have some question you want to ask, I'd be happy to tell you what is known about it, what I know.
Well, in the double dimer model, of course, we're going to get some loops of various types. You might ask, you know, how long is a typical loop? Questions of this type. Or what's the sort of fractal dimension of a loop, in the sense of, you know, if you know a loop has diameter a thousand, what should you expect its length to be?
And of course, it was conjectured, you know, a number of years ago that if you take the square grid Z^2, you scale by epsilon, so that the mesh size is epsilon, you take a double dimer model -- I'm not going to touch the [indiscernible] here -- and you look at the long loops, these should somehow, in the limit as epsilon [indiscernible], converge to some nice fractal loop process which has something to do with SLE(4). So it's possible to measure the dimension of these long loops, and it turns out to be the correct dimension for SLE(4), which is three halves, one and a half.
And various other quantities you can measure about these loops all seem to point to SLE(4) as being the scaling limit. So [indiscernible], can I say what SLE(4) is? Probably many of you know more than I do about what that is. But --
>>: [Indiscernible] on the cycle what [indiscernible] for the cycle its lengths.
>> Rick Kenyon: Yeah, well, okay, so right.
>>: [Indiscernible].
>> Rick Kenyon: Here's a simple question, a simple setup. Let's take an annulus and condition on having a cycle which goes around the annulus. Okay. So then you have at least one long cycle. And you can measure its length, its properties. Where does it go?
>>: [Indiscernible].
>> Rick Kenyon: Well, there's no -- I mean, how do you define a path from a single dimer model? I mean, there are various ways to do it, but I want to stay with this particular model. I mean, there are ways to define --
>>: It's related to --
>> Rick Kenyon: It's related to the spanning tree process, for example. I mean, it's in bijection with the spanning tree process, where you get SLE(2). That was proved by [indiscernible] and Werner.
So a single dimer model does have some conformally [indiscernible] properties. That's a
sort of [indiscernible] result.
>>: [Indiscernible].
>> Rick Kenyon: Starting from me, yeah. I mean, I proved conformal invariance of certain properties of the double dimer model, in particular this height function, which I don't want to talk about right now. So there's some convergence to the Gaussian free field, and then [indiscernible] Werner proved that the spanning tree converges to SLE(2) and SLE(8), depending on which path you're -- so those are all paths in this random domino [indiscernible] model. Not the random dimer model on this [indiscernible].
But this one has been more elusive. I mean, the SLE(4) has been more elusive for
various reasons. And I'm not going to -- certainly not going to prove that today, but I'm
going to prove or at least indicate how you can show that the -- these paths are actually
conformally invariant.
>>: [Indiscernible].
>> Rick Kenyon: False is already proved.
>>: Is it SLE(4)?
>> Rick Kenyon: No. No. This is a conjecture.
Do I need to write some words down here or not? I'll just leave it like that. Okay.
>>: [Indiscernible] of actual random process that used a double dimer distribution.
>> Rick Kenyon: An actual random process, no. As far as we know, there's no sort of -- I
mean, how do you find the next edge? Not in any nice -- not [indiscernible] way. That's
right.
All right. If we had something like that, then it might be easier to actually prove. But, I mean, for example, for SLE(2), the branch in the spanning tree, there are various -- there is a way to generate it, but -- okay.
Well, so this talk is really not about continuous objects. It's about combinatorics. And so
I want to -- I'm not going to try to -- anyway.
I want to introduce some noncommutative variables into the situation. And the advantage is that we can actually compute some sort of topological quantities about where these loops go, which we can't do with just commuting variables.
So and I could just say, you know, replace the edge weight function which was taking real
values up to this point with some quaternions. But I think I want to sort of think about it in
terms of two-dimensional vector spaces and connections on a graph because this is in
some sense more geometric information.
So let me talk about bundles on graphs. So vector bundles. And I think this is a sort of
useful viewpoint because it's more general than just applying to this model.
So what's a vector bundle on a graph? Well, I'm just going to start with a vector space W. And if I have a graph, I'm going to associate to every vertex a copy of W. So W_1, W_2, and so on. Each is just an isomorphic copy of that vector space, and for each edge, I want to define an isomorphism between the corresponding vector spaces. I want to be able to transport a vector sitting at one vertex to a vector sitting at another vertex.
So for each edge here, I'm going to have an isomorphism φ_{1,2}; φ_{1,2} goes from W_1 to W_2. So it's a directed object. If I go in the reverse direction, I'm just going to take the inverse of that isomorphism. So that's all a vector bundle is. Well, sorry. Let me get the terminology right.
Once I've assigned a vector space to each vertex, that is a vector bundle, but the choice of isomorphisms, that's called a connection. So the set of φ_{i,j} is a connection. That's a connection in the same sense as, you know, a connection on a bundle in differential geometry.
So this is -- I said that in words, but I didn't say that, you know, φ along an edge in the reverse direction is the same as the inverse of the isomorphism in the forward direction. So that's called a connection on the bundle. And in this talk, we're only going to need two-dimensional bundles, so the connections are therefore going to be isomorphisms between two-dimensional vector spaces.
If we choose a basis for each vector space, then I can represent these just by two-by-two matrices, and maybe you know that quaternions can be represented as two-by-two matrices -- related to rotations of the three-sphere -- so you can think of it in those terms.
>>: [Indiscernible].
>> Rick Kenyon: Complex matrices, yes. These are in general complex vector spaces. All right. So these φ's are called parallel transports. These isomorphisms -- so φ_{i,j} is the parallel transport of vectors from vertex [indiscernible].
So one situation where this is natural: suppose that our graph comes embedded on a surface or in some Riemannian manifold. Right? So in particular for us, imagine a graph on a sphere. And maybe the edges are geodesic segments on this sphere. Right. So then at each vertex we have the tangent space of the sphere. Right. So a vector is [indiscernible] -- you want to identify the vectors here. You want to transport the vectors here to there. You need some isomorphism between the corresponding spaces. And there's the natural one coming from the Riemannian metric, called the, you know, [indiscernible] connection, where essentially you just slide the vectors along the curve, always maintaining the same angle with the curve. Right? And that gives you a way to identify the tangent space here with the tangent space there. So that's an example where this setting naturally arises.
Well, so, of course, just like in the previous setting, there's a notion of gauge equivalence, which is essentially just a base change. If we think of this as having a particular basis for every vector space, so that we represent these things as matrices, then doing a base change in one of the vector spaces will just correspond to pre- or post-multiplication of the parallel transports by some matrix.
So if we have parallel transports φ_1, φ_2, φ_3, and we change the basis here by multiplication by M, just a base change map, then it's going to be premultiplication of the parallel transports by that [indiscernible]. At a given vertex, you're free to premultiply all the outgoing parallel transports by a matrix. So that's called a gauge transformation.
All right. Now, that's all fine and dandy, but what relevance is it for our model? All right.
Also, the -- how shall we redefine our probability measure when we have a connection?
So how to define -- I'm going to stick with the double dimer model, double dimer. How to
define the probability measure.
Well, so now on our graph we've got a collection of loops and doubled edges. Well, every time we have a loop, we have an associated monodromy: you start at a point on the loop and you compose the parallel transports around that loop. And when you come back to the place you started, you don't necessarily get the identity; you get some nontrivial map from that vector space to itself. And the trace of the monodromy is the relevant quantity. So we're going to define the probability, the measure of a configuration C, where C is a double dimer configuration. It's going to be -- well, again we're going to still have our underlying weight function on the edges. So the product over the edges E in C. That's a [indiscernible]. But then we're going to have the product over all cycles of the trace of the monodromy -- by the monodromy, I mean the composition of the parallel transports around [indiscernible].
>>: [Indiscernible].
>>: Right. Well, why doesn't it matter where I start?
>>: [Indiscernible].
>> Rick Kenyon: Well, suppose I started here instead.
>>: Well, just the conjugation.
>> Rick Kenyon: It's just conjugation, right. So I go here, take this product, and then go back. And so it's just a conjugation, and the trace is invariant under conjugation, so it doesn't matter where I start.
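A quick numeric sketch of this basepoint-independence (the matrices and helper names are my own, not from the talk): changing the starting vertex cyclically permutes the factors of the monodromy, which is a conjugation, so the trace is unchanged.

```python
def mul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def tr(A):
    return A[0][0] + A[1][1]

# parallel transports around a length-3 loop (arbitrary invertible matrices)
P1, P2, P3 = [[1, 2], [0, 1]], [[1, 0], [3, 1]], [[2, 1], [1, 1]]

# starting at a different vertex cyclically permutes the factors
t_start_a = tr(mul(mul(P1, P2), P3))
t_start_b = tr(mul(mul(P2, P3), P1))
# both traces are equal (here both are 20)
```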
Well, let's see. All right. That's the definition. Well, one over [indiscernible].
>>: This is a new measure.
>> Rick Kenyon: It's a new measure, right.
>>: More complicated.
>> Rick Kenyon: More complicated, well, it's more complicated, but I've got more parameters to deal with. So it's more powerful -- if only I could evaluate this measure in some way, right? And the theorem is that yes, I can indeed. There's an analog of Kasteleyn's theorem in this setting, which is that the partition [indiscernible] -- just the partition function -- well, it's going to be the determinant -- well, the Q-determinant of a certain matrix. Let's call it K again.
Well, K now depends on not just the weight function but on the connection. So how should I denote that? K depends on our connection Φ, capital phi. Capital Φ is the set of all parallel transports.
Well, sorry. Let me see. Let's not forget the hypotheses.
We have a [indiscernible] bundle, so a two-dimensional vector space; W has dimension two. And all the φ's are in SL(2,C), so determinant one. Right.
>>: [Indiscernible].
>> Rick Kenyon: Yes. Yes. And G is bipartite [indiscernible]. See, I said the conclusion first and then the hypotheses later.
Of course I didn't tell you what K is, but -- I wouldn't have been able to define K except in this case. Then the theorem.
>>: [Indiscernible].
>> Rick Kenyon: Yes. So that will take at least a few seconds to describe.
Well, what is K? Let me tell you about K. That's the easy part. K is just like the adjacency matrix -- I mean the Kasteleyn matrix from before, with the weights, except that each edge weight now gets multiplied by the corresponding matrix.
So K^Φ(w_i, b_j) is going to be plus or minus -- the same signs -- times the edge weight.
>>: [Indiscernible].
>> Rick Kenyon: Oh, okay. I'll do it over here.
So K(w_i, b_j) is going to be plus or minus the edge weight times the parallel transport from w_i to b_j. And, well, K is now going to be an N-by-N matrix, where N is the total number of vertices of each color, both the white and the black vertices. So maybe I shouldn't distinguish between white and black here. Let me just say v_i and v_j. Is that going to be a problem? It's the weight of the edge from v_i to v_j, times the parallel transport from v_i to v_j. So it contains a term in φ and a term in φ inverse. And it's going to be 0 if they're not adjacent.
So your φ goes from v_i to v_j, and it's 0 if v_i is not adjacent to v_j.
And the signs are again determined by the Kasteleyn condition as before. Same Kasteleyn condition.
>>: [Indiscernible].
>>: What is that? It's an isomorphism [indiscernible] of vector spaces. So maybe what we should do is -- well, see, part of the definition of [indiscernible] is -- I'm going to tell you what these objects are. Well, but, if you like, you can just choose a basis for each vector space, and then this is an element of SL(2,C).
>>: [Indiscernible].
>> Rick Kenyon: Yes.
>>: [Indiscernible].
>> Rick Kenyon: Yeah, that's a good point. Thank you for that question. I forgot to mention that.
Well, doesn't [indiscernible] depend upon the direction of the cycle? I mean, after all, I'm composing some noncommutative objects.
>>: [Indiscernible].
>> Rick Kenyon: What?
>>: [Indiscernible].
>>: Well, if I have the trace of a two-by-two matrix [[a, b], [c, d]], it's the same as the trace of its inverse, [[d, -b], [-c, a]].
>>: [Indiscernible].
>> Rick Kenyon: If it's in SL(2,C). And if I multiply by the [indiscernible], yeah. So indeed, it doesn't depend on the orientation of the cycle.
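A one-line check of this (my own notation, not from the talk): for a determinant-one matrix, the dual [[d, -b], [-c, a]] is the inverse, and it has the same trace, which is why reversing the cycle orientation does not change the measure.

```python
def dual(A):
    """Adjugate of a 2x2 matrix: [[a, b], [c, d]] -> [[d, -b], [-c, a]].
    For a determinant-one matrix this is the inverse."""
    (a, b), (c, d) = A
    return [[d, -b], [-c, a]]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

A = [[2, 3], [1, 2]]      # determinant 4 - 3 = 1, so A is in SL(2)
A_inv = dual(A)           # equals the inverse, since det(A) == 1
# trace of A equals trace of its inverse: 2 + 2 == 2 + 2
```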
Oh, another thing you might point out about this definition: suppose that all of our parallel transports are trivial, that they're all represented by the identity matrix [[1, 0], [0, 1]]. Then what's the trace of the monodromy? It's the trace of the identity matrix, which is two. So we get a factor of two for each cycle, which generalizes the previous definition.
Any other questions about -- okay. So now I have to define what the Q-determinant is. This is something which -- so this is a matrix now whose entries are noncommuting variables.
>>: [Indiscernible] so the matrix itself can depend on the choices, but because it's [indiscernible] independent of the choices, then finally you get something [indiscernible].
>> Rick Kenyon: That's correct. The matrix itself --
>>: [Indiscernible].
>> Rick Kenyon: Well, I mean, the matrix doesn't see the cycles. The matrix is just the actual entries on the edges. Each edge, right, I mean -- this is the matrix from v_i to v_j which takes that --
>>: [Indiscernible] so it's just an edge. Okay. [Indiscernible].
>> Rick Kenyon: Well, so this is a matrix now with entries in SL(2,C), and the Q-determinant -- this is -- I'm not sure who this notion is due to, but you can read about it in this book of Mehta on random matrices. That's where I learned about it.
K has a particular property, which is -- I don't know -- I'm hesitant whether to give the theorem or the definition first. But let me try to give the definition first. All right.
So if K is a matrix with entries in GL(2,C), then K is self-dual if K_{i,j} is the dual of K_{j,i}, where the dual of [[a, b], [c, d]] is [[d, -b], [-c, a]]. So if the entries are in SL(2,C), this is just the inverse; in general, it's the determinant times the inverse.
Then the Q-determinant of K -- well, it's defined as follows. K is an N-by-N matrix. So you take the sum over the permutations. It's just the same as the usual definition, with terms K_{1, sigma(1)} ... K_{N, sigma(N)}, except that you have to compose these entries in a particular order, and that order is going to depend on sigma.
So how do I denote that? By this, I mean: take the product, where the order here depends on sigma, on which permutation you're currently evaluating. All right. So now I have to tell you what order to take. And the thing is that you have some freedom, but if two permutations sigma and sigma prime have the same cycle structure, except that some of the cycles are reversed, then the corresponding entries should appear in the same order in each product -- I mean -- sorry. Let me say it this way.
So the order is chosen so that elements in the same cycle of sigma appear consecutively, and the cycles of sigma appear consecutively. I'll give an example.
>>: [Indiscernible] to reverse the cycle?
>> Rick Kenyon: Just a minute, just a minute. What I should have said is that each cycle appears in a contiguous fashion in this product. And if you have two permutations with the same cycle structure, then their individual cycles appear in the same order in each product.
So here's my cycle decomposition of sigma. And in the product, I put all these guys first: K_{1, sigma(1)}, et cetera.
Then, after I have taken the product of those, I go to the next cycle, then the next cycle.
Okay, so the advantage of this is: if you have another permutation, sigma prime, with the same cycle structure, except that one cycle goes in the reverse order, for example, then when you add this term to the corresponding term here, you can factor out the common part, and the monodromy one way plus the monodromy the other way is a scalar quantity. So the important fact we're going to use is that --
>>: [Indiscernible].
>> Rick Kenyon: Okay. Well, do you understand this part? For a given permutation, I decompose it into cycles, and then in the corresponding product, I go through the -- okay. So if sigma -- I need a new color or something.
>>: [Indiscernible].
>> Rick Kenyon: That's right.
>>: So inside the cycle [indiscernible].
>> Rick Kenyon: Inside the cycle, you can do a cyclic rotation.
>>: [Indiscernible] rotation, right.
>> Rick Kenyon: That's right. So if sigma is, you know, (x_1, ..., x_k) --
>>: [Indiscernible].
>> Rick Kenyon: I can't even --
>>: [Indiscernible].
>>: Right. Say sigma equals (1, 2, 3)(4, 5, 6, 7), and so on. Right. Then the term is going to be K_{1,2} K_{2,3} K_{3,1} times K_{4,5}, et cetera, K_{7,4}.
>>: [Indiscernible].
>> Rick Kenyon: This is so far -- so far this is just -- this is just a way to define the
corresponding term here.
>>: Okay.
>> Rick Kenyon: And of course I've got some choice involved, because each cycle of sigma can be written, you know, starting at any point; I'm just going to choose one of those representations for each sigma. All right.
Now, suppose that sigma prime looks the same except that this cycle is reversed: (3, 2, 1)(4, 5, 6, 7), blah, blah, blah.
What happens when I add these two terms together, times the rest? Well, everything else stays the same, but these two factors -- well, these are two-by-two matrices. When I take the product in the reverse order, I get the inverse -- or rather, I get the dual matrix. And when I take a matrix and add it to its dual matrix, what do I get?
If I take [[a, b], [c, d]] plus [[d, -b], [-c, a]], I get the trace times the identity: I get (a + d) times the identity matrix. So you see this factor now is just a scalar. Okay.
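This scalar identity is easy to verify directly (my own helper names, not from the talk):

```python
def dual(A):
    """[[a, b], [c, d]] -> [[d, -b], [-c, a]] (determinant times the inverse)."""
    (a, b), (c, d) = A
    return [[d, -b], [-c, a]]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

A = [[5, 2], [7, 3]]
S = add(A, dual(A))
# S = (a + d) * identity = [[8, 0], [0, 8]], a scalar matrix
```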
>>: [Indiscernible].
>>: So what this comes out to be -- independent of the choices I made here about the cyclic rotations or about flipping these cycles around, I'm going to get a product of scalars out. And that product of scalars is the definition of the Q-determinant.
So if you like, it's a sum over cycle decompositions --
>>: [Indiscernible].
>> Rick Kenyon: Yes, yes. I'm going to tell you the theorem. This is not the theorem; this is just the definition so far. Cycle decompositions of sigma, so -- sorry. I'm just trying to -- okay. Maybe I don't need to restate what I just said. Do I need to restate what I just said?
>>: No, but you [indiscernible].
>> Rick Kenyon: Okay. Just believe that for now. Let me write down the statement of
the theorem, and then we can discuss the definition.
>>: [Indiscernible].
>>: That's right. So the theorem -- and this is due to Mehta -- is that the Q-determinant of K, if K is self-dual, is -- well, really what it is, it's going to be the Pfaffian of a certain matrix C times K~. Well, if K is an N-by-N matrix, I can make a 2N-by-2N matrix K~ by replacing each entry with its two-by-two block of entries [indiscernible]. Okay. And this is not the same Z as you've seen before; you saw [indiscernible] called something else.
The -- no. Okay. We'll figure out what that letter is called. C? C, okay. So C is just the block-diagonal matrix with two-by-two blocks [[0, 1], [-1, 0]] down the diagonal, and everything else is 0.
>>: [Indiscernible] C divided by --
>> Rick Kenyon: C bar.
>>: -- [indiscernible].
>> Rick Kenyon: Yeah. Coming up, don't worry.
Okay. So C, okay, maybe that's a bad [indiscernible]. But at any rate, it's just this matrix, which multiplies by this block-diagonal matrix that looks like this, with two-by-two blocks. Well, the amazing thing is that you take one of these self-dual matrices, you multiply by this, and you get an antisymmetric matrix. It has a Pfaffian, and that Pfaffian is the Q-determinant. Okay.
Of course I could have given this as the definition of the Q-determinant, but, I mean, I need that interpretation over there to give some meaning to the determinant. All right.
So -- well, this is great, because this means that using linear algebra we can actually compute our new partition function. Right? Our matrix K, our Kasteleyn matrix, is self-dual by construction. And now we can actually compute things with it.
Do you want to ask any more questions? Do you want to see an example? Well, here's a baby example -- well, of course, probably a very useless example, but suppose my graph is a four-cycle. Right. That maybe is worth just doing.
So we've got four matrices, two-by-two matrices; let's call them A, B, C, D. Then I construct this matrix capital K. Well, there are four vertices, right -- let's call them 1, 2, 3, 4. It's a bipartite graph, so these diagonal blocks are all 0. 1 to 3 is A -- I'm going to have to put in a minus sign somewhere because of the Kasteleyn condition. 1 to 4 is minus D inverse. 2 to 3 is B inverse. And that one is C. Maybe I should just -- it doesn't matter.
And of course [indiscernible] down here, inverses. Okay. So the claim is that if you write
out the little two by two matrices here and compute the Q-determinant -- I mean, sorry,
you multiply it by C and take the Pfaffian -- then you'll get -- what will you get?
It's the [indiscernible] of K. It's just some function of these matrix entries, but it turns out to be
the trace of ABCD plus -- well, that's [indiscernible], it looks like this. One configuration has these
doubled edges, and the other one has those doubled edges. Each of those contributes
one, so I get two.
>>: [Indiscernible].
>> Rick Kenyon: Sorry?
>>: [Indiscernible].
>> Rick Kenyon: Yeah. So that's the matrix you get from the original Kasteleyn
matrix by replacing each SL(2,C) entry by its two by two block of complex numbers.
>>: [Indiscernible].
>> Rick Kenyon: This is K. Then K, you know, in this case, is eight by eight.
[Indiscernible] is going to look like -- it's going to be zeros, and then that's going to be A one, A
two, A three, A four. And so on.
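A hedged sketch of assembling such an eight by eight matrix from two by two blocks in numpy. The names and the sign/inverse pattern here are schematic, imitating the board, not a verified Kasteleyn orientation; the point is only the block structure.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_sl2():
    # Random 2x2 matrix rescaled so |det| = 1 (a stand-in for SL(2) weights)
    m = rng.standard_normal((2, 2))
    return m / np.sqrt(abs(np.linalg.det(m)))

A, B, Cw, D = (random_sl2() for _ in range(4))
inv = np.linalg.inv

# Vertices 1, 2 of one color, 3, 4 of the other; nonzero entries only
# between colors, with minus signs imitating a Kasteleyn condition
Kwb = np.block([[A, -inv(D)],
                [inv(B), Cw]])               # 4x4 white-to-black block
K = np.block([[np.zeros((4, 4)), Kwb],
              [-Kwb.T, np.zeros((4, 4))]])   # full 8x8 matrix

assert K.shape == (8, 8)
assert np.allclose(K, -K.T)  # this schematic assembly is antisymmetric
```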
All right. Now, graphs on surfaces. We're interested in the planar graph, right? Well,
maybe we're interested in a graph on a [indiscernible]. That's sort of a typical setting:
you want to approximate the square grid with an N by N grid with periodic boundary
conditions.
Well, so in this case, it's natural to look at flat connections. A flat connection, that's
one with no monodromy around topologically trivial loops. So it's a connection
for which, if you look at the monodromy around a loop, you get the identity unless the loop is
topologically nontrivial.
And these are the ones which we're going to be interested in -- connections with trivial
monodromy around topologically trivial loops. [Indiscernible] on the torus,
really doesn't -- if you have a flat connection, then, you know, around any trivial loop you're
going to get the identity. But when you go around the torus this way or this way, you get
a matrix, and, you know, if you take two different paths around, the
monodromies are just going to be conjugate to each other.
So in fact, for a flat connection on the torus, there are really only two
matrices: you need to give one for the horizontal direction and one for the vertical direction.
There's only a finite-dimensional space of flat connections.
Another setting is if you just have a graph like an annulus. Right. What's a flat connection on an
annulus? Well, if you have a loop which goes around the annulus, you get a potentially nontrivial
matrix. But, you know, if you have two loops which go around, then the monodromy around here
is going to be conjugate to the monodromy around there. Because if I go around this way, and then
go back around that way, that's a topologically trivial loop. It gives me the identity.
So for an annulus, if you have a flat connection, there's really only one matrix -- one
sort of important matrix, which is the one that you get by going around. For a torus there are
two. You can imagine how it works in more general settings.
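The point that parallel loops contribute the same factor comes down to one line of linear algebra: conjugate matrices have equal traces. A tiny numpy illustration, with random matrices purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(2)
m = rng.standard_normal((2, 2))  # monodromy along one loop
g = rng.standard_normal((2, 2))  # effect of deforming the path

# Monodromy along a parallel loop is conjugate: g m g^{-1},
# so its trace agrees with the trace of m
assert np.isclose(np.trace(g @ m @ np.linalg.inv(g)), np.trace(m))
```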
So in the case of a flat connection, our theorem can be restated -- I'm just about done
here -- in the following way. This is just a restatement of the theorem.
If G is embedded on the surface, and, you know, capital Phi, our connection, is a flat
connection, then what can we say about the partition function of our double dimer
model? Right? Now it's going to be -- right. This is the sum over all configurations
C. Now, again, we still have the product, but now we only take the product
over nontrivial cycles -- by that I mean topologically nontrivial cycles -- of the
trace of the monodromy.
>>: [Indiscernible].
>> Rick Kenyon: Potentially many of those.
>>: [Indiscernible].
>> Rick Kenyon: No, no, no. What do you mean? No. Yeah, so we take the product, over
every topologically nontrivial cycle, of the trace of the monodromy.
But you know, for example, if you have a graph on an annulus, there may be 13
cycles going around, and we're going to get the trace of that matrix to the power 13.
Because every cycle contributes the trace of its monodromy, and all of those monodromies
are the same.
>>: [Indiscernible] there would be many between [indiscernible].
>> Rick Kenyon: So here's our annulus. Right. Our typical configuration can have a
bunch of cycles which wind around. The cycles which don't wind around contribute
just one. Yeah.
>>: All the cycles [indiscernible].
>> Rick Kenyon: They all contribute the same amount because it's a flat connection.
The monodromies for cycles which are parallel to each other are conjugate to
each other, so they contribute the same trace.
>>: [Indiscernible].
>> Rick Kenyon: This is just a restatement of the theorem. It's not even a theorem. It's
just a trivial simplification of the previous theorem.
But the point is that here, in this setting, our connection depends on a finite
number of matrices, right? If we've got a genus-G surface, we've got two G matrices. And
this thing Z is just going to be a polynomial function of the entries of that
finite number of matrices. So note: Z is a polynomial function in the matrix
entries.
It's not just any function. It's a function which is invariant under gauge transformations,
under gauge [indiscernible]. And there's a theorem about these kinds of functions which is
also not due to me. It's due to Fock and Goncharov. It's a beautiful theorem.
[Indiscernible] that these -- well, so let's let curly V -- so we're going to start with a surface.
Let sigma be a surface, and curly V the vector space generated by products of
traces of monodromies over all flat connections. Well, let me -- how do I say that?
Let sigma be a surface and Phi a [indiscernible] connection on sigma -- or maybe it's a graph -- well, a flat connection. Okay.
So let's let V be the vector space generated by -- [indiscernible]-dimensional vector
space -- let me say it this way. V is going to be the vector space of polynomial
functions of the matrix entries of Phi -- the vector space of polynomial functions of the
individual Phis which are invariant under gauge transformations.
This is something which is -- because we're dealing with flat connections -- it's an
infinite-dimensional vector space, but it's finite-dimensional in each degree. If you fix the degree of the
polynomial, it's only a finite-dimensional vector space.
And the theorem is that this -- do you understand what I'm saying here? I have a flat
connection that depends on some matrices, and I look at the polynomial functions
of the matrices -- for example, the trace is -- well --
>>: So this theorem [indiscernible].
>> Rick Kenyon: This is a theorem by Fock and Goncharov. This is just the
definition so far.
V is actually generated by the products of the traces of the monodromies for simple closed
curves. So V has a basis and -- well, is generated by products of the traces of the
monodromies -- the monodromy of gamma, where gamma is in X, where X -- so
on your surface, whatever it is, you take all possible simple closed curve systems, which
are -- so several closed curves, [indiscernible] simple closed curves which are disjoint
from each other. Anytime you have a simple closed curve system, you look at
the trace of the monodromy of each curve -- this is a polynomial function -- and you take the product over all the
curves. So X is sort of a lamination on your surface, a finite set of curves.
And X varies over sets of disjoint, [indiscernible] disjoint, simple
closed curves.
So this is a little bit -- I'll give a sort of basic example [indiscernible] on sigma.
Well, V is not only generated by these, but actually these guys form a basis. That's what I should
have said: a basis. Basis. Colon. And moreover, there's actually an inner
product on this space which turns it into a Hilbert space in which these guys are
[indiscernible].
Well, as long as the surface has a boundary -- if sigma has a nonempty
boundary -- there's a Hilbert space completion in which these
guys are actually [indiscernible] to each other.
There is an identification of -- well, what should I say? Of V with -- I mean there's a
completion: a natural inner product making -- well, making V into an inner product space. Its
closure is a Hilbert space. So in particular, you can extract various things using
that inner product.
Well, what does that have to do with us? Why do we care? So here's a very simple
case. Let's do a sort of baby case which is trivial.
But suppose we have an annulus, all right. Our double dimer model has a certain number
of loops around here. And our partition function -- well, it's the
sum, and then --
>>: [Indiscernible] the number of the loops?
>> Rick Kenyon: No. No. But this theorem actually does. So the partition function is
the sum [indiscernible] times -- well, the trace of M to the power K when we have K
loops. So it's the sum over our configurations C of the trace of M to the power size of
C, where size of C is just the number of loops which go around.
If I hand you a function like this -- you know, with some coefficients, a function of
M -- you can extract, of course, the coefficient of those configurations which have 13 loops.
Right? So let's replace this, in Z, by some variable which we can vary.
Right.
So this is -- well, maybe it's just T. Right? So this is just some polynomial in T. And we
know how to extract the coefficient of T to the 13th from a polynomial, right? Just an
integral. Right? And that coefficient has a probabilistic meaning: it's the fraction of configurations
which have 13 loops going around the annulus.
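Concretely, with made-up coefficients: if Z(T) is the sum over k of c_k T^k, where c_k weights the configurations with exactly k loops around the annulus, then the probability of exactly k loops is c_k divided by Z(1), and c_k can be read off by exactly the contour integral just mentioned. A numpy sketch:

```python
import numpy as np

# Hypothetical loop-count coefficients c_0..c_3, made up for illustration
c = np.array([5.0, 12.0, 7.0, 2.0])

# Probability of exactly 2 loops = c_2 / Z(1)
Z1 = np.polynomial.polynomial.polyval(1.0, c)
p2 = c[2] / Z1
assert np.isclose(p2, 7.0 / 26.0)

# Extract c_2 "by an integral": c_k = (1/2pi) * int Z(e^{it}) e^{-ikt} dt,
# approximated here by averaging over equispaced points on the circle
t = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
Zvals = np.polynomial.polynomial.polyval(np.exp(1j * t), c)
c2 = np.mean(Zvals * np.exp(-2j * t)).real
assert np.isclose(c2, c[2])
```

The discrete average is exact here because the polynomial degree is far below the number of sample points.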
And so the theorem over here says we can do this in a much more general setting
of --
>>: [Indiscernible] compute things using it?
>> Rick Kenyon: Yes. It's hard. And just to close -- I'm sorry I'm running overtime -- but
suppose that I'm on the [indiscernible] in the plane and I want to study the double dimer
model on the upper half plane. Now I'm going to fix some phases. And now I'm going
to ask: what's the probability that my configuration has a loop which goes like this, and another
loop which goes -- well, goes like that. Right.
This is something whose probability this theorem says I can compute using
this technology. I mean, I can compare this probability with the probability of any other sort
of topological configuration of loops. In particular, that allows me to compute the probability that
the loop does some particular thing. And these things you can compute in terms of the
actual Green's function on the lattice. I mean, everything boils down to the
Green's function for the simple random walk on that lattice -- doing some complicated
[indiscernible] with products of Green's functions.
And so these probabilities are all conformally invariant. If you do it in a different domain,
the Green's function is conformally invariant, so these probabilities all turn out to be
conformally invariant quantities. So that's where you get the conformal invariance of the
double dimer loops.
So I'll stop. Thanks. [Applause.]
>>: (Inaudible) final sequence. So the calculation is exact, but the level of the
[indiscernible] is the only [indiscernible].
>> Rick Kenyon: Well, at the lattice level, you can do an exact calculation of these
probabilities. You're going to show that in the limit, as epsilon goes to 0, these
[indiscernible] -- that's a little nontrivial -- but what you show instead is that the ratio of the
partition function with these, for any particular flat connection, to the original partition
function is conformally invariant [indiscernible]. But yes, I mean, there's some nontrivial
[indiscernible] which I don't have time to talk about here. But you know, the
[indiscernible] part, I think I presented essentially all the arguments. Nothing -- nothing
left out there. Of course I didn't prove this theorem, but --
>>: [Indiscernible].
>> Rick Kenyon: 2003.
>>: It's not the same [indiscernible]. It's a young Fock.
>>: Goncharov is from Brown, right.
>> Rick Kenyon: Goncharov is from Brown, [indiscernible] Fock.
>>: Maybe it is --
>> Rick Kenyon: I think it may be the famous Fock, yeah. [Laughter]. It's a theorem
[indiscernible] representation theory. I mean, the first theorem is a [indiscernible]. It's not
actually [indiscernible] presented in 20 minutes, but you have to know. It's good to know
[indiscernible] theory.
>>: Okay. I stand corrected then. [Applause].