>> David Wilson: So today's speaker is Sergey Fomin. Sergey graduated from
St. Petersburg. He went to MIT. I took a class from him there. And then he
went to the University of Michigan where with Andrei Zelevinski they developed
cluster algebras.
And so this is a reprise of his talk at the International Congress of
Mathematicians.
>> Sergey Fomin: Thank you, David. Thank you all for coming here. This is a
problem more algebraic than this audience is used to, so my apologies for that.
I'll -- well, this talk is kind of a lightning-speed introduction to cluster algebras,
but you can slow me down and ask questions and ask to explain things more
concretely. But this would be at the expense of making the talk longer.
[laughter]. So just be aware of that. Okay?
It's based on joint work with various people listed here, both alphabetically and in
increasing order of their contributions. So we discovered cluster algebras with
Andrei and developed this theory quite a bit, partly together with Arkady
Berenstein from Oregon, and then there was follow-up work with Michael Shapiro
and Dylan Thurston on cluster algebras associated with triangulated surfaces.
This will come at the end of the talk. I don't know if I will actually have time to
discuss it in much detail.
The slides for my talk are available online, and there is also a full version of
this talk on the arXiv and in the ICM proceedings.
There are a few related talks which discuss other aspects of the subject because
today I will only be able to scratch the surface. If you prefer
the French version, there is a Séminaire Bourbaki talk by Bernhard Keller.
Algèbres amassées is exactly cluster algebras.
If you prefer a 15-hour-long version, it's available online also in video format. I gave
a master class in June essentially with the same content, but it took 15 hours.
And then there were two other talks at the ICM dealing with the more algebraic
aspects of the theory, including Idun Reiten's Emmy Noether lecture on cluster
categories.
The main references are listed in this slide. And, well, you can find proofs there
and more examples.
So what are these algebras? It's a class of commutative rings, although there
is actually a noncommutative quantum version of them. And they are equipped
with a particular kind of combinatorial scaffolding inside. So it's not just the ring,
but it's a ring with some distinguished generators and certain relations of a
particular kind which I will try to explain.
And it all grew out of George Lusztig's work. It was motivated by his work,
although he never did anything on this particular subject per se. But it grew out
of two seminal developments due to him: his theory of total positivity in
semisimple groups and his theory of canonical bases.
So we were trying to understand those fairly abstract theories on a more
concrete combinatorial level and this is how the subject came about.
So since then these types of structures, combinatorial structures have been
identified in a number of contexts, some of which are listed here. And there are
many more which I do not list. And you can get a general feeling for how big the
subject grew over the last 10 years since we discovered these algebras by
visiting a cluster algebras portal which I maintain. And then there are lots of links
there to various papers, seminars, conferences, and so forth.
You just Google me and then there is a link to this portal. At the
bottom there will be a special program at MSRI in the fall of 2012 under the name
of Cluster Algebras.
Now, total positivity. What is that? So total positivity is really one of my most
favorite subjects in mathematics. It's not a subfield of mathematics, it's just
kind of a phenomenon that occurs throughout various disciplines. And what I like
about it is that it sort of bridges between algebra and analysis. So most people,
you know, are either algebraists at heart or analysts at heart, even if they
pretend to be in combinatorics or probability or something else [laughter]. But
there is an almost religious division, it's almost like left and right brain or
something, between people who are basically at heart algebraists and people who
are basically at heart analysts. And the late [inaudible] would characterize this
division by saying that algebraists prove identities and analysts prove
inequalities. And that's kind of -- he then went on to denigrate the algebraists.
But essentially there is some truth to this division.
And so what total positivity captures, it captures a class of inequalities which are
at the same time algebraic in nature. So even though it deals with sort of analytic
aspects of that, you know, we talk about positivity. So obviously inequalities are
involved. But also -- but the nature of these inequalities is very, very algebraic.
So the classical example is totally positive matrices, where you declare a
matrix TP, totally positive, if all its minors, that is, determinants of square
submatrices, are positive real numbers. And a priori, of course, if this is
the first time you heard about such matrices, you don't even know that they exist.
You know, if it's a hundred by hundred matrix, there will be
exponentially many minors, because you take any subset of rows and an
equinumerous subset of columns. Maybe these inequalities, when you take them
together, contradict each other. Or maybe -- right. Maybe.
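To make the definition concrete, here is a small brute-force check, a sketch of my own (the function names are my assumptions, not from the talk): enumerate every square submatrix and test its determinant. A Vandermonde matrix with increasing positive nodes is a classical example of a TP matrix.

```python
from fractions import Fraction
from itertools import combinations

def det(m):
    """Determinant by cofactor expansion (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def is_totally_positive(a):
    """True iff every minor (determinant of a square submatrix) is > 0."""
    n, m = len(a), len(a[0])
    for k in range(1, min(n, m) + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(m), k):
                if det([[Fraction(a[i][j]) for j in cols] for i in rows]) <= 0:
                    return False
    return True

# A Vandermonde matrix with nodes 1 < 2 < 4 is a classical TP example.
print(is_totally_positive([[1, 1, 1], [1, 2, 4], [1, 4, 16]]))   # True
print(is_totally_positive([[1, 0], [0, 1]]))                     # False: zero minors
```

Note the cost: there are exponentially many minors to check, which is exactly the inefficiency the flag-minor criterion later in the talk addresses.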
But not only do such matrices exist, they actually are widespread. They come in
a variety of contexts. Initially they arose in the work of Gantmacher and Krein
and I. J. Schoenberg on oscillations in mechanical systems and in approximation
theory, the study of [inaudible] systems.
Then there was a crucial discovery by Karlin and McGregor that transition
matrices for one dimensional diffusion processes are TP.
And around the same time Edrei and Thoma discovered the important
connection with asymptotic representation theory, specifically with
representations of infinite symmetric groups.
Another context in which total positivity comes up is enumerative combinatorics,
the theory of lattice paths. And there is a very much related appearance in discrete
potential theory: planar electric networks and such.
And most recently of course, this work of Lusztig's and our follow-up that I
already mentioned.
So this concept, this notion of total positivity can be generalized from matrices to
broader contexts. Basically, if you have an algebraic variety, something given,
you know, by algebraic equations, you can identify some important functions on
that variety, some regular functions, locally given by polynomials. You want those
functions, which play the role of minors on GL(n), to be positive in order for a
point on your variety to qualify as a positive point.
So which functions exactly you want to look at is unclear, which varieties have
such a structure is unclear, and what I am telling you -- this is kind of a piece of
just my world view -- is that the varieties whose coordinate rings are cluster
algebras are sort of the natural class, or natural setting, for this notion.
So I won't talk much about other examples like Grassmannians or other Lie
groups, but you can have it in the back of your mind.
So why should we be interested in studying these kinds of totally positive
varieties? One reason is that some of them can be identified with important
spaces. You know, some -- for example, some spaces naturally arising in
Teichmueller theory, they can be viewed in a very, very natural way as sets of
positive points of some algebraic variety.
So basically you have some algebraic structure, but it's not an algebraic variety
because you are only looking at positive points. Because, well, the nature of
your geometric, topological, analytic, whatever problem is such that those
quantities should be positive real numbers.
Another reason is that passing from a complex variety to its positive part is the
first step towards further degeneration called tropicalization which I won't talk
about, but you heard this buzz word before. It's actually -- I am not making it up
here.
And the third reason is that strange as it may sound, the set of positive points
actually retains a lot of information about the ambient variety as a whole. In fact,
it reveals some information which you don't see in the variety. So here is an
example. So here I'm taking totally nonnegative, meaning, you know, minors are
nonnegative, upper triangular unipotent matrices. Okay. So you take --
well, in this case three by three matrices, ones on the diagonal, zeros under the
diagonal, x, y, z above, and you want all the minors to be nonnegative. Okay. This
is some semi-algebraic set, if you want -- you know, given in three-dimensional
space by these inequalities.
So you take that set, you know, it has some curved pieces of
boundary and some flat pieces of boundary; any semi-algebraic set has a
canonical algebraic stratification. There are pieces of the boundary which are, you
know, locally smooth, right. So you look at these pieces, and then there are
edges and vertices and so forth, right? So you get some stratification. And you
see how that stratification -- so it's basically a stratification by the vanishing
pattern here, right, like which of these inequalities are strict and which are just
equalities, right? And so this creates the structure of -- well, a stratified space,
or in this case a cell complex, more or less, and these edges here show
attachments of these cells, right? So the smallest dimensional cell is where
everything is equal to zero, and then, you know, stuff starts getting further
and further relaxed.
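For this three-by-three example the check can be made explicit. In the sketch below (my own code; the placement of x, y, z in the matrix and all names are my assumptions), the only minors that are not constantly 0 or 1 work out to be x, y, z, and xy - z, so the stratification is by which of these four vanish:

```python
from itertools import combinations

def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def is_tnn(x, y, z):
    """Total nonnegativity of [[1, x, z], [0, 1, y], [0, 0, 1]]:
    every minor of every size must be >= 0."""
    m = [[1, x, z], [0, 1, y], [0, 0, 1]]
    return all(det([[m[i][j] for j in cols] for i in rows]) >= 0
               for k in (1, 2, 3)
               for rows in combinations(range(3), k)
               for cols in combinations(range(3), k))

def stratum(x, y, z):
    """Vanishing pattern of the non-constant minors, which (checking all
    minors by hand) reduce to x, y, z, and x*y - z."""
    return tuple(v == 0 for v in (x, y, z, x * y - z))

print(is_tnn(1, 1, 1))     # True: a boundary point, since xy - z = 0
print(is_tnn(1, 1, 2))     # False: the minor xy - z is negative
print(stratum(1, 1, 1))    # (False, False, False, True)
```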
And you discover that this -- you discover something called the Bruhat order. So
it turns out, and this is actually true not just for three by three but for any size, and
not just for the unipotent [inaudible] in GL(n) and SL(n) but for any semisimple
complex group, that just by looking at the structure of the boundary of that space --
of course you need to explain what totally positive means in those contexts -- you
recover this notion of the Bruhat order, which is of course important for the
understanding of the group.
So somehow it's not just -- and these are actually sort of Bruhat cells, you know,
the intersections of Bruhat cells with the totally nonnegative part.
So it's somehow, you know, more is revealed rather than less by looking at the
nonnegative points of a variety.
>>: Just to slow you down, the X at the top should have been X sub
greater-than-or-equal-to-zero; is that right?
>> Sergey Fomin: This is capital X. This just means the whole thing -- this X?
This X means no restrictions. You know, just empty. I should -- I mean, but this
would have been misleading.
>>: [inaudible] X sub greater than or equal to zero [inaudible].
>>: It's the positive part of --
>> Sergey Fomin: Oh, right, yes. Well, I don't put those inequalities here, but
yes, right. Yeah, it should be X greater than or equal to zero, yes.
>>: [inaudible].
>> Sergey Fomin: Yes. Yes. And glad you're following. Yeah. Thank you.
Okay. So like I said, one wonders which algebraic varieties have this kind of
notion and which families of regular functions we should consider in defining it.
And that's how cluster algebras come about. It's one way of selling them or of
pitching them.
Okay. Here is a prototypical example, which is one of the classical invariant
rings. So let's consider the algebra of invariants under the natural action of the
upper triangular unipotent group on SL(n, C). Okay? So what is that? Well, in this
example we're looking at four by four, n equals four. So you take four by four
matrices of indeterminates, and they satisfy just one equation, which is that the
determinant is one, right, because it's the special [inaudible], okay? And then you
take polynomials in these 16 variables, but not just any polynomial; you want that
polynomial to be invariant under multiplying on the left by any upper triangular
unipotent matrix, which is the same as doing row transformations in which you are
allowed to add to a row a linear combination of previous rows. Any. Right? The
row itself gets multiplied by one, right?
So if you think about it, you will realize that any minor of this kind which occupies
a
certain collection of rows and the first several columns of -- sorry. Did I say
rows? I should have said columns.
>>: [inaudible].
>> Sergey Fomin: Was I? Okay. Maybe that's -- right. On the right. Okay. So
then it's -- that means columns, right? Because when you deal on the left, you
deal -- okay. Columns. Okay. So if you add to a column a linear combination of
previous columns. So if your minor occupies the first several columns in any
subset of rows, then that minor is not going to change under that kind of
elementary transformation. Okay? So it's an element of that ring of
invariants, and in fact the first fundamental theorem of invariant theory tells us
that the ring is generated by these minors -- in other words, you know, each
of its elements can be expressed as a polynomial in these minors.
And they satisfy well-known homogeneous quadratic identities called Plücker
relations, and so these are our God-given functions on that quotient space,
which we will want to be positive in order for a point to be totally positive,
right? So we are looking at the quotient G mod N, and we call a point there
positive, right, if all these minors are positive.
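A hedged concrete instance (my own example, not from the slides): for a 2-by-4 matrix the shortest Plücker relation reads p13 p24 = p12 p34 + p14 p23, where p_ij denotes the minor in columns i and j, and it can be verified numerically:

```python
def p(m, i, j):
    """2x2 minor of a 2x4 matrix m taken in columns i and j (0-based)."""
    return m[0][i] * m[1][j] - m[0][j] * m[1][i]

M = [[1, 2, 3, 4],
     [5, 6, 7, 8]]

# Short Plucker relation (columns labeled 1..4): p13*p24 = p12*p34 + p14*p23.
lhs = p(M, 0, 2) * p(M, 1, 3)
rhs = p(M, 0, 1) * p(M, 2, 3) + p(M, 0, 3) * p(M, 1, 2)
print(lhs == rhs)    # True, and this holds for every 2x4 matrix
```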
So you represent this point by a matrix X, but really it's a coset modulo N, right?
>>: What is capital N?
>> Sergey Fomin: Capital N is the group of upper triangular unipotent matrices,
right? So points here are represented just by matrices in G, but when you
multiply on the right by matrices in N, none of those quantities
change, right? So in reality they are functions on G mod N. And we call such a
coset a totally positive point if those functions are positive. Okay?
So we can talk about the matrix, but really doesn't -- the matrices which differ by
those elementary transformations should be viewed as the same because, you
know, we cannot distinguish between them when we -- am I explaining too
much? Yeah. Sorry. Apologies to the algebraists in the audience.
So there are, roughly speaking, two to the n of these minors, right? I mean,
we don't take the empty one, and the whole thing equals one, but everything
else is fair game. So there are exponentially many of them. And one
would have to test all of them in order to determine whether a particular matrix
represents a positive point.
But, in fact, you don't have to. It's enough to just test this many flag minors.
Okay? Why? Let me explain that. So that will require some combinatorics. So
let's talk about something called pseudoline arrangements. The pseudoline
arrangement is something that looks like this. Okay? They are basically like
straight lines, right, in that any two of them intersect exactly once. Okay?
So you take any two of these lines, they intersect exactly once. And they run
from left edge to the right edge. They are like -- they are sort of like graphs of
continuous functions. And intersect transversely and so -- no funny business.
So just -- and we consider everything just modulo isotopy. So we only care
about the topology of the picture. Okay? Like who goes above what. All right?
And just in case you wonder, not any such arrangement can be straightened. So
not any of them can be deformed continuously without changing its topological
type to an arrangement of straight lines. This has to do with the existence of
non-realizable oriented matroids.
So for each such arrangement we introduce a collection of what's called chamber
minors. So what are chamber minors? So you take a chamber of the
arrangement, just a region into which the plane is cut by these pseudolines and
you look at which lines pass below the chamber. All right. So in this case, for
this chamber there are two of these lines, this one and this one. And the
lines are numbered one, two, three, four at the left edge, and four, three,
two, one at the right edge, because remember, each line intersects each other line.
So the order of course is going to be completely changed as you move from the
left to the right. So, okay.
So then we get these chamber minors. Okay? That's hopefully clear. Now, let
me introduce something called -- I call cluster jargon. Because this is -- I will
start using -- calling these things some other names in anticipation of a more
general setup.
So the chamber minors at the edges, you know, for these particular chambers
are called frozen. And the reason is they are not going anywhere as we -- as we
change the arrangements. So if we start changing the arrangement by making
these [inaudible] moves, by moving one pseudoline through the intersection of two
others -- these are called flips. Right? The arrangement is going to change, but
only the interior -- the sort of inside, the middle part, this part.
But the -- on the sides of course it will always be the same, right, the set of lines
going beyond -- below this chamber is always going to be numbered by this, this,
this, and that. Okay? Yes?
>>: [inaudible] the meaning of the word variable a set of -- the set of numbers?
>> Sergey Fomin: No. A variable is a function in our future variety which here
the variable is one of these minors.
>>: All right. Okay.
>> Sergey Fomin: Okay? So the variable in the sense that it's a function. So its
values vary depending on where you evaluate it.
>>: So in the current context it corresponds to a subset [inaudible].
>> Sergey Fomin: It's labeled by a subset. But it's -- I want to think about the
actual algebraic function. Okay? So the guys in the middle are called cluster
variables and they form a cluster. And the guys on the outside are called frozen
variables, and when you throw them together with the cluster variables they form
something called an extended cluster. Okay?
Now, let's see what happens when we change the arrangement. So we'll look at
a particular triangle here, this one. This is a triangle. Right? Formed by three
pseudolines. And we'll push one of the -- we'll flip the triangle. We'll push one of
the pseudolines, namely this one, through the intersection of the other two, and
we this arrangement.
So what happens is that this cluster variable, delta 2, got replaced by this one,
delta 13. Right? Everybody else stays the same. Because even though the
topology changed, for all other regions the set of lines passing below or above
didn't change. So this is called a flip. Any two pseudoline arrangements are
related by a sequence of flips. This is an instance of the [inaudible] groups of type
A. So what next? Well, let's see how the corresponding chamber minors are
related. So this is some -- now something beautiful happens. So when you flip a
triangle, go from E to F, the surrounding guys A, B, C, and D stay the same. It
turns out that you always have this identity, EF equals AC plus BD. Okay? So this
is some exercise in linear algebra. Let's go back.
So you have an arrangement of pseudolines with, like, 100 lines, and somewhere
in the middle there is a triangle, and surrounding that triangle there are some
other chambers, and there are these minors indexed by lots of subscripts. So
each of them is -- right, is a minor of a generic matrix. And those minors,
together with the minor that you put in when you flip your triangle, will
always satisfy this identity. Okay? So that's -- it's not hard to prove, but it's kind
of elegant. Okay? This is exactly a special case of those Plücker relations
that I alluded to earlier.
So this means that if we denote these guys A, B, C, D, E, then this fellow F,
which we place instead of E, is just equal to AC plus BD
divided by E. Right? And then we can keep going. So each time we are flipping
we can calculate the value of the new chamber minor as this kind of rational
expression in terms of these other five guys. And notice that these rational
expressions are subtraction-free. There is no minus sign here.
So this means that if the original guys were positive numbers, or rather evaluated
at a given matrix to positive numbers, then everything that you are going to obtain
by iterating this process is going to be positive. So what is going to happen is that
these rational expressions will be fed into some other rational expressions and
some other and so forth, so this will be an iterated rational map, right? Because
next time you flip, you know, D [inaudible].
>>: [inaudible] the same map.
>> Sergey Fomin: Not the same --
>>: [inaudible].
>> Sergey Fomin: Composition of maps, all of which look like this. So you
have, like, you know, some large number, a million of these chamber minors,
right, and you fix all of them, and one of them is replaced by a product of two plus
a product of two, divided by the fifth. And now you get a new one. And then you
pick another one and you do the same to it. And as you iterate, these rational
expressions become humongous, right, they become very, very, very big. And
they are like multi-layered rational expressions, but there are no minus signs
there. So if the original stuff was positive, then whatever you get is positive.
And since you can get from any pseudoline arrangement to any other pseudoline
arrangement -- and any minor appears in some pseudoline
arrangement; it's easy to draw the lines so that your favorite collection of lines
passes below some chamber at a certain point -- it follows that
if the chamber minors for a given arrangement evaluate positively at a given
matrix, then all chamber minors evaluate positively at that matrix, which means it
is TP.
>>: So which ones are the ones that [inaudible].
>> Sergey Fomin: The chamber minors of any of your favorite pseudoline
arrangement.
>>: [inaudible].
>> Sergey Fomin: And the [inaudible] [laughter].
>> Sergey Fomin: And there are like N choose two of them roughly speaking,
you know, because --
>>: N plus one --
>> Sergey Fomin: [inaudible] choose two intersection points, well, plus a little
stuff at the edges. So it's a quadratic number in N.
>>: You just pick a given pseudoline arrangement and --
>> Sergey Fomin: You fix the pseudoline arrangement, write down those -- which
are indexed by those sets. And that's your criterion, an efficient total positivity
criterion. It's enough to check those minors. If they are positive, then all
chamber minors are positive, because they, in fact, can be expressed as
subtraction-free rational expressions in those chamber minors. Right?
>>: But I --
>> Sergey Fomin: So --
>>: There's no canonical set of chamber minors which [inaudible].
>> Sergey Fomin: No, there is not. Although you can canonize a particular, you
know, pseudoline arrangement.
>>: [inaudible].
>> Sergey Fomin: Right. And then -- yeah. Okay. And actually there is a
similar game for the ordinary, classical notion of total positivity. So you can -- you
can.
>>: But there is a -- there is a canonical --
>> Sergey Fomin: No. Again, some people would call it canonical, but
[inaudible] would not. So it's not -- there is some choice that, you know, is easy
to state but it's not -- it's not canonical. Okay.
So now this example illustrates the main features of the general cluster algebra
setup. So we have a family of generators of our ring, which in this case are
these flag minors. They satisfy some identities, these relations, right? There is a
finite subset of frozen generators among them. The remaining generators are
grouped into overlapping clusters. So each cluster consists, in this example, of
the chamber minors of the bounded chambers, the ones that can change. These
clusters overlap, and they are related by local moves of sorts, which we call
mutations, where you remove one cluster variable and replace it by another.
And there are certain exchange relations associated with these exchanges, which
can be written by reviewing the combinatorial data that accompany each
cluster.
So each cluster is not just a collection of generators of your algebra; it comes
with a sort of combinatorial side dish, right? It's not just a bunch of functions;
there's the rest of this arrangement, which you need to carry with it, because
otherwise you don't know how to write your exchange relations. And there is a
certain rule for how that accompanying combinatorial information changes in the
process of these mutations. Okay?
But that rule, of course, in this example is very specific to the particular example,
right, the whole pseudoline arrangement, you know; so it looks like not part of a
general theory but something very, very, you know, custom-made just to fit this
particular example.
And what I'm going to explain now is that, no, it is part of a much more general
setup.
Before I do that, let me mention that I lied to you slightly on the previous slide,
although -- well, in the written form. I didn't actually say any lies, but I wrote
something. When I said there that you can pick any of your cluster variables and
exchange it for a new one and get a new cluster, blah, blah, blah. Because for
pseudoline arrangements not every cluster variable can be flipped, only those
which sit in triangular chambers. And of course you can make some
mutations, you know, make some flips, and then -- like, for example, delta 23 now
is in a triangular chamber; it used to be in a quadrilateral one. So now we can flip
delta 23, but not before, right? Whereas the general cluster setup
suggests -- and indeed that's how it works -- that each element of any cluster can
be flipped.
But how? So we don't know. So actually there are eight different
pseudoline arrangements for four by four matrices -- for four pseudolines, and
they are connected by these flips, which actually form an
octagon, and for each of them there are two triangular chambers, which
correspond to some mutations or flips, and then there is one quadrilateral
chamber which you cannot flip.
So what should we do there? I mean, even this single motivating example
that I showed you doesn't actually exhibit the features that I claim to be present in
the general theory. Okay. So let me explain that.
Now, in order to do that, we need to translate this into the universal combinatorial
language of quivers. So a quiver -- this is an example of, you know, people
speaking different languages in different parts of mathematics. A quiver is just a
directed graph. So for some reason historically -- you know, in some part of
the world, namely among people who studied the representation theory of finite
dimensional algebras -- they invented this word quiver. Actually this was invented
by [inaudible] and maybe [inaudible] also, but anyhow, they came up with this
word quiver and they didn't know much combinatorics. They knew a lot of other
mathematics. And so somehow it stuck. So people in that part of the world use
this word. And it's the same as a directed graph, but it comes with a
different mindset.
So you are thinking differently about it, even though it is just a directed graph.
Okay.
And you know, there are some technical conditions: multiple edges are allowed,
but oriented cycles of length two and loops are not allowed.
So a quiver in this context should be viewed as the general combinatorial data
accompanying a cluster. In other words, I'm going to
encode a pseudoline arrangement by a directed graph, which from now on I will
call a quiver; I hope you will excuse me, because that's the tradition.
And the vertices of the quiver are going to correspond to the chambers in that
example, and some of them will be frozen, some of them will be mutable. Okay.
But in general, for any quiver, there is a notion -- yes?
>>: The quiver also comes with information of which --
>> Sergey Fomin: Right. That's right. Yeah. So the vertices -- well, in general,
quivers don't. But in the context of cluster theory, yes. So the vertices are
colored in two colors, you know, frozen and mutable. And quivers can be
mutated. There is a very simple rule, which is described here on this screen, for
how to mutate a quiver. So you are given a directed graph, you pick your favorite
vertex, a mutable vertex Z, and then you do something to the edges. Namely, you
take all directed paths of length two that pass through that vertex Z, say from X
through Z to Y, and you introduce a new edge from X to Y, like by transitivity if
you want. But only going through Z. So Z is your favorite vertex at which you
mutate. And then you reverse the direction of all edges incident to Z, and then
you remove all oriented two-cycles. And so at the end of these three steps you
get a new quiver. And if you mutate again at the same vertex, you get back the
original quiver.
That's easy to check. But what's interesting is that you can play this game and,
you know, mutate this vertex and then this and then this. This is some kind of a
game. You can even play it probabilistically if you want.
You start with your favorite oriented graph and then you click -- in fact, when I
say click, I literally mean it because of the Java applets you know on the Web
you can just do a Google search and you can create your favorite graph very
quickly and then you will literally click on these vertices and the graph will
change. It will sort of change in the vicinity of Z, of course, right? But these
graphs will quickly become very complicated like almost complete and so vicinity
of Z is really everything. Unless your graph is carefully chosen. In which case it
will stay manageable.
Okay. And so there is a rule which I'm not telling you but it exists. Okay? It's a
very precise and simple combinatorial rule that enables one to associate a quiver
with a given pseudoline arrangement. Just some simple rule basically, I don't
know, you look around this chamber and you connect this guy with some of the
neighboring guys and you orient the edges according to some, you know,
protocol.
So it's very, very elementary and I just don't have time to explain it. What's
important is that when you do a flip in a pseudoline arrangement, your quiver
changes according to this quiver mutation rule. So it's just a different language.
As Gian-Carlo Rota would say, these quivers are cryptomorphic to these
pseudoline arrangements. So it's just the same stuff, only in a different language.
But now of course we can see -- I mean, quivers can be much more general
than the ones that are associated with pseudoline arrangements. And it turns
out, luckily, that you can write these exchange relations very easily in the
language of these quivers. So this EF equals AC plus BD is actually written in
this very, very simple form. So when you flip Z and you put Z prime in its place,
what you get is a sum of two monomials, one of which is the product of the
variables at the vertices from which an arrow points to Z, and the other is the
product of the variables at the vertices to which an arrow points from Z.
It's a very, very simple rule. And so then you generalize. Now you can
write the axioms, right? So you start with your favorite oriented graph, call it a
quiver, put variables at the vertices -- by variables now I mean indeterminates like
X, Y, Z, A, B, C, and so forth -- and then you start playing this game. Not only do
you click and change the graph, but you also change those variables by a rational
rule which comes from this rule. Right?
So the quiver keeps changing. That quiver carries the combinatorial information
that accompanies your set of rational functions: the quiver changes by the
mutation rule, and the functions change by this rule, which is encoded by those
quivers. Okay. Good.
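The quiver mutation rule just described is usually encoded by a skew-symmetric integer matrix; the talk states the rule pictorially, so the matrix form below is my own (standard) encoding, a sketch rather than anything on the slides. Entry B[i][j] counts the arrows from vertex i to vertex j, with a negative sign if they point the other way.

```python
def mutate(B, k):
    """Mutate the skew-symmetric arrow matrix B at vertex k.

    Arrows touching k are reversed; each path i -> k -> j creates
    new arrows i -> j (and cancellations happen automatically).
    """
    n = len(B)
    Bp = [row[:] for row in B]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]              # reverse arrows at k
            elif B[i][k] * B[k][j] > 0:          # a path through k
                sign = 1 if B[i][k] > 0 else -1
                Bp[i][j] = B[i][j] + sign * B[i][k] * B[k][j]
    return Bp


# Linear quiver 1 -> 2 -> 3; mutating at the middle vertex
# produces the oriented 3-cycle, and mutating twice restores it.
B = [[0, 1, 0], [-1, 0, 1], [0, -1, 0]]
assert mutate(B, 1) == [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
assert mutate(mutate(B, 1), 1) == B
```

The involution check at the end is the basic sanity property of mutation: clicking the same vertex twice undoes the move.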
And so you can iterate this. You can start with a quiver and just a bunch of
variables, and then there is an infinite regular tree of possibilities, right?
Because you can click on each of the vertices and mutate that particular
cluster variable. But some vertices are frozen, so you cannot mutate those, and
those variables stay put. They are sort of in the ground ring of your algebra.
So they are not going anywhere. But the rest keep changing. And there is this
infinite tree of possibilities.
And then you just take all these rational functions that you generated by this
iterative process -- there will be infinitely many of them -- you pool them
together, you take the ring that they generate inside the field of rational
functions, and voilà: your cluster algebra. Okay? So now you know the
definition.
This is a horrible, horrible object, of course. Because it's not finitely
generated, on the face of it, right? And you don't even know who the
generators are, because it's an infinite process, and, you know, it's just a
totally unmanageable object, so it seems.
So you -- but that's a definition. Okay?
>>: The example you gave was [inaudible].
>> Sergey Fomin: Yeah. In fact, in all examples that people care about, it ends
up being finitely generated, for reasons that are sometimes mysterious,
sometimes clear. Many cases are proved.
>>: [inaudible].
>> Sergey Fomin: Yeah. Okay. So for example, if you go back to that picture
that I showed you with these missing mutations and you just apply the quiver
mutation rule to get some new cluster, that cluster doesn't correspond to any
pseudoline arrangement; it's like a phantom object. But we know which quiver is
there and we know which rational functions should be sitting there, and we can
iterate further and so forth.
And now something magic happens. We don't get infinitely many things; we
recover some of the stuff that we already had, and in fact it's just finite. So
we just get a finite regular graph. It's like the Cayley graph of a group that,
because of some relations, collapses to the Cayley graph of a finite quotient,
right? So it seems something similar happens here. There are some built-in
relations between these clusters that somehow make everything in this example
collapse into a finite object.
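The smallest instance of this collapse is not the pseudoline example on the slides but the rank-2 quiver with a single arrow (type A2), which I add here as a standard illustration: iterating the exchange rule x, y -> y, (y + 1)/x returns to the starting cluster after exactly five steps, the famous pentagon periodicity.

```python
from fractions import Fraction

def exchange_orbit(x, y, steps):
    """Iterate the type-A2 exchange (x, y) -> (y, (y + 1) / x),
    recording each cluster along the way."""
    orbit = [(x, y)]
    for _ in range(steps):
        x, y = y, (y + 1) / x
        orbit.append((x, y))
    return orbit

# Exact rational arithmetic; any positive starting values work.
orbit = exchange_orbit(Fraction(2), Fraction(3), 5)
assert orbit[5] == orbit[0]   # period 5: the tree collapses to a pentagon
```

So instead of an infinite binary tree of clusters, the identifications leave only five distinct clusters, which is exactly the kind of collapse described above.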
And we discover along the way one new cluster variable, this omega written at
the bottom. So this is a finite chamber surrounding -- okay. So what's the
convention here? At this vertex, for example, we have a cluster consisting of
delta 3, delta 13, and delta 134. That's a regular sort of pseudoline
arrangement type thing.
But this guy, for example, is one of these new phantom objects. It consists of
these two chamber minors, and then delta 13 we flipped. That was a forbidden
flip, a quadrilateral flip. But when you still go ahead and perform it, you get
this cluster, which consists of delta 3, delta 134 and this omega guy, which is
not a minor but still, you know, a regular function. And then if you continue
flipping you will get this, this, and so forth. So you only get finitely many
objects.
>>: [inaudible].
>> Sergey Fomin: It has a negative sign, as -- right. But it can also be
written, rationally, as a subtraction-free expression.
So this fellow omega can be written as a subtraction-free expression in the
deltas, but a rational one. Or you can write it as a polynomial, but with a
minus sign.
Okay? So what do we gain from this, you know? Why should we care? We already
understand matrices pretty well. So what do we gain? Well, okay. In other
contexts we get a sensible notion of positivity where we couldn't have it
otherwise, maybe. And also, of course, let me jump to three: we get some
uniform perspective and general tools of this cluster theory, which I didn't
tell you about, but hopefully you believe me that there are some general
theorems that one can apply whenever one sees these axioms satisfied.
And then, what's perhaps most important, we get some candidates for what people
would call a canonical basis, some additive basis of the ring. There is a
procedure which in this case is actually pretty simple. You take products of
powers of elements of a given extended cluster -- so, for example, here you
take any power of delta 3, any power of this, any power of this, multiply them,
okay, and then you throw in the frozen guys, which are always fair game; you
multiply by some power of those. And these monomials will all be linearly
independent, and they will form a basis of the ring. Okay? And this is not a
PBW-type basis, this is not like a Gröbner-style basis; it's a different type
of basis. And in this particular example it's an instance of the so-called
dual canonical basis in the sense of quantum group theory.
>>: [inaudible] or adjacent or was there [inaudible].
>> Sergey Fomin: Which three?
>>: Of the three [inaudible].
>> Sergey Fomin: Oh, no, it doesn't matter which -- the construction doesn't
depend on the -- I mean, there is no distinguished cluster here in this picture.
But of course in order to build this, you need to start with something.
>>: No canonical, canonical basis?
>> Sergey Fomin: Yeah. Well --
>>: Then you call it --
>> Sergey Fomin: That's actually true in a deeper sense than you probably
expect. Yes. It's -- there is no such -- yeah. Right.
So in these kinds of examples, there is a canonical basis, namely it's
canonical because Lusztig defined it and called it that way. So now it's
canonical. But from a cluster theory perspective, there is no canonical basis.
These products must be part of [inaudible], so Lusztig also defines something
called the semicanonical basis and so on. So he sort of felt that there is not
really one basis that should be called canonical. But that's the [inaudible].
So let's not get [inaudible] that area.
So I told you that there are some general structural results. So what are they?
One of them is the so-called Laurent phenomenon. Remember I was telling you
about these iterated rational maps, where you keep plugging in these
expressions and so forth. So something magical always happens: no matter which
quiver you start with, all these rational functions will actually end up being
Laurent polynomials, meaning that the denominator is just a monomial in the
original variables. Okay? So this is always true. And this is one of the
reasons why here we got a formula like this. A priori, right, a formula like
this might not even exist, but I'm telling you there will always be a formula
like this, maybe with some deltas in the denominator -- but just a product of
deltas, no plus sign or minus sign.
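One concrete consequence of Laurentness can be checked numerically. The recurrence below, x_{n+1} = (x_n^2 + 1)/x_{n-1}, is a standard rank-2 cluster exchange (my choice of example, not the one on the slides): since each x_n is a Laurent polynomial with integer coefficients in x_1, x_2, specializing x_1 = x_2 = 1 must yield an integer at every step, even though the recurrence divides each time.

```python
from fractions import Fraction

# Iterate the rank-2 exchange recurrence with exact rationals,
# starting from x_1 = x_2 = 1.
x = [Fraction(1), Fraction(1)]
for _ in range(8):
    x.append((x[-1] ** 2 + 1) / x[-2])

# Every term is an integer despite the division at each step --
# exactly what the Laurent phenomenon predicts at this specialization.
assert all(v.denominator == 1 for v in x)
```

The sequence begins 1, 1, 2, 5, 13, 34, ...; nothing about the arithmetic alone forces the divisions to come out even, so the integrality here is a visible shadow of the Laurent property.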
These things are actually expected to have no negative coefficients, but nobody
knows how to touch this conjecture. It has been proved in many special cases
but not [inaudible]. Meaning that even after you cancel the numerator and
denominator there will be no minus signs. The reason there is a minus sign
there is that we didn't express it in terms of a fixed cluster: those deltas
lie in different clusters. But there is another formula, more complicated, with
no minus signs.
And then another important result of the general theory is the finite type
classification. So when does it happen that, when you iterate these maps,
iterate this process, everything closes up into a finite structure? And this
can be completely classified. And guess what, the classification is precisely
parallel to the Cartan-Killing classification. So in other words, these things
are classified by, you know, Dynkin diagrams or, more precisely, Cartan
matrices of finite type, [inaudible] collections of Dynkin diagrams.
So it's the same classification as the classification of finite
crystallographic root systems or, you know, equivalently, semisimple Lie
algebras.
So, in other words, this process is going to produce finitely many objects if
and only if the quiver that you started with is a disjoint union of Dynkin
diagrams. So it's either a chain, or type D, like a fork at the end of a chain,
or [inaudible].
>>: [inaudible].
>> Sergey Fomin: It's a theorem, yes. It's a theorem. In fact, we know much
more. We know exactly what graph emerges, what the structure of exchanges is
that emerges when it is finite. So for example, in type A what we get is the
so-called associahedron, which is the graph -- well, this is another situation
where flips occur, namely triangulations of a polygon, right? You remove a
diagonal, and there is a unique other way to stick in another diagonal, and
this graph of flips turns out to be exactly the graph of exchanges in a cluster
algebra of type A.
This is the one-skeleton of an associahedron. There is an analog for any type,
and there is a beautiful enumerative and algebraic combinatorics of these
generalized associahedra associated to arbitrary root systems. I'm not telling
you about that at all.
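The type A exchange graph just mentioned can be built directly. The sketch below uses my own encoding (not from the slides): a triangulation of a hexagon is the set of its three diagonals, two triangulations are flip-related exactly when they share two diagonals, and the resulting graph is the 3-regular one-skeleton of the 3-dimensional associahedron with its 14 vertices.

```python
def triangulations(vs):
    """All triangulations of the convex polygon on the contiguous
    vertex tuple vs, each given as a frozenset of diagonals."""
    if len(vs) < 4:
        return [frozenset()]            # a triangle has no diagonals
    out = []
    a, b = vs[0], vs[-1]
    for i in range(1, len(vs) - 1):     # triangle (a, vs[i], b) on edge a-b
        c = vs[i]
        for left in triangulations(vs[: i + 1]):
            for right in triangulations(vs[i:]):
                diags = set(left) | set(right)
                if i > 1:
                    diags.add(frozenset((a, c)))
                if i < len(vs) - 2:
                    diags.add(frozenset((c, b)))
                out.append(frozenset(diags))
    return out

hexagon = set(triangulations(tuple(range(6))))
assert len(hexagon) == 14               # Catalan number C_4
# Each triangulation admits exactly 3 flips: 3-regular exchange graph.
for t in hexagon:
    assert sum(1 for u in hexagon if len(t & u) == 2) == 3
```

Replacing 6 by n counts the Catalan number of triangulations of an n-gon, and the same "share all but one diagonal" test recovers the flip graph in every case.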
Now, what I wanted to tell you instead, in the remaining I don't know how many
minutes -- maybe negative three -- is -- okay. So basically I told you what
cluster algebras are and how one thinks about them. What I didn't convince you
of is that this is useful outside the immediate area where it arose.
So there is a beautiful example, which I sort of left for dessert, where
exactly the same combinatorics arises in a context which is completely
unrelated on the face of it, and which was actually developed 20 years ago by
Bill Thurston with totally other things in mind. He developed his theory of
laminations on Riemann surfaces and introduced, you know, combinatorial tools
to study those things.
And now it turns out, in retrospect, that these things are exactly cluster
mutations, and they describe the dynamics of certain geometric quantities which
transform by subtraction-free rational rules exactly as cluster algebra theory
dictates. So this is the work that I did with Michael Shapiro and Dylan
Thurston. And I wanted to tell you about it, but I didn't know -- it's like
five slides. But we -- yeah.
>>: [inaudible].
>> Sergey Fomin: Maybe we should take a break so that anybody who wants to
leave can -- well. But it -- I don't know. It's like 10 minutes. So I don't
know. I can turn my back and -- [laughter] -- let everybody leave who wants to
leave and then turn around.
>> David Wilson: Why don't we thank Sergey.
>> Sergey Fomin: And then ask a question.
>> David Wilson: [inaudible] stay longer.
[applause].
>> Sergey Fomin: So this game that I showed you on the previous slide, flipping
diagonals in a polygon, can be played on any Riemann surface. So you start --
you take your favorite Riemann surface, you know, just a [inaudible], and you
take scissors and cut out two disks from it, so now it has boundary, and you
take your permanent marker and put some marked points on the surface, including
some on each piece of the boundary, at least one, okay.
Now, your surface can be triangulated using those marked points as the vertices
of a triangulation, okay. So this is an example where your original surface was
just a sphere. You cut out a disk, one disk, so now you have one disk
remaining. On the boundary of that disk you put three marked points. This makes
this disk into a triangle, because everything is up to, you know,
homeomorphism. And you also mark one point in the interior of the triangle, so
you have a once-punctured triangle. And then you can triangulate it in 10
possible ways. Notice that these are kind of funny triangulations, right? This
triangle inside is self-folded: you glue two of its sides to each other. But
it's okay. It's a triangulation. Maybe it's not kosher, you know, but it's
still a triangulation.
And you see that these triangulations are related by flips: you take an arc in
a triangulation and you can replace it by the other diagonal of the
quadrilateral which is formed when you remove the arc, right? Except when you
take this kind of self-folded triangle: if you remove the interior edge here,
then there is no way to flip it. Right. You kind of stick in something else.
Okay? So that's exactly the equivalent of that example that I showed you
before, where we had a completely other kind of combinatorial dynamics, these
flips, or other, you know, local moves for pseudoline arrangements, where some
moves can be made and some moves cannot, right? I mean, some chambers could be
flipped, some couldn't. And here it's the same. Right? You remove one object,
stick in another, but not each object can be removed.
And sure enough, if you just believe in the doctrine that cluster algebras
somehow must be lurking behind this, then you think, oh, how do I associate a
quiver to a triangulation? How do I encode this combinatorial object using this
universal language? And there is a very simple rule; you come up with it pretty
quickly. For each triangle -- this red stuff is the quiver -- you put an
oriented three-cycle inside it. And here we don't do this, because this side is
on the boundary of the surface, so those vertices are not used.
Anyway, there is a recipe. Don't worry about it; I am in a hurry. And these
things change according to the general mutation rule that I told you about when
I perform a flip in a triangulation. So they operate exactly as one expects.
And even better, right, because now we can mutate in places where we couldn't
before.
Now, there is a dual object to the notion of a triangulation, called a
lamination. Here I'm dealing with integral laminations. So an integral
lamination is a collection of curves on the surface, again up to isotopy, and
these curves are sort of, you know, transversal to the arcs of the
triangulation. They start and end at unmarked points, or they can be closed
curves, or they can also spiral into punctures. Okay. So basically it's like
the correspondence between geodesics and horocycles.
Okay. Now, we -- not we, but Thurston -- actually proved a beautiful theorem:
each such lamination can be coordinatized. There is a combinatorial rule that,
given a triangulation and a lamination, assigns a bunch of integers -- one
integer per arc of your triangulation -- and these integers, the way they are
defined, are completely combinatorial, and they determine the lamination. And
this is a bijection between Z to the N, where N is the number of these
integers, and the set of integral laminations.
So it's a one-to-one correspondence. So these are truly coordinates. So
laminations can be thought of as integer points in some suitably defined space.
Now, it turns out that we can associate an extra vertex in our quiver -- a
frozen vertex now -- to each lamination. And then when we perform a flip, the
rule by which these shear coordinates introduced by Thurston change is
precisely the quiver mutation rule. So not only is the mutable part of the
quiver determined by the triangulation, but the frozen part of the quiver is
determined by the lamination. And the edges that connect frozen to mutable
vertices -- their multiplicities are these shear coordinates, and they change
precisely according to the quiver mutation.
Okay. So just another situation where some naturally defined integers change
according to these rules.
So what does this buy us? This enables us to build models for cluster algebras
that arise in other contexts using these triangulated surfaces. Because this
gives you a sort of global view of a particular cluster algebra, complete with
its, you know, frozen and mutable variables, in the language of combinatorial
topology. So for example, in the type A example that I showed you before, I
mentioned that the variables themselves correspond to diagonals in a polygon,
and when you flip a diagonal the corresponding variable mutates, and so forth.
But I didn't tell you anything about the frozen part.
So in order to describe a more general cluster algebra which has frozen
variables -- and most of them do -- you have to also throw in some laminations,
and then, in terms of them, using these shear coordinates of Thurston's, you
can write down explicitly, in a uniform global way, the exchange relations that
they satisfy.
So this particular example, my running example, is described by this picture: a
hexagon with six one-curve laminations, which contains in itself all
information about this cluster algebra. So, in other words, you don't need to
mutate, you don't need this recursive algorithm in order to write a particular
exchange relation, right? The main drawback of the original definition is that
you are given information about a small piece of this large picture, right, and
then you sort of do something like analytic continuation: you extend and extend
and extend, and then you reach some distant corner, and only then you know
which equations those guys satisfy.
Here I'm telling you the global picture, which tells you everything about what
those variables are, which relations they satisfy and so forth, in a uniform
language. Of course, the problem here -- well, not in this example, because
this is finite type, but in general -- is that you are going to get infinitely
many objects. But we know what these objects are, you know. They are arcs on
this surface. So if you have a surface of high genus, of course you can have an
arc that goes around this handle and then that handle and then this one. So
there are infinitely many of them.
But still, you know, you have an overall grasp of what these guys are. You know
which clusters they form, namely they correspond to triangulations of your
surface by these arcs. And you know how they mutate into each other by these
flips, and there is a general rule using shear coordinates to write the
exchange relation for any given exchange.
So in a sense you have a global picture, even though it is extremely
complicated.
Of course, geometric group theory tells you, you know, that awful things can
happen. This process doesn't have to collapse at all. I mean, it can give you
an infinite tree with no identifications, but still you will know what happens;
you have a language at least to describe these things, and then you can
classify these algebras, for example, according to their speed of growth. So
even if it's infinite, you wonder, you know, does it explode polynomially or
exponentially, and things like the [inaudible] alternative for mapping class
groups tell you, you know, when it is actually polynomial, when it's
exponential, and so forth.
So I'm just dropping some names. That's all I have time to say. Thank you.
[applause]