Essays on the structure of reductive groups
Bill Casselman
University of British Columbia
cass@math.ubc.ca
Last revised 12:06 p.m. October 22, 2015

Finite root systems
This essay discusses some elementary points concerning finite integral root systems. My approach is slightly
different from the standard one in [Bourbaki:1968], which does not distinguish carefully between roots and
coroots, and does not deal with root systems that are invariant under translations, such as those arising from
parabolic subgroups.
Contents
1. Introduction
2. Reflections
3. Definition of root systems
4. Simple properties
5. Tits' criterion for coroots
6. Other characterizations of root systems
7. Examples
8. Bases
9. Root data
10. References
1. Introduction
Root systems are a crucial tool in representation theory, because they describe the structure of reductive
groups in all characteristics. I’ll motivate what’s to come by a well known example.
Let F be an arbitrary field, and
G = GLn
T = subgroup of diagonal matrices in G
g, t = the Lie algebras of G(F) and T(F).
Here G is considered as an algebraic group defined over F . The Lie algebra g may be identified with
Mn (F ), with Lie bracket [X, Y ] = XY − Y X . The group T is isomorphic to the algebraic torus Gnm , T (F )
is isomorphic to (F × )n , and the Lie algebra t is isomorphic to F n . The character group X ∗ (T ) is the lattice
of algebraic homomorphisms T → Gm , the cocharacter group that of algebraic homomorphisms Gm → T .
These are free Z-modules of rank n, and are canonically dual. I express them additively. If x has diagonal
entries x_i, we have the characters
$$ x \longmapsto x^{\varepsilon_i} = x_i , $$
that make up a basis of X^*(T). The group T(F) acts by conjugation on g, which decomposes into the direct
sum of t and n(n−1) eigenspaces of dimension one, on which T acts by the multiplicative characters
$$ x^{\varepsilon_i - \varepsilon_j} = x_i/x_j \qquad (1 \le i, j \le n,\ i \ne j) . $$
These make up the set Σ of roots α_{i,j} = ε_i − ε_j of (G, T), which lie in X^*(T). Corresponding to each of
these pairs i ≠ j is an embedding ϕ_{i,j} of SL_2(F) into G. For example, if n = 3, i = 1, j = 3:
$$ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \longmapsto \begin{pmatrix} a & 0 & b \\ 0 & 1 & 0 \\ c & 0 & d \end{pmatrix} . $$
If
$$ \sigma = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} , $$
then its image σ_{i,j} with respect to ϕ_{i,j} lies in the normalizer N_G(T) of T, which consists of the monomial
matrices. The quotient N_G(T)/T is isomorphic to S_n. This embedding of SL_2 also gives rise to embeddings
of G_m through composition
$$ x \longmapsto \begin{pmatrix} x & 0 \\ 0 & 1/x \end{pmatrix} \longrightarrow T \subset G . $$
These make up the set Σ^∨ of coroots $x^{\alpha^\vee_{i,j}} = x^{\varepsilon^\vee_i - \varepsilon^\vee_j}$, lying in X_*(T). For every root α,
$$ \langle \alpha, \alpha^\vee \rangle = 2 , $$
in effect since
$$ \begin{pmatrix} x & 0 \\ 0 & 1/x \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x & 0 \\ 0 & 1/x \end{pmatrix}^{-1} = \begin{pmatrix} 1 & x^2 \\ 0 & 1 \end{pmatrix} . $$
The matrices σ_{i,j} act as reflections on X^*(T), swapping x_i and x_j. This can be formulated in more general
terms:
$$ \sigma_{i,j} : \lambda \longmapsto \lambda - \langle \lambda, \alpha^\vee_{i,j} \rangle \alpha_{i,j} . $$
The group SL_n embeds into GL_n. Its subgroup S of diagonal matrices satisfies $\prod x_i = 1$, or $\sum \varepsilon_i = 0$. The α^∨_{i,j}
span X_*(S). The restrictions of the ε_i to S span X^*(S), although the restrictions of the roots
have index n in it. In the figure below are drawn the coroots for SL_3 as vectors in the plane ε_1 + ε_2 + ε_3 = 0
and the level lines α_{i,j} = k for integers k.
[Figure: the six coroots α^∨_{i,j} (1 ≤ i ≠ j ≤ 3) of SL_3, drawn in the plane ε_1 + ε_2 + ε_3 = 0, with the level lines α_{i,j} = k.]
If V = R ⊗ X^*(T), then V, Σ and the map from Σ to the dual space V^∨ taking α_{i,j} to α^∨_{i,j} define what is
called a root system.
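These coordinates are easy to verify by machine. Here is a short Python sketch (my own illustration, not part of the original text; it identifies both X^*(T) and X_*(T) with Z^n, so that the canonical pairing becomes the dot product) checking that ⟨α, α^∨⟩ = 2 for every root, that each reflection permutes the set of roots, and that s_{i,j} acts by swapping the i-th and j-th coordinates:

```python
import numpy as np

n = 4
# Identify X^*(T) and X_*(T) with Z^n; the canonical pairing is the dot product.
def eps(i):                       # the character x -> x_i, written additively
    v = np.zeros(n, dtype=int)
    v[i] = 1
    return v

pairs   = [(i, j) for i in range(n) for j in range(n) if i != j]
alpha   = {p: eps(p[0]) - eps(p[1]) for p in pairs}   # roots eps_i - eps_j
alpha_v = {p: eps(p[0]) - eps(p[1]) for p in pairs}   # coroots, in X_*(T)

# <alpha, alpha_vee> = 2 for every root
assert all(alpha[p] @ alpha_v[p] == 2 for p in pairs)

# each reflection s_q takes the set of roots to itself
assert all(any(np.array_equal(alpha[p] - (alpha[p] @ alpha_v[q]) * alpha[q], alpha[r])
               for r in pairs)
           for p in pairs for q in pairs)

# s_{i,j}: lam -> lam - <lam, alpha_vee> alpha swaps the i-th and j-th entries
lam = np.array([3, 1, 4, 1])
refl = lam - (lam @ alpha_v[0, 2]) * alpha[0, 2]
assert list(refl) == [4, 1, 3, 1]
```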
I now look at this notion systematically.
2. Reflections
A reflection in a finite-dimensional vector space is a linear transformation that fixes vectors in a hyperplane,
and acts on a complementary line as multiplication by −1. Every reflection can be written as
$$ s_{f,f^\vee} : v \longmapsto v - \langle v, f^\vee \rangle f $$
for some linear function f^∨ ≠ 0 and vector f with ⟨f, f^∨⟩ = 2. This fixes points on the hyperplane f^∨ = 0
and takes f to −f. The function f^∨ and the vector f are each unique up to a non-zero scalar.
I'll abbreviate s_{f,f^∨} to s_f in circumstances when f determines f^∨.
Lemma. Suppose s to be a linear transformation of V whose fixed points make up a hyperplane H, and suppose s² = I. Then s is a reflection.
Proof. For v not in H, the vector h = v + s(v) is fixed by s, so s(v) = −v + h with h in H. If s(v) = −v + h then s(v − h/2) = −(v − h/2), so s acts by −1 on a line complementary to H.
Suppose now that V is given a Euclidean norm.
2.1. Lemma. The reflection s_{f,f^∨} is orthogonal if and only if f is perpendicular to the hyperplane f^∨ = 0. In
this case the function f^∨ is determined by the equation
$$ \langle v, f^\vee \rangle = 2\, \frac{v \cdot f}{f \cdot f} . $$
Proof. The first assertion is immediate from the previous Lemma. Now suppose the reflection to be
orthogonal. The orthogonal projection of v onto the line through f is
$$ v_\parallel = \frac{v \cdot f}{f \cdot f}\, f . $$
Then v_⊥ = v − v_∥ lies in the reflection hyperplane, v = v_⊥ + v_∥ is an orthogonal decomposition, and
$$ s(v) = v_\perp - v_\parallel = v - 2\, \frac{v \cdot f}{f \cdot f}\, f \tag{2.2} $$
is the reflection of v.
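Formula (2.2) can also be checked numerically. The following Python sketch (an illustration of mine, with arbitrary random data) confirms that the map v ↦ v − 2(v·f/f·f)f is an involution, sends f to −f, fixes the hyperplane perpendicular to f, and preserves lengths:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(5)
v = rng.standard_normal(5)

def s(v, f):
    # the orthogonal reflection (2.2): v -> v - 2 (v.f / f.f) f
    return v - 2 * (v @ f) / (f @ f) * f

assert np.allclose(s(s(v, f), f), v)              # an involution
assert np.allclose(s(f, f), -f)                   # takes f to -f
w = v - (v @ f) / (f @ f) * f                     # component of v perpendicular to f
assert np.allclose(s(w, f), w)                    # fixes the hyperplane f-perp
assert np.isclose(s(v, f) @ s(v, f), v @ v)       # orthogonal: lengths preserved
```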
2.3. Lemma. Suppose s and t to be two reflections fixing the same hyperplane, say with f_s^∨ = f_t^∨ = f^∨. Then
the product st is the shear taking
$$ v \longmapsto v + \langle v, f^\vee \rangle (f_s - f_t) . $$
Proof. Suppose s and t to be two reflections fixing the hyperplane f^∨ = 0. Then
$$ t(v) = v - \langle v, f^\vee \rangle f_t $$
$$ st(v) = v - \langle v, f^\vee \rangle f_s - \langle v, f^\vee \rangle f_t + \langle f_t, f^\vee \rangle \langle v, f^\vee \rangle f_s
       = v + \langle v, f^\vee \rangle (f_s - f_t) , $$
since ⟨f_t, f^∨⟩ = 2.
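A quick numerical check of the Lemma (my own sketch; the vectors f_s, f_t and the linear function f^∨ are arbitrary choices subject to ⟨f_s, f^∨⟩ = ⟨f_t, f^∨⟩ = 2, the pairing being the dot product):

```python
import numpy as np

fv = np.array([1.0, 0, 0])          # the linear function f_vee, via dot product
fs = np.array([2.0, 1, 0])          # <fs, fv> = 2
ft = np.array([2.0, 0, -3])         # <ft, fv> = 2

s = lambda v: v - (v @ fv) * fs     # two reflections fixing the plane fv = 0
t = lambda v: v - (v @ fv) * ft

v = np.array([1.0, 2, 3])
# Lemma 2.3: s t is the shear v -> v + <v, fv>(fs - ft)
assert np.allclose(s(t(v)), v + (v @ fv) * (fs - ft))
```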
3. Definition of root systems
A root system in a real vector space V is specified by a finite subset Σ of V and a map λ 7→ λ∨ from Σ to the
linear dual space V ∨ satisfying these conditions:
(a) for each λ and µ in Σ, hλ, µ∨ i lies in Z;
(b) for each λ in Σ, hλ, λ∨ i = 2;
(c) for each λ the reflection
sλ : v 7−→ v − hv, λ∨ iλ
takes Σ to itself.
(c) for each λ the reflection
sλ∨ : v 7−→ v − hλ, viλ∨
takes Σ∨ to itself.
That sλ is in fact a reflection is a consequence of the second condition. Note that necessarily 0 does not lie in
Σ.
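The four conditions are easy to test mechanically. Below is a Python sketch of mine (not from the text) that checks (a)-(d) for finite lists of paired roots and coroots, identifying V with V^∨ by the dot product; for the simply laced example A_2, under this identification, the coroots coincide with the roots:

```python
import numpy as np

def is_root_system(Sigma, Sigma_v):
    """Check (a)-(d) for paired lists of roots and coroots; the pairing
    <lam, mu_vee> is taken to be the dot product."""
    pairs = list(zip(Sigma, Sigma_v))
    in_set = lambda x, S: any(np.array_equal(x, y) for y in S)
    for lam, lam_v in pairs:
        if lam @ lam_v != 2:                                     # (b)
            return False
        for mu, mu_v in pairs:
            if (lam @ mu_v) != int(lam @ mu_v):                  # (a) integrality
                return False
            if not in_set(mu - (mu @ lam_v) * lam, Sigma):       # (c)
                return False
            if not in_set(mu_v - (lam @ mu_v) * lam_v, Sigma_v): # (d)
                return False
    return True

# A2 in the plane x+y+z = 0; with this identification coroots equal roots.
e = np.eye(3, dtype=int)
A2 = [e[i] - e[j] for i in range(3) for j in range(3) if i != j]
assert is_root_system(A2, A2)
```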
Remark. Condition (d) does not follow from the others. For example, first choose λ = −µ in R². Since they
span a line, it is possible to find λ^∨ ≠ µ^∨ in the dual space such that
$$ \langle \lambda, \lambda^\vee \rangle = 2 \qquad \langle \mu, \lambda^\vee \rangle = -2 \qquad \langle \lambda, \mu^\vee \rangle = -2 \qquad \langle \mu, \mu^\vee \rangle = 2 $$
as in this figure:
[Figure: the vectors λ and µ = −λ, together with the two distinct lines λ^∨ = 0 and µ^∨ = 0.]
Conditions (a)-(c) are satisfied, but (d) is not.
There are many different definitions of root systems in the literature, differing in most cases only in emphasis.
Sometimes—notably when dealing with Kac-Moody algebras—the condition of finiteness is dropped, and
what I call a root system in these notes would be called a finite root system. Sometimes the extra condition
that Σ span V is imposed, but often in the subject one is interested in subsets of Σ which again give rise to
root systems that do not possess this property even if the original does. Sometimes the vector space V is
assumed to be Euclidean and the reflections orthogonal. The definition I have given is one that arises directly
from the theory of reductive groups, but there is some justification in that theory for a Euclidean structure
as well, since a semi-simple Lie algebra possesses a canonical invariant quadratic form (its Killing form).
One virtue of not starting off with a Euclidean structure is that it allows one to keep in view generalizations
relevant to Kac-Moody algebras, where the root system is not finite and there might not exist any suitable
non-degenerate inner product, let alone a Euclidean one.
The elements of Σ are called the roots of the system, those of Σ∨ its coroots. The rank of the system is the
dimension of V , and the semi-simple rank is that of the subspace V (Σ) of V spanned by Σ. The system is
called semi-simple if Σ spans V .
There is one odd thing about the conventions used for root systems, and that is the choice of which vector
space is primary, which the dual. In applications, the space V will contain roots of a reductive Lie algebra,
V ∨ the coroots. What is odd about this is that in origin the roots are linear functions on a Lie algebra, so it
might seem reasonable to let V be the Lie algebra, with the roots in its dual. But that's just not what the
literature does.
The Weyl group of the system is the group W generated by the reflections sλ acting on V .
The root system is said to be reducible if Σ is the disjoint union of two non-empty subsets Σ_1 and Σ_2 with ⟨λ, µ^∨⟩ = 0
whenever λ and µ belong to different components. Every system is a direct sum of irreducible ones.
4. Simple properties
The simplest property of a root system is immediate, but worth noting:
4.1. Proposition. If λ is a root so is −λ.
Proof. Since sλ λ = −λ.
4.2. Lemma. If λ = cµ for roots λ, µ, then c is either ±2, ±1, or ±1/2.
Proof. From integrality of ⟨λ, µ^∨⟩ and ⟨µ, λ^∨⟩.
I now introduce a semi-Euclidean structure on V , with respect to which the root reflections will be orthogonal.
The existence of such a structure is crucial, especially to the classification of root systems (and reductive
groups). The construction of some reflection-invariant Euclidean semi-norm is straightforward, but the one
introduced here is canonical.
Define the linear map
$$ \rho : V \longrightarrow V^\vee , \qquad v \longmapsto \sum_{\lambda \in \Sigma} \langle v, \lambda^\vee \rangle \lambda^\vee $$
and define a symmetric dot product on V by the formula
$$ u \cdot v = \langle u, \rho(v) \rangle = \sum_{\lambda \in \Sigma} \langle u, \lambda^\vee \rangle \langle v, \lambda^\vee \rangle . $$
The semi-norm
$$ \|v\|^2 = v \cdot v = \sum_{\lambda \in \Sigma} \langle v, \lambda^\vee \rangle^2 $$
is positive semi-definite, vanishing precisely on
$$ \mathrm{RAD}(V) = (\Sigma^\vee)^\perp . $$
Since Σ∨ is W -invariant, the semi-norm kvk2 is also W -invariant. That kvk2 vanishes on RAD (V ) mirrors
the fact that the Killing form of a reductive Lie algebra vanishes on the radical of the algebra.
4.3. Proposition. For every root λ
$$ \|\lambda\|^2 \lambda^\vee = 2\rho(\lambda) . $$
Thus although the map λ ↦ λ^∨ is not the restriction of a linear map, it is simply related to such a restriction.
Proof. For every µ in Σ
$$ s_{\lambda^\vee} \mu^\vee = \mu^\vee - \langle \lambda, \mu^\vee \rangle \lambda^\vee $$
$$ \langle \lambda, \mu^\vee \rangle \lambda^\vee = \mu^\vee - s_{\lambda^\vee} \mu^\vee $$
$$ \langle \lambda, \mu^\vee \rangle^2 \lambda^\vee
   = \langle \lambda, \mu^\vee \rangle \mu^\vee - \langle \lambda, \mu^\vee \rangle\, s_{\lambda^\vee} \mu^\vee
   = \langle \lambda, \mu^\vee \rangle \mu^\vee + \langle s_\lambda \lambda, \mu^\vee \rangle\, s_{\lambda^\vee} \mu^\vee
   = \langle \lambda, \mu^\vee \rangle \mu^\vee + \langle \lambda, s_{\lambda^\vee} \mu^\vee \rangle\, s_{\lambda^\vee} \mu^\vee . $$
But since s_{λ^∨} is a bijection of Σ^∨ with itself, we can conclude by summing over µ in Σ.
From this and the definition of ρ:
4.4. Corollary. For every v in V and root λ
$$ \langle v, \lambda^\vee \rangle = 2\, \frac{v \cdot \lambda}{\lambda \cdot \lambda} . $$
Thus the formula for the reflection s_λ is that for an orthogonal reflection.
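Proposition 4.3 and Corollary 4.4 can be verified numerically for a concrete system. The sketch below (my own, not from the text) uses a standard model of B_2: short roots ±e_i with coroots ±2e_i, long roots ±e_1±e_2 equal to their own coroots, the pairing being the dot product:

```python
import numpy as np

e1, e2 = np.eye(2, dtype=int)
short = [s * v for v in (e1, e2) for s in (1, -1)]
long_ = [np.array([a, b]) for a in (1, -1) for b in (1, -1)]
Sigma   = short + long_
Sigma_v = [2 * v for v in short] + long_      # coroots of B2

rho  = lambda v: sum((v @ mv) * mv for mv in Sigma_v)            # rho : V -> V_vee
dotc = lambda u, v: sum((u @ mv) * (v @ mv) for mv in Sigma_v)   # canonical product

# Proposition 4.3: ||lam||^2 lam_vee = 2 rho(lam) for every root
for lam, lam_v in zip(Sigma, Sigma_v):
    assert np.array_equal(dotc(lam, lam) * lam_v, 2 * rho(lam))

# Corollary 4.4, cross-multiplied to stay in integers:
# <v, lam_vee> (lam . lam) = 2 (v . lam) for any v
v = np.array([3, 5])
for lam, lam_v in zip(Sigma, Sigma_v):
    assert (v @ lam_v) * dotc(lam, lam) == 2 * dotc(v, lam)
```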
4.5. Corollary. The semi-simple ranks of a root system and of its dual are equal.
Proof. The map
$$ \lambda \longmapsto \|\lambda\|^2 \lambda^\vee $$
is the restriction of the linear map 2ρ, so ρ is a surjection from V(Σ) onto V^∨(Σ^∨). Apply the same reasoning to
the dual system to see that ρ^∨ ◦ ρ must be an isomorphism on V(Σ), hence ρ|V(Σ) an injection as well.
4.6. Corollary. The space V (Σ) spanned by Σ is complementary to RAD (V ).
So we have a direct sum decomposition
V = RAD(V ) ⊕ V (Σ) .
Proof. Because the kernel of ρ is RAD(V ).
4.7. Corollary. The set Σ is contained in a lattice of V (Σ).
Proof. Because it is contained in the lattice of v such that hv, λ∨ i is integral for all λ∨ in some linearly
independent subset of Σ∨ .
4.8. Corollary. The Weyl group is finite.
Proof. It fixes all v annihilated by Σ∨ and therefore embeds into the group of permutations of Σ.
If λ is in Σ, its image λ∨ in V ∨ may be restricted to V (Σ).
4.9. Proposition. The space V (Σ), the subset Σ, and restriction of λ∨ to V (Σ) define a root system.
It is called the semi-simple root system associated to the original.
4.10. Proposition. The pair V^∨, Σ^∨ and the inverse of λ ↦ λ^∨ define a root system.
It is called the dual root system.
As a group, the Weyl group of the root system (V, Σ, V ∨ , Σ∨ ) is isomorphic to the Weyl group of its dual
system, because:
4.11. Proposition. The contragredient of s_{λ^∨,λ} is s_{λ,λ^∨}.
Proof. It has to be shown that
$$ \langle s_\lambda u, v \rangle = \langle u, s_{\lambda^\vee} v \rangle . $$
The first is
$$ \langle u - \langle u, \lambda^\vee \rangle \lambda, v \rangle = \langle u, v \rangle - \langle u, \lambda^\vee \rangle \langle \lambda, v \rangle $$
and the second is
$$ \langle u, v - \langle \lambda, v \rangle \lambda^\vee \rangle = \langle u, v \rangle - \langle \lambda, v \rangle \langle u, \lambda^\vee \rangle . $$
4.12. Proposition. For all roots λ and µ
$$ (s_\lambda \mu)^\vee = s_{\lambda^\vee} \mu^\vee . $$
Proof. It must be checked that
$$ \langle \nu, (s_\lambda \mu)^\vee \rangle = \langle \nu, s_{\lambda^\vee} \mu^\vee \rangle $$
for all ν in Σ. Apply (1.2) a couple of times.
4.13. Corollary. For any roots λ, µ we have
$$ s_{s_\lambda \mu} = s_\lambda s_\mu s_\lambda . $$
Proof. The algebra becomes simpler if one separates this into two halves: (a) both transformations take sλ µ
to −sλ µ; (b) if hv, (sλ µ)∨ i = 0, then both take v to itself. Verifying these, using the previous formula for
(sλ µ)∨ , is straightforward.
Any set of roots Σ is the union of mutually orthogonal subsets Σi , each irreducible. The following is
straightforward.
4.14. Proposition. If Σ is the union of mutually orthogonal subsets Σ_i, then $V(\Sigma) = \bigoplus V(\Sigma_i)$.
5. Tits’ criterion for coroots
(1.2) characterizes λ∨ in terms of an invariant inner product. But there is another way to specify λ∨ in terms of
λ, one which works even for infinite root systems with no invariant inner product. The following observation
can be found in [Tits:1966].
5.1. Corollary. The coroot λ^∨ is the unique element of V^∨ satisfying these conditions:
(a) ⟨λ, λ^∨⟩ = 2;
(b) it lies in the subspace of V^∨ spanned by Σ^∨;
(c) for any µ in Σ, the sum $\sum_\nu \langle \nu, \lambda^\vee \rangle$ over the λ-string (µ + Zλ) ∩ Σ vanishes.
Proof. The necessity of (a) and (b) is immediate. As for (c), it is true because the reflection s_λ preserves
(µ + Zλ) ∩ Σ and ⟨s_λ v, λ^∨⟩ = −⟨v, λ^∨⟩.
To prove that these conditions imply uniqueness, suppose ℓ^∨ to be another vector satisfying the same
conditions, and let ν^∨ = ℓ^∨ − λ^∨. Suppose µ to be a root. It is immediate that ⟨λ, ν^∨⟩ = 0. The sum of ⟨·, ν^∨⟩ over the
λ-string through µ is on the one hand zero, but on the other a positive multiple of ⟨µ, ν^∨⟩. Hence ν^∨ annihilates all of Σ; since by (b) it lies in the span of Σ^∨, it must vanish.
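Tits' criterion is also easy to test by machine. The following Python sketch (my illustration, using the concrete B_2 model with short roots ±e_i, coroots ±2e_i, and long roots equal to their own coroots) verifies condition (c), the vanishing of the sum over every λ-string:

```python
import numpy as np

Sigma = [np.array(v) for v in
         [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1), (1, -1), (-1, 1)]]
coroot = {(1, 0): (2, 0), (-1, 0): (-2, 0), (0, 1): (0, 2), (0, -1): (0, -2),
          (1, 1): (1, 1), (-1, -1): (-1, -1), (1, -1): (1, -1), (-1, 1): (-1, 1)}

def string(mu, lam):
    # the lam-string (mu + Z lam) intersected with Sigma; strings in B2 are short,
    # so scanning k in [-3, 3] suffices
    return [nu for nu in Sigma
            if any(np.array_equal(nu, mu + k * lam) for k in range(-3, 4))]

# condition (c): the sum of <nu, lam_vee> over every lam-string vanishes
for lam in Sigma:
    lam_v = np.array(coroot[tuple(lam)])
    for mu in Sigma:
        assert sum(nu @ lam_v for nu in string(mu, lam)) == 0
```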
To show how this can be used, I'll reprove Proposition 4.12. According to Tits' criterion, it must be shown
that
(a) ⟨s_λ µ, s_{λ^∨} µ^∨⟩ = 2;
(b) s_{λ^∨} µ^∨ is in the linear span of Σ^∨;
(c) for any root τ we have
$$ \sum_\nu \langle \nu, s_{\lambda^\vee} \mu^\vee \rangle = 0 $$
where the sum is over ν in (τ + Z s_λ µ) ∩ Σ.
Condition (a) is immediate, since s_{λ^∨} is the contragredient of s_λ. Condition (b) is trivial. As for (c), we know
that
$$ \sum_{\nu \in (\chi + \mathbb{Z}\mu) \cap \Sigma} \langle \nu, \mu^\vee \rangle = 0 $$
for all roots χ. But
$$ \langle \nu, \mu^\vee \rangle = \langle s_\lambda \nu, s_{\lambda^\vee} \mu^\vee \rangle $$
and if we replace χ by s_λ τ we obtain equation (c).
6. Other characterizations of root systems
If V is an arbitrary Euclidean space, for every v ≠ 0 in V define
$$ v^\diamond = \frac{2}{v \cdot v}\, v . \tag{6.1} $$
Define v^∨ in V^∨ by the formula
$$ \langle u, v^\vee \rangle = u \cdot v^\diamond . \tag{6.2} $$
Given a root system in a space V, we know that there exists a Euclidean structure on V such that for every λ
in Σ:
(a) 2(µ · λ)/(λ · λ) is integral for every µ in Σ;
(b) the subset Σ is stable under the orthogonal reflection
$$ s_\lambda : v \longmapsto v - 2\, \frac{v \cdot \lambda}{\lambda \cdot \lambda}\, \lambda . $$
Conversely, suppose that we are given a Euclidean vector space V .
6.3. Proposition. Suppose Σ to be a finite subset of V − {0}, and suppose that for each λ in Σ
(a) µ · λ^⋄ is integral for every µ in Σ;
(b) the subset Σ is stable under the orthogonal reflection
$$ s_\lambda : v \longmapsto v - (v \cdot \lambda^\diamond)\, \lambda . $$
Then V, Σ, and λ ↦ λ^∨ define a root system.
Here λ^⋄ and λ^∨ are defined by (6.1) and (6.2).
This is straightforward to verify. The point is that by using the metric we can avoid direct consideration of
Σ∨ . Here is one reward:
6.4. Proposition. Suppose (V, Σ) to be a root system with invariant inner product. Suppose U to be a vector
subspace of V, Σ_U = Σ ∩ U, Σ^∨_U = (Σ_U)^∨. Then V, Σ_U and λ ↦ λ^∨ define a root system.
Proof. This is immediate from Proposition 6.3.
In practice, root systems can be constructed from a very small amount of data. If S is a finite set of orthogonal
reflections and Ξ a finite subset of V , the saturation of Ξ with respect to S is the smallest subset of V
containing Ξ and stable under S .
6.5. Proposition. Suppose Ξ to be a finite subset of a lattice L in the Euclidean space V such that α • β ♦ is in
Z for every α, β in Ξ. Then the saturation Σ of Ξ with respect to the sα for α in Ξ is finite, and (a) and (b) of
Proposition 6.3 are satisfied for all λ in Σ.
Proof. Let S be the set of reflections
$$ s_\alpha : v \longmapsto v - (v \cdot \alpha^\diamond)\, \alpha $$
for α in Ξ. Let Ξ_0 = Ξ and Ξ_{n+1} = Ξ_n ∪ S · Ξ_n. By induction, each Ξ_n is contained in L, and all elements of
Ξ_n are of bounded length. Therefore the sequence Ξ_n is stationary. Let Σ = Ξ_n for n ≫ 0.
At this point we know that (1) sΣ = Σ for all s in S; (2) every λ in Σ is an integral linear combination
of elements of Ξ; (3) λ · α^⋄ is integral for all α in Ξ and λ in Σ; (4) every λ in Σ is obtained by a chain of
reflections in S from an element of Ξ.
Define the depth of λ to be the least n such that λ lies in Ξ_n. To prove that λ · µ^⋄ is integral for all λ, µ in Σ, it
suffices to prove that α · µ^⋄ is integral for α in Ξ, µ in Σ. I do this by induction on the depth of µ. For n = 0
this is the basic hypothesis. Since
$$ (s_\alpha \mu)^\diamond = \frac{2\, s_\alpha \mu}{\|s_\alpha \mu\|^2}
   = \frac{2\big(\mu - (\mu \cdot \alpha^\diamond)\, \alpha\big)}{\|\mu\|^2}
   = \mu^\diamond - (\mu^\diamond \cdot \alpha)\, \alpha^\diamond $$
we have
$$ \beta \cdot (s_\alpha \mu)^\diamond = \beta \cdot \mu^\diamond - (\mu^\diamond \cdot \alpha)(\beta \cdot \alpha^\diamond) , $$
which gives the induction step.
Condition (b) follows from the identity
$$ s_{s_\lambda \mu} = s_\lambda s_\mu s_\lambda , $$
and induction on depth.
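Proposition 6.5 is effectively an algorithm. Here is a Python sketch of it (my own; the realization of G_2 inside the plane x+y+z = 0, with short simple root e_1 − e_2 and long simple root −2e_1 + e_2 + e_3, is one standard choice satisfying the integrality hypothesis):

```python
import numpy as np
from collections import Counter

def saturate(Xi):
    """Smallest set containing Xi and stable under s_alpha for alpha in Xi
    (the construction of Proposition 6.5); vectors are tuples so they hash."""
    def refl(v, a):
        v, a = np.array(v), np.array(a)
        return tuple(v - (2 * (v @ a) // (a @ a)) * a)   # v - (v . a_diamond) a
    Sigma = set(Xi)
    while True:
        new = {refl(v, a) for v in Sigma for a in Xi} - Sigma
        if not new:
            return Sigma
        Sigma |= new

# G2 in the plane x+y+z = 0: short alpha = e1 - e2, long beta = -2e1 + e2 + e3
Sigma = saturate({(1, -1, 0), (-2, 1, 1)})
assert len(Sigma) == 12
# six short roots of squared length 2 and six long ones of squared length 6
assert Counter(int(np.array(v) @ np.array(v)) for v in Sigma) == Counter({2: 6, 6: 6})
```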
7. Examples
Suppose (V, Σ, λ ↦ λ^∨) to be a root system.
RANK ONE SYSTEMS. Suppose λ to be in Σ. Then −λ also lies in Σ. Any other root must be a multiple of
λ, say µ = cλ. As we have seen, c = ±1/2, ±1, or ±2. All cases can occur: in dimension one both
$$ \{\pm\lambda\} \quad \text{and} \quad \{\pm\lambda, \pm 2\lambda\} $$
are possible root systems. To summarize:
7.1. Lemma. If λ and cλ are both roots, then c = ±1, ±1/2, or ±2.
A root λ is called indivisible if λ/2 is not a root. It is easy to see that:
7.2. Proposition. The indivisible roots in any root system also make up a root system.
RANK TWO SYSTEMS. Suppose α, β to be linearly independent vectors in a Euclidean plane whose reflections
generate a finite group preserving the lattice they span. The product sα sβ is a rotation preserving the lattice,
and must therefore have order 2, 3, 4, or 6. In the first case they commute. The other possibilities are shown
in the following diagrams. The last is not reduced.
[Figures: the rank-two systems A_2, B_2 = C_2, G_2, and BC_2.]
8. Bases
In this section I’ll summarize without proof further important developments. Suppose given a root system
(V, Σ, V ∨ , Σ∨ ).
The connected components of the complement in V ∨ of the root hyperplanes λ = 0 for λ in Σ are called the
chambers of the system.
8.1. Proposition. The chambers are simplicial cones.
This means that a chamber is defined by inequalities α > 0 as α ranges over a set of linearly independent
linear functions in V .
Fix one of these, say C. It is open in V^∨. Define ∆ to be the set of roots defining the walls of C. The elements of ∆
are called the simple roots determined by the choice of C. As already noted, they are linearly independent.
By definition, no root hyperplane intersects C. Those roots that are positive on C are called positive, and the
rest negative.
8.2. Proposition. Every positive root is a non-negative, integral, linear combination of simple roots.
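For type A_{n−1} Proposition 8.2 is a telescoping sum: with the simple roots α_i = ε_i − ε_{i+1}, one has ε_i − ε_j = α_i + α_{i+1} + ... + α_{j−1} for i < j. A short Python check (my own sketch, not from the text):

```python
import numpy as np

n = 4
e = np.eye(n, dtype=int)
simple = [e[i] - e[i + 1] for i in range(n - 1)]   # alpha_i = eps_i - eps_{i+1}

# every positive root eps_i - eps_j (i < j) is a non-negative integral
# combination of simple roots, here a telescoping sum alpha_i + ... + alpha_{j-1}
for i in range(n):
    for j in range(i + 1, n):
        assert np.array_equal(e[i] - e[j], sum(simple[i:j]))
```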
The chamber is a strong fundamental domain for W . The group W is generated by the elementary reflections
s_α for α in ∆. Because C is simplicial, its closed faces are all of the form
$$ \overline{C}_\Theta = \{\, v \in \overline{C} \mid \langle \alpha, v \rangle = 0 \text{ for all } \alpha \in \Theta \,\} $$
for some subset Θ ⊆ ∆. Let C_Θ be the relative interior of $\overline{C}_\Theta$.
8.3. Proposition. The closed face $\overline{C}_\Theta$ is exactly the set of points of $\overline{C}$ fixed by the subgroup W_Θ generated by the
s_α with α in Θ.
Let S be the set of elementary reflections s_α. An expression
$$ w = s_{\alpha_1} \cdots s_{\alpha_n} \qquad (\alpha_i \in \Delta) $$
is called reduced if it is of minimal length among all such expressions. This minimal length ℓ(w) is called the
length of w.
There is a basic relationship between combinatorics in W and geometry:
8.4. Proposition. If α is a simple root, then ℓ(wsα ) = ℓ(w) + 1 if and only if wα > 0.
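For W = S_n with α_i = ε_i − ε_{i+1}, Proposition 8.4 says that right multiplication by s_i increases the number of inversions exactly when w(i) < w(i+1). This exhaustive Python check over S_4 (my own sketch, not from the text) confirms it:

```python
from itertools import permutations

n = 4
def length(w):                       # number of inversions = l(w) for w in S_n
    return sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])

def times_s(w, i):                   # right multiplication by s_i = s_{alpha_i}
    w = list(w)
    w[i], w[i + 1] = w[i + 1], w[i]
    return tuple(w)

# l(w s_i) = l(w) + 1  <=>  w(alpha_i) > 0, i.e. w(i) < w(i+1)
for w in permutations(range(n)):
    for i in range(n - 1):
        assert (length(times_s(w, i)) == length(w) + 1) == (w[i] < w[i + 1])
```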
A Cartan matrix is an integral square matrix C = (c_{α,β}), indexed by ∆ × ∆ for some finite set ∆, with these properties:
(a) all diagonal entries c_{α,α} are equal to 2;
(b) c_{α,β} ≤ 0 for α ≠ β;
(c) there exists a positive diagonal matrix D such that CD is positive definite symmetric.
8.5. Proposition. The matrix (⟨α, β^∨⟩) is a Cartan matrix. Conversely, every Cartan matrix is the one attached to some finite root
system.
If α and β are two elements of ∆, then the product s_α s_β is a two-dimensional rotation of finite order
m = m_{α,β}. Define
$$ n_{\alpha,\beta} = \langle \alpha, \beta^\vee \rangle \langle \beta, \alpha^\vee \rangle . $$
It must be a non-negative integer, and since W preserves a lattice the only cases that occur are 0, 1, 2, or 3. The order is
$$ m = \begin{cases} 2 & \text{if } n_{\alpha,\beta} = 0; \\ 3 & \text{if } n_{\alpha,\beta} = 1; \\ 4 & \text{if } n_{\alpha,\beta} = 2; \\ 6 & \text{if } n_{\alpha,\beta} = 3. \end{cases} $$
In other words, we recover the reduced systems of rank 2 described earlier.
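The table can be confirmed numerically: n_{α,β} = 4cos²θ, where θ is the angle between α and β, and s_α s_β is rotation through 2θ. The Python sketch below (mine; the three vector pairs are standard rank-two configurations with n_{α,β} = 1, 2, 3) computes the order of s_α s_β directly:

```python
import numpy as np

def order_of_product(alpha, beta):
    """Order of s_alpha s_beta for two non-zero vectors in the Euclidean plane."""
    def refl(a):
        a = np.asarray(a, dtype=float)
        return np.eye(2) - 2 * np.outer(a, a) / (a @ a)   # reflection in a-perp
    r = refl(alpha) @ refl(beta)                          # a rotation
    m, p = 1, r
    while not np.allclose(p, np.eye(2)):
        m, p = m + 1, p @ r
    return m

# n_{alpha,beta} = 1, 2, 3 (angles 120, 135, 150 degrees) give m = 3, 4, 6
assert order_of_product((1, 0), (-1 / 2, np.sqrt(3) / 2)) == 3   # A2
assert order_of_product((1, 0), (-1, 1)) == 4                    # B2
assert order_of_product((1, 0), (-3 / 2, np.sqrt(3) / 2)) == 6   # G2
```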
Let S be the set of reflections sα for α in ∆.
8.6. Proposition. The group W is a Coxeter group with generating set S and relations
$$ s_\alpha^2 = I, \qquad (s_\alpha s_\beta)^{m_{\alpha,\beta}} = I . $$
9. Root data
A (based) root datum is a quadruple (L, ∆, L^∨, ∆^∨) in which
(a) L is a free Z-module of finite rank;
(b) L^∨ = Hom(L, Z);
(c) ∆ is a finite subset of L;
(d) α ↦ α^∨ is a map from ∆ to L^∨ such that (⟨α, β^∨⟩) is a Cartan matrix;
(e) if W is the group generated by the reflections
$$ s_\alpha : v \longmapsto v - \langle v, \alpha^\vee \rangle \alpha , $$
then Σ = W · ∆ is finite.
Root data determine reductive groups over an algebraically closed field just as root systems determine
reductive Lie algebras. The simplest examples of root data are the toral data, in which ∆ = ∅.
For a slightly more interesting example, look at
G = SLn
B = upper triangular matrices in G
T = diagonal matrices in B .
Take L = X^*(T), ∆ the set of characters
$$ \alpha_i : \mathrm{diag}(x_i) \longmapsto x^{\varepsilon_i - \varepsilon_{i+1}} = x_i/x_{i+1} \qquad (1 \le i < n) . $$
The positive roots are those of the form x_i/x_j with i < j, which are those that occur in the eigenspace
decomposition of the adjoint action of T on the Lie algebra b of B. The lattice L^∨ may be identified with the
cocharacter group X_*(T).
There is an analogue of Corollary 4.6 valid for root data. Suppose L_1 and L_2 to be two root data of the same
rank. An isogeny from the first to the second is a map from L^∨_1 to L^∨_2 such that (a) the quotient of L^∨_2 by the
image of L^∨_1 is finite; (b) ∆^∨_1 is mapped to ∆^∨_2; (c) under the dual map, ∆_2 is taken to ∆_1.
A root datum is called simply connected if ∆∨ is a basis of L∨ . For example, the root datum of SLn is simply
connected.
9.1. Proposition. Any root datum is an isogeny quotient of a direct product of a toral lattice and a simply
connected root datum.
10. References
1. N. Bourbaki, Chapitres IV, V, et VI of Groupes et algèbres de Lie, Hermann, 1968.
2. J. Tits, Normalisateurs de tores I. Groupes de Coxeter étendus, Journal of Algebra 4 (1966), 96-116.