Lie Theory
Math 534 - Fall 2011
Julia Gordon
Lecture notes by Pooya Ronagh
Department of mathematics, University of British Columbia
Room 121 - 1984 Mathematics Road, BC, Canada V6T 1Z2
E-mail address: pooya@math.ubc.ca
Contents

Chapter 1. Lie algebras
1. Solvable, Nilpotent, and Semisimple Lie algebras
2. Representations
3. Generators, relations and dim 2
4. Nilpotency and Engel's theorem
5. Solvability and Lie's theorem
6. Cartan's criterion
7. Killing form
8. Structure of semisimple algebras

Chapter 2. Modules and representations
1. Lie algebra modules
2. Complete reducibility (a special case)
3. Jordan decomposition
4. Representations of sl2(C)

Chapter 3. Root space decomposition
1. Toral algebras
2. The decomposition
3. Properties of the root spaces

Chapter 4. Root systems
1. Basic definitions
2. Coxeter graphs and Dynkin diagrams

Chapter 5. Reconstructing the Lie algebras from root systems
1. From Dynkin diagrams to root systems
2. Automorphisms and conjugacy
3. Borel and Cartan subalgebras
4. From root systems to Borel subalgebras
5. The automorphism group of a Lie algebra

Chapter 6. Universal enveloping algebras
1. Universal property
2. Generators, relations, and the existence theorem

Chapter 7. Representation Theory
1. Weight lattice
2. Classification of representations
CHAPTER 1
Lie algebras
We work for now over an arbitrary field, but as soon as we start the classification problem we will have to restrict to C.
Definition 1. Let F be an arbitrary field and g a finite-dimensional vector space over F with an operation
[·, ·] : g × g → g,
called the Lie bracket (or commutator), that is
(1) bilinear;
(2) [x, x] = 0;
(3) [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0.
Note that if char F ≠ 2 the second condition is equivalent to [x, y] = −[y, x]. A Lie subalgebra is a subspace of g closed under the Lie bracket. A homomorphism of Lie algebras is a linear map f : g1 → g2 preserving the Lie structure.
Example 0.1. gln = Mn×n(F) has famous subalgebras (say F = C):
son = {X ∈ gln : XJ + JX^t = 0},
where
J = ( 0 I_{n/2} ; I_{n/2} 0 )
if n is even, and
J = ( 1 0 0 ; 0 0 I ; 0 I 0 )
if n is odd. More generally, J can be taken to be the matrix of any non-degenerate symmetric bilinear form over our field F. Other examples are spn and sln = {X : tr X = 0}.
⌟
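As a quick numerical sanity check (a sketch, assuming numpy and using n = 4 with the even-case J above), one can verify that the defining condition of son is closed under the commutator, so it really is a Lie subalgebra:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# the even-case J from the example: two off-diagonal identity blocks
J = np.block([[np.zeros((2, 2)), np.eye(2)], [np.eye(2), np.zeros((2, 2))]])

def random_so_element():
    # X J + J X^t = 0  <=>  X = S @ J with S antisymmetric (here J = J^t and J^2 = I)
    A = rng.standard_normal((n, n))
    return (A - A.T) @ J

X, Y = random_so_element(), random_so_element()
bracket = X @ Y - Y @ X
# the commutator again satisfies the defining equation of so_n
assert np.allclose(bracket @ J + J @ bracket.T, 0)
```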
More definitions are to follow: g is called abelian if [g, g] = 0. Note that if g is one-dimensional then g is abelian. A subalgebra h ⊂ g is called an ideal whenever [x, y] ∈ h for all x ∈ g and y ∈ h.
1. Solvable, Nilpotent, and Semisimple Lie algebras
Definition 2. With the notation D^1 g = [g, g] and D^k g = [g, D^{k−1} g], the lower central series for the Lie algebra g is
g ⊃ D^1 g ⊃ D^2 g ⊃ ⋯.
Setting D_1 g = D^1 g but D_k g = [D_{k−1} g, D_{k−1} g], we get the derived series
g ⊃ D_1 g ⊃ D_2 g ⊃ ⋯.
Check that D^k g and D_k g are ideals of g.
Definition 3. g is nilpotent if D^k g = 0 for some k, and is called solvable if D_k g = 0 for some k. g is called semisimple if it contains no nontrivial solvable ideals, and g is simple if it is not one-dimensional (so [g, g] ≠ 0) and it has no nontrivial ideals.
Note that the derived series is contained in the lower central series hence nilpotent implies
solvable.
Example 1.1. The subalgebra of gln consisting of strictly upper-triangular matrices is called nn and is nilpotent. The subalgebra bn of upper-triangular matrices is solvable.
⌟
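A small numpy experiment (a sketch with random matrices of size n = 5, not from the text) illustrates why these series terminate: the bracket of upper-triangular matrices is strictly upper-triangular, and the bracket of strictly upper-triangular matrices vanishes one superdiagonal further up:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = np.triu(rng.standard_normal((n, n)))         # elements of b_n
B = np.triu(rng.standard_normal((n, n)))
C = A @ B - B @ A
assert np.allclose(np.diag(C), 0)                # [b_n, b_n] lands in n_n

N1 = np.triu(rng.standard_normal((n, n)), k=1)   # elements of n_n
N2 = np.triu(rng.standard_normal((n, n)), k=1)
D = N1 @ N2 - N2 @ N1
assert np.allclose(np.tril(D, k=1), 0)           # zero on and below the first superdiagonal
```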
Lemma 1. The property of being solvable/nilpotent is inherited by subalgebras and homomorphic images.
Remark. Subalgebras of a semisimple Lie algebra do not have to be semisimple.
Example 1.2. sl2 is simple. Note that the matrices
X = ( 0 1 ; 0 0 ),  Y = ( 0 0 ; 1 0 ),  and  H = ( 1 0 ; 0 −1 )
form a basis (the standard basis) of sl2 with commutation relations
[H, X] = 2X,  [H, Y] = −2Y,  and  [X, Y] = H.
Suppose 0 ≠ h ⊂ sl2 is an ideal and let 0 ≠ W = aX + bY + cH ∈ h. If a ≠ 0, bracket with Y twice to get Y ∈ h; likewise if b ≠ 0, bracket with X twice to get X ∈ h; and if a = b = 0 then H ∈ h directly. In each case h contains H = [X, Y], and then [H, X] = 2X and [H, Y] = −2Y give h = sl2.
⌟
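These relations are easy to check numerically; a minimal numpy sketch:

```python
import numpy as np

X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])

def br(A, B):
    # the Lie bracket of matrices
    return A @ B - B @ A

assert np.allclose(br(H, X), 2 * X)
assert np.allclose(br(H, Y), -2 * Y)
assert np.allclose(br(X, Y), H)
```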
Proposition 1. There exists a unique maximal solvable ideal (called the radical and denoted by Rad(g)) in any Lie algebra g.
Proof. We'll prove that a sum of two solvable ideals is a solvable ideal. Since (a + b)/a ≅ b/(a ∩ b), the sequence
0 → a → a + b → b/(a ∩ b) → 0
is exact, and both a and b/(a ∩ b) are solvable, so a + b is solvable by the next lemma. The same argument together with Zorn's lemma proves the infinite-dimensional case. □
Lemma 2. For h ⊂ g an ideal, g is solvable if and only if h and g/h are solvable.
Proof. The only important thing to remember is that a Lie algebra g is solvable iff there is a series
g0 = g ⊃ g1 ⊃ g2 ⊃ ⋯ ⊃ gn = 0
such that each gi/gi+1 is abelian. This is easy to show, since then D_k g ⊂ g_k for all k by an inductive argument. □
So g is semisimple if and only if Rad(g) = {0}.
2. Representations
A representation of g is, as usual, a homomorphism of Lie algebras ρ : g → gl(V) for some vector space V (that we assume to be finite-dimensional). A representation is called faithful if it is an injective homomorphism. A representation is called irreducible if it has no nontrivial subrepresentation.
Theorem 2.1 (Lie). If g is solvable and g ⊂ gl(V), then there is v ∈ V that is a common eigenvector for all X ∈ g.
If g ⊂ gl(V ) is a representation, g has a one-dimensional subrepresentation if and only if
there exists a common eigenvector. But recall that this does not say that every representation of a solvable Lie algebra is a direct sum of 1-dimensional representations.
The mapping
ad : g → gl(g),  ad(X)(Y) = [X, Y],
is a homomorphism (by an application of the Jacobi identity) and is called the adjoint representation. The kernel of ad is the center
Z(g) = {X : [X, Y] = 0, ∀Y}.
Check that if g is semisimple then Z(g) = {0}. This proves that every semisimple Lie
algebra has a faithful representation.
Remark. This is true for any Lie algebra by Ado’s theorem.
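A numpy sketch of the adjoint representation of sl2 (the helper names br, coords and ad are ad hoc, not from the text) verifies that ad is indeed a homomorphism of Lie algebras:

```python
import numpy as np

X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])
basis = [X, Y, H]

def br(A, B):
    return A @ B - B @ A

def coords(M):
    # coordinates of a traceless M = a*X + b*Y + c*H in the standard basis
    return np.array([M[0, 1], M[1, 0], M[0, 0]])

def ad(Z):
    # the matrix of ad(Z) acting on sl2 in the basis (X, Y, H)
    return np.column_stack([coords(br(Z, b)) for b in basis])

# ad is a homomorphism: ad([A, B]) = [ad A, ad B]
for A in basis:
    for B in basis:
        assert np.allclose(ad(br(A, B)), br(ad(A), ad(B)))
```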
3. Generators, relations and dim 2
Let g be a Lie algebra and x1, ⋯, xn a basis, satisfying the relations
[xi, xj] = ∑_{k=1}^n a_ij^k x_k.
The a_ij^k are called the structure constants. By the Lie algebra axioms they satisfy the relations
a_ij^k = −a_ji^k,  ∑_k ( a_ij^k a_kl^m + a_jl^k a_ki^m + a_li^k a_kj^m ) = 0.
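These two relations can be checked numerically for the structure constants of sl2 in the standard basis (a sketch; the array a[i, j, k] holds the constant a_ij^k):

```python
import numpy as np

X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])
basis = [X, Y, H]

def coords(M):
    # coordinates of a traceless M = a*X + b*Y + c*H
    return np.array([M[0, 1], M[1, 0], M[0, 0]])

n = 3
a = np.zeros((n, n, n))  # a[i, j, k] = a_ij^k
for i in range(n):
    for j in range(n):
        a[i, j] = coords(basis[i] @ basis[j] - basis[j] @ basis[i])

# antisymmetry in the lower indices: a_ij^k = -a_ji^k
assert np.allclose(a, -a.transpose(1, 0, 2))

# the quadratic relation coming from the Jacobi identity
for i in range(n):
    for j in range(n):
        for l in range(n):
            for m in range(n):
                s = sum(a[i, j, k] * a[k, l, m]
                        + a[j, l, k] * a[k, i, m]
                        + a[l, i, k] * a[k, j, m] for k in range(n))
                assert abs(s) < 1e-12
```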
If dim g = 2 and g is non-abelian, let x, y be any basis. Then [g, g] = ⟨[x, y]⟩, and g ⊋ [g, g] gives a 1-dimensional subspace. We can pick x to be a generator of [g, g] and complete it to a basis; then [x, y] = cx with c ≠ 0, and rescaling y gives [x, y] = x. So there is a unique non-abelian Lie algebra in dimension 2. It is solvable but not simple or semisimple. So there is no semisimple Lie algebra in dimensions less than 3; sl2 is semisimple and 3-dimensional.
4. Nilpotency and Engel’s theorem
Engel's Theorem (Version 1). If every element of g is ad-nilpotent, then g is a nilpotent algebra.
Engel's Theorem (Version 2). If g ⊂ gl(V) consists of nilpotent endomorphisms, then there is a nonzero v ∈ V such that X(v) = 0 for all X ∈ g.
Proof of Ver. 1 from Ver. 2. One observation is that if X ∈ gl(V) is a nilpotent element, then the adjoint action
ad(X) : gl(V) → gl(V)
is nilpotent. In fact, X being nilpotent is equivalent to the existence of a flag of subspaces
0 = V0 ⊂ V1 ⊂ ⋯ ⊂ Vk = V
such that X(Vi) ⊂ Vi−1. Then for any endomorphism Y of V, ad(X)^m(Y) is a sum of terms of the form ±X^a Y X^b with a + b = m, so for m ≥ 2k every term contains a factor X^k = 0, and ad(X)^m(Y) = 0.
Therefore, applying Version 2 to ad(g) ⊂ gl(g), we get a flag
g = V0 ⊃ V1 ⊃ ⋯ ⊃ Vk = 0
with [g, Vi] ⊂ Vi+1, from which it follows that D^i g ⊂ Vi. □
Proof. We will run an induction on the dimension of g. Our first step is to prove that there is an ideal h of codimension one: let h ⊂ g be a maximal proper subalgebra; we claim that h has codimension one and is an ideal. We have that ad(h) acts on g and preserves h, so it also acts on g/h. By what we proved above, ad(X) acts nilpotently on gl(V), hence on g and consequently on g/h. So by the induction hypothesis there is a nonzero class in g/h killed by ad(X) for all X ∈ h. In other words, there is Y ∈ g ∖ h such that ad(X)(Y) ∈ h for all X ∈ h. This implies that h′, the span of h and Y, is a Lie subalgebra of g in which h sits as an ideal of codimension one. But by maximality of h, we have h′ = g.
So let’s assume
g = ⟨h, Y ⟩.
By the induction hypothesis, there is a nonzero v ∈ V such that X(v) = 0 for all X ∈ h. Let W be the subspace of all such vectors; then W ≠ {0}. We have to show that there is w ∈ W such that Y(w) = 0. We first show Y(W) ⊂ W; for this it suffices to observe that X(Y(w)) = 0 for any w ∈ W and X ∈ h. But given w ∈ W,
X(Y(w)) = Y(X(w)) + [X, Y](w) = 0,
since X(w) = 0 and [X, Y] ∈ h kills w. Since Y is nilpotent, its restriction to W has a nonzero kernel vector, and we are done. □
Note 4.1. If X(v) = 0 for all X ∈ g, then by induction on the dimension of V there is a basis in which all elements of g are strictly upper-triangular.
⌟
5. Solvability and Lie’s theorem
Lie's Theorem. Let g be a solvable Lie algebra over C and g ⊂ gl(V); then there is v ∈ V that is a common eigenvector for all X ∈ g.
Proof. Step 1. We first prove that there is an ideal h of codimension one: consider the abelian quotient algebra g/Dg with quotient map
π : g → g/Dg.
Let a be any codimension one subspace of g/Dg (automatically an ideal, since the quotient is abelian), and set h = π^{−1}(a).
Step 2. Write g = ⟨h, Y⟩ and, by induction, let v0 be a common eigenvector for all X ∈ h:
X(v0) = λ(X)v0,  λ ∈ h∗.
Let
W = {v ∈ V : Xv = λ(X)v, ∀X ∈ h}.
So we are done by the next lemma, which we state independently: it shows Y(W) ⊂ W, and any eigenvector of Y in W is then a common eigenvector for all of g.
Lemma 3. Let h ⊂ g be an ideal, V a representation of g, λ : h → C a linear functional, and
W = {v ∈ V : Xv = λ(X)v, ∀X ∈ h}.
Then Y(W) ⊂ W for all Y ∈ g.
Proof. For any nonzero w ∈ W we want Y(w) ∈ W. But
X(Y(w)) = Y(X(w)) + [X, Y](w) = λ(X)Y(w) + λ([X, Y])w,
since X(w) = λ(X)w and [X, Y] ∈ h (h being an ideal) acts on w ∈ W by λ([X, Y]). So we want λ([X, Y]) = 0 for all X ∈ h. Let U = ⟨w, Y(w), Y²(w), ⋯⟩. Then by induction X(U) ⊂ U for all X ∈ h: in fact X(w) = λ(X)w, X(Y(w)) is a linear combination of w and Y(w), and so on. This also shows that in the basis w, Y(w), ⋯ the matrices of all X ∈ h are upper-triangular, with all diagonal entries equal to λ(X). Then
tr(X|U) = dim(U) · λ(X).
Now [X, Y] ∈ h preserves U, and since both X and Y preserve U,
0 = tr([X, Y]|U) = dim(U) · λ([X, Y]),
and the claim follows. □
By induction on dim V we then reach
Corollary 1. There exists a basis of V such that all elements of g are upper triangular.
(Reformulation: there exists a flag in V stabilized by g).
If ρ : g → gl(V) is a representation, then ρ(g) is also solvable. We apply this to the adjoint representation: by Lie's theorem, there is a flag of subspaces of g stable under ad(g) (i.e. a flag of ideals).
Corollary 2. For a solvable g, there is a chain of ideals in g
g = hn ⊃ hn−1 ⊃ ⋯ ⊃ h1 ⊃ {0}
such that dim hi = i.
Then every element X ∈ [g, g] is ad-nilpotent (hence [g, g] is nilpotent). This is because, in a basis realizing the flag above, ad of every element of g is upper-triangular, and the commutator of two upper-triangular matrices is strictly upper-triangular.
Summary: We have the short exact sequence
0 → Rad(g) → g → g/Rad(g) → 0,
where g/Rad(g) =: gss is semisimple. One can show that every irreducible representation V of g then has the form V = V0 ⊗ L, where V0 is an irreducible representation of gss and L is a one-dimensional representation.
6. Cartan’s criterion
Our first observation is that for any x, y, z ∈ gl(V),
tr([x, y]z) = tr(x[y, z]).
Theorem 6.1 (Cartan’s criterion). For g ⊂ gl(V ) suppose tr(xy) = 0 for any x ∈ [g, g], y ∈ g.
Then g is solvable.
Proof. Note that g is solvable iff [g, g] is nilpotent: sufficiency is obvious, and necessity follows from Lie's theorem. From Engel's theorem we know that if every x ∈ [g, g] is ad-nilpotent, then [g, g] is nilpotent. So we will show that any x ∈ [g, g] is nilpotent as an endomorphism of V.
Fix x ∈ [g, g], and suppose first that tr(xy) = 0 for every y ∈ gl(V) (we work over C). Look at the Jordan form of x, with eigenvalues λ1, ⋯, λn, and take y = diag(λ̄1, ⋯, λ̄n). Then tr(xy) = ∑|λi|² = 0 forces λi = 0 for every i.
The problem with this is that it is not clear why such a y lies in g. To work around this, let
M = {y ∈ gl(V) : [y, g] ⊂ [g, g]} ⊃ g.
Using the identity tr([x, y]z) = tr(x[y, z]), one checks that if tr(xy) = 0 for all x ∈ [g, g], y ∈ g, then tr(xy) = 0 for all x ∈ [g, g], y ∈ M. Now we can find our y in M. □
Corollary 3. If in the Lie algebra g,
tr(ad x · ad y) = 0, ∀x ∈ [g, g], y ∈ g,
then g is solvable.
Proof. Apply Cartan's criterion to ad(g) ⊂ gl(g); it follows that ad(g) is solvable. The kernel of ad is Z(g), which is abelian, so g is solvable by Lemma 2. □
7. Killing form
We define the Killing form
κ(x, y) = tr(ad x · ad y);
it is a symmetric bilinear form. Notation: κij = κ(xi, xj), where the xi form a basis of g.
The radical of the Killing form is
Sκ = {x ∈ g : κ(x, y) = 0, ∀y ∈ g}.
A form is called non-degenerate if its radical is zero. This is equivalent to det(κij) ≠ 0. A bilinear form on a vector space V is non-degenerate if and only if it gives an isomorphism between V and V∗ via
x ↦ (y ↦ κ(x, y)).
Theorem 7.1. g is semisimple if and only if κ is non-degenerate.
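For sl2 one can compute κ explicitly; a numpy sketch (the helpers br, coords, ad are ad hoc) in the standard basis (X, Y, H) gives κ(X, Y) = 4, κ(H, H) = 8 and all other pairings zero, so det(κij) ≠ 0, consistent with the theorem:

```python
import numpy as np

X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])
basis = [X, Y, H]

def br(A, B):
    return A @ B - B @ A

def coords(M):
    # coordinates of a traceless M = a*X + b*Y + c*H
    return np.array([M[0, 1], M[1, 0], M[0, 0]])

def ad(Z):
    return np.column_stack([coords(br(Z, b)) for b in basis])

kappa = np.array([[np.trace(ad(u) @ ad(v)) for v in basis] for u in basis])
# kappa = [[0, 4, 0], [4, 0, 0], [0, 0, 8]], so det(kappa) != 0
assert abs(np.linalg.det(kappa)) > 1e-9
# for sl_n the Killing form is 2n * tr(xy); here n = 2
trace_form = np.array([[np.trace(u @ v) for v in basis] for u in basis])
assert np.allclose(kappa, 4 * trace_form)
```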
Lemma 4. Let h ⊂ g be an ideal, and let
κh : h × h → C
be its own Killing form. Then
κh = κ|h×h.
Proof. Use the fact that if A : V → V is a linear operator with im(A) ⊂ W ⊂ V, where W is a subspace of V, then
tr(A) = tr(A|W). □
Observation:
κ([x, y], z) = κ(x, [y, z]).
Corollary 4. The radical Sκ of κ is an ideal in g.
Proof of the theorem. Recall that g is semisimple if and only if g has no nontrivial abelian ideals.
Sκ ⊂ Rad(g): if x ∈ Sκ, then
tr(ad x · ad y) = 0, ∀y ∈ g
(in particular for all y ∈ [Sκ, Sκ]). So by Cartan's criterion (in the form of Corollary 3), Sκ is solvable, and since it is an ideal, it is contained in Rad(g).
We will show that any abelian ideal is contained in Sκ. Let h be such an ideal, and pick x ∈ h, y ∈ g. We need to show κ(x, y) = 0. The operator ad x ad y maps g into h, and (ad x ad y)² maps g into [h, h], which is zero! Hence ad x ad y is nilpotent, and therefore tr(ad x ad y) = 0. □

8. Structure of semisimple algebras
Our next goal is to show that the semisimple algebras are direct sum of simple ones.
Definition 4. h1 , ⋯, hn ideals in g. We say
g = h1 + ⋯ + hn
is a direct sum if it is a direct sum as a vector space.
Remark. [hi , hj ] ⊂ hi ∩ hj = {0} so then automatically by the above condition we have
g = h1 ⊕ ⋯ ⊕ hn
is a direct sum of Lie algebras with [hi , hj ] = 0.
Theorem 8.1. If g is semisimple then there is a unique decomposition of g into simple ideals:
g = h1 ⊕ ⋯ ⊕ hn.
Proof. Let h be any ideal in g, and let
h⊥ = {y ∈ g : κ(x, y) = 0, ∀x ∈ h}
(this is an ideal since κ(x, [y, z]) = κ([x, y], z)). Note that since g is semisimple, κ is non-degenerate; therefore dim h + dim h⊥ = dim g, and all we have to check is that h ∩ h⊥ = {0}. This is the case since
tr(ad x · ad y) = 0, ∀x, y ∈ h ∩ h⊥,
so by Cartan's criterion h ∩ h⊥ is solvable. But g is semisimple, hence the intersection is the trivial ideal, and g = h ⊕ h⊥.
Now we do induction on dim g. Note that if h1 ⊂ h is an ideal in h, then it is also an ideal in g, since [h1, h⊥] ⊂ [h, h⊥] = 0.
It remains to prove the uniqueness. Let I be a simple ideal in g. Then
[I, g] = ⊕i [I, hi],
and [I, g] = I (it is a nonzero ideal of the simple algebra I, nonzero because Z(g) = 0). Each [I, hi] ⊂ I ∩ hi, and since the hi are simple, exactly one of the right-hand summands is nonzero. Then I = hi for that i. □
Corollary 5. If g is semisimple then g = [g, g].
Corollary 6. If g is semisimple, then ideals in g and homomorphic images of g are all
semisimple. (Note: this is not the case for subalgebras, unlike the solvable or nilpotent
ones.)
CHAPTER 2
Modules and representations
1. Lie algebra modules
Let g be a Lie algebra. A g-module is a vector space V with an action g × V → V that is compatible with the Lie structures:
(1) X(αv + βw) = αXv + βXw;
(2) (αX + βY)v = αXv + βYv;
(3) [X, Y].v = X.Y.v − Y.X.v.
The image of g in End(V ): take the (associative!) ring generated by ρ(g)’s. Ultimately we
want a ‘universal’ ring associated with g such that representations of g are in one-to-one
correspondence with modules over this ring.
Schur's lemma still holds. If V, W are g-modules, then Homg(V, W) consists of linear maps f : V → W such that X.f(v) = f(X.v) for all X ∈ g. Hom(V, W) ≅ V∗ ⊗ W is itself a g-module with action (X.f)(v) = X.f(v) − f(X.v).
One can generalize the Killing form as follows. Suppose ρ : g → gl(V) is a representation. Define
β(x, y) = tr(ρ(x)ρ(y)).
If g is semisimple and ρ is faithful, β is non-degenerate. Also β is associative:
β(x, [y, z]) = β([x, y], z).
(The radical Sβ of β is a solvable ideal, hence zero.)
Let x1, ⋯, xn be a basis of g, and let y1, ⋯, yn be the dual basis with respect to β:
β(xi, yj) = δij.
The Casimir operator
cρ := ∑i ρ(xi)ρ(yi)
is a linear operator on V and does not depend on the choice of basis x1, ⋯, xn.
Lemma 5. cρ commutes with the action of g.
Proof. Fix X ∈ g, let the Xi be a basis of g and the Yi the dual basis, and write
[X, Xi] = ∑j aij Xj,  [X, Yi] = ∑j bij Yj.
Then bji = −aij, because
aij = ∑k aik β(Xk, Yj) = β([X, Xi], Yj) = −β([Xi, X], Yj) = −β(Xi, [X, Yj]) = −∑k bjk β(Xi, Yk) = −bji.
Finally we do the computation:
[ρ(X), cρ] = ∑i [ρ(X), ρ(Xi)ρ(Yi)]
= ∑i [ρ(X), ρ(Xi)]ρ(Yi) + ∑i ρ(Xi)[ρ(X), ρ(Yi)]
= ∑i ∑j aij ρ(Xj)ρ(Yi) + ∑i ∑j bij ρ(Xi)ρ(Yj) = 0,
where the last equality follows by swapping i and j in the second double sum and using bji = −aij. □
Example 1.1. Let g = sl2 and ρ the identity representation on the 2-dimensional space. The dual basis is
X∗ = Y,  Y∗ = X,  H∗ = (1/2)H,
so
cρ = XY + YX + (1/2)H² = ( 3/2 0 ; 0 3/2 ).
⌟
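The computation in the example can be replayed in numpy (a sketch; the dual-basis pairs are hard-coded from the example):

```python
import numpy as np

X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])

# dual basis with respect to beta(u, v) = tr(uv): X* = Y, Y* = X, H* = H/2
pairs = [(X, Y), (Y, X), (H, H / 2)]
c = sum(u @ v for u, v in pairs)          # the Casimir operator of the standard rep

assert np.allclose(c, 1.5 * np.eye(2))    # the scalar dim g / dim V = 3/2
for Z in (X, Y, H):
    assert np.allclose(c @ Z - Z @ c, 0)  # c commutes with the action
```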
This generalizes by the following.
Lemma 6. Suppose ρ is irreducible. Then cρ is the scalar dim g / dim V.
Proof. cρ is a scalar by Schur's lemma, so it remains to compute the trace:
tr cρ = tr(∑i ρ(Xi)ρ(Yi)) = ∑i β(Xi, Yi) = dim g. □
Example 1.2. Consider the action of SL2(R) on the upper half-plane H by
( a b ; c d ) · z = (az + b)/(cz + d).
Then it acts on functions on H by
Lg f(z) = f(g−1 · z)
for any g ∈ SL2(R). We get an action of sl2(R) on C∞(H), the smooth real functions:
X.f = (d/dt)|t=0 f(exp(−tX) · z).
For the standard basis element X,
exp(−tX) = ( 1 −t ; 0 1 ),  exp(−tX) · z = z − t = (x − t) + iy,
so
(d/dt)|t=0 f(exp(−tX) · z) = (d/dt)|t=0 f(x(t), y(t)) = −∂f/∂x.
Continuing similarly, we conclude that the Casimir element acts by a second order differential operator that commutes with the action of g. So this has to be the Laplacian! One can in fact check by hand that it coincides, up to a constant c, with the hyperbolic Laplacian on H:
c y² ( ∂²/∂x² + ∂²/∂y² ).
⌟
2. Complete reducibility (a special case)
A final goal will be to prove
Weyl's Theorem. For a semisimple Lie algebra, if V is a representation and W ⊂ V a subrepresentation, then there is a subrepresentation U ⊂ V such that V = U ⊕ W.
Let g be a semisimple Lie algebra and ρ : g → gl(V) a faithful representation. Let's consider a special case: when W ⊂ V is an invariant subspace of codimension one, itself irreducible as a representation of g. We show that W has an invariant complement, i.e. that
0 → W → V → V/W → 0
is a split sequence of representations.
Take cρ ∈ End(V). Then cρ(V) ⊂ W: since g = [g, g] acts trivially on the one-dimensional representation V/W, we have g·V ⊂ W, and cρ is a sum of products of elements of ρ(g). Look at cρ|W. Since W is irreducible, cρ|W is a scalar by Schur's lemma, and it is nonzero because tr(cρ) = dim g ≠ 0 and the trace is concentrated on W. Hence ker(cρ) ∩ W = {0}, and ker(cρ) is the desired invariant complement of dimension one.
When W is not irreducible but still of codimension one, we do an easy induction on dim W. If W is not of codimension one, consider Hom(V, W) as a g-module. It contains the submodule 𝒱 of maps whose restriction to W is a scalar, and inside it, with codimension one, the submodule 𝒲 of maps whose restriction to W is zero. By the codimension-one case, 𝒲 has an invariant complement in 𝒱; let f be a generator of that complement, scaled so that f|W = idW. Then ker(f) is the desired complement of W.
3. Jordan decomposition
We know from linear algebra that the Jordan decomposition gives a basis independent
decomposition
X = Xs + Xn
where Xn is nilpotent and Xs is diagonalizable (the minimal polynomial has distinct roots).
Moreover, Xs and Xn are polynomials in X. To have an analogue of this decomposition in an arbitrary Lie algebra, we fix a faithful representation ρ : g → gl(V), and then we have
ρ(X) = ρ(X)s + ρ(X)n.
But then the questions are: do the two right-hand terms have to lie in ρ(g)? Do they depend on the representation? The answer is that all is fine if g is semisimple.
Example 3.1. For g = gl(1),
ρ1 : g → gl2 via t ↦ ( t 0 ; 0 t )
has semisimple image,
ρ2 : g → gl2 via t ↦ ( 0 t ; 0 0 )
has nilpotent image, and t ↦ ( t t ; 0 0 ) has image of neither type.
⌟
We will show that if g is semisimple, then
(1) Xs, Xn are elements of g;
(2) if ρ : g → gl(V′) is a homomorphism, then ρ(X) = ρ(Xs) + ρ(Xn) is the Jordan decomposition in gl(V′).
Remark. If g is semisimple, we know that ad : g → gl(g) is injective, so
ad(X) = ad(Xs) + ad(Xn)
gives what is called the abstract (or absolute, or even universal) Jordan decomposition in g.
Example 3.2. If g = son(C), then the semisimple and nilpotent parts of any element of son(C) again lie in son(C).
⌟
Proof of the first assertion. We can see that g ⊂ sl(V ) as g = [g, g]. Also Xs
and Xn are in sl(V ) by considering their trace. We shall think of V as a g-module. Then
for every submodule, W , we define
SW = {Y ∈ gl(V ) ∶ Y (W ) ⊂ W and tr Y ∣W = 0}.
Thus g lies in all the SW, and so do Xs and Xn, as they are polynomials in X. We also define
M = {A ∈ gl(V) : [A, g] ⊂ g}.
Note that if A ∈ M then As, An ∈ M. Our claim is that
g = M ∩ ( ⋂_{W irred.} SW ) =: g′.
Note that g′ is already a subalgebra of gl(V ) and g is an ideal of g′ . By the adjoint-action
of g on g′ , we have that g′ is a g-module and g is a submodule hence
g′ = g ⊕ U
such that [g, U] = 0. We want to show that U = {0}. Take Y ∈ U; we will show that Y|W = 0 for every irreducible g-submodule W. Indeed Y commutes with g on W, so by Schur's lemma Y|W is a scalar; but at the same time it is traceless (as Y ∈ SW), hence zero. Since V is a direct sum of irreducible submodules, Y = 0, proving the claim. □
Proof of the second assertion. Let ρ : g → g′ ⊂ gl(V′) be a representation of g ⊂ gl(V). Since ad Xs is semisimple, g is a direct sum of eigenspaces of ad Xs; applying ρ, g′ is a direct sum of eigenspaces of ad ρ(Xs), and one deduces that ρ(Xs) is semisimple in gl(V′). ρ(Xn) is obviously nilpotent, and the two commute and sum to ρ(X). Now we use the uniqueness of the Jordan decomposition in gl(V′). □
4. Representations of sl2 (C)
Recall our notation for the standard basis:
sl2 (C) = ⟨X, Y, H⟩.
Say V is a g-module under ρ : g → gl(V). H is semisimple in the standard representation, so ρ(H) is also semisimple on V, and therefore
V = ⊕λ Vλ,  Vλ = {v ∈ V : Hv = λv}.
Definition 5. λ is called a weight of V if Vλ ≠ {0}.
Lemma 7. If λ is a weight and v ∈ Vλ , then
Xv ∈ Vλ+2 , Y v ∈ Vλ−2 .
Proof. This is just an easy computation, for instance:
H.(X.v) = [H, X]v + X.Hv = 2Xv + X(λv) = (λ + 2)Xv.
Now suppose V is finite-dimensional and irreducible. There exists a weight λ such that Vλ+2 = {0}; then any nonzero v0 ∈ Vλ is killed by X, and v0 is called a maximal vector of weight λ. We use the notation
v−1 = 0,  v0 as defined,  vℓ = (1/ℓ!) Y^ℓ v0.
Then we have
(4.1) Hvℓ = (λ − 2ℓ)vℓ,
(4.2) Yvℓ = (ℓ + 1)vℓ+1,
(4.3) Xvℓ = (λ − ℓ + 1)vℓ−1.
In fact v` ∈ Vλ−2` so at some point vm+1 = 0. So {v0 , ⋯, vm } is a set of linearly independent
vectors spanning a submodule of dimension m + 1 and by irreducibility of V this is going
to be all of V . When ` = m + 1, we have
Xvm+1 = (λ − m)vm = 0
from 4.3 so λ = m. Thus v0 ∈ Vm and the weights are
{m, m − 2, ⋯, −m}
and all V` are one-dimensional for ` ∈ {m, m − 2, ⋯, −m}. The integer m is called the highest
weight as it is the weight of the maximal vector, v0 (i.e. the one with the highest weight).
Let
V(m) := ⟨v0, ⋯, vm⟩
with the action of sl2(C) determined by (4.1)–(4.3). We have proved:
Proposition 2. Every irreducible sl2 -module is isomorphic to V (m) .
Example 4.4. The standard representation is a V(1) and the adjoint representation is a V(2).
⌟
Remark. Observe that every finite-dimensional sl2-module is now just a direct sum of irreducible ones, and the number of summands is
dim V0 + dim V1,
the sum of the dimensions of the 0- and 1-eigenspaces of H: each irreducible summand V(m) contributes exactly one weight equal to 0 or 1, according to the parity of m.
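Formulas (4.1)–(4.3) with λ = m define V(m) explicitly; a numpy sketch (the helper irrep is an ad hoc name) builds the corresponding matrices and checks that they satisfy the sl2 commutation relations:

```python
import numpy as np

def irrep(m):
    # matrices of H, X, Y on V^(m) in the basis v_0, ..., v_m, following
    # (4.1) H v_l = (m-2l) v_l, (4.2) Y v_l = (l+1) v_{l+1}, (4.3) X v_l = (m-l+1) v_{l-1}
    d = m + 1
    Hm = np.diag([float(m - 2 * l) for l in range(d)])
    Xm = np.zeros((d, d))
    Ym = np.zeros((d, d))
    for l in range(d - 1):
        Ym[l + 1, l] = l + 1      # Y v_l = (l+1) v_{l+1}
        Xm[l, l + 1] = m - l      # X v_{l+1} = (m-l) v_l
    return Hm, Xm, Ym

def br(A, B):
    return A @ B - B @ A

for m in range(5):
    Hm, Xm, Ym = irrep(m)
    assert np.allclose(br(Hm, Xm), 2 * Xm)
    assert np.allclose(br(Hm, Ym), -2 * Ym)
    assert np.allclose(br(Xm, Ym), Hm)
```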
CHAPTER 3
Root space decomposition
If g is a semisimple Lie algebra, then g is a g-module via the adjoint action. If every element of g were ad-nilpotent, we already know that g would be nilpotent, which is impossible for a nonzero semisimple algebra. So there is some element X ∈ g such that Xs ≠ 0, and Xs ∈ g by the abstract Jordan decomposition. So it makes sense to talk about subalgebras consisting of semisimple elements.
1. Toral algebras
Definition 6. A toral subalgebra h in g is a subalgebra that consists of semisimple elements
only.
Example 1.1. In sl2 the diagonal elements form a maximal toral subalgebra. The subalgebra of elements ( 0 x ; x 0 ) is conjugate to the former one.
⌟
Proposition 3. A toral subalgebra h has to be abelian.
Proof. For any X ∈ h, we will show that ad X has only the eigenvalue 0 on h. Let Y ∈ h be an eigenvector:
[X, Y] = aY.
Diagonalize ad Y. Its image is spanned by eigenvectors of ad Y with nonzero eigenvalues, so [Y, X] has to be a linear combination of such eigenvectors. But from the above, [Y, X] = −aY, and Y lies in the 0-eigenspace of ad Y (since [Y, Y] = 0). Hence a = 0. □
2. The decomposition
Thus when working with a semisimple g, we can fix a maximal toral subalgebra h and
simultaneously diagonalize h. Let
gα = {X ∈ g : [h, X] = α(h)X, ∀h ∈ h}
have a decomposition
g = Cg (h) ⊕ ( ⊕ gα ) .
α≠0
Φ = {α ∈ h∗ ∖ {0} : gα ≠ {0}}
is called the set of roots of g, and each gα (α ∈ Φ) is called a root space.
Example 2.1. For sl3 ⊂ gl3 there is a basis consisting of
h1 = diag(1, −1, 0),  h2 = diag(0, 1, −1),
and the eij for all i ≠ j. The maximal toral subalgebra is spanned by h1 and h2.

α    gα      α(h1)  α(h2)
α1   ⟨e12⟩    2     −1
α2   ⟨e13⟩    1      1
α3   ⟨e21⟩   −2      1
α4   ⟨e23⟩   −1      2
α5   ⟨e31⟩   −1     −1
α6   ⟨e32⟩    1     −2

As we see, α5 = −α2 and α3 = −α1, and finally α1 + α4 = α2.
⌟
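The table can be verified directly in numpy (a sketch; e(i, j) is an ad hoc helper producing the matrix unit eij):

```python
import numpy as np

h1 = np.diag([1., -1., 0.])
h2 = np.diag([0., 1., -1.])

def e(i, j):
    M = np.zeros((3, 3))
    M[i - 1, j - 1] = 1.0
    return M

# (i, j) -> (alpha(h1), alpha(h2)) as in the table
roots = {(1, 2): (2, -1), (1, 3): (1, 1), (2, 1): (-2, 1),
         (2, 3): (-1, 2), (3, 1): (-1, -1), (3, 2): (1, -2)}

for (i, j), (a1, a2) in roots.items():
    Eij = e(i, j)
    assert np.allclose(h1 @ Eij - Eij @ h1, a1 * Eij)
    assert np.allclose(h2 @ Eij - Eij @ h2, a2 * Eij)
```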
Lemma 8. If α, β ∈ h∗
(1) [gα , gβ ] ⊆ gα+β .
(2) If X ∈ gα and α ≠ 0 then ad X is nilpotent.
(3) If α + β ≠ 0 then κ(gα , gβ ) = 0.
Proof. First assertion follows from an easy computation:
[h, [X, Y ]] = [[h, X], Y ] + [X, [h, Y ]] = α(h)[X, Y ] + β(h)[X, Y ].
For the second part, let Y ∈ gβ with β ∈ Φ ∪ {0}. Then (ad X)(Y) ∈ gβ+α, and more generally (ad X)^k(Y) ∈ gβ+kα. Since there are only finitely many roots, (ad X)^k = 0 for large k, so ad X is nilpotent.
Finally for the last claim, let X ∈ gα and Y ∈ gβ , and pick h ∈ h such that (α + β)(h) ≠ 0.
Then
α(h)κ(X, Y ) = κ([h, X], Y ) = −κ([X, h], Y ) = −κ(X, [h, Y ]) = −β(h)κ(X, Y ).
Therefore κ(X, Y ) = 0.
Example 2.2. g = sl2 = h ⊕ gα ⊕ g−α, where h is spanned by H and is the maximal toral subalgebra. The root α : ⟨H⟩ → C maps H ↦ 2.
⌟
Proposition 4. Cg (h) = h.
Proof. Let C := Cg(h).
Step 1. C contains all semisimple and nilpotent parts of its elements: given X ∈ C, it
commutes with h therefore adg X maps h to 0. Then (adg X)s and (adg X)n also map h to
0 (as they are polynomials in adg X without constant term).
Step 2. All semisimple elements in C have to lie in h: This is because the sum of commuting
semisimple elements are semisimple and h is maximal.
Step 3. κ|C is non-degenerate. For z ∈ C we already have κ(z, gα) = 0 for any α ≠ 0, by part (3) of the previous lemma. So if κ(z, C) = 0, then κ(z, g) = 0, i.e. z ∈ Sκ = {0}, since κ is non-degenerate on the semisimple algebra g.
Step 4. κ|h is non-degenerate. Suppose z ∈ h satisfies κ(z, h) = 0. If y ∈ C is nilpotent, then ad y is nilpotent and commutes with ad z, so ad z ad y is nilpotent and
tr(ad z ad y) = 0.
By Steps 1 and 2, every element of C is the sum of a semisimple part lying in h and a nilpotent part in C, hence κ(z, C) = 0, which contradicts Step 3 unless z = 0.
Step 5. C is nilpotent. For this it suffices to show that for any X ∈ C, adC X is nilpotent. If
X = Xs +Xn then Xs ∈ h so adC (Xs ) = 0. Xn is nilpotent so adC Xn is nilpotent, completing
the proof.
Step 6. h ∩ [C, C] = {0}. This is clear since
κ(h, [C, C]) = κ([h, C], C) = 0
but we know that κ is non-degenerate on h.
Step 7. C is abelian. We need a small lemma.
Lemma 9. If C is a nilpotent Lie algebra, and I is a nonzero ideal then I ∩ Z(C) ≠ {0}.
Proof. I is a C-module under the adjoint action, which is by nilpotent operators, so by Engel's theorem there is a nonzero vector in I killed by all of C; such a vector lies in I ∩ Z(C). □
From this we conclude that if [C, C] ≠ 0, then [C, C] ∩ Z(C) ≠ {0}. Let z ≠ 0 be an element of this intersection. It cannot be semisimple, as it would then lie in h, and we have shown h ∩ [C, C] = {0}. So z = s + n with the nilpotent part nonzero: n ≠ 0, and n is again an element of [C, C] ∩ Z(C). Then ad n ad y is nilpotent for every y ∈ C, so κ(n, C) = 0, a contradiction as κ|C is non-degenerate. Hence [C, C] = 0.
Step 8. We finally show that C = h. If C ≠ h, then by Step 2 there is z = s + n ∈ C which is not semisimple, so 0 ≠ n ∈ C. But then, as in Step 4, κ(n, C) = 0, implying n = 0 by Step 3, a contradiction. □
Corollary 7. κ∣h is non-degenerate.
This is going to be a very useful fact in the next chapter. We will be using this to identify
h and h∗ : for any α ∈ Φ, we denote by tα the element of h satisfying
κ(tα , h) = α(h),
∀h ∈ h.
3. Properties of the root spaces
Our goal is to find a real Euclidean vector space where the roots live, and to get information about their lengths and the angles between them in that space.
Example 3.1. In the example of sl3 ⊂ gl3 above, let α = α1 and β = α4. Then α and β span a configuration (a picture of two vectors of equal length at an angle of 120°) generating all the αi's.
generating all αi ’s. In fact with respect to the induced inner product (., .) on h∗ defined by
(λ, µ) = κ(tλ , tµ )
we have (α, β) = κ(tα , tβ ) = β(tα ) = α(tβ ). So let us find tβ = ah1 + ah2 where h1 and h2
are the elements of the basis of h ⊂ sl3 as before. In this concrete example we may find a, b
from computation of κ(tβ , h1 ) = β(h1 ) and κ(tβ , h2 ) = β(h1 ). It turns out that
1
1
1
1
tα = h1 , tβ = h2 , (α, β) = − , (α, α) = (β, β) = .
6
6
6
3
This justifies the picture above; we think of αi ’s as vectors in the R-span of them with
angles given by (., .).
⌟
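This computation of tα, tβ and the inner products can be replayed numerically (a sketch; the helpers e, coords, ad and kappa are ad hoc, and the Killing form is computed from the adjoint representation on all of sl3):

```python
import numpy as np

h1 = np.diag([1., -1., 0.])
h2 = np.diag([0., 1., -1.])

def e(i, j):
    M = np.zeros((3, 3))
    M[i - 1, j - 1] = 1.0
    return M

basis = [h1, h2, e(1, 2), e(1, 3), e(2, 1), e(2, 3), e(3, 1), e(3, 2)]

def coords(M):
    # coordinates of a traceless M in the basis above; the diagonal part
    # is a*h1 + b*h2 with a = M[0,0] and b = -M[2,2]
    return np.array([M[0, 0], -M[2, 2], M[0, 1], M[0, 2],
                     M[1, 0], M[1, 2], M[2, 0], M[2, 1]])

def ad(Z):
    return np.column_stack([coords(Z @ b - b @ Z) for b in basis])

def kappa(u, v):
    return np.trace(ad(u) @ ad(v))

# solve kappa(t, h_i) = alpha(h_i) for t = a*h1 + b*h2
G = np.array([[kappa(h1, h1), kappa(h2, h1)],
              [kappa(h1, h2), kappa(h2, h2)]])
a, b = np.linalg.solve(G, np.array([2., -1.]))   # alpha = alpha_1
t_alpha = a * h1 + b * h2
a, b = np.linalg.solve(G, np.array([-1., 2.]))   # beta = alpha_4
t_beta = a * h1 + b * h2

assert np.allclose(t_alpha, h1 / 6)
assert np.allclose(t_beta, h2 / 6)
assert np.isclose(kappa(t_alpha, t_beta), -1 / 6)
assert np.isclose(kappa(t_alpha, t_alpha), 1 / 3)
```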
Proposition 5.
(1) Φ spans h∗.
(2) If α ∈ Φ then −α ∈ Φ.
(3) For x ∈ gα and y ∈ g−α, we have [x, y] = κ(x, y)tα.
(4) If α ∈ Φ, then [gα, g−α] is one-dimensional, with basis tα.
(5) α(tα) = κ(tα, tα) ≠ 0 for any α ∈ Φ.
(6) For any α ∈ Φ and nonzero x ∈ gα, there is y ∈ g−α such that ⟨x, y, [x, y]⟩ is a subalgebra isomorphic to sl(2, F). The element hα := [x, y] ∈ [gα, g−α] ⊂ h is given by
hα = 2tα / κ(tα, tα),
and therefore hα = −h−α.
Proof.
(1) If α(h) = 0 for all α ∈ Φ, then h commutes with every root space and with h, so h ∈ Z(g) = {0}, i.e. h = 0.
(2) If −α ∉ Φ, then g−α = 0 and hence κ(gα, gβ) = 0 for all β ∈ h∗. Thus κ(gα, g) = 0, contradicting the non-degeneracy of κ.
(3) For any h ∈ h we have
κ(h, [x, y]) = κ([h, x], y) = α(h)κ(x, y) = κ(tα, h)κ(x, y) = κ(h, κ(x, y)tα),
hence by non-degeneracy of κ|h we get the result.
(4) Given α ∈ Φ, if κ(gα, g−α) = 0 then κ is degenerate on the whole space. Therefore [gα, g−α] is nonzero, and by (3) it is spanned by tα.
(5) Suppose α(tα) = 0. By the previous parts, tα together with suitable x ∈ gα, y ∈ g−α satisfying [x, y] = tα form a three-dimensional solvable algebra S. Thus ad s is nilpotent for all s ∈ [S, S]; so ad tα is both semisimple and nilpotent, and thus tα ∈ Z(g) = 0, contrary to the choice of tα.
(6) This follows by easily computing the commutators of the three elements. □
We have seen that g = h ⊕ ⊕α∈Φ gα is an sl2-module via the adjoint representation, for each embedding
sl2 ↪ g
via X ↦ xα, Y ↦ yα and H ↦ hα. There is the submodule
M = h ⊕ ⊕cα∈Φ gcα,
on which xα acts by xα · gcα ⊆ g(c+1)α, etc. The gcα are weight spaces of M with weight cα(hα) = 2c, and by sl2-theory these weights are integers with one-dimensional weight spaces in each irreducible summand. One checks that
M = ⟨hα, xα, yα⟩ ⊕ (the rest of h),
so each gcα is one-dimensional, and in particular the only multiples of α that can be roots are ±α.
Now let’s fix α and let β ≠ −α. Then
K = ⊕ gβ+iα
i∈Z
is a sum over weight spaces of our sl2 -module with weights β(hα ) + 2i ∈ Z. Then K is an
irreducible submodule and say has highest weight β(hα ) + 2q and lowest weight β(hα ) − 2r.
The corresponding roots in g are
β + qα, ⋯, β − rα.
Therefore β(hα ) = r − q and the above is called the α-string through β.
This also provides an update on Lemma 8:
g = h ⊕ ⊕α∈Φ gα,
where the root-space summands are all one-dimensional; the only multiples of a root α that occur are ±α; and for β ≠ −α we have
[gα, gβ] = gα+β if α + β is a root, and [gα, gβ] = 0 otherwise.
Now we can prove as well that (α, β) ∈ Q for all α, β ∈ Φ. Let α1, ⋯, αℓ ⊂ Φ be a basis for h∗. For any β ∈ Φ, write β = ∑_{i=1}^ℓ ci αi. Then
(2/(αj, αj)) (β, αj) = ∑i ci (2/(αj, αj)) (αi, αj),
i.e. we have the system of linear equations with Z-coefficients
β(hαj) = ∑i ci αi(hαj),  ∀j,
from which the ci are rational, completing the proof of our claim: on the Q-span EQ of the roots, (·, ·) is a Q-valued non-degenerate bilinear form.
Also for any λ, µ ∈ h∗ we have
(λ, µ) = tr(ad tλ ad tµ ) = ∑α∈Φ α(tλ )α(tµ ) = ∑α∈Φ (λ, α)(µ, α).
In particular (λ, λ) = ∑α∈Φ (λ, α)². Thus (·, ·) is a positive definite bilinear pairing on EQ .
The fractions 2(α, β)/(β, β) turn out to play an important role in the upcoming chapter. One thing to remember about them is that they are always integers.
CHAPTER 4
Root systems
1. Basic definitions
Pick a basis for h∗ from Φ, say α1 , ⋯, αℓ . Take the Q-span of α1 , ⋯, αℓ and tensor it with R to get an honest real vector space E of the same dimension as h, with an inner product (·, ·) on it.
Recall that hα ∈ [gα , g−α ] = ⟨tα ⟩. We have
β(hα ) = 2(β, α)/(α, α) ∈ Z for all α, β ∈ Φ,
from the sl2 -module ⊕i∈Z gβ+iα . Thus (β, α) ∈ Q for any α, β ∈ Φ. Then also (·, ·) is positive-definite:
(λ, λ) = ∑α∈Φ (λ, α)² > 0,
with each (λ, α) ∈ Q. So Φ corresponds to a finite collection of vectors in a real Euclidean vector space (i.e. a real vector space with an inner product on it).
Our next observation is that if β − rα, ⋯, β + qα are the roots corresponding to the weights β(hα ) + 2i in ⊕i gβ+iα , then β − β(hα )·α is also in Φ.
Definition 7. Let E be a real Euclidean vector space. A finite collection Φ of vectors in E is called a (reduced) root system if
R1 Φ is finite, spans E and 0 ∉ Φ.
R2 If α ∈ Φ, then Φ contains ±α and no other multiples of α.
R3 For any α, β ∈ Φ, the reflection σα (β) ∈ Φ.
R4 For any α, β ∈ Φ, we have 2(β, α)/(α, α) ∈ Z.
The reflection σα is the reflection about the hyperplane Pα orthogonal to α. So it fixes Pα and maps α ↦ −α:
x ↦ x − (2(x, α)/(α, α)) α.
Notation: ⟨β, α⟩ := 2(β, α)/(α, α).
Remark. The axioms (R1), (R3) and (R4) imply that the only multiples of α ∈ Φ that can also lie in Φ are ±α, ±2α and ±α/2 [HW #3]. In fact in some sources the axioms of a root system are confined to (R1), (R3) and (R4); if the root system also satisfies (R2) it is called a reduced root system. Non-reduced root systems arise over R. Over C non-reduced systems cannot arise from Lie algebras, so we will stick to our nomenclature.
So by the results of the previous chapter, the set of roots of a semisimple Lie algebra over C is a root system.
Example 1.1. Let ℓ = dim E = dim h be the rank of the Euclidean space.
(1) ℓ = 1: only A1 .
(2) ℓ = 2: A1 × A1 , A2 , B2 and G2 .
(Pictures omitted; they are copied from Humphreys’ book.) ⌟
(Φ, E) is isomorphic to (Φ′ , E ′ ) if and only if there is a linear isomorphism A : E → E ′ mapping Φ to Φ′ such that
⟨A(β), A(α)⟩ = ⟨β, α⟩, ∀α, β ∈ Φ.
The dual of Φ is the collection of the co-roots
α∨ := 2α/(α, α)
and is denoted by Φ∨ . One can see that Φ∨ is a root system.
Note that
⟨β, α⟩⟨α, β⟩ = 4 cos²(θ)
where θ is the angle between α and β; this product lies in {0, 1, 2, 3} provided that α ≠ ±β. Also we know that ⟨β, α⟩ and ⟨α, β⟩ are integers of the same sign, so there are not too many possibilities:
θ      ⟨α, β⟩   ⟨β, α⟩   ∥β∥²/∥α∥²   label
π/2     0        0       any         A1 × A1
π/3     1        1       1           A2
2π/3   −1       −1       1           A2
π/4     1        2       2           B2 , C2
3π/4   −1       −2       2           B2 , C2
π/6     1        3       3           G2
5π/6   −1       −3       3           G2
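As a quick sanity check (mine, not from the lecture), the table can be verified numerically from the identity ⟨β, α⟩⟨α, β⟩ = 4 cos²(θ):

```python
import math

# Each row: (theta, <a,b>, <b,a>, ||b||^2/||a||^2 or None for "any").
# The identities <a,b><b,a> = 4 cos^2(theta) and, for nonzero entries,
# <b,a>/<a,b> = (b,b)/(a,a) pin down exactly the possibilities listed above.
rows = [
    (math.pi / 2, 0, 0, None),        # A1 x A1: orthogonal roots, any ratio
    (math.pi / 3, 1, 1, 1),           # A2
    (2 * math.pi / 3, -1, -1, 1),     # A2
    (math.pi / 4, 1, 2, 2),           # B2, C2
    (3 * math.pi / 4, -1, -2, 2),     # B2, C2
    (math.pi / 6, 1, 3, 3),           # G2
    (5 * math.pi / 6, -1, -3, 3),     # G2
]

for theta, ab, ba, ratio in rows:
    # <a,b><b,a> = 4 cos^2(theta)
    assert abs(ab * ba - 4 * math.cos(theta) ** 2) < 1e-12
    if ab != 0:
        # <b,a>/<a,b> equals the squared length ratio
        assert ba / ab == ratio
```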
2. Coxeter graphs and Dynkin diagrams
In what follows we will have to find a good way of choosing α, β, and more generally a ‘good’ basis {α1 , ⋯, αℓ } for E. Once this task is done, with outcome ∆ ⊂ Φ, we will call the elements of ∆ simple roots and use them to draw more intrinsic graphs called Coxeter graphs: put a vertex for every simple root, and connect any α and β by ⟨α, β⟩⟨β, α⟩ edges. So the possible number of edges varies from zero to three. We will see that it is ‘almost’ possible to reconstruct Φ from the corresponding Coxeter graph, and that from the Euclidean geometry of E alone there are only a few possible Coxeter graphs: those labeled Aℓ , and Bℓ and Cℓ (which give the same graph), Dℓ for any ℓ ≥ 4, and then the exceptional cases E6 , E7 , E8 , F4 and finally G2 .
In a root system at most two root lengths can occur. If there are two different lengths then we have a multiple edge, and we put an arrow on it pointing to the shorter root. The result is called a Dynkin diagram. This distinguishes the B’s from the C’s:
These will classify simple algebras. The connected components of the Dynkin diagram correspond to the simple ideals of g.
The Classification Theorem. Connected Dynkin diagrams are in one-to-one correspondence with simple Lie algebras. Modulo the definition of ∆, we can construct a unique Dynkin diagram for a simple Lie algebra (so the diagram is independent of h and ∆; in other words, any two simple Lie algebras with the same Dynkin diagram are isomorphic).
2.1. Prerequisites to classification of Dynkin diagrams. First we state a useful
Lemma 10. If α, β ∈ Φ are non-proportional roots, then: if (α, β) > 0 then α − β is a root, and if (α, β) < 0 then α + β is a root.
Proof. Say (α, β) > 0, which is equivalent to ⟨α, β⟩ > 0. In that case ⟨β, α⟩⟨α, β⟩ is either 1, 2 or 3, so one of the two factors is 1. Suppose ⟨β, α⟩ = 1; then
σα (β) = β − ⟨β, α⟩α = β − α ∈ Φ.
(If instead ⟨α, β⟩ = 1, apply σβ ; and α − β ∈ Φ iff β − α ∈ Φ.)
Corollary 8. If {β + iα} ⊂ Φ is the α-string through β, then
(1) σα reverses the string.
(2) Every vector β + iα for −r ≤ i ≤ q is a root.
(3) r − q = ⟨β, α⟩, and in particular no root string has length greater than 4.
Definition 8. ∆ ⊂ Φ is called a base if it spans E as a vector space, and every element γ ∈ Φ can be written as
γ = ∑α∈∆ kα α
with all kα either non-negative integers or non-positive ones.
The labels we chose without explanation in the above diagrams form bases. If α, β ∈ ∆ are distinct and (α, β) > 0, then β − α ∈ Φ by Lemma 10, contradicting the definition of a base. This proves the following
Lemma 11. If ∆ ⊂ Φ is a base, then (α, β) ≤ 0 for all distinct α, β ∈ ∆.
Suppose we choose a base ∆ ⊂ Φ and put a partial ordering on E by declaring
γ ≽ 0 if γ is a combination of elements of ∆ with non-negative coefficients,
and µ ≽ ν if µ − ν ≽ 0. To show that a base exists, pick γ ∉ Pα for every α, and consider the half-space {x : (x, γ) > 0}. Call a root δ in that half-space decomposable if there are roots α, β in the same half-space such that δ = α + β, and indecomposable otherwise. Then one can see that the set of indecomposable roots in this half-space forms a base.
Definition 9. The Cartan matrix of a root system Φ is defined to be
(⟨αi , αj ⟩)i,j
where {α1 , ⋯, αℓ } is a base for Φ.
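As an illustration (mine, not from the lecture), the Cartan matrix can be computed directly from a concrete choice of simple roots. Below, the simple roots of B2 are taken to be α1 = e1 − e2 and α2 = e2 (a standard choice, with this ordering), using the convention Cij = ⟨αi , αj ⟩ from the definition above:

```python
# Sketch: compute the Cartan matrix (<a_i, a_j>)_{ij} from explicit
# simple roots, using <b, a> = 2(b, a)/(a, a).
def pairing(b, a):
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    return 2 * dot(b, a) // dot(a, a)   # always an integer for roots

def cartan_matrix(simple_roots):
    return [[pairing(a_i, a_j) for a_j in simple_roots]
            for a_i in simple_roots]

# B2 with a1 = e1 - e2 (long), a2 = e2 (short) -- an assumed standard choice.
b2 = [(1, -1), (0, 1)]
print(cartan_matrix(b2))   # [[2, -2], [-1, 2]]
```

With the other ordering of the two simple roots one gets the transpose, which is why the arrow on the double bond is needed to tell B2 and C2 apart.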
There are some facts about independence of choices. In particular all bases are obtained by
the method we used. E ∖ ⋃α∈Φ Pα is a union of connected components called Weyl chambers.
On each Weyl chamber the sign of (x, α) (for x in the chamber) does not depend on x for
any α. So there is a one-to-one correspondence
{ bases } ↔ { Weyl chambers }.
2.2. The Weyl group.
Definition 10. The Weyl group W of Φ is the group generated by all the reflections σα (α ∈ Φ).
Every element of W gives a permutation of Φ (so W is a finite subgroup of S∣Φ∣ ). As an aside, each Weyl group is a Coxeter group, i.e. a group of the form
⟨ri : (ri rj )mij = e⟩, mij ∈ Z, mii = 1, mij ≥ 2 for i ≠ j.
The group Aut(Φ) is the group of all linear operators A : E → E preserving Φ. Note that under any such operator the pairing ⟨·, ·⟩ is automatically preserved, since
Aσα A−1 = σA(α) .
Then W ⊂ Aut(Φ). A next observation is that W is normal in Aut(Φ).
Theorem 2.1. Let ∆ be a base of Φ. Then
(1) If γ ∈ E and γ ∉ Pα for all α ∈ Φ, then there is w ∈ W such that
(w(γ), α) > 0 for all α ∈ ∆
(i.e. W acts transitively on the Weyl chambers).
(2) Let ∆′ be another base. Then there is w ∈ W such that w(∆′ ) = ∆.
(3) For α ∈ Φ, there is w ∈ W such that w(α) ∈ ∆.
(4) W = ⟨σα : α ∈ ∆⟩.
(5) If w(∆) = ∆ then w = 1.
Proof.
(1) Take δ = ½ ∑α∈Φ+ α, where
Φ+ = {α ∈ Φ : α ≻ 0}.
Given γ ∈ E, choose w ∈ W such that (w(γ), δ) is maximal over w ∈ W . Then for every α ∈ ∆ we have
(σα wγ, δ) ≤ (wγ, δ).
But σα permutes Φ+ ∖ {α} and sends α to −α, so σα (δ) = δ − α and (σα wγ, δ) = (wγ, δ) − (wγ, α). Thus (wγ, α) ≥ 0, and in fact > 0 since γ ∉ Pα , proving the claim.
(2) follows from (1).
(3) For α ∈ Φ, extend α to a base ∆′ and use (2) to find w ∈ W sending ∆′ to ∆.
(4) W ′ = ⟨σα : α ∈ ∆⟩ already acts transitively on the Weyl chambers by the proof of part (1). So for β ∈ Φ there is w ∈ W ′ such that α := wβ ∈ ∆. Then σβ = w−1 σα w ∈ W ′ .
(5) Define the length ℓ(w) as the minimal length t of an expression
w = σα1 ⋯ σαt
using elements of ∆, and let n(w) denote the number of positive roots that w maps to negative roots. Then the first claim is:
Lemma 12. ℓ(w) coincides with n(w), the number of positive roots that w maps to negative roots.
The key fact is that if w = σα1 ⋯ σαt and
(σα1 ⋯ σαt−1 )(αt ) ≺ 0,
then the expression can be shortened, for some s < t, to
w = σα1 ⋯ σ̂αs ⋯ σαt−1 .
So the lemma follows by induction on ℓ(w). If ℓ(w) = 0 then w = 1 and n(w) = 0. If w = σα1 ⋯ σαt is a reduced expression then w(αt ) ≺ 0 (otherwise the key fact would shorten it). Now
n(wσαt ) = n(w) − 1,
because σαt sends αt to −αt but permutes all the other positive roots; by induction applied to wσαt = σα1 ⋯ σαt−1 we are done.
Now (5) follows: if w(∆) = ∆ then w permutes Φ+ , so n(w) = 0, hence ℓ(w) = 0 and w = 1.
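Lemma 12 can be checked by hand in small cases. The following sketch (my own, not from the lecture) generates the Weyl group of A2 from the two simple reflections, writing roots in simple-root coordinates so that all arithmetic is over Z, and compares word length with the number of positive roots sent to negative ones:

```python
# Verify l(w) = n(w) for W(A2).  Roots are written in coordinates with
# respect to the simple roots, so s_i acts by s_i(v) = v - <v, a_i> a_i.
C = [[2, -1], [-1, 2]]        # Cartan matrix of A2, C[i][j] = <a_i, a_j>

def reflect(i, v):
    # <v, a_i> = sum_j v_j C[j][i]
    c = sum(v[j] * C[j][i] for j in range(2))
    w = list(v)
    w[i] -= c
    return tuple(w)

pos = [(1, 0), (0, 1), (1, 1)]           # positive roots of A2

# Represent each w by its tuple of images of the positive roots;
# breadth-first search from the identity gives the minimal word length.
identity = tuple(pos)
length = {identity: 0}
frontier = [identity]
while frontier:
    new = []
    for w in frontier:
        for i in (0, 1):
            u = tuple(reflect(i, v) for v in w)
            if u not in length:
                length[u] = length[w] + 1
                new.append(u)
    frontier = new

assert len(length) == 6                  # |W(A2)| = 6
for w, l in length.items():
    n = sum(1 for v in w if all(c <= 0 for c in v))   # roots sent negative
    assert l == n                        # Lemma 12 for A2
```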
From parts (1) and (5) above, and a little work on the boundary, we have
Corollary 9. The closure of a Weyl chamber is a fundamental domain for the action of W on E.
W preserves Φ, so it preserves ⟨·, ·⟩ and respects lengths, in such a way that we have
Corollary 10. The Dynkin diagram does not depend on ∆.
The Weyl group contains the data of the action of g on itself via conjugation. There is an exact sequence
1 → W → Aut(Φ) → Γ → 1,
and in fact
Aut(Φ) = W ⋊ Γ,
where Γ corresponds to the elements of Aut(g) that are not conjugations.
2.3. Classification of Dynkin diagrams. We proceed by showing that the above are the only possible Dynkin diagrams. In the various steps below we attach a weight ci to each vertex i. Let ui = αi /∥αi ∥ for each simple root αi ; note that (ui , uj ) ≤ −1/2 whenever the vertices i and j are joined.
Step 1. There are no loops: let ci = 1 for every vertex in a simple loop of length ℓ, and ci = 0 otherwise. Then for u = ∑ ci ui we have
(u, u) = ∑ ci ²(ui , ui ) + 2 ∑i<j ci cj (ui , uj ) = ℓ + 2 ∑i<j, i↔j (ui , uj ) ≤ ℓ + 2ℓ(−1/2) = 0,
since the loop has at least ℓ edges. But this is impossible: u ≠ 0 because the ui come from the linearly independent simple roots, so (u, u) > 0.
Now we rule out specific branchings in the graph. Just as above, in all the cases below the weight attached to the problematic vertex is half the sum of the weights of its neighbors.
Step 2. For single bonds, the only possibilities are trees with at most one branch vertex; and if a branching happens, the diagram is of the form Dn , E6 , E7 or E8 . The remaining configurations are ruled out by suitable weightings as in Step 1 (the weighted diagrams, with weights 1, 2, 3, 4, 5, 6 attached to the vertices, are omitted here; see Humphreys).
Step 3. A triple bond cannot have further neighbor vertices. Consider a chain with weights 1, 2, x, where the first two vertices are joined by a single bond and the last two by the triple bond. For u = u1 + 2u2 + xu3 we get
(u, u) = 1 + 4 + x² − 2(1 + √3 x) = (x − √3)²,
so for x = √3 we get (u, u) = 0 with u ≠ 0, a contradiction. So the only possible case with a triple bond is G2 .
Step 4. For double bonds, similar weightings (the diagrams, with weights involving 1, 2, 3 and √2, are omitted here) show that the double bond must sit at the end of a chain, giving Bℓ and Cℓ . So the only possible case with an interior double bond is F4 .
We have already seen that the A, B and C families can be realized by Lie algebras. We will see that the remaining cases introduced previously can actually occur as well.
CHAPTER 5
Reconstructing the Lie algebras from root systems
1. From Dynkin diagrams to root systems
The first fact is that the root system is uniquely determined by the Dynkin diagram.
Proposition 6. If (Φ, E) and (Φ′ , E ′ ) are root systems with corresponding bases ∆ and ∆′ , and π : ∆ → ∆′ is a bijection such that
⟨αi , αj ⟩ = ⟨π(αi ), π(αj )⟩,
then π extends (uniquely) to an isomorphism Φ → Φ′ .
This is easily seen from the Weyl group W : from π we get a homomorphism of the Weyl groups, W ≅ W ′ , implying that Φ = W.∆ is carried to W ′ .∆′ = Φ′ . The constructive way to find the root system is as follows. Define a height for any
β = ∑α∈∆ mα α, with all mα ≥ 0 or all mα ≤ 0,
by ht(β) = ∑ mα . One can in fact always write β = α1 + ⋯ + αs with αi ∈ ∆ such that every partial sum is a root. The algorithm is to build Φ by height, using strings: we start with the roots of height one. Then for any pair αi ≠ αj , the integer r for the αj -string through αi is 0, so the integer q equals −⟨αi , αj ⟩. We repeat this procedure height by height until no new roots appear.
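The height-by-height procedure can be sketched in code (a minimal implementation of my own, using the convention C[i][j] = ⟨αi , αj ⟩ for the Cartan matrix; roots are recorded in simple-root coordinates):

```python
def positive_roots(C):
    """Build the positive roots (in simple-root coordinates) of the root
    system with Cartan matrix C, height by height using alpha-strings."""
    n = len(C)
    simple = [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]
    roots = set(simple)
    frontier = list(simple)
    while frontier:
        new = []
        for beta in frontier:
            for i in range(n):
                # r = how far the string extends downward from beta;
                # all lower-height roots are already known at this point
                r, down = 0, beta
                while True:
                    down = tuple(c - s for c, s in zip(down, simple[i]))
                    if down in roots:
                        r += 1
                    else:
                        break
                # <beta, alpha_i> = sum_j beta_j C[j][i];  q = r - <beta, alpha_i>
                q = r - sum(beta[j] * C[j][i] for j in range(n))
                if q > 0:
                    up = tuple(c + s for c, s in zip(beta, simple[i]))
                    if up not in roots:
                        roots.add(up)
                        new.append(up)
        frontier = new
    return roots

print(len(positive_roots([[2, -1], [-1, 2]])))    # A2: 3
print(len(positive_roots([[2, -1], [-3, 2]])))    # G2: 6
```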
Definition 11. Φ is decomposable if Φ = Φ1 ∪ Φ2 with Φ1 , Φ2 nonempty and α ⊥ β for all α ∈ Φ1 and β ∈ Φ2 .
If ∆ = ∆1 ⊔ ∆2 with each element of ∆1 perpendicular to every element of ∆2 , we have
Φ = W.(∆1 ⊔ ∆2 ) = W1 ∆1 ∪ W2 ∆2 ,
and W = W1 × W2 . This shows that
Lemma 13. Φ is indecomposable if and only if the Dynkin diagram is connected.
Corollary 11. g is simple if and only if the Dynkin diagram is connected.
Proof. Let ∆1 be a connected component of ∆, and let Φ = Φ1 ∪ Φ2 with Φ1 ⊥ Φ2 be the corresponding decomposition. Let M be the span of all hα for α ∈ ∆1 together with the elements of gα and g−α for α ∈ Φ1 . If β ∉ Φ1 then β ∈ Φ2 , and neither α + β nor β − α is a root for α ∈ ∆1 . Thus ad xβ (where xβ is a generator of gβ ) kills xα , and ad xβ (yα ) ∈ gβ−α = 0. We conclude that M is a nontrivial proper ideal.
It is easy to see that if g = ⊕gi and h ⊂ g is a maximal toral subalgebra of g, then hi := h ∩ gi is a maximal toral subalgebra of gi . Also note the terminology for the hα ’s: co-roots!
So we know already that g is generated by h and the gα for all α ∈ Φ. We also know that
[gα , gβ ] = gα+β if α + β ∈ Φ, and 0 otherwise.
The next proposition gives us a standard set of generators for g.
Proposition 7. g is generated by the xα and yα for α ∈ ∆.
Proof. h is generated by the hα = [xα , yα ] for α ∈ ∆. So we need to prove that every gβ , β ∈ Φ, lies in the subalgebra generated by the xα and yα ; this follows by climbing root strings, writing β as a sum of simple roots with every partial sum a root.
Suppose now that we have an isomorphism π : (Φ, E) → (Φ′ , E ′ ) of root systems. A priori π preserves only ⟨·, ·⟩. Rescale so that π becomes an isometry; so we may assume that it preserves (·, ·) as well. Suppose Φ is the root system of g and Φ′ that of g′ , and let h and h′ be the respective maximal toral subalgebras. Then {hα } and {h′α′ } are bases for h and h′ . By the isometry we know tα ↦ t′π(α) , so we may define a map h → h′ via hα ↦ h′π(α) .
The Isomorphism Theorem. Let {xα }α∈∆ be an arbitrary set of nonzero vectors xα ∈ gα , and let yα ∈ g−α be such that [xα , yα ] = hα . Similarly choose x′π(α) in g′ and get the y′π(α) likewise. Then there exists a unique isomorphism of Lie algebras π̃ : g → g′ such that π̃(xα ) = x′π(α) and π̃(yα ) = y′π(α) (and of course π̃(hα ) = h′π(α) ).
Proof. Uniqueness follows from our previous proposition. We may assume that g and g′ are simple. For existence, the idea is to construct a subalgebra D ⊂ g ⊕ g′ such that the projections D → g and D → g′ are isomorphisms. Let x̄α = (xα , x′π(α) ) ∈ g ⊕ g′ , and similarly ȳα and t̄α for α ∈ ∆. Let D be the Lie algebra generated by the x̄α and ȳα for α ∈ ∆. We know that ker π1 = (0, g′ ) and ker π2 = (g, 0); we need to prove that ker πi ∩ D = {0}, and for this it suffices to show that
D ≠ g ⊕ g′ .
Let β be the maximal root of g with respect to the partial order defined by ∆; note that π(β) is then the maximal root of g′ . Consider gβ ⊕ g′π(β) . Let x̄ = (xβ , x′π(β) ) and let M be the subspace of g ⊕ g′ spanned by all
(*) ad ȳα1 ⋯ ad ȳαn (x̄),
where the α’s need not be distinct. We will show that M is stable under D. Any expression like (*) lives in gβ−∑ αi ⊕ g′π(β)−∑ π(αi ) . If we append an ad ȳα to (*) we stay in M by definition. By an inductive argument the ad h̄α stabilize M , and finally from ad x̄α (x̄) = 0 (since β + α is not a root, β being maximal) we see that the ad x̄α stabilize M . Since M meets gβ ⊕ g′π(β) only in the line through x̄, M is a proper D-stable subspace, completing the proof.
Remark. At this point the classification of Lie algebras is complete:
(1) We started with complete reducibility, reducing to understanding the simple ideals.
(2) Using Engel’s theorem and the Killing form we got the decomposition
g = C(h) ⊕ (⊕α∈Φ gα ) = h ⊕ (⊕α∈Φ gα ).
(3) From the classification of sl2 -modules we got that the α’s form a root system.
(4) We picked a set of simple roots ∆ and obtained the classification of Dynkin diagrams. This classifies the root systems.
(5) The Dynkin diagram is independent of the choice of ∆, thanks to the Weyl group action.
(6) The Isomorphism Theorem implies that isomorphic root systems give isomorphic Lie algebras.
(7) We will show that Φ does not depend on the choice of h, by conjugacy properties of the h’s.
(8) One last fact, which we will not prove, is that every Dynkin diagram comes from a root system Φ. We will see that there exists a Lie algebra g with root system Φ, so this implies the existence of Lie algebras from Dynkin diagrams as well [Humphreys, Chapter 12].
2. Automorphisms and conjugacy
Let k be a field and A an algebra over k (note: in particular the Lie bracket gives a Lie algebra the structure of an algebra). A derivation
D : A → A
is a linear mapping such that D(ab) = D(a)b + aD(b).
Example 2.1. Let A = k[x1 , ⋯, xn ] and D = ∂/∂xi . Or let g be a Lie algebra; then ad x is a derivation for any x ∈ g. ⌟
The set of all derivations forms a k-vector space Der A. This is actually a Lie subalgebra of gl(A) via
[D1 , D2 ] = D1 D2 − D2 D1 .
This gives a lot of new interesting ways of constructing Lie algebras, but the point is that all the new interesting ones are going to be highly infinite-dimensional.
Suppose D : A → A is locally nilpotent (given any x, Dk x = 0 for k large). Then
exp(D) = ∑k≥0 (1/k!) Dk ∈ GL(A)
(the sum is finite on each element),
and exp(D) : A → A turns out to be an automorphism of A with inverse exp(−D).
For us the main example is when g is a Lie algebra and x ∈ g: then exp(ad x) ∈ Aut(g) is defined when x is ad-nilpotent, and the subgroup Int(g) of Aut(g) generated by the exp(ad x) for all ad-nilpotent x is called the group of inner automorphisms of g.
Remark. Int(g) is a Lie group with tangent algebra ad g. If g is semisimple then ad g ≅ g. This is the group with trivial center (called the adjoint group corresponding to g). In fact Int g is the identity component of the Lie group Aut(g).
Reconciling exp(ad) with g ⊂ gl(V ): take x ∈ g nilpotent (hence ad-nilpotent); then for any y ∈ g we have
(exp x) y (exp x)−1 = (exp(ad x))(y).
To verify this use
λx : End(V ) → End(V ), A ↦ xA,
and also
ρ−x : End(V ) → End(V ), A ↦ −Ax,
notice that
ad x = λx + ρ−x ,
and exponentiate the right-hand side in End(End(V )), using that λx and ρ−x commute.
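A concrete instance of this identity (my own check, not from the lecture), for the nilpotent element x = e in sl2 and y = h:

```python
# Check (exp x) y (exp x)^{-1} = exp(ad x)(y) for x = e, y = h in sl2,
# using plain 2x2 integer matrices.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def brk(A, B):                       # Lie bracket [A, B]
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

e = [[0, 1], [0, 0]]
h = [[1, 0], [0, -1]]

# exp(e) = I + e since e^2 = 0; its inverse is I - e.
exp_e = [[1, 1], [0, 1]]
exp_e_inv = [[1, -1], [0, 1]]
lhs = matmul(matmul(exp_e, h), exp_e_inv)

# exp(ad e)(h) = h + [e, h] + (1/2)[e, [e, h]];  here [e, [e, h]] = 0.
ad1 = brk(e, h)                      # = -2e
rhs = [[h[i][j] + ad1[i][j] for j in range(2)] for i in range(2)]
assert brk(e, ad1) == [[0, 0], [0, 0]]
assert lhs == rhs                    # both equal h - 2e
```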
Now we come back to our root vectors xα ∈ gα for α ∈ ∆. Suppose σ : Φ → Φ is an automorphism of Φ. We would like to lift it to an automorphism σ̃ of g.
Example 2.2. Take −id : Φ → Φ, α ↦ −α. This induces the map h ↦ −h on h, so
σ̃(hα ) = −hα = h−α .
We can define where the xα go for α ∈ ∆: we want xα ↦ −yα , where [xα , yα ] = hα , and indeed the image has to be in g−α . Then we will have yα ↦ −xα because of the relation [xα , yα ] = hα . We get (by the Isomorphism Theorem) a σ̃ : g → g with σ̃² = id. ⌟
Example 2.3. In sl2 we have σ, the conjugation by
⎛ 0 1⎞
⎝−1 0⎠ ,
which coincides with the σ̃ from this construction. ⌟
We expect to see a relation between W and Int(g). We want to associate a σ to an element w ∈ W and then associate σ̃ ∈ Aut(g) to that, but the problem is that σ ↦ σ̃ is not a group homomorphism.
3. Borel and Cartan subalgebras
What happens for different h? The main point is that in a semisimple g, all h’s are conjugate: there is an element of Int(g) taking h to h′ .
Definition 12. A Cartan subalgebra in g is one that is nilpotent and its own normalizer.
Example 3.1. If g is semisimple and h is a maximal toral subalgebra, then h is a Cartan subalgebra. (Try to check this!) ⌟
In fact in Chapter 15 of [Humphreys] we see that more is true:
Theorem 3.1. If g is semisimple, then h is a Cartan subalgebra if and only if h is a maximal
toral subalgebra.
So to prove our main point we have to show that the Cartan subalgebras are conjugate.
Theorem 3.2. If g is any Lie algebra (not necessarily semisimple) then all Cartan subalgebras are conjugate. (Here we certainly use algebraic closedness of the underlying field).
Definition 13. A Borel subalgebra is a maximal solvable subalgebra in g.
For example the set of all upper-triangular matrices is a Borel subalgebra. For the proof we need to pass to the world of Lie groups. One way to do this is to take the group corresponding to g to be Int(g): we take the xα and yα and consider the exponentials exp(ad xα ) and exp(ad yα ).
Proof steps for the theorem. Here is the program of the proof:
(1) If g is solvable, all Cartan subalgebras are conjugate.
(2) Embed any Cartan subalgebra h in a Borel subalgebra. This is possible since h is nilpotent, hence solvable, and a Borel subalgebra is a maximal solvable one.
(3) We associate groups G and B to g and b.
(4) We prove that any two Borel subalgebras in semisimple g are conjugate: G/B is a projective variety (say from the Bruhat decomposition). If b′ is another Borel subalgebra then its group B ′ acts on G/B. We know that if a solvable algebraic group acts on a projective variety then it has a fixed point. So let xB ∈ G/B be such that for any b ∈ B ′ we have
b(xB) = xB.
Therefore x−1 bx ∈ B for any b ∈ B ′ . This shows conjugacy.
(5) We conclude that the corresponding Borel subalgebras b and b′ are conjugate. This is the result of our association g ↦ G = Int(g).
(6) Under this conjugation h′ ⊂ b′ maps to some h′′ ⊂ b. But any two Cartan subalgebras of b are conjugate by Step 1.
Remark. If G is any group with Lie algebra g, then Int(g) is the group
Gsc /Z(Gsc )
where Gsc is the simply connected Lie group with Lie algebra g.
4. From root systems to Borel subalgebras
Let g be semisimple. We start by fixing a Cartan subalgebra h. Then the Borel subalgebras containing h are in one-to-one correspondence with the bases for Φ relative to h. This gives a way of going from ∆ to b:
b = h ⊕ (⊕α≻0 gα ) .
The claim is that any Borel subalgebra containing h is of this form. Why is b solvable? Look at ad xα acting on b for α ∈ ∆: it pushes the root vectors up (increases their height). Why is it maximal? ... All b’s are conjugate and all h’s are conjugate. This shows independence from the choice of h.
Example 4.1. In the sl3 picture, choosing the Weyl chamber giving the simple roots {α, β}, we get the Borel subalgebra of all matrices of the form
⎛∗ ∗ ∗⎞
⎜ 0 ∗ ∗⎟
⎝ 0 0 ∗⎠
and if we take α and −α − β we get the Borel subalgebra of all matrices of the form
⎛∗ ∗ 0⎞
⎜0 ∗ 0⎟ .
⎝∗ ∗ ∗⎠
⌟
Remark. So we have a one-to-one correspondence between
Weyl chambers ↔ simple roots ↔ Borel subalgebras .
5. The automorphism group of a Lie algebra
Let g be a semisimple Lie algebra. Fix h and ∆. Let b = b(∆) be the corresponding Borel subalgebra, and let τ be an automorphism of g. Then τ (b) is a Borel subalgebra, so there is σ1 ∈ Int(g) such that
σ1 τ (b) = b.
Now σ1 τ takes h to some other Cartan subalgebra of b. Next let σ2 ∈ Int(g) be chosen, preserving b, such that
σ2 σ1 τ (h) = h.
Then σ2 σ1 τ gives an automorphism of Φ which preserves ∆, since it takes b to b. We know that
Aut(Φ) = W ⋊ Γ.
The claim is that there exists γ ∈ Γ, lifted to γ̃ ∈ Aut(g), such that γ̃σ2 σ1 τ takes hα to hα , xα to cα xα and yα to cα−1 yα for every α ∈ ∆. In particular γ̃σ2 σ1 τ takes gα to itself for every α ∈ ∆.
One can see that there is an element of Int(g) that undoes the rescaling of the xα ’s and yα ’s. We conclude that
Aut(g) = Int(g) ⋊ Γ.
CHAPTER 6
Universal enveloping algebras
The motivation is that if G is a finite group, there is a one-to-one correspondence between representations of G and C[G]-modules (and C[G] is a unital associative algebra, by the way). For locally compact G we may associate an algebra H(G) serving the same purpose (without a unit in general). What we want to do is to make such a ring for g.
Another motivational comment: we have seen that the images of representations ρ : g → gl(V ) have products, and we used this to define Jordan decompositions and Casimir elements, which we used to prove complete reducibility. We have also seen that the Jordan decomposition is independent of the representation. This suggests that there should be a construction U(g), an algebra whose product captures the properties of ρ(X)ρ(Y ) that are independent of ρ. Since [X, Y ] ↦ ρ(X)ρ(Y ) − ρ(Y )ρ(X), it is clear how we want the product structure of U(g) to be related to the Lie bracket.
For the construction, thinking of g as a vector space, we start with
T (g) = F ⊕ g ⊕ g⊗2 ⊕ ⋯
There is a natural product structure
(v1 ⊗ ⋯ ⊗ vm )(u1 ⊗ ⋯ ⊗ un ) = v1 ⊗ ⋯ ⊗ vm ⊗ u1 ⊗ ⋯ ⊗ un .
We define U(g) = T (g)/I where I is the 2-sided ideal in T (g) generated by all
(x ⊗ y − y ⊗ x) − [x, y].
1. Universal property
Recall that for a vector space V over a field F , the space
T (V ) = F ⊕ ⊕n>0 V ⊗n
is an associative algebra with unit 1 ∈ F . Recall that this space has the universal property that for any linear mapping ϕ : V → A to a unital associative algebra A over F , there is a unique lift ϕ̃ : T (V ) → A which preserves the units, 1T ↦ 1A , and makes the following
diagram commute:
V → T (V )
 ϕ ↘  ↓ ϕ̃
      A
Note that there is a canonical mapping i : g → U(g) (which need not be injective at this point). The universal property for U(g) reads: for any unital associative algebra A and a linear mapping ϕ : g → A satisfying
ϕ([x, y]) = ϕ(x)ϕ(y) − ϕ(y)ϕ(x),
there is a unique lift ϕ̃ : U(g) → A. In fact, by the universal property of T (g) there exists a map ϕ̂ : T (g) → A, and the condition on ϕ says exactly that I is contained in its kernel. Note that by the same universal property, an associative algebra with the mentioned property is unique.
Proposition 8. There is a one-to-one correspondence between the representations of g and the U(g)-modules.
Proof. Given a U(g)-module, the map i : g → U(g) induces a representation of g; conversely, if ρ : g → gl(V ) is a representation then there is a unique homomorphism of algebras ρ̃ : U(g) → gl(V ), turning V into a U(g)-module.
Remark. If g has a faithful representation, then i : g → U(g) has to be injective for trivial reasons.
Theorem 1.1 (Poincaré–Birkhoff–Witt). Suppose g has a countable ordered basis {xj }. Then the set
{1} ∪ {xi(1) ⋯ xi(m) : i(1) ≤ ⋯ ≤ i(m), m ≥ 1}
forms a basis of U(g). Here xi(1) ⋯ xi(m) = π(xi(1) ⊗ ⋯ ⊗ xi(m) ), where π is the quotient map T (g) → T (g)/I.
We will present an alternative version of PBW and show that it implies the above theorem.
Let us fix notation:
T m V = V ⊗m , and Tm V = ⊕mi=0 T i V.
Then π : T (g) → U(g) maps Tm to π(Tm ) =: Um . Finally, with the convention that U−1 = 0, we set
G m := Um /Um−1 (∀m ≥ 0), and G := ⊕∞m=0 G m .
Obviously there is a well-defined induced mapping
Um /Um−1 × Up /Up−1 → Um+p /Um+p−1
given by the product structure of U, hence G is a graded algebra.
Lemma 14. The natural mapping ϕ : T (g) → G factors through the symmetric algebra
S(g) = T (g)/⟨{x ⊗ y − y ⊗ x}⟩.
Proof. This is clear, as π(x ⊗ y − y ⊗ x) = [x, y] ∈ U1 , whereas x ⊗ y − y ⊗ x ∈ T2 , so its image in G 2 = U2 /U1 vanishes. So we have a surjection S(g) ↠ G.
Theorem 1.2 (Poincare-Birkhoff-Witt (2)). This map is an isomorphism of algebras.
Corollary 12. Let W ⊆ T m (g) be a subspace such that W maps isomorphically onto
S m (g). Then π(W ) is a complement of Um−1 in Um .
Proof. It follows by staring at the commutative diagram
W ⊆ T m → Um → Um /Um−1 = G m
      ↘ S m ↗
where W → S m is an isomorphism by assumption and S m → G m is the isomorphism ϕ of Theorem 1.2; hence π(W ) maps isomorphically onto Um /Um−1 , i.e. π(W ) is a complement of Um−1 in Um .
Corollary 13. i : g → U(g) is injective.
Proof. This is the case m = 1 of the previous corollary.
Corollary 14 (Poincaré–Birkhoff–Witt theorem).
Proof. The subspace W of T m spanned by all xi(1) ⊗ ⋯ ⊗ xi(m) with i(1) ≤ ⋯ ≤ i(m) maps isomorphically onto S m . Therefore π(W ) is a complement of Um−1 in Um .
Corollary 15. If h ↪ g then U(h) ↪ U(g).
2. Generators, relations, and the existence theorem
Let X be any set. The free Lie algebra LX on the set X is defined by the universal property depicted in the commutative diagram
X → LX
 ϕ ↘  ↓ ∃!
      A
i.e. any mapping ϕ : X → A into a Lie algebra A factors uniquely through the canonical map i : X → LX . In case of existence, such a Lie algebra is unique for trivial reasons. For the existence, we start with a vector space V with basis X; then let LX be the Lie subalgebra of the tensor algebra T (V ) generated by X. Another construction is suggested in the discussion problems.
Proposition 9. Let g be a semisimple finite-dimensional Lie algebra with a given maximal toral subalgebra h ⊂ g, root system Φ, and base of simple roots ∆ = {α1 , ⋯, αℓ }. For each i pick
(1) hi ∈ h such that αi (hi ) = 2 (i.e. hi = hαi );
(2) xi an arbitrary generator of gαi ;
(3) yi such that [xi , yi ] = hi .
Then g is generated by {xi , yi , hi : i} with the following relations:
S1 [hi , hj ] = 0 for all i, j;
S2 [xi , yi ] = hi and [xi , yj ] = 0 if i ≠ j;
S3 [hi , xj ] = ⟨αj , αi ⟩xj and [hi , yj ] = −⟨αj , αi ⟩yj ;
S+ij for i ≠ j, (ad xi )−⟨αj ,αi ⟩+1 (xj ) = 0;
S−ij for i ≠ j, (ad yi )−⟨αj ,αi ⟩+1 (yj ) = 0.
Proof. S1–S3 are easy to see. For S±ij note that αj − αi is not a root; hence the αi -string through αj is αj , αj + αi , ⋯, αj + qαi where q = −⟨αj , αi ⟩.
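The smallest nontrivial Serre relation can be checked directly in sl3 (my own verification, not from the lecture, with x1 = E12 , x2 = E23 ; here ⟨α2 , α1 ⟩ = −1, so the exponent is −(−1) + 1 = 2):

```python
# Verify (ad x1)^2 (x2) = 0 in sl3 with x1 = E12, x2 = E23.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def brk(A, B):
    n = len(A)
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(n)] for i in range(n)]

x1 = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]   # E12, root vector for alpha_1
x2 = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]   # E23, root vector for alpha_2

ad_x1_x2 = brk(x1, x2)                    # = E13, root vector for a1 + a2
assert ad_x1_x2 == [[0, 0, 1], [0, 0, 0], [0, 0, 0]]
assert brk(x1, ad_x1_x2) == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```

This matches the root-string argument in the proof: α2 + 2α1 is not a root of A2, so applying ad x1 twice to x2 must give zero.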
Theorem 2.1 (Serre’s existence). Let ∆ be a Dynkin diagram. The Lie algebra g on generators {xi , yi , hi : i} with the relations above is a semisimple finite-dimensional Lie algebra which has ∆ as its Dynkin diagram.
Remark. The Lie algebra g0 on the same set of generators as in the above theorem, but satisfying only the relations S1–S3, is a Lie algebra with finite-dimensional Cartan subalgebra h. The relations S±ij ensure that g is finite-dimensional.
CHAPTER 7
Representation Theory
1. Weight lattice
Recall that E is a real vector space spanned by roots in Φ. We shall fix ∆ ⊂ Φ a set of
simple roots. Recall that this space has an inner product (., .) on it. The weight lattice is
the lattice of elements
{λ ∈ E ∶ ⟨λ, α⟩ ∈ Z, ∀α ∈ Φ}.
Another interesting lattice is the root lattice, which is the Z-module generated by ∆.
For example consider the corresponding lattices for sl2 : Q ⊂ P , and #P /Q = 2. In fact P /Q carries the information about how many different groups can be associated with g: if G is a group with Lie algebra g, one can construct a character lattice X as an intermediate lattice
Q ⊆ X ⊆ P.
1.1. Basis for P . We know that ∆ = {α1 , ⋯, αℓ } is a basis for E; thus the vectors 2αi /(αi , αi ) still form a basis for E. Take {λi } the dual basis for the latter:
⟨λi , αj ⟩ = (λi , 2αj /(αj , αj )) = δij .
Then for any λ = ∑ mi λi we have ⟨λ, αi ⟩ = mi , so P is the Z-span of {λi }. These are called the fundamental weights.
Note that since ⟨αi , αj ⟩ ∈ Z for all i and j, we have Q ⊆ P . Moreover the elements ⟨αi , αj ⟩ are the entries of the Cartan matrix C, and satisfy
αi = ∑j mij λj , mij = ⟨αi , αj ⟩.
Hence the transition matrix from the basis of P to the basis of Q is C, and therefore
#(P /Q) = ∣ det C∣.
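For instance (my own check, not from the lecture), the 2 × 2 Cartan matrices give #(P /Q) = 3, 2, 1 for A2 , B2 , G2 respectively:

```python
# #(P/Q) = |det C| for some rank-2 Cartan matrices; for G2 the weight
# lattice and the root lattice coincide.
def det2(C):
    return C[0][0] * C[1][1] - C[0][1] * C[1][0]

A2 = [[2, -1], [-1, 2]]
B2 = [[2, -1], [-2, 2]]
G2 = [[2, -1], [-3, 2]]

print(det2(A2), det2(B2), det2(G2))   # 3 2 1
```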
1.2. Dominant weights.
Definition 14. A weight is called dominant if it lies in the closure of the positive Weyl
chamber:
⟨λ, α⟩ ≥ 0, ∀α ∈ ∆.
It is called strongly dominant if the above inequalities are all strict.
Note that σαi λj = λj − δij αi ∈ P , hence P is preserved by the Weyl group W . Our work until now shows:
Lemma 15. For any λ ∈ P there is a unique dominant weight λ′ such that λ′ ∈ W.λ.
Recall the partial order on E: λ ≽ µ iff λ − µ is a non-negative combination of simple roots. Then it is obvious that
Lemma 16. If λ is dominant then σλ ≼ λ for any σ ∈ W . If λ is strongly dominant then the inequalities are strict unless σ = id.
However, note that we can have µ dominant and λ ≽ µ with λ not dominant. We will use the notation P + for the set of dominant weights.
Lemma 17. Fix λ ∈ P + . Then the set of dominant weights µ such that µ ≼ λ is finite.
Proof. All such weights µ, satisfy (µ, µ) ≤ (λ, λ) so they are in the intersection of a
discrete lattice and a bounded set in E.
Definition 15. A set Π of weights is called saturated if for any λ ∈ Π and α ∈ Φ, and any integer i between 0 and ⟨λ, α⟩, we have λ − iα ∈ Π.
Obviously such a Π is stable under W , since we may take i = ⟨λ, α⟩, which gives σα (λ).
Example 1.1. The root system Φ, together with 0, is saturated. Also take λ ∈ P + and consider its W -orbit; then the intersection of P with the closure of the convex hull of W.λ is saturated. ⌟
If Π is saturated we say λ is a highest weight of Π if λ ∈ Π and for all µ ∈ Π we have
µ ≼ λ.
Lemma 18. Let Π be a saturated set of weights with highest weight λ. Then if µ is dominant
and µ ≼ λ we have µ ∈ Π.
Proof. Since µ ≼ λ we may write λ = µ + ∑α∈∆ kα α for some integers kα ∈ Z≥0 . Suppose
µ′ = µ + ∑α∈∆ kα α ∈ Π with not all kα zero. We will show that we can reduce at least one
of the kα 's by one unit and still stay in Π; descending induction starting from µ′ = λ then
gives µ ∈ Π. If µ′ ≠ µ then kα > 0 for at least one α. From (∑ kα α, ∑ kα α) > 0 we have
∑β∈∆ kβ ( ∑α∈∆ kα α, β) > 0.
So for at least one β ∈ ∆ we have kβ > 0 and (∑ kα α, β) > 0. Since µ is dominant, ⟨µ, β⟩ ≥ 0,
hence ⟨µ′ , β⟩ > 0. Since Π is saturated we may subtract β from µ′ up to ⟨µ′ , β⟩ ≥ 1 times
and stay in Π; in particular µ′ − β ∈ Π.
Also we deduce that
Corollary 16. For any λ ∈ P + there is a unique saturated set with highest weight λ.
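The saturated set of the corollary can be generated mechanically: start from {λ} and repeatedly close under the strings λ − iα. Below is a sketch for type A2 in fundamental-weight coordinates (since all A2 roots have equal length, ⟨µ, α⟩ is just the corresponding combination of the coordinates of µ; the table of roots and the helper name are ours):

```python
# Roots of A2 written in the basis of fundamental weights, paired with
# their coefficients (c1, c2) in the simple roots, so that
# <mu, a> = c1*m1 + c2*m2 when mu = (m1, m2).
ROOTS = {(2, -1): (1, 0), (-1, 2): (0, 1), (1, 1): (1, 1),
         (-2, 1): (-1, 0), (1, -2): (0, -1), (-1, -1): (-1, -1)}

def saturate(lam):
    """Smallest saturated set of A2 weights containing lam."""
    S = {tuple(lam)}
    changed = True
    while changed:
        changed = False
        for mu in list(S):
            for a, (c1, c2) in ROOTS.items():
                top = c1 * mu[0] + c2 * mu[1]   # <mu, a>
                for i in range(0, top + 1):     # negatives of a handle i < 0
                    nu = (mu[0] - i * a[0], mu[1] - i * a[1])
                    if nu not in S:
                        S.add(nu)
                        changed = True
    return S

# Highest weight (1,1): the weights of the adjoint representation of
# sl3, i.e. the six roots together with 0.
print(len(saturate((1, 1))))   # 7
```

Iterating over both signs of each root covers the case ⟨µ, α⟩ < 0 in Definition 15 with a non-negative loop index.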
2. Classification of representations
Let g be a semisimple Lie algebra, h ⊂ g a Cartan subalgebra, Φ a root system and ∆ a set
of simple roots. We know that we may view any g-module V as a U(g)-module.
If V is finite-dimensional we can simultaneously diagonalize the action of h. Then we get
a decomposition
V = ⊕λ∈h∗ Vλ
into weight spaces Vλ = {v ∈ V ∶ h.v = λ(h)v for all h ∈ h}. We say λ is a weight of V if
Vλ ≠ 0.
Example 2.1. In the case of the adjoint representation ad ∶ g → gl(g) the nonzero weights
of ad are precisely the roots. Note also that h = V0 . ⌟
Example 2.2. For g = sl2 every m ∈ Z≥0 gives an irreducible representation Vm of dimension m + 1. One should think of m as an element of the weight lattice in h∗ . ⌟
Remark. The summation ∑λ Vλ is always a direct sum as a result of linear independence
of eigenvectors for distinct eigenvalues.
Remark. In general (not assuming finite-dimensionality) V ′ = ⊕λ Vλ is not necessarily equal
to V but has the structure of a g-submodule V ′ ⊆ V . In fact we know that g is generated
by the Xα , Yα , hα for α ∈ ∆. V ′ is obviously stable under each hα , and since, for v ∈ Vλ ,
h.(Xα v) = Xα (h.v) + [h, Xα ]v = (λ(h) + α(h)) Xα v,
we observe that Xα maps Vλ to Vλ+α and Yα maps it to Vλ−α .
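For g = sl2 this weight-shifting can be checked numerically. The sketch below builds the (m+1)-dimensional irreducible module in the standard basis v0 , …, vm (the matrix conventions are ours) and verifies both the sl2 relations and that E raises the weight by 2:

```python
import numpy as np

m = 2          # build the (m+1)-dimensional irreducible sl2-module
n = m + 1
H = np.diag([m - 2 * k for k in range(n)]).astype(float)
E = np.zeros((n, n))
F = np.zeros((n, n))
for k in range(1, n):
    E[k - 1, k] = k * (m - k + 1)   # E.v_k = k(m-k+1) v_{k-1}
    F[k, k - 1] = 1.0               # F.v_k = v_{k+1}

# The sl2 relations: [H,E] = 2E, [H,F] = -2F, [E,F] = H.
assert np.allclose(H @ E - E @ H, 2 * E)
assert np.allclose(H @ F - F @ H, -2 * F)
assert np.allclose(E @ F - F @ E, H)

# If H.v = lam*v then H.(E v) = (lam + 2)(E v): E raises weight by 2.
v = np.zeros(n)
v[1] = 1.0                          # weight-0 vector (lam = 0 for m = 2)
w = E @ v
assert np.allclose(H @ w, 2 * w)    # E v has weight 0 + 2 = 2
```

Here α(h) = 2 for the unique positive root of sl2, so this is exactly the identity h.(Xα v) = (λ(h) + α(h)) Xα v in matrix form.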
2.1. Standard cyclic modules.
Definition 16. Let V be a g-module. A vector v + ∈ V is called a maximal vector if
gα .v + = 0 for all α > 0. V is a standard cyclic module (of highest weight λ) if it is generated
by a maximal vector v + ∈ Vλ .
Note that if V is finite-dimensional it contains a maximal vector of some weight λ (so an
irreducible finite-dimensional module is standard cyclic). Consider b = h ⊕ ⊕α>0 gα . By
Lie's theorem b must have a common eigenvector v + in V . Then v + has to be killed by gα
for every α > 0.
Now suppose V is a standard cyclic g-module and v + is a maximal vector. Consider V ′ =
U(g).v + and let Φ+ = {β1 , ⋯, βm } be the set of positive roots. V ′ has interesting properties
we count below:
(1) V ′ is spanned by the vectors yβ1^n1 ⋯ yβm^nm .v + where yβ is a generator of g−β .
In fact we may write g = n− ⊕ b where n− = ⊕β>0 g−β is the nilpotent part spanned
by the yβ 's. Then, as vector spaces,
U(g) ≅ U(n− ) ⊗ U(b)
by PBW. Now notice that U(n− ) has basis the monomials yβ1^n1 ⋯ yβm^nm , and
U(b) stabilizes ⟨v + ⟩.
(2) The weights of V ′ are of the form
µ = λ − ∑α∈∆ kα α, kα ∈ Z≥0 .
In fact yβ maps Vµ to Vµ−β , so dim Vµ is at most the number of ways of writing µ
as µ = λ − ∑β∈Φ+ kβ β with kβ ≥ 0; in particular every weight space of V ′ is
finite-dimensional.
(3) dim Vλ = 1.
(4) Each submodule of U(g).v + is a direct sum of its weight spaces.
(5) U(g).v + is indecomposable: it has a unique maximal proper submodule (the sum
of all proper submodules, which is contained in ⊕µ≺λ Vµ ), and hence a unique
irreducible quotient.
(6) Every homomorphic image of U(g).v + is a standard cyclic module of the same
highest weight λ.
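Property (2)'s bound on dim Vµ can be made concrete: the number of expressions µ = λ − ∑β∈Φ+ kβ β is the value of the Kostant partition function at λ − µ. A brute-force sketch for type A2 in fundamental-weight coordinates (the positive roots and helper name are our conventions):

```python
# Positive roots of A2 in fundamental-weight coordinates:
# a1 = (2,-1), a2 = (-1,2), a1+a2 = (1,1).
POS = [(2, -1), (-1, 2), (1, 1)]

def kostant(nu, bound=30):
    """Number of ways to write nu as a Z>=0-combination of POS.

    By property (2), this bounds dim V_mu for mu = lambda - nu.
    The search bound is safe: the coordinate equations force
    k1 + k2 + 2*k3 to be a fixed constant.
    """
    count = 0
    for k1 in range(bound):
        for k2 in range(bound):
            for k3 in range(bound):
                s = (k1 * POS[0][0] + k2 * POS[1][0] + k3 * POS[2][0],
                     k1 * POS[0][1] + k2 * POS[1][1] + k3 * POS[2][1])
                if s == tuple(nu):
                    count += 1
    return count

# For the adjoint representation of sl3 (lambda = (1,1), mu = 0) the
# bound is 2, and indeed dim V_0 = dim h = 2 there.
print(kostant((1, 1)))   # 2
```

Note the bound is not always attained; it only caps the multiplicity.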
Theorem 2.1. For any λ ∈ h∗ there is a unique (up to isomorphism) irreducible standard
cyclic module of highest weight λ (it can be infinite-dimensional).
Proof of existence. Take the highest weight λ. Let Dλ ∶= ⟨v + ⟩ be the one-dimensional
b-module on which h acts by λ and each gα (α > 0) acts by zero. So Dλ has the structure of
a U(b)-module. Let Z(λ) = U(g) ⊗U (b) Dλ (with maximal vector 1 ⊗ v + ). Let Y (λ) be the
maximal proper submodule of Z(λ). Then V (λ) = Z(λ)/Y (λ).
Example 2.3. For sl2 (C) with λ ∈ Z+ we have V (λ) = Z(λ)/Z(−λ − 2): the maximal
submodule Y (λ) is generated by the maximal vector f^{λ+1}.v + , which has weight
λ − 2(λ + 1) = −λ − 2. ⌟
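The sl2 computation behind Example 2.3 can be verified directly: in Z(λ) with basis {f^n.v+} the relations give e.f^n.v+ = n(λ − n + 1) f^{n−1}.v+, so f^{λ+1}.v+ is killed by e. A small sketch (the closed-form e-action is the standard sl2 Verma formula; the dict encoding of vectors is ours):

```python
# Verma module Z(lam) for sl2: basis f^n.v+ (n >= 0), vectors stored
# as dicts {n: coefficient}.  The e-action follows from e.v+ = 0,
# h.f^n.v+ = (lam - 2n) f^n.v+, and the relation [e, f] = h.
def act_e(lam, vec):
    """Apply e: uses e.f^n.v+ = n(lam - n + 1) f^(n-1).v+."""
    out = {}
    for n, c in vec.items():
        if n > 0:
            out[n - 1] = out.get(n - 1, 0) + c * n * (lam - n + 1)
    return {n: c for n, c in out.items() if c != 0}

lam = 3
top = {lam + 1: 1}               # the vector f^(lam+1).v+
print(act_e(lam, top))           # {}  -- it is a maximal vector
print(lam - 2 * (lam + 1))       # -5 = -lam - 2, its weight
```

So the submodule U(g).(f^{λ+1}.v+) is standard cyclic of highest weight −λ − 2, i.e. a copy of Z(−λ − 2), as the example asserts.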
So every irreducible finite-dimensional module has to be of the form V (λ). Let α ∈ ∆ be
a simple root and let Sα be the copy of sl2 spanned by xα , yα , hα (the sl2 -triple attached
to α). Then V (λ) = ⊕Vµ is a finite-dimensional Sα -module, where µ ranges over integral
weights (in the sense of last class; i.e. µ ∈ P ) and µ(hα ) = ⟨µ, α⟩ ∈ Z. Since λ(hα ) is the
highest weight of V (λ) as an sl2 -module we have ⟨λ, α⟩ ≥ 0 for every α ∈ ∆. So λ is a
dominant integral weight.
The main result is that when V is a finite-dimensional module and Π(V ) is the set of
weights of V , then Π(V ) is saturated (so for any α ∈ Φ, µ ∈ Π(V ) the α-string through µ is
in Π(V )) and we have a one-to-one correspondence
{ finite-dimensional irreducible representations } ↔ { saturated sets of highest weight λ, with λ a dominant integral weight }.