BASIC LINEAR ALGEBRA
AN EXERCISE APPROACH
Gabriel Nagy
Kansas State University
© Gabriel Nagy
CHAPTER 1
Vector spaces and linear maps
In this chapter we introduce the basic algebraic notions of vector spaces and
linear maps.
1. Vector spaces
Suppose k is a field. (Although the theory works for arbitrary fields, we will
eventually focus only on the choices k = R, C.)
Definition. A k-vector space is an abelian group (V, +), equipped with an
external operation¹
k × V ∋ (λ, v) ⟼ λv ∈ V,
called scalar multiplication, with the following properties:
• λ · (v + w) = (λ · v) + (λ · w), for all λ ∈ k, v, w ∈ V .
• (λ + µ) · v = (λv) + (µv), for all λ, µ ∈ k, v ∈ V .
• (λ · µ)v = λ · (µ · v), for all λ, µ ∈ k, v ∈ V .
• 1 · v = v, for all v ∈ V .
The elements of a vector space are sometimes called vectors.
Examples. The field k itself is a k-vector space, with its own multiplication
as scalar multiplication.
A trivial group (with one element) is always a k-vector space (with the only
possible scalar multiplication).
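Example. For a concrete illustration, take any integer n ≥ 1. The set kⁿ of n-tuples of scalars is a k-vector space under the componentwise operations
(v1, . . . , vn) + (w1, . . . , wn) = (v1 + w1, . . . , vn + wn), λ · (v1, . . . , vn) = (λv1, . . . , λvn).
For k = R and n = 2 this is the familiar coordinate plane R².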
1 Suppose V is a k-vector space. Prove that
0 · v = 0, for all v ∈ V.
(The zero in the left-hand side is the zero of the field k. The zero in the right-hand side is the neutral element in the abelian group V .)
Use the above fact to conclude that for any v ∈ V , the vector −v (the inverse
of v in the abelian group V ) can also be described by
−v = (−1) · v.
2 Fix a field k, a non-empty set I, and a family (Vi)i∈I of k-vector spaces. Consider the product ∏i∈I Vi, equipped with the operations:
• (vi )i∈I + (wi )i∈I = (vi + wi )i∈I ;
• λ · (vi )i∈I = (λvi )i∈I .
¹ When convenient, the symbol · may be omitted.
Prove that ∏i∈I Vi is a k-vector space. This structure is called the k-vector space direct product of the family (Vi)i∈I.
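Example. For instance, when I = {1, 2} the direct product is simply V1 × V2, the set of pairs (v1, v2) with v1 ∈ V1 and v2 ∈ V2, and the operations read (v1, v2) + (w1, w2) = (v1 + w1, v2 + w2) and λ(v1, v2) = (λv1, λv2). The space kⁿ above is the case I = {1, . . . , n} and Vi = k, for all i.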
Definition. Suppose V is a k-vector space. A subset X ⊂ V is called a
k-linear subspace, if
• Whenever x, y ∈ X, we have x + y ∈ X.
• Whenever x ∈ X and λ ∈ k, we have λx ∈ X.
3 If X is a k-linear subspace of the k-vector space V , then X itself is a k-vector space, when equipped with the operations “inherited” from V . Prove that any linear subspace of V contains the zero vector 0 ∈ V .
4 Let (Vi)i∈I be a family of k-vector spaces (indexed by a non-empty set I). For an element v = (vi)i∈I ∈ ∏i∈I Vi let us define the set
⌊v⌋ = {i ∈ I : vi ≠ 0}.
Prove that the set
{v ∈ ∏i∈I Vi : ⌊v⌋ is finite}
is a linear subspace of ∏i∈I Vi. This space is called the k-vector space direct sum of the family (Vi)i∈I, and is denoted by ⊕i∈I Vi.
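Example. To see the difference between the two constructions, take I = {1, 2, 3, . . . } and Vi = k, for all i. The direct product ∏i∈I k consists of all sequences (λ1, λ2, . . . ) of scalars, while the direct sum ⊕i∈I k consists only of those sequences with λi ≠ 0 for at most finitely many i. Of course, when I is finite, ⊕i∈I Vi = ∏i∈I Vi.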
Definition. Suppose we have a family (Vi)i∈I of vector spaces. For a fixed index j ∈ I, define the map εj : Vj → ⊕i∈I Vi as follows. For a vector v ∈ Vj we construct εj(v) = (wi)i∈I, where
wi = v, if i = j; wi = 0, if i ≠ j.
We call the maps εj : Vj → ⊕i∈I Vi, j ∈ I, the standard inclusions. The maps
πj : ∏i∈I Vi ∋ (vi)i∈I ⟼ vj ∈ Vj
are called the coordinate maps.
5 Let (Vi)i∈I be a family of vector spaces. Prove that the standard inclusions εi, i ∈ I, are injective. In fact prove that πi ◦ εi = IdVi. Prove that any element v ∈ ⊕i∈I Vi is given as
v = Σi∈⌊v⌋ (εi ◦ πi)(v).
In other words, if v = (vi)i∈I, then v = Σi∈⌊v⌋ εi(vi).
6 Suppose (Xj)j∈J is a family of k-linear subspaces of V . Prove that the intersection ∩j∈J Xj is again a k-linear subspace of V .
Definition. Let V be a k-vector space, and let M ⊂ V be an arbitrary subset
of V . Consider the family
F = {X : X k-linear subspace of V , and X ⊃ M }.
The set
Spank(M) = ∩X∈F X,
which is a linear subspace of V by the preceding exercise, is called the k-linear span
of M in V .
Convention. Spank (∅) = {0}.
Example. The linear span of a singleton is described as
Spank ({v}) = kv(= {λv : λ ∈ k}).
7 Prove that if M and N are subsets of a k-vector space V , with M ⊂ N , then we also have the inclusion Spank(M) ⊂ Spank(N). Give an example where M ⊊ N, but their spans coincide.
8 Let V be a k-vector space, and M be a subset of V . For an element v ∈ V ,
prove that the following are equivalent:
(i) v ∈ Spank (M );
(ii) there exists an integer n ≥ 1, elements x1, . . . , xn ∈ M , and scalars λ1, . . . , λn ∈ k such that² v = λ1x1 + · · · + λnxn.
Hint:
First prove that the set of elements satisfying property (ii) is a linear subspace. Second,
prove that the linear span of M contains all elements satisfying (ii).
Notation. Suppose V is a vector space, and A1 , . . . , An are subsets of V . We
define
A1 + · · · + An = {a1 + · · · + an : ak ∈ Ak, k = 1, . . . , n}.
9 Let V be a k-vector space. Suppose A1 , . . . , An are k-homogeneous, in the
sense that for every k = 1, . . . , n we have the equality:
Ak = {λx : λ ∈ k, x ∈ Ak }.
Prove the equality
Span(A1 ∪ · · · ∪ An ) = Span(A1 ) + · · · + Span(An ).
10 Let V be a k-vector space, and (Xj)j∈J be a family of linear subspaces of V . For an element v ∈ V , prove that the following are equivalent:
(i) v ∈ Spank(∪j∈J Xj);
(ii) there exists an integer n ≥ 1 and x1, . . . , xn ∈ ∪j∈J Xj, such that v = x1 + · · · + xn.
² From now on we will use the usual convention which gives the scalar multiplication precedence over addition.
Comment. If X1, . . . , Xn are linear subspaces of the vector space V , then using the notation preceding Exercise 9, and the above result, we get
Span(X1 ∪ · · · ∪ Xn) = X1 + · · · + Xn.
11 In general, a union of linear subspaces is not a linear subspace. Give an
example of two linear subspaces X1 , X2 ⊂ R2 , such that X1 ∪ X2 is not a linear
subspace.
12 Prove that the union of a directed family of linear subspaces is a linear
subspace. That is, if (Xj )j∈J is a family of k-linear subspaces of the k-vector space
V , with the property
• For any j, k ∈ J there exists some ℓ ∈ J such that Xj ⊂ Xℓ ⊃ Xk,
then ∪j∈J Xj is again a k-linear subspace of V .
Hint: Use the preceding exercise.
Definition. Suppose V is a k-vector space. A set M ⊂ V is said to be k-linearly independent, if (compare with Exercise ??) for every strict subset P ⊊ M , one has the strict inclusion Spank(P) ⊊ Spank(M).
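Example. As a quick illustration in k², the set M = {(1, 0), (0, 1)} is linearly independent: removing either vector shrinks the span from k² to a line. By contrast, M = {(1, 0), (2, 0)} is not linearly independent, since already Spank({(1, 0)}) = Spank(M) = k × {0}.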
13 Let M be a subset in the k-vector space V . Prove that the following are
equivalent:
(i) M is linearly independent.
(ii) If n ≥ 1 is an integer, if x1 , . . . , xn ∈ M are different elements, and if
λ1 , . . . , λn ∈ k satisfy
λ1 x1 + · · · + λn xn = 0,
then λ1 = · · · = λn = 0.
Hint: To prove (i) ⇒ (ii) show that if one has a relation as in (ii), then one of the x's can be eliminated, without changing the linear span.
14 Prove that a linearly independent set M cannot contain the zero element.
Prove that a subset of a linearly independent set is again linearly independent.
15 Prove that the union of a directed family of linearly independent sets is again
a linearly independent set. That is, if (Xj )j∈J is a family of k-linearly independent
subsets of the k-vector space V , with the property
• For any j, k ∈ J there exists some ℓ ∈ J such that Xj ⊂ Xℓ ⊃ Xk,
then ∪j∈J Xj is again a k-linearly independent subset of V .
16 Suppose V is a vector space, and x1 , x2 , . . . is a sequence (finite or infinite)
of different non-zero vectors in V . Prove that the following are equivalent:
(i) The set M = {x1 , x2 , . . . } is linearly independent.
(ii) The sequence of subspaces Wk = Span({x1, . . . , xk}) is strictly increasing,
in the sense that we have strict inclusions
W1 ( W2 ( . . . .
Hint: The implication (i) ⇒ (ii) is clear from the definition.
Conversely, if M were not linearly independent, there exist scalars λ1, . . . , λn ∈ k such that λ1x1 + · · · + λnxn = 0, and at least one of the λ's non-zero. If we take k = max{j : 1 ≤ j ≤ n and λj ≠ 0}, then we get xk ∈ Span({x1, . . . , xk−1}), which proves that Wk = Wk−1.
Definition. Let V be a k-vector space. A subset B ⊂ V is called a k-linear
basis for V , if:
• Spank (B) = V ;
• B is k-linearly independent.
17 Prove the following:
Theorem 1.1.1. Let V be a k-vector space, and let P ⊂ M ⊂ V be subsets,
with P linearly independent, and Spank (M ) = V . Then there exists a linear basis
B such that P ⊂ B ⊂ M .
Sketch of proof: Consider the set
B = {B ⊂ V : B linearly independent, and P ⊂ B ⊂ M },
equipped with the inclusion as the order relation. Use Zorn’s Lemma to prove that B has a
maximal element, and then prove that such a maximal element must span the whole space. (One
key step is the checking of the hypothesis of Zorn's Lemma. Use Exercise 13.)
18 Suppose B is a linear basis for the vector space V , and P and M are subsets
of V , such that one has strict inclusions P ⊊ B ⊊ M . Prove that P and M are no
longer bases (although P is linearly independent and Span(M ) = V ).
19 Let W be a linear subspace of the vector space V , and let A be a linear basis
for W . Prove that there exists a linear basis B for V , with B ⊃ A.
Hint: Use Theorem 1.1.1.
Definition. A vector space is said to be finite dimensional, if it has a finite
basis.
20 Prove the following
Lemma 1.1.1. (Exchange Lemma) Suppose A and B are linear bases for the vector space V . Prove that for every a ∈ A, there exists some b ∈ B, such that the set (A ∖ {a}) ∪ {b} is again a linear basis.
Hint: Write a = β1b1 + · · · + βnbn, for some b1, . . . , bn ∈ B, and some β1, . . . , βn ∈ k ∖ {0}. At least one of the b's does not belong to Span(A ∖ {a}). Choose this as the exchange for a. We have the inclusion (A ∖ {a}) ∪ {b} ⊂ A ∪ {b}, with the first set linearly independent, but the second one not. Using the fact that Span(A ∪ {b}) = V , Theorem 1.1.1 will force (A ∖ {a}) ∪ {b} to be a basis.
21 Prove that any two linear bases, in a finite dimensional vector space, have the
same number of elements.
Hint: Fix a finite basis A = {a1 , . . . , an }, and let B be an arbitrary basis. Use the Exchange
Lemma 1.1.1 to construct inductively a sequence of elements b1 , . . . , bn ∈ B, such that
• For every k = 1, . . . , n, the set {b1 , . . . , bk−1 , ak , . . . , an } is a linear basis.
Note that all the b's must be different. We end up with a linear basis B0 = {b1, . . . , bn}, with |B0| = n. Use Exercise 18 to conclude that B0 = B.
22* Prove the following generalization of the above result.
Theorem 1.1.2. Any two linear bases in a vector space have the same cardinality. (In cardinal arithmetic two sets have the same cardinality if there is a
bijection between them.)
Sketch of proof: Suppose A and B are two bases for V . If either A or B is finite, we apply the previous exercise. So we can assume that both A and B are infinite.
For every a ∈ A we write it (uniquely) as a = β1b1 + · · · + βnbn, for some b1, . . . , bn ∈ B, and some β1, . . . , βn ∈ k ∖ {0}, and we set PB(a) = {b1, . . . , bn}, so that PB(a) is the smallest subset P ⊂ B with a ∈ Span(P). If we denote by Fin(B) the set of all finite subsets of B, we now have a map Π : A ∋ a ⟼ PB(a) ∈ Fin(B). This map need not be injective. However, for every P ∈ Fin(B), we have Π−1(P) ⊂ Span(P), which means that Π−1(P) is a linearly independent set in the finite dimensional vector space Span(P). By Theorem 1.1.1, combined with the previous exercise, this forces each of the sets Π−1(P), P ∈ Fin(B), to be finite.
Then A is a disjoint union of preimages A = ∪P∈Fin(B) Π−1(P), each of the sets in this union being finite. Since Fin(B) is infinite, we get
Card(A) = Card(∪P∈Fin(B) Π−1(P)) ≤ Card Fin(B) = Card(B).
By symmetry, we also have Card(B) ≤ Card(A), and we are done.
Definition. Given a k-vector space V , the above Theorem states that the
cardinality of a linear basis for V is independent of the choice of the basis. This
“number” is denoted by dimk V , and is called the dimension of V . In the finite
dimensional case, the dimension is a non-negative integer. If V is the trivial (zero)
space, we define its dimension to be zero.
23 Let n ≥ 1 be an integer. Define, for each j ∈ {1, . . . , n}, the element ej = (δ1j, . . . , δnj) ∈ kⁿ. (Here δij stands for the Kronecker symbol, defined to be 1, if i = j, and 0, if i ≠ j.) Prove that the set {e1, . . . , en} is a linear basis for the vector space kⁿ; therefore we have dim kⁿ = n.
24 Generalize the above result, by proving that
dim(⊕i∈I k) = Card(I).
25 Suppose W is a linear subspace of the vector space V . Prove that dim W ≤
dim V . (This means that if A is a linear basis for W , and B is a linear basis for V ,
then Card(A) ≤ Card(B), which in turn means that there exists an injective map
f : A → B.)
Hint: Find another basis B′ for V , with B′ ⊃ A. Then use Theorem 1.1.2.
26 Suppose V is a vector space. Prove that the following are equivalent:
(i) V is finite dimensional.
(ii) Whenever W is a linear subspace of V , with dim W = dim V , it follows
that W = V .
(iii) Every infinite increasing sequence W1 ⊂ W2 ⊂ W3 ⊂ · · · ⊂ V of linear subspaces is stationary, in the sense that there exists some k ≥ 1, such that Wn = Wk, for all n ≥ k.
Definition. Suppose W = (Wj )j∈J is a family of non-zero linear subspaces
of a vector space V . We say that W is linearly independent, if for every k ∈ J one
has
Wk ∩ Span(∪j∈J∖{k} Wj) = {0}.
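Example. For instance, in R² the two coordinate axes form a linearly independent pair of subspaces. On the other hand, the three lines R(1, 0), R(0, 1), R(1, 1) do not form a linearly independent family, since the third one is contained in Span(R(1, 0) ∪ R(0, 1)) = R².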
27 Let V be a vector space, and X be a subset of V ∖ {0}. The following are
equivalent:
(i) The set X is linearly independent (as a subset of V ).
(ii) The family (kx)x∈X is a linearly independent family of linear subspaces
of V .
28 Let W = (Wi)i∈I be a linearly independent family of linear subspaces of V . Suppose X = (Xi)i∈I is another family of linear subspaces, such that Xi ⊂ Wi, for all i ∈ I. Define J = {j ∈ I : Xj ≠ {0}}. Prove that XJ = (Xj)j∈J is also a linearly independent family.
29 Suppose V is a vector space, and W1 , W2 , . . . is a sequence (finite or infinite)
of non-zero linear subspaces of V . Prove that the following are equivalent:
(i) The sequence (W1, W2, . . . ) is a linearly independent family.
(ii) For every k ≥ 2 we have
Wk ∩ (W1 + · · · + Wk−1) = {0}.
Hint: Argue along the same lines as the hint given for exercise ??.
30 Let V be a vector space, and let W1 and W2 be two non-zero linear subspaces.
Prove that (W1 , W2 ) is a linearly independent pair of linear subspaces, if and only
if W1 ∩ W2 = {0}.
31 Let W be a linear subspace of the vector space V . Prove that there exists a
linear subspace X of V , such that W ∩ X = {0}, and W + X = V .
Hint: Start with some linear basis A for W . Find a linear basis B for V , with B ⊃ A. Take X = Span(B ∖ A).
32 Suppose W = (Wj )j∈J is a family of linear subspaces of a vector space V .
Prove that the following are equivalent:
(i) W is linearly independent.
(ii) If n ≥ 1 is an integer, if j1, . . . , jn ∈ J are different indices, and if w1 ∈ Wj1, . . . , wn ∈ Wjn are elements such that w1 + · · · + wn = 0, it follows that w1 = · · · = wn = 0.
(iii) There exists a choice, for each j ∈ J, of a linear basis Bj for Wj, such that
(α) the sets (Bj)j∈J are mutually disjoint, i.e. for any j, k ∈ J with j ≠ k, we have Bj ∩ Bk = ∅;
(β) the set ∪j∈J Bj is linearly independent.
(iii’) For any choice, for each j ∈ J, of a linear basis Bj for Wj , we have the
properties (α) and (β) above.
33 Let (Wj)j∈J be a linearly independent family of linear subspaces of V . If we choose, for each j ∈ J, a linear basis Bj for Wj, then ∪j∈J Bj is a linear basis for the vector space Span(∪j∈J Wj).
34 Let W1 , . . . , Wn be a finite linearly independent family of linear subspaces of
V . Prove that
dim (W1 + · · · + Wn ) = dim W1 + · · · + dim Wn .
35 Let V be a finite dimensional vector space. Suppose W1 , . . . , Wn are linear
subspaces of V , with
• W1 + · · · + Wn = V ;
• dim W1 + · · · + dim Wn = dim V .
Prove that (Wk )nk=1 is a linearly independent family.
Hint: For each k ∈ {1, . . . , n}, let Bk be a linear basis for Wk . Put B = B1 ∪ · · · ∪ Bn . On the
one hand, we have
(1) |B| ≤ |B1| + · · · + |Bn| = dim W1 + · · · + dim Wn = dim V.
On the other hand, we clearly have Span(B) = V , so we must have |B| ≥ dim V . This means that
we must have equality in (1), so in particular the sets B1 , . . . , Bn are mutually disjoint, and their
union is a linear basis for V . Then the result follows from exercise ??.
36 Let (Vi)i∈I be a family of vector spaces, and let εj : Vj → ⊕i∈I Vi, j ∈ I, be the standard inclusions.
(i) (εj(Vj))j∈I is a linearly independent family of linear subspaces of ⊕i∈I Vi.
(ii) Suppose for each j ∈ I, we choose a linear basis Bj for Vj. Then εj(Bj) is a linear basis for εj(Vj).
Using the fact that the εj are injective, conclude that dim εj(Vj) = dim Vj, for all j ∈ I. As an application of the above, prove (use exercise ??) that the dimension of a finite direct sum is given as
dim(V1 ⊕ · · · ⊕ Vn) = dim V1 + · · · + dim Vn.
Definition. Suppose X is a k-linear subspace of the k-vector space V . In particular X is a subgroup of the abelian group (V, +). We can then define the quotient group V /X, which is again an abelian group. Formally, the quotient is defined as the set of equivalence classes modulo X:
v ≡ w (mod X), if and only if v − w ∈ X.
The addition is defined as [v]X + [w]X = [v + w]X . (Here [v]X stands for the
equivalence class of v.) We can also define the scalar multiplication by λ · [v]X =
[λv]X . With these operations V /X becomes a vector space, called the quotient
vector space. For a subset A ⊂ V , we denote by [A]X the set {[a]X : a ∈ A} ⊂ V /X.
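Example. As an illustration, take V = R² and X = {(α, 0) : α ∈ R}, the first coordinate axis. Then v ≡ w (mod X) precisely when v and w have the same second coordinate, so the equivalence classes [v]X are the horizontal lines in the plane, and [(α, β)]X ⟼ β defines a linear isomorphism V /X → R.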
37 Verify the statements made in the above definition.
Definitions. Suppose X is a k-linear subspace of the k-vector space V .
• For a set M ⊂ V we define its k-linear span relative to X to be the linear
subspace
Spank (M ; X) = Spank (M ∪ X) = X + Spank (M ).
• A set P ⊂ V is said to be k-linearly independent relative to X, if the
map P 3 p 7−→ [p]X ∈ V /X is injective, and the set [P ]X is linearly
independent in the quotient vector space V /X.
• A set B ⊂ V is called a k-linear X-basis for V , if:
- Span(B; X) = V ;
- B is linearly independent relative to X.
38 Let X be a linear subspace of the vector space V .
A. If P ⊂ V is linearly independent relative to X, then P is linearly independent.
B. For a set P ⊂ V , the following are equivalent:
(i) P is linearly independent relative to X.
(ii) If n ≥ 1 is an integer, if p1 , . . . , pn ∈ P are different elements, and if
λ1 , . . . , λn ∈ k satisfy
λ1 p1 + · · · + λn pn ∈ X,
then λ1 = · · · = λn = 0.
(iii) There exists a linear basis B for X, such that P ∪ B is linearly
independent.
(iii’) If B is any linear basis for X, then P ∪ B is linearly independent.
(iv) P is linearly independent, and X ∩ Span(P ) = {0}.
39 Let X and Y be linear subspaces of the vector space V , such that X ⊂ Y .
A. Prove that for every set M ⊂ V , we have the inclusion Span(M ; X) ⊂
Span(M ; Y ).
B. Prove that if a set P is linearly independent relative to Y , then P is also
linearly independent relative to X.
40 Let X be a linear subspace of the vector space V . For a set B ⊂ V prove that
the following are equivalent:
(i) B is a linear X-basis for V .
(ii) The map B 3 b 7−→ [b]X ∈ V /X is injective, and [B]X is a linear basis for
V /X.
In this situation, prove the equality
dim V /X = Card(B).
41 Let X be a linear subspace of the vector space V , and let A be a linear basis
for X. Suppose B ⊂ V has the property A ∩ B = ∅. Prove that the following are
equivalent:
(i) B is a linear X-basis for V .
(ii) A ∪ B is a linear basis for V .
Use this fact, in conjunction with Exercise ??, to prove the equality
(2) dim X + dim V /X = dim V .
42 Let W1 and W2 be linear subspaces of the vector space V . Let B ⊂ W1 be
an arbitrary subset. Prove that the following are equivalent:
(i) B is a linear (W1 ∩ W2)-basis for W1.
(ii) B is a linear W2-basis for W1 + W2.
Use this fact to prove the following “Parallelogram Law”:
(3) dim W1 + dim W2 = dim(W1 + W2) + dim(W1 ∩ W2).
Hint: Assume B satisfies condition (i). Prove first that Span(B; W2 ) = W1 + W2 . It suffices to
prove that W1 ⊂ Span(B; W2 ), which is pretty easy. Second, prove that B is linearly independent
relative to W2 . Argue that, if v = β1 b1 + · · · + βn bn ∈ W2 , then in fact we have v ∈ W1 ∩ W2 ,
which using (i) forces β1 = · · · = βn = 0. Use Exercise ?? to conclude that B is a linear W2 -basis
for W1 + W2 .
If B satisfies condition (ii), then it is clear that B is also linearly independent relative to
W1 ∩ W2 . To prove the equality Span(B; W1 ∩ W2 ) = W1 , it suffices to prove only inclusion “⊃.”
Start with some element w1 ∈ W1. Using (ii) there exist b ∈ Span(B) and w2 ∈ W2, such that w1 = b + w2. This forces w2 = w1 − b to belong to W1 ∩ W2, so w1 ∈ Span(B ∪ (W1 ∩ W2)), thus proving the desired inclusion.
To prove the equality (3), we use the above equivalence, combined with exercise ?? to get
dim W1 /(W1 ∩ W2 ) = Card(B) = dim (W1 + W2 )/W2 .
By adding dim(W1 ∩ W2 ) + dim W2 to both sides, and using exercise ??, the result follows.
2. Linear maps
Definition. Suppose V and W are k-vector spaces. A map T : V → W is
said to be k-linear, if:
• T is a group homomorphism, i.e. T (x + y) = T (x) + T (y), for all x, y ∈ V ;
• T is compatible with the scalar multiplication, i.e. T (λx) = λT (x), for all
λ ∈ k, x ∈ V .
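Example. For instance, the map T : k² → k given by T(α1, α2) = α1 + α2 is k-linear. The map S : k² → k given by S(α1, α2) = α1α2 is not: it fails both conditions, since for example S((1, 0) + (0, 1)) = 1 ≠ 0 = S(1, 0) + S(0, 1).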
43 Suppose V is a k-vector space. For a scalar µ ∈ k, we define the multiplication
map
Mµ : V ∋ v ⟼ µv ∈ V.
Prove that Mµ is k-linear. If we take µ = 1, then Mµ = IdV , the identity map on
V.
44 Let V and W be vector spaces, and let T : V → W be a linear map. For any
subset M ⊂ V , prove the equality
Span(T(M)) = T(Span(M)).
45 Prove that the composition of two linear maps is again linear. Prove that the
inverse of a bijective linear map is again linear. A bijective linear map is called a
linear isomorphism.
46 Let X be a linear subspace of the vector space V , and let T : V → W be a
linear map. Prove that the restriction T|X : X → W is again a linear map.
47 Let X be a k-linear subspace of a k-vector space V . Prove that the quotient map (defined above) V ∋ v ⟼ [v]X ∈ V /X is k-linear. Also prove that the inclusion ι : X ↪ V is linear.
48 Let T : V1 → V2 be a linear map, let X1 be a linear subspace of V1 , and let
X2 be a linear subspace of V2 . Prove the
Lemma 1.2.1. (Factorization Lemma) The following are equivalent:
(i) There exists a linear map S : V1/X1 → V2/X2 such that the diagram

    V1 −−(quotient map)−→ V1/X1
    T ↓                   ↓ S
    V2 −−(quotient map)−→ V2/X2

is commutative. (This means that, if we denote by qk : Vk → Vk/Xk, k = 1, 2, the quotient maps, then we have the equality q2 ◦ T = S ◦ q1.)
(ii) T (X1 ) ⊂ X2 .
Moreover, in this case the map S, described implicitly in (i), is unique.
49 Let (Vi)i∈I be a family of vector spaces. For each j ∈ I, denote by εj : Vj → ⊕i∈I Vi the standard inclusion, and by πj : ∏i∈I Vi → Vj the coordinate projection. Prove that εj and πj are linear maps.
50 Prove the following:
Theorem 1.2.1. (Universal Property of the direct sum) Let (Vi)i∈I be a family of vector spaces. For each j ∈ I, denote by εj : Vj → ⊕i∈I Vi the standard inclusion. Let W be a vector space, and let (Tj : Vj → W)j∈I be a collection of linear maps. Then there exists a unique linear map T : ⊕i∈I Vi → W , such that T ◦ εj = Tj, for all j ∈ I.
Hint: We know that any element v = (vi)i∈I ∈ ⊕i∈I Vi is represented as
v = Σj∈⌊v⌋ (εj ◦ πj)(v) = Σj∈⌊v⌋ εj(vj).
Define T(v) = Σj∈⌊v⌋ (Tj ◦ πj)(v) = Σj∈⌊v⌋ Tj(vj).
Corollary 1.2.1. Let V be a k-vector space and let W = (Wi)i∈I be a family of linear subspaces of V . Then there exists a unique linear map ΓW : ⊕i∈I Wi → V , such that, for every i ∈ I, we have
(ΓW ◦ εi)(w) = w, for all w ∈ Wi.
For any w = (wi)i∈I ∈ ⊕i∈I Wi, we have
ΓW(w) = Σi∈⌊w⌋ wi.
Comment. A particular case of the above result is when all the Wj's have dimension one (or zero). In this case the family W is represented by an element b = (bj)j∈J ∈ ∏j∈J V , by Wj = kbj. Following the above construction, we have a linear map denoted ΓVb : ⊕j∈J k → V , which is defined as follows. For every λ = (λj)j∈J ∈ ⊕j∈J k, we have ΓVb(λ) = Σj∈⌊λ⌋ λjbj ∈ V .
51 Let V be a k-vector space and let W = (Wj)j∈J be a family of linear subspaces of V . Let ΓW : ⊕j∈J Wj → V be the linear map defined in exercise ??.
A. Prove that ΓW is injective, if and only if W is a linearly independent family.
B. Prove that ΓW is surjective, if and only if V = Span(∪j∈J Wj).
Conclude that ΓW is an isomorphism, if and only if both conditions below are true:
(i) W is a linearly independent family;
(ii) V = Span(∪j∈J Wj).
Definition. A family W = (Wj )j∈J of linear subspaces of V , satisfying the
conditions (i) and (ii) above, is called a direct sum decomposition of V .
Comment. A particular case of the above result can be derived, along the same lines as in the comment following exercise ??. Specifically, if V is a vector space, and if we have a system b = (bj)j∈J ∈ ∏j∈J V , then
A. ΓVb : ⊕j∈J k → V is surjective, if and only if V = Span({bj : j ∈ J}).
B. ΓVb : ⊕j∈J k → V is injective, if and only if all the bj's are different and the set {bj : j ∈ J} is linearly independent.
In particular, ΓVb : ⊕j∈J k → V is a linear isomorphism, if and only if all the bj's are different and the set {bj : j ∈ J} is a linear basis for V .
52 Let (Vj)j∈J be a direct sum decomposition for a vector space V . Let W be a vector space, and assume that, for each j ∈ J, we are given a linear map Tj : Vj → W . Prove that there exists a unique linear map T : V → W , such that T|Vj = Tj, for all j ∈ J.
53 Let V and W be vector spaces, and let T : V → W be a linear map. Since T is
a group homomorphism, we know that Ker T ⊂ V and Ran T ⊂ W are subgroups.
Prove that they are in fact linear subspaces.
Comment. We know from group theory that the triviality of the kernel is
equivalent to the injectivity of the homomorphism. As a particular case, we get the
following:
• A linear map is injective, if and only if its kernel is the zero subspace.
54 Generalize the above fact as follows. Let V and W be vector spaces, let
T : V → W be a linear map, and let Z be a linear subspace of V . Prove that the
following are equivalent:
(i) The restriction T|Z : Z → W is injective.
(ii) Z ∩ Ker T = {0}.
55 Let X be a linear subspace of the vector space V , and let q : V → V /X denote
the quotient map.
A. Prove that Ker q = X. Use this to show that, given a linear subspace W of V , the restriction q|W : W → V /X is injective, if and only if W ∩ X = {0}.
B. Given a linear subspace W of V , the restriction q|W : W → V /X is surjective, if and only if W + X = V .
56* Prove the following technical result.
Lemma 1.2.2. (Descent Lemma) Let V be a vector space, and let T : V → V be a linear map. Define the subspaces W0 = {0} and
Wk = Ker(T ◦ · · · ◦ T) (k factors),
for all k ≥ 1.
(i) One has the inclusions W0 ⊂ W1 ⊂ W2 ⊂ . . . .
(ii) For every k ≥ 1, one has Wk = T−1(Wk−1).
(iii) For every k ≥ 1, there exists a unique linear map Sk : Wk+1/Wk → Wk/Wk−1 such that the diagram

    Wk+1 −−(quotient map)−→ Wk+1/Wk
    T ↓                     ↓ Sk
    Wk −−(quotient map)−→ Wk/Wk−1

is commutative.
(iv) All linear maps Sk : Wk+1 /Wk → Wk /Wk−1 are injective.
(v) Suppose there exists some `, such that W` = W`+1 . Then Wk = W` , for
all k > `.
Sketch of proof: Parts (i) and (ii) are obvious.
From (ii) we get T(Wk) ⊂ Wk−1, and then part (iii) follows from the Factorization Lemma 1.2.1.
To prove part (iv), start with some x ∈ Ker Sk . This means that x = [w]Wk , for some
w ∈ Wk+1 , and T (w) ∈ Wk−1 . This forces w ∈ Wk , so x = 0 in the quotient space Wk+1 /Wk .
This means that Ker Sk = {0}, and we are done.
To prove part (v), observe first that the given condition forces W`+1 /W` = {0}. Use then
induction, based on (iv).
57 Let V and Z be vector spaces, let W be a linear subspace of V , and let
T : W → Z be a linear map. Prove that there exists a linear map S : V → Z, such
that S|W = T .
Hint: It suffices to prove this in the particular case when Z = W and T = Id. By exercise 31, we can choose a linear subspace X of V , with X ∩ W = {0} and X + W = V . If we take q : V → V /X the quotient map, then its restriction q|W : W → V /X is a linear isomorphism. Put S = (q|W)−1 ◦ q.
58 Let V be a non-zero k-vector space, and let v ∈ V ∖ {0}. Prove the existence
of a k-linear map φ : V → k with φ(v) = 1.
Hint: Take W = Span(v) = kv, and define the map ψ : k ∋ λ ⟼ λv ∈ W . Prove that
ψ : k → W is a linear isomorphism. Take φ0 : W → k to be the inverse of ψ, and apply the
previous exercise.
59 Let V and W be vector spaces, and let T : V → W be a linear map. For a
subset M ⊂ V , prove the equality
T−1(Span(T(M))) = Span(M; Ker T).
60 Let V and W be vector spaces, and let T : V → W be a linear map. For a
subset M ⊂ V , prove that the following are equivalent:
(i) M is linearly independent relative to Ker T .
(ii) The restriction T|M : M → W is injective and the subset T(M) ⊂ W is linearly independent.
61 Let V and W be vector spaces, and let T : V → W be a linear map. Prove
that the following are equivalent:
(i) T is injective.
(ii) There exists a linear basis B for V , such that the restriction T|B : B → W is injective, and T(B) is linearly independent.
(ii') If B is any linear basis for V , then the restriction T|B : B → W is injective, and T(B) is linearly independent.
62 Let V and W be vector spaces, and let T : V → W be a linear map. Prove
that the following are equivalent:
(i) T is surjective.
(ii) There exists a linear basis B for V , such that Span(T(B)) = W .
(ii') If B is any linear basis for V , then Span(T(B)) = W .
63 Let T : V → W be a linear isomorphism from the vector space V onto the vector space W . Prove that a set B ⊂ V is a linear basis for V , if and only if T(B) is a linear basis for W . Conclude that dim V = dim W .
64 Let V and W be vector spaces, and let T : V → W be a linear map. Prove
that the following are equivalent:
(i) T is a linear isomorphism.
(ii) There exists a linear basis B for V , such that the restriction T|B : B → W is injective, and T(B) is a linear basis for W .
(ii') If B is any linear basis for V , then the restriction T|B : B → W is injective, and T(B) is a linear basis for W .
65 Let V and W be vector spaces, and let T : V → W be a linear map. Prove
the following.
Theorem 1.2.2. (Isomorphism Theorem) There exists a unique bijective linear
map T̂ : V /Ker T → Ran T such that the following diagram is commutative

    V −−(quotient map)−→ V /Ker T
    T ↓                  ↓ T̂
    W ←−(inclusion)−− Ran T

(this means that T = ι ◦ T̂ ◦ q, where ι is the inclusion, and q is the quotient map).
Use exercise ?? to conclude that
(4) dim(Ker T) + dim(Ran T) = dim V .
Hint: To get the existence and uniqueness of T̂, use the Factorization Lemma 1.2.1, with the subspaces Ker T ⊂ V and {0} ⊂ W .
66 Let V and W be finite dimensional vector spaces, with dim V = dim W , and
let T : V → W be a linear map.
A. Prove the “Fredholm alternative”
dim(Ker T ) = dim (W/Ran T ).
B. Prove that the following are equivalent:
(i) T is a linear isomorphism.
(ii) T is surjective.
(iii) T is injective.
Hint: Part A follows immediately from (4) and (2). To prove the equivalence in part B, use the
fact that for a linear subspace Y ⊂ W , the condition dim(W/Y ) = 0 is equivalent to the fact that
Y = W.
67* Give an infinite dimensional counterexample to the “Fredholm alternative.” Specifically, consider the infinite dimensional vector space V = ⊕∞n=1 k and construct a non-invertible surjective linear map T : V → V .
Notation. Suppose V and W are k-vector spaces. We denote by Link (V, W )
the set of all k-linear maps V → W .
68 Let V and W be k-vector spaces. For T, S ∈ Lin(V, W ) we define the map
T + S : V → W by
(T + S)(v) = T (v) + S(v), for all v ∈ V.
Prove that T + S is again a linear map. Prove that Lin(V, W ) is an abelian group,
when equipped with the addition operation. The neutral element is the null map
0(v) = 0.
69 Let V and W be k-vector spaces. For T ∈ Lin(V, W ) and λ ∈ k, we define
the map λT : V → W by
(λT )(v) = λT (v), for all v ∈ V.
Prove that λT is a linear map. Prove that Lin(V, W ), when equipped with this multiplication, and the addition defined above, is a k-vector space.
70 The above construction is a special case of a more general one. Start with a vector space W and a set X. Consider the set Map(X, W ) of all functions X → W . For f, g ∈ Map(X, W ) we define f + g ∈ Map(X, W ) by
(f + g)(x) = f(x) + g(x), for all x ∈ X,
and for f ∈ Map(X, W ) and λ ∈ k, we define λf ∈ Map(X, W ) by
(λf)(x) = λf(x), for all x ∈ X.
Then Map(X, W ) becomes a k-vector space. Prove that if V is a vector space, then Lin(V, W ) is a linear subspace in Map(V, W ).
Comment. The set Map(X, W ) is precisely the product ∏x∈X W . The above vector space structure on Map(X, W ) is precisely the direct product vector space structure.
71 Suppose V , W , X and Y are vector spaces, and T : X → V and S : W → Y
are linear maps. Prove that the map
Lin(V, W ) ∋ R ⟼ S ◦ R ◦ T ∈ Lin(X, Y )
is linear.
72 Let V be a k-vector space. Prove that the map
Link(k, V ) ∋ T ⟼ T(1) ∈ V
is a k-linear isomorphism.
73 Let V and W be vector spaces. Suppose Z ⊊ V is a proper linear subspace. Assume we have elements v ∈ V ∖ Z and w ∈ W . Prove that there exists a linear map T ∈ Lin(V, W ), such that T|Z = 0, and T(v) = w.
Hint: By assumption, the element v̂ = [v]Z is non-zero in the quotient space V /Z. Use exercise 58 to find a linear map φ : V /Z → k, with φ(v̂) = 1. Take σ : k → W to be the unique linear map with σ(1) = w. Now we have a composition σ ◦ φ ∈ Lin(V /Z, W ), with (σ ◦ φ)(v̂) = w. Finally, compose this map with the quotient map V → V /Z.
74 Suppose V is a vector space, and X is a finite set. Prove that
dim Map(X, V ) = Card(X) · dim V .
Hint: Use exercise ??, plus the identification Map(X, V ) = ⊕x∈X V .

75 Let V and W be vector spaces. Fix a subset X ⊂ V , and consider the map
ΣX : Lin(V, W ) ∋ T ⟼ T|X ∈ Map(X, W ).
Prove that ΣX is a linear map.
76* Use the same notations as in the previous exercise. Assume both V and W are
non-zero.
A. Prove that ΣX is injective, if and only if V = Span(X).
B. Prove that ΣX is surjective, if and only if X is linearly independent.
Conclude that ΣX is an isomorphism, if and only if X is a linear basis for V .
Hint: A. If Span(X) ⊊ V , choose an element v ∈ V ∖ Span(X), and an element w ∈ W ∖ {0}. Use exercise ?? to produce a linear map T : V → W , such that T(v) = w and T|X = 0. Such an element is non-zero, but belongs to Ker ΣX, so ΣX is not injective. Conversely, any T ∈ Ker ΣX will have the property that X ⊂ Ker T . In particular, if Span(X) = V , this will force Ker T = V , hence T = 0.
B. Assume X is not linearly independent. This means that there exist different elements x1, . . . , xn ∈ X and scalars α1, . . . , αn ∈ k ∖ {0}, such that α1x1 + · · · + αnxn = 0. Choose an element w ∈ W ∖ {0} and define the map f : X → W by f(x) = w, if x = x1, and f(x) = 0, if x ≠ x1. Prove that f does not belong to Ran ΣX, so ΣX is not surjective. Conversely, assume X is linearly independent, and let f : X → W , thought of as an element b = (f(x))x∈X ∈ ∏x∈X W . Define the linear map ΓWb : ⊕x∈X k → W , as in exercise ??. Likewise, if we put Z = Span(X), we can use the element a = (x)x∈X ∈ ∏x∈X Z, to define a linear map ΓZa : ⊕x∈X k → Z. This time, ΓZa is a linear isomorphism. If we consider the composition T = ΓWb ◦ (ΓZa)−1 ∈ Lin(Z, W ), we have managed to produce a linear map with T(x) = f(x), for all x ∈ X. We now extend T to a linear map S : V → W , which will satisfy ΣX(S) = f.
77 Suppose V and W are vector spaces.
A. Prove that the following are equivalent:
(i) dim V ≤ dim W ;
(ii) there exists an injective linear map T : V → W ;
(iii) there exists a surjective linear map S : W → V .
B. Prove that the following are equivalent:
(i) dim V = dim W ;
(ii) there exists a linear isomorphism T : V → W .
Hint: Fix A a linear basis for V , and B a linear basis for W , so that ΣA : Lin(V, W ) → Map(A, W ) and ΣB : Lin(W, V ) → Map(B, V ) are linear isomorphisms. To prove A (i) ⇒ (ii), we use the definition of cardinal inequality:
Card(A) ≤ Card(B) ⇔ ∃ f : A → B injective.
For f as above, if we take T ∈ Lin(V, W ) with the property that ΣA(T) = f, it follows that T is injective. To prove the implication (ii) ⇒ (iii), start with some injective linear map T : V → W , and consider S : Ran T → V to be the inverse of the isomorphism T : V → Ran T . Use exercise ?? to extend S to a linear map R : W → V . Since S is surjective, so is R. The implication (iii) ⇒ (i) follows from (2).
B. The implication (ii) ⇒ (i) has already been discussed. To prove the converse, use the equivalence
Card(A) = Card(B) ⇔ ∃ f : A → B bijective.
For f as above, if we take T ∈ Lin(V, W ) with the property that ΣA(T) = f, it follows that T is an isomorphism.
78 Assume V and W are vector spaces, with V finite dimensional. Prove that
(5) dim Lin(V, W ) = dim V · dim W.
As a particular case, we have
(6) dim Link(V, k) = dim V.
Definition. For a k-vector space V , the vector space Link (V, k) is called the
dual of V .
79 Let V be a k-vector space. Suppose B is a linear basis for V . For every b ∈ B,
define the map fb : B → k by fb(x) = 1, if x = b, and fb(x) = 0, if x ≠ b.
Use exercise ?? to find, for each b ∈ B, a unique element b∗ ∈ Link (V, k), with
ΣB (b∗ ) = fb . The linear map b∗ is uniquely characterized by b∗ (b) = 1, and
b∗(x) = 0, for all x ∈ B ∖ {b}.
(i) Prove that the set B∗ = {b∗ : b ∈ B} is linearly independent in Link(V, k).
(ii) Prove that the map B ∋ b ⟼ b∗ ∈ B∗ is injective, so we have Card(B) =
Card(B ∗ ).
80 Let V be a k-vector space, and let B be a linear basis for V . Using the notations from the preceding exercise, prove that the following are equivalent:
(i) B∗ is a linear basis for Link(V, k).
(ii) V is finite dimensional.
Hint: Using the isomorphism ΣB : Link(V, k) → ∏b∈B k, we have
Span(ΣB(B∗)) = Span({fb : b ∈ B}).
But when we identify Map(B, k) with ∏b∈B k, we get
Span({fb : b ∈ B}) = ⊕b∈B k.
Therefore the condition Span(B∗) = Link(V, k) is equivalent to the condition ∏b∈B k = ⊕b∈B k, which is equivalent to the condition that B is finite.
Definition. Suppose V is a finite dimensional k-vector space, and B is a linear
basis for V . Then (see the above result) the linear basis B ∗ for Link (V, k) is called
the dual basis to B. In fact the notion of the dual basis includes also the bijection
B ∋ b ⟼ b∗ ∈ B∗ as part of the data.
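Example. For instance, for V = kⁿ with the standard basis B = {e1, . . . , en} of Exercise 23, the dual basis consists of the coordinate functionals: e∗j(α1, . . . , αn) = αj, since this is the unique linear map with e∗j(ej) = 1 and e∗j(ei) = 0, for all i ≠ j.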
3. The minimal polynomial
Notations. From now on, the composition of linear maps will be written without the ◦ symbol. In particular, if V is a k-vector space, if n ≥ 2 is an integer, and if T ∈ Link(V, V ), we write Tⁿ instead of T ◦ T ◦ · · · ◦ T (n factors). Instead of Link(V, V ) we shall simply write Link(V ).
81 Suppose V is a k-vector space. The composition operation in Link (V, V ) is
obviously associative. Prove that, for every T ∈ Link (V ), the maps
Link(V ) ∋ S ⟼ ST ∈ Link(V )
Link(V ) ∋ S ⟼ TS ∈ Link(V )
are linear. (In particular, the composition product is distributive with respect to
the addition.)
Comment. The above result states that Link (V ) is a unital k-algebra. The
term “unital” refers to the existence of a multiplicative identity element. In our
case, this is the identity IdV , which from now on will be denoted by I (or IV when
we need to specify the space).
82 If dim V ≥ 2, prove that the algebra Lin(V ) is non-commutative, in the sense that there exist elements T, S ∈ Lin(V ) with TS ≠ ST. (Give an example when TS = 0, but ST ≠ 0.)
83 Let V be a vector space, and let S, T : V → V be linear maps. Suppose S
and T commute, i.e. ST = T S. Prove
(i) For any integers m, n ≥ 1, the linear maps Sᵐ and Tⁿ commute: SᵐTⁿ = TⁿSᵐ.
(ii) The Newton Binomial Formula holds:
(S + T)ⁿ = Σk=0,…,n C(n, k) Sⁿ⁻ᵏTᵏ,
where C(n, k) = n!/(k!(n − k)!) denotes the binomial coefficient.
Hints: For (i) it suffices to analyze the case n = 1; use induction on m. For (ii) use (i) and induction.
The following is a well-known result from algebra:
Theorem 1.3.1. Suppose L is a unital k-algebra. Then for any element X ∈ L,
there exists a unique unital k-algebra homomorphism ΦX : k[t] → L, such that
ΦX (t) = X.
Here k[t] stands for the algebra of polynomials in a (formal) variable t, with
coefficients in k. The unital algebra homomorphism condition means that
• ΦX is linear;
• ΦX is multiplicative, in the sense that ΦX(PQ) = ΦX(P)ΦX(Q), for all P, Q ∈ k[t];
• ΦX(1) = I.
To be more precise, the homomorphism ΦX is constructed as follows. Start with
some polynomial P (t) = α0 + α1 t + · · · + αn tn . Then
ΦX (P ) = α0 I + α1 X + · · · + αn X n .
Definition. Suppose V is a k-vector space, and X : V → V is a linear map. If we apply the above Theorem for the algebra Link(V ), and the element X, the corresponding map ΦX : k[t] → Link(V ) is called the polynomial functional calculus of X. For a polynomial P ∈ k[t], the element ΦX(P) will simply be denoted by P(X).
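Example. As a small illustration, if P(t) = t² − 3t + 2, then P(X) = X² − 3X + 2I. Note that the constant term α0 = 2 contributes the multiple 2I of the identity, not the scalar 2.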
Remark 1.3.1. If P and Q are polynomials, then P (X)Q(X) = Q(X)P (X),
simply because both are equal to (P Q)(X). This means that, although Link (V ) is
not commutative, the sub-algebra
{P (X) : P ∈ k[t]} ⊂ Link (V )
is commutative.
84 Let V be a vector space, and let S, T : V → V be linear maps. Assume S
and T commute. Prove that, for any two polynomials P, Q ∈ k[t], the linear maps
P (S) and Q(T ) commute.
85 Prove that the algebra k[t] is infinite dimensional, as a k-vector space. More
explicitly, the set {1} ∪ {tⁿ : n ≥ 1} is a linear basis for k[t].
86 Let V be a finite dimensional k-vector space. For every X ∈ Link (V ), prove
that there exists a non-zero polynomial P ∈ k[t], such that P (X) = 0.
Hint: Consider the polynomial functional calculus ΦX : k[t] → Link(V ). On the one hand, we notice that dim Link(V ) = (dim V )² < ∞ (by exercise 78). This will force dim(Ran ΦX) < ∞. On the other hand, we know that
dim(Ker ΦX) + dim(Ran ΦX) = dim(k[t]).
But since k[t] is infinite dimensional, this forces Ker ΦX to be infinite dimensional. In particular Ker ΦX ≠ {0}.
87 Suppose V is a finite dimensional k-vector space, and X ∈ Link (V ). Prove
that there exists a unique polynomial M ∈ k[t] with the following properties:
(i) M (X) = 0.
(ii) M is monic, in the sense that the leading coefficient is 1.
(iii) If P ∈ k[t] is any polynomial with P(X) = 0, then P is divisible by M .
Hint: One (conceptual) method of proving this result is to quote the property of k[t] of being a
principal ideal domain. Then Ker ΦX , being an ideal, must be presented as Ker ΦX = M · k[t],
for a unique monic polynomial M .
Here is the sketch of a “direct” proof (which in fact traces the steps used in proving that k[t] is a PID, in this particular case). Choose a (non-zero) polynomial N ∈ k[t], of minimal degree, such that N(X) = 0. Replace N with M = λN , so that M is monic (we will still have M(X) = 0). Suppose now P is a polynomial with P(X) = 0. Divide P by M with remainder, so that we have
P = M Q+R, for some polynomials Q and R, with deg R < deg M . Using the fact that M (X) = 0,
we get
R(X) = M (X)Q(X) + R(X) = P (X) = 0.
By minimality, this forces R = 0, thus proving property (iii). The uniqueness is a direct application
of (iii).
Definition. Suppose V is a finite dimensional k-vector space, and X : V → V
is a linear map. The polynomial M , described in the preceding exercise, is called
the minimal polynomial of X, and will be denoted by MX .
88* Give an infinite dimensional counterexample to exercise ??.
Hint: Take V = k[t] and define the linear map X : k[t] ∋ P(t) ⟼ tP(t) ∈ k[t]. Prove that there is no non-zero polynomial M such that M(X) = 0.
89 Let V be a finite dimensional vector space, and fix some X ∈ Lin(V ). Prove
that the following are equivalent:
• X is a scalar multiple of the identity, i.e. there exists some α ∈ k such
that X = αI.
• The minimal polynomial MX (t) is of degree one.
Definition. Suppose V is a vector space. A linear map X : V → V is said to
be nilpotent, if there exists some integer n ≥ 1, such that X n = 0.
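Example. For instance, the map X : k² ∋ (α1, α2) ⟼ (0, α1) ∈ k² is nilpotent, since X²(α1, α2) = X(0, α1) = (0, 0); as X ≠ 0, its minimal polynomial is MX(t) = t².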
90 Let V be a finite dimensional vector space, and let X ∈ Lin(V ) be nilpotent.
Prove that the minimal polynomial is of the form MX(t) = tᵖ, for some p ≤ dim V . In particular, we have X^{dim V} = 0.
Hint: Fix n such that Xⁿ = 0. Consider the linear subspaces W0 = {0} and Wk = Ker(Xᵏ), k ≥ 1. It is clear that MX(t) = tᵖ, where p is the smallest index for which Wp = V . Prove that
for every k ∈ {1, . . . , p} we have the strict inclusion Wk−1 ⊊ Wk. (Use the Descent Lemma 1.2.2.) Use then (recursively) the dimension formula for quotient spaces (2), to get
dim V = Σk=1,…,p dim(Wk/Wk−1).
Since each term in the sum is ≥ 1, this will force dim V ≥ p.
91 Let V be a vector space, and let M, N : V → V be nilpotent linear maps. If
M and N commute, then prove that M + N is again nilpotent. If X : V → V is an
arbitrary linear map, which commutes with N , then prove that XN is nilpotent.
92* Suppose V is finite dimensional, X ∈ Lin(V ), and W is an invariant linear subspace, i.e. with the property X(W) ⊂ W . Let T : V /W → V /W be the unique linear map which makes the diagram

    V −−(quotient map)−→ V /W
    X ↓                  ↓ T
    V −−(quotient map)−→ V /W

commutative (see exercise ??). Let us also consider the linear map S = X|W : W → W . Prove that
(i) The minimal polynomial MX(t) of X divides the product of minimal polynomials MS(t)MT(t).
(ii) The polynomial MX(t) is divisible by both minimal polynomials MS(t) and MT(t), hence MX(t) is divisible by the least common multiple (lcm) of MS(t) and MT(t).
Conclude that, if MS (t) and MT (t) are relatively prime, then
MX (t) = MS (t)MT (t).
Hint: Observe first that, for any polynomial P ∈ k[t], one has a commutative diagram

    V −−(quotient map)−→ V /W
    P(X) ↓               ↓ P(T)
    V −−(quotient map)−→ V /W

and the equality P(S) = P(X)|W. In particular, since MX(X) = 0, we immediately get MX(S) = 0 and MX(T) = 0, thus proving (ii). To prove (i), we only need to show that MS(X)MT(X) = 0. Start with an arbitrary element v ∈ V . Since MT(T) = 0, using the above diagram we get the equality
q([MT(X)](v)) = 0, in V /W,
which means that the element w = [MT(X)](v) belongs to W . But then, using MS(S) = 0, we get
0 = [MS(S)](w) = [MS(X)](w) = [MS(X)]([MT(X)](v)) = [MS(X)MT(X)](v).
93 The following example shows that, in general, the minimal polynomial MX(t) can differ from lcm(MS(t), MT(t)). Consider the space V = k⁴ and the linear map
X : k⁴ ∋ (α1, α2, α3, α4) ⟼ (0, α1, 0, α2) ∈ k⁴.
Take the linear subspace W = {(0, 0, λ, µ) : λ, µ ∈ k} ⊂ V . Check the equalities: MS(t) = t, MT(t) = t², and MX(t) = t³.
Hint: Prove that S = 0, T² = 0, but T ≠ 0, and X³ = 0, but X² ≠ 0.
94* Suppose V is finite dimensional, X ∈ Lin(V ), and W1, . . . , Wn are invariant subspaces, such that W1 + · · · + Wn = V . For each k = 1, . . . , n we take Mk(t) to be the minimal polynomial of X|Wk. Prove that the minimal polynomial of X is given as
MX(t) = lcm(M1(t), . . . , Mn(t)).
Hint: Use induction on n. In fact, it suffices to prove the case n = 2. As in the previous exercise, prove that MX(t) is divisible by lcm(M1(t), . . . , Mn(t)). Conversely, if we denote lcm(M1(t), . . . , Mn(t)) simply by P(t), we know that for every k we have a factorization P(t) = Qk(t)Mk(t). In particular, for every w ∈ Wk, we get
[P(X)](w) = [Qk(X)]([Mk(X)](w)) = [Qk(X)]([Mk(X|Wk)](w)) = 0.
Now we have P(X)|Wk = 0, for all k, which gives P(X) = 0, thus proving that P(t) is divisible by MX(t).
95* Let V be a finite dimensional k-vector space, and let X : V → V be a linear
map. Prove the following
Theorem 1.3.2. (Abstract Spectral Decomposition) Assume the minimal polynomial is decomposed as
(7) MX(t) = M1(t) · M2(t) · · · · · Mn(t),
such that for any j, k ∈ {1, . . . , n} with j ≠ k, we have gcd(Mj, Mk) = 1. Define the polynomials
Pj(t) = MX(t)/Mj(t), j = 1, . . . , n.
Since gcd(P1, P2, . . . , Pn) = 1, we know there exist polynomials Q1, . . . , Qn ∈ k[t], such that
(8) Q1(t)P1(t) + Q2(t)P2(t) + · · · + Qn(t)Pn(t) = 1.
Define the linear maps
Ej = Qj(X)Pj(X) ∈ Link(V ), j = 1, 2, . . . , n.
(i) The linear maps Ej, j = 1, . . . , n, are idempotents, i.e. Ej² = Ej.
(ii) For any j, k ∈ {1, . . . , n} with j ≠ k, we have EjEk = 0.
(iii) E1 + E2 + · · · + En = I.
(iv) For each j ∈ {1, . . . , n} we have the equality Ran Ej = Ker[Mj(X)].
(v) For every j, the minimal polynomial of the restriction X|Ran Ej is precisely Mj(t).
Sketch of proof: By construction, we have (iii), so in fact conditions (i) and (ii) are equivalent.
To prove (ii) start with j 6= k. Using the fact that Pj Pk is divisible by all the Mi ’s it follows that
Pj Pk is divisible by MX , so we get Pj (X)Pk (X) = 0. Using polynomial functional calculus, we
get
Ej Ek = Qj (X)Pj (X)Qk (X)Pk (X) = Qj (X)Qk (X)Pj (X)Pk (X) = 0.
To prove (iv) we first observe that since Mj (t)Pj (t) = MX (t), we have (again by functional
calculus)
Mj (X)Ej = Mj (X)Qj (X)Pj (X) = Qj (X)Mj (X)Pj (X) = Qj (X)MX (X) = 0,
which proves the inclusion Ran Ej ⊂ Ker[Mj(X)]. Conversely, since Pk(t) is divisible by Mj(t) for all k ≠ j, we see that Pk(X)|Ker[Mj(X)] = 0, which forces Ek|Ker[Mj(X)] = 0, for all k ≠ j. Using (iii) this gives Ej|Ker[Mj(X)] = IdKer[Mj(X)], thus proving the other inclusion.
To prove (v), denote for simplicity the subspaces Ran Ej by Wj. By (iii) we know that W1 + · · · + Wn = V . By exercise ?? we know that
(9) MX = lcm(M1′, . . . , Mn′),
where Mj′ is the minimal polynomial of X|Wj. By (iv) we know that Mj(X|Wj) = 0, hence Mj(t) is divisible by Mj′(t). In particular we will also have gcd(Mj′, Mk′) = 1, for all j ≠ k, so by (9) we will get
MX(t) = M1′(t)M2′(t) · · · · · Mn′(t).
This forces Mj′ = Mj, for all j.
Definition. We use the notations from the above Theorem. The idempotents E1, . . . , En are called the spectral idempotents of X, associated with the decomposition (7). The subspaces Ran Ej = Ker[Mj(X)], j = 1, . . . , n, are called the spectral subspaces of X, associated with the decomposition (7). Notice that
(i) V = Ker[M1(X)] + Ker[M2(X)] + · · · + Ker[Mn(X)].
(ii) (Ker[Mj(X)])j=1,…,n is a linearly independent family of linear subspaces.
In other words, (Ker[Mj(X)])j=1,…,n is a direct sum decomposition of V .
96 Prove the properties (i) and (ii) above.
97 The above definition carries a slight ambiguity, in the sense that the spectral idempotents Ej are defined using a particular choice of the polynomials Qj for which (8) holds. Prove that if we choose another sequence of polynomials Q′1, . . . , Q′n, such that
Q′1(t)P1(t) + Q′2(t)P2(t) + · · · + Q′n(t)Pn(t) = 1,
then we have the equalities
Q′j(X)Pj(X) = Ej, for all j = 1, . . . , n.
Therefore the spectral idempotents are unambiguously defined.
Hint: Define E′j = Q′j(X)Pj(X), j = 1, . . . , n. Use the Theorem to show Ran E′j = Ran Ej, and then use the Theorem again to show that E′j = Ej, for all j.
Another approach is to look at the spectral decomposition
V = Ker[M1 (X)] + Ker[M2 (X)] + · · · + Ker[Mn (X)],
which depends only on the factorization (7). For every element v ∈ V , there are unique elements
wj ∈ Ker[Mj (X)], j = 1, . . . , n, such that v = w1 + · · · + wn . Then Ej (v) = wj .
98 Prove the following
Theorem 1.3.3. (Uniqueness of Spectral Decomposition) Let V be a finite
dimensional vector space, let X : V → V be a linear map, and let W1 , . . . , Wn be a
collection of invariant linear subspaces. For each j = 1, . . . , n, define Mj(t) to be the minimal polynomial of X|Wj ∈ Lin(Wj). Assume that:
(i) W1 + · · · + Wn = V .
(ii) For every j, k ∈ {1, . . . , n} with j ≠ k, we have gcd(Mj, Mk) = 1.
Then the minimal polynomial decomposes as MX (t) = M1 (t) · · · · · Mn (t), and
moreover, the Wj ’s are precisely the spectral subspaces associated with this decomposition, i.e. Wj = Ker[Mj (X)], for all j = 1, . . . , n.
Hint: The decomposition of MX(t) has already been discussed (see exercise ??). It is clear that Wj ⊂ Ker[Mj(X)]. On the one hand, (Wj)j=1,…,n is a linearly independent family of linear subspaces, so
dim V = dim(W1 + · · · + Wn) = dim W1 + · · · + dim Wn.
On the other hand, we have dim Wj ≤ dim(Ker[Mj(X)]), and we also have
dim V = dim(Ker[M1(X)]) + · · · + dim(Ker[Mn(X)]).
By finiteness, this forces dim Wj = dim(Ker[Mj(X)]), so we must have Wj = Ker[Mj(X)].
99* Work again under the hypothesis of the Abstract Spectral Decomposition
Theorem. The following is an alternative construction of the spectral idempotents.
Suppose R1 , . . . , Rn are polynomials such that, for each j ∈ {1, . . . , n}, we have:
(i) Rj is divisible by Mk , for all k ∈ {1, . . . , n}, k 6= j;
(ii) 1 − Rj is divisible by Mj .
(See the hint on why such polynomials exist.) Prove that Rj (X) = Ej , for all j.
Hint: The simplest example of the existence of R1, . . . , Rn is produced by taking Rj(t) = Qj(t)Pj(t), j = 1, . . . , n.
Define Fj = Rj(X). On the one hand, for j, k ∈ {1, . . . , n} with j ≠ k, the polynomial RjRk is divisible by both Pj and Pk, hence it is divisible by lcm(Pj, Pk) = MX. In particular we get
FjFk = Rj(X)Rk(X) = (RjRk)(X) = 0.
On the other hand, for a fixed j, it is clear that Rk is divisible by Mj, for all k ≠ j. Therefore, if we consider the polynomial R(t) = R1(t) + · · · + Rn(t), we see that R − Rj is divisible by Mj. But so is 1 − Rj. This means that 1 − R is divisible by Mj. Since this is true for all j, it follows that 1 − R is divisible by lcm(M1, . . . , Mn) = MX. This forces R(X) = I, which means that F1 + · · · + Fn = I, so we must have Fj² = Fj.
Now we have a linearly independent family of linear subspaces (Ran Fj)j=1,…,n, with
(10) Ran F1 + · · · + Ran Fn = V.
Notice that MjRj is divisible by MX, so Mj(X)Fj = Mj(X)Rj(X) = 0. This means that Ran Fj ⊂ Ker[Mj(X)]. Because of (10), this forces Ran Fj = Ker[Mj(X)] = Ran Ej, which in turn forces Fj = Ej.
100 Suppose V and W are k-vector spaces, and S : V → W is a linear isomorphism. Prove that the map
Link(V ) ∋ X ⟼ SXS−1 ∈ Link(W )
is an isomorphism of k-algebras. Use this fact to prove that, for a linear map X ∈ Link(V ) one has
P(SXS−1) = SP(X)S−1, for all P ∈ k[t].
101 Suppose V , W and S are as above, and V (hence also W ) finite dimensional.
Prove that for any X ∈ Lin(V ) we have the equality of minimal polynomials
MX (t) = MSXS −1 (t).
4. Spectrum
Convention. Throughout this section we restrict ourselves to a particular
choice of the field k = C, the field of complex numbers. All vector spaces are
assumed to be over C.
This choice of the field is justified by the fact that the arithmetic in C[t] is
particularly nice, as shown by the following:
Theorem 1.4.1. (Fundamental Theorem of Algebra) For every monic polynomial P ∈ C[t] of degree ≥ 1 there exists a collection of different numbers λ1, . . . , λn ∈ C, and integers m1, . . . , mn ≥ 1, such that
P(t) = (t − λ1)^m1 · · · · · (t − λn)^mn.
The numbers λ1, . . . , λn are called the roots of P . They are precisely the solutions (in C) of the equation P(λ) = 0. The numbers m1, . . . , mn are called the multiplicities. More precisely, mk is called the multiplicity of the root λk. Remark that
deg P = m1 + · · · + mn.
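Example. As a quick illustration, the monic polynomial P(t) = t³ − t² factors as P(t) = t²(t − 1), so its roots are λ1 = 0 and λ2 = 1, with multiplicities m1 = 2 and m2 = 1; indeed deg P = 3 = 2 + 1.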
Definitions. Let V be a vector space, and X : V → V be a linear map. The
spectrum of X is defined to be the set
Spec(X) = {λ ∈ C : MX (λ) = 0}.
Suppose λ ∈ Spec(X). The multiplicity of λ, as a root of the minimal polynomial, is
called the spectral multiplicity of λ, and is denoted by mX (λ). The linear subspace
N(X, λ) = Ker[(X − λI)^mX(λ)],
is called the spectral subspace of λ. The number
dX (λ) = dim N (X, λ)
is called the spectral dimension of λ.
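Example. For a simple illustration, fix λ ∈ C and consider X : C² ∋ (α1, α2) ⟼ (λα1 + α2, λα2) ∈ C². Here X − λI sends (α1, α2) to (α2, 0), so X − λI ≠ 0 but (X − λI)² = 0, which gives MX(t) = (t − λ)². Therefore Spec(X) = {λ}, with mX(λ) = 2, N(X, λ) = C², and dX(λ) = 2.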
According to the Fundamental Theorem of Algebra, if we list the spectrum as
Spec(X) = {λ1 , . . . , λn }, with the λ’s different, we have a decomposition
(11) MX(t) = (t − λ1)^mX(λ1) · · · · · (t − λn)^mX(λn).
When we apply the Abstract Spectral Decomposition Theorem for the polynomials
Mj(t) = (t − λj)^mX(λj), j = 1, . . . , n, we get the following
Theorem 1.4.2. (Spectral Decomposition Theorem) Suppose V is a finite
dimensional vector space, and X : V → V is a linear map. List the spectrum as
Spec(X) = {λ1 , . . . , λn }, with all λ’s different.
(i) The spectral subspaces N (X, λj ), j = 1, . . . , n, form a linearly independent
family of linear subspaces.
(ii) N (X, λ1 ) + · · · + N (X, λn ) = V .
If for every j ∈ {1, . . . , n} we define Xj = X|N(X,λj) ∈ Lin(N(X, λj)), then
(iii) the minimal polynomial of Xj is (t − λj)^{mX(λj)}; in particular the spectrum
is a singleton Spec(Xj) = {λj}.
Definitions. Suppose V is a finite dimensional vector space, and X : V → V
is a linear map. The family (N(X, λ))λ∈Spec(X), which by Theorem ?? is a direct
sum decomposition of V, is called the spectral decomposition of X. The spectral
idempotents of X are the ones described in the Abstract Spectral Decomposition
Theorem (exercise ??), corresponding to (11). They are denoted by EX(λ),
λ ∈ Spec(X). These idempotents have the following properties (which uniquely
characterize them):
(i) EX(λ)EX(µ) = 0, for all λ, µ ∈ Spec(X) with λ ≠ µ;
(ii) Ran EX(λ) = N(X, λ), for all λ ∈ Spec(X).
As a consequence, we also get ∑λ∈Spec(X) EX(λ) = I.
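The spectral idempotents can also be computed concretely. The sketch below (our own illustration, using sympy's jordan_form, which returns P, J with A = PJP^{-1}) builds EA(λ) by selecting the Jordan blocks belonging to λ, and then checks the properties above:

```python
from sympy import Matrix, eye, zeros

A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 5]])
P, J = A.jordan_form()                 # A = P * J * P^{-1}

E = {}
for lam in A.eigenvals():
    D = zeros(J.rows, J.cols)
    for i in range(J.rows):
        if J[i, i] == lam:
            D[i, i] = 1                # select the coordinates of N(A, lam)
    E[lam] = P * D * P.inv()

assert E[2] * E[5] == zeros(3, 3)                  # property (i)
assert E[2] + E[5] == eye(3)                       # the idempotents sum to I
assert E[2]**2 == E[2] and A * E[2] == E[2] * A    # idempotent, commutes with A
```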
102 Let V be a finite dimensional vector space, and let X : V → V be a linear
map. Prove that, for every λ ∈ Spec(X), we have the inequality mX (λ) ≤ dX (λ).
Hint: Consider the linear map Xλ = X|N(X,λ) ∈ Lin(N(X, λ)). We know that the minimal
polynomial of Xλ is Mλ(t) = (t − λ)^{mX(λ)}. Then the linear map T = Xλ − λI : N(X, λ) → N(X, λ)
is nilpotent. Use exercise ?? to conclude that (Xλ − λI)^{dX(λ)} = 0, hence obtaining the fact that
(t − λ)^{dX(λ)} is divisible by Mλ(t).
Definition. Let V be a finite dimensional vector space, let X : V → V be a
linear map, and let Spec(X) = {λ1, . . . , λn} be a listing of its spectrum (with all
λ's different). The polynomial

HX(t) = (t − λ1)^{dX(λ1)} · · · (t − λn)^{dX(λn)}

is called the characteristic polynomial of X.
Remark 1.4.1. The above result gives the fact that the minimal polynomial
MX divides the characteristic polynomial HX .
103 Prove that deg HX = dim V. As a consequence of the above remark, we have:
Proposition 1.4.1. The degree of the minimal polynomial does not exceed
dim V .
Hint: Use the Spectral Decomposition to derive the fact that deg HX = dim V .
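For the sample matrix used earlier, one can check this with sympy (our own verification): HA(t) agrees with det(tI − A), so deg HA = dim V = 3:

```python
from sympy import Matrix, symbols, expand

t = symbols('t')
A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 5]])
# HA(t) = (t - 2)^{dA(2)} (t - 5)^{dA(5)} = (t - 2)^2 (t - 5)
assert A.charpoly(t).as_expr() == expand((t - 2)**2 * (t - 5))
```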
104 Suppose V and W are finite dimensional vector spaces and S : V → W
is a linear isomorphism. Prove that for any X ∈ Lin(V) we have the equality of
characteristic polynomials HX(t) = HSXS^{-1}(t).
Hint: Using the equality of minimal polynomials MX(t) = MSXS^{-1}(t), we know that Spec(X) =
Spec(SXS^{-1}). Prove that, for every λ ∈ Spec(X) we have

S(N(X, λ)) = N(SXS^{-1}, λ),

hence dX(λ) = dSXS^{-1}(λ).
105
Prove the following characterization of the spectrum.
Proposition 1.4.2. Let V be a finite dimensional vector space, and X : V →
V be a linear map. For a complex number λ, the following are equivalent:
(i) λ ∈ Spec(X);
(ii) the linear map X − λI is not injective;
(iii) the linear map X − λI is not surjective;
(iv) the linear map X − λI is not bijective.
Hint: The equivalence of (ii)-(iv) follows from the Fredholm alternative. If λ ∈ Spec(X), upon
writing MX(t) = P(t)(t − λ), we see that 0 = MX(X) = P(X)(X − λI), so X − λI cannot be
invertible (otherwise P(X) = 0, which is impossible since deg P < deg MX). Conversely, if λ ∉ Spec(X), then when we divide MX(t) by t − λ we get a non-zero
remainder. In other words, we get MX(t) = (t − λ)Q(t) + µ, for some polynomial Q and some
µ ≠ 0. But then, using functional calculus, we get

0 = MX(X) = (X − λI)Q(X) + µI,

which proves that the linear map Y = −µ^{-1}Q(X) satisfies (X − λI)Y = Y(X − λI) = I, so X − λI
is invertible.
The above characterization, specifically condition (ii), brings up an important concept.
Definition. A number λ is said to be an eigenvalue for the linear map X :
V → V , if there exists a non-zero element v ∈ V , with X(v) = λv. Such a v is then
called an eigenvector for X, corresponding to λ.
With this terminology, we have λ ∈ Spec(X) if and only if λ is an eigenvalue
for X. (This condition is equivalent to the fact that Ker(X − λI) ≠ {0}, which is
the same as the non-injectivity of X − λI.)
106
Prove the following:
Theorem 1.4.3. (Spectral Mapping Theorem) If V is a finite dimensional
vector space, and X : V → V is linear, then for any polynomial P ∈ C[t] one has
the equality

Spec(P(X)) = P(Spec(X)).
Sketch of proof: To prove the inclusion “⊃” start with some λ ∈ Spec(X), and use exercise
?? to get a λ-eigenvector v for X. Then show that v is a P(λ)-eigenvector for P(X), hence
P(λ) ∈ Spec(P(X)).
To prove the inclusion “⊂” start with some µ ∈ Spec(P(X)), and use the Fundamental
Theorem of Algebra to get a factorization

P(t) − µ = a(t − λ1) · · · (t − λn),

for some λ1, . . . , λn ∈ C (not necessarily different) and some constant a ≠ 0. Use functional calculus to obtain

P(X) − µI = a(X − λ1I) · · · (X − λnI).

Argue that at least one of the linear maps X − λjI, j = 1, . . . , n, is non-invertible, so that at least
one of the λj's belongs to Spec(X). For such a λj we then have µ = P(λj) ∈ P(Spec(X)).
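A quick numerical sanity check of the theorem (our own example, with the arbitrarily chosen polynomial P(t) = t^2 + 1):

```python
from sympy import Matrix, eye

A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 5]])
PA = A**2 + eye(3)                     # P(A) for P(t) = t^2 + 1
# Spec(P(A)) = P(Spec(A)) = {2^2 + 1, 5^2 + 1} = {5, 26}
assert set(PA.eigenvals()) == {lam**2 + 1 for lam in A.eigenvals()}
```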
The Spectral Mapping Theorem has numerous applications. One of them is
the following.
107 Let V be a finite dimensional vector space, and let X : V → V be a linear
map. Consider the linear map S : V → V defined by
S = ∑λ∈Spec(X) λEX(λ),
where EX (λ), λ ∈ Spec(X), are the spectral idempotents of X. Prove that
(i) The minimal polynomial of S is

MS(t) = ∏λ∈Spec(X) (t − λ).

In particular MS(t) has simple roots.
(ii) The linear map N = X − S is nilpotent.
Hints:
Let us recall (see exercise ??) that the spectral idempotents can be constructed as
EX(λ) = Rλ(X), where Rλ(t), λ ∈ Spec(X), is a family of polynomials with the following properties:
(α) Rλ(t) is divisible by (t − µ)^{mX(µ)}, for all λ, µ ∈ Spec(X) with λ ≠ µ;
(β) 1 − Rλ(t) is divisible by (t − λ)^{mX(λ)}, for all λ ∈ Spec(X).
In particular, we get

(12) Rλ(µ) = δλµ, for all λ, µ ∈ Spec(X).
Now, by functional calculus, the linear map S is given by S = P(X), where

P(t) = ∑λ∈Spec(X) λRλ(t),

and then (12) will give

(13) P(λ) = λ, for all λ ∈ Spec(X).
Using the Spectral Mapping Theorem, we get the equality Spec(S) = Spec(X). Notice that, by
construction, the subspaces Wλ = Ran EX(λ) = Ker[(X − λI)^{mX(λ)}], λ ∈ Spec(X) = Spec(S),
form a direct sum decomposition of V. It is pretty obvious that S|Wλ = λIWλ, for all λ ∈ Spec(S),
which means that the minimal polynomial of S|Wλ is Mλ(t) = t − λ. Using exercise ?? we then
have the desired form for MS(t).
If we look at N = X − S, we see that we can write N = Q(X), where Q(t) = t − P(t). Then
using (13) we have
Q(λ) = 0, for all λ ∈ Spec(X),
so by the Spectral Mapping Theorem we get Spec(N ) = {0}, so N is nilpotent.
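Concretely (our own sketch, again via sympy's Jordan decomposition A = PJP^{-1}): the semisimple part S is obtained by keeping only the diagonal of J, and N = A − S is then nilpotent and commutes with S:

```python
from sympy import Matrix, zeros

A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 5]])
P, J = A.jordan_form()
D = Matrix.diag(*[J[i, i] for i in range(J.rows)])   # diagonal part of J
S = P * D * P.inv()                                  # semisimple part
N = A - S                                            # nilpotent part

assert N**3 == zeros(3, 3) and S * N == N * S
```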
Definition. Let V be a finite dimensional vector space. A linear map S :
V → V is said to be semisimple, if the minimal polynomial MS (t) has simple roots.
The above result then states that any linear map X : V → V has a “Semisimple
+ Nilpotent” decomposition X = S + N, with both S and N representable using
functional calculus of X.
108 Let V be a finite dimensional vector space, and let X : V → V be a linear
map. Prove that the following are equivalent:
(i) X is semisimple.
(ii) There exist idempotents E1 , . . . , En , with
• EiEj = 0, for all i, j ∈ {1, . . . , n} with i ≠ j,
• E1 + · · · + En = I,
and there exist complex numbers λ1, . . . , λn ∈ C, such that

X = λ1E1 + · · · + λnEn.
(iii) There exists a linear basis for V consisting of eigenvectors for X.
Hint: Suppose X is semisimple. Use the construction given in exercise ?? to produce the
semisimple map

S = ∑λ∈Spec(X) λEX(λ).
We know that S can be written as S = P(X), where P ∈ C[t] is a certain polynomial, with
P(λ) = λ, for all λ ∈ Spec(X). In particular, the polynomial Q(t) = t − P(t) is divisible by
∏λ∈Spec(X) (t − λ) = MX(t), hence Q(X) = 0, which by functional calculus gives X = P(X) = S,
thus proving (ii).
Assume now X satisfies (ii). In particular the subspaces Wj = Ran Ej, j = 1, . . . , n form a
direct sum decomposition for V. If, for every j ∈ {1, . . . , n}, we choose a basis Bj for Wj, then
B = ⋃_{j=1}^{n} Bj is a desired linear basis (each b ∈ Bj is a λj-eigenvector for X).
Finally, assume B is a linear basis for V consisting of eigenvectors for S. This means that
we have a map B ∋ b ⟼ θ(b) ∈ C, such that

S(b) = θ(b)b, for all b ∈ B.

Put Λ = {θ(b) : b ∈ B}. For each λ ∈ Λ, we define the set Bλ = θ^{-1}({λ}), and we put
Wλ = Span(Bλ). Then it is clear that S|Wλ = λIWλ, for all λ ∈ Λ, so that the minimal
polynomial of S|Wλ is Mλ(t) = t − λ. Since B = ⋃λ∈Λ Bλ, we have ∑λ∈Λ Wλ = V, so by
exercise ?? we conclude that the minimal polynomial of S is MS(t) = ∏λ∈Λ (t − λ).
109 Let V be a finite dimensional vector space, and let S, T : V → V be
semisimple linear maps, which commute, i.e. ST = T S. Prove that S + T and ST
are again semisimple.
Hints: Use the spectral decompositions to write S = ∑λ∈Spec(S) λES(λ), with the ES(λ)'s
obtained from functional calculus of S. Likewise, we have the spectral decomposition
T = ∑µ∈Spec(T) µET(µ), with the ET(µ)'s obtained from functional calculus of T. In particular,
the ES(λ)'s commute with the ET(µ)'s. Using this fact we see that, if we define the linear maps
Fλµ = ES(λ)ET(µ), then the Fλµ's will be idempotents. Moreover, we will have
• Fλ1µ1 Fλ2µ2 = 0, if (λ1, µ1) ≠ (λ2, µ2);
• ∑λ∈Spec(S), µ∈Spec(T) Fλµ = I.
Now we have

S + T = ∑λ∈Spec(S), µ∈Spec(T) (λ + µ)Fλµ  and  ST = ∑λ∈Spec(S), µ∈Spec(T) (λµ)Fλµ,

and we can apply exercise ??.
110
The result from exercise ?? can be strengthened. Prove the following.
Theorem 1.4.4. (“Semisimple + Nilpotent” Theorem) Let V be a finite dimensional vector space, and let X : V → V be a linear map. Then there exists a
unique decomposition X = S + N, with
(i) S semisimple;
(ii) N nilpotent;
(iii) S and N commute, i.e. SN = NS.
Sketch of proof: The existence is clear from exercise ??.
To prove uniqueness, assume X = S′ + N′ is another decomposition, with properties (i)-(iii). Notice that both S′ and N′ commute with X, hence they also commute with the spectral
idempotents EX(λ), λ ∈ Spec(X). In particular, S′ and S commute, and also N′ and N
commute. Now we have

S′ − S = N − N′,

with the left-hand side semisimple (exercise ??) and the right-hand side nilpotent (exercise ??). But the
only linear map, which is both semisimple and nilpotent, is the null map. This forces S′ = S and
N′ = N.
5. The similarity problem and the spectral invariant
Definition. Let V be a vector space, and let X, Y : V → V be linear maps.
We say that X is similar to Y, if there exists a linear isomorphism S : V → V, such
that Y = SXS^{-1}.
111
Prove that “is similar to” is an equivalence relation.
The following is a fundamental question in linear algebra.
Similarity Problem. Suppose V is a finite dimensional vector space, and
X, Y : V → V are linear maps. When is X similar to Y ?
An additional problem is to construct at least one linear isomorphism S : V → V, such that Y = SXS^{-1}.
The approach to the Similarity Problem is by constructing various (computable)
invariants. More precisely, for every linear map X : V → V , we want to construct
an “object” ΘX , such that
• if X is similar to Y , then ΘX = ΘY .
If we have such a construction, we call the correspondence Lin(V) ∋ X ⟼ ΘX a
similarity invariant. Then our main goal will be to construct a “strong” similarity
invariant, in the sense that
• If ΘX = ΘY , then X is similar to Y .
In this section we will eventually construct such a strong invariant for complex
vector spaces.
Convention. All vector spaces in this section are assumed to be over C.
Examples. The following are examples of similarity invariants:
(i) the spectrum Lin(V) ∋ X ⟼ Spec(X);
(ii) the minimal polynomial Lin(V) ∋ X ⟼ MX ∈ C[t];
(iii) the characteristic polynomial Lin(V) ∋ X ⟼ HX ∈ C[t].
Note that (ii) and (iii) are stronger than (i).
112 The minimal and the characteristic polynomial are not strong enough.
Give examples of linear maps T, X ∈ Lin(C^3), and Y, Z ∈ Lin(C^2) such that
MT(t) = MX(t), HY(t) = HZ(t), but HT(t) ≠ HX(t) and MY(t) ≠ MZ(t). This
shows that T is not similar to X, although MT(t) = MX(t), and Y is not similar
to Z, although HY(t) = HZ(t).
Hint: Take T (α1 , α2 , α3 ) = (0, α2 , α3 ), and X(α1 , α2 , α3 ) = (0, 0, α3 ).
Take Y (α1 , α2 ) = (0, α1 ), Z(α1 , α2 ) = (0, 0).
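Writing the four maps from the hint as matrices in the standard bases, the claims are easy to verify with sympy (our own check):

```python
from sympy import Matrix, symbols, eye, zeros

t = symbols('t')
T = Matrix([[0, 0, 0], [0, 1, 0], [0, 0, 1]])   # T(a1, a2, a3) = (0, a2, a3)
X = Matrix([[0, 0, 0], [0, 0, 0], [0, 0, 1]])   # X(a1, a2, a3) = (0, 0, a3)
Y = Matrix([[0, 0], [1, 0]])                    # Y(a1, a2) = (0, a1)
Z = Matrix([[0, 0], [0, 0]])                    # Z(a1, a2) = (0, 0)

# t(t - 1) annihilates both T and X (equal minimal polynomials) ...
assert T * (T - eye(3)) == zeros(3, 3) and X * (X - eye(3)) == zeros(3, 3)
# ... but the characteristic polynomials t(t-1)^2 and t^2(t-1) differ:
assert T.charpoly(t) != X.charpoly(t)
# Y, Z share the characteristic polynomial t^2, but MY = t^2 while MZ = t:
assert Y.charpoly(t) == Z.charpoly(t) and Y**2 == Z and Y != Z
```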
113 Even when the minimal and characteristic polynomial are combined, i.e.
when we consider the invariant ΘX = (MX , HX ), we still do not get a strong
invariant. Consider the linear maps X, Y ∈ Lin(C^4), defined by
X(α1 , α2 , α3 , α4 ) = (0, α1 , 0, α3 )
Y (α1 , α2 , α3 , α4 ) = (0, α1 , 0, 0)
Later on we shall see why X and Y are not similar. For the moment prove that
MX(t) = MY(t) = t^2, and HX(t) = HY(t) = t^4.
Definition. Let V be a finite dimensional vector space, and let X : V → V
be a linear map. For λ ∈ Spec(X), we define the linear subspaces Nk(X, λ) =
Ker[(X − λI)^k], k ≥ 1. These subspaces are called the generalized eigenspaces of
X, corresponding to λ. More specifically, Nk(X, λ) will be called the generalized
eigenspace of order k. It will be convenient to use the convention N0(X, λ) = {0}.
114 Suppose V is a finite dimensional vector space, and X : V → V is a linear
map. Fix λ ∈ Spec(X). Prove that for any integers p > k ≥ 1 one has the inclusion

(X − λI)^k (Np(X, λ)) ⊂ Np−k(X, λ).
115 Suppose V is a finite dimensional vector space, and X : V → V is a linear
map. Fix λ ∈ Spec(X). Prove that the sequence of generalized λ-eigenspaces
exhibits the following inclusion pattern:
(14) {0} ⊊ N1(X, λ) ⊊ · · · ⊊ NmX(λ)(X, λ) = NmX(λ)+1(X, λ) = · · · = N(X, λ).

Hint: It is clear that we have the inclusions

{0} ⊊ N1(X, λ) ⊂ N2(X, λ) ⊂ · · ·

Define the maps Yk = X|Nk(X,λ) ∈ Lin(Nk(X, λ)). On the one hand, it is clear that (Yk − λI)^k = 0,
so that if we denote by Mk the minimal polynomial of Yk, we know that
(α) The polynomial Mk(t) divides the polynomial (t − λ)^k.
On the other hand, by exercise ?? we also know that the minimal polynomial Mk of Yk also
divides the minimal polynomial MX of X. Combined with (α) this gives
(β) The polynomial Mk(t) divides the polynomial (t − λ)^{min{k, mX(λ)}}.
Because Yk = Yk+1|Nk(X,λ), again by exercise ?? we get
(γ) Mk divides Mk+1, for all k ≥ 1.
Suppose now k ≥ mX(λ). Combining (β) and (γ) yields Mk(t) = MmX(λ)(t) = (t − λ)^{mX(λ)}.
In particular, we get (Yk − λI)^{mX(λ)} = 0, i.e. (X − λI)^{mX(λ)}|Nk(X,λ) = 0, forcing the inclusion
Nk(X, λ) ⊂ NmX(λ)(X, λ). This then gives in fact the equality Nk(X, λ) = NmX(λ)(X, λ) =
N(X, λ).
The next step would be to prove that, if k ∈ {1, . . . , mX(λ)}, then Nk−1(X, λ) ⊊ Nk(X, λ).
By the Descent Lemma (exercise ??), it suffices to check this only for k = mX(λ). Suppose
NmX(λ)−1(X, λ) = N(X, λ). Using (β) this would force (X − λI)^{mX(λ)−1}|N(X,λ) = 0. This is
however impossible, because the minimal polynomial of X|N(X,λ) is (t − λ)^{mX(λ)}.
Definition. Let V be a finite dimensional vector space, and let X : V → V
be a linear map. For each λ ∈ Spec(X) we define the sequence of integers
∆X(λ) = [dim N1(X, λ), dim N2(X, λ), . . . , dim NmX(λ)(X, λ)].

The family ∆X = (∆X(λ))λ∈Spec(X) is called the spectral invariant of X.
Comments. It is understood that the spectral invariant ∆X has the following
data built in it:
• The spectrum Spec(X) as the indexing set.
• The spectral multiplicities. For each λ ∈ Spec(X), the spectral multiplicity
mX(λ) is the length of the sequence ∆X(λ).
So, given X, Y ∈ Lin(V), the equality ∆X = ∆Y means:
(i) Spec(X) = Spec(Y );
(ii) mX (λ) = mY (λ), for all λ ∈ Spec(X) = Spec(Y );
(iii) dim Nk (X, λ) = dim Nk (Y, λ), for all λ ∈ Spec(X) = Spec(Y ) and all
k with 1 ≤ k ≤ mX (λ) = mY (λ).
116
Prove that the spectral invariant is a similarity invariant.
Hint: Assume Y = SXS^{-1}. We know already that the spectra and the multiplicities are equal.
Prove that for any λ ∈ Spec(X) = Spec(Y) and any integer k ≥ 1 we have S(Nk(X, λ)) =
Nk(Y, λ), so (remember that linear isomorphisms preserve dimensions) we get dim Nk(X, λ) =
dim Nk(Y, λ).
117 Prove that the linear maps X, Y ∈ Lin(C^4) defined in exercise ?? have
different spectral invariants. In particular X is not similar to Y .
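Indeed (our own computation, with X and Y written as matrices in the standard basis of C^4), the sequences dim Ker(X^k) and dim Ker(Y^k) already differ at k = 1:

```python
from sympy import Matrix

X = Matrix(4, 4, lambda i, j: 1 if (i, j) in [(1, 0), (3, 2)] else 0)
Y = Matrix(4, 4, lambda i, j: 1 if (i, j) == (1, 0) else 0)

dims_X = [4 - (X**k).rank() for k in (1, 2)]   # dim Ker(X^k), k = 1, 2
dims_Y = [4 - (Y**k).rank() for k in (1, 2)]
print(dims_X, dims_Y)                          # [2, 4] versus [3, 4]
```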
The remainder of this section is devoted to the converse of exercise ??:
Theorem 1.5.1. (Spectral Invariant Theorem) If V is a finite dimensional
vector space, and if X, Y : V → V are linear maps with ∆X = ∆Y, then X is
similar to Y.
The proof of this result is quite elaborate, so we will break it into several steps.
The first result we are concerned with is a set of restrictions satisfied by the
spectral invariant.
118 Let V be a finite dimensional vector space, and let X : V → V be a linear
map. Prove that, for each λ ∈ Spec(X), the sequence

(dim[Nk(X, λ)/Nk−1(X, λ)])_{k=1}^{mX(λ)}

is non-increasing.
Hint: Use the Descent Lemma (exercise ??) for the linear map T = X − λI.
Comments. Since we have strict inclusions Nk−1(X, λ) ⊊ Nk(X, λ), k =
1, . . . , mX(λ), by exercise ??, it follows that the terms in the above sequence also
satisfy:

dim[Nk(X, λ)/Nk−1(X, λ)] ≥ 1, for all k = 1, . . . , mX(λ).

We also know that, for each k ≥ 1, we have

dim[Nk(X, λ)/Nk−1(X, λ)] = dim Nk(X, λ) − dim Nk−1(X, λ).

So the spectral invariant exhibits the following pattern

(15) dim N1(X, λ) ≥ dim N2(X, λ) − dim N1(X, λ) ≥ · · · ≥ dim NmX(λ)(X, λ) − dim NmX(λ)−1(X, λ) ≥ 1.
119 Let V be a finite dimensional vector space, and let X : V → V be
a linear map. Fix some λ ∈ Spec(X). Prove there exists a unique sequence
[c1, c2, . . . , cmX(λ)] of non-negative integers, with cmX(λ) ≥ 1, such that for every
k = 1, . . . , mX(λ), we have:

(16) dim Nk(X, λ) = c1 + 2c2 + · · · + kck + kck+1 + · · · + kcmX(λ).
Hint: Define first the numbers

pk = dim[Nk(X, λ)/Nk−1(X, λ)] = dim Nk(X, λ) − dim Nk−1(X, λ), k = 1, . . . , mX(λ),

so that by exercise ?? we have

p1 ≥ p2 ≥ · · · ≥ pmX(λ) ≥ 1,

and

(17) dim Nk(X, λ) = p1 + p2 + · · · + pk, for all k = 1, . . . , mX(λ).

Define cmX(λ) = pmX(λ), and ck = pk − pk+1 for all k with 1 ≤ k < mX(λ). Then we will have

pk = ck + ck+1 + · · · + cmX(λ), for all k = 1, . . . , mX(λ),

and the conclusion follows from (17).
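The recipe in the hint is easy to implement. The helper below (our own sketch, for a nilpotent matrix T, i.e. λ = 0) recovers these numbers from the kernel dimensions:

```python
from sympy import Matrix

def cell_numbers(T):
    # dims[k] = dim Ker(T^k); the sequence stabilizes at k = m
    n = T.rows
    dims = [0] + [n - (T**k).rank() for k in range(1, n + 1)]
    m = next(k for k in range(1, n + 1) if dims[k] == dims[min(k + 1, n)])
    p = [dims[k] - dims[k - 1] for k in range(1, m + 1)]          # p_1 >= ... >= p_m
    return [p[k] - p[k + 1] for k in range(m - 1)] + [p[m - 1]]   # c_1, ..., c_m

# one nilpotent 2-cell (e2 -> e1 -> 0) plus two 1-cells on C^4:
T = Matrix(4, 4, lambda i, j: 1 if (i, j) == (0, 1) else 0)
print(cell_numbers(T))    # [2, 1]
```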
Definitions. Let V be a finite dimensional vector space, and let X : V → V
be a linear map. For every λ ∈ Spec(X), the numbers defined in the above exercise
will be called the λ-cell numbers of X. They will be denoted by ck(X, λ). From the
above discussion, it is clear that if one defines a system ΥX = (ΥX(λ))λ∈Spec(X),
where

ΥX(λ) = [c1(X, λ), c2(X, λ), . . . , cmX(λ)(X, λ)],

then we have constructed a new similarity invariant, called the Jordan invariant.
For each λ ∈ Spec(X), the set

⌊ΥX(λ)⌋ = {k ∈ {1, . . . , mX(λ)} : ck(X, λ) ≠ 0}

will be called the λ-cell support of X.
Comment. It is clear that the spectral invariant and the Jordan invariant are
equivalent, in the sense that
• ΥX = ΥY, if and only if ∆X = ∆Y.
The proof of the Spectral Invariant Theorem will use the Jordan invariant in an
essential way.
120 Let V be a finite dimensional vector space, and let T : V → V be a nilpotent
linear map, so that Spec(T) = {0}, and the minimal polynomial is MT(t) = t^m for
some m ≥ 1. Prove that there exists a sequence of linear subspaces

Z1 ⊂ N1(T, 0), Z2 ⊂ N2(T, 0), . . . , Zm ⊂ Nm(T, 0),

such that, for every p ∈ {1, . . . , m} we have:
(i) Zp + T(Zp+1) + · · · + T^{m−p}(Zm) + Np−1(T, 0) = Np(T, 0);
(ii) Zp ∩ [T(Zp+1) + · · · + T^{m−p}(Zm) + Np−1(T, 0)] = {0}.
Hint: Denote, for simplicity, the generalized eigenspaces Nk (T, 0) by Nk , k = 1, . . . , m. Do
reverse induction on p. Start off by choosing a linear subspace Zm ⊂ Nm such that
• Zm + Nm−1 = Nm ;
• Zm ∩ Nm−1 = {0}.
The existence of such a subspace follows from exercise ??. Suppose the spaces Zk+1 , Zk+2 , . . . , Zm
have been chosen, such that properties (i) and (ii) hold for p ∈ {k + 1, k + 2, . . . , m}. Consider
the linear subspace
Wk = T(Zk+1) + T^2(Zk+2) + · · · + T^{m−k}(Zm) + Nk−1 ⊂ Nk,
and use again exercise ?? to find a linear subspace Zk , such that
• Zk + Wk = Nk ;
• Zk ∩ Wk = {0}.
Definition. With the above notations, the sequence (Z1 , Z2 , . . . , Zm ) will be
called a cell skeleton for T .
121 Use the setting and notations from exercise ??. Prove that if (Z1 , . . . , Zm )
is a cell skeleton for T , then for every p ∈ {1, . . . , m} we have the equality
Np(T, 0) = ∑_{ℓ=1}^{p} ∑_{j=ℓ}^{m} T^{j−ℓ}(Zj).

(Here we use the convention: T^0(Zj) = Zj.)
Hint: Use property (i) and induction on p.
In exercises ??-?? below, we fix a finite dimensional vector space V and a nilpotent
linear map T : V → V. Denote by m the degree of the minimal polynomial, i.e.
we have MT(t) = t^m. For simplicity, the generalized eigenspaces Nk(T, 0) will be
denoted by Nk, k = 1, . . . , m.
122 Suppose p ∈ {1, . . . , m}, and let W ⊂ Np be a linear subspace with W ∩ Np−1 =
{0}. Suppose k is an integer with 1 ≤ k < p. Prove that:
• the linear map T^{p−k}|W : W → V is injective;
• the space T^{p−k}(W) is a linear subspace of Nk, with T^{p−k}(W) ∩ Nk−1 = {0}.
Hint: Notice that, by the construction of the generalized eigenspaces, as Nj = Ker(T^j), we have
the inclusions

(18) T^i(Nj) ⊂ Nj−i, for all i, j with 1 ≤ i < j ≤ m.

In particular, we have the inclusions

T^{p−k}(Np) ⊂ Nk and T^{p−k}(Np−1) ⊂ Nk−1.
From the first inclusion, we immediately get the inclusion T^{p−k}(W) ⊂ Nk. Using the Factorization
Lemma (exercise ??), there exists a linear map S : Np/Np−1 → Nk/Nk−1 making the square
formed by T^{p−k} and the quotient maps commutative. That is, if we denote by
qj : Nj → Nj/Nj−1, j = 1, . . . , m, the quotient maps, we have

(19) S ∘ qp = qk ∘ T^{p−k}.
It turns out that S is injective. (Indeed, if we start with some element x ∈ Np/Np−1 with
S(x) = 0, then writing x = qp(v) for some v ∈ Np, we get 0 = S(x) = S(qp(v)) =
qk(T^{p−k}(v)), which means that T^{p−k}(v) belongs to Nk−1 = Ker(T^{k−1}). This will then give
T^{p−1}(v) = T^{k−1}(T^{p−k}(v)) = 0, which means that v belongs to Ker(T^{p−1}) = Np−1. Finally this
forces x = qp(v) = 0.)
Now, since W ∩ Ker qp = W ∩ Np−1 = {0}, by exercise ?? we know that the restriction qp|W
is injective. This gives the fact that the map (S ∘ qp)|W is injective.
Applying the equality (19) to W we see that the composition (qk ∘ T^{p−k})|W is injective, so this
will force T^{p−k}|W to be injective. Moreover, the injectivity of (qk ∘ T^{p−k})|W will also force the
equality T^{p−k}(W) ∩ Ker qk = {0}, which means precisely that T^{p−k}(W) ∩ Nk−1 = {0}.
123* Let (Z1, Z2, . . . , Zm) be a cell skeleton for T. Fix some integer p with
1 ≤ p < m. Define the linear subspace

Vp = T(Zp+1) + T^2(Zp+2) + · · · + T^{m−p}(Zm).

Let Xp : Zp+1 ⊕ Zp+2 ⊕ · · · ⊕ Zm → Vp be the unique linear map with the property
that Xp|Zp+k = T^k|Zp+k, for all k = 1, 2, . . . , m − p. Prove that
(i) Xp is a linear isomorphism.
(ii) Vp ∩ Np−1 = {0}.
Hint: Use reverse induction on p. Start with p = m − 1. In this case Vm−1 = T(Zm), and
everything is obvious from exercise ??. Assume now the properties (i), (ii) are true for p = k + 1,
and let us prove them for p = k. By construction, the linear subspace Zk+1 ⊂ Nk+1 is constructed
in such a way that
(α) Zk+1 + Vk+1 + Nk = Nk+1;
(β) Zk+1 ∩ [Vk+1 + Nk] = {0}.
Using the inductive hypothesis, we know that Vk+1 ∩ Nk = {0}, which means that Vk+1 and
Nk form a linearly independent pair of subspaces. In particular, using exercise ?? and condition
(β), we get the fact that the triple (Zk+1, Vk+1, Nk) is also a linearly independent family. In
particular, the subspace Yk+1 = Zk+1 + Vk+1 has the properties:
(γ) Yk+1 ∩ Nk = {0};
(δ) the linear map Zk+1 ⊕ Vk+1 ∋ (z, v) ⟼ z + v ∈ Yk+1 is a linear isomorphism.
Using (γ) and exercise ??, we know that T|Yk+1 is injective, and T(Yk+1) ∩ Nk−1 = {0}. Observe that

T(Yk+1) = T(Zk+1) + T(Vk+1) = T(Zk+1) + T^2(Zk+2) + · · · + T^{m−k}(Zm) = Vk,

so the second fact gives property (ii) for p = k, and we also get a linear isomorphism
T|Yk+1 : Yk+1 → Vk. Combining it with (δ) and the inductive hypothesis, we
immediately obtain property (i) for p = k.
124
Let (Z1 , Z2 , . . . , Zm ) be a cell skeleton for T . Prove the equalities:
dim Zp = cp (T, 0), for all p = 1, . . . , m,
where c1 (T, 0), . . . , cm (T, 0) are the cell numbers of T .
Hint: Define Vp = T(Zp+1) + · · · + T^{m−p}(Zm) and Wp = Vp + Np−1, for all p = 1, . . . , m − 1.
Also put Wm = Nm−1. Denote dim Zk simply by ck. On the one hand, by the preceding exercise,
we know that

(20) dim Vp = dim Zp+1 + dim Zp+2 + · · · + dim Zm = cp+1 + cp+2 + · · · + cm,

for all p = 1, . . . , m − 1. On the other hand, again by the preceding exercise, we know that

(21) dim Wp = dim Vp + dim Np−1.
Recall that, by construction, the subspace Zp ⊂ Np is chosen such that
• Zp ∩ Wp = {0};
• Zp + Wp = Np .
This will give dim Np = dim Zp + dim Wp = cp + dim Wp . Using (21) and (20) we get
dim Np = cp + dim Vp + dim Np−1 = cp + cp+1 + · · · + cm + dim Np−1 .
By the construction of the cell numbers (see exercise ??) this gives the desired result.
125 Let us denote by J the cell support of T, i.e.

J = {j ∈ {1, . . . , m} : cj(T, 0) ≠ 0}.

Let (Z1, . . . , Zm) be a cell skeleton for T. Prove that the family

(T^k(Zj)) j∈J, 0≤k<j

is a direct sum decomposition of V. (Here we use the convention: T^0(Zj) = Zj.)
Hint: On the one hand, we know that

dim T^k(Zj) = dim Zj = cj(T, 0), for all j, k with 0 ≤ k < j ≤ m.

Of course, if j ∉ J, these dimensions are zero. In particular, using the properties of cell numbers
(exercise ??), we get

(22) ∑_{j∈J, 0≤k<j} dim T^k(Zj) = ∑_{j=1}^{m} ∑_{k=0}^{j−1} dim T^k(Zj) = ∑_{j=1}^{m} ∑_{k=0}^{j−1} cj(T, 0) = ∑_{j=1}^{m} j·cj(T, 0) = dm(T, 0) = dim Nm = dim V.
On the other hand, by exercise ??

V = Nm = ∑_{ℓ=1}^{m} ∑_{j=ℓ}^{m} T^{j−ℓ}(Zj),

and after making the change of indices k = j − ℓ, we get

V = ∑_{j=1}^{m} ∑_{k=0}^{j−1} T^k(Zj) = ∑_{j∈J, 0≤k<j} T^k(Zj),

and the result then follows from (22) and exercise ??.
126*
Prove the following
Theorem 1.5.2. (Spectral Invariant Theorem for Nilpotents) Let V and W be
finite dimensional vector spaces, and let T : V → V and X : W → W be nilpotent
linear maps. Assume
dim[Ker(T^k)] = dim[Ker(X^k)], for all k ≥ 1.

Then there exists a linear isomorphism S : V → W such that X = STS^{-1}.
Sketch of proof: Denote for simplicity the numbers dim[Ker(T^k)] = dim[Ker(X^k)] by dk. We
know that we have 0 < d1 < · · · < dm = dm+1 = · · ·, where m is the degree of the minimal
polynomial. In particular, we get:
• the minimal polynomials of T and X coincide: MT(t) = MX(t) = t^m;
• dim V = dim W = dm.
As a consequence of the hypothesis, we also get the equality of the cell numbers:

ck(T, 0) = ck(X, 0), for all k ∈ {1, . . . , m}.

Denote, for simplicity, these cell numbers by ck. Define the set

J = {j ∈ {1, . . . , m} : cj ≠ 0}.

Let (Z1, . . . , Zm) be a cell skeleton for T, and let (G1, . . . , Gm) be a cell skeleton for X. For each
j ∈ J, define the subspaces

(23) Vj = Zj + T(Zj) + · · · + T^{j−1}(Zj);
(24) Wj = Gj + X(Gj) + · · · + X^{j−1}(Gj).

Fix for the moment j ∈ J. We know that
(i) dim Zj = dim Gj = cj (> 0);
(ii) the families (T^k(Zj))_{k=0}^{j−1} and (X^k(Gj))_{k=0}^{j−1} are linearly independent;
(iii) the linear maps T^k : Zj → T^k(Zj) and X^k : Gj → X^k(Gj), k = 1, . . . , j − 1, are linear
isomorphisms.
Using (i), there exists a linear isomorphism Sj0 : Zj → Gj. Using (iii), for each k = 1, . . . , j − 1
there exist linear isomorphisms Sjk : T^k(Zj) → X^k(Gj) such that Sjk ∘ T^k|Zj = X^k|Gj ∘ Sj0.
(Simply define Sjk = X^k ∘ Sj0 ∘ (T^k|Zj)^{-1}.) By (ii), we know that (23) and (24)
are in fact direct sum decompositions. In particular, there exists a (unique) linear isomorphism
Sj : Vj → Wj such that

Sj|T^k(Zj) = Sjk, for all k = 0, . . . , j − 1.
It is not hard to prove that Vj is invariant for T, and Wj is invariant for X. Moreover, by
construction we will now have the commutative relation

(25) Sj ∘ (T|Vj) = (X|Wj) ∘ Sj.
Let now j vary in J. Using exercise ?? we have direct sum decompositions

V = ∑_{j∈J} Vj and W = ∑_{j∈J} Wj,

so there exists a unique linear isomorphism S : V → W such that

S|Vj = Sj, for all j ∈ J.

Using (25) we then get ST = XS, and we are done.
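The theorem can be illustrated numerically (our own example, assembling S from the two Jordan bases; this assumes sympy orders the identical Jordan blocks of both matrices in the same way, which it does here):

```python
from sympy import Matrix

A = Matrix([[0, 1, 0], [0, 0, 0], [0, 0, 0]])
B = Matrix([[0, 0, 0], [1, 0, 0], [0, 0, 0]])   # both: one 2-cell and one 1-cell

P1, J1 = A.jordan_form()
P2, J2 = B.jordan_form()
assert J1 == J2                 # equal kernel dimensions, same Jordan form
S = P2 * P1.inv()
assert B == S * A * S.inv()     # B = S A S^{-1}
```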
127*
Prove the Spectral Invariant Theorem.
Hints: Assume X, Y : V → V have the same spectral invariants. Denote the set Spec(X) =
Spec(Y) simply by Λ. We know that the minimal polynomials of X and Y coincide:

MX(t) = MY(t) = ∏λ∈Λ (t − λ)^{m(λ)}.
Put Vλ = N(X, λ) and Wλ = N(Y, λ). Use the spectral decomposition to write direct sum
decompositions

(26) V = ∑λ∈Λ Vλ = ∑λ∈Λ Wλ.
Fix for the moment λ ∈ Λ. We know that Vλ is invariant for X, and Wλ is invariant for Y.
Moreover, if we define Xλ = X|Vλ ∈ Lin(Vλ) and Yλ = Y|Wλ ∈ Lin(Wλ), then these linear maps
will have the same spectral invariants. We have Spec(Xλ) = Spec(Yλ) = {λ}, and

dk(Xλ, λ) = dk(X, λ) = dk(Y, λ) = dk(Yλ, λ).

It follows that the nilpotents Xλ − λI and Yλ − λI have the same spectral invariants. By the
Spectral Invariant Theorem for Nilpotents, there exists a linear isomorphism Sλ : Vλ → Wλ such that
Sλ(Xλ − λI) = (Yλ − λI)Sλ. This clearly gives

(27) SλXλ = YλSλ.
Let now λ vary. Using the direct sum decompositions (26), there exists a (unique) linear isomorphism S : V → V, such that

S|Vλ = Sλ, for all λ ∈ Λ.

Using (27), we immediately get SX = YS, and we are done.
CHAPTER 2
Matrices
1. Arithmetic of matrices
Definitions. Suppose k is a field, and m, n ≥ 1 are integers. A rectangular table

(28)
[ a11  a12  . . .  a1n ]
[ a21  a22  . . .  a2n ]
[  .    .           .  ]
[ am1  am2  . . .  amn ]

filled with elements of k, is called an m × n matrix with coefficients in k. The set
of all such matrices will be denoted by Matm×n(k).
A square matrix is one with m = n. For the set of all square n × n matrices,
we will use the simplified notation Matn(k).
The indexing convention for writing the coefficients will be that the first index
represents the row number, and the second index represents the column number.
So the matrix (28) can also be written as (aij)1≤i≤m, 1≤j≤n.
128 Prove that Matm×n(k) is a k-vector space, when equipped with the following
operations:

[ a11  . . .  a1n ]   [ b11  . . .  b1n ]   [ a11 + b11  . . .  a1n + b1n ]
[  .          .  ] + [  .          .  ] = [     .                 .     ]
[ am1  . . .  amn ]   [ bm1  . . .  bmn ]   [ am1 + bm1  . . .  amn + bmn ]

    [ a11  . . .  a1n ]   [ λa11  . . .  λa1n ]
λ · [  .          .  ] = [   .            .  ]
    [ am1  . . .  amn ]   [ λam1  . . .  λamn ]

The zero matrix 0m×n ∈ Matm×n(k) is the matrix filled with zeros. Prove that

dim Matm×n(k) = mn.
Matrices arise naturally in the study of linear maps between finite dimensional
vector spaces.
Definition. Suppose m, n ≥ 1 are integers. Let V be a vector space of dimension n, and let W be a vector space of dimension m. Suppose we have two
ordered¹ linear bases A = (v1, . . . , vn) for V, and B = (w1, . . . , wm) for W. Suppose a linear map T : V → W is given. For each j ∈ {1, . . . , n}, there are unique
scalars α1j , α2j , . . . , αmj such that
T (vj ) = α1j w1 + α2j w2 + · · · + αmj wm .
The m × n matrix

           [ α11  α12  . . .  α1n ]
MT^{A,B} = [ α21  α22  . . .  α2n ]
           [  .    .           .  ]
           [ αm1  αm2  . . .  αmn ]

is called the matrix of T, with respect to the pair (A, B). The map

ΩAB : Lin(V, W) ∋ T ⟼ MT^{A,B} ∈ Matm×n(k)

is called the matrix representation associated with the pair (A, B).
129
With the above notations, prove that the matrix representation
ΩAB : Lin(V, W) → Matm×n(k)
is a linear isomorphism.
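As a worked example (our own, with arbitrarily chosen bases and map): column j of the matrix consists of the coordinates of T(vj) in the basis B, which can be found by solving a linear system:

```python
from sympy import Matrix

# T : Q^2 -> Q^2, T(x, y) = (x + 2y, 3y), bases A = (v1, v2), B = (w1, w2)
v1, v2 = Matrix([1, 1]), Matrix([1, -1])
w1, w2 = Matrix([1, 0]), Matrix([1, 1])
T = lambda u: Matrix([u[0] + 2*u[1], 3*u[1]])

B = Matrix.hstack(w1, w2)
# column j of M holds the coordinates of T(vj) in the basis B:
M = Matrix.hstack(B.solve(T(v1)), B.solve(T(v2)))
print(M)    # Matrix([[0, 2], [3, -3]])
```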
Example. The simplest example of the above construction is the following.
Assume V = k, with ordered basis A = (1), and W = k^m, with the standard ordered
linear basis B = (e1, . . . , em). We have the following bijective correspondences:

(29) Lin(k, k^m) ∋ T ⟼ MT^{A,B} ∈ Matm×1(k);
(30) Lin(k, k^m) ∋ T ⟼ T(1) ∈ k^m.

Composing the correspondence (29) with the inverse of the correspondence (30),
we get a bijective correspondence k^m → Matm×1(k).
130 Prove that the bijective correspondence described above is given as

k^m ∋ (α1, . . . , αm) ⟼ the column matrix with entries α1, α2, . . . , αm (from top to bottom).
Convention. From now on we will think of k^m as identified with the collection
of m × 1 matrices. Such matrices are called column matrices.
¹ An ordered linear basis is a preferred listing of a linear basis (by means of specifying which
is the first vector, the second vector, etc.)
Comment. With the above identification, the standard ordered linear basis
(e1, . . . , em) of k^m is described as follows: ei is the column matrix whose i-th entry
is 1 and whose other entries are 0, i = 1, . . . , m.
131 Let m, n ≥ 1 be integers, and let T : k^n → k^m be a linear map. Let A =
(e1^{(n)}, . . . , en^{(n)}) be the standard ordered basis for k^n, and let B = (e1^{(m)}, . . . , em^{(m)})
be the standard ordered basis for k^m. Prove that the matrix of T with respect to
(A, B) is given by

MT^{A,B} = [ T(e1^{(n)})  T(e2^{(n)})  . . .  T(en^{(n)}) ].

That is, the columns of MT^{A,B} are precisely the column matrices T(e1^{(n)}), . . . , T(en^{(n)}).
Definition. Suppose m, n, p ≥ 1 are integers, and we have matrices X ∈
Matm×n(k), and Y ∈ Matn×p(k), say

X = (xij)1≤i≤m, 1≤j≤n and Y = (ykℓ)1≤k≤n, 1≤ℓ≤p.

Construct the matrix

Z = (zij)1≤i≤m, 1≤j≤p ∈ Matm×p(k)

by defining

zij = xi1y1j + xi2y2j + · · · + xinynj, for all i ∈ {1, . . . , m}, j ∈ {1, . . . , p}.

The matrix Z is called the product of X and Y, and is simply denoted by XY.
The following exercise explains why the product is defined in this way.
132 Let m, n, p ≥ 1 be integers, and let U, V, W be vector spaces with dim U = p,
dim V = n and dim W = m. Fix ordered linear bases A for U, B for V, and
C for W. Suppose we are given linear maps S : U → V and T : V → W. Let
X ∈ Matm×n(k) be the matrix of T with respect to (B, C), and let Y ∈ Matn×p(k)
be the matrix of S with respect to (A, B). Prove that the matrix of T ∘ S : U → W,
with respect to (A, C), is equal to the product XY.
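A quick check (our own example) that composing the maps represented by X and Y amounts to multiplying the matrices:

```python
from sympy import Matrix

X = Matrix([[1, 2], [0, 1], [3, 0]])   # represents T : k^2 -> k^3
Y = Matrix([[2, 1, 0], [1, 0, 1]])     # represents S : k^3 -> k^2

u = Matrix([1, 2, 3])
assert X * (Y * u) == (X * Y) * u      # (T o S)(u) = (XY)u
```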
Comments. The above result says that we have a commutative diagram

(31)
Lin(V, W) × Lin(U, V)  -- composition -->  Lin(U, W)
        |                                      |
   ΩBC × ΩAB                                  ΩAC
        ↓                                      ↓
Matm×n(k) × Matn×p(k)  --- product --->  Matm×p(k)

133 Use (31) to derive the following properties of the product:
A. The product is associative. That is, given matrices X ∈ Matm×n(k),
Y ∈ Matn×p(k), and Z ∈ Matp×r(k), one has the equality

(XY)Z = X(YZ).

B. Given matrices M ∈ Matm×n(k) and P ∈ Matp×r(k), the maps

Matn×p(k) ∋ X ⟼ MX ∈ Matm×p(k),
Matn×p(k) ∋ X ⟼ XP ∈ Matn×r(k)

are linear.
Definition. Given an integer n ≥ 1, we define the identity matrix

     [ 1  0  . . .  0 ]
In = [ 0  1  . . .  0 ]
     [ .        .   . ]
     [ 0  0  . . .  1 ]
134 Let V be a vector space of dimension n, and let B be an ordered linear basis
for V. Prove that ΩBB(IdV) = In.
In particular, using (31), obtain the identities

ImX = XIn = X, for all X ∈ Matm×n(k).
Notation. Let n ≥ 1 be an integer, let V be a vector space of dimension n, and
let B be an ordered linear basis for V. The linear map ΩBB : Lin(V) → Matn(k)
will be denoted by ΩB .
Comments. The above results, when applied to square matrices, give the fact
that Matn(k) is a unital k-algebra. In particular (see Section 3 in Chapter 1), for
any matrix M ∈ Matn(k) one has a polynomial functional calculus

k[t] ∋ P ⟼ P(M) ∈ Matn(k).

If V is a vector space of dimension n, and if B is an ordered linear basis for V,
then

ΩB : Lin(V) → Matn(k)

is an isomorphism of k-algebras. It is clear that, for a linear map T : V → V, we
have

ΩB(P(T)) = P(ΩB(T)), for all P ∈ k[t].
135 Let n ≥ 1 be an integer, and let T : k^n → k^n be a linear map. Let A
136 Let m, n ≥ 1 be integers, and let T : k^n → k^m be a linear map. Let
M ∈ Matm×n(k) be the matrix of T with respect to the standard ordered linear
bases. Prove that

(32) T(X) = MX, for all X ∈ k^n.

(Recall that we identify k^n with Matn×1(k).)
Definition. Let m, n ≥ 1 be integers, and let T : k^n → k^m be a linear map.
An m × n matrix M satisfying (32) is said to represent T.
137 Let m, n ≥ 1 be integers, and let T : k^n → k^m be a linear map. Prove that
the matrix M constructed in exercise ?? is the unique one that represents T.
138 Let n ≥ 1 be an integer, and let X be an n × n matrix. Prove that there
exists a unique non-zero polynomial M ∈ k[t] with the following properties:
(i) M(X) = 0.
(ii) M is monic, in the sense that the leading coefficient is 1.
(iii) If P ∈ k[t] is any polynomial with P(X) = 0, then P is divisible by M.
Hint: Use the same arguments as in exercise ??.
Definition. The polynomial described above is called the minimal polynomial
of the matrix X, and is denoted by MX .
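The defining property suggests an algorithm: look for the first monic dependence among the powers I, X, X^2, . . . The helper below is our own sketch of this idea (by Proposition 1.4.1, or the Cayley-Hamilton theorem, powers up to n suffice):

```python
from sympy import Matrix, symbols, Poly

def min_poly(X, t):
    n = X.rows
    powers = [X**k for k in range(n + 1)]     # I, X, ..., X^n
    for d in range(1, n + 1):
        # is X^d a combination c_0 I + c_1 X + ... + c_{d-1} X^{d-1} ?
        A = Matrix.hstack(*[p.reshape(n * n, 1) for p in powers[:d]])
        b = powers[d].reshape(n * n, 1)
        try:
            c, _ = A.gauss_jordan_solve(b)
        except ValueError:
            continue                          # no dependence yet
        return Poly(t**d - sum(c[k] * t**k for k in range(d)), t)

t = symbols('t')
X = Matrix([[2, 1, 0], [0, 2, 0], [0, 0, 2]])
print(min_poly(X, t))    # t**2 - 4*t + 4, i.e. (t - 2)^2
```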
139 Let n ≥ 1 be an integer, let V be a vector space of dimension n, and let B
be an ordered linear basis for V . Suppose a linear map T : V → V is given. Define
the n × n matrix X = ΩB (T ). Prove that the minimal polynomials of T and X
coincide:
MX (t) = MT (t).
Hint: Use the fact that ΩB is an isomorphism of k-algebras.
140 Let n ≥ 1 be an integer, and let A ∈ Matn(k). Define the linear map

TA : k^n ∋ X ⟼ AX ∈ k^n,

represented by A. Prove that the minimal polynomials of TA and A coincide:

MTA(t) = MA(t).

Hint: Show that the correspondence

Matn(k) ∋ A ⟼ TA ∈ Lin(k^n)

is an isomorphism of k-algebras.
Using this fact, one can derive the following.
The matrix representations of linear maps
The matrix operations
Rank
Systems of linear equations
2. Gauss-Jordan elimination
Row transformations
Gauss-Jordan reduction
invariance of rank; rank test for consistency
Applications to finding the kernel
Applications to completing a basis
3. Similarity
The similarity problem
Upper triangular forms
4. Determinants
Def. of determinant
det(AB)=det(A)det(B)
row/column expansion
linear independence via determinants
inverse of a matrix via determinants
5. A conceptual approach to determinants*
Tensor product
Tensor algebra
Exterior algebra
Meaning of determinant and its basic properties
CHAPTER 3
The Jordan canonical form
1. Characteristic polynomial
Def. of char polyn
Roots of char polyn = eigenvalues
Cayley-Hamilton thm
Multiplicity in char polyn = dim of spectral subspaces
2. Jordan basis
The algorithm
The Jordan canonical form
Spectral multiplicity diagram vs. Jordan canonical form
Conclusion: spectral multiplicity diagram is a complete invariant for the similarity problem
Jordan dec: SS+NILP
3. Applications
Entire functions
Exponentials and trigonometric functions
Holomorphic functional calculus*
Spectral mapping thm
4. Real Jordan form*
Complexification
Minimal/char real polynomial, vs. minimal/char complex polynomial
Real Jordan form
Applications
CHAPTER 4
Linear algebra on finite dimensional Hilbert spaces
1. Hilbert spaces
Inner products and norms
CBS ineq.
Orthon. basis
2. Linear maps on Hilbert spaces
The adjoint
Duality ker X*, Ran X*, etc.
The norm of a linear map
Spectral properties of the adjoint
3. Normal operators
Normal and self-adjoint maps
Positivity
The unitary group
Polar decomposition*
Generalized eig spaces for normal operators = eigenspaces
The spectral thm
4. Applications
Spectral thm for self-adjoint
Functional calculus depends only on the values on the spectrum
Square roots
Positivity via diagonal minors
Appendices
A. Results from group theory
B. The symmetric group