Math 406, Linear Algebra 3, Winter 2005
Problem Set 5, Solutions


1. Let T : F^3 → F^3 be the map given by

    M(T) = \begin{pmatrix} 3 & 9/7 & -13/7 \\ 2 & 1 & 2 \\ 3 & 1 & 1 \end{pmatrix}.
(a) Find a basis of generalized eigenvectors of T .
The eigenvalues of T are 1 and 2, with multiplicities 1 and 2. One can find these by
computing the characteristic polynomial det(λI − M(T)) = (λ − 1)(λ − 2)^2, or by finding a
basis which makes M(T) upper triangular and reading the eigenvalues and their
multiplicities off the diagonal. In any case, we can find a basis of generalized eigenvectors
by finding bases for null (T − 1I)^3 and null (T − 2I)^3.
Now

    M((T − I)^3) = \begin{pmatrix} 0 & 1/7 & -3/7 \\ 18 & 64/7 & -66/7 \\ 13 & 47/7 & -50/7 \end{pmatrix},

while

    M((T − 2I)^3) = \begin{pmatrix} 2 & 13/7 & -18/7 \\ -6 & -39/7 & 54/7 \\ -2 & -13/7 & 18/7 \end{pmatrix}.
Then null (T − I)^3 consists of all

    \begin{pmatrix} x \\ y \\ z \end{pmatrix}

such that

    \begin{pmatrix} 0 & 1/7 & -3/7 \\ 18 & 64/7 & -66/7 \\ 13 & 47/7 & -50/7 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
This amounts to augmenting

    \begin{pmatrix} 0 & 1/7 & -3/7 \\ 18 & 64/7 & -66/7 \\ 13 & 47/7 & -50/7 \end{pmatrix}

and row reducing. We get the matrix

    \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & -3 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},
so that the solutions to our equation consist of the vectors in the set

    { (−c, 3c, c) | c ∈ F }.

Thus {(−1, 3, 1)} is a basis for null (T − I)^3.
We can do the same for null (T − 2I)^3. The augmented matrix in this situation is

    \begin{pmatrix} 1 & 13/14 & -9/7 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},

so that null (T − 2I)^3 is the set
 13


 − 14 b + 97 c


 | b, c ∈ F ,
b


c
and {(−13, 14, 0), (9, 0, 7)} is a basis.
We conclude that {(−1, 3, 1), (−13, 14, 0), (9, 0, 7)} is a basis of F^3 consisting of generalized
eigenvectors of T.
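As a quick sanity check on these computations, here is a small sympy sketch (my own illustration, not part of the original solutions) that recomputes the characteristic polynomial and the two null spaces from the matrix M(T) above.

    # Recompute part (a) with exact rational arithmetic; M is M(T) from the
    # problem statement above.
    from sympy import Matrix, Rational, eye, factor

    M = Matrix([[3, Rational(9, 7), Rational(-13, 7)],
                [2, 1, 2],
                [3, 1, 1]])

    print(factor(M.charpoly().as_expr()))   # (lambda - 1)*(lambda - 2)**2

    A1 = (M - 1 * eye(3))**3                # M((T - I)^3)
    A2 = (M - 2 * eye(3))**3                # M((T - 2I)^3)

    print(A1.nullspace())                   # one vector, proportional to (-1, 3, 1)
    print(A2.nullspace())                   # two vectors, spanning the same plane as
                                            # (-13, 14, 0) and (9, 0, 7)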
(b) There is a basis of V such that the matrix of T is of the form described by theorem 8.28.
Find such a basis and give the resulting matrix.
We can find a basis which behaves in this fashion by following the construction in lemma
8.26. First we choose a basis for null (T − I), extend to a basis for null (T − I)^2, then
extend to a basis for null (T − I)^3. Repeating this process for (T − 2I) gives a basis
of the form we are looking for (according to theorem 8.28). Since dim null (T − I)^3 = 1,
there is no work to do for (T − I) (we can take {(−1, 3, 1)} as computed in part (a)).
Without too much trouble we find that a basis for null (T − 2I) is {(−1, 8, 5)} (compute
this as in part (a) by row reducing M(T − 2I)), and we can extend this to a basis for
null (T − 2I)^2 by adding any linearly independent vector from null (T − 2I)^3 (here we use
the fact that dim null (T − 2I)^3 = 2). So we choose (9, 0, 7) to be the second basis vector,
as it is clearly independent from (−1, 8, 5).
Thus the basis we are looking for is {(−1, 3, 1), (−1, 8, 5), (9, 0, 7)}. Just to check we
compute the matrix for T with respect to this basis. Note that
T (−1, 3, 1) = (−1, 3, 1)
T (−1, 8, 5) = (−2, 16, 10) = 2(−1, 8, 5)
T (9, 0, 7) = (14, 32, 34) = 4(−1, 8, 5) + 2(9, 0, 7)
(I just computed these by augmenting the matrix whose columns are our basis vectors
with the image vector, and then row reducing). Thus


    M(T) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 4 \\ 0 & 0 & 2 \end{pmatrix},
which is in the required form.
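Again as a check (my own, not from the original solutions), conjugating M(T) by the change-of-basis matrix whose columns are these basis vectors reproduces the matrix above:

    from sympy import Matrix, Rational

    M = Matrix([[3, Rational(9, 7), Rational(-13, 7)],
                [2, 1, 2],
                [3, 1, 1]])
    B = Matrix([[-1, -1, 9],
                [ 3,  8, 0],
                [ 1,  5, 7]])       # columns: (-1, 3, 1), (-1, 8, 5), (9, 0, 7)

    print(B.inv() * M * B)          # Matrix([[1, 0, 0], [0, 2, 4], [0, 0, 2]])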
8.19 Prove that if V is a complex vector space, then every invertible operator on V has a cube root.
First we show that I + N has a cube root for any nilpotent map N ∈ L(V). Because N is
nilpotent, there is m ∈ N such that N^m = 0. Then it is enough to show that we can find
a_i ∈ C such that (I + a_1 N + a_2 N^2 + · · · + a_{m−1} N^{m−1})^3 = I + N. Note that

    (I + a_1 N + a_2 N^2 + · · · + a_{m−1} N^{m−1})^3
        = I + 3a_1 N + (3a_2 + 3a_1^2) N^2 + (3a_3 + 6a_1 a_2 + a_1^3) N^3
          + · · · + (3a_{m−1} + terms involving a_1, . . . , a_{m−2}) N^{m−1}.

This sum equals I + N if we let a_1 = 1/3, a_2 = −a_1^2 = −1/9, a_3 = (−6a_1 a_2 − a_1^3)/3 = 5/81,
and so on. More precisely, the coefficient of N^i is 3a_i + terms involving a_1, . . . , a_{i−1},
which we may write as 3a_i + f_i(a_1, . . . , a_{i−1}), where f_i is some polynomial in a_1, . . . , a_{i−1}.
Then I + a_1 N + · · · + a_{m−1} N^{m−1} is a cube root of I + N if we let a_1 = 1/3 and
a_i = −f_i(a_1, . . . , a_{i−1})/3 for i = 2, . . . , m − 1.
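Here is a small sympy sketch of this recursion (my own illustration, not part of the original solutions); it solves for the a_i and confirms that the truncated series cubes to 1 + x modulo x^m, which is all that is needed once N^m = 0. The choice m = 6 is arbitrary.

    from sympy import Rational, symbols

    x = symbols('x')
    m = 6                                   # pretend N^m = 0
    a = [Rational(1)]                       # coefficient of I
    for i in range(1, m):
        p = sum(c * x**k for k, c in enumerate(a))
        c_i = (p**3).expand().coeff(x, i)   # this is f_i(a_1, ..., a_{i-1})
        a.append(((1 if i == 1 else 0) - c_i) / 3)

    print(a[1:])                            # [1/3, -1/9, 5/81, -10/243, 22/729]
    p = sum(c * x**k for k, c in enumerate(a))
    print((p**3).expand())                  # 1 + x + (terms of degree m and higher only)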
Now suppose that T ∈ L(V) is invertible, let λ_1, . . . , λ_m be the distinct eigenvalues of T, and
let U_1, . . . , U_m be the corresponding generalized eigenspaces. By theorem 8.23, (T − λ_j I)|U_j is
nilpotent, so let N_j = (T − λ_j I)|U_j. Note that λ_j ≠ 0 (see theorems 5.18 and 5.16), and thus
we may write T|U_j = λ_j (I + N_j/λ_j) for each j. Now N_j/λ_j is nilpotent (because if N_j^p = 0
then (N_j/λ_j)^p = N_j^p/λ_j^p = 0 as well), and thus (by the result above) I + N_j/λ_j has a cube
root, which we call Ŝ_j. Then the map S_j : U_j → U_j given by S_j(u) = λ_j^{1/3} Ŝ_j(u), where
λ_j^{1/3} is any fixed cube root of λ_j, is a cube root of T|U_j (because
S_j^3 = (λ_j^{1/3} Ŝ_j)^3 = λ_j Ŝ_j^3 = λ_j (I + N_j/λ_j) = T|U_j).

Now V = U_1 ⊕ · · · ⊕ U_m, so a typical vector v ∈ V is v = u_1 + · · · + u_m where u_i ∈ U_i for
i = 1, . . . , m. Thus we can define S : V → V by S(v) = S_1(u_1) + · · · + S_m(u_m). Note that S is
a well-defined linear map. It also follows that S is a cube root of T, because

    S^3(v) = S^2(S(u_1 + · · · + u_m)) = S^2(S_1(u_1) + · · · + S_m(u_m))
           = S(S(S_1(u_1) + · · · + S_m(u_m))) = S(S_1^2(u_1) + · · · + S_m^2(u_m))
           = S_1^3(u_1) + · · · + S_m^3(u_m) = T|U_1(u_1) + · · · + T|U_m(u_m)
           = T(u_1 + · · · + u_m) = T(v),

as required (we have used that the U_i are S_i-invariant).
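To illustrate the construction (the example and the sympy code below are mine, not from the original solutions), the sketch builds a block-diagonal T out of two blocks λ_j I + N_j standing in for the restrictions T|U_j, takes a cube root of each block as in the proof, and checks that the assembled S satisfies S^3 = T. The eigenvalues 8 and 2 and the block sizes are arbitrary choices; the coefficient recursion is the one from the previous sketch.

    from sympy import Matrix, Rational, eye, zeros, cbrt, diag, simplify, symbols

    x = symbols('x')

    def series_coeffs(m):
        # a_1, ..., a_{m-1} with (1 + a_1 x + ... + a_{m-1} x^{m-1})^3 = 1 + x mod x^m
        a = [Rational(1)]
        for i in range(1, m):
            p = sum(c * x**k for k, c in enumerate(a))
            c_i = (p**3).expand().coeff(x, i)
            a.append(((1 if i == 1 else 0) - c_i) / 3)
        return a

    def cube_root_block(lam, N):
        # cube root of lam*(I + N/lam) for nilpotent N, following the proof above
        m = N.rows
        a = series_coeffs(m)
        Shat = eye(m) + sum((a[i] * (N / lam)**i for i in range(1, m)), zeros(m, m))
        return cbrt(lam) * Shat

    N1 = Matrix([[0, 1, 0], [0, 0, 1], [0, 0, 0]])   # nilpotent part on U_1
    N2 = Matrix([[0, 1], [0, 0]])                    # nilpotent part on U_2
    T = diag(8 * eye(3) + N1, 2 * eye(2) + N2)       # T = T|U_1 (+) T|U_2

    S = diag(cube_root_block(8, N1), cube_root_block(2, N2))
    print((S**3 - T).applyfunc(simplify))            # the zero matrix, so S^3 = T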
8.27 Give an example of an operator on C^4 whose characteristic polynomial equals z(z − 1)^2(z − 3)
and whose minimal polynomial equals z(z − 1)(z − 3).
The first thing to try is, of course, the map T such that

    M(T) = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}.
This in fact works: the characteristic polynomial of T is clearly p(z) = z(z − 1)^2(z − 3),
but it is also true that T(T − 1I)(T − 3I) = 0. As the minimal polynomial is monic, divides
any polynomial p with p(T) = 0, and has as its roots exactly the eigenvalues of T, we know
that q(z) = z(z − 1)(z − 3) must be it.
More interesting is what happens if we introduce other entries into the matrix. Consider, for
example, the map T such that

    M(T) = \begin{pmatrix} 0 & a & b & c \\ 0 & 1 & d & e \\ 0 & 0 & 1 & f \\ 0 & 0 & 0 & 3 \end{pmatrix}.

The characteristic polynomial is still the same, but it turns out (you can work this out by
multiplying) that the minimal polynomial is q(z) if and only if d = 0. Here the minimal
polynomial is helping us discern between the two possible Jordan forms (for maps with
eigenvalues 0, 1, and 3, and multiplicities 1, 2, and 1).
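A short sympy check of both claims (my own, not part of the original solutions): the diagonal example is killed by q(z), and for the upper-triangular family every entry of T(T − I)(T − 3I) is a multiple of d, so the product vanishes exactly when d = 0.

    from sympy import Matrix, diag, eye, symbols

    T0 = diag(0, 1, 1, 3)
    print(T0 * (T0 - eye(4)) * (T0 - 3 * eye(4)))   # the zero matrix

    a, b, c, d, e, f = symbols('a b c d e f')
    T = Matrix([[0, a, b, c],
                [0, 1, d, e],
                [0, 0, 1, f],
                [0, 0, 0, 3]])
    print(T * (T - eye(4)) * (T - 3 * eye(4)))
    # Matrix([[0, 0, -2*a*d, a*d*f], [0, 0, -2*d, d*f], [0, 0, 0, 0], [0, 0, 0, 0]]),
    # which is zero if and only if d = 0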
8.30 Suppose V is a complex vector space and T ∈ L(V). Prove that there does not exist a direct
sum decomposition of V into two proper subspaces invariant under T if and only if the minimal
polynomial of T is of the form (z − λ)^{dim V} for some λ ∈ C.
Suppose that M(T) is a matrix in Jordan form. Then M(T) is block diagonal, with each block
upper triangular. Let the vectors corresponding to the columns in the ith block generate a
subspace of V which we call U_i, for i = 1, . . . , m. Then certainly V = U_1 ⊕ · · · ⊕ U_m, and it is
also true (because M(T) is block diagonal) that each U_i is T-invariant. Note that the direct sum
of two T-invariant subspaces is T-invariant, so if there is more than one block then
V = U_1 ⊕ (U_2 ⊕ · · · ⊕ U_m) is a decomposition of V into two proper T-invariant subspaces;
conversely, if such a decomposition V = W_1 ⊕ W_2 existed, then putting T|W_1 and T|W_2 into
Jordan form would give a Jordan form of T with at least two blocks. Thus there does not exist a
direct sum decomposition of V into two proper subspaces invariant under T if and only if M(T)
has Jordan form with only one block. So to finish the problem we need to show that M(T) has
Jordan form with only one block if and only if (z − λ)^{dim V} is the minimal polynomial.
Question 29 in the text is relevant here. It claims that the minimal polynomial of any nilpotent
map N is z^{p+1}, where p is the length of the longest consecutive string of ones that appears on
the diagonal directly above the main diagonal when N is in Jordan form. We will prove this
below, but in the meantime, note that if the minimal polynomial of T is (z − λ)^{dim V}, then T
has only one eigenvalue, and because T has a basis of generalized eigenvectors, T − λI is
nilpotent with minimal polynomial z^{dim V}. Thus by question 29, the length of the longest
consecutive string of ones on the off diagonal of (T − λI) in Jordan form is dim V − 1. This
implies that the matrix of (T − λI) in Jordan form can have only one block, and hence the
matrix of T in Jordan form has only one block. Conversely, suppose that M(T) has Jordan form
with only one block and hence only one eigenvalue, λ. Then the minimal polynomial of T is
(z − λ)^q for some q ≤ dim V, which implies that (T − λI)^q = 0. But M(T − λI) is a nilpotent
matrix whose Jordan form consists of a single block, so by question 29 its minimal polynomial is
z^{p+1}, where p is the length of the longest consecutive string of ones above the main diagonal;
here p = dim V − 1, so the minimal polynomial of (T − λI) is z^{dim V}. Since (T − λI)^q = 0,
this forces q ≥ dim V, and we conclude that q = dim V, as required.
I will sketch the proof of problem 29 (it is really a 206 problem). We know that any nilpotent
matrix is similar to an upper triangular matrix with zeros on the main diagonal (we may assume
that we are over C, because they ask us to use Jordan form). So if A is a nilpotent matrix in
Jordan form, its only nonzero entries are ones along the off diagonal. Note that if A is n × n
and the whole off diagonal is full of ones, then A^j ≠ 0 for 0 < j ≤ n − 1 and A^n = 0. This is
because if B = [b_1 · · · b_n] is any matrix with columns b_1, . . . , b_n, then BA = [0 b_1 · · · b_{n−1}];
that is, multiplying by A on the right shifts the columns one place to the right. Applying this
repeatedly starting from B = I shows that A^j is the identity matrix with its columns shifted
right j places, which is nonzero exactly when j ≤ n − 1. Now if A is a matrix in Jordan form,
let A_1, . . . , A_m be the blocks along the main diagonal, so that

    A = \begin{pmatrix} A_1 & & & \\ & A_2 & & \\ & & \ddots & \\ & & & A_m \end{pmatrix},

and it is easy to see that

    A^j = \begin{pmatrix} A_1^j & & & \\ & A_2^j & & \\ & & \ddots & \\ & & & A_m^j \end{pmatrix}.

We conclude that A^j = 0 when j is the size of the largest block A_t, and that A^p ≠ 0 for p < j.
As the size of the largest block is exactly one more than the length of the longest consecutive
string of ones along the off diagonal (each A_t contains an unbroken string of ones of length one
less than its size), the minimal polynomial of A is z^{p+1} with p that longest length, and we
are finished.
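The question-29 claim is easy to spot-check with sympy (again my own illustration; the block sizes below are arbitrary choices):

    from sympy import diag, zeros

    def jordan_nilpotent(*block_sizes):
        # block-diagonal nilpotent matrix with one Jordan block per given size
        blocks = []
        for n in block_sizes:
            J = zeros(n, n)
            for k in range(n - 1):
                J[k, k + 1] = 1
            blocks.append(J)
        return diag(*blocks)

    A = jordan_nilpotent(3, 5, 2)         # longest run of ones on the off diagonal: 4
    p = max(3, 5, 2) - 1                  # = 4
    print(A**p == zeros(*A.shape))        # False: A**4 is not yet zero
    print(A**(p + 1) == zeros(*A.shape))  # True: so the minimal polynomial is z**5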