Section 5.1:
Defn 1. A linear operator T : V → V on a finite-dimensional vector space V is called diagonalizable if there is an ordered basis β for V such that β[T]β is a diagonal matrix. A square matrix A is called diagonalizable if LA is diagonalizable.
Note 1. If A = B[T]B where T = LA, then finding β such that β[I]B B[T]B B[I]β = β[T]β is diagonal is the same thing as finding an invertible S such that S⁻¹AS is diagonal.
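For a concrete check (a minimal numpy sketch added here, not part of the original notes; the matrix A is an arbitrary diagonalizable example), S built from eigenvectors gives a diagonal S⁻¹AS:

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])      # arbitrary example; e-vals 5 and 2
eigvals, S = np.linalg.eig(A)   # columns of S are eigenvectors of A
D = np.linalg.inv(S) @ A @ S    # the change of basis S^{-1} A S
print(np.round(D, 10))          # diagonal, with the e-vals on the diagonal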
Defn 2. Let T be a linear operator on V. A non-zero v ∈ V is called an eigenvector of T if ∃λ ∈ F such that T(v) = λv. The scalar λ is called the eigenvalue corresponding to the eigenvector v. For A ∈ Mn(F), a vector v ∈ F^n, v ≠ 0, is called an eigenvector of A if v is an eigenvector of LA, that is, if Av = λv.
Theorem 5.1. A linear operator T on a finite-dimensional vector space V is diagonalizable if and only if there is an ordered basis β for V consisting of eigenvectors of T. Furthermore, if T is diagonalizable and β = {v1, v2, . . . , vn} is an ordered basis of eigenvectors of T, then D = β[T]β is a diagonal matrix and Dj,j is the e-val corresponding to vj, 1 ≤ j ≤ n.
Proof. If T is diagonalizable, then

β[T]β = diag(λ1, λ2, . . . , λn)

for some basis β = {v1, v2, . . . , vn}, which means that for all i, T(vi) = λi vi. So each λi is an eigenvalue corresponding to the eigenvector vi. Conversely, if there exists an ordered basis β = {v1, v2, . . . , vn} such that each vi is an e-vctr of T, then there exist λ1, λ2, . . . , λn such that T(vi) = λi vi, and so

β[T]β = diag(λ1, λ2, . . . , λn)
Example 1. Let

B[T]B = [ cos α  −sin α ]
        [ sin α   cos α ]

where B = {(1, 0), (0, 1)}. If α = π/2, T has no eigenvectors, because the linear transformation is "rotation by α" and no nonzero vector is sent to a scalar multiple of itself. But if α = π, then T does have e-vctrs, for example (1, 0) and (0, 1), each with eigenvalue −1. Yet this transformation is invertible for every α (rotate by the negative angle), so invertible and diagonalizable are not the same.
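Numerically (an illustrative numpy sketch, not part of the original notes): over R, rotation by π/2 has only complex eigenvalues, so no real e-vctrs, while rotation by π has the real e-val −1:

import numpy as np

def rotation(alpha):
    # matrix of "rotation by alpha" in the standard basis B of R^2
    return np.array([[np.cos(alpha), -np.sin(alpha)],
                     [np.sin(alpha),  np.cos(alpha)]])

print(np.linalg.eigvals(rotation(np.pi / 2)))  # 1j and -1j: no real e-vals
print(np.linalg.eigvals(rotation(np.pi)))      # -1 and -1 (up to rounding)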
Theorem 5.2. Let A ∈ Mn(F). Then λ is an e-val of A if and only if det(A − λIn) = 0.
Proof. λ is an e-val of A
⇔ ∃v ≠ 0 such that Av = λv
⇔ ∃v ≠ 0 such that Av − λv = 0
⇔ ∃v ≠ 0 such that (A − λIn)v = 0
⇔ A − λIn is not invertible
⇔ det(A − λIn) = 0.
Defn 3. For a matrix A ∈ Mn(F), f(t) = det(A − tIn) is the characteristic polynomial of A.
Defn 4. Let T be a linear operator on an n-dimensional vector space V with ordered basis β. We define the characteristic polynomial f(t) of T to be the characteristic polynomial of A = β[T]β. That is, f(t) = det(A − tIn).
Note 2. Similar matrices have the same characteristic polynomial, since if B = S⁻¹AS, then
det(B − tIn) = det(S⁻¹AS − tIn) = det(S⁻¹AS − tS⁻¹S) = det(S⁻¹(A − tIn)S)
= det(S⁻¹) det(A − tIn) det(S) = (1/det(S)) det(A − tIn) det(S) = det(A − tIn).
So characteristic polynomials are "similarity invariants". If B1 and B2 are two bases of V, then B1[T]B1 is similar to B2[T]B2, and so we see that the definition of the characteristic polynomial of T does not depend on the basis used in the representation. So we may write f(t) = det(T − tI).
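This invariance is easy to test (a hedged numpy sketch; A and S below are arbitrary choices). np.poly returns the coefficients of the monic polynomial det(tIn − A), which differs from det(A − tIn) only by the factor (−1)^n, so similar matrices give the same output:

import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
S = np.array([[1.0, 1.0],
              [1.0, 2.0]])        # any invertible S
B = np.linalg.inv(S) @ A @ S      # B is similar to A
print(np.poly(A))                 # [ 1. -5.  6.], i.e. t^2 - 5t + 6
print(np.poly(B))                 # the same coefficients, up to rounding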
Theorem 5.3. Let A ∈ Mn(F). Then
(1) the characteristic polynomial of A is a polynomial of degree n with leading coefficient (−1)^n;
(2) A has at most n distinct eigenvalues.
Proof. First we prove (1). We need a slightly stronger statement for our induction to be of use:

If B is a square n × n matrix such that for some permutation θ ∈ Sn and some subset K ⊆ [n],
(B)i,j = bi,j − t, if i ∈ K and j = θ(i),
(B)i,j = bi,j, if i ∈ K and j ≠ θ(i),
(B)i,j = bi,j, if i ∉ K,
where for all i, j ∈ [n], bi,j is a scalar, then det(B) is a polynomial in t of degree |K|. Furthermore, if |K| = n and θ = id, the leading coefficient is (−1)^n; in this case, the entries on the diagonal of B are of the form bi,i − t.

The proof is by induction on n. The base case is n = 1. If K = {1}, then B = [b1,1 − t] and det(B) = b1,1 − t, a polynomial of degree 1 = |K| with leading coefficient (−1)^1; if K = ∅, then B = [b1,1] and det(B) = b1,1, a polynomial of degree 0 = |K|.
Assume n > 1 and that the statement holds for (n − 1) × (n − 1) matrices. Suppose B satisfies the hypothesis of the statement. We compute det(B) by expanding along row 1:

det(B) = Σ_{i=1}^{n} (−1)^{1+i} (B)1,i |B(1|i)|
We see that for each i, B(1|i) is an (n − 1) × (n − 1) matrix with |K| or |K| − 1 entries of the form br,s − t.
If there exists an i ∈ [n] such that (B)1,i is of the form b1,i − t (that is, 1 ∈ K and θ(1) = i), then B(1|i) has |K| − 1 entries of the form br,s − t and satisfies the induction hypothesis, so det(B(1|i)) is a polynomial of degree |K| − 1; and for j ≠ i, (B)1,j is a scalar and B(1|j) has at most |K| − 1 entries of the form br,s − t, so (B)1,j |B(1|j)| has degree at most |K| − 1. We see in this case that det(B) is a polynomial of degree 1 + (|K| − 1) = |K|.
If additionally |K| = n and θ = id, then (B)1,1 is of the form b1,1 − t and B(1|1) has all of its diagonal entries of the form bi,i − t, so by induction the leading coefficient of det(B(1|1)) is (−1)^(n−1), and the leading coefficient of det(B) is (−1)(−1)^(n−1) = (−1)^n.
And if for all i ∈ [n], (B)1,i is a scalar b1,i (that is, 1 ∉ K), then for all i ∈ [n], B(1|i) has |K| or |K| − 1 entries of the form br,s − t and satisfies the induction hypothesis. So we see that in this case too, det(B) is a polynomial of degree |K|.
Now (2) follows from the fact from algebra that a polynomial of degree n over a field can have at most n roots.
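As an illustration of (1) and (2) (a small numpy sketch added here): multiplying the monic coefficients from np.poly by (−1)^n gives det(A − tIn) with leading coefficient (−1)^n, and np.roots returns its at most n roots:

import numpy as np

n = 3
A = np.random.rand(n, n)
monic = np.poly(A)              # coefficients of det(tI - A): degree n, leading 1
char_poly = (-1) ** n * monic   # coefficients of det(A - tI)
print(char_poly[0])             # leading coefficient (-1)^n = -1 for n = 3
print(np.roots(monic))          # the at most n eigenvalues of A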
Theorem 5.4. Let T be a linear operator on a vector space V and let λ be an e-val of T. A vector v ∈ V is an e-vctr of T corresponding to λ if and only if v ≠ 0 and v ∈ N(T − λI).
Proof. v is an e-vctr of T corresponding to λ
⇔ v ≠ 0 and T(v) = λv
⇔ v ≠ 0 and (T − λI)(v) = 0
⇔ v ≠ 0 and v ∈ N(T − λI).
Section 5.2.
Theorem 5.5. Let T be a linear operator on a vector space V and let λ1 , λ2 , . . . , λk be
distinct e-vals of T . If v1 , v2 , . . . , vk are e-vctrs of T such that for all i ∈ [k], λi corresponds
to vi , then {v1 , v2 , . . . , vk } is a linearly independent set.
Proof. The proof is by induction on k.
Let k = 1. {v1 } is a linearly independent set.
Assume k > 1 and the theorem holds for k − 1 distinct e-vals and e-vctrs.
Now suppose λ1, λ2, . . . , λk are distinct e-vals of T and v1, v2, . . . , vk are e-vctrs of T such that for all i ∈ [k], λi corresponds to vi.
We wish to show {v1 , v2 , . . . , vk } is a linearly independent set.
Let
a1 v1 + a2 v2 + · · · + ak vk = 0 (1)
for some scalars a1 , . . . , ak .
Applying T − λkI to both sides of the equation, we obtain
(T − λkI)(a1v1 + a2v2 + · · · + akvk) = 0
a1(T − λkI)(v1) + a2(T − λkI)(v2) + · · · + ak(T − λkI)(vk) = 0 (2)
∀i ∈ [k − 1],
ai (T − λk I)(vi ) = ai (T (vi ) − λk I(vi )) = ai (λi vi − λk vi ) = ai (λi − λk )vi
and
ak(T − λkI)(vk) = ak(T(vk) − λkI(vk)) = ak(λkvk − λkvk) = 0.
So (2) becomes
a1(λ1 − λk)v1 + a2(λ2 − λk)v2 + · · · + ak−1(λk−1 − λk)vk−1 = 0
By induction, {v1, v2, . . . , vk−1} is a linearly independent set, and so for all i ∈ [k − 1], ai(λi − λk) = 0. But since λi − λk ≠ 0 (the e-vals are distinct), it must be that ai = 0.
Now looking back at equation (1), we have that ak vk = 0. But since vk is not the zero
vector, it must be that ak = 0 as well.
Therefore, {v1 , v2 , . . . , vk } is a linearly independent set.
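Numerically (an illustrative numpy sketch; the upper-triangular matrix below is an arbitrary example with distinct e-vals): eigenvectors for distinct e-vals stack into a full-rank, hence linearly independent, set of columns:

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])    # distinct e-vals 2, 3, 5
eigvals, V = np.linalg.eig(A)      # column i of V is an e-vctr for eigvals[i]
print(np.linalg.matrix_rank(V))    # 3: the three e-vctrs are linearly independent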
Cor 1. Let T be a linear operator on an n-dimensional vector space V. If T has n distinct e-vals, then T is diagonalizable.
Proof. Suppose λ1, λ2, . . . , λn are distinct e-vals of T with corresponding e-vctrs v1, v2, . . . , vn. By Theorem 5.5, {v1, v2, . . . , vn} is a linearly independent set of size n = dim(V), hence an ordered basis of V consisting of e-vctrs. By Theorem 5.1, T is diagonalizable.
Defn 5. A polynomial f (t) in P (F ) splits over F if there are scalars c, a1 , . . . , an such that
f (t) = c(t − a1 )(t − a2 ) · · · (t − an ).
Theorem 5.6. The characteristic polynomial of any diagonalizable linear operator splits.
Proof. Let T be a diagonalizable linear operator on V, and suppose β is a basis of V such that D = β[T]β is diagonal:

D = diag(λ1, λ2, . . . , λn).

If f(t) is the characteristic polynomial of T, then

f(t) = det(D − tI) = det(diag(λ1 − t, λ2 − t, . . . , λn − t)) = (λ1 − t)(λ2 − t) · · · (λn − t),

which splits, with c = (−1)^n and ai = λi in the notation of Defn 5.
Defn 6. Let λ be an e-val of a linear operator (or matrix) with characteristic polynomial
f (t). The algebraic multiplicity (or just multiplicity) of λ is the largest positive integer k for
which (t − λ)k is a factor of f (t).
Defn 7. Let T be a linear operator on a vector space V and let λ be an e-val of T. Define Eλ = {x ∈ V | T(x) = λx} = N(T − λI). The set Eλ is called the eigenspace of T corresponding to λ. The eigenspace of a matrix A ∈ Mn(F) corresponding to λ is the eigenspace of LA.
Fact 1. Eλ is a subspace.
Proof. Let a ∈ F and x, y ∈ Eλ. Then T(ax + y) = aT(x) + T(y) = aλx + λy = λ(ax + y), so ax + y ∈ Eλ. Since also 0 ∈ Eλ, Eλ is a subspace.
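Since Eλ = N(T − λI), eigenspaces can be computed as null spaces (an illustrative scipy sketch added here; scipy.linalg.null_space returns an orthonormal basis of the kernel):

import numpy as np
from scipy.linalg import null_space

A = np.array([[3.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])
lam = 3.0
E = null_space(A - lam * np.eye(3))   # basis of E_lambda = N(A - lam*I)
print(E.shape[1])                     # dim(E_lambda) = 2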
Theorem 5.7. Let T be a linear operator on a finite dimensional vector space V , and let λ
be an e-val of T having multiplicity m. Then 1 ≤ dim(Eλ ) ≤ m.
Proof. Let {v1, v2, . . . , vp} be a basis of Eλ. Extend it to a basis β = {v1, v2, . . . , vp, vp+1, . . . , vn} of V, and let A = [T]β. Since ∀i ≤ p, T(vi) = λvi, A has the block form

A = [ λIp  B ]
    [ O    C ]

So

A − tIn = [ (λ − t)Ip  B          ]
          [ O          C − tIn−p  ]

Expanding along the first column p times, we see that det(A − tIn) = (λ − t)^p det(C − tIn−p) = (λ − t)^p q(t). So the multiplicity m of λ is at least p = dim(Eλ). And since λ is an e-val of T, Eλ contains a nonzero vector, so dim(Eλ) ≥ 1. Thus 1 ≤ dim(Eλ) ≤ m.
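A Jordan block shows that dim(Eλ) can be strictly smaller than the multiplicity (a small numpy sketch added here):

import numpy as np

J = np.array([[2.0, 1.0],
              [0.0, 2.0]])     # characteristic polynomial (2 - t)^2: multiplicity 2
lam = 2.0
nullity = 2 - np.linalg.matrix_rank(J - lam * np.eye(2))
print(nullity)                 # dim(E_lambda) = 1 < 2 = multiplicity of lambda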
Lemma 1. Let T be a linear operator on a vector space V and let λ1, λ2, . . . , λk be distinct e-vals of T. For each i = 1, 2, . . . , k, let vi ∈ Eλi. If v1 + v2 + · · · + vk = 0, then ∀i ∈ [k], vi = 0.
Proof. Renumbering if necessary, suppose vi ≠ 0 for 1 ≤ i ≤ p and vi = 0 for p + 1 ≤ i ≤ k. If p ≥ 1, then v1 + v2 + · · · + vp = 0 is a nontrivial dependence among e-vctrs corresponding to distinct e-vals, which contradicts Theorem 5.5.
Thus p = 0, that is, ∀i ∈ [k], vi = 0.
Theorem 5.8. Let T be a linear operator on a vector space V and let λ1 , λ2 , . . . , λk be
distinct e-vals of T . For each i = 1, 2, . . . , k, let Si ⊆ Eλi , be a finite linearly independent
set. Then S = S1 ∪ S2 ∪ · · · ∪ Sk is a linearly independent subset of V .
Proof. For each i, suppose Si = {vi,1, vi,2, . . . , vi,ni}, so that S = {vi,j : 1 ≤ i ≤ k, 1 ≤ j ≤ ni}. Suppose there exist scalars {ai,j} such that

Σ_{i=1}^{k} Σ_{j=1}^{ni} ai,j vi,j = 0.

For each i, let wi = Σ_{j=1}^{ni} ai,j vi,j. Then wi ∈ Eλi for all i, and w1 + w2 + · · · + wk = 0.
So, by the lemma, wi = 0, ∀i.
But each Si is linearly independent, so for all i and j, ai,j = 0.
Theorem 5.9. Let T be a linear operator on a finite dimensional vector space V such that
the characteristic polynomial of T splits. Let λ1 , λ2 , . . . , λk be distinct e-vals of T . Then
(1) T is diagonalizable if and only if the multiplicity of λi is equal to dim(Eλi ), ∀i.
(2) If T is diagonalizable and βi is an ordered basis for Eλi , for each i, then β = β1 ∪
β2 ∪ · · · ∪ βk is an ordered basis for V consisting of eigenvectors of T .
Proof. For all i ∈ [k], let mi be the multiplicity of λi, let di = dim(Eλi), and let n = dim(V).
We show (2) first. Suppose T is diagonalizable, and let βi be an ordered basis for Eλi, ∀i ∈ [k]. By Theorem 5.8, β = β1 ∪ β2 ∪ · · · ∪ βk is a linearly independent set. By Theorem 5.1, there is a basis γ of V consisting of eigenvectors of T. Let x ∈ V. Then x ∈ Span(γ), so x = a1v1 + a2v2 + · · · + anvn where v1, v2, . . . , vn are eigenvectors of T. Each vi lies in some eigenspace Eλj = Span(βj), j ∈ [k], so each vi can be expressed as a linear combination of vectors in βj. Thus x ∈ Span(β), and we have Span(β) = V. Hence β is an ordered basis of V consisting of eigenvectors of T.
Now we show (1).
(⇒:) We know di ≤ mi, ∀i, by Theorem 5.7. Since by (2) β is a basis of V, and since f(t) splits with degree n,
n = d1 + d2 + · · · + dk ≤ m1 + m2 + · · · + mk = n.
Thus, by the "squeeze principle", we have
d1 + d2 + · · · + dk = m1 + m2 + · · · + mk
and so
(m1 − d1) + (m2 − d2) + · · · + (mk − dk) = 0.
But ∀i ∈ [k], mi − di ≥ 0, and so mi = di.
(⇐:) Suppose ∀i, mi = di. We know m1 + m2 + · · · + mk = n, since the characteristic polynomial of T splits and, by Theorem 5.3, f(t) has degree n. Thus d1 + d2 + · · · + dk = n, and if ∀i, βi is an ordered basis for Eλi, then by Theorem 5.8, β = β1 ∪ β2 ∪ · · · ∪ βk is linearly independent. And, since |β| = n, by Corollary 2(b) to Theorem 1.10, β is a basis of V. Then by Theorem 5.1, T is diagonalizable.
Note 3. Test for diagonalization. A linear operator T on a vector space V of dimension n is diagonalizable if and only if both of the following hold.
(1) The characteristic polynomial of T splits.
(2) For each eigenvalue λ of T, the multiplicity of λ equals the dimension of Eλ.
Notice that Eλ = {x | (T − λI)(x) = 0} = N(T − λI) and n = nullity(T − λI) + rank(T − λI). So, dim(Eλ) = nullity(T − λI) = n − rank(T − λI).
Proof. Assume T is diagonalizable. By Theorem 5.6, the characteristic polynomial of T splits, and by Theorem 5.9, for each eigenvalue λ of T, the multiplicity of λ equals the dimension of Eλ.
Conversely, if (1) and (2) hold, then by Theorem 5.9 again, T is diagonalizable.
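Over C every characteristic polynomial splits, so condition (2) alone can be tested numerically (a hedged numpy sketch; the helper name is_diagonalizable and the rounding-based grouping of e-vals are choices made here, and the tolerance handling is fragile for ill-conditioned matrices):

import numpy as np

def is_diagonalizable(A, tol=1e-6):
    # Test, over C: for each e-val, multiplicity == n - rank(A - lam*I).
    n = A.shape[0]
    eigvals = np.linalg.eigvals(A)
    for lam in np.unique(np.round(eigvals, 6)):
        mult = int(np.sum(np.isclose(eigvals, lam, atol=tol)))   # algebraic multiplicity
        nullity = n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
        if mult != nullity:
            return False
    return True

print(is_diagonalizable(np.array([[2.0, 1.0], [0.0, 2.0]])))  # False: a Jordan block
print(is_diagonalizable(np.array([[2.0, 0.0], [0.0, 3.0]])))  # True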
Defn 8. Let W1, W2, . . . , Wk be subspaces of a vector space V. Their sum is

Σ_{i=1}^{k} Wi = {v1 + v2 + · · · + vk : vi ∈ Wi, ∀i ∈ [k]}
Fact 2. The sum is a subspace.
Defn 9. Let W1, W2, . . . , Wk be subspaces of a vector space V. We call V the direct sum of W1, W2, . . . , Wk, written
V = W1 ⊕ W2 ⊕ · · · ⊕ Wk,
if V is the sum of W1, W2, . . . , Wk and ∀j ∈ [k], Wj ∩ Σ_{i≠j} Wi = {0}.
Example 2. Let V = R^4 and
W1 = {(a, b, 0, 0) | a, b ∈ R}
W2 = {(0, 0, c, 0) | c ∈ R}
W3 = {(0, 0, 0, d) | d ∈ R}
Then V = W1 ⊕ W2 ⊕ W3.
Theorem 5.10. Let W1, W2, . . . , Wk be subspaces of a finite-dimensional vector space V. The following are equivalent.
(1) V = W1 ⊕ W2 ⊕ · · · ⊕ Wk.
(2) V = Σ_{i=1}^{k} Wi and, whenever vi ∈ Wi for all i ∈ [k] and v1 + v2 + · · · + vk = 0, then vi = 0, ∀i.
(3) Each vector v ∈ V can be written uniquely as v = v1 + v2 + · · · + vk, where ∀i ∈ [k], vi ∈ Wi.
(4) If ∀i ∈ [k], γi is an ordered basis for Wi, then γ1 ∪ γ2 ∪ · · · ∪ γk is an ordered basis for V.
(5) For each i ∈ [k] there is an ordered basis γi for Wi such that γ1 ∪ γ2 ∪ · · · ∪ γk is an ordered basis for V.
Proof. (1) ⇒ (2): Suppose for i ∈ [k], vi ∈ Wi and v1 + v2 + · · · + vk = 0. Let 1 ≤ i ≤ k. Then vi = −Σ_{j≠i} vj ∈ Wi ∩ Σ_{j≠i} Wj. Since by (1), V = W1 ⊕ W2 ⊕ · · · ⊕ Wk, this intersection is {0}, and so vi = 0.
(2) ⇒ (3): By (2), V = Σ_{i=1}^{k} Wi, so each vector v ∈ V can be written as v = v1 + v2 + · · · + vk, where ∀i ∈ [k], vi ∈ Wi. To show uniqueness, suppose that v = v1 + v2 + · · · + vk and v = w1 + w2 + · · · + wk, where ∀i ∈ [k], vi, wi ∈ Wi.
Then we have
v1 + v2 + · · · + vk = w1 + w2 + · · · + wk,
so
(v1 − w1) + (v2 − w2) + · · · + (vk − wk) = 0.
For each i ∈ [k], vi − wi ∈ Wi, so by (2), vi − wi = 0 and we have that vi = wi.
(3) ⇒ (4): Let i ∈ [k] and γi = {wi,1, wi,2, . . . , wi,ni} be an ordered basis for Wi. By (3), we know that γ1 ∪ γ2 ∪ · · · ∪ γk spans V. To show linear independence, we suppose for some {ai,j : 1 ≤ i ≤ k, 1 ≤ j ≤ ni},

Σ_{i=1}^{k} Σ_{j=1}^{ni} ai,j wi,j = 0.

Notice that for each i ∈ [k], Σ_{j=1}^{ni} ai,j wi,j ∈ Wi. We also have 0 = 0 + 0 + · · · + 0 with 0 ∈ Wi for each i. So by uniqueness, we have that Σ_{j=1}^{ni} ai,j wi,j = 0 for each i. Now since γi is linearly independent, it must be that ∀j ∈ [ni], ai,j = 0.
(4) ⇒ (5): We know that for each i ∈ [k], Wi has a finite basis, γi . Thus γ1 ∪ γ2 ∪ · · · ∪ γk is
an ordered basis for V by (4).
(5) ⇒ (1): Let i ∈ [k] and γi = {wi,1, wi,2, . . . , wi,ni} be the ordered basis for Wi given by (5). Since γ1 ∪ γ2 ∪ · · · ∪ γk spans V, we have that V = Σ_{i=1}^{k} Wi. Let j ∈ [k] and consider v ∈ Wj ∩ Σ_{i≠j} Wi. Since v ∈ Wj, we have that
v = aj,1 wj,1 + aj,2 wj,2 + · · · + aj,nj wj,nj.
But also,
v = Σ_{i≠j} xi
for some vectors xi ∈ Wi, which are linear combinations of the vectors in γi. We have that
aj,1 wj,1 + aj,2 wj,2 + · · · + aj,nj wj,nj − Σ_{i≠j} xi = 0,
and since γ1 ∪ γ2 ∪ · · · ∪ γk is linearly independent, all of the coefficients of vectors in γ1 ∪ γ2 ∪ · · · ∪ γk are zero. This implies that v = 0.
Theorem 5.11. A linear operator T on a finite-dimensional vector space V is diagonalizable
if and only if V is the direct sum of the eigenspaces of T .
Proof. Let λ1, λ2, . . . , λk be the distinct eigenvalues of T.
(⇒) Let T be diagonalizable, and ∀i ∈ [k], let γi be an ordered basis of Eλi. By Theorem 5.9, γ1 ∪ γ2 ∪ · · · ∪ γk is an ordered basis for V. By Theorem 5.10 ((5) ⇒ (1)), V is the direct sum of the Eλi's.
(⇐) Suppose V = Eλ1 ⊕ Eλ2 ⊕ · · · ⊕ Eλk. Choose an ordered basis γi for each Eλi. By Theorem 5.10, γ1 ∪ γ2 ∪ · · · ∪ γk is an ordered basis for V. Since this is a basis for V consisting of e-vctrs of T, T is diagonalizable by Theorem 5.1.
Section 5.3 - Skip.
Section 5.4 - Invariant subspaces and the Cayley-Hamilton Theorem.
Defn 1. Let T be a linear operator on a vector space V. A subspace W of V is called a T-invariant subspace of V if T(W) ⊆ W, that is, T(w) ∈ W for all w ∈ W.
Defn 2. If T is a linear operator on V and W is a T -invariant subspace of V , then the
restriction TW of T to W is a mapping from W to W and it follows that TW is a linear
operator on W .
Lemma 2 (Exercise 21 from Section 4.3). If M ∈ Mn(F) can be expressed as

M = [ A  B ]
    [ O  C ]

where A ∈ Mr(F), C ∈ Ms(F), s + r = n, and O is the s × r matrix of all zeros, then det M = det A · det C.
Proof. The proof is by induction on r. If r = 1, we compute det M by expanding along column 1; since the rest of column 1 is zero, det M = a1,1 det M(1|1) = det A · det C (here A = [a1,1] and M(1|1) = C).
Now assume the result holds for all such matrices in which the upper-left block is (r − 1) × (r − 1). Again, we expand along column 1 of M:
det M = a1,1 det M(1|1) − a2,1 det M(2|1) + · · · + (−1)^{r+1} ar,1 det M(r|1)
For each i ≤ r, M(i|1) has the form

M(i|1) = [ A(i|1)  B* ]
         [ O       C  ]

where B* is a submatrix of B. By induction, det M(i|1) = det A(i|1) · det C. So we have:
det M = a1,1 det A(1|1) det C − a2,1 det A(2|1) det C + · · · + (−1)^{r+1} ar,1 det A(r|1) det C
= (a1,1 det A(1|1) − a2,1 det A(2|1) + · · · + (−1)^{r+1} ar,1 det A(r|1)) det C = det A · det C
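A quick numerical confirmation of the Lemma (an illustrative numpy sketch with arbitrary blocks):

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])            # the r x r block
C = np.array([[5.0, 6.0], [7.0, 8.0]])            # the s x s block
B = np.ones((2, 2))                               # an arbitrary r x s block
M = np.block([[A, B], [np.zeros((2, 2)), C]])     # block upper-triangular M
print(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(C))   # equal up to rounding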
Theorem 5.21. Let T be a linear operator on a finite-dimensional vector space V, and let W be a T-invariant subspace of V. Then the characteristic polynomial of TW divides the characteristic polynomial of T.
Proof. Choose an ordered basis γ = {v1, v2, . . . , vk} for W, and extend it to an ordered basis β = {v1, v2, . . . , vk, vk+1, . . . , vn} for V. Let A = [T]β and B1 = [TW]γ. Observe that A can be written in the form

A = [ B1  B2 ]
    [ O   B3 ]

Let f(t) be the characteristic polynomial of T and g(t) the characteristic polynomial of TW. Then

f(t) = det(A − tIn) = det [ B1 − tIk  B2         ]  = g(t) · det(B3 − tIn−k)
                          [ O         B3 − tIn−k ]

by the Lemma. Thus g(t) divides f(t).
The following was presented by N. Vankayalapati. He provided a handout.
Defn 3. Let T be a linear operator on a vector space V and let x be a nonzero vector in V. The subspace
W = span({x, T(x), T^2(x), . . .})
is called the T-cyclic subspace of V generated by x.
Theorem 5.22. Let T be a linear operator on a finite-dimensional vector space V, and let W denote the T-cyclic subspace of V generated by a nonzero vector v ∈ V. Let k = dim(W). Then
(a) {v, T(v), T^2(v), . . . , T^{k−1}(v)} is a basis for W.
(b) If a0v + a1T(v) + · · · + ak−1T^{k−1}(v) + T^k(v) = 0, then the characteristic polynomial of TW is
f(t) = (−1)^k (a0 + a1t + · · · + ak−1t^{k−1} + t^k).
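A small numpy sketch of part (b) (added for illustration; the matrix A and the loop structure are choices made here): iterate v, Av, A^2 v, . . . until dependence, then solve for the coefficients ai:

import numpy as np

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 0.0,  1.0],
              [0.0, 1.0,  3.0]])
v = np.array([1.0, 0.0, 0.0])

vecs = [v]                      # Krylov vectors v, Av, A^2 v, ...
while True:
    nxt = A @ vecs[-1]
    if np.linalg.matrix_rank(np.column_stack(vecs + [nxt])) == len(vecs):
        break                   # nxt depends on the previous vectors: stop
    vecs.append(nxt)
k = len(vecs)                   # k = dim(W)
# Solve a_0 v + a_1 T(v) + ... + a_{k-1} T^{k-1}(v) = -T^k(v) for the a_i.
a = np.linalg.lstsq(np.column_stack(vecs), -(A @ vecs[-1]), rcond=None)[0]
print(k, a)                     # char. poly of T_W is (-1)^k (a_0 + a_1 t + ... + t^k)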
The following was presented by J. Stockford. He provided a handout.
Theorem 5.23. (Cayley-Hamilton). Let T be a linear operator on a finite-dimensional
vector space V , and let f (t) be the characteristic polynomial of T . Then f (T ) = T0 , the zero
transformation.
The following was presented by Q. Ding. He provided a handout.
Cor 1. (Cayley-Hamilton Theorem for Matrices). Let A be an n × n matrix, and let f (t) be
the characteristic polynomial of A. Then f (A) = O, the zero matrix.
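A numerical check of the matrix form (a numpy sketch added here, not from the handout): evaluate the monic characteristic polynomial from np.poly at A by Horner's scheme; up to the sign (−1)^n this is f(A), and it vanishes:

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
c = np.poly(A)                  # monic char. poly coefficients, highest power first
f_A = np.zeros_like(A)
for coef in c:                  # Horner evaluation of the polynomial at the matrix A
    f_A = f_A @ A + coef * np.eye(2)
print(np.round(f_A, 10))        # the zero matrix, as Cayley-Hamilton predicts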
We did not cover the following theorems.
Theorem 5.24. Let T be a linear operator on a finite-dimensional vector space V , and
suppose that V = W1 ⊕ W2 ⊕ · · · ⊕ Wk , where Wi is a T -invariant subspace of V for each i
(1 ≤ i ≤ k). Suppose that fi (t) is the characteristic polynomial of TWi (1 ≤ i ≤ k). Then
f1(t) f2(t) · · · fk(t) is the characteristic polynomial of T.
Defn 4. Let B1 ∈ Mm(F), and let B2 ∈ Mn(F). We define the direct sum of B1 and B2, denoted B1 ⊕ B2, as the (m + n) × (m + n) matrix A such that
Ai,j = (B1)i,j for 1 ≤ i, j ≤ m,
Ai,j = (B2)(i−m),(j−m) for m + 1 ≤ i, j ≤ m + n,
Ai,j = 0 otherwise.
If B1, B2, . . . , Bk are square matrices with entries from F, then we define the direct sum of B1, B2, . . . , Bk recursively by
B1 ⊕ B2 ⊕ · · · ⊕ Bk = (B1 ⊕ B2 ⊕ · · · ⊕ Bk−1) ⊕ Bk
If A = B1 ⊕ B2 ⊕ · · · ⊕ Bk, then we often write

A = [ B1  O   · · ·  O  ]
    [ O   B2  · · ·  O  ]
    [ ·   ·   · · ·  ·  ]
    [ O   O   · · ·  Bk ]
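scipy implements this construction directly (an illustrative sketch; scipy.linalg.block_diag builds exactly B1 ⊕ B2 ⊕ · · · ⊕ Bk):

import numpy as np
from scipy.linalg import block_diag

B1 = np.array([[1.0, 2.0],
               [3.0, 4.0]])
B2 = np.array([[5.0]])
A = block_diag(B1, B2)   # the direct sum of B1 and B2
print(A)
# [[1. 2. 0.]
#  [3. 4. 0.]
#  [0. 0. 5.]]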




Theorem 5.25. Let T be a linear operator on a finite-dimensional vector space V, and let W1, W2, . . . , Wk be T-invariant subspaces of V such that V = W1 ⊕ W2 ⊕ · · · ⊕ Wk. For each i, let βi be an ordered basis for Wi, and let β = β1 ∪ β2 ∪ · · · ∪ βk. Let A = [T]β and Bi = [TWi]βi for each i. Then A = B1 ⊕ B2 ⊕ · · · ⊕ Bk.