MATH 304 Linear Algebra
Lecture 32: Bases of eigenvectors. Diagonalization.

Eigenvalues and eigenvectors of an operator

Definition. Let V be a vector space and L : V → V a linear operator. A number λ is called an eigenvalue of the operator L if L(v) = λv for some nonzero vector v ∈ V. The vector v is called an eigenvector of L associated with the eigenvalue λ. (If V is a function space, then eigenvectors are also called eigenfunctions.)

If V = Rⁿ, then the linear operator L is given by L(x) = Ax, where A is an n×n matrix. In this case, the eigenvalues and eigenvectors of the operator L are precisely the eigenvalues and eigenvectors of the matrix A.

Characteristic polynomial of an operator

Let L be a linear operator on a finite-dimensional vector space V. Let u₁, u₂, . . . , uₙ be a basis for V, and let A be the matrix of L with respect to this basis.

Definition. The characteristic polynomial p(λ) = det(A − λI) of the matrix A is called the characteristic polynomial of the operator L. The eigenvalues of L are precisely the roots of its characteristic polynomial.

Theorem. The characteristic polynomial of the operator L is well defined. That is, it does not depend on the choice of a basis.

Proof: Let B be the matrix of L with respect to a different basis v₁, v₂, . . . , vₙ. Then A = UBU⁻¹, where U is the transition matrix from the basis v₁, . . . , vₙ to u₁, . . . , uₙ. We have to show that det(A − λI) = det(B − λI) for all λ ∈ R. We obtain
det(A − λI) = det(UBU⁻¹ − λI) = det(UBU⁻¹ − U(λI)U⁻¹) = det(U(B − λI)U⁻¹) = det(U) det(B − λI) det(U⁻¹) = det(B − λI).

Let V be a vector space and L : V → V a linear operator.

Proposition 1. If v ∈ V is an eigenvector of the operator L, then the associated eigenvalue is unique.

Proof: Suppose that L(v) = λ₁v and L(v) = λ₂v. Then λ₁v = λ₂v =⇒ (λ₁ − λ₂)v = 0. Since v ≠ 0, it follows that λ₁ − λ₂ = 0 =⇒ λ₁ = λ₂.
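The similarity computation in the proof above can be checked numerically. The sketch below (not part of the lecture; the matrices B and U are chosen arbitrarily for illustration) verifies that B and A = UBU⁻¹ have the same characteristic polynomial.

```python
import numpy as np

# Illustration: similar matrices have equal characteristic polynomials.
# B and U below are arbitrary choices; any invertible U would do.
B = np.array([[2.0, 1.0],
              [0.0, 3.0]])
U = np.array([[1.0, 1.0],
              [1.0, 2.0]])   # det(U) = 1, so U is invertible
A = U @ B @ np.linalg.inv(U)

# np.poly(M) returns the coefficients of det(λI − M), which differs from
# det(M − λI) only by the overall sign (−1)^n, so the comparison is valid.
pA = np.poly(A)
pB = np.poly(B)
print(np.allclose(pA, pB))   # True: the characteristic polynomials coincide
```

Here np.poly computes the monic polynomial whose roots are the eigenvalues of its matrix argument, so agreement of pA and pB is exactly the statement of the theorem for this example.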
Proposition 2. Suppose v₁ and v₂ are eigenvectors of L associated with different eigenvalues λ₁ and λ₂. Then v₁ and v₂ are linearly independent.

Proof: Since L(v₁) = λ₁v₁, we have L(tv₁) = tL(v₁) = t(λ₁v₁) = λ₁(tv₁) for any scalar t. Hence every nonzero scalar multiple of v₁ is an eigenvector associated with λ₁. Since λ₂ ≠ λ₁, it follows from Proposition 1 that v₂ ≠ tv₁. That is, v₂ is not a scalar multiple of v₁. Similarly, v₁ is not a scalar multiple of v₂.

Let L : V → V be a linear operator.

Proposition 3. If v₁, v₂, and v₃ are eigenvectors of L associated with distinct eigenvalues λ₁, λ₂, and λ₃, then they are linearly independent.

Proof: Suppose that t₁v₁ + t₂v₂ + t₃v₃ = 0 for some t₁, t₂, t₃ ∈ R. Then L(t₁v₁ + t₂v₂ + t₃v₃) = 0, so t₁L(v₁) + t₂L(v₂) + t₃L(v₃) = 0, i.e., t₁λ₁v₁ + t₂λ₂v₂ + t₃λ₃v₃ = 0. It follows that
t₁λ₁v₁ + t₂λ₂v₂ + t₃λ₃v₃ − λ₃(t₁v₁ + t₂v₂ + t₃v₃) = 0 =⇒ t₁(λ₁ − λ₃)v₁ + t₂(λ₂ − λ₃)v₂ = 0.
By Proposition 2, v₁ and v₂ are linearly independent. Hence t₁(λ₁ − λ₃) = t₂(λ₂ − λ₃) = 0 =⇒ t₁ = t₂ = 0. Then t₃v₃ = 0, and since v₃ ≠ 0, also t₃ = 0.

Theorem. If v₁, v₂, . . . , vₖ are eigenvectors of a linear operator L associated with distinct eigenvalues λ₁, λ₂, . . . , λₖ, then v₁, v₂, . . . , vₖ are linearly independent.

Corollary 1. If λ₁, λ₂, . . . , λₖ are distinct real numbers, then the functions e^(λ₁x), e^(λ₂x), . . . , e^(λₖx) are linearly independent.

Proof: Consider the linear operator D : C^∞(R) → C^∞(R) given by Df = f′. Then e^(λ₁x), . . . , e^(λₖx) are eigenfunctions of D associated with the distinct eigenvalues λ₁, . . . , λₖ.

Corollary 2. Let A be an n×n matrix such that the characteristic equation det(A − λI) = 0 has n distinct real roots. Then Rⁿ has a basis consisting of eigenvectors of A.

Proof: Let λ₁, λ₂, . . . , λₙ be the distinct real roots of the characteristic equation. Each λᵢ is an eigenvalue of A, hence there is an associated eigenvector vᵢ. By the theorem, the vectors v₁, v₂, . . . , vₙ are linearly independent. Therefore they form a basis for Rⁿ.

Corollary 3. Let λ₁, λ₂, . . .
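Corollary 2 can be illustrated numerically: when the characteristic equation has n distinct real roots, a matrix of associated eigenvectors is invertible, so its columns form a basis. The sketch below (not part of the lecture; the matrix is chosen for illustration) checks this for a 2×2 example.

```python
import numpy as np

# Illustration of Corollary 2: distinct real eigenvalues give a basis of
# eigenvectors. The matrix A is an arbitrary example with distinct roots.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)          # columns of eigvecs are eigenvectors
print(np.allclose(sorted(eigvals), [1, 3]))  # True: two distinct real roots

# The eigenvector matrix is invertible, so its columns form a basis for R^2.
print(abs(np.linalg.det(eigvecs)) > 1e-12)   # True
```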
, λₖ be distinct eigenvalues of a linear operator L. For any 1 ≤ i ≤ k, let Sᵢ be a basis for the eigenspace associated with the eigenvalue λᵢ. Then the union S₁ ∪ S₂ ∪ · · · ∪ Sₖ is a linearly independent set.

Diagonalization

Let L be a linear operator on a finite-dimensional vector space V. Then the following conditions are equivalent:
• the matrix of L with respect to some basis is diagonal;
• there exists a basis for V formed by eigenvectors of L.
The operator L is diagonalizable if it satisfies these conditions.

Let A be an n×n matrix. Then the following conditions are equivalent:
• A is the matrix of a diagonalizable operator;
• A is similar to a diagonal matrix, i.e., A = UBU⁻¹, where the matrix B is diagonal;
• there exists a basis for Rⁿ formed by eigenvectors of A.
The matrix A is diagonalizable if it satisfies these conditions. Otherwise A is called defective. (Matrices below are written row by row: [a b; c d] has rows (a, b) and (c, d).)

Example. A = [2 1; 1 2].
• The matrix A has two eigenvalues: 1 and 3.
• The eigenspace of A associated with the eigenvalue 1 is the line spanned by v₁ = (−1, 1).
• The eigenspace of A associated with the eigenvalue 3 is the line spanned by v₂ = (1, 1).
• The eigenvectors v₁ and v₂ form a basis for R².
Thus the matrix A is diagonalizable. Namely, A = UBU⁻¹, where
B = [1 0; 0 3],  U = [−1 1; 1 1].

Example. A = [1 1 −1; 1 1 1; 0 0 2].
• The matrix A has two eigenvalues: 0 and 2.
• The eigenspace corresponding to 0 is spanned by v₁ = (−1, 1, 0).
• The eigenspace corresponding to 2 is spanned by v₂ = (1, 1, 0) and v₃ = (−1, 0, 1).
• The eigenvectors v₁, v₂, v₃ form a basis for R³.
Thus the matrix A is diagonalizable. Namely, A = UBU⁻¹, where
B = [0 0 0; 0 2 0; 0 0 2],  U = [−1 1 −1; 1 1 0; 0 0 1].

Problem. Diagonalize the matrix A = [4 3; 0 1].

We need to find a diagonal matrix B and an invertible matrix U such that A = UBU⁻¹. Suppose that v₁ = (x₁, y₁), v₂ = (x₂, y₂) is a basis for R² formed by eigenvectors of A, i.e., Avᵢ = λᵢvᵢ for some λᵢ ∈ R. Then we can take
B = [λ₁ 0; 0 λ₂],  U = [x₁ x₂; y₁ y₂].
Note that U is the transition matrix from the basis v₁, v₂ to the standard basis.

Characteristic equation of A: det(A − λI) = (4 − λ)(1 − λ) = 0 =⇒ λ₁ = 4, λ₂ = 1. Associated eigenvectors: v₁ = (1, 0), v₂ = (−1, 1). Thus A = UBU⁻¹, where
B = [4 0; 0 1],  U = [1 −1; 0 1].

Problem. Let A = [4 3; 0 1]. Find A⁵.

We know that A = UBU⁻¹, where B = [4 0; 0 1], U = [1 −1; 0 1]. Then
A⁵ = UBU⁻¹ · UBU⁻¹ · UBU⁻¹ · UBU⁻¹ · UBU⁻¹ = UB⁵U⁻¹
= [1 −1; 0 1] [1024 0; 0 1] [1 1; 0 1] = [1024 −1; 0 1] [1 1; 0 1] = [1024 1023; 0 1].

Problem. Let A = [4 3; 0 1]. Find a matrix C such that C² = A.

We know that A = UBU⁻¹, where B = [4 0; 0 1], U = [1 −1; 0 1]. Suppose that D² = B for some matrix D, and let C = UDU⁻¹. Then C² = UDU⁻¹ · UDU⁻¹ = UD²U⁻¹ = UBU⁻¹ = A.
We can take D = [√4 0; 0 √1] = [2 0; 0 1]. Then
C = [1 −1; 0 1] [2 0; 0 1] [1 1; 0 1] = [2 1; 0 1].

There are two obstructions to the existence of a basis consisting of eigenvectors. They are illustrated by the following examples.

Example 1. A = [1 1; 0 1].
det(A − λI) = (λ − 1)². Hence λ = 1 is the only eigenvalue. The associated eigenspace is the line t(1, 0), so there are not enough linearly independent eigenvectors to form a basis.

Example 2. A = [0 −1; 1 0].
det(A − λI) = λ² + 1 =⇒ no real eigenvalues or eigenvectors. (However, there are complex eigenvalues/eigenvectors.)
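The two worked problems for A = [4 3; 0 1] can be reproduced numerically. The sketch below (not part of the lecture) builds A⁵ and a square root C from the diagonalization A = UBU⁻¹ found above.

```python
import numpy as np

# The diagonalization computed in the lecture: A = U B U^{-1}.
A = np.array([[4.0, 3.0],
              [0.0, 1.0]])
B = np.diag([4.0, 1.0])              # eigenvalues on the diagonal
U = np.array([[1.0, -1.0],
              [0.0,  1.0]])          # columns are eigenvectors (1,0), (-1,1)
U_inv = np.linalg.inv(U)

# A^5 = U B^5 U^{-1}: powers of a diagonal matrix are entrywise powers.
A5 = U @ np.diag([4.0**5, 1.0**5]) @ U_inv
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))   # True
print(np.allclose(A5, [[1024, 1023], [0, 1]]))         # True

# A square root: C = U D U^{-1} with D = diag(sqrt(4), sqrt(1)).
C = U @ np.diag([2.0, 1.0]) @ U_inv
print(np.allclose(C @ C, A))                           # True
print(np.allclose(C, [[2, 1], [0, 1]]))                # True
```

The same recipe gives f(A) = U f(B) U⁻¹ for any function applied entrywise to the diagonal, which is the practical payoff of diagonalization.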