Math 2280-001    Fri Mar 27

5.5 Linear systems $x' = Ax$ for which $A$ is not diagonalizable.

We will spend most of the lecture finishing Wednesday's notes, about forced oscillations in mass-spring systems, section 5.4. Then we'll begin section 5.5 in today's notes. It addresses the problem of how to solve linear systems $x' = Ax$ when the matrix $A$ is not diagonalizable. We will continue this discussion on Monday next week.

preview/review: connecting the eigenvalue-eigenvector method for first order linear systems of differential equations to change of basis ideas from Math 2270.

Consider the first order system of differential equations $x' = Ax$.

Case 1: The matrix $A$ is diagonalizable. Let $S$ be a matrix whose columns are eigenvectors of $A$, where these eigenvectors are a basis for $\mathbb{R}^n$ or $\mathbb{C}^n$. In the completely representative case of a $2 \times 2$ matrix $A$ we can write $S$ in column form,
$$ S = [\,v_1 \;\; v_2\,], \qquad A v_1 = \lambda_1 v_1, \quad A v_2 = \lambda_2 v_2 . $$
Then we have the identity
$$ A S = A[\,v_1 \;\; v_2\,] = [\,\lambda_1 v_1 \;\; \lambda_2 v_2\,] = [\,v_1 \;\; v_2\,]\begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} = S\Lambda, $$
i.e. $AS = S\Lambda$, where $\Lambda$ is the diagonal matrix containing the eigenvalue of $v_j$ in the $j^{th}$ diagonal entry. This identity can be solved either for $A$ or for $\Lambda$:
$$ S^{-1} A S = \Lambda, \qquad A = S \Lambda S^{-1}. $$
This is the algebraic reason we call such matrices $A$ "diagonalizable": they are similar to diagonal matrices.

Now, return to the DE $x' = Ax$. Consider the transformation $x(t) := S\, u(t)$, equivalently $S^{-1} x(t) = u(t)$. Substitute this into the system:
$$ x' = Ax \;\Rightarrow\; (S u)' = A S u \;\Rightarrow\; S u' = A S u. $$
(There is a universal product rule for any sort of multiplication that distributes over addition. We'll discuss it Monday. In this case $S$ is a constant matrix, so the product rule reduces to the "constant multiple" rule for differentiation above.)
$$ \Rightarrow\; u' = S^{-1} A S\, u \;\Rightarrow\; u' = \Lambda u $$
$$ \Rightarrow\; \begin{bmatrix} u_1' \\ u_2' \end{bmatrix} = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}\begin{bmatrix} u_1(t) \\ u_2(t) \end{bmatrix} = \begin{bmatrix} \lambda_1 u_1 \\ \lambda_2 u_2 \end{bmatrix}. $$
This diagonal (decoupled) system can be solved via Chapter 1:
$$ u_1' = \lambda_1 u_1 \;\Rightarrow\; u_1(t) = c_1 e^{\lambda_1 t} $$
$$ u_2' = \lambda_2 u_2 \;\Rightarrow\; u_2(t) = c_2 e^{\lambda_2 t}. $$
Thus
$$ \begin{bmatrix} u_1(t) \\ u_2(t) \end{bmatrix} = \begin{bmatrix} c_1 e^{\lambda_1 t} \\ c_2 e^{\lambda_2 t} \end{bmatrix}. $$
Since $x(t) := S\, u(t)$,
$$ x(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = [\,v_1 \;\; v_2\,]\begin{bmatrix} u_1(t) \\ u_2(t) \end{bmatrix} = [\,v_1 \;\; v_2\,]\begin{bmatrix} c_1 e^{\lambda_1 t} \\ c_2 e^{\lambda_2 t} \end{bmatrix} = c_1 e^{\lambda_1 t} v_1 + c_2 e^{\lambda_2 t} v_2 . $$
This is what we've been doing since the start of Chapter 5, but this time it's completely motivated, i.e. with no "let's guess solutions of the form $e^{\lambda t} v$" pulled out of thin air! This explanation is why that ad-hoc method turned out to work.
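As an optional aside, here is a minimal SymPy sketch of the Case 1 recipe. The matrix $A$ in it is just a sample diagonalizable choice (not one from the notes); the sketch checks $S^{-1} A S = \Lambda$ and that $x(t) = c_1 e^{\lambda_1 t} v_1 + c_2 e^{\lambda_2 t} v_2$ really does solve $x' = Ax$.

```python
# Minimal SymPy sketch of the Case 1 recipe, using a sample diagonalizable matrix.
from sympy import Matrix, symbols, exp

t, c1, c2 = symbols('t c1 c2')

A = Matrix([[3, 1],
            [1, 3]])             # sample choice; eigenvalues 2 and 4

S, Lam = A.diagonalize()         # columns of S are eigenvectors, Lam is diagonal
assert S.inv() * A * S == Lam    # S^{-1} A S = Lambda

v1, v2 = S.col(0), S.col(1)
l1, l2 = Lam[0, 0], Lam[1, 1]

# general solution x(t) = c1 e^{l1 t} v1 + c2 e^{l2 t} v2
x = c1*exp(l1*t)*v1 + c2*exp(l2*t)*v2

# verify x'(t) = A x(t) identically in t, c1, c2
assert (x.diff(t) - A*x).expand() == Matrix.zeros(2, 1)
```

The same check goes through for any diagonalizable $A$, whether the eigenvalues are real or complex.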
Case 2: $A$ is not diagonalizable. [I may ask you to read this on your own, and skip right to the exercises at the end.]

One learns in linear algebra that if $A_{n \times n}$ is not diagonalizable then, instead of being similar to a diagonal matrix, it is similar to a Jordan matrix, which is "almost" as good.
http://en.wikipedia.org/wiki/Jordan_normal_form

Here are some facts. For $A_{n \times n}$, let the characteristic polynomial $p(\lambda) = \det(A - \lambda I)$ factor as
$$ p(\lambda) = (-1)^n (\lambda - \lambda_1)^{k_1} (\lambda - \lambda_2)^{k_2} \cdots (\lambda - \lambda_m)^{k_m} $$
with $m \le n$ distinct eigenvalues $\lambda_j$, and $k_1 + k_2 + \cdots + k_m = n$.

(1) Each eigenspace $E_{\lambda_j} = \operatorname{nullspace}(A - \lambda_j I)$ has dimension $1 \le \dim E_{\lambda_j} \le k_j$. This dimension is called the geometric multiplicity of the eigenvalue $\lambda_j$, whereas the power $k_j$ is called the algebraic multiplicity of $\lambda_j$.

(2) The matrix $A$ is diagonalizable if and only if each $\dim E_{\lambda_j} = k_j$. In this case one can construct a basis of $\mathbb{R}^n$ (or $\mathbb{C}^n$) by amalgamating the various eigenspace bases. Algebraically this is expressed by the identity
$$ A S = S \Lambda $$
where the columns of $S$ comprise a basis of $\mathbb{R}^n$ (or $\mathbb{C}^n$) made out of eigenvectors of $A$, and $\Lambda$ is the diagonal matrix whose $j^{th}$ diagonal entry is the eigenvalue of the $j^{th}$ column of $S$. Each $\lambda_j$ appears exactly $k_j$ times on the diagonal of $\Lambda$, so the matrix $\Lambda$ is unique up to how its diagonal entries (and $S$'s columns) are ordered. (In this case the discussion at the start of the notes applies, for solving $x' = Ax$.)

(3) Any eigenspace for which $\dim E_{\lambda_j} < k_j$ is called defective. If $A$ has any defective eigenspaces then it is not diagonalizable. However, the larger generalized eigenspace
$$ E_{\lambda_j} \subseteq G_{\lambda_j} := \operatorname{nullspace}\big((A - \lambda_j I)^{k_j}\big) $$
does always have dimension $k_j$. If bases for each $G_{\lambda_j}$ are amalgamated, they will form a basis for $\mathbb{R}^n$ or $\mathbb{C}^n$. It is possible to choose the bases for each $G_{\lambda_j}$ so that when they are amalgamated into the columns of a matrix $S$, the identity
$$ A S = S J, \qquad S^{-1} A S = J, \qquad A = S J S^{-1} $$
holds, where $J$ is the Jordan normal form of $A$. The structure of $J$ is as follows: it is an upper triangular matrix, and its diagonal entries are the eigenvalues of $A$. The eigenvalue $\lambda_j$ appears exactly $k_j$ times on the diagonal of $J$. If $\lambda_j$ is defective, then there will be one or more square "Jordan" blocks centered on the diagonal, having $\lambda_j$'s along the diagonal, ones along the super-diagonal, and zeroes everywhere else:
$$ \begin{bmatrix} \lambda_j & 1 \\ 0 & \lambda_j \end{bmatrix}, \qquad \begin{bmatrix} \lambda_j & 1 & 0 \\ 0 & \lambda_j & 1 \\ 0 & 0 & \lambda_j \end{bmatrix}, \qquad \text{etc.} $$
Off of the diagonal and super-diagonal all entries in $J$ are zero. Jordan forms for a given matrix are unique, up to reordering the blocks.

For example, one possible Jordan normal form for an $8 \times 8$ matrix with characteristic polynomial
$$ |A - \lambda I| = (\lambda - \lambda_1)^4 (\lambda - \lambda_2)^2 (\lambda - \lambda_3)^2 $$
is the block diagonal matrix built from a single $4 \times 4$ Jordan block for $\lambda_1$ together with $2 \times 2$ Jordan blocks for $\lambda_2$ and $\lambda_3$.

Exercise 1) Up to re-ordering the blocks, what are the other possible Jordan forms for an $8 \times 8$ matrix having the same characteristic polynomial?

2×2 non-diagonalizable matrices: Because eigenvectors with different eigenvalues are always linearly independent, the only way $A_{2 \times 2}$ can fail to be diagonalizable is if
$$ |A - \lambda I| = (\lambda - \lambda_1)^2 $$
with $\lambda_1$ real, and $\dim E_{\lambda_1} = 1$. In this case the Jordan form for $A$ must be
$$ \begin{bmatrix} \lambda_1 & 1 \\ 0 & \lambda_1 \end{bmatrix}. $$
And $AS = SJ$ reads
$$ A S = A[\,v_1 \;\; v_2\,] = [\,v_1 \;\; v_2\,]\begin{bmatrix} \lambda_1 & 1 \\ 0 & \lambda_1 \end{bmatrix} = S J. $$
In other words,
$$ A v_1 = \lambda_1 v_1, \qquad A v_2 = v_1 + \lambda_1 v_2 . $$

Now repeat the transformation process that we did for the diagonalizable case, to solve the DE $x' = Ax$. Consider the transformation $x(t) := S\, u(t)$, equivalently $S^{-1} x(t) = u(t)$. Substitute this into the system:
$$ x' = Ax \;\Rightarrow\; (S u)' = A S u \;\Rightarrow\; S u' = A S u $$
$$ \Rightarrow\; u' = S^{-1} A S\, u \;\Rightarrow\; u' = J u $$
$$ \Rightarrow\; \begin{bmatrix} u_1' \\ u_2' \end{bmatrix} = \begin{bmatrix} \lambda_1 & 1 \\ 0 & \lambda_1 \end{bmatrix}\begin{bmatrix} u_1(t) \\ u_2(t) \end{bmatrix} = \begin{bmatrix} \lambda_1 u_1 + u_2 \\ \lambda_1 u_2 \end{bmatrix}. $$
This system can be back-solved via Chapter 1:
$$ u_2' = \lambda_1 u_2 \;\Rightarrow\; u_2(t) = c_2 e^{\lambda_1 t} $$
$$ u_1' = \lambda_1 u_1 + u_2 $$
$$ u_1' - \lambda_1 u_1 = u_2 = c_2 e^{\lambda_1 t} $$
$$ e^{-\lambda_1 t}\big(u_1' - \lambda_1 u_1\big) = c_2 e^{-\lambda_1 t} e^{\lambda_1 t} = c_2 $$
$$ \big(e^{-\lambda_1 t} u_1\big)' = c_2 $$
$$ e^{-\lambda_1 t} u_1 = c_2 t + c_1 $$
$$ u_1 = (c_2 t + c_1)\, e^{\lambda_1 t}. $$
Thus
$$ \begin{bmatrix} u_1(t) \\ u_2(t) \end{bmatrix} = \begin{bmatrix} (c_2 t + c_1) e^{\lambda_1 t} \\ c_2 e^{\lambda_1 t} \end{bmatrix}. $$
Since $x(t) := S\, u(t)$,
$$ x(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = [\,v_1 \;\; v_2\,]\begin{bmatrix} (c_2 t + c_1) e^{\lambda_1 t} \\ c_2 e^{\lambda_1 t} \end{bmatrix} = (c_2 t + c_1) e^{\lambda_1 t} v_1 + c_2 e^{\lambda_1 t} v_2 = c_1 e^{\lambda_1 t} v_1 + c_2 e^{\lambda_1 t}\big(v_2 + t\, v_1\big). $$

Summary!!!! In the case of non-diagonalizable $A_{2 \times 2}$,
$$ |A - \lambda I| = (\lambda - \lambda_1)^2 $$
with $\lambda_1$ real, and $\dim E_{\lambda_1} = 1$. Find an eigenvector $v_1$ and then a generalized eigenvector $v_2$, i.e. find solutions to
$$ (A - \lambda_1 I) v_1 = 0 $$
$$ (A - \lambda_1 I) v_2 = v_1 . $$
Then the two-dimensional solution space to $x' = Ax$ is given by the linear combinations
$$ x(t) = c_1 e^{\lambda_1 t} v_1 + c_2 e^{\lambda_1 t}\big(t\, v_1 + v_2\big). $$

Exercise 2) Verify directly that in the case above, the function $e^{\lambda_1 t}(t\, v_1 + v_2)$ does indeed solve $x' = Ax$.

Exercise 3) Solve the linear system of DE's
$$ \begin{bmatrix} x_1'(t) \\ x_2'(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -4 & 4 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}. $$
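For experimenting with the Case 2 machinery, here is a minimal SymPy sketch, assuming the Exercise 3 matrix above: it reads off the eigenvector / generalized eigenvector chain from `jordan_form()`, checks the chain conditions from the Summary, and verifies the Summary formula symbolically (which also settles Exercise 2 for this particular matrix).

```python
# Minimal SymPy sketch: apply the Summary recipe to the Exercise 3 matrix.
from sympy import Matrix, symbols, exp, eye

t, c1, c2 = symbols('t c1 c2')

A = Matrix([[0, 1],
            [-4, 4]])            # |A - lambda I| = (lambda - 2)^2, defective

S, J = A.jordan_form()           # A = S J S^{-1}, here J = [[2, 1], [0, 2]]
lam = J[0, 0]
v1, v2 = S.col(0), S.col(1)      # eigenvector, generalized eigenvector

# chain conditions: (A - lam I) v1 = 0 and (A - lam I) v2 = v1
assert (A - lam*eye(2)) * v1 == Matrix.zeros(2, 1)
assert (A - lam*eye(2)) * v2 == v1

# Summary formula: x(t) = c1 e^{lam t} v1 + c2 e^{lam t} (t v1 + v2)
x = c1*exp(lam*t)*v1 + c2*exp(lam*t)*(t*v1 + v2)

# verify x'(t) = A x(t) identically in t, c1, c2 (Exercise 2 for this matrix)
assert (x.diff(t) - A*x).expand() == Matrix.zeros(2, 1)
```

Note that `A.diagonalize()` refuses this matrix (it has a defective eigenvalue), which is exactly why the Jordan block, and with it the extra $t\, e^{\lambda_1 t}$ term in the solution, appears.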