MATH 323 Linear Algebra
Lecture 24: Matrix polynomials. Matrix exponentials.

Matrix polynomials

Definition. Given an $n\times n$ matrix $A$, we let $A^2 = AA$, $A^3 = AAA$, ..., $A^k = \underbrace{AA\cdots A}_{k \text{ times}}$, ... Also, let $A^1 = A$ and $A^0 = I$.

Associativity of matrix multiplication implies that all powers $A^k$ are well defined and $A^j A^k = A^{j+k}$ for all $j, k \ge 0$. In particular, all powers of $A$ commute.

Definition. For any polynomial $p(x) = c_0 x^m + c_1 x^{m-1} + \dots + c_{m-1} x + c_m$, let
\[ p(A) = c_0 A^m + c_1 A^{m-1} + \dots + c_{m-1} A + c_m I. \]

Theorem. If $A\mathbf{v} = \lambda\mathbf{v}$ for a column vector $\mathbf{v}$ and a scalar $\lambda$, then $p(A)\mathbf{v} = p(\lambda)\mathbf{v}$ for any polynomial $p(x)$.

Evaluation of a matrix polynomial is yet another problem where diagonalization can help.

Theorem. If $A = \mathrm{diag}(a_1, a_2, \dots, a_n)$, then $p(A) = \mathrm{diag}\bigl(p(a_1), p(a_2), \dots, p(a_n)\bigr)$.

Now suppose that the matrix $A$ is diagonalizable. Then $A = UBU^{-1}$ for some diagonal matrix $B$ and an invertible matrix $U$. Hence
\[ A^2 = UBU^{-1}UBU^{-1} = UB^2U^{-1}, \qquad A^3 = A^2A = UB^2U^{-1}UBU^{-1} = UB^3U^{-1}. \]
Likewise, $A^n = UB^nU^{-1}$ for any $n \ge 1$. For example,
\[ I + 2A - 3A^2 = UIU^{-1} + 2UBU^{-1} - 3UB^2U^{-1} = U(I + 2B - 3B^2)U^{-1}. \]
Likewise, $p(A) = U\,p(B)\,U^{-1}$ for any polynomial $p(x)$.

• Initial value problem for a linear ODE: $\dfrac{dy}{dt} = 2y$, $y(0) = 3$. Solution: $y(t) = 3e^{2t}$.

• Initial value problem for a system of linear ODEs:
\[ \frac{dx}{dt} = 2x + 3y, \qquad \frac{dy}{dt} = x + 4y, \qquad x(0) = 2, \ y(0) = 1. \]
The system can be rewritten in vector form
\[ \frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = A \begin{pmatrix} x \\ y \end{pmatrix}, \quad \text{where } A = \begin{pmatrix} 2 & 3 \\ 1 & 4 \end{pmatrix}. \]
Solution: $\begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = e^{tA} \begin{pmatrix} 2 \\ 1 \end{pmatrix}$. What is $e^{tA}$?

Exponential function

Exponential function: $f(x) = \exp x = e^x$, $x \in \mathbb{R}$.
Principal property: $e^{x+y} = e^x \cdot e^y$.

Definition 1. $e^x = \lim\limits_{n\to\infty} \left(1 + \dfrac{x}{n}\right)^n$. In particular, $e = \lim\limits_{n\to\infty} \left(1 + \dfrac{1}{n}\right)^n \approx 2.7182818$.

Definition 2. $e^x = 1 + x + \dfrac{x^2}{2!} + \dots + \dfrac{x^n}{n!} + \dots$

Definition 3. $f(x) = e^x$ is the unique solution of the initial value problem $f' = f$, $f(0) = 1$.

Complex exponentials

Definition. For any $z \in \mathbb{C}$ let
\[ e^z = 1 + z + \frac{z^2}{2!} + \dots + \frac{z^n}{n!} + \dots \]

Remark. A sequence of complex numbers $z_1 = x_1 + iy_1$, $z_2 = x_2 + iy_2, \dots$ converges to $z = x + iy$ if $x_n \to x$ and $y_n \to y$ as $n \to \infty$.

Theorem 1. If $z = x + iy$, $x, y \in \mathbb{R}$, then $e^z = e^x(\cos y + i\sin y)$. In particular, $e^{i\varphi} = \cos\varphi + i\sin\varphi$, $\varphi \in \mathbb{R}$.

Theorem 2. $e^{z+w} = e^z \cdot e^w$ for all $z, w \in \mathbb{C}$.

Proposition. $e^{i\varphi} = \cos\varphi + i\sin\varphi$ for all $\varphi \in \mathbb{R}$.

Proof:
\[ e^{i\varphi} = 1 + i\varphi + \frac{(i\varphi)^2}{2!} + \dots + \frac{(i\varphi)^n}{n!} + \dots \]
The sequence $1, i, i^2, i^3, \dots, i^n, \dots$ is periodic: $1, i, -1, -i, 1, i, -1, -i, \dots$ It follows that
\[ e^{i\varphi} = \left(1 - \frac{\varphi^2}{2!} + \frac{\varphi^4}{4!} - \dots + (-1)^k \frac{\varphi^{2k}}{(2k)!} + \dots\right) + i\left(\varphi - \frac{\varphi^3}{3!} + \frac{\varphi^5}{5!} - \dots + (-1)^k \frac{\varphi^{2k+1}}{(2k+1)!} + \dots\right) = \cos\varphi + i\sin\varphi. \]

Matrix exponentials

Definition. For any square matrix $A$ let
\[ \exp A = e^A = I + A + \frac{1}{2!}A^2 + \dots + \frac{1}{n!}A^n + \dots \]
The matrix exponential is a limit of matrix polynomials.

Remark. Let $A^{(1)}, A^{(2)}, \dots$ be a sequence of $n\times n$ matrices, $A^{(m)} = \bigl(a_{ij}^{(m)}\bigr)$. The sequence converges to an $n\times n$ matrix $B = (b_{ij})$ if $a_{ij}^{(m)} \to b_{ij}$ as $m \to \infty$, i.e., if each entry converges.

Theorem. The matrix $\exp A$ is well defined, i.e., the series converges.

Properties of matrix exponentials

Theorem 1. If $AB = BA$ then $e^A e^B = e^B e^A = e^{A+B}$.

Corollary. (a) $e^{tA} e^{sA} = e^{sA} e^{tA} = e^{(t+s)A}$, $t, s \in \mathbb{R}$; (b) $e^O = I$; (c) $(e^A)^{-1} = e^{-A}$.

Theorem 2. $\dfrac{d}{dt} e^{tA} = A e^{tA} = e^{tA} A$.

Indeed, $e^{tA} = I + tA + \dfrac{t^2}{2!}A^2 + \dots + \dfrac{t^n}{n!}A^n + \dots$, and the series can be differentiated term by term:
\[ \frac{d}{dt}\left(\frac{t^n}{n!} A^n\right) = \frac{t^{n-1}}{(n-1)!} A^n. \]
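The claim that $e^A$ is a limit of matrix polynomials can be checked numerically. The following is only a sketch, not part of the lecture: the test matrix is borrowed from the ODE example above, while the truncation order and the use of NumPy/SciPy are my own choices. It sums the first terms of the series $I + A + \frac{1}{2!}A^2 + \dots$ and compares the partial sum with SciPy's built-in matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

def exp_series(A, num_terms=30):
    """Partial sum I + A + A^2/2! + ... of the exponential series."""
    n = A.shape[0]
    term = np.eye(n)           # current term A^k / k!, starting with k = 0
    total = np.eye(n)
    for k in range(1, num_terms):
        term = term @ A / k    # A^k/k! = (A^(k-1)/(k-1)!) * A / k
        total = total + term
    return total

# Test matrix taken from the lecture's ODE example (any square matrix would do).
A = np.array([[2.0, 3.0],
              [1.0, 4.0]])

print(np.allclose(exp_series(A), expm(A)))   # True: the partial sums approach e^A
```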
Lemma. Let $A$ be an $n\times n$ matrix and $\mathbf{x} \in \mathbb{R}^n$. Then the vector function $\mathbf{v}(t) = e^{tA}\mathbf{x}$ satisfies $\mathbf{v}' = A\mathbf{v}$.

Proof: $\dfrac{d\mathbf{v}}{dt} = \dfrac{d}{dt}\bigl(e^{tA}\mathbf{x}\bigr) = A e^{tA}\mathbf{x} = A\bigl(e^{tA}\mathbf{x}\bigr) = A\mathbf{v}$.

Theorem. For any $t_0 \in \mathbb{R}$ and $\mathbf{x}_0 \in \mathbb{R}^n$ the initial value problem
\[ \frac{d\mathbf{v}}{dt} = A\mathbf{v}, \qquad \mathbf{v}(t_0) = \mathbf{x}_0 \]
has a unique solution $\mathbf{v}(t) = e^{(t-t_0)A}\mathbf{x}_0$.

Indeed, $\mathbf{v}(t) = e^{(t-t_0)A}\mathbf{x}_0 = e^{tA} e^{-t_0 A}\mathbf{x}_0 = e^{tA}\mathbf{x}$, where $\mathbf{x} = e^{-t_0 A}\mathbf{x}_0$ is a constant vector.

Evaluation of matrix exponentials

Example. $A = \mathrm{diag}(a_1, a_2, \dots, a_k)$. Then $A^n = \mathrm{diag}(a_1^n, a_2^n, \dots, a_k^n)$, $n = 1, 2, 3, \dots$, so
\[ e^A = I + A + \tfrac{1}{2!}A^2 + \dots + \tfrac{1}{n!}A^n + \dots = \mathrm{diag}(b_1, b_2, \dots, b_k), \]
where $b_i = 1 + a_i + \tfrac{1}{2!}a_i^2 + \tfrac{1}{3!}a_i^3 + \dots = e^{a_i}$.

Theorem 1. If $A = \mathrm{diag}(a_1, a_2, \dots, a_k)$ then $e^A = \mathrm{diag}(e^{a_1}, e^{a_2}, \dots, e^{a_k})$ and $e^{tA} = \mathrm{diag}(e^{a_1 t}, e^{a_2 t}, \dots, e^{a_k t})$.

Theorem 2. If $A = UBU^{-1}$, then $e^A = U e^B U^{-1}$. Moreover, $tA = U(tB)U^{-1}$, so that $e^{tA} = U e^{tB} U^{-1}$.

Example. $A = \begin{pmatrix} 2 & 3 \\ 1 & 4 \end{pmatrix}$.

The eigenvalues of $A$: $\lambda_1 = 1$, $\lambda_2 = 5$. Eigenvectors: $\mathbf{v}_1 = \begin{pmatrix} 3 \\ -1 \end{pmatrix}$, $\mathbf{v}_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. Therefore $A = UBU^{-1}$, where
\[ B = \begin{pmatrix} 1 & 0 \\ 0 & 5 \end{pmatrix}, \qquad U = \begin{pmatrix} 3 & 1 \\ -1 & 1 \end{pmatrix}. \]
Then
\[ e^{tA} = U e^{tB} U^{-1} = \begin{pmatrix} 3 & 1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} e^t & 0 \\ 0 & e^{5t} \end{pmatrix} \begin{pmatrix} 3 & 1 \\ -1 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} 3e^t & e^{5t} \\ -e^t & e^{5t} \end{pmatrix} \cdot \frac{1}{4}\begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix} = \frac{1}{4}\begin{pmatrix} 3e^t + e^{5t} & -3e^t + 3e^{5t} \\ -e^t + e^{5t} & e^t + 3e^{5t} \end{pmatrix}. \]

Problem. Solve the system of differential equations
\[ \frac{dx}{dt} = 2x + 3y, \qquad \frac{dy}{dt} = x + 4y \]
subject to the initial conditions $x(0) = 2$, $y(0) = 1$.

The unique solution: $\begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = e^{tA}\mathbf{x}_0$, where $A = \begin{pmatrix} 2 & 3 \\ 1 & 4 \end{pmatrix}$, $\mathbf{x}_0 = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$. Since
\[ e^{tA} = \frac{1}{4}\begin{pmatrix} 3e^t + e^{5t} & -3e^t + 3e^{5t} \\ -e^t + e^{5t} & e^t + 3e^{5t} \end{pmatrix}, \]
it follows that
\[ x(t) = \tfrac{3}{4}e^t + \tfrac{5}{4}e^{5t}, \qquad y(t) = -\tfrac{1}{4}e^t + \tfrac{5}{4}e^{5t}. \]

Example. $A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$, a Jordan block.

\[ A^2 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad A^3 = O, \qquad A^n = O \text{ for } n \ge 3. \]
\[ e^A = I + A + \tfrac{1}{2}A^2 = \begin{pmatrix} 1 & 1 & \tfrac{1}{2} \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}, \qquad e^{tA} = I + tA + \tfrac{t^2}{2}A^2 = \begin{pmatrix} 1 & t & \tfrac{t^2}{2} \\ 0 & 1 & t \\ 0 & 0 & 1 \end{pmatrix}. \]

Example. $A = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$, a general Jordan block.

We have $A = \lambda I + B$, where $B = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. Since $(\lambda I)B = B(\lambda I)$, it follows that $e^A = e^{\lambda I} e^B$. Similarly, $e^{tA} = e^{t\lambda I} e^{tB}$. Now
\[ e^{t\lambda I} = \begin{pmatrix} e^{\lambda t} & 0 \\ 0 & e^{\lambda t} \end{pmatrix} = e^{\lambda t} I, \qquad B^2 = O \implies e^{tB} = I + tB = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}. \]
Thus
\[ e^{tA} = e^{t\lambda I} e^{tB} = \begin{pmatrix} e^{\lambda t} & t e^{\lambda t} \\ 0 & e^{\lambda t} \end{pmatrix}. \]

Problem. Solve the system of differential equations
\[ \frac{dx}{dt} = 2x + y, \qquad \frac{dy}{dt} = 2y \]
subject to the initial conditions $x(0) = y(0) = 1$.

The unique solution: $\begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = e^{tA}\mathbf{x}_0$, where $A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}$, $\mathbf{x}_0 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. Since
\[ e^{tA} = \begin{pmatrix} e^{2t} & t e^{2t} \\ 0 & e^{2t} \end{pmatrix} = e^{2t}\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}, \]
it follows that $x(t) = e^{2t}(1 + t)$ and $y(t) = e^{2t}$.
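As a numerical cross-check of the closed forms derived above, here is a short sketch (again not part of the lecture; it assumes NumPy and SciPy are available, and the test time $t = 0.5$ is an arbitrary choice) comparing them with SciPy's matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

t = 0.5   # arbitrary test time; any value of t would do

# Diagonalizable example: A = [[2, 3], [1, 4]] with eigenvalues 1 and 5.
A = np.array([[2.0, 3.0],
              [1.0, 4.0]])
et, e5t = np.exp(t), np.exp(5 * t)
etA_formula = 0.25 * np.array([[3*et +   e5t, -3*et + 3*e5t],
                               [ -et +   e5t,    et + 3*e5t]])
print(np.allclose(expm(t * A), etA_formula))      # True

# Its initial value problem: x' = Ax, x(0) = (2, 1).
x0 = np.array([2.0, 1.0])
x_closed = np.array([0.75*et + 1.25*e5t, -0.25*et + 1.25*e5t])
print(np.allclose(expm(t * A) @ x0, x_closed))    # True

# Jordan-block example: A = [[2, 1], [0, 2]], with e^{tA} = e^{2t} [[1, t], [0, 1]].
J = np.array([[2.0, 1.0],
              [0.0, 2.0]])
etJ_formula = np.exp(2 * t) * np.array([[1.0, t],
                                        [0.0, 1.0]])
print(np.allclose(expm(t * J), etJ_formula))      # True
```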