Ch 7.1: Introduction to Systems of First Order Linear Equations

A system of simultaneous first order ordinary differential equations has the general form
$$x_1' = F_1(t, x_1, x_2, \ldots, x_n),\quad x_2' = F_2(t, x_1, x_2, \ldots, x_n),\quad \ldots,\quad x_n' = F_n(t, x_1, x_2, \ldots, x_n),$$
where each $x_k$ is a function of $t$. If each $F_k$ is a linear function of $x_1, x_2, \ldots, x_n$, then the system of equations is said to be linear; otherwise it is nonlinear. Systems of higher order differential equations can be defined similarly.

Example 1
The motion of a spring-mass system from Section 3.8 was described by the equation
$$u''(t) + 16u'(t) + 192u(t) = 0.$$
This second order equation can be converted into a system of first order equations by letting $x_1 = u$ and $x_2 = u'$. Thus
$$x_1' = x_2, \qquad x_2' + 16x_2 + 192x_1 = 0,$$
or
$$x_1' = x_2, \qquad x_2' = -192x_1 - 16x_2.$$

nth Order ODEs and Linear 1st Order Systems
The method illustrated in the previous example can be used to transform an arbitrary nth order equation
$$y^{(n)} = F\bigl(t, y, y', y'', \ldots, y^{(n-1)}\bigr)$$
into a system of $n$ first order equations, first by defining
$$x_1 = y,\quad x_2 = y',\quad x_3 = y'',\quad \ldots,\quad x_n = y^{(n-1)}.$$
Then
$$x_1' = x_2,\quad x_2' = x_3,\quad \ldots,\quad x_{n-1}' = x_n,\quad x_n' = F(t, x_1, x_2, \ldots, x_n).$$

Solutions of First Order Systems
The general system $x_k' = F_k(t, x_1, \ldots, x_n)$, $k = 1, \ldots, n$, has a solution on $I: \alpha < t < \beta$ if there exist $n$ functions
$$x_1 = \phi_1(t),\quad x_2 = \phi_2(t),\quad \ldots,\quad x_n = \phi_n(t)$$
that are differentiable on $I$ and satisfy the system of equations at all points $t$ in $I$. Initial conditions may also be prescribed to give an IVP:
$$x_1(t_0) = x_1^0,\quad x_2(t_0) = x_2^0,\quad \ldots,\quad x_n(t_0) = x_n^0.$$

Example 2
The equation
$$y'' + y = 0,\quad 0 < t < 2\pi,$$
can be written as a system of first order equations by letting $x_1 = y$ and $x_2 = y'$. Thus
$$x_1' = x_2, \qquad x_2' = -x_1.$$
A solution to this system is
$$x_1 = \sin t,\quad x_2 = \cos t,\quad 0 < t < 2\pi,$$
which is a parametric description of the unit circle.

Theorem 7.1.1
Suppose $F_1, \ldots, F_n$ and the partial derivatives $\partial F_1/\partial x_1, \ldots, \partial F_1/\partial x_n, \ldots, \partial F_n/\partial x_1, \ldots, \partial F_n/\partial x_n$ are continuous in the region $R$ of $tx_1x_2\cdots x_n$-space defined by $\alpha < t < \beta$, $\alpha_1 < x_1 < \beta_1, \ldots, \alpha_n < x_n < \beta_n$, and let the point $(t_0, x_1^0, x_2^0, \ldots, x_n^0)$ be contained in $R$. Then in some interval $(t_0 - h, t_0 + h)$ there exists a unique solution
$$x_1 = \phi_1(t),\quad x_2 = \phi_2(t),\quad \ldots,\quad x_n = \phi_n(t)$$
of the system $x_k' = F_k(t, x_1, \ldots, x_n)$ that satisfies the initial conditions.

Linear Systems
If each $F_k$ is a linear function of $x_1, x_2, \ldots, x_n$, then the system of equations has the general form
$$x_1' = p_{11}(t)x_1 + p_{12}(t)x_2 + \cdots + p_{1n}(t)x_n + g_1(t)$$
$$x_2' = p_{21}(t)x_1 + p_{22}(t)x_2 + \cdots + p_{2n}(t)x_n + g_2(t)$$
$$\vdots$$
$$x_n' = p_{n1}(t)x_1 + p_{n2}(t)x_2 + \cdots + p_{nn}(t)x_n + g_n(t).$$
If each of the $g_k(t)$ is zero on $I$, then the system is homogeneous; otherwise it is nonhomogeneous.

Theorem 7.1.2
Suppose $p_{11}, p_{12}, \ldots, p_{nn}, g_1, \ldots, g_n$ are continuous on an interval $I: \alpha < t < \beta$ with $t_0$ in $I$, and let $x_1^0, x_2^0, \ldots, x_n^0$ prescribe the initial conditions. Then the linear system above has a unique solution
$$x_1 = \phi_1(t),\quad x_2 = \phi_2(t),\quad \ldots,\quad x_n = \phi_n(t)$$
that satisfies the IVP, and it exists throughout $I$.
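The reduction in Example 1 is also exactly how higher order equations are prepared for numerical solvers. The following is an illustrative sketch, not part of the original slides; it assumes NumPy and SciPy are available, and the initial conditions $u(0) = 1$, $u'(0) = 0$ are chosen arbitrarily for demonstration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Example 1: u'' + 16 u' + 192 u = 0, rewritten with x1 = u, x2 = u'.
def f(t, x):
    x1, x2 = x
    return [x2, -192.0 * x1 - 16.0 * x2]  # x1' = x2, x2' = -192 x1 - 16 x2

# Hypothetical initial conditions u(0) = 1, u'(0) = 0.
sol = solve_ivp(f, (0.0, 2.0), [1.0, 0.0], dense_output=True)
print(sol.sol(1.0)[0])  # approximate value of u(1)
```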
Ch 7.2: Review of Matrices

For theoretical and computational reasons, we review results of matrix theory in this section and the next.

A matrix $\mathbf{A}$ is an $m \times n$ rectangular array of elements, arranged in $m$ rows and $n$ columns, denoted
$$
\mathbf{A} = (a_{ij}) =
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}.
$$
Some examples of $2 \times 2$ matrices are given below:
$$
\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix},\quad
\mathbf{B} = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix},\quad
\mathbf{C} = \begin{pmatrix} 1 & 3-2i \\ 4+5i & 6-7i \end{pmatrix}.
$$

Transpose
The transpose of $\mathbf{A} = (a_{ij})$ is $\mathbf{A}^T = (a_{ji})$:
$$
\mathbf{A} =
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}
\;\Rightarrow\;
\mathbf{A}^T =
\begin{pmatrix}
a_{11} & a_{21} & \cdots & a_{m1} \\
a_{12} & a_{22} & \cdots & a_{m2} \\
\vdots & \vdots & \ddots & \vdots \\
a_{1n} & a_{2n} & \cdots & a_{mn}
\end{pmatrix}.
$$
For example,
$$
\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}
\Rightarrow
\mathbf{A}^T = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix},
\qquad
\mathbf{B} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}
\Rightarrow
\mathbf{B}^T = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}.
$$

Conjugate
The conjugate of $\mathbf{A} = (a_{ij})$ is $\bar{\mathbf{A}} = (\bar{a}_{ij})$, obtained by conjugating each entry. For example,
$$
\mathbf{A} = \begin{pmatrix} 1 & 2+3i \\ 3-4i & 4 \end{pmatrix}
\;\Rightarrow\;
\bar{\mathbf{A}} = \begin{pmatrix} 1 & 2-3i \\ 3+4i & 4 \end{pmatrix}.
$$

Adjoint
The adjoint of $\mathbf{A}$ is $\bar{\mathbf{A}}^T$, and is denoted by $\mathbf{A}^*$. For example,
$$
\mathbf{A} = \begin{pmatrix} 1 & 2+3i \\ 3-4i & 4 \end{pmatrix}
\;\Rightarrow\;
\mathbf{A}^* = \begin{pmatrix} 1 & 3+4i \\ 2-3i & 4 \end{pmatrix}.
$$

Square Matrices
A square matrix $\mathbf{A}$ has the same number of rows and columns; that is, $\mathbf{A}$ is $n \times n$. In this case, $\mathbf{A}$ is said to have order $n$. For example,
$$
\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix},\qquad
\mathbf{B} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}.
$$

Vectors
A column vector $\mathbf{x}$ is an $n \times 1$ matrix, for example
$$\mathbf{x} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}.$$
A row vector is a $1 \times n$ matrix, for example $\mathbf{y} = (1\;\;2\;\;3)$. Note here that $\mathbf{y} = \mathbf{x}^T$, and that in general, if $\mathbf{x}$ is a column vector, then $\mathbf{x}^T$ is a row vector.

The Zero Matrix
The zero matrix is defined to be $\mathbf{0} = (0)$, whose dimensions depend on the context. For example,
$$
\mathbf{0} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix},\quad
\mathbf{0} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},\quad
\mathbf{0} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix},\;\ldots
$$

Matrix Equality
Two matrices $\mathbf{A} = (a_{ij})$ and $\mathbf{B} = (b_{ij})$ are equal if $a_{ij} = b_{ij}$ for all $i$ and $j$. For example,
$$
\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix},\;
\mathbf{B} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}
\;\Rightarrow\; \mathbf{A} = \mathbf{B}.
$$

Matrix - Scalar Multiplication
The product of a matrix $\mathbf{A} = (a_{ij})$ and a constant $k$ is defined to be $k\mathbf{A} = (ka_{ij})$. For example,
$$
\mathbf{A} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}
\;\Rightarrow\;
-5\mathbf{A} = \begin{pmatrix} -5 & -10 & -15 \\ -20 & -25 & -30 \end{pmatrix}.
$$

Matrix Addition and Subtraction
The sum of two $m \times n$ matrices $\mathbf{A} = (a_{ij})$ and $\mathbf{B} = (b_{ij})$ is defined to be $\mathbf{A} + \mathbf{B} = (a_{ij} + b_{ij})$. For example,
$$
\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix},\;
\mathbf{B} = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix}
\;\Rightarrow\;
\mathbf{A} + \mathbf{B} = \begin{pmatrix} 6 & 8 \\ 10 & 12 \end{pmatrix}.
$$
The difference of two $m \times n$ matrices is defined to be $\mathbf{A} - \mathbf{B} = (a_{ij} - b_{ij})$. For the matrices above,
$$
\mathbf{A} - \mathbf{B} = \begin{pmatrix} -4 & -4 \\ -4 & -4 \end{pmatrix}.
$$
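As a brief sketch (not from the original slides, assuming NumPy), these operations map directly onto array operations; `.T`, `np.conj`, and `.conj().T` give the transpose, conjugate, and adjoint defined above.

```python
import numpy as np

A = np.array([[1, 2 + 3j],
              [3 - 4j, 4]])
print(A.T)          # transpose (a_ji)
print(np.conj(A))   # conjugate: each entry conjugated
print(A.conj().T)   # adjoint A* = conjugate transpose

B = np.array([[1, 2], [3, 4]])
C = np.array([[5, 6], [7, 8]])
print(-5 * B)       # scalar multiple
print(B + C)        # elementwise sum
print(B - C)        # elementwise difference
```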
Matrix Multiplication
The product of an $m \times n$ matrix $\mathbf{A} = (a_{ij})$ and an $n \times r$ matrix $\mathbf{B} = (b_{ij})$ is defined to be the $m \times r$ matrix $\mathbf{C} = (c_{ij})$, where
$$c_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj}.$$
Examples (note that $\mathbf{AB}$ does not necessarily equal $\mathbf{BA}$):
$$
\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix},\;
\mathbf{B} = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}
\;\Rightarrow\;
\mathbf{AB} = \begin{pmatrix} 1+4 & 3+8 \\ 3+8 & 9+16 \end{pmatrix} = \begin{pmatrix} 5 & 11 \\ 11 & 25 \end{pmatrix},
\quad
\mathbf{BA} = \begin{pmatrix} 1+9 & 2+12 \\ 2+12 & 4+16 \end{pmatrix} = \begin{pmatrix} 10 & 14 \\ 14 & 20 \end{pmatrix};
$$
$$
\mathbf{C} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix},\;
\mathbf{D} = \begin{pmatrix} 3 & 0 \\ 1 & 2 \\ 0 & -1 \end{pmatrix}
\;\Rightarrow\;
\mathbf{CD} = \begin{pmatrix} 3+2+0 & 0+4-3 \\ 12+5+0 & 0+10-6 \end{pmatrix} = \begin{pmatrix} 5 & 1 \\ 17 & 4 \end{pmatrix}.
$$

Vector Multiplication
The dot product of two $n \times 1$ vectors $\mathbf{x}$ and $\mathbf{y}$ is defined as
$$\mathbf{x}^T\mathbf{y} = \sum_{i=1}^{n} x_iy_i.$$
The inner product of two $n \times 1$ vectors $\mathbf{x}$ and $\mathbf{y}$ is defined as
$$(\mathbf{x}, \mathbf{y}) = \mathbf{x}^T\bar{\mathbf{y}} = \sum_{i=1}^{n} x_i\bar{y}_i.$$
Example:
$$
\mathbf{x} = \begin{pmatrix} 1 \\ 2 \\ 3i \end{pmatrix},\;
\mathbf{y} = \begin{pmatrix} -1 \\ 2-3i \\ 5+5i \end{pmatrix}
\;\Rightarrow\;
\mathbf{x}^T\mathbf{y} = (1)(-1) + (2)(2-3i) + (3i)(5+5i) = -12 + 9i,
$$
$$
(\mathbf{x}, \mathbf{y}) = \mathbf{x}^T\bar{\mathbf{y}} = (1)(-1) + (2)(2+3i) + (3i)(5-5i) = 18 + 21i.
$$

Vector Length
The length of an $n \times 1$ vector $\mathbf{x}$ is defined as
$$
\|\mathbf{x}\| = (\mathbf{x}, \mathbf{x})^{1/2}
= \left[\sum_{k=1}^{n} x_k\bar{x}_k\right]^{1/2}
= \left[\sum_{k=1}^{n} |x_k|^2\right]^{1/2}.
$$
Note here that we have used the fact that if $x = a + bi$, then
$$x\bar{x} = (a+bi)(a-bi) = a^2 + b^2 = |x|^2.$$
Example:
$$
\mathbf{x} = \begin{pmatrix} 1 \\ 2 \\ 3+4i \end{pmatrix}
\;\Rightarrow\;
(\mathbf{x}, \mathbf{x}) = (1)(1) + (2)(2) + (3+4i)(3-4i) = 1 + 4 + (9+16) = 30,
$$
so $\|\mathbf{x}\| = \sqrt{30}$.

Orthogonality
Two $n \times 1$ vectors $\mathbf{x}$ and $\mathbf{y}$ are orthogonal if $(\mathbf{x}, \mathbf{y}) = 0$. Example:
$$
\mathbf{x} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix},\;
\mathbf{y} = \begin{pmatrix} 11 \\ -4 \\ -1 \end{pmatrix}
\;\Rightarrow\;
(\mathbf{x}, \mathbf{y}) = (1)(11) + (2)(-4) + (3)(-1) = 0.
$$

Identity Matrix
The multiplicative identity matrix $\mathbf{I}$ is the $n \times n$ matrix
$$
\mathbf{I} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}.
$$
For any square matrix $\mathbf{A}$, it follows that $\mathbf{AI} = \mathbf{IA} = \mathbf{A}$. The dimensions of $\mathbf{I}$ depend on the context. For example,
$$
\mathbf{AI} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix},
\qquad
\mathbf{IB} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}.
$$
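A small sketch (assuming NumPy, not part of the original slides) of the dot product, inner product, and length computed above; note that the inner product $(\mathbf{x}, \mathbf{y}) = \mathbf{x}^T\bar{\mathbf{y}}$ is written with an explicit conjugation here.

```python
import numpy as np

x = np.array([1, 2, 3j])
y = np.array([-1, 2 - 3j, 5 + 5j])

print(x @ y)            # dot product x^T y = -12 + 9j (no conjugation)
print(x @ np.conj(y))   # inner product (x, y) = x^T conj(y) = 18 + 21j

z = np.array([1, 2, 3 + 4j])
print(np.sqrt((z @ np.conj(z)).real))  # length ||z|| = sqrt(30)
print(np.linalg.norm(z))               # same value, built in
```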
Inverse Matrix
A square matrix $\mathbf{A}$ is nonsingular, or invertible, if there exists a matrix $\mathbf{B}$ such that $\mathbf{AB} = \mathbf{BA} = \mathbf{I}$; otherwise $\mathbf{A}$ is singular. The matrix $\mathbf{B}$, if it exists, is unique; it is denoted by $\mathbf{A}^{-1}$ and called the inverse of $\mathbf{A}$. It turns out that $\mathbf{A}^{-1}$ exists iff $\det\mathbf{A} \neq 0$, and $\mathbf{A}^{-1}$ can be found by row reduction (also called Gaussian elimination) of the augmented matrix $(\mathbf{A}\,|\,\mathbf{I})$, as in the example below. The three elementary row operations are:
- Interchange two rows.
- Multiply a row by a nonzero scalar.
- Add a multiple of one row to another row.

Example: Finding the Inverse Matrix (1 of 2)
Use row reduction to find the inverse of the matrix
$$\mathbf{A} = \begin{pmatrix} 0 & 1 & 2 \\ 1 & 0 & 3 \\ 4 & -3 & 8 \end{pmatrix},$$
if it exists.
Solution: If possible, use elementary row operations to reduce
$$
(\mathbf{A}\,|\,\mathbf{I}) =
\left(\begin{array}{rrr|rrr}
0 & 1 & 2 & 1 & 0 & 0 \\
1 & 0 & 3 & 0 & 1 & 0 \\
4 & -3 & 8 & 0 & 0 & 1
\end{array}\right)
$$
so that the left side becomes the identity matrix, for then the right side will be $\mathbf{A}^{-1}$.

Example: Finding the Inverse Matrix (2 of 2)
$$
(\mathbf{A}\,|\,\mathbf{I}) \rightarrow
\left(\begin{array}{rrr|rrr}
1 & 0 & 3 & 0 & 1 & 0 \\
0 & 1 & 2 & 1 & 0 & 0 \\
4 & -3 & 8 & 0 & 0 & 1
\end{array}\right)
\rightarrow
\left(\begin{array}{rrr|rrr}
1 & 0 & 3 & 0 & 1 & 0 \\
0 & 1 & 2 & 1 & 0 & 0 \\
0 & -3 & -4 & 0 & -4 & 1
\end{array}\right)
\rightarrow
\left(\begin{array}{rrr|rrr}
1 & 0 & 3 & 0 & 1 & 0 \\
0 & 1 & 2 & 1 & 0 & 0 \\
0 & 0 & 2 & 3 & -4 & 1
\end{array}\right)
$$
$$
\rightarrow
\left(\begin{array}{rrr|rrr}
1 & 0 & 3 & 0 & 1 & 0 \\
0 & 1 & 0 & -2 & 4 & -1 \\
0 & 0 & 1 & 3/2 & -2 & 1/2
\end{array}\right)
\rightarrow
\left(\begin{array}{rrr|rrr}
1 & 0 & 0 & -9/2 & 7 & -3/2 \\
0 & 1 & 0 & -2 & 4 & -1 \\
0 & 0 & 1 & 3/2 & -2 & 1/2
\end{array}\right).
$$
Thus
$$\mathbf{A}^{-1} = \begin{pmatrix} -9/2 & 7 & -3/2 \\ -2 & 4 & -1 \\ 3/2 & -2 & 1/2 \end{pmatrix}.$$

Matrix Functions
The elements of a matrix can be functions of a real variable. In this case, we write
$$
\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ \vdots \\ x_m(t) \end{pmatrix},\qquad
\mathbf{A}(t) = \begin{pmatrix} a_{11}(t) & \cdots & a_{1n}(t) \\ \vdots & \ddots & \vdots \\ a_{m1}(t) & \cdots & a_{mn}(t) \end{pmatrix}.
$$
Such a matrix is continuous at a point, or on an interval $(a, b)$, if each element is continuous there. Differentiation and integration are likewise defined elementwise:
$$
\frac{d\mathbf{A}}{dt} = \left(\frac{da_{ij}}{dt}\right),\qquad
\int_a^b \mathbf{A}(t)\,dt = \left(\int_a^b a_{ij}(t)\,dt\right).
$$

Example & Differentiation Rules
Example:
$$
\mathbf{A}(t) = \begin{pmatrix} 3t^2 & \sin t \\ \cos t & 4 \end{pmatrix}
\;\Rightarrow\;
\frac{d\mathbf{A}}{dt} = \begin{pmatrix} 6t & \cos t \\ -\sin t & 0 \end{pmatrix},\qquad
\int_0^{\pi}\mathbf{A}(t)\,dt = \begin{pmatrix} \pi^3 & 2 \\ 0 & 4\pi \end{pmatrix}.
$$
Many of the rules from calculus apply in this setting. For example:
$$
\frac{d(\mathbf{CA})}{dt} = \mathbf{C}\frac{d\mathbf{A}}{dt}\;(\mathbf{C}\text{ a constant matrix}),\qquad
\frac{d(\mathbf{A}+\mathbf{B})}{dt} = \frac{d\mathbf{A}}{dt} + \frac{d\mathbf{B}}{dt},\qquad
\frac{d(\mathbf{AB})}{dt} = \frac{d\mathbf{A}}{dt}\mathbf{B} + \mathbf{A}\frac{d\mathbf{B}}{dt}.
$$

Ch 7.3: Systems of Linear Equations, Linear Independence, Eigenvalues

A system of $n$ linear equations in $n$ variables,
$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1$$
$$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2$$
$$\vdots$$
$$a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n,$$
can be expressed as a matrix equation $\mathbf{Ax} = \mathbf{b}$:
$$
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}.
$$
If $\mathbf{b} = \mathbf{0}$, then the system is homogeneous; otherwise it is nonhomogeneous.

Nonsingular Case
If the coefficient matrix $\mathbf{A}$ is nonsingular, then it is invertible and we can solve $\mathbf{Ax} = \mathbf{b}$ as follows:
$$\mathbf{Ax} = \mathbf{b} \;\Rightarrow\; \mathbf{A}^{-1}\mathbf{Ax} = \mathbf{A}^{-1}\mathbf{b} \;\Rightarrow\; \mathbf{Ix} = \mathbf{A}^{-1}\mathbf{b} \;\Rightarrow\; \mathbf{x} = \mathbf{A}^{-1}\mathbf{b}.$$
This solution is therefore unique. Also, if $\mathbf{b} = \mathbf{0}$, it follows that the unique solution to $\mathbf{Ax} = \mathbf{0}$ is $\mathbf{x} = \mathbf{A}^{-1}\mathbf{0} = \mathbf{0}$. Thus if $\mathbf{A}$ is nonsingular, then the only solution to $\mathbf{Ax} = \mathbf{0}$ is the trivial solution $\mathbf{x} = \mathbf{0}$.

Example 1: Nonsingular Case (1 of 3)
From the previous example, we know that the matrix $\mathbf{A}$ below is nonsingular, with inverse as given:
$$
\mathbf{A} = \begin{pmatrix} 0 & 1 & 2 \\ 1 & 0 & 3 \\ 4 & -3 & 8 \end{pmatrix},\qquad
\mathbf{A}^{-1} = \begin{pmatrix} -9/2 & 7 & -3/2 \\ -2 & 4 & -1 \\ 3/2 & -2 & 1/2 \end{pmatrix}.
$$
Using the definition of matrix multiplication, it follows that the only solution of $\mathbf{Ax} = \mathbf{0}$ is $\mathbf{x} = \mathbf{0}$:
$$
\mathbf{x} = \mathbf{A}^{-1}\mathbf{0} = \begin{pmatrix} -9/2 & 7 & -3/2 \\ -2 & 4 & -1 \\ 3/2 & -2 & 1/2 \end{pmatrix}\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
$$

Example 1: Nonsingular Case (2 of 3)
Now let's solve the nonhomogeneous linear system $\mathbf{Ax} = \mathbf{b}$ below using $\mathbf{A}^{-1}$:
$$x_2 + 2x_3 = 2,\qquad x_1 + 3x_3 = -2,\qquad 4x_1 - 3x_2 + 8x_3 = 0.$$
This system can be written as $\mathbf{Ax} = \mathbf{b}$, with $\mathbf{A}$ as above and $\mathbf{b} = (2, -2, 0)^T$. Then
$$
\mathbf{x} = \mathbf{A}^{-1}\mathbf{b} = \begin{pmatrix} -9/2 & 7 & -3/2 \\ -2 & 4 & -1 \\ 3/2 & -2 & 1/2 \end{pmatrix}\begin{pmatrix} 2 \\ -2 \\ 0 \end{pmatrix} = \begin{pmatrix} -23 \\ -12 \\ 7 \end{pmatrix}.
$$
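Before re-solving this system by row reduction below, here is a quick numerical check of the result, as a sketch assuming NumPy (not part of the original slides):

```python
import numpy as np

A = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 3.0],
              [4.0, -3.0, 8.0]])
b = np.array([2.0, -2.0, 0.0])

print(np.linalg.det(A))       # -2.0, nonzero, so A is nonsingular
print(np.linalg.inv(A))       # matches the inverse found by row reduction
print(np.linalg.solve(A, b))  # [-23., -12., 7.]; preferred over forming inv(A)
```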
Example 1: Nonsingular Case (3 of 3)
Alternatively, we could solve $\mathbf{Ax} = \mathbf{b}$ by row reduction. Form the augmented matrix $(\mathbf{A}\,|\,\mathbf{b})$ and reduce, using elementary row operations:
$$
(\mathbf{A}\,|\,\mathbf{b}) =
\left(\begin{array}{rrr|r} 0 & 1 & 2 & 2 \\ 1 & 0 & 3 & -2 \\ 4 & -3 & 8 & 0 \end{array}\right)
\rightarrow
\left(\begin{array}{rrr|r} 1 & 0 & 3 & -2 \\ 0 & 1 & 2 & 2 \\ 4 & -3 & 8 & 0 \end{array}\right)
\rightarrow
\left(\begin{array}{rrr|r} 1 & 0 & 3 & -2 \\ 0 & 1 & 2 & 2 \\ 0 & -3 & -4 & 8 \end{array}\right)
\rightarrow
\left(\begin{array}{rrr|r} 1 & 0 & 3 & -2 \\ 0 & 1 & 2 & 2 \\ 0 & 0 & 2 & 14 \end{array}\right)
\rightarrow
\left(\begin{array}{rrr|r} 1 & 0 & 3 & -2 \\ 0 & 1 & 2 & 2 \\ 0 & 0 & 1 & 7 \end{array}\right).
$$
Back-substituting,
$$x_1 + 3x_3 = -2,\quad x_2 + 2x_3 = 2,\quad x_3 = 7 \;\Rightarrow\; \mathbf{x} = \begin{pmatrix} -23 \\ -12 \\ 7 \end{pmatrix}.$$

Singular Case
If the coefficient matrix $\mathbf{A}$ is singular, then $\mathbf{A}^{-1}$ does not exist, and either a solution to $\mathbf{Ax} = \mathbf{b}$ does not exist or the solution is not unique. Further, the homogeneous system $\mathbf{Ax} = \mathbf{0}$ then has more than one solution: in addition to the trivial solution $\mathbf{x} = \mathbf{0}$, there are infinitely many nontrivial solutions. The nonhomogeneous system $\mathbf{Ax} = \mathbf{b}$ has no solution unless $(\mathbf{b}, \mathbf{y}) = 0$ for all vectors $\mathbf{y}$ satisfying $\mathbf{A}^*\mathbf{y} = \mathbf{0}$, where $\mathbf{A}^*$ is the adjoint of $\mathbf{A}$. When this condition holds, $\mathbf{Ax} = \mathbf{b}$ has infinitely many solutions, each of the form
$$\mathbf{x} = \mathbf{x}^{(0)} + \boldsymbol{\xi},$$
where $\mathbf{x}^{(0)}$ is a particular solution of $\mathbf{Ax} = \mathbf{b}$ and $\boldsymbol{\xi}$ is any solution of $\mathbf{Ax} = \mathbf{0}$.

Example 2: Singular Case (1 of 3)
Solve the nonhomogeneous linear system $\mathbf{Ax} = \mathbf{b}$ below using row reduction:
$$x_1 - 2x_2 - x_3 = 1,\qquad -x_1 + 5x_2 + 6x_3 = 0,\qquad 5x_1 - 4x_2 + 5x_3 = -1.$$
Form the augmented matrix $(\mathbf{A}\,|\,\mathbf{b})$ and reduce:
$$
\left(\begin{array}{rrr|r} 1 & -2 & -1 & 1 \\ -1 & 5 & 6 & 0 \\ 5 & -4 & 5 & -1 \end{array}\right)
\rightarrow
\left(\begin{array}{rrr|r} 1 & -2 & -1 & 1 \\ 0 & 3 & 5 & 1 \\ 0 & 6 & 10 & -6 \end{array}\right)
\rightarrow
\left(\begin{array}{rrr|r} 1 & -2 & -1 & 1 \\ 0 & 3 & 5 & 1 \\ 0 & 3 & 5 & -3 \end{array}\right)
\rightarrow
\left(\begin{array}{rrr|r} 1 & -2 & -1 & 1 \\ 0 & 3 & 5 & 1 \\ 0 & 0 & 0 & -4 \end{array}\right).
$$
The last row requires $0 \cdot x_3 = -4$, so there is no solution.

Example 2: Singular Case (2 of 3)
Now consider the same coefficient matrix with a general right side:
$$x_1 - 2x_2 - x_3 = b_1,\qquad -x_1 + 5x_2 + 6x_3 = b_2,\qquad 5x_1 - 4x_2 + 5x_3 = b_3.$$
Reducing the augmented matrix as before,
$$
(\mathbf{A}\,|\,\mathbf{b}) \rightarrow
\left(\begin{array}{rrr|l} 1 & -2 & -1 & b_1 \\ 0 & 3 & 5 & b_2 + b_1 \\ 0 & 6 & 10 & b_3 - 5b_1 \end{array}\right)
\rightarrow
\left(\begin{array}{rrr|l} 1 & -2 & -1 & b_1 \\ 0 & 3 & 5 & b_2 + b_1 \\ 0 & 0 & 0 & b_3 - 2b_2 - 7b_1 \end{array}\right),
$$
so a solution exists only when
$$b_3 - 2b_2 - 7b_1 = 0.$$

Example 2: Singular Case (3 of 3)
Suppose $b_1 = 1$, $b_2 = -1$, $b_3 = 5$, which satisfies the condition above. The reduced system becomes
$$x_1 - 2x_2 - x_3 = 1,\qquad 3x_2 + 5x_3 = 0,\qquad 0 \cdot x_3 = 0,$$
so that
$$
\mathbf{x} = \begin{pmatrix} 1 - 7x_3/3 \\ -5x_3/3 \\ x_3 \end{pmatrix}
= \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + x_3\begin{pmatrix} -7/3 \\ -5/3 \\ 1 \end{pmatrix}
= \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + c\begin{pmatrix} 7 \\ 5 \\ -3 \end{pmatrix}
= \mathbf{x}^{(0)} + \boldsymbol{\xi}.
$$

Linear Dependence and Independence
A set of vectors $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(n)}$ is linearly dependent if there exist scalars $c_1, c_2, \ldots, c_n$, not all zero, such that
$$c_1\mathbf{x}^{(1)} + c_2\mathbf{x}^{(2)} + \cdots + c_n\mathbf{x}^{(n)} = \mathbf{0}.$$
If the only solution of this equation is $c_1 = c_2 = \cdots = c_n = 0$, then $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(n)}$ is linearly independent.
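As an illustrative sketch (assuming NumPy, not part of the original slides), linear independence of a set of column vectors can be tested by comparing the rank of the matrix they form with the number of vectors; this anticipates Examples 3 and 4 below.

```python
import numpy as np

# Columns are the vectors x(1), x(2), x(3) of Example 3 below.
X = np.array([[0, 1, 2],
              [1, 0, 3],
              [4, -3, 8]])
print(np.linalg.matrix_rank(X))  # 3 -> columns are linearly independent

# Columns are the vectors of Example 4 below.
Y = np.array([[1, -2, -1],
              [-1, 5, 6],
              [5, -4, 5]])
print(np.linalg.matrix_rank(Y))  # 2 -> columns are linearly dependent
```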
Example 3: Linear Independence (1 of 2)
Determine whether the following vectors are linearly dependent or linearly independent:
$$
\mathbf{x}^{(1)} = \begin{pmatrix} 0 \\ 1 \\ 4 \end{pmatrix},\quad
\mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 0 \\ -3 \end{pmatrix},\quad
\mathbf{x}^{(3)} = \begin{pmatrix} 2 \\ 3 \\ 8 \end{pmatrix}.
$$
We need to solve
$$c_1\mathbf{x}^{(1)} + c_2\mathbf{x}^{(2)} + c_3\mathbf{x}^{(3)} = \mathbf{0},
\quad\text{or}\quad
\begin{pmatrix} 0 & 1 & 2 \\ 1 & 0 & 3 \\ 4 & -3 & 8 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
$$

Example 3: Linear Independence (2 of 2)
We thus reduce the augmented matrix $(\mathbf{A}\,|\,\mathbf{b})$, as before:
$$
\left(\begin{array}{rrr|r} 0 & 1 & 2 & 0 \\ 1 & 0 & 3 & 0 \\ 4 & -3 & 8 & 0 \end{array}\right)
\rightarrow
\left(\begin{array}{rrr|r} 1 & 0 & 3 & 0 \\ 0 & 1 & 2 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right)
\;\Rightarrow\;
c_1 + 3c_3 = 0,\; c_2 + 2c_3 = 0,\; c_3 = 0
\;\Rightarrow\;
\mathbf{c} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
$$
Thus the only solution is $c_1 = c_2 = c_3 = 0$, and therefore the original vectors are linearly independent.

Example 4: Linear Dependence (1 of 2)
Determine whether the following vectors are linearly dependent or linearly independent:
$$
\mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ -1 \\ 5 \end{pmatrix},\quad
\mathbf{x}^{(2)} = \begin{pmatrix} -2 \\ 5 \\ -4 \end{pmatrix},\quad
\mathbf{x}^{(3)} = \begin{pmatrix} -1 \\ 6 \\ 5 \end{pmatrix}.
$$
We need to solve
$$c_1\mathbf{x}^{(1)} + c_2\mathbf{x}^{(2)} + c_3\mathbf{x}^{(3)} = \mathbf{0},
\quad\text{or}\quad
\begin{pmatrix} 1 & -2 & -1 \\ -1 & 5 & 6 \\ 5 & -4 & 5 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
$$

Example 4: Linear Dependence (2 of 2)
We thus reduce the augmented matrix, as before:
$$
\left(\begin{array}{rrr|r} 1 & -2 & -1 & 0 \\ -1 & 5 & 6 & 0 \\ 5 & -4 & 5 & 0 \end{array}\right)
\rightarrow
\left(\begin{array}{rrr|r} 1 & -2 & -1 & 0 \\ 0 & 3 & 5 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right)
\;\Rightarrow\;
c_1 - 2c_2 - c_3 = 0,\; 3c_2 + 5c_3 = 0
\;\Rightarrow\;
\mathbf{c} = \begin{pmatrix} -7c_3/3 \\ -5c_3/3 \\ c_3 \end{pmatrix} = k\begin{pmatrix} 7 \\ 5 \\ -3 \end{pmatrix}.
$$
Thus the original vectors are linearly dependent, with
$$
7\begin{pmatrix} 1 \\ -1 \\ 5 \end{pmatrix} + 5\begin{pmatrix} -2 \\ 5 \\ -4 \end{pmatrix} - 3\begin{pmatrix} -1 \\ 6 \\ 5 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
$$

Linear Independence and Invertibility
Consider the previous two examples. The first matrix was known to be nonsingular, and its column vectors were linearly independent; the second matrix was known to be singular, and its column vectors were linearly dependent. This is true in general: the columns (or rows) of $\mathbf{A}$ are linearly independent iff $\mathbf{A}$ is nonsingular iff $\mathbf{A}^{-1}$ exists. Also, $\mathbf{A}$ is nonsingular iff $\det\mathbf{A} \neq 0$; hence the columns (or rows) of $\mathbf{A}$ are linearly independent iff $\det\mathbf{A} \neq 0$. Further, if $\mathbf{A} = \mathbf{BC}$, then $\det\mathbf{A} = \det\mathbf{B}\det\mathbf{C}$. Thus if the columns (or rows) of $\mathbf{A}$ and $\mathbf{B}$ are linearly independent, then the columns (or rows) of $\mathbf{C}$ are also.

Linear Dependence & Vector Functions
Now consider vector functions $\mathbf{x}^{(1)}(t), \mathbf{x}^{(2)}(t), \ldots, \mathbf{x}^{(n)}(t)$, where
$$
\mathbf{x}^{(k)}(t) = \begin{pmatrix} x_1^{(k)}(t) \\ x_2^{(k)}(t) \\ \vdots \\ x_m^{(k)}(t) \end{pmatrix},\quad k = 1, 2, \ldots, n,\quad t \in I = (\alpha, \beta).
$$
As before, $\mathbf{x}^{(1)}(t), \ldots, \mathbf{x}^{(n)}(t)$ are linearly dependent on $I$ if there exist scalars $c_1, c_2, \ldots, c_n$, not all zero, such that
$$c_1\mathbf{x}^{(1)}(t) + c_2\mathbf{x}^{(2)}(t) + \cdots + c_n\mathbf{x}^{(n)}(t) = \mathbf{0} \quad\text{for all } t \in I.$$
Otherwise $\mathbf{x}^{(1)}(t), \ldots, \mathbf{x}^{(n)}(t)$ are linearly independent on $I$. See the text for more discussion of this.

Eigenvalues and Eigenvectors
The equation $\mathbf{Ax} = \mathbf{y}$ can be viewed as a linear transformation that maps (or transforms) $\mathbf{x}$ into a new vector $\mathbf{y}$. Nonzero vectors $\mathbf{x}$ that transform into multiples of themselves are important in many applications. Thus we solve $\mathbf{Ax} = \lambda\mathbf{x}$, or equivalently, $(\mathbf{A} - \lambda\mathbf{I})\mathbf{x} = \mathbf{0}$. This equation has a nonzero solution if we choose $\lambda$ such that $\det(\mathbf{A} - \lambda\mathbf{I}) = 0$. Such values of $\lambda$ are called eigenvalues of $\mathbf{A}$, and the corresponding nonzero solutions $\mathbf{x}$ are called eigenvectors.
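As a sketch (assuming NumPy, not part of the original slides) of the computation that Example 5 below carries out by hand:

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [3.0, -6.0]])
vals, vecs = np.linalg.eig(A)
print(vals)        # eigenvalues 3 and -7 (order may vary)
# Columns of vecs are unit eigenvectors, determined only up to sign:
print(vecs[:, 0])  # proportional to (3, 1) for eigenvalue 3
print(vecs[:, 1])  # proportional to (-1, 3) for eigenvalue -7
```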
Example 5: Eigenvalues (1 of 3)
Find the eigenvalues and eigenvectors of the matrix
$$\mathbf{A} = \begin{pmatrix} 2 & 3 \\ 3 & -6 \end{pmatrix}.$$
Solution: Choose $\lambda$ such that $\det(\mathbf{A} - \lambda\mathbf{I}) = 0$:
$$
\det(\mathbf{A} - \lambda\mathbf{I})
= \det\left[\begin{pmatrix} 2 & 3 \\ 3 & -6 \end{pmatrix} - \lambda\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right]
= \det\begin{pmatrix} 2-\lambda & 3 \\ 3 & -6-\lambda \end{pmatrix}
= (2-\lambda)(-6-\lambda) - 9 = \lambda^2 + 4\lambda - 21 = (\lambda - 3)(\lambda + 7),
$$
so $\lambda = 3$ or $\lambda = -7$.

Example 5: First Eigenvector (2 of 3)
To find the eigenvectors of $\mathbf{A}$, we solve $(\mathbf{A} - \lambda\mathbf{I})\mathbf{x} = \mathbf{0}$ for $\lambda = 3$ and $\lambda = -7$.
Eigenvector for $\lambda = 3$: solve
$$
\begin{pmatrix} 2-3 & 3 \\ 3 & -6-3 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= \begin{pmatrix} -1 & 3 \\ 3 & -9 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \end{pmatrix}
$$
by row reducing the augmented matrix:
$$
\left(\begin{array}{rr|r} -1 & 3 & 0 \\ 3 & -9 & 0 \end{array}\right)
\rightarrow
\left(\begin{array}{rr|r} 1 & -3 & 0 \\ 0 & 0 & 0 \end{array}\right)
\;\Rightarrow\;
x_1 = 3x_2
\;\Rightarrow\;
\mathbf{x}^{(1)} = \begin{pmatrix} 3x_2 \\ x_2 \end{pmatrix} = c\begin{pmatrix} 3 \\ 1 \end{pmatrix},\; c \text{ arbitrary};\quad \text{choose } \mathbf{x}^{(1)} = \begin{pmatrix} 3 \\ 1 \end{pmatrix}.
$$

Example 5: Second Eigenvector (3 of 3)
Eigenvector for $\lambda = -7$: solve
$$
\begin{pmatrix} 2+7 & 3 \\ 3 & -6+7 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= \begin{pmatrix} 9 & 3 \\ 3 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \end{pmatrix}
$$
by row reducing the augmented matrix:
$$
\left(\begin{array}{rr|r} 9 & 3 & 0 \\ 3 & 1 & 0 \end{array}\right)
\rightarrow
\left(\begin{array}{rr|r} 1 & 1/3 & 0 \\ 0 & 0 & 0 \end{array}\right)
\;\Rightarrow\;
x_1 = -\tfrac{1}{3}x_2
\;\Rightarrow\;
\mathbf{x}^{(2)} = c\begin{pmatrix} -1/3 \\ 1 \end{pmatrix},\; c \text{ arbitrary};\quad \text{choose } \mathbf{x}^{(2)} = \begin{pmatrix} -1 \\ 3 \end{pmatrix}.
$$

Normalized Eigenvectors
From the previous example, we see that eigenvectors are determined only up to a nonzero multiplicative constant. If this constant is specified in some particular way, then the eigenvector is said to be normalized. For example, eigenvectors are sometimes normalized by choosing the constant so that $\|\mathbf{x}\| = (\mathbf{x}, \mathbf{x})^{1/2} = 1$.

Algebraic and Geometric Multiplicity
In finding the eigenvalues $\lambda$ of an $n \times n$ matrix $\mathbf{A}$, we solve $\det(\mathbf{A} - \lambda\mathbf{I}) = 0$. Since this involves the determinant of an $n \times n$ matrix, the problem reduces to finding the roots of an nth degree polynomial. Denote these roots, or eigenvalues, by $\lambda_1, \lambda_2, \ldots, \lambda_n$. If an eigenvalue is repeated $m$ times, then its algebraic multiplicity is $m$. Each eigenvalue has at least one eigenvector, and an eigenvalue of algebraic multiplicity $m$ may have $q$ linearly independent eigenvectors, where $1 \leq q \leq m$; $q$ is called the geometric multiplicity of the eigenvalue.

Eigenvectors and Linear Independence
If an eigenvalue $\lambda$ has algebraic multiplicity 1, then it is said to be simple, and its geometric multiplicity is 1 also. If each eigenvalue of an $n \times n$ matrix $\mathbf{A}$ is simple, then $\mathbf{A}$ has $n$ distinct eigenvalues, and it can be shown that the $n$ corresponding eigenvectors are linearly independent. If $\mathbf{A}$ has one or more repeated eigenvalues, then there may be fewer than $n$ linearly independent eigenvectors, since for each repeated eigenvalue we may have $q < m$. This may lead to complications in solving systems of differential equations.
Example 6: Eigenvalues (1 of 5)
Find the eigenvalues and eigenvectors of the matrix
$$\mathbf{A} = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}.$$
Solution: Choose $\lambda$ such that $\det(\mathbf{A} - \lambda\mathbf{I}) = 0$:
$$
\det(\mathbf{A} - \lambda\mathbf{I}) = \det\begin{pmatrix} -\lambda & 1 & 1 \\ 1 & -\lambda & 1 \\ 1 & 1 & -\lambda \end{pmatrix}
= -\lambda^3 + 3\lambda + 2 = -(\lambda - 2)(\lambda + 1)^2
\;\Rightarrow\;
\lambda_1 = 2,\; \lambda_2 = -1,\; \lambda_3 = -1.
$$

Example 6: First Eigenvector (2 of 5)
Eigenvector for $\lambda = 2$: solve $(\mathbf{A} - \lambda\mathbf{I})\mathbf{x} = \mathbf{0}$:
$$
\left(\begin{array}{rrr|r} -2 & 1 & 1 & 0 \\ 1 & -2 & 1 & 0 \\ 1 & 1 & -2 & 0 \end{array}\right)
\rightarrow
\left(\begin{array}{rrr|r} 1 & 1 & -2 & 0 \\ 0 & -3 & 3 & 0 \\ 0 & 3 & -3 & 0 \end{array}\right)
\rightarrow
\left(\begin{array}{rrr|r} 1 & 0 & -1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right)
\;\Rightarrow\;
x_1 = x_3,\; x_2 = x_3,
$$
so that
$$
\mathbf{x}^{(1)} = \begin{pmatrix} x_3 \\ x_3 \\ x_3 \end{pmatrix} = c\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},\; c \text{ arbitrary};\quad \text{choose } \mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.
$$

Example 6: 2nd and 3rd Eigenvectors (3 of 5)
Eigenvector for $\lambda = -1$: solve $(\mathbf{A} - \lambda\mathbf{I})\mathbf{x} = \mathbf{0}$:
$$
\left(\begin{array}{rrr|r} 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 \end{array}\right)
\rightarrow
\left(\begin{array}{rrr|r} 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right)
\;\Rightarrow\;
x_1 + x_2 + x_3 = 0,
$$
so that
$$
\mathbf{x} = \begin{pmatrix} -x_2 - x_3 \\ x_2 \\ x_3 \end{pmatrix}
= x_2\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} + x_3\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix},\quad x_2, x_3 \text{ arbitrary};
\quad\text{choose }
\mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix},\;
\mathbf{x}^{(3)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}.
$$

Example 6: Eigenvectors of A (4 of 5)
Thus three eigenvectors of $\mathbf{A}$ are
$$
\mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},\quad
\mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix},\quad
\mathbf{x}^{(3)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix},
$$
where $\mathbf{x}^{(2)}, \mathbf{x}^{(3)}$ correspond to the double eigenvalue $\lambda = -1$. It can be shown that $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \mathbf{x}^{(3)}$ are linearly independent. Hence $\mathbf{A}$ is a $3 \times 3$ symmetric matrix ($\mathbf{A} = \mathbf{A}^T$) with three real eigenvalues and three linearly independent eigenvectors.

Example 6: Eigenvectors of A (5 of 5)
Note that we could instead have chosen
$$
\mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},\quad
\mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix},\quad
\mathbf{x}^{(3)} = \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}.
$$
Then the eigenvectors are orthogonal, since
$$(\mathbf{x}^{(1)}, \mathbf{x}^{(2)}) = 0,\quad (\mathbf{x}^{(1)}, \mathbf{x}^{(3)}) = 0,\quad (\mathbf{x}^{(2)}, \mathbf{x}^{(3)}) = 0.$$
Thus $\mathbf{A}$ is a $3 \times 3$ symmetric matrix with three real eigenvalues and three linearly independent, orthogonal eigenvectors.

Hermitian Matrices
A self-adjoint, or Hermitian, matrix satisfies $\mathbf{A} = \mathbf{A}^*$, where we recall that $\mathbf{A}^* = \bar{\mathbf{A}}^T$. Thus for a Hermitian matrix, $a_{ij} = \bar{a}_{ji}$. Note that if $\mathbf{A}$ has real entries and is symmetric (see the last example), then $\mathbf{A}$ is Hermitian. An $n \times n$ Hermitian matrix $\mathbf{A}$ has the following properties:
- All eigenvalues of $\mathbf{A}$ are real.
- There exists a full set of $n$ linearly independent eigenvectors of $\mathbf{A}$.
- If $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ are eigenvectors that correspond to different eigenvalues of $\mathbf{A}$, then $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ are orthogonal.
- Corresponding to an eigenvalue of algebraic multiplicity $m$, it is possible to choose $m$ mutually orthogonal eigenvectors, and hence $\mathbf{A}$ has a full set of $n$ linearly independent, orthogonal eigenvectors.
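A sketch (assuming NumPy, not part of the original slides) checking these properties for the symmetric matrix of Example 6; `np.linalg.eigh` is the routine intended for Hermitian/symmetric matrices and returns real eigenvalues with orthonormal eigenvectors.

```python
import numpy as np

A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
vals, V = np.linalg.eigh(A)             # eigh is designed for Hermitian matrices
print(vals)                             # [-1., -1., 2.]: all real, -1 repeated
print(np.allclose(V.T @ V, np.eye(3)))  # True: eigenvectors are orthonormal
```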
Ch 7.4: Basic Theory of Systems of First Order Linear Equations

The general theory of a system of $n$ first order linear equations
$$x_1' = p_{11}(t)x_1 + \cdots + p_{1n}(t)x_n + g_1(t),\quad \ldots,\quad x_n' = p_{n1}(t)x_1 + \cdots + p_{nn}(t)x_n + g_n(t)$$
parallels that of a single nth order linear equation. This system can be written as $\mathbf{x}' = \mathbf{P}(t)\mathbf{x} + \mathbf{g}(t)$, where
$$
\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ \vdots \\ x_n(t) \end{pmatrix},\quad
\mathbf{g}(t) = \begin{pmatrix} g_1(t) \\ \vdots \\ g_n(t) \end{pmatrix},\quad
\mathbf{P}(t) = \begin{pmatrix} p_{11}(t) & \cdots & p_{1n}(t) \\ \vdots & \ddots & \vdots \\ p_{n1}(t) & \cdots & p_{nn}(t) \end{pmatrix}.
$$

Vector Solutions of an ODE System
A vector $\mathbf{x} = \boldsymbol{\phi}(t)$ is a solution of $\mathbf{x}' = \mathbf{P}(t)\mathbf{x} + \mathbf{g}(t)$ if its components,
$$x_1 = \phi_1(t),\; x_2 = \phi_2(t),\; \ldots,\; x_n = \phi_n(t),$$
satisfy the system of equations on $I: \alpha < t < \beta$. Assuming $\mathbf{P}$ and $\mathbf{g}$ are continuous on $I$, such a solution exists by Theorem 7.1.2.

Example 1
Consider the homogeneous equation $\mathbf{x}' = \mathbf{P}(t)\mathbf{x}$ below, with the solution $\mathbf{x}$ as indicated:
$$
\mathbf{x}' = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\mathbf{x};\qquad
\mathbf{x}(t) = \begin{pmatrix} e^{3t} \\ 2e^{3t} \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t}.
$$
To see that $\mathbf{x}$ is a solution, substitute it into the equation and perform the indicated operations:
$$
\begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\begin{pmatrix} e^{3t} \\ 2e^{3t} \end{pmatrix}
= \begin{pmatrix} 3e^{3t} \\ 6e^{3t} \end{pmatrix}
= 3\begin{pmatrix} e^{3t} \\ 2e^{3t} \end{pmatrix} = \mathbf{x}'.
$$

Homogeneous Case; Vector Function Notation
As in Chapters 3 and 4, we first examine the general homogeneous equation $\mathbf{x}' = \mathbf{P}(t)\mathbf{x}$. The following notation for the vector functions $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(k)}, \ldots$ will be used:
$$
\mathbf{x}^{(1)}(t) = \begin{pmatrix} x_{11}(t) \\ x_{21}(t) \\ \vdots \\ x_{n1}(t) \end{pmatrix},\quad
\mathbf{x}^{(2)}(t) = \begin{pmatrix} x_{12}(t) \\ x_{22}(t) \\ \vdots \\ x_{n2}(t) \end{pmatrix},\quad \ldots,\quad
\mathbf{x}^{(n)}(t) = \begin{pmatrix} x_{1n}(t) \\ x_{2n}(t) \\ \vdots \\ x_{nn}(t) \end{pmatrix}.
$$

Theorem 7.4.1
If the vector functions $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ are solutions of the system $\mathbf{x}' = \mathbf{P}(t)\mathbf{x}$, then the linear combination $c_1\mathbf{x}^{(1)} + c_2\mathbf{x}^{(2)}$ is also a solution for any constants $c_1$ and $c_2$.
Note: By repeatedly applying this theorem, it can be seen that every finite linear combination
$$\mathbf{x} = c_1\mathbf{x}^{(1)}(t) + c_2\mathbf{x}^{(2)}(t) + \cdots + c_k\mathbf{x}^{(k)}(t)$$
of solutions $\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(k)}$ is itself a solution of $\mathbf{x}' = \mathbf{P}(t)\mathbf{x}$.

Example 2
Consider $\mathbf{x}' = \mathbf{P}(t)\mathbf{x}$ below, with the two solutions $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ as indicated:
$$
\mathbf{x}' = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\mathbf{x};\qquad
\mathbf{x}^{(1)}(t) = \begin{pmatrix} e^{3t} \\ 2e^{3t} \end{pmatrix},\quad
\mathbf{x}^{(2)}(t) = \begin{pmatrix} e^{-t} \\ -2e^{-t} \end{pmatrix}.
$$
Then $\mathbf{x} = c_1\mathbf{x}^{(1)} + c_2\mathbf{x}^{(2)}$ is also a solution:
$$
\begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\begin{pmatrix} c_1e^{3t} \\ 2c_1e^{3t} \end{pmatrix}
+ \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\begin{pmatrix} c_2e^{-t} \\ -2c_2e^{-t} \end{pmatrix}
= \begin{pmatrix} 3c_1e^{3t} \\ 6c_1e^{3t} \end{pmatrix} + \begin{pmatrix} -c_2e^{-t} \\ 2c_2e^{-t} \end{pmatrix}
= c_1\begin{pmatrix} 3e^{3t} \\ 6e^{3t} \end{pmatrix} + c_2\begin{pmatrix} -e^{-t} \\ 2e^{-t} \end{pmatrix} = \mathbf{x}'.
$$

Theorem 7.4.2
If $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(n)}$ are linearly independent solutions of the system $\mathbf{x}' = \mathbf{P}(t)\mathbf{x}$ for each point in $I: \alpha < t < \beta$, then each solution $\mathbf{x} = \boldsymbol{\phi}(t)$ can be expressed uniquely in the form
$$\mathbf{x} = c_1\mathbf{x}^{(1)}(t) + c_2\mathbf{x}^{(2)}(t) + \cdots + c_n\mathbf{x}^{(n)}(t).$$
Solutions $\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(n)}$ that are linearly independent at each point of $I$ are called fundamental solutions on $I$, and the general solution is given by the expression above.

The Wronskian and Linear Independence
The proof of Theorem 7.4.2 uses the fact that if $\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(n)}$ are linearly independent on $I$, then $\det\mathbf{X}(t) \neq 0$ on $I$, where
$$
\mathbf{X}(t) = \begin{pmatrix} x_{11}(t) & \cdots & x_{1n}(t) \\ \vdots & \ddots & \vdots \\ x_{n1}(t) & \cdots & x_{nn}(t) \end{pmatrix}.
$$
The Wronskian of $\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(n)}$ is defined as $W[\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(n)}](t) = \det\mathbf{X}(t)$. It follows that $W[\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(n)}](t) \neq 0$ on $I$ iff $\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(n)}$ are linearly independent at each point of $I$.

Theorem 7.4.3
If $\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(n)}$ are solutions of the system $\mathbf{x}' = \mathbf{P}(t)\mathbf{x}$ on $I: \alpha < t < \beta$, then the Wronskian $W[\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(n)}](t)$ is either identically zero on $I$ or else never zero on $I$. This result enables us to determine whether a given set of solutions is fundamental by evaluating the Wronskian at any single point $t$ in $\alpha < t < \beta$.

Theorem 7.4.4
Let
$$
\mathbf{e}^{(1)} = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix},\quad
\mathbf{e}^{(2)} = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix},\quad \ldots,\quad
\mathbf{e}^{(n)} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix},
$$
and let $\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(n)}$ be solutions of the system $\mathbf{x}' = \mathbf{P}(t)\mathbf{x}$, $\alpha < t < \beta$, that satisfy the initial conditions
$$\mathbf{x}^{(1)}(t_0) = \mathbf{e}^{(1)},\; \ldots,\; \mathbf{x}^{(n)}(t_0) = \mathbf{e}^{(n)},$$
where $t_0$ is any point in $\alpha < t < \beta$. Then $\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(n)}$ are fundamental solutions of $\mathbf{x}' = \mathbf{P}(t)\mathbf{x}$.
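An illustrative check (assuming NumPy, not part of the original slides) that the two solutions in Example 2 are fundamental: by Theorem 7.4.3 it suffices to evaluate the Wronskian at one convenient point, say $t = 0$.

```python
import numpy as np

def X(t):
    # Columns are x(1)(t) = (1,2)^T e^{3t} and x(2)(t) = (1,-2)^T e^{-t}.
    return np.array([[np.exp(3*t), np.exp(-t)],
                     [2*np.exp(3*t), -2*np.exp(-t)]])

print(np.linalg.det(X(0.0)))  # -4.0: nonzero, so the solutions are fundamental
```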
Ch 7.5: Homogeneous Linear Systems with Constant Coefficients

We consider here a homogeneous system of $n$ first order linear equations with constant, real coefficients:
$$x_1' = a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n$$
$$x_2' = a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n$$
$$\vdots$$
$$x_n' = a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n.$$
This system can be written as $\mathbf{x}' = \mathbf{Ax}$, where
$$
\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ \vdots \\ x_n(t) \end{pmatrix},\qquad
\mathbf{A} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix}.
$$

Equilibrium Solutions
Note that if $n = 1$, then the system reduces to
$$x' = ax \;\Rightarrow\; x(t) = ce^{at}.$$
Recall that $x = 0$ is the only equilibrium solution if $a \neq 0$. Further, $x = 0$ is an asymptotically stable solution if $a < 0$, since the other solutions approach $x = 0$ in this case; $x = 0$ is an unstable solution if $a > 0$, since the other solutions depart from $x = 0$. For $n > 1$, equilibrium solutions are similarly found by solving $\mathbf{Ax} = \mathbf{0}$. We assume $\det\mathbf{A} \neq 0$, so that $\mathbf{x} = \mathbf{0}$ is the only equilibrium solution. Determining whether $\mathbf{x} = \mathbf{0}$ is asymptotically stable or unstable is an important question here as well.

Phase Plane
When $n = 2$, the system reduces to
$$x_1' = a_{11}x_1 + a_{12}x_2,\qquad x_2' = a_{21}x_1 + a_{22}x_2.$$
This case can be visualized in the $x_1x_2$-plane, which is called the phase plane. In the phase plane, a direction field can be obtained by evaluating $\mathbf{Ax}$ at many points and plotting the resulting vectors, which are tangent to solution trajectories; a short plotting sketch is given below. A plot that shows representative solution trajectories is called a phase portrait. Examples of phase planes, direction fields, and phase portraits are given later in this section.

Solving a Homogeneous System
To construct a general solution to $\mathbf{x}' = \mathbf{Ax}$, assume a solution of the form $\mathbf{x} = \boldsymbol{\xi}e^{rt}$, where the exponent $r$ and the constant vector $\boldsymbol{\xi}$ are to be determined. Substituting $\mathbf{x} = \boldsymbol{\xi}e^{rt}$ into $\mathbf{x}' = \mathbf{Ax}$, we obtain
$$r\boldsymbol{\xi}e^{rt} = \mathbf{A}\boldsymbol{\xi}e^{rt} \;\Leftrightarrow\; r\boldsymbol{\xi} = \mathbf{A}\boldsymbol{\xi} \;\Leftrightarrow\; (\mathbf{A} - r\mathbf{I})\boldsymbol{\xi} = \mathbf{0}.$$
Thus to solve the homogeneous system $\mathbf{x}' = \mathbf{Ax}$, we must find the eigenvalues and eigenvectors of $\mathbf{A}$: $\mathbf{x} = \boldsymbol{\xi}e^{rt}$ is a solution of $\mathbf{x}' = \mathbf{Ax}$ provided $r$ is an eigenvalue and $\boldsymbol{\xi}$ a corresponding eigenvector of the coefficient matrix $\mathbf{A}$.
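As a sketch (assuming NumPy and Matplotlib, not part of the original slides) of how the direction fields referred to in this section can be produced: evaluate $\mathbf{Ax}$ on a grid of points and plot the resulting vectors, here for the coefficient matrix of Example 1 below.

```python
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])  # coefficient matrix of Example 1

x1, x2 = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))
u = A[0, 0]*x1 + A[0, 1]*x2  # first component of Ax at each grid point
v = A[1, 0]*x1 + A[1, 1]*x2  # second component
plt.quiver(x1, x2, u, v)
plt.xlabel("x1"); plt.ylabel("x2")
plt.show()
```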
Example 1: Direction Field (1 of 9)
Consider the homogeneous equation $\mathbf{x}' = \mathbf{Ax}$ below (a direction field for this system accompanies the original slide):
$$\mathbf{x}' = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\mathbf{x}.$$
Substituting $\mathbf{x} = \boldsymbol{\xi}e^{rt}$ and rewriting the system as $(\mathbf{A} - r\mathbf{I})\boldsymbol{\xi} = \mathbf{0}$, we obtain
$$
\begin{pmatrix} 1-r & 1 \\ 4 & 1-r \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
$$

Example 1: Eigenvalues (2 of 9)
Our solution has the form $\mathbf{x} = \boldsymbol{\xi}e^{rt}$, where $r$ and $\boldsymbol{\xi}$ are found by solving the eigenvalue problem above. We determine $r$ by solving $\det(\mathbf{A} - r\mathbf{I}) = 0$:
$$
\begin{vmatrix} 1-r & 1 \\ 4 & 1-r \end{vmatrix} = (1-r)^2 - 4 = r^2 - 2r - 3 = (r-3)(r+1).
$$
Thus $r_1 = 3$ and $r_2 = -1$.

Example 1: First Eigenvector (3 of 9)
Eigenvector for $r_1 = 3$: solve $(\mathbf{A} - r\mathbf{I})\boldsymbol{\xi} = \mathbf{0}$, i.e.,
$$
\begin{pmatrix} -2 & 1 \\ 4 & -2 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},
$$
by row reducing the augmented matrix:
$$
\left(\begin{array}{rr|r} -2 & 1 & 0 \\ 4 & -2 & 0 \end{array}\right)
\rightarrow
\left(\begin{array}{rr|r} 1 & -1/2 & 0 \\ 0 & 0 & 0 \end{array}\right)
\;\Rightarrow\;
\xi_1 = \tfrac{1}{2}\xi_2
\;\Rightarrow\;
\boldsymbol{\xi}^{(1)} = c\begin{pmatrix} 1/2 \\ 1 \end{pmatrix},\; c \text{ arbitrary};\quad \text{choose } \boldsymbol{\xi}^{(1)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.
$$

Example 1: Second Eigenvector (4 of 9)
Eigenvector for $r_2 = -1$: solve
$$
\begin{pmatrix} 2 & 1 \\ 4 & 2 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\;\rightarrow\;
\left(\begin{array}{rr|r} 1 & 1/2 & 0 \\ 0 & 0 & 0 \end{array}\right)
\;\Rightarrow\;
\xi_1 = -\tfrac{1}{2}\xi_2
\;\Rightarrow\;
\boldsymbol{\xi}^{(2)} = c\begin{pmatrix} -1/2 \\ 1 \end{pmatrix},\; c \text{ arbitrary};\quad \text{choose } \boldsymbol{\xi}^{(2)} = \begin{pmatrix} 1 \\ -2 \end{pmatrix}.
$$

Example 1: General Solution (5 of 9)
The corresponding solutions $\mathbf{x} = \boldsymbol{\xi}e^{rt}$ of $\mathbf{x}' = \mathbf{Ax}$ are
$$
\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t},\qquad
\mathbf{x}^{(2)}(t) = \begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t}.
$$
The Wronskian of these two solutions is
$$
W[\mathbf{x}^{(1)}, \mathbf{x}^{(2)}](t) = \begin{vmatrix} e^{3t} & e^{-t} \\ 2e^{3t} & -2e^{-t} \end{vmatrix} = -4e^{2t} \neq 0.
$$
Thus $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ are fundamental solutions, and the general solution of $\mathbf{x}' = \mathbf{Ax}$ is
$$
\mathbf{x}(t) = c_1\mathbf{x}^{(1)}(t) + c_2\mathbf{x}^{(2)}(t) = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t} + c_2\begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t}.
$$

Example 1: Phase Plane for x(1) (6 of 9)
To visualize the solution, consider first $\mathbf{x} = c_1\mathbf{x}^{(1)}$:
$$
x_1 = c_1e^{3t},\quad x_2 = 2c_1e^{3t}
\;\Leftrightarrow\;
e^{3t} = \frac{x_1}{c_1} = \frac{x_2}{2c_1}
\;\Leftrightarrow\;
x_2 = 2x_1.
$$
Thus $\mathbf{x}^{(1)}$ lies along the straight line $x_2 = 2x_1$, the line through the origin in the direction of the first eigenvector $\boldsymbol{\xi}^{(1)}$. If the solution is the trajectory of a particle with position $(x_1, x_2)$, then it is in the first quadrant when $c_1 > 0$ and in the third quadrant when $c_1 < 0$. In either case, the particle moves away from the origin as $t$ increases.

Example 1: Phase Plane for x(2) (7 of 9)
Next, consider $\mathbf{x} = c_2\mathbf{x}^{(2)}$:
$$x_1 = c_2e^{-t},\qquad x_2 = -2c_2e^{-t}.$$
Then $\mathbf{x}^{(2)}$ lies along the straight line $x_2 = -2x_1$, the line through the origin in the direction of the second eigenvector $\boldsymbol{\xi}^{(2)}$. The trajectory is in the fourth quadrant when $c_2 > 0$ and in the second quadrant when $c_2 < 0$; in either case, the particle moves toward the origin as $t$ increases.

Example 1: Phase Plane for General Solution (8 of 9)
The general solution is $\mathbf{x} = c_1\mathbf{x}^{(1)} + c_2\mathbf{x}^{(2)}$. As $t \to \infty$, $c_1\mathbf{x}^{(1)}$ is dominant and $c_2\mathbf{x}^{(2)}$ becomes negligible; thus, for $c_1 \neq 0$, all solutions asymptotically approach the line $x_2 = 2x_1$ as $t \to \infty$. Similarly, for $c_2 \neq 0$, all solutions asymptotically approach the line $x_2 = -2x_1$ as $t \to -\infty$. The origin is a saddle point, and it is unstable.

Example 1: Time Plots for General Solution (9 of 9)
In components, the general solution is
$$x_1(t) = c_1e^{3t} + c_2e^{-t},\qquad x_2(t) = 2c_1e^{3t} - 2c_2e^{-t}.$$
As an alternative to phase plane plots, we can graph $x_1$ or $x_2$ as a function of $t$. Note that when $c_1 = 0$, $x_1(t) = c_2e^{-t} \to 0$ as $t \to \infty$; otherwise, $x_1(t) = c_1e^{3t} + c_2e^{-t}$ grows unbounded as $t \to \infty$. Graphs of $x_2$ are obtained similarly.
Example 2: Direction Field (1 of 9)
Consider the homogeneous equation $\mathbf{x}' = \mathbf{Ax}$ below (again, a direction field accompanies the original slide):
$$\mathbf{x}' = \begin{pmatrix} -3 & \sqrt{2} \\ \sqrt{2} & -2 \end{pmatrix}\mathbf{x}.$$
Substituting $\mathbf{x} = \boldsymbol{\xi}e^{rt}$ and rewriting the system as $(\mathbf{A} - r\mathbf{I})\boldsymbol{\xi} = \mathbf{0}$, we obtain
$$
\begin{pmatrix} -3-r & \sqrt{2} \\ \sqrt{2} & -2-r \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
$$

Example 2: Eigenvalues (2 of 9)
We determine $r$ by solving $\det(\mathbf{A} - r\mathbf{I}) = 0$:
$$
\begin{vmatrix} -3-r & \sqrt{2} \\ \sqrt{2} & -2-r \end{vmatrix}
= (-3-r)(-2-r) - 2 = r^2 + 5r + 4 = (r+1)(r+4).
$$
Thus $r_1 = -1$ and $r_2 = -4$.

Example 2: First Eigenvector (3 of 9)
Eigenvector for $r_1 = -1$: solve
$$
\begin{pmatrix} -2 & \sqrt{2} \\ \sqrt{2} & -1 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\;\rightarrow\;
\left(\begin{array}{rr|r} 1 & -\sqrt{2}/2 & 0 \\ 0 & 0 & 0 \end{array}\right)
\;\Rightarrow\;
\xi_1 = \frac{\sqrt{2}}{2}\xi_2
\;\Rightarrow\;
\text{choose } \boldsymbol{\xi}^{(1)} = \begin{pmatrix} 1 \\ \sqrt{2} \end{pmatrix}.
$$

Example 2: Second Eigenvector (4 of 9)
Eigenvector for $r_2 = -4$: solve
$$
\begin{pmatrix} 1 & \sqrt{2} \\ \sqrt{2} & 2 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\;\rightarrow\;
\left(\begin{array}{rr|r} 1 & \sqrt{2} & 0 \\ 0 & 0 & 0 \end{array}\right)
\;\Rightarrow\;
\xi_1 = -\sqrt{2}\,\xi_2
\;\Rightarrow\;
\text{choose } \boldsymbol{\xi}^{(2)} = \begin{pmatrix} -\sqrt{2} \\ 1 \end{pmatrix}.
$$

Example 2: General Solution (5 of 9)
The corresponding solutions $\mathbf{x} = \boldsymbol{\xi}e^{rt}$ of $\mathbf{x}' = \mathbf{Ax}$ are
$$
\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ \sqrt{2} \end{pmatrix}e^{-t},\qquad
\mathbf{x}^{(2)}(t) = \begin{pmatrix} -\sqrt{2} \\ 1 \end{pmatrix}e^{-4t}.
$$
The Wronskian of these two solutions is
$$
W[\mathbf{x}^{(1)}, \mathbf{x}^{(2)}](t) = \begin{vmatrix} e^{-t} & -\sqrt{2}e^{-4t} \\ \sqrt{2}e^{-t} & e^{-4t} \end{vmatrix} = 3e^{-5t} \neq 0,
$$
so $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ are fundamental solutions, and the general solution of $\mathbf{x}' = \mathbf{Ax}$ is
$$
\mathbf{x}(t) = c_1\mathbf{x}^{(1)}(t) + c_2\mathbf{x}^{(2)}(t) = c_1\begin{pmatrix} 1 \\ \sqrt{2} \end{pmatrix}e^{-t} + c_2\begin{pmatrix} -\sqrt{2} \\ 1 \end{pmatrix}e^{-4t}.
$$

Example 2: Phase Plane for x(1) (6 of 9)
To visualize the solution, consider first $\mathbf{x} = c_1\mathbf{x}^{(1)}$:
$$x_1 = c_1e^{-t},\quad x_2 = \sqrt{2}c_1e^{-t} \;\Leftrightarrow\; x_2 = \sqrt{2}\,x_1.$$
Thus $\mathbf{x}^{(1)}$ lies along the straight line $x_2 = \sqrt{2}\,x_1$, the line through the origin in the direction of the first eigenvector $\boldsymbol{\xi}^{(1)}$. The trajectory is in the first quadrant when $c_1 > 0$ and in the third quadrant when $c_1 < 0$; in either case, the particle moves toward the origin as $t$ increases.

Example 2: Phase Plane for x(2) (7 of 9)
Next, consider $\mathbf{x} = c_2\mathbf{x}^{(2)}$:
$$x_1 = -\sqrt{2}c_2e^{-4t},\quad x_2 = c_2e^{-4t} \;\Leftrightarrow\; x_2 = -\frac{x_1}{\sqrt{2}}.$$
Then $\mathbf{x}^{(2)}$ lies along the straight line $x_2 = -x_1/\sqrt{2}$, the line through the origin in the direction of the second eigenvector $\boldsymbol{\xi}^{(2)}$. The trajectory is in the fourth quadrant when $c_2 > 0$ and in the second quadrant when $c_2 < 0$; in either case, the particle moves toward the origin as $t$ increases.

Example 2: Phase Plane for General Solution (8 of 9)
The general solution is $\mathbf{x} = c_1\mathbf{x}^{(1)} + c_2\mathbf{x}^{(2)}$. As $t \to \infty$, the faster-decaying term $c_2\mathbf{x}^{(2)}$ becomes negligible and $c_1\mathbf{x}^{(1)}$ is dominant. Thus, for $c_1 \neq 0$, all solutions asymptotically approach the origin along the line $x_2 = \sqrt{2}\,x_1$ as $t \to \infty$. Similarly, all solutions are unbounded as $t \to -\infty$. The origin is a node, and it is asymptotically stable.
Example 2: Time Plots for General Solution (9 of 9)
In components, the general solution $\mathbf{x} = c_1\mathbf{x}^{(1)} + c_2\mathbf{x}^{(2)}$ is
$$x_1(t) = c_1e^{-t} - \sqrt{2}c_2e^{-4t},\qquad x_2(t) = \sqrt{2}c_1e^{-t} + c_2e^{-4t}.$$
As an alternative to phase plane plots, we can graph $x_1$ or $x_2$ as a function of $t$; graphs of $x_2$ are obtained similarly to those of $x_1$.

2 x 2 Case: Real Eigenvalues, Saddle Points and Nodes
The previous two examples demonstrate the two main cases for a $2 \times 2$ real system with real and different eigenvalues:
- The eigenvalues have opposite signs, in which case the origin is a saddle point and is unstable.
- The eigenvalues have the same sign, in which case the origin is a node; it is asymptotically stable if the eigenvalues are negative and unstable if they are positive.

Eigenvalues, Eigenvectors and Fundamental Solutions
In general, for an $n \times n$ real linear system $\mathbf{x}' = \mathbf{Ax}$, there are three possibilities: all eigenvalues are real and different from each other; some eigenvalues occur in complex conjugate pairs; or some eigenvalues are repeated. If the eigenvalues $r_1, \ldots, r_n$ are real and different, then there are $n$ corresponding linearly independent eigenvectors $\boldsymbol{\xi}^{(1)}, \ldots, \boldsymbol{\xi}^{(n)}$. The associated solutions of $\mathbf{x}' = \mathbf{Ax}$ are
$$\mathbf{x}^{(1)}(t) = \boldsymbol{\xi}^{(1)}e^{r_1t},\; \ldots,\; \mathbf{x}^{(n)}(t) = \boldsymbol{\xi}^{(n)}e^{r_nt}.$$
Using the Wronskian, it can be shown that these solutions are linearly independent and hence form a fundamental set of solutions. Thus the general solution is
$$\mathbf{x} = c_1\boldsymbol{\xi}^{(1)}e^{r_1t} + \cdots + c_n\boldsymbol{\xi}^{(n)}e^{r_nt}.$$

Hermitian Case: Eigenvalues, Eigenvectors & Fundamental Solutions
If $\mathbf{A}$ is an $n \times n$ Hermitian matrix (for example, real and symmetric), then all eigenvalues $r_1, \ldots, r_n$ are real, although some may repeat. In any case, there are $n$ corresponding linearly independent and orthogonal eigenvectors $\boldsymbol{\xi}^{(1)}, \ldots, \boldsymbol{\xi}^{(n)}$, and the associated solutions
$$\mathbf{x}^{(1)}(t) = \boldsymbol{\xi}^{(1)}e^{r_1t},\; \ldots,\; \mathbf{x}^{(n)}(t) = \boldsymbol{\xi}^{(n)}e^{r_nt}$$
form a fundamental set of solutions.

Example 3: Hermitian Matrix (1 of 3)
Consider the homogeneous equation $\mathbf{x}' = \mathbf{Ax}$ with
$$\mathbf{A} = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}.$$
The eigenvalues were found previously, in Section 7.3: $r_1 = 2$, $r_2 = -1$, $r_3 = -1$, with corresponding eigenvectors
$$
\boldsymbol{\xi}^{(1)} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},\quad
\boldsymbol{\xi}^{(2)} = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix},\quad
\boldsymbol{\xi}^{(3)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}.
$$

Example 3: General Solution (2 of 3)
The fundamental solutions are
$$
\mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}e^{2t},\quad
\mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}e^{-t},\quad
\mathbf{x}^{(3)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}e^{-t},
$$
with general solution
$$
\mathbf{x} = c_1\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}e^{2t} + c_2\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}e^{-t} + c_3\begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}e^{-t}.
$$

Example 3: General Solution Behavior (3 of 3)
As $t \to \infty$, $c_1\mathbf{x}^{(1)}$ is dominant and $c_2\mathbf{x}^{(2)}$, $c_3\mathbf{x}^{(3)}$ become negligible. Thus, for $c_1 \neq 0$, all solutions $\mathbf{x}$ become unbounded as $t \to \infty$, while for $c_1 = 0$, all solutions $\mathbf{x} \to \mathbf{0}$ as $t \to \infty$. The initial points that give $c_1 = 0$ are those that lie in the plane determined by $\boldsymbol{\xi}^{(2)}$ and $\boldsymbol{\xi}^{(3)}$; solutions that start in this plane approach the origin as $t \to \infty$.
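As a sketch (assuming NumPy, not part of the original slides), the general solution $\mathbf{x}(t) = \sum_k c_k\boldsymbol{\xi}^{(k)}e^{r_kt}$ for Example 3 can be assembled directly from the eigendecomposition; the initial vector `x0` below is an arbitrary, hypothetical choice for illustration.

```python
import numpy as np

A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
r, Xi = np.linalg.eigh(A)       # real eigenvalues, orthonormal eigenvectors

x0 = np.array([1.0, 0.0, 0.0])  # hypothetical initial condition x(0)
c = Xi.T @ x0                   # orthonormal columns => coefficients c = Xi^T x0

def x(t):
    # x(t) = sum_k c_k * xi^(k) * exp(r_k t)
    return Xi @ (c * np.exp(r * t))

print(x(0.0))  # recovers x0
```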
Complex Eigenvalues and Fundamental Solutions
If some of the eigenvalues $r_1, \ldots, r_n$ occur in complex conjugate pairs, but otherwise are different, then there are still $n$ corresponding linearly independent solutions
$$\mathbf{x}^{(1)}(t) = \boldsymbol{\xi}^{(1)}e^{r_1t},\; \ldots,\; \mathbf{x}^{(n)}(t) = \boldsymbol{\xi}^{(n)}e^{r_nt},$$
which form a fundamental set of solutions. Some may be complex-valued, but real-valued solutions can be derived from them. This situation is examined in Ch 7.6. (If the coefficient matrix $\mathbf{A}$ is itself complex, then complex eigenvalues need not occur in conjugate pairs, but the solutions still have the above form when the eigenvalues are distinct, and these solutions may be complex-valued.)

Repeated Eigenvalues and Fundamental Solutions
If some of the eigenvalues $r_1, \ldots, r_n$ are repeated, then there may not be $n$ corresponding linearly independent solutions of the form $\boldsymbol{\xi}^{(k)}e^{r_kt}$. In order to obtain a fundamental set of solutions, it may then be necessary to seek additional solutions of another form. This situation is analogous to that for an nth order linear equation with constant coefficients, in which a repeated root gives rise to solutions of the form
$$e^{rt},\; te^{rt},\; t^2e^{rt},\; \ldots$$
The case of repeated eigenvalues is examined in Section 7.8.

Ch 7.6: Complex Eigenvalues

We consider again a homogeneous system of $n$ first order linear equations with constant, real coefficients,
$$x_1' = a_{11}x_1 + \cdots + a_{1n}x_n,\quad \ldots,\quad x_n' = a_{n1}x_1 + \cdots + a_{nn}x_n,$$
written as $\mathbf{x}' = \mathbf{Ax}$ with $\mathbf{x}(t)$ and $\mathbf{A}$ as in Section 7.5.

Conjugate Eigenvalues and Eigenvectors
We know that $\mathbf{x} = \boldsymbol{\xi}e^{rt}$ is a solution of $\mathbf{x}' = \mathbf{Ax}$, provided $r$ is an eigenvalue and $\boldsymbol{\xi}$ an eigenvector of $\mathbf{A}$. The eigenvalues $r_1, \ldots, r_n$ are the roots of $\det(\mathbf{A} - r\mathbf{I}) = 0$, and the corresponding eigenvectors satisfy $(\mathbf{A} - r\mathbf{I})\boldsymbol{\xi} = \mathbf{0}$. If $\mathbf{A}$ is real, then the coefficients of the polynomial equation $\det(\mathbf{A} - r\mathbf{I}) = 0$ are real, and hence any complex eigenvalues must occur in conjugate pairs: if $r_1 = \lambda + i\mu$ is an eigenvalue, then so is $r_2 = \lambda - i\mu$. The corresponding eigenvectors $\boldsymbol{\xi}^{(1)}, \boldsymbol{\xi}^{(2)}$ are conjugates also. To see this, recall that $\mathbf{A}$ and $\mathbf{I}$ have real entries, so that taking conjugates in $(\mathbf{A} - r_1\mathbf{I})\boldsymbol{\xi}^{(1)} = \mathbf{0}$ gives
$$(\mathbf{A} - \bar{r}_1\mathbf{I})\bar{\boldsymbol{\xi}}^{(1)} = \mathbf{0},\quad\text{i.e.,}\quad (\mathbf{A} - r_2\mathbf{I})\bar{\boldsymbol{\xi}}^{(1)} = \mathbf{0}.$$

Conjugate Solutions
It follows that the solutions
$$\mathbf{x}^{(1)} = \boldsymbol{\xi}^{(1)}e^{r_1t},\qquad \mathbf{x}^{(2)} = \boldsymbol{\xi}^{(2)}e^{r_2t}$$
corresponding to these eigenvalues and eigenvectors are conjugates as well, since
$$\mathbf{x}^{(2)} = \boldsymbol{\xi}^{(2)}e^{r_2t} = \bar{\boldsymbol{\xi}}^{(1)}e^{\bar{r}_1t} = \bar{\mathbf{x}}^{(1)}.$$

Real-Valued Solutions
Thus for complex conjugate eigenvalues $r_1$ and $r_2$, the corresponding solutions $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ are conjugates also. To obtain real-valued solutions, use the real and imaginary parts of either $\mathbf{x}^{(1)}$ or $\mathbf{x}^{(2)}$. To see this, let $\boldsymbol{\xi}^{(1)} = \mathbf{a} + i\mathbf{b}$. Then
$$
\mathbf{x}^{(1)} = \boldsymbol{\xi}^{(1)}e^{(\lambda + i\mu)t} = (\mathbf{a} + i\mathbf{b})e^{\lambda t}(\cos\mu t + i\sin\mu t)
= e^{\lambda t}(\mathbf{a}\cos\mu t - \mathbf{b}\sin\mu t) + ie^{\lambda t}(\mathbf{a}\sin\mu t + \mathbf{b}\cos\mu t)
= \mathbf{u}(t) + i\mathbf{v}(t),
$$
where
$$\mathbf{u}(t) = e^{\lambda t}(\mathbf{a}\cos\mu t - \mathbf{b}\sin\mu t),\qquad \mathbf{v}(t) = e^{\lambda t}(\mathbf{a}\sin\mu t + \mathbf{b}\cos\mu t)$$
are real-valued solutions of $\mathbf{x}' = \mathbf{Ax}$, and they can be shown to be linearly independent.

General Solution
To summarize, suppose $r_1 = \lambda + i\mu$, $r_2 = \lambda - i\mu$, and $r_3, \ldots, r_n$ are all real and distinct eigenvalues of $\mathbf{A}$. Let the corresponding eigenvectors be
$$\boldsymbol{\xi}^{(1)} = \mathbf{a} + i\mathbf{b},\quad \boldsymbol{\xi}^{(2)} = \mathbf{a} - i\mathbf{b},\quad \boldsymbol{\xi}^{(3)},\; \ldots,\; \boldsymbol{\xi}^{(n)}.$$
Then the general solution of $\mathbf{x}' = \mathbf{Ax}$ is
$$\mathbf{x} = c_1\mathbf{u}(t) + c_2\mathbf{v}(t) + c_3\boldsymbol{\xi}^{(3)}e^{r_3t} + \cdots + c_n\boldsymbol{\xi}^{(n)}e^{r_nt},$$
with $\mathbf{u}$ and $\mathbf{v}$ as above.
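A sketch (assuming NumPy, not part of the original slides) of extracting the real solutions $\mathbf{u}$ and $\mathbf{v}$ from a complex eigenpair, using the matrix of Example 1 below:

```python
import numpy as np

A = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])
vals, vecs = np.linalg.eig(A)
r, xi = vals[0], vecs[:, 0]  # one member of the conjugate pair (either works)
lam, mu = r.real, r.imag     # r = lambda + i*mu
a, b = xi.real, xi.imag      # xi = a + i*b

def u(t):
    return np.exp(lam*t) * (a*np.cos(mu*t) - b*np.sin(mu*t))

def v(t):
    return np.exp(lam*t) * (a*np.sin(mu*t) + b*np.cos(mu*t))

print(u(0.0), v(0.0))  # u(0) = a, v(0) = b
```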
Example 1: Direction Field (1 of 7)
Consider the homogeneous equation $\mathbf{x}' = \mathbf{Ax}$ below (a direction field accompanies the original slide):
$$\mathbf{x}' = \begin{pmatrix} -1/2 & 1 \\ -1 & -1/2 \end{pmatrix}\mathbf{x}.$$
Substituting $\mathbf{x} = \boldsymbol{\xi}e^{rt}$ and rewriting the system as $(\mathbf{A} - r\mathbf{I})\boldsymbol{\xi} = \mathbf{0}$, we obtain
$$
\begin{pmatrix} -1/2-r & 1 \\ -1 & -1/2-r \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
$$

Example 1: Complex Eigenvalues (2 of 7)
We determine $r$ by solving $\det(\mathbf{A} - r\mathbf{I}) = 0$:
$$
\begin{vmatrix} -1/2-r & 1 \\ -1 & -1/2-r \end{vmatrix} = (r + 1/2)^2 + 1 = r^2 + r + \frac{5}{4},
$$
so that
$$r = \frac{-1 \pm \sqrt{1^2 - 4(5/4)}}{2} = \frac{-1 \pm 2i}{2} = -\frac{1}{2} \pm i.$$
Therefore the eigenvalues are $r_1 = -1/2 + i$ and $r_2 = -1/2 - i$.

Example 1: First Eigenvector (3 of 7)
Eigenvector for $r_1 = -1/2 + i$: solve
$$
\begin{pmatrix} -i & 1 \\ -1 & -i \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
$$
by row reducing the augmented matrix:
$$
\left(\begin{array}{rr|r} -i & 1 & 0 \\ -1 & -i & 0 \end{array}\right)
\rightarrow
\left(\begin{array}{rr|r} 1 & i & 0 \\ 0 & 0 & 0 \end{array}\right)
\;\Rightarrow\;
\xi_1 = -i\xi_2
\;\Rightarrow\;
\text{choose } \boldsymbol{\xi}^{(1)} = \begin{pmatrix} 1 \\ i \end{pmatrix}.
$$
Thus
$$\boldsymbol{\xi}^{(1)} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} + i\begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$

Example 1: Second Eigenvector (4 of 7)
Eigenvector for $r_2 = -1/2 - i$: solve
$$
\begin{pmatrix} i & 1 \\ -1 & i \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
$$
by row reducing the augmented matrix:
$$
\left(\begin{array}{rr|r} i & 1 & 0 \\ -1 & i & 0 \end{array}\right)
\rightarrow
\left(\begin{array}{rr|r} 1 & -i & 0 \\ 0 & 0 & 0 \end{array}\right)
\;\Rightarrow\;
\xi_1 = i\xi_2
\;\Rightarrow\;
\text{choose } \boldsymbol{\xi}^{(2)} = \begin{pmatrix} 1 \\ -i \end{pmatrix}.
$$
Thus
$$\boldsymbol{\xi}^{(2)} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} + i\begin{pmatrix} 0 \\ -1 \end{pmatrix}.$$

Example 1: General Solution (5 of 7)
With $\mathbf{a} = (1, 0)^T$, $\mathbf{b} = (0, 1)^T$, $\lambda = -1/2$, and $\mu = 1$, the real-valued solutions are
$$
\mathbf{u}(t) = e^{-t/2}\left[\begin{pmatrix} 1 \\ 0 \end{pmatrix}\cos t - \begin{pmatrix} 0 \\ 1 \end{pmatrix}\sin t\right] = e^{-t/2}\begin{pmatrix} \cos t \\ -\sin t \end{pmatrix},
\qquad
\mathbf{v}(t) = e^{-t/2}\left[\begin{pmatrix} 1 \\ 0 \end{pmatrix}\sin t + \begin{pmatrix} 0 \\ 1 \end{pmatrix}\cos t\right] = e^{-t/2}\begin{pmatrix} \sin t \\ \cos t \end{pmatrix}.
$$
The Wronskian of these two solutions is
$$
W[\mathbf{u}, \mathbf{v}](t) = \begin{vmatrix} e^{-t/2}\cos t & e^{-t/2}\sin t \\ -e^{-t/2}\sin t & e^{-t/2}\cos t \end{vmatrix} = e^{-t} \neq 0.
$$
Thus $\mathbf{u}(t)$ and $\mathbf{v}(t)$ are real-valued fundamental solutions of $\mathbf{x}' = \mathbf{Ax}$, with general solution $\mathbf{x} = c_1\mathbf{u} + c_2\mathbf{v}$.

Example 1: Phase Plane (6 of 7)
The solutions
$$
\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= c_1e^{-t/2}\begin{pmatrix} \cos t \\ -\sin t \end{pmatrix} + c_2e^{-t/2}\begin{pmatrix} \sin t \\ \cos t \end{pmatrix}
$$
each approach the origin along a spiral path as $t \to \infty$, since the coordinates are products of a decaying exponential and sine or cosine factors. The graph of $\mathbf{u}$ passes through $(1, 0)$, since $\mathbf{u}(0) = (1, 0)^T$; similarly, the graph of $\mathbf{v}$ passes through $(0, 1)$. The origin is a spiral point, and it is asymptotically stable.

Example 1: Time Plots (7 of 7)
In components, the general solution $\mathbf{x} = c_1\mathbf{u} + c_2\mathbf{v}$ is
$$
x_1(t) = c_1e^{-t/2}\cos t + c_2e^{-t/2}\sin t,\qquad
x_2(t) = -c_1e^{-t/2}\sin t + c_2e^{-t/2}\cos t.
$$
As an alternative to phase plane plots, we can graph $x_1$ or $x_2$ as a function of $t$; each such plot is a decaying oscillation as $t \to \infty$.

Spiral Points, Centers, Eigenvalues, and Trajectories
In the previous example, the origin was a spiral point and was asymptotically stable. If the real part of the complex eigenvalues is positive, then the trajectories spiral away from the origin, unbounded, and the origin is an unstable spiral point. If the real part of the complex eigenvalues is zero, then the trajectories circle the origin, neither approaching nor departing; the origin is then called a center and is stable, but not asymptotically stable.
In this case the trajectories are periodic in time, and the direction of motion along a trajectory depends on the entries of $\mathbf{A}$.

Example 2: Second Order System with Parameter (1 of 2)
The system $\mathbf{x}' = \mathbf{Ax}$ below contains a parameter $\alpha$:
$$\mathbf{x}' = \begin{pmatrix} \alpha & 2 \\ -2 & 0 \end{pmatrix}\mathbf{x}.$$
Substituting $\mathbf{x} = \boldsymbol{\xi}e^{rt}$ and rewriting the system as $(\mathbf{A} - r\mathbf{I})\boldsymbol{\xi} = \mathbf{0}$, we obtain
$$
\begin{pmatrix} \alpha - r & 2 \\ -2 & -r \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
$$
Next, solve for $r$ in terms of $\alpha$:
$$
\begin{vmatrix} \alpha - r & 2 \\ -2 & -r \end{vmatrix} = r(r - \alpha) + 4 = r^2 - \alpha r + 4
\;\Rightarrow\;
r = \frac{\alpha \pm \sqrt{\alpha^2 - 16}}{2}.
$$

Example 2: Eigenvalue Analysis (2 of 2)
The eigenvalues $r = \bigl(\alpha \pm \sqrt{\alpha^2 - 16}\bigr)/2$ are given by the quadratic formula above.
- For $\alpha < -4$, both eigenvalues are real and negative, so the origin is an asymptotically stable node.
- For $\alpha > 4$, both eigenvalues are real and positive, so the origin is an unstable node.
- For $-4 < \alpha < 0$, the eigenvalues are complex with negative real part, so the origin is an asymptotically stable spiral point.
- For $0 < \alpha < 4$, the eigenvalues are complex with positive real part, so the origin is an unstable spiral point.
- For $\alpha = 0$, the eigenvalues are purely imaginary and the origin is a center; the trajectories are closed curves about the origin, and the solutions are periodic.
- For $\alpha = \pm 4$, the eigenvalues are real and equal, and the origin is a node (see Ch 7.8).

Second Order Solution Behavior and Eigenvalues: Three Main Cases
For second order systems, the three main cases are:
- The eigenvalues are real and have opposite signs; $\mathbf{x} = \mathbf{0}$ is a saddle point.
- The eigenvalues are real, distinct, and have the same sign; $\mathbf{x} = \mathbf{0}$ is a node.
- The eigenvalues are complex with nonzero real part; $\mathbf{x} = \mathbf{0}$ is a spiral point.
Other possibilities exist and occur as transitions between two of the cases listed above: a zero eigenvalue occurs during the transition between saddle point and node; real and equal eigenvalues occur during the transition between nodes and spiral points; and purely imaginary eigenvalues occur during the transition between asymptotically stable and unstable spiral points. In each case the eigenvalues are the roots
$$r = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
of the quadratic characteristic equation $ar^2 + br + c = 0$.
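A sketch (assuming NumPy, not part of the original slides) that tabulates these cases numerically for a few values of the parameter $\alpha$; the sample values below are arbitrary, one from each regime.

```python
import numpy as np

for alpha in (-5.0, -2.0, 0.0, 2.0, 5.0):  # one sample value per regime
    A = np.array([[alpha, 2.0],
                  [-2.0, 0.0]])
    r = np.linalg.eigvals(A)
    # Real negative pair: stable node; complex with negative real part:
    # stable spiral; purely imaginary: center; and so on.
    print(alpha, r)
```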
Ch 7.7: Fundamental Matrices

Suppose $\mathbf{x}^{(1)}(t), \ldots, \mathbf{x}^{(n)}(t)$ form a fundamental set of solutions for $\mathbf{x}' = \mathbf{P}(t)\mathbf{x}$ on $\alpha < t < \beta$. The matrix
$$
\boldsymbol{\Psi}(t) = \begin{pmatrix} x_1^{(1)}(t) & \cdots & x_1^{(n)}(t) \\ \vdots & \ddots & \vdots \\ x_n^{(1)}(t) & \cdots & x_n^{(n)}(t) \end{pmatrix},
$$
whose columns are $\mathbf{x}^{(1)}(t), \ldots, \mathbf{x}^{(n)}(t)$, is a fundamental matrix for the system $\mathbf{x}' = \mathbf{P}(t)\mathbf{x}$. This matrix is nonsingular since its columns are linearly independent, and hence $\det\boldsymbol{\Psi} \neq 0$. Note also that since $\mathbf{x}^{(1)}(t), \ldots, \mathbf{x}^{(n)}(t)$ are solutions of $\mathbf{x}' = \mathbf{P}(t)\mathbf{x}$, $\boldsymbol{\Psi}$ satisfies the matrix differential equation $\boldsymbol{\Psi}' = \mathbf{P}(t)\boldsymbol{\Psi}$.

Example 1
Consider the homogeneous equation $\mathbf{x}' = \mathbf{Ax}$ below:
$$\mathbf{x}' = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\mathbf{x}.$$
In Section 7.5, we found the following fundamental solutions for this system:
$$
\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t},\qquad
\mathbf{x}^{(2)}(t) = \begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t}.
$$
Thus a fundamental matrix for this system is
$$\boldsymbol{\Psi}(t) = \begin{pmatrix} e^{3t} & e^{-t} \\ 2e^{3t} & -2e^{-t} \end{pmatrix}.$$

Fundamental Matrices and General Solution
The general solution of $\mathbf{x}' = \mathbf{P}(t)\mathbf{x}$,
$$\mathbf{x} = c_1\mathbf{x}^{(1)}(t) + \cdots + c_n\mathbf{x}^{(n)}(t),$$
can be expressed as $\mathbf{x} = \boldsymbol{\Psi}(t)\mathbf{c}$, where $\mathbf{c}$ is a constant vector with components $c_1, \ldots, c_n$.

Fundamental Matrix & Initial Value Problem
Consider an initial value problem
$$\mathbf{x}' = \mathbf{P}(t)\mathbf{x},\qquad \mathbf{x}(t_0) = \mathbf{x}^0,$$
where $\alpha < t_0 < \beta$ and $\mathbf{x}^0$ is a given initial vector. The solution has the form $\mathbf{x} = \boldsymbol{\Psi}(t)\mathbf{c}$, so we choose $\mathbf{c}$ to satisfy $\mathbf{x}(t_0) = \mathbf{x}^0$. Recalling that $\boldsymbol{\Psi}(t_0)$ is nonsingular, it follows that
$$\boldsymbol{\Psi}(t_0)\mathbf{c} = \mathbf{x}^0 \;\Rightarrow\; \mathbf{c} = \boldsymbol{\Psi}^{-1}(t_0)\mathbf{x}^0.$$
Thus our solution $\mathbf{x} = \boldsymbol{\Psi}(t)\mathbf{c}$ can be expressed as
$$\mathbf{x} = \boldsymbol{\Psi}(t)\boldsymbol{\Psi}^{-1}(t_0)\mathbf{x}^0.$$

Recall: Theorem 7.4.4
Let $\mathbf{e}^{(1)}, \ldots, \mathbf{e}^{(n)}$ be the standard unit vectors, and let $\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(n)}$ be solutions of $\mathbf{x}' = \mathbf{P}(t)\mathbf{x}$ on $I: \alpha < t < \beta$ that satisfy the initial conditions
$$\mathbf{x}^{(1)}(t_0) = \mathbf{e}^{(1)},\; \ldots,\; \mathbf{x}^{(n)}(t_0) = \mathbf{e}^{(n)},\qquad \alpha < t_0 < \beta.$$
Then $\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(n)}$ are fundamental solutions of $\mathbf{x}' = \mathbf{P}(t)\mathbf{x}$.

Fundamental Matrix & Theorem 7.4.4
Suppose $\mathbf{x}^{(1)}(t), \ldots, \mathbf{x}^{(n)}(t)$ are the fundamental solutions given by Theorem 7.4.4, and denote the corresponding fundamental matrix by $\boldsymbol{\Phi}(t)$. Then the columns of $\boldsymbol{\Phi}(t)$ are $\mathbf{x}^{(1)}(t), \ldots, \mathbf{x}^{(n)}(t)$, and hence
$$\boldsymbol{\Phi}(t_0) = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} = \mathbf{I}.$$
Thus $\boldsymbol{\Phi}^{-1}(t_0) = \mathbf{I}$, and hence the solution of the corresponding initial value problem is
$$\mathbf{x} = \boldsymbol{\Phi}(t)\boldsymbol{\Phi}^{-1}(t_0)\mathbf{x}^0 = \boldsymbol{\Phi}(t)\mathbf{x}^0.$$
It follows that for any fundamental matrix $\boldsymbol{\Psi}(t)$,
$$\mathbf{x} = \boldsymbol{\Psi}(t)\boldsymbol{\Psi}^{-1}(t_0)\mathbf{x}^0 = \boldsymbol{\Phi}(t)\mathbf{x}^0 \;\Rightarrow\; \boldsymbol{\Phi}(t) = \boldsymbol{\Psi}(t)\boldsymbol{\Psi}^{-1}(t_0).$$

The Fundamental Matrix Φ and Varying Initial Conditions
Thus when the fundamental matrix $\boldsymbol{\Phi}(t)$ is used, the general solution to an IVP is
$$\mathbf{x} = \boldsymbol{\Phi}(t)\boldsymbol{\Phi}^{-1}(t_0)\mathbf{x}^0 = \boldsymbol{\Phi}(t)\mathbf{x}^0.$$
This representation is useful if the same system is to be solved for many different initial conditions, such as a physical system that can be started from many different initial states. Once $\boldsymbol{\Phi}(t)$ has been determined, the solution for each set of initial conditions is found by matrix multiplication, as indicated by the equation above. Thus $\boldsymbol{\Phi}(t)$ represents a linear transformation of the initial conditions $\mathbf{x}^0$ into the solution $\mathbf{x}(t)$ at time $t$.
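A numerical sketch (assuming NumPy, not part of the original slides) of the relation $\mathbf{x}(t) = \boldsymbol{\Psi}(t)\boldsymbol{\Psi}^{-1}(t_0)\mathbf{x}^0$ for Example 1, with $t_0 = 0$ and a hypothetical initial condition:

```python
import numpy as np

def Psi(t):
    # Fundamental matrix from Example 1: columns (1,2)e^{3t} and (1,-2)e^{-t}.
    return np.array([[np.exp(3*t), np.exp(-t)],
                     [2*np.exp(3*t), -2*np.exp(-t)]])

x0 = np.array([2.0, 1.0])          # hypothetical initial condition x(0)
c = np.linalg.solve(Psi(0.0), x0)  # c = Psi(0)^{-1} x0, without forming the inverse

t = 0.5
print(Psi(t) @ c)                  # x(t) = Psi(t) Psi(0)^{-1} x0
```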
Example 2: Find Φ(t) for a 2 x 2 System (1 of 5)
Find $\boldsymbol{\Phi}(t)$ such that $\boldsymbol{\Phi}(0) = \mathbf{I}$ for the system
$$\mathbf{x}' = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\mathbf{x}.$$
Solution: First, we must obtain $\mathbf{x}^{(1)}(t)$ and $\mathbf{x}^{(2)}(t)$ such that
$$\mathbf{x}^{(1)}(0) = \begin{pmatrix} 1 \\ 0 \end{pmatrix},\qquad \mathbf{x}^{(2)}(0) = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$
We know from previous results that the general solution is
$$\mathbf{x} = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t} + c_2\begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t}.$$
Every solution can be expressed in terms of the general solution, and we use this fact to find $\mathbf{x}^{(1)}(t)$ and $\mathbf{x}^{(2)}(t)$.

Example 2: Use General Solution (2 of 5)
To find $\mathbf{x}^{(1)}(t)$, express it in terms of the general solution and then find the coefficients $c_1$ and $c_2$ from the initial condition:
$$
\mathbf{x}^{(1)}(0) = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2\begin{pmatrix} 1 \\ -2 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix},
\quad\text{or equivalently}\quad
\begin{pmatrix} 1 & 1 \\ 2 & -2 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.
$$

Example 2: Solve for x(1)(t) (3 of 5)
Row reduction of the augmented matrix gives
$$
\left(\begin{array}{rr|r} 1 & 1 & 1 \\ 2 & -2 & 0 \end{array}\right)
\rightarrow
\left(\begin{array}{rr|r} 1 & 1 & 1 \\ 0 & -4 & -2 \end{array}\right)
\rightarrow
\left(\begin{array}{rr|r} 1 & 0 & 1/2 \\ 0 & 1 & 1/2 \end{array}\right)
\;\Rightarrow\;
c_1 = \frac{1}{2},\; c_2 = \frac{1}{2}.
$$
Thus
$$
\mathbf{x}^{(1)}(t) = \frac{1}{2}\begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t} + \frac{1}{2}\begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t}
= \begin{pmatrix} \tfrac{1}{2}e^{3t} + \tfrac{1}{2}e^{-t} \\ e^{3t} - e^{-t} \end{pmatrix}.
$$

Example 2: Solve for x(2)(t) (4 of 5)
To find $\mathbf{x}^{(2)}(t)$, we similarly solve
$$
\begin{pmatrix} 1 & 1 \\ 2 & -2 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}
\;\rightarrow\;
\left(\begin{array}{rr|r} 1 & 0 & 1/4 \\ 0 & 1 & -1/4 \end{array}\right)
\;\Rightarrow\;
c_1 = \frac{1}{4},\; c_2 = -\frac{1}{4}.
$$
Thus
$$
\mathbf{x}^{(2)}(t) = \frac{1}{4}\begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t} - \frac{1}{4}\begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t}
= \begin{pmatrix} \tfrac{1}{4}e^{3t} - \tfrac{1}{4}e^{-t} \\ \tfrac{1}{2}e^{3t} + \tfrac{1}{2}e^{-t} \end{pmatrix}.
$$

Example 2: Obtain Φ(t) (5 of 5)
The columns of $\boldsymbol{\Phi}(t)$ are $\mathbf{x}^{(1)}(t)$ and $\mathbf{x}^{(2)}(t)$, and thus
$$
\boldsymbol{\Phi}(t) = \begin{pmatrix}
\tfrac{1}{2}e^{3t} + \tfrac{1}{2}e^{-t} & \tfrac{1}{4}e^{3t} - \tfrac{1}{4}e^{-t} \\
e^{3t} - e^{-t} & \tfrac{1}{2}e^{3t} + \tfrac{1}{2}e^{-t}
\end{pmatrix}.
$$
Note that $\boldsymbol{\Phi}(t)$ is more complicated than the fundamental matrix $\boldsymbol{\Psi}(t)$ found in Example 1. However, now that we have $\boldsymbol{\Phi}(t)$, it is much easier to determine the solution for any set of initial conditions.

Matrix Exponential Functions
Consider the following two cases:
- The solution to $x' = ax$, $x(0) = x_0$, is $x = x_0e^{at}$, where $e^0 = 1$.
- The solution to $\mathbf{x}' = \mathbf{Ax}$, $\mathbf{x}(0) = \mathbf{x}^0$, is $\mathbf{x} = \boldsymbol{\Phi}(t)\mathbf{x}^0$, where $\boldsymbol{\Phi}(0) = \mathbf{I}$.
Comparing the form of the solutions in these two cases, we might expect $\boldsymbol{\Phi}(t)$ to have an exponential character. Indeed, it can be shown that $\boldsymbol{\Phi}(t) = e^{\mathbf{A}t}$, where
$$e^{\mathbf{A}t} = \sum_{n=0}^{\infty}\frac{\mathbf{A}^nt^n}{n!} = \mathbf{I} + \sum_{n=1}^{\infty}\frac{\mathbf{A}^nt^n}{n!}$$
is a well-defined matrix function that has all the usual properties of an exponential function (see the text for details). Thus the solution to $\mathbf{x}' = \mathbf{Ax}$, $\mathbf{x}(0) = \mathbf{x}^0$, is $\mathbf{x} = e^{\mathbf{A}t}\mathbf{x}^0$.

Example 3: Matrix Exponential Function
Consider the diagonal matrix
$$\mathbf{A} = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}.$$
Then
$$
\mathbf{A}^2 = \begin{pmatrix} 1 & 0 \\ 0 & 2^2 \end{pmatrix},\quad
\mathbf{A}^3 = \begin{pmatrix} 1 & 0 \\ 0 & 2^3 \end{pmatrix},\quad \ldots,\quad
\mathbf{A}^n = \begin{pmatrix} 1 & 0 \\ 0 & 2^n \end{pmatrix}.
$$
Thus
$$
e^{\mathbf{A}t} = \sum_{n=0}^{\infty}\frac{\mathbf{A}^nt^n}{n!}
= \begin{pmatrix} \sum_{n=0}^{\infty} t^n/n! & 0 \\ 0 & \sum_{n=0}^{\infty} (2t)^n/n! \end{pmatrix}
= \begin{pmatrix} e^t & 0 \\ 0 & e^{2t} \end{pmatrix}.
$$
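A sketch (assuming NumPy and SciPy, whose `scipy.linalg.expm` computes the matrix exponential) verifying numerically that $\boldsymbol{\Phi}(t) = e^{\mathbf{A}t}$ for the matrix of Example 2; this is illustrative and not part of the original slides.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])

def Phi(t):
    # Phi(t) from Example 2, built from x(1)(t) and x(2)(t).
    e3, em = np.exp(3*t), np.exp(-t)
    return np.array([[(e3 + em)/2, (e3 - em)/4],
                     [e3 - em, (e3 + em)/2]])

t = 0.3
print(np.allclose(expm(A*t), Phi(t)))  # True
```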
Coupled Systems of Equations
Recall that our constant coefficient homogeneous system

$$\begin{aligned}
x_1' &= a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n\\
&\;\;\vdots\\
x_n' &= a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n,
\end{aligned}$$

written as x' = Ax with

$$\mathbf{x}(t) = \begin{pmatrix}x_1(t)\\ \vdots\\ x_n(t)\end{pmatrix},\qquad
\mathbf{A} = \begin{pmatrix}a_{11} & \cdots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{n1} & \cdots & a_{nn}\end{pmatrix},$$

is a system of coupled equations that must be solved simultaneously to find all the unknown variables.

Uncoupled Systems & Diagonal Matrices
In contrast, if each equation involved only one variable, solved independently of the other equations, the task would be easier. In this case our system would have the form

$$\begin{aligned}
x_1' &= d_{11}x_1 + 0\,x_2 + \cdots + 0\,x_n\\
x_2' &= 0\,x_1 + d_{22}x_2 + \cdots + 0\,x_n\\
&\;\;\vdots\\
x_n' &= 0\,x_1 + 0\,x_2 + \cdots + d_{nn}x_n,
\end{aligned}$$

or x' = Dx, where D is a diagonal matrix:

$$\mathbf{x}(t) = \begin{pmatrix}x_1(t)\\ \vdots\\ x_n(t)\end{pmatrix},\qquad
\mathbf{D} = \begin{pmatrix}d_{11} & 0 & \cdots & 0\\ 0 & d_{22} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & d_{nn}\end{pmatrix}$$

Uncoupling: Transform Matrix T
In order to transform our given system x' = Ax of coupled equations into an uncoupled system x' = Dx, where D is a diagonal matrix, we will use the eigenvectors of A. Suppose A is n x n with n linearly independent eigenvectors ξ⁽¹⁾,…, ξ⁽ⁿ⁾ and corresponding eigenvalues λ₁,…, λₙ. Define the n x n matrices T and D using the eigenvectors and eigenvalues of A:

$$\mathbf{T} = \begin{pmatrix}\xi_1^{(1)} & \cdots & \xi_1^{(n)}\\ \vdots & \ddots & \vdots\\ \xi_n^{(1)} & \cdots & \xi_n^{(n)}\end{pmatrix},\qquad
\mathbf{D} = \begin{pmatrix}\lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_n\end{pmatrix}$$

Note that T is nonsingular, and hence T⁻¹ exists.

Uncoupling: T⁻¹AT = D
With A, T and D as defined above, the columns of AT are Aξ⁽¹⁾,…, Aξ⁽ⁿ⁾, and hence

$$\mathbf{AT} = \begin{pmatrix}\lambda_1\xi_1^{(1)} & \cdots & \lambda_n\xi_1^{(n)}\\ \vdots & \ddots & \vdots\\ \lambda_1\xi_n^{(1)} & \cdots & \lambda_n\xi_n^{(n)}\end{pmatrix} = \mathbf{TD}$$

It follows that T⁻¹AT = D.

Similarity Transformations
Thus, if the eigenvalues and eigenvectors of A are known, then A can be transformed into a diagonal matrix D, with T⁻¹AT = D. This process is known as a similarity transformation, and A is said to be similar to D. Alternatively, we say that A is diagonalizable.

Similarity Transformations: Hermitian Case
Recall that our similarity transformation of A has the form T⁻¹AT = D, where D is diagonal and the columns of T are eigenvectors of A. If A is Hermitian, then A has n linearly independent orthogonal eigenvectors ξ⁽¹⁾,…, ξ⁽ⁿ⁾, normalized so that (ξ⁽ⁱ⁾, ξ⁽ⁱ⁾) = 1 for i = 1,…, n and (ξ⁽ⁱ⁾, ξ⁽ᵏ⁾) = 0 for i ≠ k. With this selection of eigenvectors, it can be shown that T⁻¹ = T*. In this case we can write our similarity transformation as T*AT = D.

Nondiagonalizable A
Finally, if A is n x n with fewer than n linearly independent eigenvectors, then there is no matrix T such that T⁻¹AT = D. In this case, A is not similar to a diagonal matrix, and A is not diagonalizable. A numerical illustration of the diagonalizable case is sketched below.
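A quick numerical illustration of the similarity transformation, a sketch assuming numpy, using the matrix of Example 4 below:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])

lam, T = np.linalg.eig(A)      # columns of T are (unit-length) eigenvectors
D = np.linalg.inv(T) @ A @ T   # similarity transformation T^{-1} A T

print(lam)                     # eigenvalues 3 and -1
print(np.round(D, 12))         # diagonal matrix with the eigenvalues
```

Note that np.linalg.eig normalizes its eigenvector columns, so this T differs from the T built by hand in Example 4 by column scalings; the resulting D is the same.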
Example 4: Find Transformation Matrix T (1 of 2)
For the matrix A below, find the similarity transformation matrix T and show that A can be diagonalized.

$$\mathbf{A} = \begin{pmatrix}1 & 1\\ 4 & 1\end{pmatrix}$$

We already know that the eigenvalues are λ₁ = 3 and λ₂ = −1, with corresponding eigenvectors

$$\boldsymbol{\xi}^{(1)} = \begin{pmatrix}1\\2\end{pmatrix},\qquad \boldsymbol{\xi}^{(2)} = \begin{pmatrix}1\\-2\end{pmatrix}$$

Thus

$$\mathbf{T} = \begin{pmatrix}1 & 1\\ 2 & -2\end{pmatrix},\qquad \mathbf{D} = \begin{pmatrix}3 & 0\\ 0 & -1\end{pmatrix}$$

Example 4: Similarity Transformation (2 of 2)
To find T⁻¹, augment the identity to T and row reduce:

$$\left(\begin{array}{cc|cc}1 & 1 & 1 & 0\\ 2 & -2 & 0 & 1\end{array}\right) \to
\left(\begin{array}{cc|cc}1 & 1 & 1 & 0\\ 0 & -4 & -2 & 1\end{array}\right) \to
\left(\begin{array}{cc|cc}1 & 1 & 1 & 0\\ 0 & 1 & 1/2 & -1/4\end{array}\right) \to
\left(\begin{array}{cc|cc}1 & 0 & 1/2 & 1/4\\ 0 & 1 & 1/2 & -1/4\end{array}\right)
\;\Rightarrow\; \mathbf{T}^{-1} = \begin{pmatrix}1/2 & 1/4\\ 1/2 & -1/4\end{pmatrix}$$

Then

$$\mathbf{T}^{-1}\mathbf{AT} = \begin{pmatrix}1/2 & 1/4\\ 1/2 & -1/4\end{pmatrix}\left[\begin{pmatrix}1 & 1\\ 4 & 1\end{pmatrix}\begin{pmatrix}1 & 1\\ 2 & -2\end{pmatrix}\right]
= \begin{pmatrix}1/2 & 1/4\\ 1/2 & -1/4\end{pmatrix}\begin{pmatrix}3 & -1\\ 6 & 2\end{pmatrix}
= \begin{pmatrix}3 & 0\\ 0 & -1\end{pmatrix} = \mathbf{D}$$

Thus A is similar to D, and hence A is diagonalizable.

Fundamental Matrices for Similar Systems (1 of 3)
Recall our original system of differential equations x' = Ax. If A is n x n with n linearly independent eigenvectors, then A is diagonalizable: the eigenvectors form the columns of the nonsingular transform matrix T, and the eigenvalues are the corresponding diagonal entries of the diagonal matrix D. Suppose x satisfies x' = Ax, and let y be the n x 1 vector such that x = Ty; that is, let y be defined by y = T⁻¹x. Since x' = Ax and T is a constant matrix, we have Ty' = ATy, and hence y' = T⁻¹ATy = Dy. Therefore y satisfies y' = Dy, the system similar to x' = Ax. Both of these systems have fundamental matrices, which we examine next.

Fundamental Matrix for Diagonal System (2 of 3)
A fundamental matrix for y' = Dy is given by Q(t) = e^{Dt}. Recalling the definition of e^{Dt}, we have

$$\mathbf{Q}(t) = \sum_{n=0}^{\infty}\frac{\mathbf{D}^n t^n}{n!}
= \begin{pmatrix}\displaystyle\sum_{n=0}^{\infty}\frac{(\lambda_1 t)^n}{n!} & & \\ & \ddots & \\ & & \displaystyle\sum_{n=0}^{\infty}\frac{(\lambda_n t)^n}{n!}\end{pmatrix}
= \begin{pmatrix}e^{\lambda_1 t} & & \\ & \ddots & \\ & & e^{\lambda_n t}\end{pmatrix}$$

Fundamental Matrix for Original System (3 of 3)
To obtain a fundamental matrix Ψ(t) for x' = Ax, recall that the columns of Ψ(t) consist of fundamental solutions x satisfying x' = Ax. We also know x = Ty, and hence it follows that

$$\Psi = \mathbf{TQ} = \begin{pmatrix}\xi_1^{(1)} & \cdots & \xi_1^{(n)}\\ \vdots & \ddots & \vdots\\ \xi_n^{(1)} & \cdots & \xi_n^{(n)}\end{pmatrix}
\begin{pmatrix}e^{\lambda_1 t} & & \\ & \ddots & \\ & & e^{\lambda_n t}\end{pmatrix}
= \begin{pmatrix}\xi_1^{(1)}e^{\lambda_1 t} & \cdots & \xi_1^{(n)}e^{\lambda_n t}\\ \vdots & \ddots & \vdots\\ \xi_n^{(1)}e^{\lambda_1 t} & \cdots & \xi_n^{(n)}e^{\lambda_n t}\end{pmatrix}$$

The columns of Ψ(t) give the expected fundamental solutions ξ⁽ᵏ⁾e^{λₖt} of x' = Ax.

Example 5: Fundamental Matrices for Similar Systems
We now use the analysis and results of the last few slides. Applying the transformation x = Ty to the system x' = Ax below, the system becomes y' = T⁻¹ATy = Dy:

$$\mathbf{x}' = \begin{pmatrix}1 & 1\\ 4 & 1\end{pmatrix}\mathbf{x} \;\Rightarrow\; \mathbf{y}' = \begin{pmatrix}3 & 0\\ 0 & -1\end{pmatrix}\mathbf{y}$$

A fundamental matrix for y' = Dy is given by Q(t) = e^{Dt}:

$$\mathbf{Q}(t) = \begin{pmatrix}e^{3t} & 0\\ 0 & e^{-t}\end{pmatrix}$$

Thus a fundamental matrix Ψ(t) for x' = Ax is

$$\Psi(t) = \mathbf{TQ} = \begin{pmatrix}1 & 1\\ 2 & -2\end{pmatrix}\begin{pmatrix}e^{3t} & 0\\ 0 & e^{-t}\end{pmatrix}
= \begin{pmatrix}e^{3t} & e^{-t}\\ 2e^{3t} & -2e^{-t}\end{pmatrix}$$

A numerical check of this construction is sketched below.
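A minimal sketch, assuming numpy, that builds Ψ(t) = T e^{Dt} from the eigenpairs of Example 5 and confirms it satisfies the matrix equation Ψ' = AΨ:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
T = np.array([[1.0,  1.0],
              [2.0, -2.0]])            # eigenvector columns
lams = np.array([3.0, -1.0])           # matching eigenvalues

def Psi(t):
    return T @ np.diag(np.exp(lams * t))   # Psi(t) = T e^{Dt}

# Verify Psi'(t) = A Psi(t) at a sample time via a centered finite difference
t, h = 0.3, 1e-6
dPsi = (Psi(t + h) - Psi(t - h)) / (2 * h)
print(np.max(np.abs(dPsi - A @ Psi(t))))   # ~ 0
```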
Ch 7.8: Repeated Eigenvalues
We consider again a homogeneous system of n first order linear equations with constant real coefficients, x' = Ax. If the eigenvalues r₁,…, rₙ of A are real and different, then there are n linearly independent eigenvectors ξ⁽¹⁾,…, ξ⁽ⁿ⁾ and n linearly independent solutions of the form

$$\mathbf{x}^{(1)}(t) = \boldsymbol{\xi}^{(1)}e^{r_1 t},\ \dots,\ \mathbf{x}^{(n)}(t) = \boldsymbol{\xi}^{(n)}e^{r_n t}$$

If some of the eigenvalues r₁,…, rₙ are repeated, then there may not be n corresponding linearly independent solutions of the above form. In this case, we will seek additional solutions that are products of polynomials and exponential functions.

Example 1: Direction Field (1 of 12)
Consider the homogeneous equation x' = Ax below.

$$\mathbf{x}' = \begin{pmatrix}1 & -1\\ 1 & 3\end{pmatrix}\mathbf{x}$$

(A direction field for this system appears in the original slides.) Substituting x = ξe^{rt} and rewriting the system as (A − rI)ξ = 0, we obtain

$$\begin{pmatrix}1 - r & -1\\ 1 & 3 - r\end{pmatrix}\begin{pmatrix}\xi_1\\ \xi_2\end{pmatrix} = \begin{pmatrix}0\\0\end{pmatrix}$$

Example 1: Eigenvalues (2 of 12)
Solutions have the form x = ξe^{rt}, where r and ξ satisfy the equation above. To determine r, solve det(A − rI) = 0:

$$\begin{vmatrix}1 - r & -1\\ 1 & 3 - r\end{vmatrix} = (r - 1)(r - 3) + 1 = r^2 - 4r + 4 = (r - 2)^2$$

Thus r₁ = 2 and r₂ = 2.

Example 1: Eigenvectors (3 of 12)
To find the eigenvectors, we solve

$$(\mathbf{A} - r\mathbf{I})\boldsymbol{\xi} = \mathbf{0} \;\Leftrightarrow\; \begin{pmatrix}-1 & -1\\ 1 & 1\end{pmatrix}\begin{pmatrix}\xi_1\\ \xi_2\end{pmatrix} = \begin{pmatrix}0\\0\end{pmatrix}$$

by row reducing the augmented matrix:

$$\left(\begin{array}{cc|c}-1 & -1 & 0\\ 1 & 1 & 0\end{array}\right) \to
\left(\begin{array}{cc|c}1 & 1 & 0\\ 0 & 0 & 0\end{array}\right)
\;\Rightarrow\; \xi_1 + \xi_2 = 0
\;\Rightarrow\; \boldsymbol{\xi}^{(1)} = \begin{pmatrix}-\xi_2\\ \xi_2\end{pmatrix}
\;\to\; \text{choose } \boldsymbol{\xi}^{(1)} = \begin{pmatrix}1\\-1\end{pmatrix}$$

Thus there is only one eigenvector for the repeated eigenvalue r = 2.

Example 1: First Solution; and Second Solution, First Attempt (4 of 12)
The corresponding solution x = ξe^{rt} of x' = Ax is

$$\mathbf{x}^{(1)}(t) = \begin{pmatrix}1\\-1\end{pmatrix}e^{2t}$$

Since there is no second solution of the form x = ξe^{rt}, we need to try a different form. Based on methods for second order linear equations in Ch 3.5, we first try x = ξte^{2t}. Substituting x = ξte^{2t} into x' = Ax, we obtain

$$\boldsymbol{\xi}e^{2t} + 2\boldsymbol{\xi}te^{2t} = \mathbf{A}\boldsymbol{\xi}te^{2t}
\quad\text{or}\quad
2\boldsymbol{\xi}te^{2t} + \boldsymbol{\xi}e^{2t} - \mathbf{A}\boldsymbol{\xi}te^{2t} = \mathbf{0}$$

Example 1: Second Solution, Second Attempt (5 of 12)
In order for this equation to be satisfied for all t, it is necessary for the coefficients of te^{2t} and e^{2t} to both be zero. From the e^{2t} term, we see that ξ = 0, and hence there is no nonzero solution of the form x = ξte^{2t}. Since both te^{2t} and e^{2t} appear in the above equation, we next consider a solution of the form

$$\mathbf{x} = \boldsymbol{\xi}te^{2t} + \boldsymbol{\eta}e^{2t}$$

Example 1: Second Solution and its Defining Matrix Equations (6 of 12)
Substituting x = ξte^{2t} + ηe^{2t} into x' = Ax, we obtain

$$\boldsymbol{\xi}e^{2t} + 2\boldsymbol{\xi}te^{2t} + 2\boldsymbol{\eta}e^{2t} = \mathbf{A}\left(\boldsymbol{\xi}te^{2t} + \boldsymbol{\eta}e^{2t}\right)
\quad\text{or}\quad
2\boldsymbol{\xi}te^{2t} + (\boldsymbol{\xi} + 2\boldsymbol{\eta})e^{2t} = \mathbf{A}\boldsymbol{\xi}te^{2t} + \mathbf{A}\boldsymbol{\eta}e^{2t}$$

Equating coefficients yields Aξ = 2ξ and Aη = ξ + 2η, or

$$(\mathbf{A} - 2\mathbf{I})\boldsymbol{\xi} = \mathbf{0}\qquad\text{and}\qquad(\mathbf{A} - 2\mathbf{I})\boldsymbol{\eta} = \boldsymbol{\xi}$$

The first equation is satisfied if ξ is an eigenvector of A corresponding to the eigenvalue r = 2. Thus

$$\boldsymbol{\xi} = \begin{pmatrix}1\\-1\end{pmatrix}$$

Example 1: Solving for Second Solution (7 of 12)
Recall that

$$\mathbf{A} = \begin{pmatrix}1 & -1\\ 1 & 3\end{pmatrix},\qquad \boldsymbol{\xi} = \begin{pmatrix}1\\-1\end{pmatrix}$$

Thus to solve (A − 2I)η = ξ for η, we row reduce the corresponding augmented matrix:

$$\left(\begin{array}{cc|c}-1 & -1 & 1\\ 1 & 1 & -1\end{array}\right) \to
\left(\begin{array}{cc|c}1 & 1 & -1\\ 0 & 0 & 0\end{array}\right)
\;\Rightarrow\; \eta_2 = -1 - \eta_1
\;\Rightarrow\; \boldsymbol{\eta} = \begin{pmatrix}0\\-1\end{pmatrix} + k\begin{pmatrix}1\\-1\end{pmatrix}$$

Example 1: Second Solution (8 of 12)
Our second solution x = ξte^{2t} + ηe^{2t} is now

$$\mathbf{x} = \begin{pmatrix}1\\-1\end{pmatrix}te^{2t} + \begin{pmatrix}0\\-1\end{pmatrix}e^{2t} + k\begin{pmatrix}1\\-1\end{pmatrix}e^{2t}$$

Recalling that the first solution was x⁽¹⁾(t) = (1, −1)ᵀe^{2t}, we see that the third term of x is simply a multiple of x⁽¹⁾, so our second solution is taken to be

$$\mathbf{x}^{(2)}(t) = \begin{pmatrix}1\\-1\end{pmatrix}te^{2t} + \begin{pmatrix}0\\-1\end{pmatrix}e^{2t}$$

A numerical route to the generalized eigenvector η is sketched below.
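Since (A − 2I) is singular, the equation (A − 2I)η = ξ cannot be handed to an ordinary linear solver, but a least-squares solve finds one particular η. A minimal sketch assuming numpy:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0,  3.0]])
xi = np.array([1.0, -1.0])     # eigenvector for the double eigenvalue r = 2

# (A - 2I) is singular, so solve (A - 2I) eta = xi by least squares;
# any solution differs from the text's eta = (0, -1)^T by a multiple of xi
B = A - 2.0 * np.eye(2)
eta, *_ = np.linalg.lstsq(B, xi, rcond=None)
print(eta)                     # (-0.5, -0.5) = (0, -1)^T + (1/2) xi
print(B @ eta - xi)            # residual ~ 0, so eta is a genuine solution
```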
Example 1: General Solution (9 of 12)
The two solutions of x' = Ax are

$$\mathbf{x}^{(1)}(t) = \begin{pmatrix}1\\-1\end{pmatrix}e^{2t},\qquad
\mathbf{x}^{(2)}(t) = \begin{pmatrix}1\\-1\end{pmatrix}te^{2t} + \begin{pmatrix}0\\-1\end{pmatrix}e^{2t}$$

The Wronskian of these two solutions is

$$W[\mathbf{x}^{(1)}, \mathbf{x}^{(2)}](t) = \begin{vmatrix}e^{2t} & te^{2t}\\ -e^{2t} & -te^{2t} - e^{2t}\end{vmatrix} = -e^{4t} \neq 0$$

Thus x⁽¹⁾ and x⁽²⁾ are fundamental solutions, and the general solution of x' = Ax is

$$\mathbf{x}(t) = c_1\mathbf{x}^{(1)}(t) + c_2\mathbf{x}^{(2)}(t)
= c_1\begin{pmatrix}1\\-1\end{pmatrix}e^{2t} + c_2\left[\begin{pmatrix}1\\-1\end{pmatrix}te^{2t} + \begin{pmatrix}0\\-1\end{pmatrix}e^{2t}\right]$$

Example 1: Phase Plane (10 of 12)
From the general solution above, x is unbounded as t → ∞, and x → 0 as t → −∞. Further, it can be shown that as t → −∞, x → 0 asymptotic to the line x₂ = −x₁ determined by the first eigenvector. Similarly, as t → ∞, x is asymptotic to a line parallel to x₂ = −x₁.

Example 1: Phase Plane (11 of 12)
The origin is an improper node, and it is unstable. (The phase portrait appears in the original slides.) The pattern of trajectories is typical for two repeated eigenvalues with only one eigenvector. If the eigenvalues are negative, then the trajectories are similar but traversed in the inward direction; in that case the origin is an asymptotically stable improper node.

Example 1: Time Plots for General Solution (12 of 12)
(Time plots for x₁(t) appear in the original slides.) Componentwise, the general solution x can be written as

$$\begin{pmatrix}x_1(t)\\ x_2(t)\end{pmatrix} = \begin{pmatrix}c_1 e^{2t} + c_2 te^{2t}\\ -(c_1 + c_2)e^{2t} - c_2 te^{2t}\end{pmatrix}$$

General Case for Double Eigenvalues
Suppose the system x' = Ax has a double eigenvalue r = ρ and a single corresponding eigenvector ξ. The first solution is x⁽¹⁾ = ξe^{ρt}, where ξ satisfies (A − ρI)ξ = 0. As in Example 1, the second solution has the form

$$\mathbf{x}^{(2)} = \boldsymbol{\xi}te^{\rho t} + \boldsymbol{\eta}e^{\rho t}$$

where ξ is as above and η satisfies (A − ρI)η = ξ. Since ρ is an eigenvalue, det(A − ρI) = 0, and (A − ρI)η = b does not have a solution for all b. However, it can be shown that (A − ρI)η = ξ always has a solution. The vector η is called a generalized eigenvector.

Example 2: Fundamental Matrix Ψ (1 of 2)
Recall that a fundamental matrix Ψ(t) for x' = Ax has linearly independent solutions for its columns. In Example 1, our system x' = Ax was

$$\mathbf{x}' = \begin{pmatrix}1 & -1\\ 1 & 3\end{pmatrix}\mathbf{x}$$

and the two solutions we found were x⁽¹⁾(t) and x⁽²⁾(t) above. Thus the corresponding fundamental matrix is

$$\Psi(t) = \begin{pmatrix}e^{2t} & te^{2t}\\ -e^{2t} & -te^{2t} - e^{2t}\end{pmatrix} = e^{2t}\begin{pmatrix}1 & t\\ -1 & -t - 1\end{pmatrix}$$

Example 2: Fundamental Matrix Φ (2 of 2)
The fundamental matrix Φ(t) that satisfies Φ(0) = I can be found using Φ(t) = Ψ(t)Ψ⁻¹(0), where

$$\Psi(0) = \begin{pmatrix}1 & 0\\ -1 & -1\end{pmatrix},\qquad \Psi^{-1}(0) = \begin{pmatrix}1 & 0\\ -1 & -1\end{pmatrix},$$

and Ψ⁻¹(0) is found by row reduction:

$$\left(\begin{array}{cc|cc}1 & 0 & 1 & 0\\ -1 & -1 & 0 & 1\end{array}\right) \to
\left(\begin{array}{cc|cc}1 & 0 & 1 & 0\\ 0 & -1 & 1 & 1\end{array}\right) \to
\left(\begin{array}{cc|cc}1 & 0 & 1 & 0\\ 0 & 1 & -1 & -1\end{array}\right)$$

Thus

$$\Phi(t) = e^{2t}\begin{pmatrix}1 & t\\ -1 & -t - 1\end{pmatrix}\begin{pmatrix}1 & 0\\ -1 & -1\end{pmatrix}
= e^{2t}\begin{pmatrix}1 - t & -t\\ t & t + 1\end{pmatrix}$$

A numerical check that Φ(t) = e^{At} is sketched below.
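Even though A here is not diagonalizable, Φ(t) should still equal the matrix exponential e^{At}. A minimal sketch assuming numpy and scipy:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, -1.0],
              [1.0,  3.0]])
t = 0.4

# Phi(t) = e^{2t} [[1 - t, -t], [t, 1 + t]] from Example 2 above
Phi = np.exp(2 * t) * np.array([[1 - t,    -t],
                                [    t, 1 + t]])
print(np.max(np.abs(Phi - expm(A * t))))   # ~ 0, confirming Phi(t) = e^{At}
```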
Jordan Forms
If A is n x n with n linearly independent eigenvectors, then A can be diagonalized using a similarity transformation T⁻¹AT = D. The transform matrix T consists of eigenvectors of A, and the diagonal entries of D are the eigenvalues of A. In the case of repeated eigenvalues and fewer than n linearly independent eigenvectors, A can be transformed into a nearly diagonal matrix J, called the Jordan form of A, with T⁻¹AT = J.

Example 3: Transform Matrix (1 of 2)
In Example 1, our system x' = Ax was

$$\mathbf{x}' = \begin{pmatrix}1 & -1\\ 1 & 3\end{pmatrix}\mathbf{x}$$

with the repeated eigenvalue r₁ = r₂ = 2, eigenvector ξ, and generalized eigenvector η:

$$\boldsymbol{\xi} = \begin{pmatrix}1\\-1\end{pmatrix},\qquad
\boldsymbol{\eta} = \begin{pmatrix}0\\-1\end{pmatrix} + k\begin{pmatrix}1\\-1\end{pmatrix}$$

Choosing k = 0, the transform matrix T formed from the two vectors ξ and η is

$$\mathbf{T} = \begin{pmatrix}1 & 0\\ -1 & -1\end{pmatrix}$$

Example 3: Jordan Form (2 of 2)
The Jordan form J of A is defined by T⁻¹AT = J. Now

$$\mathbf{T} = \begin{pmatrix}1 & 0\\ -1 & -1\end{pmatrix},\qquad \mathbf{T}^{-1} = \begin{pmatrix}1 & 0\\ -1 & -1\end{pmatrix},$$

and hence

$$\mathbf{J} = \mathbf{T}^{-1}\mathbf{AT} = \begin{pmatrix}1 & 0\\ -1 & -1\end{pmatrix}\left[\begin{pmatrix}1 & -1\\ 1 & 3\end{pmatrix}\begin{pmatrix}1 & 0\\ -1 & -1\end{pmatrix}\right]
= \begin{pmatrix}2 & 1\\ 0 & 2\end{pmatrix}$$

Note that the eigenvalues of A, r₁ = 2 and r₂ = 2, are on the main diagonal of J, and that there is a 1 directly above the second eigenvalue. This pattern is typical of Jordan forms.

Ch 7.9: Nonhomogeneous Linear Systems
The general theory of a nonhomogeneous system of equations

$$\begin{aligned}
x_1' &= p_{11}(t)x_1 + p_{12}(t)x_2 + \cdots + p_{1n}(t)x_n + g_1(t)\\
&\;\;\vdots\\
x_n' &= p_{n1}(t)x_1 + p_{n2}(t)x_2 + \cdots + p_{nn}(t)x_n + g_n(t)
\end{aligned}$$

parallels that of a single nth order linear equation. This system can be written as x' = P(t)x + g(t), where

$$\mathbf{x}(t) = \begin{pmatrix}x_1(t)\\ \vdots\\ x_n(t)\end{pmatrix},\quad
\mathbf{g}(t) = \begin{pmatrix}g_1(t)\\ \vdots\\ g_n(t)\end{pmatrix},\quad
\mathbf{P}(t) = \begin{pmatrix}p_{11}(t) & \cdots & p_{1n}(t)\\ \vdots & \ddots & \vdots\\ p_{n1}(t) & \cdots & p_{nn}(t)\end{pmatrix}$$

General Solution
The general solution of x' = P(t)x + g(t) on I: α < t < β has the form

$$\mathbf{x} = c_1\mathbf{x}^{(1)}(t) + c_2\mathbf{x}^{(2)}(t) + \cdots + c_n\mathbf{x}^{(n)}(t) + \mathbf{v}(t)$$

where c₁x⁽¹⁾(t) + ⋯ + cₙx⁽ⁿ⁾(t) is the general solution of the homogeneous system x' = P(t)x, and v(t) is a particular solution of the nonhomogeneous system x' = P(t)x + g(t).

Diagonalization
Suppose x' = Ax + g(t), where A is an n x n diagonalizable constant matrix. Let T be the nonsingular transform matrix whose columns are the eigenvectors of A, and D the diagonal matrix whose diagonal entries are the corresponding eigenvalues of A. Defining y by x = Ty and substituting into x' = Ax + g(t), we obtain Ty' = ATy + g(t), or y' = T⁻¹ATy + T⁻¹g(t), or y' = Dy + h(t), where h(t) = T⁻¹g(t). Note that if we can solve the diagonal system y' = Dy + h(t) for y, then x = Ty is a solution to the original system.

Solving Diagonal System
Now y' = Dy + h(t) is a diagonal system of the form

$$\begin{aligned}
y_1' &= r_1 y_1 + h_1(t)\\
y_2' &= r_2 y_2 + h_2(t)\\
&\;\;\vdots\\
y_n' &= r_n y_n + h_n(t),
\end{aligned}$$

where r₁,…, rₙ are the eigenvalues of A. Thus y' = Dy + h(t) is an uncoupled system of n linear first order equations in the unknowns yₖ(t), each of which can be isolated as

$$y_k' = r_k y_k + h_k(t),\qquad k = 1,\dots,n,$$

and solved separately, using methods of Section 2.1:

$$y_k = e^{r_k t}\int_{t_0}^{t} e^{-r_k s}\,h_k(s)\,ds + c_k e^{r_k t},\qquad k = 1,\dots,n$$

A numerical sketch of this scalar formula is given below.
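The one-dimensional integrating-factor formula above is easy to evaluate by quadrature. A minimal sketch, assuming numpy and scipy, with a test equation whose exact solution is known:

```python
import numpy as np
from scipy.integrate import quad

# y(t) = e^{rt} * integral_{t0}^{t} e^{-rs} h(s) ds + y0 e^{r(t - t0)},
# where the last term fixes the initial condition y(t0) = y0
def solve_scalar(r, h, t0, y0, t):
    integral, _ = quad(lambda s: np.exp(-r * s) * h(s), t0, t)
    return np.exp(r * t) * integral + y0 * np.exp(r * (t - t0))

# Test equation y' = -3y + 2e^{-t}, y(0) = 0; exact y = e^{-t} - e^{-3t}
print(solve_scalar(-3.0, lambda s: 2 * np.exp(-s), 0.0, 0.0, 1.0))
print(np.exp(-1.0) - np.exp(-3.0))   # exact value for comparison
```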
Solving Original System
For this solution vector y, the solution to the original system x' = Ax + g(t) is then x = Ty, where T is the nonsingular transform matrix whose columns are the eigenvectors of A. Thus, when y is multiplied by T, the second term on the right side of yₖ produces the general solution of the homogeneous equation, while the integral term of yₖ produces a particular solution of the nonhomogeneous system.

Example 1: General Solution of Homogeneous Case (1 of 5)
Consider the nonhomogeneous system x' = Ax + g below.

$$\mathbf{x}' = \begin{pmatrix}-2 & 1\\ 1 & -2\end{pmatrix}\mathbf{x} + \begin{pmatrix}2e^{-t}\\ 3t\end{pmatrix} = \mathbf{A}\mathbf{x} + \mathbf{g}(t)$$

Note: A is a Hermitian matrix, since it is real and symmetric. The eigenvalues of A are r₁ = −3 and r₂ = −1, with corresponding eigenvectors

$$\boldsymbol{\xi}^{(1)} = \begin{pmatrix}1\\-1\end{pmatrix},\qquad \boldsymbol{\xi}^{(2)} = \begin{pmatrix}1\\1\end{pmatrix}$$

The general solution of the homogeneous system is then

$$\mathbf{x}(t) = c_1\begin{pmatrix}1\\-1\end{pmatrix}e^{-3t} + c_2\begin{pmatrix}1\\1\end{pmatrix}e^{-t}$$

Example 1: Transformation Matrix (2 of 5)
Consider next the transformation matrix T of eigenvectors. Using a comment from Section 7.7 and the fact that A is Hermitian, we have T⁻¹ = T* = Tᵀ, provided we normalize ξ⁽¹⁾ and ξ⁽²⁾ so that (ξ⁽¹⁾, ξ⁽¹⁾) = 1 and (ξ⁽²⁾, ξ⁽²⁾) = 1. Thus normalize as follows:

$$\boldsymbol{\xi}^{(1)} = \frac{1}{\sqrt{(1)(1) + (-1)(-1)}}\begin{pmatrix}1\\-1\end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix}1\\-1\end{pmatrix},\qquad
\boldsymbol{\xi}^{(2)} = \frac{1}{\sqrt{(1)(1) + (1)(1)}}\begin{pmatrix}1\\1\end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix}1\\1\end{pmatrix}$$

Then for this choice of eigenvectors,

$$\mathbf{T} = \frac{1}{\sqrt{2}}\begin{pmatrix}1 & 1\\ -1 & 1\end{pmatrix},\qquad
\mathbf{T}^{-1} = \frac{1}{\sqrt{2}}\begin{pmatrix}1 & -1\\ 1 & 1\end{pmatrix}$$

Example 1: Diagonal System and its Solution (3 of 5)
Under the transformation x = Ty, we obtain the diagonal system y' = Dy + T⁻¹g(t):

$$\begin{pmatrix}y_1'\\ y_2'\end{pmatrix} = \begin{pmatrix}-3 & 0\\ 0 & -1\end{pmatrix}\begin{pmatrix}y_1\\ y_2\end{pmatrix}
+ \frac{1}{\sqrt{2}}\begin{pmatrix}1 & -1\\ 1 & 1\end{pmatrix}\begin{pmatrix}2e^{-t}\\ 3t\end{pmatrix}
= \begin{pmatrix}-3y_1\\ -y_2\end{pmatrix} + \frac{1}{\sqrt{2}}\begin{pmatrix}2e^{-t} - 3t\\ 2e^{-t} + 3t\end{pmatrix}$$

Then, using methods of Section 2.1,

$$y_1' + 3y_1 = \frac{2e^{-t} - 3t}{\sqrt{2}} \;\Rightarrow\;
y_1 = \frac{\sqrt{2}}{2}e^{-t} - \frac{3}{\sqrt{2}}\left(\frac{t}{3} - \frac{1}{9}\right) + c_1 e^{-3t}$$

$$y_2' + y_2 = \frac{2e^{-t} + 3t}{\sqrt{2}} \;\Rightarrow\;
y_2 = \sqrt{2}\,te^{-t} + \frac{3}{\sqrt{2}}\,(t - 1) + c_2 e^{-t}$$

Example 1: Transform Back to Original System (4 of 5)
We next use the transformation x = Ty to obtain the solution to the original system x' = Ax + g(t):

$$\begin{pmatrix}x_1\\ x_2\end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix}1 & 1\\ -1 & 1\end{pmatrix}\begin{pmatrix}y_1\\ y_2\end{pmatrix}
= \begin{pmatrix}k_1 e^{-3t} + \left(k_2 + \tfrac{1}{2}\right)e^{-t} + t - \tfrac{4}{3} + te^{-t}\\[0.5ex]
-k_1 e^{-3t} + \left(k_2 - \tfrac{1}{2}\right)e^{-t} + 2t - \tfrac{5}{3} + te^{-t}\end{pmatrix},
\qquad k_1 = \frac{c_1}{\sqrt{2}},\ k_2 = \frac{c_2}{\sqrt{2}}$$

Example 1: Solution of Original System (5 of 5)
Regrouping, the solution x can be written as

$$\mathbf{x} = k_1\begin{pmatrix}1\\-1\end{pmatrix}e^{-3t} + k_2\begin{pmatrix}1\\1\end{pmatrix}e^{-t}
+ \frac{1}{2}\begin{pmatrix}1\\-1\end{pmatrix}e^{-t} + \begin{pmatrix}1\\1\end{pmatrix}te^{-t}
+ \begin{pmatrix}1\\2\end{pmatrix}t - \frac{1}{3}\begin{pmatrix}4\\5\end{pmatrix}$$

Note that the first two terms on the right side form the general solution to the homogeneous system, while the remaining terms are a particular solution to the nonhomogeneous system. (A numerical check of this particular solution is sketched below.)
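It is worth verifying that the particular part of this solution actually satisfies x' = Ax + g. A minimal sketch assuming numpy, using a finite difference for the derivative:

```python
import numpy as np

A = np.array([[-2.0,  1.0],
              [ 1.0, -2.0]])
def g(t):
    return np.array([2 * np.exp(-t), 3 * t])

# Particular part of the Example 1 solution (set k1 = k2 = 0)
def v(t):
    return (0.5 * np.array([1.0, -1.0]) * np.exp(-t)
            + np.array([1.0, 1.0]) * t * np.exp(-t)
            + np.array([1.0, 2.0]) * t
            - np.array([4.0, 5.0]) / 3)

# Check v'(t) = A v(t) + g(t) with a centered finite difference
t, h = 1.3, 1e-6
dv = (v(t + h) - v(t - h)) / (2 * h)
print(np.max(np.abs(dv - (A @ v(t) + g(t)))))   # ~ 0
```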
Nondiagonal Case
If A cannot be diagonalized (repeated eigenvalues and a shortage of eigenvectors), then it can be transformed to its Jordan form J, which is nearly diagonal. In this case the differential equations are not totally uncoupled, because some rows of J have two nonzero entries: an eigenvalue in the diagonal position, and a 1 in the adjacent position to the right of the diagonal. However, the equations for y₁,…, yₙ can still be solved consecutively, starting with yₙ. Then the solution x to the original system can be found using x = Ty.

Undetermined Coefficients
A second way of solving x' = P(t)x + g(t) is the method of undetermined coefficients. Assume P is a constant matrix, and that the components of g are polynomial, exponential or sinusoidal functions, or sums or products of these. The procedure for choosing the form of the solution is usually directly analogous to that given in Section 3.6. The main difference arises when g(t) has the form ue^{λt}, where λ is a simple eigenvalue of P. In this case g(t) matches the solution form of the homogeneous system x' = P(t)x, and as a result it is necessary to take the nonhomogeneous solution to be of the form ate^{λt} + be^{λt}. This form differs from the Section 3.6 analog, ate^{λt}.

Example 2: Undetermined Coefficients (1 of 5)
Consider again the nonhomogeneous system x' = Ax + g:

$$\mathbf{x}' = \begin{pmatrix}-2 & 1\\ 1 & -2\end{pmatrix}\mathbf{x} + \begin{pmatrix}2e^{-t}\\ 3t\end{pmatrix}
= \begin{pmatrix}-2 & 1\\ 1 & -2\end{pmatrix}\mathbf{x} + \begin{pmatrix}2\\0\end{pmatrix}e^{-t} + \begin{pmatrix}0\\3\end{pmatrix}t$$

Assume a particular solution of the form

$$\mathbf{v}(t) = \mathbf{a}te^{-t} + \mathbf{b}e^{-t} + \mathbf{c}t + \mathbf{d}$$

where the vector coefficients a, b, c, d are to be determined. Since r = −1 is an eigenvalue of A, it is necessary to include both ate⁻ᵗ and be⁻ᵗ, as mentioned on the previous slide.

Example 2: Matrix Equations for Coefficients (2 of 5)
Substituting v(t) in for x in our nonhomogeneous system x' = Ax + g, we obtain

$$-\mathbf{a}te^{-t} + (\mathbf{a} - \mathbf{b})e^{-t} + \mathbf{c}
= \mathbf{A}\mathbf{a}\,te^{-t} + \mathbf{A}\mathbf{b}\,e^{-t} + \mathbf{A}\mathbf{c}\,t + \mathbf{A}\mathbf{d}
+ \begin{pmatrix}2\\0\end{pmatrix}e^{-t} + \begin{pmatrix}0\\3\end{pmatrix}t$$

Equating coefficients of te⁻ᵗ, e⁻ᵗ, t, and 1, we conclude that

$$\mathbf{A}\mathbf{a} = -\mathbf{a},\quad
\mathbf{A}\mathbf{b} = \mathbf{a} - \mathbf{b} - \begin{pmatrix}2\\0\end{pmatrix},\quad
\mathbf{A}\mathbf{c} = -\begin{pmatrix}0\\3\end{pmatrix},\quad
\mathbf{A}\mathbf{d} = \mathbf{c}$$

Example 2: Solving Matrix Equation for a (3 of 5)
From the first equation, we see that a is an eigenvector of A corresponding to the eigenvalue r = −1, and hence has the form

$$\mathbf{a} = \begin{pmatrix}\alpha\\ \alpha\end{pmatrix}$$

We will see on the next slide that α = 1, and hence

$$\mathbf{a} = \begin{pmatrix}1\\1\end{pmatrix}$$

Example 2: Solving Matrix Equation for b (4 of 5)
Substituting aᵀ = (α, α) into the second equation and rewriting it as (A + I)b = a − (2, 0)ᵀ,

$$\begin{pmatrix}-1 & 1\\ 1 & -1\end{pmatrix}\begin{pmatrix}b_1\\ b_2\end{pmatrix} = \begin{pmatrix}\alpha - 2\\ \alpha\end{pmatrix}
\;\Rightarrow\;
\begin{pmatrix}1 & -1\\ 0 & 0\end{pmatrix}\begin{pmatrix}b_1\\ b_2\end{pmatrix} = \begin{pmatrix}2 - \alpha\\ 2\alpha - 2\end{pmatrix}$$

Consistency of the second row forces α = 1, so b₁ − b₂ = 1, and solving for b we obtain

$$\mathbf{b} = k\begin{pmatrix}1\\1\end{pmatrix} - \begin{pmatrix}0\\1\end{pmatrix}
\;\to\; \text{choose } k = 0 \;\to\; \mathbf{b} = \begin{pmatrix}0\\-1\end{pmatrix}$$

Example 2: Particular Solution (5 of 5)
Solving the third equation for c, and then the fourth equation for d, it is straightforward to obtain cᵀ = (1, 2) and dᵀ = (−4/3, −5/3); a short numerical sketch of these two solves is given below. Thus our particular solution of x' = Ax + g is

$$\mathbf{v}(t) = \begin{pmatrix}1\\1\end{pmatrix}te^{-t} - \begin{pmatrix}0\\1\end{pmatrix}e^{-t}
+ \begin{pmatrix}1\\2\end{pmatrix}t - \frac{1}{3}\begin{pmatrix}4\\5\end{pmatrix}$$

Comparing this to the result obtained in Example 1, we see that both particular solutions would be the same if we had chosen k = 1/2 for b on the previous slide, instead of k = 0.
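Unlike the singular equation for b, the equations Ac = −(0, 3)ᵀ and Ad = c involve the nonsingular matrix A, so they are ordinary linear solves. A minimal sketch assuming numpy:

```python
import numpy as np

A = np.array([[-2.0,  1.0],
              [ 1.0, -2.0]])

# A is nonsingular, so the last two coefficient equations are plain solves
c = np.linalg.solve(A, np.array([0.0, -3.0]))   # A c = -(0, 3)^T
d = np.linalg.solve(A, c)                       # A d = c
print(c)   # (1, 2)
print(d)   # (-4/3, -5/3)
```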
Variation of Parameters: Preliminaries
A more general way of solving x' = P(t)x + g(t) is the method of variation of parameters. Assume P(t) and g(t) are continuous on α < t < β, and let Ψ(t) be a fundamental matrix for the homogeneous system. Recall that the columns of Ψ are linearly independent solutions of x' = P(t)x, and hence Ψ(t) is invertible on the interval α < t < β, and also Ψ'(t) = P(t)Ψ(t). Next, recall that the solution of the homogeneous system can be expressed as x = Ψ(t)c. Analogous to Section 3.7, assume the particular solution of the nonhomogeneous system has the form x = Ψ(t)u(t), where u(t) is a vector to be found.

Variation of Parameters: Solution
We assume a particular solution of the form x = Ψ(t)u(t). Substituting this into x' = P(t)x + g(t), we obtain

$$\Psi'(t)\mathbf{u}(t) + \Psi(t)\mathbf{u}'(t) = \mathbf{P}(t)\Psi(t)\mathbf{u}(t) + \mathbf{g}(t)$$

Since Ψ'(t) = P(t)Ψ(t), this simplifies to

$$\mathbf{u}'(t) = \Psi^{-1}(t)\,\mathbf{g}(t)
\qquad\text{and thus}\qquad
\mathbf{u}(t) = \int \Psi^{-1}(t)\,\mathbf{g}(t)\,dt + \mathbf{c},$$

where the vector c is an arbitrary constant of integration. The general solution to x' = P(t)x + g(t) is therefore

$$\mathbf{x} = \Psi(t)\,\mathbf{c} + \Psi(t)\int_{t_1}^{t}\Psi^{-1}(s)\,\mathbf{g}(s)\,ds,\qquad t_1 \in (\alpha, \beta)\ \text{arbitrary}$$

Variation of Parameters: Initial Value Problem
For an initial value problem x' = P(t)x + g(t), x(t₀) = x⁰, the general solution is

$$\mathbf{x} = \Psi(t)\,\Psi^{-1}(t_0)\,\mathbf{x}^0 + \Psi(t)\int_{t_0}^{t}\Psi^{-1}(s)\,\mathbf{g}(s)\,ds$$

Alternatively, recall that the fundamental matrix Φ(t) satisfies Φ(t₀) = I, so the solution can also be written as

$$\mathbf{x} = \Phi(t)\,\mathbf{x}^0 + \Phi(t)\int_{t_0}^{t}\Phi^{-1}(s)\,\mathbf{g}(s)\,ds$$

In practice, it may be easier to row reduce matrices and solve the necessary equations than to compute Ψ⁻¹(t) and substitute into the formulas. See the next example.

Example 3: Variation of Parameters (1 of 3)
Consider again the nonhomogeneous system x' = Ax + g:

$$\mathbf{x}' = \begin{pmatrix}-2 & 1\\ 1 & -2\end{pmatrix}\mathbf{x} + \begin{pmatrix}2e^{-t}\\ 3t\end{pmatrix}$$

We have previously found the general solution to the homogeneous case, with corresponding fundamental matrix

$$\Psi(t) = \begin{pmatrix}e^{-3t} & e^{-t}\\ -e^{-3t} & e^{-t}\end{pmatrix}$$

Using the variation of parameters method, our solution is given by x = Ψ(t)u(t), where u(t) satisfies Ψ(t)u'(t) = g(t), or

$$\begin{pmatrix}e^{-3t} & e^{-t}\\ -e^{-3t} & e^{-t}\end{pmatrix}\begin{pmatrix}u_1'\\ u_2'\end{pmatrix} = \begin{pmatrix}2e^{-t}\\ 3t\end{pmatrix}$$

Example 3: Solving for u(t) (2 of 3)
Solving Ψ(t)u'(t) = g(t) by row reduction,

$$\left(\begin{array}{cc|c}e^{-3t} & e^{-t} & 2e^{-t}\\ -e^{-3t} & e^{-t} & 3t\end{array}\right) \to
\left(\begin{array}{cc|c}e^{-3t} & e^{-t} & 2e^{-t}\\ 0 & 2e^{-t} & 2e^{-t} + 3t\end{array}\right) \to
\left(\begin{array}{cc|c}e^{-3t} & 0 & e^{-t} - 3t/2\\ 0 & e^{-t} & e^{-t} + 3t/2\end{array}\right) \to
\left(\begin{array}{cc|c}1 & 0 & e^{2t} - 3te^{3t}/2\\ 0 & 1 & 1 + 3te^{t}/2\end{array}\right)$$

so u₁' = e^{2t} − (3/2)te^{3t} and u₂' = 1 + (3/2)te^{t}. It follows that

$$\mathbf{u}(t) = \begin{pmatrix}u_1\\ u_2\end{pmatrix}
= \begin{pmatrix}\tfrac{1}{2}e^{2t} - \tfrac{1}{2}te^{3t} + \tfrac{1}{6}e^{3t} + c_1\\[0.5ex]
t + \tfrac{3}{2}te^{t} - \tfrac{3}{2}e^{t} + c_2\end{pmatrix}$$

Example 3: Solving for x(t) (3 of 3)
Now x(t) = Ψ(t)u(t), and hence we multiply

$$\mathbf{x} = \begin{pmatrix}e^{-3t} & e^{-t}\\ -e^{-3t} & e^{-t}\end{pmatrix}
\begin{pmatrix}\tfrac{1}{2}e^{2t} - \tfrac{1}{2}te^{3t} + \tfrac{1}{6}e^{3t} + c_1\\[0.5ex]
t + \tfrac{3}{2}te^{t} - \tfrac{3}{2}e^{t} + c_2\end{pmatrix}$$

to obtain, after collecting terms and simplifying,

$$\mathbf{x} = c_1\begin{pmatrix}1\\-1\end{pmatrix}e^{-3t} + c_2\begin{pmatrix}1\\1\end{pmatrix}e^{-t}
+ \begin{pmatrix}1\\1\end{pmatrix}te^{-t} + \frac{1}{2}\begin{pmatrix}1\\-1\end{pmatrix}e^{-t}
+ \begin{pmatrix}1\\2\end{pmatrix}t - \frac{1}{3}\begin{pmatrix}4\\5\end{pmatrix}$$

Note that this is the same solution as in Example 1. A quadrature-based sketch of this computation appears below.
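When the integrals in u(t) are awkward by hand, they can be evaluated numerically. A minimal sketch assuming numpy and scipy; taking the lower limit 0 and c = 0 corresponds to u(0) = 0, so this computes the particular solution with x(0) = 0 (the same function the Laplace transform method recovers in Example 4 below):

```python
import numpy as np
from scipy.integrate import quad

def Psi(t):   # fundamental matrix of the homogeneous system
    return np.array([[ np.exp(-3 * t), np.exp(-t)],
                     [-np.exp(-3 * t), np.exp(-t)]])

def g(t):
    return np.array([2 * np.exp(-t), 3 * t])

# x_p(t) = Psi(t) * integral_0^t Psi^{-1}(s) g(s) ds, componentwise quadrature
def x_particular(t):
    f = lambda s, i: np.linalg.solve(Psi(s), g(s))[i]   # i-th entry of Psi^{-1} g
    u = np.array([quad(f, 0.0, t, args=(i,))[0] for i in range(2)])
    return Psi(t) @ u

print(x_particular(1.0))
```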
Here, the transform of a vector is the vector of component transforms, denoted by X(s):

$$\mathcal{L}\{\mathbf{x}(t)\} = \mathcal{L}\left\{\begin{pmatrix}x_1(t)\\ x_2(t)\end{pmatrix}\right\}
= \begin{pmatrix}\mathcal{L}\{x_1(t)\}\\ \mathcal{L}\{x_2(t)\}\end{pmatrix} = \mathbf{X}(s)$$

Then by extending Theorem 6.2.1, we obtain

$$\mathcal{L}\{\mathbf{x}'(t)\} = s\mathbf{X}(s) - \mathbf{x}(0)$$

Example 4: Laplace Transform (1 of 5)
Consider again the nonhomogeneous system x' = Ax + g:

$$\mathbf{x}' = \begin{pmatrix}-2 & 1\\ 1 & -2\end{pmatrix}\mathbf{x} + \begin{pmatrix}2e^{-t}\\ 3t\end{pmatrix}$$

Taking the Laplace transform of each term, we obtain

$$s\mathbf{X}(s) - \mathbf{x}(0) = \mathbf{A}\mathbf{X}(s) + \mathbf{G}(s)$$

where G(s) is the transform of g(t), given by

$$\mathbf{G}(s) = \begin{pmatrix}2/(s+1)\\ 3/s^2\end{pmatrix}$$

Example 4: Transfer Matrix (2 of 5)
If we take x(0) = 0, then the transformed equation becomes sX(s) = AX(s) + G(s), or

$$(s\mathbf{I} - \mathbf{A})\,\mathbf{X}(s) = \mathbf{G}(s)$$

Solving for X(s), we obtain

$$\mathbf{X}(s) = (s\mathbf{I} - \mathbf{A})^{-1}\,\mathbf{G}(s)$$

The matrix (sI − A)⁻¹ is called the transfer matrix.

Example 4: Finding the Transfer Matrix (3 of 5)
Then

$$\mathbf{A} = \begin{pmatrix}-2 & 1\\ 1 & -2\end{pmatrix} \;\Rightarrow\;
s\mathbf{I} - \mathbf{A} = \begin{pmatrix}s+2 & -1\\ -1 & s+2\end{pmatrix}$$

Solving for (sI − A)⁻¹, we obtain

$$(s\mathbf{I} - \mathbf{A})^{-1} = \frac{1}{(s+1)(s+3)}\begin{pmatrix}s+2 & 1\\ 1 & s+2\end{pmatrix}$$

Example 4: Transformed Solution (4 of 5)
Next, X(s) = (sI − A)⁻¹G(s), and hence

$$\mathbf{X}(s) = \frac{1}{(s+1)(s+3)}\begin{pmatrix}s+2 & 1\\ 1 & s+2\end{pmatrix}\begin{pmatrix}2/(s+1)\\ 3/s^2\end{pmatrix}
\quad\text{or}\quad
\mathbf{X}(s) = \begin{pmatrix}\dfrac{2(s+2)}{(s+1)^2(s+3)} + \dfrac{3}{s^2(s+1)(s+3)}\\[2ex]
\dfrac{2}{(s+1)^2(s+3)} + \dfrac{3(s+2)}{s^2(s+1)(s+3)}\end{pmatrix}$$

Example 4: Inverting the Transform (5 of 5)
To solve for x(t) = L⁻¹{X(s)}, use partial fraction expansions of both components of X(s), and then Table 6.2.1, to obtain

$$\mathbf{x} = -\frac{2}{3}\begin{pmatrix}1\\-1\end{pmatrix}e^{-3t} + \begin{pmatrix}2\\1\end{pmatrix}e^{-t}
+ \begin{pmatrix}1\\1\end{pmatrix}te^{-t} + \begin{pmatrix}1\\2\end{pmatrix}t - \frac{1}{3}\begin{pmatrix}4\\5\end{pmatrix}$$

Since we assumed x(0) = 0, this solution differs slightly from the previous particular solutions. (A symbolic sketch of this computation follows the summary slides.)

Summary (1 of 2)
The method of undetermined coefficients requires no integration but is limited in scope and may involve several sets of algebraic equations. Diagonalization requires finding the inverse of the transformation matrix and solving uncoupled first order linear equations. When the coefficient matrix is Hermitian, the inverse of the transformation matrix can be found without calculation, which is very helpful for large systems. The Laplace transform method involves matrix inversion, matrix multiplication, and inverse transforms. This method is particularly useful for problems with discontinuous or impulsive forcing functions.

Summary (2 of 2)
Variation of parameters is the most general method, but it involves solving linear algebraic equations with variable coefficients, integration, and matrix multiplication, and hence may be the most computationally complicated method. For many small systems with constant coefficients, all of these methods work well, and there may be little reason to select one over another.
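As a closing sketch, the transfer-matrix computation of Example 4 can be reproduced symbolically, assuming sympy is available; printed forms may differ cosmetically, and inverse transforms may carry Heaviside(t) factors, which equal 1 for t > 0:

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
A = sp.Matrix([[-2, 1], [1, -2]])
G = sp.Matrix([2 / (s + 1), 3 / s**2])   # transform of g(t) = (2e^{-t}, 3t)^T

transfer = (s * sp.eye(2) - A).inv().applyfunc(sp.simplify)
print(transfer)                           # 1/((s+1)(s+3)) [[s+2, 1], [1, s+2]]

X = (transfer * G).applyfunc(sp.simplify) # X(s) = (sI - A)^{-1} G(s)
print(sp.apart(X[0], s))                  # partial fractions of X1(s)

# Invert componentwise to recover x(t) with x(0) = 0
x = X.applyfunc(lambda F: sp.inverse_laplace_transform(F, s, t))
print(x.applyfunc(sp.simplify))
```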