Math 2280-001
Mon Apr 6
5.6 Matrix exponentials and linear systems: The analogy between first order systems of linear differential equations (Chapter 5) and scalar linear differential equations (Chapter 1) is much stronger than you may have expected. This will become especially clear on Wednesday, when we study section 5.7.
Definition: Consider the linear system of differential equations
\[ \vec{x}'(t) = A\,\vec{x}(t), \]
where $A$ is a constant $n \times n$ matrix, as usual. If $\vec{x}_1(t), \vec{x}_2(t), \ldots, \vec{x}_n(t)$ is a basis for the solution space to this system, then the matrix having these solutions as columns,
\[ \Phi(t) := \big[\,\vec{x}_1(t)\;\; \vec{x}_2(t)\;\; \cdots\;\; \vec{x}_n(t)\,\big], \]
is called a Fundamental Matrix (FM) for this system of differential equations. Notice that this is equivalent to saying that $X(t) = \Phi(t)$ solves
\[ X'(t) = A\,X, \qquad X(0) \text{ nonsingular, i.e. invertible} \]
(just look column by column). Notice that a FM is just the Wronskian matrix for a solution space basis.
Example 1, page 351:
\[ \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 4 & 2 \\ 3 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \]
\[ \det\begin{bmatrix} 4-\lambda & 2 \\ 3 & -1-\lambda \end{bmatrix} = \lambda^2 - 3\lambda - 10 = (\lambda+2)(\lambda-5) \]
$\lambda = -2$:
\[ \left[\begin{array}{cc|c} 6 & 2 & 0 \\ 3 & 1 & 0 \end{array}\right] \;\Rightarrow\; \vec{v} = \begin{bmatrix} 1 \\ -3 \end{bmatrix} \]
$\lambda = 5$:
\[ \left[\begin{array}{cc|c} -1 & 2 & 0 \\ 3 & -6 & 0 \end{array}\right] \;\Rightarrow\; \vec{v} = \begin{bmatrix} 2 \\ 1 \end{bmatrix} \]
general solution
\[ \vec{x}(t) = c_1\, e^{-2t}\begin{bmatrix} 1 \\ -3 \end{bmatrix} + c_2\, e^{5t}\begin{bmatrix} 2 \\ 1 \end{bmatrix}. \]
Possible FM:
\[ \Phi(t) = \begin{bmatrix} e^{-2t} & 2e^{5t} \\ -3e^{-2t} & e^{5t} \end{bmatrix}, \]
general solution:
\[ \Phi(t)\,\vec{c} = \begin{bmatrix} e^{-2t} & 2e^{5t} \\ -3e^{-2t} & e^{5t} \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}. \]
Theorem: If $\Phi(t)$ is a FM for the first order system $\vec{x}' = A\vec{x}$, then the solution to the IVP
\[ \vec{x}'(t) = A\vec{x}, \qquad \vec{x}(0) = \vec{x}_0 \]
is
\[ \vec{x}(t) = \Phi(t)\,\Phi(0)^{-1}\,\vec{x}_0. \]
proof: Since $\vec{x}(t) = \Phi(t)\big(\Phi(0)^{-1}\vec{x}_0\big)$ is a linear combination of the columns of $\Phi(t)$, it is a solution to the homogeneous DE $\vec{x}'(t) = A\vec{x}$. Its value at $t = 0$ is
\[ \vec{x}(0) = \Phi(0)\,\Phi(0)^{-1}\,\vec{x}_0 = I\,\vec{x}_0 = \vec{x}_0. \]
Exercise 1) Continuing with the example on page 1, use the formula above to solve the IVP
\[ \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 4 & 2 \\ 3 & -1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}, \qquad \begin{bmatrix} x(0) \\ y(0) \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}. \]
ans:
\[ \vec{x}(t) = \frac{3}{7}\, e^{-2t}\begin{bmatrix} 1 \\ -3 \end{bmatrix} + \frac{2}{7}\, e^{5t}\begin{bmatrix} 2 \\ 1 \end{bmatrix}. \]
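As a sanity check (not part of the text), the formula $\vec{x}(t) = \Phi(t)\,\Phi(0)^{-1}\vec{x}_0$ can be verified numerically in pure Python against the stated answer; the FM entries below are taken from the example above.

```python
import math

def phi(t):
    # fundamental matrix built from the two eigensolutions of the example
    return [[math.exp(-2*t), 2*math.exp(5*t)],
            [-3*math.exp(-2*t), math.exp(5*t)]]

def mat_vec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def inv2(M):
    # explicit 2x2 inverse formula
    d = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[M[1][1]/d, -M[0][1]/d], [-M[1][0]/d, M[0][0]/d]]

def solve_ivp(t, x0):
    # x(t) = Phi(t) Phi(0)^{-1} x0
    return mat_vec(phi(t), mat_vec(inv2(phi(0)), x0))

def answer(t):
    # the stated answer: (3/7) e^{-2t} (1,-3) + (2/7) e^{5t} (2,1)
    return [(3/7)*math.exp(-2*t) + (4/7)*math.exp(5*t),
            (-9/7)*math.exp(-2*t) + (2/7)*math.exp(5*t)]

for t in (0.0, 0.3, 1.0):
    got, want = solve_ivp(t, [1, -1]), answer(t)
    assert all(abs(g - w) < 1e-9 for g, w in zip(got, want))
```

Note that $\Phi(0)^{-1}\vec{x}_0 = \frac{1}{7}(3, 2)$, which is exactly where the coefficients $\frac{3}{7}$ and $\frac{2}{7}$ come from.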
Remark: If $\Phi(t)$ is a Fundamental Matrix for $\vec{x}' = A\vec{x}$ and if $C$ is an invertible matrix of the same size, then $\Phi(t)\,C$ is also a FM. Check: Does $X(t) = \Phi(t)\,C$ satisfy $X'(t) = A\,X$ with $X(0)$ nonsingular, i.e. invertible?
\[ \frac{d}{dt}\big(\Phi(t)\,C\big) = \Phi'(t)\,C \quad \text{(universal product rule)} \;=\; (A\,\Phi)\,C \;=\; A\,(\Phi\,C). \]
Also, $X(0) = \Phi(0)\,C$ is a product of invertible matrices, so it is invertible as well. Thus $X(t) = \Phi(t)\,C$ is an FM.
(Notice this argument would not work if we had used $C\,\Phi(t)$ instead.)
If $\Phi(t)$ is any FM for $\vec{x}' = A\vec{x}$, then $X(t) = \Phi(t)\,\Phi(0)^{-1}$ solves
\[ X'(t) = A\,X, \qquad X(0) = I. \]
Notice that there is only one matrix solution to this IVP, since the $j^{\text{th}}$ column $\vec{x}_j(t)$ is the (unique) solution to $\vec{x}'(t) = A\vec{x}$, $\vec{x}(0) = \vec{e}_j$.
Definition: The unique FM that solves
\[ X'(t) = A\,X, \qquad X(0) = I \]
is called the matrix exponential, $e^{tA}$.
This generalizes the scalar case. In fact, notice that if we wish to solve the IVP
\[ \vec{x}'(t) = A\vec{x}, \qquad \vec{x}(0) = \vec{x}_0, \]
the solution is $\vec{x}(t) = e^{tA}\,\vec{x}_0$, in analogy with Chapter 1.
Exercise 2) Continuing with our example, for the DE
\[ \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 4 & 2 \\ 3 & -1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} \]
with
\[ A = \begin{bmatrix} 4 & 2 \\ 3 & -1 \end{bmatrix} \quad\text{and FM}\quad \Phi(t) = \begin{bmatrix} e^{-2t} & 2e^{5t} \\ -3e^{-2t} & e^{5t} \end{bmatrix}, \]
compute $e^{tA}$. Check that the solution to the IVP in Exercise 1 is indeed $e^{tA}\,\vec{x}_0$.
> with(LinearAlgebra):
> A := Matrix([[4, 2], [3, -1]]):
> MatrixExponential(t*A);   # check work on previous page
\[ \begin{bmatrix} \dfrac{1}{7}e^{-2t} + \dfrac{6}{7}e^{5t} & -\dfrac{2}{7}e^{-2t} + \dfrac{2}{7}e^{5t} \\[8pt] -\dfrac{3}{7}e^{-2t} + \dfrac{3}{7}e^{5t} & \dfrac{6}{7}e^{-2t} + \dfrac{1}{7}e^{5t} \end{bmatrix} \]
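A quick pure-Python sanity check (assumed, not from the text) that Maple's answer really satisfies the defining IVP: $X(0) = I$ and $X'(t) = A\,X(t)$, where the derivative below is computed analytically entry by entry.

```python
import math

A = [[4, 2], [3, -1]]

def X(t):
    # Maple's matrix exponential for this example
    em, ep = math.exp(-2*t), math.exp(5*t)
    return [[(em + 6*ep)/7, (-2*em + 2*ep)/7],
            [(-3*em + 3*ep)/7, (6*em + ep)/7]]

def Xprime(t):
    # differentiate each entry: d/dt e^{-2t} = -2 e^{-2t}, d/dt e^{5t} = 5 e^{5t}
    em, ep = math.exp(-2*t), math.exp(5*t)
    return [[(-2*em + 30*ep)/7, (4*em + 10*ep)/7],
            [(6*em + 15*ep)/7, (-12*em + 5*ep)/7]]

# X(0) = I
X0 = X(0.0)
assert all(abs(X0[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))

# X'(t) = A X(t) at sample times
for t in (0.2, 1.0):
    Xt = X(t)
    AX = [[sum(A[i][k]*Xt[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    XP = Xprime(t)
    assert all(abs(AX[i][j] - XP[i][j]) < 1e-8 for i in range(2) for j in range(2))
```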
But wait!
Didn't you like how we derived Euler's formula using Taylor series?
Here's an alternate way to think about $e^{tA}$:
For $A$ an $n \times n$ matrix, consider the matrix series
\[ e^{A} := I + A + \frac{1}{2!}A^2 + \frac{1}{3!}A^3 + \cdots + \frac{1}{k!}A^k + \cdots \]
Convergence: pick a large number $M$ so that each entry of $A$ satisfies $|a_{ij}| \le M$. Then each entry of $A^2$ is bounded by $nM^2$, each entry of $A^3$ by $n^2 M^3$, \ldots, and each entry of $A^k$ by $n^{k-1}M^k \le (nM)^k$, so the matrix series converges absolutely in each entry (dominated by the Calc 2 series for the scalar $e^{Mn}$).
Then define
\[ e^{tA} := I + tA + \frac{t^2}{2!}A^2 + \frac{t^3}{3!}A^3 + \cdots + \frac{t^k}{k!}A^k + \cdots \]
Notice that for $X(t) = e^{tA}$ defined by the power series above, and assuming the true fact that we may differentiate the series term by term,
\[ X'(t) = 0 + A + \frac{2t}{2!}A^2 + \frac{3t^2}{3!}A^3 + \cdots + \frac{k\,t^{k-1}}{k!}A^k + \cdots \]
\[ = A\left( I + tA + \frac{t^2}{2!}A^2 + \cdots + \frac{t^{k-1}}{(k-1)!}A^{k-1} + \cdots \right) = A\,X. \]
Also,
\[ X(0) = I. \]
Thus, since there is only one matrix function that can satisfy
\[ X'(t) = A\,X, \qquad X(0) = I, \]
we deduce
Theorem: The matrix exponential $e^{tA}$ may be computed either of two ways:
\[ e^{tA} = \Phi(t)\,\Phi(0)^{-1} \]
\[ e^{tA} = I + tA + \frac{t^2}{2!}A^2 + \frac{t^3}{3!}A^3 + \cdots + \frac{t^k}{k!}A^k + \cdots \]
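The two descriptions can be compared numerically for our running $2\times 2$ example: a minimal pure-Python sketch (assumed, not from the text) sums the truncated power series and checks it against the closed form $\Phi(t)\,\Phi(0)^{-1}$ computed earlier.

```python
import math

A = [[4, 2], [3, -1]]

def mat_mul(M, P):
    return [[sum(M[i][k]*P[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def exp_series(t, N=60):
    # partial sum of I + tA + (t^2/2!) A^2 + ... + (t^N/N!) A^N
    X = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, N + 1):
        # term_k = term_{k-1} * (tA/k) = t^k A^k / k!
        term = mat_mul(term, [[t*a/k for a in row] for row in A])
        X = [[X[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return X

def exp_closed(t):
    # Phi(t) Phi(0)^{-1} for this example
    em, ep = math.exp(-2*t), math.exp(5*t)
    return [[(em + 6*ep)/7, (-2*em + 2*ep)/7],
            [(-3*em + 3*ep)/7, (6*em + ep)/7]]

for t in (0.0, 0.5, 1.0):
    S, C = exp_series(t), exp_closed(t)
    assert all(abs(S[i][j] - C[i][j]) < 1e-8 for i in range(2) for j in range(2))
```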
Exercise 3) Let $A$ be a diagonal matrix $\Lambda$,
\[ \Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}. \]
Use the Taylor series definition and the FM definition to verify twice that
\[ e^{t\Lambda} = \begin{bmatrix} e^{t\lambda_1} & 0 & \cdots & 0 \\ 0 & e^{t\lambda_2} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & e^{t\lambda_n} \end{bmatrix}. \]
Hint: products of diagonal matrices are diagonal, and the diagonal entries multiply, so
\[ \Lambda^k = \begin{bmatrix} \lambda_1^k & 0 & \cdots & 0 \\ 0 & \lambda_2^k & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^k \end{bmatrix}. \]
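Since $\Lambda^k = \mathrm{diag}(\lambda_i^k)$, each diagonal entry of the matrix series is just the scalar Taylor series for $e^{t\lambda_i}$; a tiny sketch (the sample $\lambda$ values are assumed for illustration) checks this entrywise claim.

```python
import math

def scalar_series(x, N=60):
    # partial sum of the scalar Taylor series for e^x
    return sum(x**k / math.factorial(k) for k in range(N))

t = 0.7
lams = [-2.0, 5.0, 0.5]   # hypothetical diagonal entries lambda_i

# entry (i,i) of the matrix series sum t^k Lambda^k / k! equals
# the scalar series for e^{t * lambda_i}, which sums to exp(t * lambda_i)
for lam in lams:
    assert abs(scalar_series(t*lam) - math.exp(t*lam)) < 1e-9
```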
Example: How to recompute $e^{tA}$ for
\[ A := \begin{bmatrix} 4 & 2 \\ 3 & -1 \end{bmatrix} \]
using power series and Math 2270: The similarity matrix made of eigenvectors of $A$,
\[ S = \begin{bmatrix} 1 & 2 \\ -3 & 1 \end{bmatrix}, \]
yields $A\,S = S\,\Lambda$:
\[ \begin{bmatrix} 4 & 2 \\ 3 & -1 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ -3 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 10 \\ 6 & 5 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ -3 & 1 \end{bmatrix}\begin{bmatrix} -2 & 0 \\ 0 & 5 \end{bmatrix}, \]
so $A = S\,\Lambda\,S^{-1}$. Thus $A^k = S\,\Lambda^k\,S^{-1}$ (telescoping product), so
\[ e^{tA} = I + tA + \frac{t^2}{2!}A^2 + \frac{t^3}{3!}A^3 + \cdots + \frac{t^k}{k!}A^k + \cdots \]
\[ = S\left( I + t\Lambda + \frac{t^2}{2!}\Lambda^2 + \frac{t^3}{3!}\Lambda^3 + \cdots + \frac{t^k}{k!}\Lambda^k + \cdots \right)S^{-1} = S\,e^{t\Lambda}\,S^{-1}. \]
\[ e^{tA} = \begin{bmatrix} 1 & 2 \\ -3 & 1 \end{bmatrix}\begin{bmatrix} e^{-2t} & 0 \\ 0 & e^{5t} \end{bmatrix}\cdot\frac{1}{7}\begin{bmatrix} 1 & -2 \\ 3 & 1 \end{bmatrix} = \frac{1}{7}\begin{bmatrix} 1 & 2 \\ -3 & 1 \end{bmatrix}\begin{bmatrix} e^{-2t} & -2e^{-2t} \\ 3e^{5t} & e^{5t} \end{bmatrix} \]
\[ = \frac{1}{7}\begin{bmatrix} e^{-2t} + 6e^{5t} & -2e^{-2t} + 2e^{5t} \\ -3e^{-2t} + 3e^{5t} & 6e^{-2t} + e^{5t} \end{bmatrix}, \]
which agrees with our original computation using the FM.
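The similarity computation can also be checked numerically; this pure-Python sketch (assumed, not from the text) multiplies out $S\,e^{t\Lambda}\,S^{-1}$ and compares with the closed form found via the FM.

```python
import math

S = [[1.0, 2.0], [-3.0, 1.0]]
Sinv = [[1/7, -2/7], [3/7, 1/7]]   # (1/7) [[1, -2], [3, 1]]

def mat_mul(M, P):
    return [[sum(M[i][k]*P[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def via_similarity(t):
    # S e^{t Lambda} S^{-1} with Lambda = diag(-2, 5)
    expL = [[math.exp(-2*t), 0.0], [0.0, math.exp(5*t)]]
    return mat_mul(mat_mul(S, expL), Sinv)

def via_fm(t):
    # Phi(t) Phi(0)^{-1} closed form from earlier
    em, ep = math.exp(-2*t), math.exp(5*t)
    return [[(em + 6*ep)/7, (-2*em + 2*ep)/7],
            [(-3*em + 3*ep)/7, (6*em + ep)/7]]

for t in (0.0, 0.4, 1.0):
    V, C = via_similarity(t), via_fm(t)
    assert all(abs(V[i][j] - C[i][j]) < 1e-9 for i in range(2) for j in range(2))
```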
Three important properties of matrix exponentials:
1) $e^{0} = I$, where $0$ is the $n \times n$ zero matrix. (Why is this true?)
2) If $AB = BA$ then $e^{A+B} = e^{A}\,e^{B} = e^{B}\,e^{A}$ (but this identity is not generally true when $A$ and $B$ don't commute). (See homework.)
3) $\left(e^{A}\right)^{-1} = e^{-A}$. (Combine (1) and (2).)
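Properties (2) and (3) can be illustrated numerically. In this sketch (the matrices $N_1$, $N_2$ are assumed for illustration, not from the text), $N_1 N_2 \ne N_2 N_1$, and indeed $e^{N_1} e^{N_2} \ne e^{N_1 + N_2}$; meanwhile $e^{N_1} e^{-N_1} = I$ since $N_1$ commutes with $-N_1$.

```python
import math

def mat_mul(M, P):
    return [[sum(M[i][k]*P[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def exp_series(M, N=40):
    # partial sum of I + M + M^2/2! + ...
    X = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, N + 1):
        term = mat_mul(term, [[m/k for m in row] for row in M])
        X = [[X[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return X

N1 = [[0.0, 1.0], [0.0, 0.0]]
N2 = [[0.0, 0.0], [1.0, 0.0]]

# e^{N1+N2} has cosh(1) in the (0,0) corner; e^{N1} e^{N2} has 2 there
both = exp_series([[N1[i][j] + N2[i][j] for j in range(2)] for i in range(2)])
prod = mat_mul(exp_series(N1), exp_series(N2))
assert abs(both[0][0] - math.cosh(1.0)) < 1e-9
assert abs(prod[0][0] - 2.0) < 1e-12
assert abs(both[0][0] - prod[0][0]) > 0.4   # the identity genuinely fails

# property (3): e^{N1} e^{-N1} = I
check = mat_mul(exp_series(N1), exp_series([[-v for v in row] for row in N1]))
assert all(abs(check[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```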
Using these properties there is a "straightforward" algorithm to compute $e^{tA}$ even when $A$ is not diagonalizable (and it doesn't require the use of chains). See Theorem 3 in section 5.6 (5.5 in the old text).
We'll study the details on Wednesday, but here's an example:
Exercise 4) Let
\[ A = \begin{bmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{bmatrix}. \]
Find $e^{tA}$ by writing $A = D + N$, where
\[ D = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}, \qquad N = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, \]
and using $e^{t(D+N)} = e^{tD}\,e^{tN}$. Hint: $N^3 = 0$, so the Taylor series for $e^{tN}$ is very short.
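A sketch check of this exercise (assumed, pure Python): since $D = 2I$ commutes with everything, $e^{tA} = e^{2t}\,e^{tN}$ with $e^{tN} = I + tN + \frac{t^2}{2}N^2$, and this short formula agrees with the full truncated power series.

```python
import math

def mat_mul3(M, P):
    return [[sum(M[i][k]*P[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def exp_series3(M, N=40):
    # partial sum of I + M + M^2/2! + ... for a 3x3 matrix M
    X = [[float(i == j) for j in range(3)] for i in range(3)]
    term = [[float(i == j) for j in range(3)] for i in range(3)]
    for k in range(1, N + 1):
        term = mat_mul3(term, [[m/k for m in row] for row in M])
        X = [[X[i][j] + term[i][j] for j in range(3)] for i in range(3)]
    return X

A = [[2.0, 1.0, 0.0], [0.0, 2.0, 1.0], [0.0, 0.0, 2.0]]

def exp_tA(t):
    # e^{2t} (I + tN + (t^2/2) N^2), since N^3 = 0
    s = math.exp(2*t)
    return [[s, s*t, s*t*t/2],
            [0.0, s, s*t],
            [0.0, 0.0, s]]

for t in (0.0, 0.6, 1.2):
    F = exp_series3([[t*a for a in row] for row in A])
    G = exp_tA(t)
    assert all(abs(F[i][j] - G[i][j]) < 1e-9 for i in range(3) for j in range(3))
```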
Variation of parameters: This is what fundamental matrices and matrix exponentials are especially good for... they let you solve non-homogeneous systems without guessing. Consider the non-homogeneous first order system
\[ \vec{x}'(t) = P(t)\,\vec{x} + \vec{f}(t). \qquad (*) \]
Let $\Phi(t)$ be an FM for the homogeneous system. Since $\Phi(t)$ is invertible for all $t$, we may do a change of functions for the non-homogeneous system: set $\vec{x}(t) = \Phi(t)\,\vec{u}(t)$ and plug into the non-homogeneous system $(*)$:
\[ \Phi'(t)\,\vec{u}(t) + \Phi(t)\,\vec{u}'(t) = P(t)\,\Phi(t)\,\vec{u}(t) + \vec{f}(t). \]
Since $\Phi' = P\,\Phi$, the first terms on each side cancel each other and we are left with
\[ \Phi(t)\,\vec{u}'(t) = \vec{f}(t), \qquad \vec{u}' = \Phi^{-1}\vec{f}, \]
which we can integrate to find a $\vec{u}(t)$, hence an $\vec{x}(t)$.
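A numerical preview (assumed, not from the text): taking our running example with $\Phi(t) = e^{tA}$ and a hypothetical constant forcing $\vec{f}$, the particular solution $\vec{x}_p(t) = \Phi(t)\int_0^t \Phi(s)^{-1}\vec{f}\,ds$, computed by the trapezoid rule, matches the closed form $A^{-1}\!\left(e^{tA} - I\right)\vec{f}$ that holds in this constant-$\vec{f}$ case.

```python
import math

A = [[4.0, 2.0], [3.0, -1.0]]
f = [1.0, 0.0]   # hypothetical constant forcing term

def exp_tA(t):
    # closed form for e^{tA} from earlier in the notes
    em, ep = math.exp(-2*t), math.exp(5*t)
    return [[(em + 6*ep)/7, (-2*em + 2*ep)/7],
            [(-3*em + 3*ep)/7, (6*em + ep)/7]]

def mat_vec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def inv2(M):
    d = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[M[1][1]/d, -M[0][1]/d], [-M[1][0]/d, M[0][0]/d]]

def x_particular(t, n=4000):
    # trapezoid rule for u(t) = integral_0^t Phi(s)^{-1} f ds, then x_p = Phi(t) u(t)
    h = t/n
    u = [0.0, 0.0]
    for i in range(n):
        g0 = mat_vec(inv2(exp_tA(i*h)), f)
        g1 = mat_vec(inv2(exp_tA((i+1)*h)), f)
        u = [u[j] + h*(g0[j] + g1[j])/2 for j in range(2)]
    return mat_vec(exp_tA(t), u)

t = 0.5
E = exp_tA(t)
# closed form A^{-1}(e^{tA} - I) f for constant f
w = mat_vec(inv2(A), mat_vec([[E[0][0]-1, E[0][1]], [E[1][0], E[1][1]-1]], f))
xp = x_particular(t)
assert all(abs(xp[j] - w[j]) < 1e-5 for j in range(2))
```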
We'll work examples on Wednesday ...