Math 304-504 Linear algebra
Lecture 34: Matrix exponentials.
Exponential function
Exponential function: f(x) = exp x = e^x, x ∈ R.
We have that e^{x+y} = e^x · e^y.
Definition 1. e^x = lim_{n→∞} (1 + x/n)^n.
In particular, e = lim_{n→∞} (1 + 1/n)^n ≈ 2.7182818.
Definition 2. e^x = 1 + x + x^2/2! + ··· + x^n/n! + ···
Definition 3. f(x) = e^x is the unique solution of the initial value problem f′ = f, f(0) = 1.
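Numerical aside (not part of the lecture): the three definitions can be compared with a few lines of Python, assuming only the standard library; the limit from Definition 1 and the truncated series from Definition 2 both approximate exp(x).

    import math

    x = 1.0
    limit_approx = (1 + x / 1_000_000) ** 1_000_000                     # Definition 1 with a large n
    series_approx = sum(x**n / math.factorial(n) for n in range(30))    # Definition 2, truncated
    print(limit_approx, series_approx, math.exp(x))                     # all close to e ≈ 2.7182818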
Complex exponentials
Definition. For any z ∈ C let
e^z = 1 + z + z^2/2! + ··· + z^n/n! + ···
Remark. A sequence of complex numbers z1 = x1 + iy1, z2 = x2 + iy2, ... converges to z = x + iy if xn → x and yn → y as n → ∞.
Theorem 1 If z = x + iy, x, y ∈ R, then e^z = e^x (cos y + i sin y).
In particular, e^{iφ} = cos φ + i sin φ, φ ∈ R.
Theorem 2 e^{z+w} = e^z · e^w for all z, w ∈ C.
Proposition e^{iφ} = cos φ + i sin φ for all φ ∈ R.
Proof: e^{iφ} = 1 + iφ + (iφ)^2/2! + ··· + (iφ)^n/n! + ···
The sequence 1, i, i^2, i^3, ..., i^n, ... is periodic:
1, i, −1, −i, 1, i, −1, −i, ...
It follows that
e^{iφ} = (1 − φ^2/2! + φ^4/4! − ··· + (−1)^k φ^{2k}/(2k)! + ···)
  + i (φ − φ^3/3! + φ^5/5! − ··· + (−1)^k φ^{2k+1}/(2k+1)! + ···)
= cos φ + i sin φ.
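Numerical aside (not part of the lecture): truncating the series for e^{iφ} and comparing it with cos φ + i sin φ in Python, assuming only the standard library, illustrates the proposition; the value of φ is arbitrary.

    import cmath, math

    phi = 0.7
    series = sum((1j * phi)**n / math.factorial(n) for n in range(25))   # truncated series for e^{iφ}
    print(series, complex(math.cos(phi), math.sin(phi)), cmath.exp(1j * phi))  # all three agree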
Matrix exponentials
Definition. For any square matrix A let
exp A = e^A = I + A + (1/2!)A^2 + ··· + (1/n!)A^n + ···
The matrix exponential is a limit of matrix polynomials.
Remark. Let A^(1), A^(2), ... be a sequence of n×n matrices, A^(m) = (a_ij^(m)). The sequence converges to an n×n matrix B = (b_ij) if a_ij^(m) → b_ij as m → ∞, i.e., if each entry converges.
Theorem The matrix exp A is well defined, i.e.,
the series converges.
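Numerical aside (not part of the lecture): assuming NumPy and SciPy are available, the partial sums of the series can be compared with SciPy's scipy.linalg.expm; the matrix below is only an illustration.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[2.0, 3.0], [1.0, 4.0]])
    S, term = np.eye(2), np.eye(2)
    for n in range(1, 30):            # partial sums of I + A + A^2/2! + ...
        term = term @ A / n           # term = A^n / n!
        S += term
    print(np.allclose(S, expm(A)))    # True: the series converges to exp(A)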
Properties of matrix exponentials
Theorem 1 If AB = BA then e^A e^B = e^B e^A = e^{A+B}.
Corollary (a) e^{tA} e^{sA} = e^{sA} e^{tA} = e^{(t+s)A}, t, s ∈ R.
(b) (e^A)^{−1} = e^{−A}.
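Numerical aside (not part of the lecture): a small NumPy/SciPy check, with illustrative matrices, shows that e^A e^B = e^{A+B} holds for commuting matrices but can fail when AB ≠ BA.

    import numpy as np
    from scipy.linalg import expm

    A = np.diag([1.0, 2.0]); B = np.diag([3.0, -1.0])        # AB = BA
    print(np.allclose(expm(A) @ expm(B), expm(A + B)))        # True
    C = np.array([[0.0, 1.0], [0.0, 0.0]])
    D = np.array([[0.0, 0.0], [1.0, 0.0]])                    # CD != DC
    print(np.allclose(expm(C) @ expm(D), expm(C + D)))        # False: hypothesis matters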
Theorem 2  d/dt e^{tA} = A e^{tA} = e^{tA} A.
Indeed, e^{tA} = I + tA + (t^2/2!)A^2 + ··· + (t^n/n!)A^n + ···,
and the series can be differentiated term by term:
d/dt ((t^n/n!) A^n) = (d/dt (t^n/n!)) A^n = (t^{n−1}/(n−1)!) A^n.
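Numerical aside (not part of the lecture): Theorem 2 can be checked with a central finite difference, assuming SciPy; the matrix, time, and step size are illustrative.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[2.0, 3.0], [1.0, 4.0]])
    t, h = 0.5, 1e-6
    deriv = (expm((t + h) * A) - expm((t - h) * A)) / (2 * h)   # central difference in t
    print(np.allclose(deriv, A @ expm(t * A), atol=1e-4))       # True: d/dt e^{tA} = A e^{tA}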
Lemma Let A be an n×n matrix and x ∈ R^n.
Then the vector function v(t) = e^{tA} x satisfies v′ = Av.
Proof: dv/dt = d/dt (e^{tA} x) = (A e^{tA}) x = A (e^{tA} x) = Av.
Theorem For any t0 ∈ R and x0 ∈ R^n the initial value problem
dv/dt = Av,  v(t0) = x0
has a unique solution v(t) = e^{(t−t0)A} x0.
Indeed, v(t) = e^{(t−t0)A} x0 = e^{tA} e^{−t0 A} x0 = e^{tA} x, where x = e^{−t0 A} x0 is a constant vector.
Problem. Solve a system of differential equations
dx/dt = 2x + 3y,
dy/dt = x + 4y
subject to initial conditions x(0) = 2, y(0) = 1.
Let v(t) = (x(t), y(t)). Then the system can be rewritten as dv/dt = Av, where A = [[2, 3], [1, 4]].
The unique solution is v(t) = e^{tA} x0, where x0 = (2, 1).
Example. A = diag(a1, a2, ..., ak).
A^n = diag(a1^n, a2^n, ..., ak^n), n = 1, 2, 3, ...
e^A = I + A + (1/2!)A^2 + ··· + (1/n!)A^n + ··· = diag(b1, b2, ..., bk),
where bi = 1 + ai + (1/2!)ai^2 + (1/3!)ai^3 + ··· = e^{ai}.
Theorem If A = diag(a1, a2, ..., ak) then
e^A = diag(e^{a1}, e^{a2}, ..., e^{ak}),
e^{tA} = diag(e^{a1 t}, e^{a2 t}, ..., e^{ak t}).
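Numerical aside (not part of the lecture): a one-line NumPy/SciPy check of the diagonal case, with illustrative entries.

    import numpy as np
    from scipy.linalg import expm

    a = np.array([1.0, -2.0, 0.5])
    print(np.allclose(expm(np.diag(a)), np.diag(np.exp(a))))   # True: exp(diag) = diag(exp)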
Let A be an n-by-n matrix and suppose there exists a basis v1, ..., vn for R^n consisting of eigenvectors of A. That is, Avk = λk vk, where λk ∈ R.
Then A = UBU^{−1}, where B = diag(λ1, λ2, ..., λn) and U is a change-of-coordinates matrix whose columns are vectors v1, v2, ..., vn.
A^2 = UBU^{−1} UBU^{−1} = UB^2 U^{−1},
A^3 = A^2 A = UB^2 U^{−1} UBU^{−1} = UB^3 U^{−1}.
Likewise, A^n = UB^n U^{−1} for any n ≥ 1.
I + 2A − 3A^2 = UIU^{−1} + 2UBU^{−1} − 3UB^2 U^{−1} = U(I + 2B − 3B^2)U^{−1}.
⟹ p(A) = U p(B) U^{−1} for any polynomial p(x).
⟹ e^{tA} = U e^{tB} U^{−1} for all t ∈ R.
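Numerical aside (not part of the lecture): assuming NumPy/SciPy, the identity e^{tA} = U e^{tB} U^{−1} can be verified with a computed eigendecomposition; the matrix below is the one used in the next example.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[2.0, 3.0], [1.0, 4.0]])
    lam, U = np.linalg.eig(A)               # columns of U are eigenvectors of A
    t = 0.3
    lhs = expm(t * A)
    rhs = U @ np.diag(np.exp(t * lam)) @ np.linalg.inv(U)
    print(np.allclose(lhs, rhs))            # True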
Example. A = [[2, 3], [1, 4]].
The eigenvalues of A: λ1 = 1, λ2 = 5.
Eigenvectors: v1 = (3, −1), v2 = (1, 1).
Therefore A = UBU^{−1}, where
B = [[1, 0], [0, 5]],  U = [[3, 1], [−1, 1]].
Then
e^{tA} = U e^{tB} U^{−1} = [[3, 1], [−1, 1]] [[e^t, 0], [0, e^{5t}]] · (1/4)[[1, −1], [1, 3]]
= (1/4) [[3e^t, e^{5t}], [−e^t, e^{5t}]] [[1, −1], [1, 3]]
= (1/4) [[3e^t + e^{5t}, −3e^t + 3e^{5t}], [−e^t + e^{5t}, e^t + 3e^{5t}]].
Problem. Solve a system of differential equations
dx/dt = 2x + 3y,
dy/dt = x + 4y
subject to initial conditions x(0) = 2, y(0) = 1.
The unique solution:
(x(t), y(t)) = e^{tA} x0, where A = [[2, 3], [1, 4]], x0 = (2, 1).
e^{tA} = (1/4) [[3e^t + e^{5t}, −3e^t + 3e^{5t}], [−e^t + e^{5t}, e^t + 3e^{5t}]]
⟹ x(t) = (3/4)e^t + (5/4)e^{5t},
   y(t) = −(1/4)e^t + (5/4)e^{5t}.
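Numerical aside (not part of the lecture): assuming SciPy, the closed form for e^{tA} and the resulting solution can be checked at an arbitrary time t.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[2.0, 3.0], [1.0, 4.0]]); x0 = np.array([2.0, 1.0])
    t = 0.7
    closed = 0.25 * np.array([[3*np.exp(t) + np.exp(5*t), -3*np.exp(t) + 3*np.exp(5*t)],
                              [ -np.exp(t) + np.exp(5*t),    np.exp(t) + 3*np.exp(5*t)]])
    print(np.allclose(closed, expm(t * A)))                        # True: closed form is correct
    print(np.allclose(closed @ x0,
                      [0.75*np.exp(t) + 1.25*np.exp(5*t),
                       -0.25*np.exp(t) + 1.25*np.exp(5*t)]))       # True: matches x(t), y(t)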


Example. A = [[0, 1, 0], [0, 0, 1], [0, 0, 0]], a Jordan block.
A^2 = [[0, 0, 1], [0, 0, 0], [0, 0, 0]], A^3 = O, A^n = O for n ≥ 3.
e^A = I + A + (1/2)A^2 = [[1, 1, 1/2], [0, 1, 1], [0, 0, 1]],
e^{tA} = I + tA + (t^2/2)A^2 = [[1, t, t^2/2], [0, 1, t], [0, 0, 1]].
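Numerical aside (not part of the lecture): since A is nilpotent the series terminates; a short SciPy check with an illustrative value of t.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
    t = 2.0
    print(np.allclose(expm(t * A),
                      np.eye(3) + t * A + (t**2 / 2) * (A @ A)))   # True: series stops at A^2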
Example. A = [[1, 1], [0, 1]], another Jordan block.
A^2 = [[1, 2], [0, 1]], A^3 = [[1, 3], [0, 1]], A^n = [[1, n], [0, 1]].
e^{tA} = I + tA + (t^2/2!)A^2 + ··· + (t^n/n!)A^n + ··· = [[a(t), b(t)], [0, a(t)]],
where a(t) = 1 + t + t^2/2! + t^3/3! + ··· = e^t,
b(t) = t + 2·t^2/2! + 3·t^3/3! + ··· = t e^t.
Thus e^{tA} = [[e^t, t e^t], [0, e^t]].
Example. A = [[λ, 1], [0, λ]], a general Jordan block.
We have that A = λI + B, where B = [[0, 1], [0, 0]].
Since (λI)B = B(λI), it follows that e^A = e^{λI} e^B. Similarly, e^{tA} = e^{tλI} e^{tB}.
e^{tλI} = [[e^{λt}, 0], [0, e^{λt}]] = e^{λt} I,
B^2 = O ⟹ e^{tB} = I + tB = [[1, t], [0, 1]].
Thus e^{tA} = e^{tλI} e^{tB} = [[e^{λt}, t e^{λt}], [0, e^{λt}]].
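Numerical aside (not part of the lecture): a SciPy check of the Jordan-block formula with illustrative values of λ and t.

    import numpy as np
    from scipy.linalg import expm

    lam, t = -1.5, 0.4
    A = np.array([[lam, 1.0], [0.0, lam]])
    expected = np.exp(lam * t) * np.array([[1.0, t], [0.0, 1.0]])   # e^{λt} (I + tB)
    print(np.allclose(expm(t * A), expected))                        # True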
Problem. Solve a system of differential equations
dx/dt = 2x + y,
dy/dt = 2y
subject to initial conditions x(0) = y(0) = 1.
The unique solution:
(x(t), y(t)) = e^{tA} x0, where A = [[2, 1], [0, 2]], x0 = (1, 1).
e^{tA} = [[e^{2t}, t e^{2t}], [0, e^{2t}]] = e^{2t} [[1, t], [0, 1]]
⟹ x(t) = e^{2t}(1 + t),
   y(t) = e^{2t}.
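Numerical aside (not part of the lecture): assuming SciPy, this answer too can be checked at an arbitrary time t.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[2.0, 1.0], [0.0, 2.0]]); x0 = np.array([1.0, 1.0])
    t = 0.9
    print(np.allclose(expm(t * A) @ x0,
                      [np.exp(2*t) * (1 + t), np.exp(2*t)]))   # True: matches x(t), y(t)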