Linear systems of ordinary differential equations

(This is a draft and preliminary version of the lectures given by Prof. Colin
Atkinson FRS on 21st, 22nd and 25th April 2008 at Tecnun)
Introduction.
This chapter studies the solution of systems of ordinary differential equations.
This kind of problem appears in many physical or chemical models where several variables depend upon the same independent one. For instance, Newton's
second law applied to a particle:
m · d^2 x1/dt^2 = F1(x1, x2, x3, ẋ1, ẋ2, ẋ3, t)

m · d^2 x2/dt^2 = F2(x1, x2, x3, ẋ1, ẋ2, ẋ3, t)    (1)

m · d^2 x3/dt^2 = F3(x1, x2, x3, ẋ1, ẋ2, ẋ3, t)
Let us consider this system of ordinary differential equations:

dx/dt = −3·x + 3·y

dy/dt = −x·z + r·x − y    (2)

dz/dt = x·y − z
where x, y, z are time-dependent variables and r is a parameter. We have a system of three non-linear ODEs. The interesting thing in this example is
the strong dependence of the solution on the parameter r and on the initial conditions (t0, x0, y0, z0). For these reasons the system exhibits the behaviour studied by chaos theory, widely known in mathematics as the "butterfly effect".[1] These equations were posed by Lorenz in the study of meteorology.
[1] The phrase refers to the idea that a butterfly’s wings might create tiny changes in the
atmosphere that may ultimately alter the path of a tornado or delay, accelerate or even prevent
the occurrence of a tornado in a certain location. The flapping wing represents a small change
in the initial condition of the system, which causes a chain of events leading to large-scale
alterations of events. Had the butterfly not flapped its wings, the trajectory of the system
might have been vastly different. While the butterfly does not cause the tornado, the flap
of its wings is an essential part of the initial conditions resulting in a tornado. Recurrence,
the approximate return of a system towards its initial conditions, together with sensitive
dependence on initial conditions are the two main ingredients for chaotic motion. They have
the practical consequence of making complex systems, such as the weather, difficult to predict
past a certain time range (approximately a week in the case of weather).
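As a numerical sketch of this sensitivity to initial conditions (assuming Python with numpy and scipy is available; the value r = 28 and the initial conditions below are illustrative choices, not taken from the lectures), we can integrate eq. 2 from two nearly identical starting points:

import numpy as np
from scipy.integrate import solve_ivp

# Right-hand side of eq. 2, with r passed as a parameter.
def lorenz(t, u, r):
    x, y, z = u
    return [-3 * x + 3 * y, -x * z + r * x - y, x * y - z]

r = 28.0                              # illustrative parameter value
u0 = np.array([1.0, 1.0, 1.0])        # illustrative initial condition
u1 = u0 + np.array([1e-8, 0.0, 0.0])  # tiny perturbation of it

sol0 = solve_ivp(lorenz, (0.0, 25.0), u0, args=(r,), dense_output=True)
sol1 = solve_ivp(lorenz, (0.0, 25.0), u1, args=(r,), dense_output=True)

# For parameter values in the chaotic regime, the two trajectories
# separate by many orders of magnitude relative to the perturbation.
print(np.linalg.norm(sol0.sol(25.0) - sol1.sol(25.0)))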
There is an important relationship between systems of ODEs and single ODEs of order higher than one. In fact, an equation of order n,

y^(n) = F(t, y, y', y'', ..., y^(n−1))    (3)

where y^(n) = d^n y/dt^n, can be converted into a system of n first-order equations. With the change of variables

x1 = y
x2 = y' ⇔ x2 = x'1
...
xn = y^(n−1) ⇔ xn = x'n−1
x'n = F(t, x1, x2, ..., xn)    (4)

or, in general,

x'1 = F1(t, x1, x2, ..., xn)
x'2 = F2(t, x1, x2, ..., xn)
...
x'n = Fn(t, x1, x2, ..., xn)    (5)

we reach n (in general non-linear) first-order ODEs.
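As a minimal sketch of this change of variables (the pendulum equation y'' = −sin y is chosen here purely as an illustration), the second-order equation becomes a first-order system that a standard solver can integrate:

import numpy as np
from scipy.integrate import solve_ivp

# y'' = -sin(y), rewritten with x1 = y, x2 = y' as
#   x1' = x2
#   x2' = F(t, x1, x2) = -sin(x1)
def pendulum(t, x):
    x1, x2 = x
    return [x2, -np.sin(x1)]

sol = solve_ivp(pendulum, (0.0, 10.0), [np.pi / 4, 0.0])
print(sol.y[0, -1])  # the angle y at t = 10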
There are three questions to be answered:
(1) What about the existence of solutions?
(2) What about uniqueness?
(3) What is the sensitivity to the initial conditions?
We are going to see in which cases we can assert that a system of ODEs has a solution and that this solution is unique. We must consider the following theorem.
Theorem. Let us assume that in a region A of the space (t, x1 , x2 , . . . , xn ), the
functions F1 , F2 , . . . , Fn and
∂F1/∂x1, ∂F1/∂x2, ..., ∂F1/∂xn
∂F2/∂x1, ∂F2/∂x2, ..., ∂F2/∂xn
...
∂Fn/∂x1, ∂Fn/∂x2, ..., ∂Fn/∂xn    (6)
are continuous, and such that the point (t0, x1^0, x2^0, ..., xn^0) is an interior point of A. Then there exists an interval

|t − t0| < ε    (7)

(a local statement) in which there is a unique solution of the system given by eq. 5,

x1 = Φ1(t), x2 = Φ2(t), ..., xn = Φn(t)    (8)

that fulfils the initial conditions

x1^0 = Φ1(t0), x2^0 = Φ2(t0), ..., xn^0 = Φn(t0)    (9)
One way to prove this theorem is to use Taylor expansions of the functions.
Note: we must point out that this theorem gives only sufficient conditions. By weakening the hypotheses, a stronger version of the theorem can be stated that still guarantees a unique solution.
Systems are classified in the same manner as ODEs: they are either linear or non-linear. If the functions F1, F2, ..., Fn can be written as

x'i = Pi1(t)·x1 + Pi2(t)·x2 + ... + Pin(t)·xn + qi(t)    (10)
with i = 1, 2, ..., n, the system is called linear. If the qi(t) are equal to zero for all i, the system is called linear and homogeneous; if not, non-homogeneous.
For this kind of system the theorem of existence and uniqueness is simpler and, to some extent, more satisfactory: it has a global character. Notice
that in the general case the theorem is stated in a neighbourhood of the initial conditions, which gives a local character to the existence and uniqueness of the solution.
Recall: if an equation is linear, it means we can add solutions together and still satisfy the differential equation; e.g., if x'1 = H(x1) and H is linear, then

H(c1·x1 + c2·x2) = c1·H(x1) + c2·H(x2)    (11)

If x'1 = H(x1) and x'2 = H(x2), then

(c1·x1 + c2·x2)' = H(c1·x1 + c2·x2) = c1·H(x1) + c2·H(x2) = c1·x'1 + c2·x'2    (12)

where c1, c2 are constants. This is a nice property, fulfilled whenever an equation is linear.
Basic theory of linear systems of ODE’s
Let us consider a system of n linear differential equations of first order
x'1 = P11(t)·x1 + P12(t)·x2 + ... + P1n(t)·xn + q1(t)
x'2 = P21(t)·x1 + P22(t)·x2 + ... + P2n(t)·xn + q2(t)
...
x'n = Pn1(t)·x1 + Pn2(t)·x2 + ... + Pnn(t)·xn + qn(t)    (13)
We can write this in matrix form as

x' = P(t) · x + q(t)    (14)

where

x = (x1 x2 ... xn)^T
q(t) = (q1(t) q2(t) ... qn(t))^T

and

       ( P11(t)  P12(t)  ...  P1n(t) )
P(t) = ( P21(t)  P22(t)  ...  P2n(t) )    (15)
       (  ...     ...    ...   ...   )
       ( Pn1(t)  Pn2(t)  ...  Pnn(t) )
If q(t) = 0, we have a homogeneous system and eq. 14 becomes

x' = P(t) · x    (16)
This notation emphasises the relationship between linear systems of ODEs and first-order linear differential equations.

Theorem. If x1 and x2 are solutions of eq. 16, then (c1·x1 + c2·x2) is a solution as well:

x'1 = P(t)·x1
x'2 = P(t)·x2   ⇒ (c1·x1 + c2·x2)' = P(t)·(c1·x1 + c2·x2)    (17)
The question to be answered is: how many independent solutions of eq. 16 are there? Suppose for the moment that x1, x2, ..., xn are solutions of the system, and consider the matrix Ψ(t), called the fundamental matrix, given by

Ψ(t) = (x1 x2 ... xn)    (18)
Its determinant will be

         | x11(t)  x12(t)  ...  x1n(t) |
|Ψ(t)| = | x21(t)  x22(t)  ...  x2n(t) | = W(t)    (19)
         |  ...     ...    ...   ...   |
         | xn1(t)  xn2(t)  ...  xnn(t) |
where W(t) is called the Wronskian of the system. These solutions will be linearly independent at each point t in an interval (α, β) if

W(t) ≠ 0, ∀t ∈ (α, β)    (20)

Example 1.

We will find it for a two-by-two system,

x' = ( P11(t)  P12(t) ) · x    (21)
     ( P21(t)  P22(t) )
whose solutions are x1 = (x11(t) x21(t))^T and x2 = (x12(t) x22(t))^T. They verify the system equations (see eq. 21):

x'1 = P(t)·x1 ⇔ x'11 = P11·x11 + P12·x21
                x'21 = P21·x11 + P22·x21    (22)

x'2 = P(t)·x2 ⇔ x'12 = P11·x12 + P12·x22
                x'22 = P21·x12 + P22·x22
The fundamental matrix, Ψ(t), is

Ψ(t) = ( x11(t)  x12(t) )    (23)
       ( x21(t)  x22(t) )

and the Wronskian, W(t),

W(t) = x11·x22 − x12·x21    (24)
The derivative of W(t) with respect to t is

W'(t) = x'11·x22 + x11·x'22 − x'12·x21 − x12·x'21    (25)

and substituting x'11, x'21, x'12, x'22 as functions of x11, x21, x12, x22, we obtain

W'(t) = (P11 + P22) · (x11·x22 − x12·x21) = (trace P(t)) · W(t)    (26)

This is Abel's formula. Solving this differential equation,

W(t) = W(t0) · e^{∫_{t0}^{t} (trace P(s))·ds}    (27)

As the exponential is never zero, W(t) ≠ 0 for all finite values of t if trace P(t) is integrable and W(t0) ≠ 0.
The same happens in n dimensions; this generalises to give

dW/dt = (P11 + ··· + Pnn) · W(t) ⇒ W(t) = W(t0) · e^{∫_{t0}^{t} (trace P(s))·ds}
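A numerical illustration of Abel's formula (a sketch, assuming scipy; the coefficient matrix P(t) below is chosen arbitrarily): integrate the fundamental matrix of a 2 × 2 system and compare its determinant with eq. 27.

import numpy as np
from scipy.integrate import solve_ivp, quad

# A 2x2 coefficient matrix with time-dependent entries (arbitrary choice).
def P(t):
    return np.array([[np.sin(t), 1.0],
                     [0.5, -t]])

# Integrate Psi' = P(t) Psi, with Psi(t0) = I, as a flattened system.
def rhs(t, y):
    return (P(t) @ y.reshape(2, 2)).ravel()

t0, t1 = 0.0, 2.0
sol = solve_ivp(rhs, (t0, t1), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
W_numeric = np.linalg.det(sol.y[:, -1].reshape(2, 2))

# Abel's formula: W(t1) = W(t0) * exp(integral of trace P), with W(t0) = 1.
integral, _ = quad(lambda s: np.trace(P(s)), t0, t1)
print(np.isclose(W_numeric, np.exp(integral)))  # True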
Example 2.
A Sturm–Liouville equation has the form

d/dt( a(t) · dy/dt ) + b(t) · y = 0    (28)

This is a second-order differential equation of the form

a2(t) · d^2 y/dt^2 + a1(t) · dy/dt + a0(t) · y = 0

where a2(t), a1(t) and a0(t) are functions of t (often linear in t). This kind of equation was studied in the 18th/19th century and the early 20th century; for instance, the vibration of a plate is governed by these equations. We now write eq. (28) as a system:

x1 = y    (29)

x2 = dy/dt    (30)
Then it gives

x'1 = x2    (31)

x'2 = −(a'(t)/a(t)) · x2 − (b(t)/a(t)) · x1    (32)
We could use our theory of systems to get the connection (the Wronskian) between x1 and x2, the independent solutions of the system. However, we instead consider eq. (28) directly, assuming that y1 and y2 are two possible solutions. So

d/dt( a(t) · dy1/dt ) + b(t) · y1 = 0    (33)

d/dt( a(t) · dy2/dt ) + b(t) · y2 = 0    (34)
We now multiply eq. (33) by y2 and eq. (34) by y1, and subtracting both expressions we get

a(t) · (y''1·y2 − y''2·y1) + a'(t) · (y'1·y2 − y'2·y1) = 0    (35)

but

d/dt( y2·y'1 − y1·y'2 ) = y2·y''1 − y1·y''2    (36)

and

W(t) = y2·y'1 − y1·y'2

Then eq. (35) becomes

a(t) · dW/dt + a'(t) · W = 0 ⇒ d/dt( a(t) · W(t) ) = 0

then

dW/W = −(a'(t)/a(t)) · dt ⇒ W(t) = W(t0) · exp[ ∫_{t0}^{t} −a'(s)/a(s) · ds ]
Homogeneous linear system with constant coefficients.
Let us consider the system

x' = A · x    (37)

where A is a real n × n constant matrix. As a solution, we will try

x = e^{r·t} · a    (38)
where a is a constant vector. Then

x' = r · e^{r·t} · a    (39)

and we have a solution provided that

r·e^{r·t}·a = A·e^{r·t}·a ⇔ (A − r·I)·a = 0    (40)
and for a non-trivial solution (i.e., a ≠ 0), r must satisfy

|A − r·I| = 0    (41)
Procedure.
Find the eigenvalues r1, r2, ..., rn as the solutions of |A − r·I| = 0, together with the corresponding eigenvectors a1, a2, ..., an. Then, if the n eigenvectors are linearly independent, we have the general solution

x = c1·e^{r1·t}·a1 + c2·e^{r2·t}·a2 + ... + cn·e^{rn·t}·an    (42)

where c1, c2, ..., cn are arbitrary constants. Recall that ai = (a1i a2i ... ani)^T and xi = ai·e^{ri·t}, with i = 1, 2, ..., n. Then the Wronskian will be

       | a11  a12  ...  a1n |
W(t) = | a21  a22  ...  a2n | · e^{(r1+r2+···+rn)·t} ≠ 0    (43)
       | ...  ...  ...  ... |
       | an1  an2  ...  ann |

since a1, a2, ..., an are linearly independent. In a large number of problems, getting the eigenvalues can itself be a very difficult problem.

Example 3.
x' = ( 1  1 ) · x    (44)
     ( 4  1 )
Consider x = a·e^{r·t}. Then we require

(A − r·I) · (a1 a2)^T = 0

|A − r·I| = | 1−r   1  | = (1−r)^2 − 4 = 0 ⇒ (1−r)^2 = 4 ⇒ 1 − r = ±2
            |  4   1−r |

so r = 3, −1: the eigenvalues are 3 and −1. With r = 3,

( −2   1 ) · (a1 a2)^T = (0 0)^T
(  4  −2 )

So −2·a1 + a2 = 0, and we may take a1 = 1 and a2 = 2. Hence, the eigenvector will be

(1 2)^T
With r = −1,

( 2  1 ) · (a1 a2)^T = (0 0)^T
( 4  2 )

So 2·a1 + a2 = 0, and we may take a1 = 1 and a2 = −2. Therefore, the eigenvector will be

(1 −2)^T

The general solution is

x = C1 · (1 2)^T · e^{3·t} + C2 · (1 −2)^T · e^{−t}    (45)
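These hand computations can be cross-checked numerically (a sketch, assuming numpy; numpy normalises the eigenvectors, so they appear as scalar multiples of (1 2)^T and (1 −2)^T):

import numpy as np

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)   # 3 and -1 (possibly in the other order)
print(eigvecs)   # columns proportional to (1, 2) and (1, -2)

# The general solution x(t) = sum_i C_i e^{r_i t} a_i satisfies x' = A x
# for any choice of the constants C_i.
C = np.array([0.5, -1.3])
t = 0.7
x  = eigvecs @ (C * np.exp(eigvals * t))
dx = eigvecs @ (C * eigvals * np.exp(eigvals * t))
print(np.allclose(dx, A @ x))  # True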
Equation (45) is a family of solutions, since C1 and C2 are arbitrary (i.e., if x1 and x2 are known at t = 0, we can solve eq. (45) for C1 and C2 to obtain a specific solution).

Figure 1: Phase plane of Example 3

Note: eq. (44) does not involve time explicitly. It could be written as

dx1/dx2 = (x1 + x2) / (4·x1 + x2)    (46)

So we can study the problem in the 2-d space (x1, x2). This is often called the phase plane. Note that eq. (46) defines dx1/dx2 uniquely, except at points where the numerator and the denominator vanish simultaneously.
For a general 2 × 2 linear system,

( dx1/dt )   ( a11  a12 ) ( x1 )
( dx2/dt ) = ( a21  a22 ) ( x2 )  ⇔  dx1/dt = a11·x1 + a12·x2,  dx2/dt = a21·x1 + a22·x2    (47)

so the slope of the trajectories is

dx2/dx1 = (a21·x1 + a22·x2) / (a11·x1 + a12·x2)    (48)

which is undefined only where, simultaneously,

a11·x1 + a12·x2 = 0
a21·x1 + a22·x2 = 0    (49)
In general, what about A?
1. If A is hermitian (i.e., A^H = (Ā)^T = A, where Ā is the complex conjugate of the matrix), then the eigenvalues are real and we can find n linearly independent eigenvectors.
2. If A is non-hermitian, we have the following possibilities:
(2.a.) n real and distinct eigenvalues, and n independent eigenvectors.
(2.b.) Complex eigenvalues.
(2.c.) Repeated eigenvalues.
Example 4.
x' = ( 1  1 ) · x    (50)
     ( 4  1 )
Put x = e^{r·t}·a to get

| 1−r   1  | = 0    (51)
|  4   1−r |

and we obtain r1 = −1 and r2 = 3. We need the eigenvectors:

r1 = −1 ⇒ a1 = (1 −2)^T
r2 = 3 ⇒ a2 = (1 2)^T    (52)
We have two solutions,

x1 = (1 −2)^T · e^{−t}    x2 = (1 2)^T · e^{3·t}    (53)
To find the general solution, we construct it via a linear combination, adding these two linearly independent solutions together:

x = c1 · e^{−t} · (1 −2)^T + c2 · e^{3·t} · (1 2)^T    (54)

c1 and c2 are arbitrary constants, to be determined by initial conditions or other conditions on x.
We can plot (see Figure 2) the family of solutions in the (x1, x2) plane, with an arrow to signify the direction of time (time increasing). (In the same way, to study the motion of a pendulum we can use systems in order to follow position and velocity together.) In components, our solution looks like

x1 = c1·e^{−t} + c2·e^{3·t}
x2 = −2·c1·e^{−t} + 2·c2·e^{3·t}

Suppose c1 ≡ 0 and c2 ≠ 0, and consider the point x1 = x2 = 0; note that x1/x2 = 1/2. We only reach (0, 0) if t → −∞, since we need e^{3·t} → 0. If c2 ≡ 0 and c1 ≠ 0, then x1/x2 = −1/2, and to reach (0, 0) we need t → +∞. So (0, 0) is a special point. Representing the direction of time by an arrow,

t → −∞ ⇒ x1 → c1·e^{−t}, x2 → −2·c1·e^{−t} ⇔ x2 → −2·x1
t → +∞ ⇒ x1 → c2·e^{3·t}, x2 → 2·c2·e^{3·t}  ⇔ x2 → 2·x1    (55)
Point (0, 0) is a saddle point. Looking ahead, we can observe that det A = −3 < 0.
Figure 2: Phase plane of Example 4
Example 5.
Let us consider this problem:

x' = ( −3   √2 ) · x    (56)
     ( √2   −2 )

In order to solve this problem, we obtain the eigenvalues:

| −3−r    √2  | = 0    (57)
|  √2    −2−r |
and r1 = −4 and r2 = −1. The eigenvectors are

r1 = −4 ⇒ a1 = (−√2 1)^T
r2 = −1 ⇒ a2 = (1 √2)^T    (58)
and they are linearly independent. Hence the solutions are

x1 = (−√2 1)^T · e^{−4·t}    x2 = (1 √2)^T · e^{−t}    (59)
And the general solution is

x = c1 · e^{−4·t} · (−√2 1)^T + c2 · e^{−t} · (1 √2)^T    (60)
Plotting the paths on the phase plane (x1, x2):

t → −∞ ⇒ x1 → −√2·c1·e^{−4·t}, x2 → c1·e^{−4·t} ⇔ x1 → −√2·x2
t → +∞ ⇒ x1 → 0, x2 → 0 ⇔ (x1, x2) → (0, 0)    (61)
Point (0, 0) is a stable node. Looking ahead again, det A = 2 > 0, trace A = −5 < 0 and det A < (trace A)^2/4.
Figure 3: Phase plane of Example 5
Example 6.
Let us consider another system,

     ( 0  1  1 )
x' = ( 1  0  1 ) · x    (62)
     ( 1  1  0 )

where A is a hermitian matrix:

| −r   1   1 |
|  1  −r   1 | = 0    (63)
|  1   1  −r |
We obtain r1 = −1 (double) and r2 = 2. The eigenvectors are

r1 = −1 ⇒ a1 = (1 −1 0)^T, a2 = (1 1 −2)^T
r2 = 2 ⇒ a3 = (1 1 1)^T    (64)

They are linearly independent. Hence, the solutions are

x1 = (1 −1 0)^T · e^{−t}    x2 = (1 1 −2)^T · e^{−t}    x3 = (1 1 1)^T · e^{2·t}    (65)

And the general solution is

x = c1·e^{−t}·(1 −1 0)^T + c2·e^{−t}·(1 1 −2)^T + c3·e^{2·t}·(1 1 1)^T    (66)
Complex eigenvalues.
If A is non-hermitian (but real) and has complex eigenvalues, then the determinant equation |A − r·I| = 0 has complex conjugate eigenvalues:

|A − r1·I| = 0    (67)

As A and I are real, if we take the conjugate of eq. 67 we obtain

|A − r̄1·I| = 0    (68)

This means that if r1 is an eigenvalue, its complex conjugate r̄1 is an eigenvalue as well. Therefore the eigenvectors will be complex conjugates, too:

(A − r1·I) · a1 = 0 ⇔ (A − r̄1·I) · ā1 = 0    (69)
The solution associated with r1 = λ1 + i·µ1 and a1 = u1 + i·v1 is

x1 = e^{r1·t}·a1 = e^{(λ1+i·µ1)·t} · (u1 + i·v1)
   = e^{λ1·t} · (cos(µ1·t) + i·sin(µ1·t)) · (u1 + i·v1)
   = e^{λ1·t} · (u1·cos(µ1·t) − v1·sin(µ1·t)) + i·e^{λ1·t} · (u1·sin(µ1·t) + v1·cos(µ1·t))    (70)

And the other one,

x̄1 = e^{λ1·t} · (u1·cos(µ1·t) − v1·sin(µ1·t)) − i·e^{λ1·t} · (u1·sin(µ1·t) + v1·cos(µ1·t))    (71)

We are looking for real solutions of this system. We know that a linear combination of these solutions will be a solution as well; hence we can take the real and the imaginary parts of them:

x = c1·e^{λ1·t}·(u1·cos(µ1·t) − v1·sin(µ1·t)) + c2·e^{λ1·t}·(u1·sin(µ1·t) + v1·cos(µ1·t))    (72)
Example 7.
Consider

x' = ( −1/2    1  ) · x    (73)
     (  −1   −1/2 )

where A is not symmetric. Solving

| −1/2−r     1    | = 0    (74)
|   −1    −1/2−r  |
we get r = −1/2 ± i. The eigenvector associated with r1 = −1/2 + i is

a1 = (1 i)^T    (75)
Hence the solution is

x1 = (1 i)^T · e^{(−1/2+i)·t} = (1 i)^T · e^{−t/2} · (cos(t) + i·sin(t))    (76)

Then

x1 = (cos(t) + i·sin(t), −sin(t) + i·cos(t))^T · e^{−t/2} = (cos(t), −sin(t))^T · e^{−t/2} + i·(sin(t), cos(t))^T · e^{−t/2}    (77)
The general solution is

x = c1·e^{−t/2}·(cos(t), −sin(t))^T + c2·e^{−t/2}·(sin(t), cos(t))^T    (78)
Plotting this family of curves on the phase plane (x1, x2):

Figure 4: Phase plane of Example 7

t → +∞ ⇒ x1 → 0, x2 → 0 ⇔ (x1, x2) → (0, 0)    (79)
Let us study the points (x 0)^T and (0 y)^T. Substituting into eq. 73,

x = (x 0)^T ⇒ x' = (−x/2  −x)^T ⇒ dy/dx = 2    (80)

x = (0 y)^T ⇒ x' = (y  −y/2)^T ⇒ dy/dx = −1/2
Point (0, 0) is a stable spiral point. Looking ahead once again, det A = 5/4 > 0, trace A = −1 < 0 and det A > (trace A)^2/4.
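The construction of eq. 72 can be checked numerically for this example (a sketch, assuming numpy). Since x1 = xa + i·xb satisfies x' = A·x, taking real and imaginary parts gives A·xa = λ1·xa − µ1·xb and A·xb = µ1·xa + λ1·xb, which we verify:

import numpy as np

# Matrix of Example 7; u1, v1 are the real and imaginary parts of the
# eigenvector for the eigenvalue r1 = lambda1 + i*mu1 with mu1 > 0.
A = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.imag)
lam1, mu1 = eigvals[k].real, eigvals[k].imag
u1, v1 = eigvecs[:, k].real, eigvecs[:, k].imag

t = 1.0
xa = np.exp(lam1 * t) * (u1 * np.cos(mu1 * t) - v1 * np.sin(mu1 * t))
xb = np.exp(lam1 * t) * (u1 * np.sin(mu1 * t) + v1 * np.cos(mu1 * t))

# Both real solutions satisfy the identities implied by x1' = A x1.
print(np.allclose(A @ xa, lam1 * xa - mu1 * xb))  # True
print(np.allclose(A @ xb, mu1 * xa + lam1 * xb))  # True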
Repeated eigenvalues.
Let us consider the system given by eq. 37, with A a real non-symmetric matrix. Let us assume that one of its eigenvalues, r1, is repeated (with multiplicity m1 = m > 1) and that there are only k linearly independent eigenvectors associated with it, 1 ≤ k < m. We should then obtain (m − k) additional linearly independent solutions. How do we amend the procedure to deal with cases where there are not m linearly independent eigenvectors?
For this, consider the exponential of a square matrix:

e^{A·t} = I + A·t + A^2·t^2/2! + ... + A^n·t^n/n! + ...    (81)
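A numerical sketch of eq. 81 (assuming scipy; scipy.linalg.expm computes the same object via a Padé approximation rather than the raw series):

import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # the matrix of Example 8 below
t = 0.5

# Partial sums of I + At + (At)^2/2! + ... converge to expm(A*t).
S, term = np.eye(2), np.eye(2)
for n in range(1, 25):
    term = term @ (A * t) / n
    S = S + term
print(np.allclose(S, expm(A * t)))  # True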
We recall the Cayley–Hamilton theorem. If f(λ) is the characteristic polynomial of a matrix A,

f(λ) = det(A − λ·I)    (82)

the theorem says that f(A) = 0. For instance, for a 2 × 2 matrix every eigenvalue ri satisfies

f(ri) = ri^2 − (trace A)·ri + det A = ri^2 − p·ri + q = 0    (83)

Hence, by the Cayley–Hamilton theorem,

f(A) = A^2 − p·A + q·I = 0    (84)

Using this theorem, any power-series function such as e^A can be reduced to a polynomial in A; this is a very useful theorem in problems of materials science and continuum mechanics:

A^2 = p·A − q·I
A^3 = p·A^2 − q·A = (p^2 − q)·A − p·q·I
A^4 = p·A^3 − q·A^2 = p·(p^2 − 2·q)·A − (p^2 − q)·q·I
...    (85)

So, we can use this theorem to get a finite expansion of eq. 81.
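For a 2 × 2 matrix the theorem is easy to check numerically (a sketch, assuming numpy; the matrix is the one from Example 3):

import numpy as np

# Cayley-Hamilton for a 2x2 matrix: A^2 - p*A + q*I = 0,
# with p = trace(A) and q = det(A).
A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
p, q = np.trace(A), np.linalg.det(A)
print(np.allclose(A @ A - p * A + q * np.eye(2), 0.0))  # True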
Differentiating that expression with respect to t,

d(e^{A·t})/dt = A + A^2·t + A^3·t^2/2! + ... + A^n·t^{n−1}/(n−1)! + ...
             = A · ( I + A·t + ... + A^{n−1}·t^{n−1}/(n−1)! + ... ) = A · e^{A·t}    (86)

Hence, if

x = e^{A·t} · v = ( I + A·t + A^2·t^2/2! + ... + A^n·t^n/n! + ... ) · v    (87)

where v is a constant vector, then

dx/dt = A · e^{A·t} · v = A · x    (88)
Now take v = ai, an eigenvector of A with eigenvalue ri:

A·v = A·ai = ri·ai
A^2·v = ri·A·ai = ri^2·ai
...
A^n·v = ri^n·ai    (89)

So everything is consistent. Hence, substituting into eq. 87,

x = ( 1 + ri·t + ri^2·t^2/2! + ... + ri^n·t^n/n! + ... ) · ai = e^{ri·t} · ai    (90)
Note that

e^{(A+B)·t} = e^{A·t} · e^{B·t} ⇔ A·B = B·A    (91)

Let us consider

e^{A·t}·v = e^{(A−λ·I)·t} · e^{λ·I·t} · v, ∀λ    (92)

We can do this for any λ, since

(A − λ·I)·(λ·I) = λ·A − λ^2·I = (λ·I)·(A − λ·I)    (93)

and

e^{λ·I·t}·v = ( I + λ·I·t + λ^2·I^2·t^2/2! + ... + λ^n·I^n·t^n/n! + ... ) · v = e^{λ·t}·v    (94)

Hence, coming back to eq. 87,

e^{A·t}·v = e^{λ·t} · e^{(A−λ·I)·t} · v    (95)
But x = e^{A·t}·v is a solution of the system. Moreover, let us take λ = ri and assume that v = ai, where ai is an eigenvector of ri. Then

x = e^{ri·t} · ( I + t·(A − ri·I) + t^2/2!·(A − ri·I)^2 + ... ) · ai
  = e^{ri·t} · ( ai + t·(A − ri·I)·ai + t^2/2!·(A − ri·I)^2·ai + ... )    (96)

But, as ai is an eigenvector, it verifies

(A − ri·I)·ai = 0 = (A − ri·I)^2·ai = (A − ri·I)^3·ai = ...    (97)

Then eq. 96 becomes

x = e^{ri·t} · ai    (98)
We can instead look for another vector v such that

(A − ri·I)·v ≠ 0 and (A − ri·I)^2·v = 0 = (A − ri·I)^3·v = ...    (99)
Lemma. Let us assume that the characteristic polynomial of A, non-hermitian and of order n, has repeated roots r1, r2, ..., rk, 1 ≤ k < n, of multiplicities m1, m2, ..., mk (m1 + m2 + ... + mk = n) respectively, so that

f(λ) = (λ − r1)^{m1} · (λ − r2)^{m2} · ... · (λ − rk)^{mk}    (100)

If A has only nj < mj eigenvectors for the eigenvalue rj (i.e., (A − rj·I)·v = 0 has nj independent solutions), then (A − rj·I)^2·v = 0 has at least nj + 1 independent solutions. In general, if (A − rj·I)^m·v = 0 has nj < mj independent solutions, then (A − rj·I)^{m+1}·v = 0 has at least nj + 1 independent solutions.
Example 8.
Let the system be

x' = ( 1  1 ) · x    (101)
     ( 0  1 )

In order to solve it, we take

| 1−r   1  | = 0    (102)
|  0   1−r |

The root is r1 = 1, double, with only one eigenvector, obtained from (A − r1·I)·a1 = 0:

r1 = 1 ⇒ a1 = (1 0)^T    (103)
This does not yet give us the second solution we need. So far we have

x1 = (1 0)^T · e^t    (104)

We must determine another vector a2 such that

(A − r1·I)·a2 ≠ 0    (105)

(A − r1·I)^2·a2 = 0 = (A − r1·I)^3·a2 = ...    (106)
As (A − r1·I)^2 = (A − r1·I)^3 = ... = 0, any vector a2 verifies eq. 106. However, according to the inequality given by eq. 105, a2 cannot be linearly dependent on a1 (eq. 103). So

a2 = (0 1)^T ⇒ ( 0  1 ) · (0 1)^T = (1 0)^T ≠ 0    (107)
               ( 0  0 )
From eq. 96, we obtain

x2 = e^t · ( a2 + t·(A − r1·I)·a2 + t^2/2!·(A − r1·I)^2·a2 + ... )
   = e^t · ( (0 1)^T + t·(1 0)^T + 0 ) = e^t · (t 1)^T    (108)
The general solution is

x = c1 · e^t · (1 0)^T + c2 · e^t · (t 1)^T    (109)

Thinking ahead to the next section, det A = 1 > 0, trace A = 2 > 0 and det A = (trace A)^2/4.
Figure 5: Phase plane of Example 8
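The second solution of Example 8 can be cross-checked with the matrix exponential (a sketch, assuming scipy): by the construction of eq. 96, x2(t) = e^{A·t}·a2, which should equal e^t·(t 1)^T.

import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
a2 = np.array([0.0, 1.0])   # the vector chosen in eq. 107
t = 0.8
print(np.allclose(expm(A * t) @ a2, np.exp(t) * np.array([t, 1.0])))  # True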
Résumé of the study of a system of two homogeneous equations with
constant coefficients.
Let us consider the system

x' = a·x + b·y
y' = c·x + d·y    (110)

Its characteristic polynomial is

f(r) = r^2 − p·r + q    (111)

where p = a + d (the trace) and q = a·d − b·c (the determinant). Its eigenvalues are

r = ( p ± √(p^2 − 4·q) ) / 2    (112)
The study of the eigenvalues and eigenvectors is very useful in order to classify the critical point (0, 0) and to know the trajectories on the phase plane (a code sketch of this classification follows the list):
1. q < 0. One eigenvalue is positive and the other negative. Saddle point.
2. q > 0.
2.a. p^2 > 4·q. Both eigenvalues are real, and either both positive or both negative.
- Stable node, if p < 0.
- Unstable node, if p > 0.
2.b. p^2 < 4·q. The eigenvalues are complex conjugates.
- Stable spiral, if p < 0.
- Unstable spiral, if p > 0.
- Center, if p = 0.
2.c. p^2 = 4·q. The eigenvalue is a double root of the characteristic polynomial.
- Stable node, if p < 0 and there is only one eigenvector.
- Unstable node, if p > 0 and there is only one eigenvector.
- Sink point, if p < 0 and there are two independent eigenvectors.
- Source point, if p > 0 and there are two independent eigenvectors.
3. q = 0. This means that the matrix rank is 1, and therefore one row can be obtained by multiplying the other one by a constant (c/a = d/b = k). Then (0, 0) is not an isolated critical point: there is a line y = −a·x/b, b ≠ 0, of critical points. The trajectories on the phase plane are y = k·x + E, E being a constant.
- If p > 0, paths start at the critical points and go to infinity.
- If p < 0, conversely, the trajectories end at the critical points.
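As announced above, here is a small sketch of the classification in code (illustrative only; at the boundary case p^2 = 4·q the distinction between node and sink/source would require inspecting the eigenvectors, which this function does not do):

def classify_critical_point(a, b, c, d):
    """Classify the critical point (0, 0) of eq. 110."""
    p = a + d          # trace
    q = a * d - b * c  # determinant
    if q < 0:
        return "saddle point"
    if q == 0:
        return "non-isolated critical points (a line of them)"
    disc = p * p - 4 * q
    if disc > 0:
        return "stable node" if p < 0 else "unstable node"
    if disc < 0:
        if p == 0:
            return "center"
        return "stable spiral" if p < 0 else "unstable spiral"
    # disc == 0: double eigenvalue; node vs sink/source depends on the
    # number of independent eigenvectors (case 2.c above)
    return "stable node/sink" if p < 0 else "unstable node/source"

print(classify_critical_point(1, 1, 4, 1))           # saddle point (Example 4)
print(classify_critical_point(-0.5, 1, -1, -0.5))    # stable spiral (Example 7)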
It is very convenient and useful to know the trace–determinant plane (p, q).
Fundamental matrix of a system
Let us consider the homogeneous system given by eq. 16, and let x1, x2, ..., xn be a set of its independent solutions. We know that we can build the general solution via a linear combination of them. We call the fundamental matrix, Ψ(t), the matrix whose columns are the solution vectors, as in eq. 18. The determinant of this matrix is not zero (eq. 19); this determinant is the Wronskian (eq. 20).
Let us assume that we are looking for a solution x such that x(t0) = x0. Then

x0 = c1·x1 + c2·x2 + ... + cn·xn = Ψ(t0) · c    (113)

where c = (c1 c2 ... cn)^T. As |Ψ(t)| ≠ 0, ∀t, c can be obtained using the inverse matrix of Ψ(t0):

c = Ψ⁻¹(t0) · x0    (114)

and the solution will be

x = Ψ(t) · Ψ⁻¹(t0) · x0    (115)
This matrix is very useful when

Ψ(t0) = I    (116)

This special set of solutions builds the matrix Φ(t). It verifies

x = Φ(t) · x0    (117)

and we obtain that Φ(t) = Ψ(t) · Ψ⁻¹(t0).
Moreover, with constant coefficients:
(1.) e^{A·t} (eq. 81) is a fundamental matrix of solutions, since it verifies eq. 86 and e^{A·0} = I.
(2.) If we know two fundamental matrices of the system, Ψ1 and Ψ2, there is always a constant matrix C such that Ψ2 = Ψ1·C, since each column of Ψ2 can be obtained as a linear combination of the columns of Ψ1.
(3.) It can be shown that e^{A·t} = Ψ(t)·Ψ⁻¹(0) (cf. eq. 115 with t0 = 0). Indeed, according to paragraphs 1 and 2, there exists a matrix C such that

e^{A·t} = Ψ(t) · C    (118)

Evaluating this expression at t = 0,

I = Ψ(0) · C ⇒ C = Ψ⁻¹(0)    (119)
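Paragraph 3 can be illustrated numerically (a sketch, assuming scipy; Ψ is built from the eigen-solutions of Example 3):

import numpy as np
from scipy.linalg import expm

# Psi(t) built from the eigen-solutions x_i = a_i e^{r_i t} of Example 3.
A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
r, V = np.linalg.eig(A)          # columns of V are the eigenvectors a_i

def Psi(t):
    return V * np.exp(r * t)     # scales column i by e^{r_i t}

t = 0.3
lhs = expm(A * t)
rhs = Psi(t) @ np.linalg.inv(Psi(0.0))
print(np.allclose(lhs, rhs))     # True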
Nonhomogeneous systems.
We have

x' = P(t) · x + q(t)    (120)

We assume that we have solved x' = P(t)·x; we actually have a procedure for P(t) = A, a constant matrix.
Consider special cases
(1)
If P(t) = A and A has n independent eigenvectors, the procedure is to build the matrix T whose columns are the eigenvectors of A:

T = (a1 a2 ... an)    (121)

Then we change variables, x = T·y, with x' = T·y'. Going back to the system,

x' = T·y' = A·x + q = A·T·y + q    (122)

As T is built with n independent eigenvectors, it is a regular matrix (det T ≠ 0), so we can work out T⁻¹, the inverse of T. Hence, from eq. 122,

y' = T⁻¹·A·T·y + T⁻¹·q = D·y + h    (123)

where D is the diagonal matrix of the eigenvalues of A. Therefore,

y'i(t) = ri·yi(t) + hi(t), ∀i = 1, 2, ..., n    (124)

Hence

yi(t) = e^{ri·t} · ∫ e^{−ri·t}·hi(t)·dt + ci·e^{ri·t}, ∀i = 1, 2, ..., n    (125)

After obtaining y, we can get x = T·y.
This method is only possible if there are n linearly independent eigenvectors, i.e., if A is a diagonalizable constant matrix. Because we can reduce the matrix to its diagonal form, the above procedure works. In cases where we do not have the n independent eigenvectors, the matrix A can only be reduced to its Jordan canonical form.
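A sketch of this procedure for a constant forcing term q (assuming scipy; the matrix is Example 3's, and for constant h = T⁻¹·q with non-zero eigenvalues, eq. 125 reduces to yi(t) = −hi/ri + ci·e^{ri·t}):

import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])       # Example 3's matrix (eigenvalues 3, -1)
q = np.array([1.0, 0.0])         # an illustrative constant forcing term
r, T = np.linalg.eig(A)
h = np.linalg.solve(T, q)        # h = T^{-1} q

x0 = np.array([0.0, 0.0])
c = np.linalg.solve(T, x0) + h / r   # from y(0) = T^{-1} x0

def x(t):
    y = -h / r + c * np.exp(r * t)   # eq. 125 with constant h_i
    return T @ y

# Compare against direct numerical integration of x' = A x + q.
sol = solve_ivp(lambda t, v: A @ v + q, (0.0, 1.0), x0,
                rtol=1e-10, atol=1e-12)
print(np.allclose(x(1.0), sol.y[:, -1], atol=1e-6))  # True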
(2) Variation of the parameters.
Let us consider the system of eq. 14, and suppose we know the solution of the associated homogeneous system (eq. 16). Then we can build the fundamental matrix of the system, Ψ(t) (eq. 18), whose columns are the linearly independent solutions of the homogeneous system. We look for a solution of the form

x = Ψ(t) · u(t)    (126)

where u(t) is a vector to be determined such that eq. 126 is a solution of eq. 14. Substituting,

Ψ'(t)·u(t) + Ψ(t)·u'(t) = P(t)·Ψ(t)·u(t) + q(t)    (127)

and, as we know that Ψ'(t) = P(t)·Ψ(t),

Ψ(t)·u'(t) = q(t) ⇒ u(t) = ∫ Ψ⁻¹(t)·q(t)·dt + c    (128)
where c is a constant vector, and Ψ⁻¹(t) exists since the n columns of the matrix Ψ(t) are linearly independent (eq. 20). The general solution is

x = Ψ(t) · ∫ Ψ⁻¹(t)·q(t)·dt + Ψ(t) · c    (129)
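As a numerical sketch of eq. 129 (assuming scipy; for constant coefficients we may take Ψ(t) = e^{A·t}, so that Ψ(0) = I and c = x0; the forcing term below is an illustrative choice):

import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad_vec

A = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])     # the matrix of Example 7

def q(t):
    return np.array([np.cos(t), 0.0])   # illustrative forcing term

x0 = np.array([1.0, 0.0])
t1 = 2.0

# Eq. 129 with Psi(t) = e^{At}: x(t) = e^{At} (x0 + int_0^t e^{-As} q(s) ds)
integral, _ = quad_vec(lambda s: expm(-A * s) @ q(s), 0.0, t1)
x_vp = expm(A * t1) @ (x0 + integral)

# Compare against direct numerical integration.
sol = solve_ivp(lambda s, v: A @ v + q(s), (0.0, t1), x0,
                rtol=1e-10, atol=1e-12)
print(np.allclose(x_vp, sol.y[:, -1], atol=1e-6))  # True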