
Chapter 7: First Order ODE Systems of Equations (Fall 2022)

"All our dreams can come true if we have the courage to pursue them." Walt Disney
Section 7.1: Introduction to Systems of First Order Linear Equations
 x1 = F1 (t , x1 , x2 , xn )
 x = F (t , x , x , x )
 2
2
1
2
n
A system of first order ordinary differential equations has the general form 

 xn = Fn (t , x1 , x2 , xn )
where each $x_k$ is a function of t. If each $F_k$ is a linear function of $x_1, x_2, \dots, x_n$, then the system of equations is said to be linear; otherwise it is nonlinear.
A spring-mass system from Section 3.7 gives us a second order equation which can be converted into a system of first order equations.
Ex: We can transform $u'' + 0.5u' + u = 2\sin t$ into a system of first order equations by letting $x_1 = u$ and $x_2 = u'$.
Thus, $x_1' = u' = x_2$ and $x_2' = u'' = $ __________________________
Then we have:
$x_1' = $
$x_2' = $
In matrix form,
$$\begin{pmatrix} x_1' \\ x_2' \end{pmatrix} = \begin{pmatrix} \quad & \quad \\ \quad & \quad \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} \quad \\ \quad \end{pmatrix}$$
Let $\mathbf{X}' = \begin{pmatrix} x_1' \\ x_2' \end{pmatrix}$, $\quad A = \begin{pmatrix} \quad & \quad \\ \quad & \quad \end{pmatrix}$, $\quad \mathbf{X} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$, $\quad \mathbf{g}(t) = \begin{pmatrix} \quad \\ \quad \end{pmatrix}$.
Thus, $\mathbf{X}' = A\mathbf{X} + \mathbf{g}(t)$.
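As an aside, here is a minimal Python sketch (not part of the notes) showing how the converted system can be integrated numerically with SciPy. The right-hand side below rearranges the example ODE to $u'' = 2\sin t - 0.5u' - u$; the initial conditions are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# First order system equivalent to u'' + 0.5u' + u = 2 sin t:
#   x1' = x2
#   x2' = 2 sin t - 0.5*x2 - x1
def rhs(t, x):
    x1, x2 = x
    return [x2, 2 * np.sin(t) - 0.5 * x2 - x1]

# Hypothetical initial conditions u(0) = 2, u'(0) = 0, integrated on [0, 20].
sol = solve_ivp(rhs, (0, 20), [2.0, 0.0])
print(sol.y[0, -1])  # approximate value of u at t = 20
```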
Other higher order equations, say $u''' + u'' - 2u = 0$, can also be converted into a system of first order equations. Let's transform the equation.
Let $x_1 = u$, $\;x_2 = u'$, and $x_3 = u''$.
Thus, $x_1' = u' = x_2$, $\;x_2' = u'' = x_3$, and $x_3' = u''' = $ ________________________
Then we have:
$x_1' = $
$x_2' = $
$x_3' = $
$$\begin{pmatrix} x_1' \\ x_2' \\ x_3' \end{pmatrix} = $$
In real applications, physical systems lead to many systems of ODEs. For example (Problem 17, page 345), consider a two spring-mass system.
[Figure: the two masses at the rest position]
At t > 0, $F_1(t)$ acts on $m_1$ and $F_2(t)$ acts on $m_2$, both in the positive direction.
In this system, we see that when $x_1 < x_2$, springs $k_1$ and $k_2$ are stretched and $k_3$ is compressed.
So we can draw a free-body diagram as this:
[Figure: free-body diagram of the two masses]
Now, applying Newton's 2nd Law, $m\,a = \sum F$, to each mass, we have:
"Opportunities don't happen. You create them." Chris Grosser
Section 7.2: Matrices
A matrix A is an m x n rectangular array of elements, arranged in m rows and n columns, denoted
$$A = (a_{ij}) = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}_{m \times n}$$
Some examples of 2 x 2 matrices are given below:
$$A_{2\times 2} = \begin{pmatrix} 2 & -3 \\ 0 & 5 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 3-2i \\ 4+5i & 6-7i \end{pmatrix}, \qquad C_{2\times 2} = \begin{pmatrix} 3e^{-2t} & 2e^{-3t} \\ e^{-t} & 4e^{-2t} \end{pmatrix}$$
The transpose of $A = (a_{ij})$ is $A^T = (a_{ji})$.
Ex:
$$A_{2\times 2} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \;\Rightarrow\; A^T_{2\times 2} = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}; \qquad B_{3\times 2} = \begin{pmatrix} \cos t & e^{-2t}\cos t \\ \sin t & e^{-2t}\sin t \\ 3\cos 2t & \sin 2t \end{pmatrix} \;\Rightarrow\; B^T_{2\times 3} = \begin{pmatrix} \cos t & \sin t & 3\cos 2t \\ e^{-2t}\cos t & e^{-2t}\sin t & \sin 2t \end{pmatrix}$$
The conjugate of $A = (a_{ij})$ is $\bar{A} = (\bar{a}_{ij})$.
Ex:
$$A = \begin{pmatrix} -1 & 2+3i \\ 3-4i & 4 \end{pmatrix} \;\Rightarrow\; \bar{A} = \begin{pmatrix} -1 & 2-3i \\ 3+4i & 4 \end{pmatrix}$$
A square matrix A has the same number of rows and columns. That is,
$$A_{n\times n} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}_{n\times n}$$
Ex:
$$A_{2\times 2} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad B_{3\times 3} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$$


A column vector x is an n x 1 matrix.
Ex: $\mathbf{x}_{3\times 1} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 10 \\ 20 \\ 30 \end{pmatrix}$

A row vector y is a 1 x n matrix.
Ex: $\mathbf{y}_{1\times 3} = \begin{pmatrix} y_1 & y_2 & y_3 \end{pmatrix} = \begin{pmatrix} 10 & 20 & 30 \end{pmatrix}$
Note here that $\mathbf{y} = \mathbf{x}^T$, and that in general, if $\mathbf{x}$ is a column vector, then $\mathbf{x}^T$ is a row vector.
The sum or difference of two m x n matrices:
1 2
 5 6
6 8
 −4 −4 
Ex: A = 
, B = 
  A+B =
 & A−B = 
.
3 4
7 8
10 12 
 −4 −4 
Scalar Multiplication: The product of a matrix $A = (a_{ij})$ and a constant k is defined to be $kA = (k\,a_{ij})$.
Ex: $A_{3\times 2} = \begin{pmatrix} \cos t & e^{-2t}\cos t \\ \sin t & e^{-2t}\sin t \\ -4\cos 2t & -3\sin 2t \end{pmatrix} \;\Rightarrow\; -5A_{3\times 2} = \begin{pmatrix} -5\cos t & -5e^{-2t}\cos t \\ -5\sin t & -5e^{-2t}\sin t \\ 20\cos 2t & 15\sin 2t \end{pmatrix}$
Matrix Multiplication (note that AB does not necessarily equal BA):
Ex: $A = \begin{pmatrix} 1 & 2 \\ 3 & -4 \end{pmatrix}_{2\times 2}$, $\;B = \begin{pmatrix} 5 & 6 \\ -7 & 8 \end{pmatrix}_{2\times 2}$
$$\Rightarrow\; AB = \begin{pmatrix} 1\cdot 5 + 2\cdot(-7) & 1\cdot 6 + 2\cdot 8 \\ 3\cdot 5 + (-4)\cdot(-7) & 3\cdot 6 + (-4)\cdot 8 \end{pmatrix} = \begin{pmatrix} -9 & 22 \\ 43 & -14 \end{pmatrix}_{2\times 2}$$
$$\text{and}\quad BA = \begin{pmatrix} 5\cdot 1 + 6\cdot 3 & 5\cdot 2 + 6\cdot(-4) \\ (-7)\cdot 1 + 8\cdot 3 & (-7)\cdot 2 + 8\cdot(-4) \end{pmatrix} = \begin{pmatrix} 23 & -14 \\ 17 & -46 \end{pmatrix}_{2\times 2} \neq AB.$$
$$C_{2\times 2} = \begin{pmatrix} e^{-2t} & e^{-3t} \\ 2e^{-2t} & 4e^{-3t} \end{pmatrix}, \quad \mathbf{x}_{2\times 1} = \begin{pmatrix} 2 \\ -3 \end{pmatrix} \;\Rightarrow\; C\,\mathbf{x}_{2\times 1} = \begin{pmatrix} 2\,e^{-2t} + (-3)\,e^{-3t} \\ 2\cdot 2e^{-2t} + (-3)\cdot 4e^{-3t} \end{pmatrix} = \begin{pmatrix} 2e^{-2t} - 3e^{-3t} \\ 4e^{-2t} - 12e^{-3t} \end{pmatrix},$$
but $\mathbf{x}_{2\times 1}\,C_{2\times 2}$ cannot multiply (the dimensions do not match).
The identity matrix I is an n × n matrix given by
$$I = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}_{n\times n}$$
For any square matrix A, it follows that AI = IA = A, and the dimensions of I depend on the context.
Ex: $A \cdot I = \begin{pmatrix} 1 & 2-i \\ 3+2i & 4 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 2-i \\ 3+2i & 4 \end{pmatrix} = A$; $\quad I \cdot \mathbf{x} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \mathbf{x}$
The determinant of a square matrix A is denoted as det(A) or |A|.
(Determinants are defined only for square matrices.)
For a 2 × 2 matrix, $\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$.
Ex: $A = \begin{pmatrix} 2 & 4 \\ 1 & -3 \end{pmatrix} \;\Rightarrow\; \det A = |A| = \begin{vmatrix} 2 & 4 \\ 1 & -3 \end{vmatrix} = $
For a 3 × 3 matrix,
$$\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = a\begin{vmatrix} e & f \\ h & i \end{vmatrix} - b\begin{vmatrix} d & f \\ g & i \end{vmatrix} + c\begin{vmatrix} d & e \\ g & h \end{vmatrix}$$
Ex: $B = \begin{pmatrix} 2 & 0 & -4 \\ 0 & 0 & 3 \\ 2 & 1 & 1 \end{pmatrix} \;\Rightarrow\; \det B = |B| = \begin{vmatrix} 2 & 0 & -4 \\ 0 & 0 & 3 \\ 2 & 1 & 1 \end{vmatrix} = $
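A NumPy cross-check (an illustration, not part of the notes) of the cofactor expansion above:

```python
import numpy as np

B = np.array([[2, 0, -4],
              [0, 0, 3],
              [2, 1, 1]])
print(np.linalg.det(B))  # numerical determinant; compare with the cofactor expansion
```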
Inverse Matrix: A square matrix A is nonsingular, or invertible, if there exists a matrix B such that AB = BA = I. Otherwise A is singular.
The matrix B, if it exists, is unique; it is denoted by $A^{-1}$ and is called the inverse of A.
We have: $A^{-1}$ exists $\iff \det A \neq 0$.
(This implies that $A^{-1}$ does not exist $\iff \det A = 0$.)
We use row reduction (also called Gaussian elimination) to find $A^{-1}$, by changing $(A\,|\,I) \longrightarrow (I\,|\,A^{-1})$.
 1 4
Ex: Find the inverse of A = 
.
 −2 3 
Solution: First, we want to set up ( A | I ) . Then change it to ( I | A-1 ) .
 1 4 1 0


−
2
3
0
1


 1 4
Check: A  A −1 = 

 −2 3 
Ex: A faster way to find the inverse of a 2 × 2 matrix (this formula works only for a 2 × 2 matrix):
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \;\Rightarrow\; A^{-1} = \frac{1}{ad-bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$
$$A = \begin{pmatrix} 1 & 4 \\ -2 & 3 \end{pmatrix} \;\Rightarrow\; A^{-1} = $$
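A NumPy check (not part of the notes) of the inverse found above:

```python
import numpy as np

A = np.array([[1, 4],
              [-2, 3]])
A_inv = np.linalg.inv(A)
print(A_inv)       # numerical inverse of A
print(A @ A_inv)   # should be (numerically) the 2 x 2 identity
```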
Differentiable and Integrable Matrices
$$\frac{dA}{dt} = \left(\frac{da_{ij}}{dt}\right) \qquad\text{and}\qquad \int_a^b A(t)\,dt = \left(\int_a^b a_{ij}(t)\,dt\right)$$
Ex: $A(t) = \begin{pmatrix} 3t^2 & \sin t \\ \cos t & 4 \end{pmatrix} \;\Rightarrow\; \dfrac{dA}{dt} = \begin{pmatrix} 6t & \cos t \\ -\sin t & 0 \end{pmatrix}$
Many of the rules from calculus apply in this setting. For example:
$$\frac{d(CA)}{dt} = C\,\frac{dA}{dt} \;\;(\text{where } C \text{ is a constant matrix}), \qquad \frac{d(A+B)}{dt} = \frac{dA}{dt} + \frac{dB}{dt}, \qquad \frac{d(AB)}{dt} = \frac{dA}{dt}\,B + A\,\frac{dB}{dt}$$
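A small SymPy sketch (assuming SymPy is available; not part of the notes) illustrating entrywise differentiation and integration of the matrix from the example above:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[3*t**2, sp.sin(t)],
               [sp.cos(t), 4]])

print(A.diff(t))               # entrywise derivative of A(t)
print(A.integrate((t, 0, 1)))  # entrywise definite integral over [0, 1]
```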
"I have not failed. I've just found 10,000 ways that won't work." Thomas Edison
Section 7.3: Systems of Linear Equations ; Eigenvalues & Eigenvectors
If we want to solve a system of 2 linear equations in 2 variables, like
$$\begin{cases} a_{11}x_1 + a_{12}x_2 = b_1 \\ a_{21}x_1 + a_{22}x_2 = b_2 \end{cases}$$
then the system can be expressed as $A\vec{x} = \vec{b}$:
$$\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}$$
Notice: Matrix A and vector $\vec{b}$ are known, and vector $\vec{x}$ is the unknown to be solved for.
Procedure:
Step 1: Form the augmented matrix $(A\,|\,\vec{b})$.
Step 2: Transform A to I using row reductions.
Step 3: Find the solution.
Ex 1: Solve $\begin{cases} 2x_1 + 4x_2 = -5 \\ 3x_1 - x_2 = 4 \end{cases}$
Rewrite: $\begin{pmatrix} 2 & 4 \\ 3 & -1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} -5 \\ 4 \end{pmatrix} \;\rightarrow\; \left(\begin{array}{cc|c} 2 & 4 & -5 \\ 3 & -1 & 4 \end{array}\right)$
Or we can use the inverse matrix to solve.
A  x =b
A−1  A  x = A−1  b
I
x = A
x
−1
b
= A−1  b
a b
A=

c d 
1  d −b 


det ( A )  −c a 
1  d −b 
=


ad − bc  −c a 
 A −1 =
2 4 
A=

 3 −1
 A −1 =
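A NumPy sketch (not part of the notes) solving Ex 1 both ways, with np.linalg.solve and via the explicit inverse:

```python
import numpy as np

A = np.array([[2, 4], [3, -1]])
b = np.array([-5, 4])

print(np.linalg.solve(A, b))   # solves A x = b directly
print(np.linalg.inv(A) @ b)    # same solution via x = A^{-1} b
```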
Eigenvalues and Eigenvectors (Important)
The product of a matrix A and a vector $\vec{x}$ is another vector $\vec{b}$, i.e. $A\vec{x} = \vec{b}$. Instead of multiplying by the big matrix A, we will try multiplying by a number r.
That is: $A\vec{x} = \vec{b}$ and $r\vec{x} = \vec{b}$
$$\Rightarrow\; A\vec{x} = r\vec{x} \;\Rightarrow\; A\vec{x} - r\vec{x} = \vec{0} \;\Rightarrow\; (A - rI)\vec{x} = \vec{0}.$$
r is called an _________________ of matrix A, and $\vec{x}$ is called an ________________ corresponding to r.
r is obtained by finding the roots of the polynomial equation from ______________, and $\vec{x}$ is obtained by solving ______________.
Ex 2: Find the eigenvalues and eigenvectors of the given matrix.
 5 −1
a) A = 

3 1 
Solution: We want to set det( A − rI ) = 0 to solve for r.
Now, find the matrix ( A − rI ) =
Next, det( A − rI ) =
Check:
A x = rx 
b) $A = \begin{pmatrix} 3 & 1 \\ -13 & -1 \end{pmatrix}$
Solution:
Find the matrix ( A − rI ) =
Next, det( A − rI ) =
Check:
A x = rx 
"Start where you are. Use what you have. Do what you can." Arthur Ashe
Section 7.5: Homogeneous Linear Systems with Constant Coefficients
 x = a x1 + b x2
 x   a b   x1 
Let’s consider the simple system of D.E. given by  1
  1=
  
 x2 = c x1 + d x2
 x2   c d   x2 
Converting to the matrix vector notation, we obtain:
From Section 2.1, we solved the equation $y' = ay$, i.e. $\dfrac{dy}{dt} = ay$, whose solution is $y = Ce^{at}$.
Thus, we should have a similarity between $\vec{x}' = A\vec{x}$ and $y' = ay$. That is, the solution of $\vec{x}' = A\vec{x}$ should be of the form $\vec{x} = \boldsymbol{\xi}\,e^{rt}$, where an eigenvalue r and coefficient vector $\boldsymbol{\xi}$ are to be determined.
Plugging $\vec{x} = \boldsymbol{\xi}e^{rt}$ into $\vec{x}' = A\vec{x}$, we get
$$(\boldsymbol{\xi}e^{rt})' = A\boldsymbol{\xi}e^{rt} \;\Rightarrow\; r\boldsymbol{\xi}e^{rt} = A\boldsymbol{\xi}e^{rt} \;\Rightarrow\; r\boldsymbol{\xi} = A\boldsymbol{\xi} \;\Rightarrow\; A\boldsymbol{\xi} - r\boldsymbol{\xi} = \vec{0} \;\Rightarrow\; (A - rI)\boldsymbol{\xi} = \vec{0}$$
In this format, r and $\boldsymbol{\xi}$ are the eigenvalues and eigenvectors of matrix A. (Use Section 7.3 to solve for r and $\boldsymbol{\xi}$.)
*********************************************************
For a 2 x 2 matrix with Real Eigenvalues, we have 2 cases of the equilibrium point at the origin:
• If both eigenvalues have opposite signs, the origin is called a saddle point and is always unstable.
• If both eigenvalues have the same sign, the origin is called a node, and is asymptotically stable if the
eigenvalues are negative and unstable if the eigenvalues are positive.
[Phase portrait] The origin is a saddle point, which is unstable because almost all trajectories depart from the origin as t increases.
[Phase portrait] The origin is a node, which is stable because all trajectories go to the origin as t increases.
Ex 1: Find the general solution to $\begin{cases} x_1' = x_1 + x_2 \\ x_2' = 4x_1 - 2x_2 \end{cases}$. Describe the behavior of the solution as $t \to \infty$. Identify the type of the equilibrium point at the origin.
Solution: First, find the eigenvalues by setting $\det(A - rI) = 0$.
Next, find eigenvectors.
Ex 2: Find the general solution to $\begin{cases} x_1' = x_1 - 2x_2 \\ x_2' = 3x_1 - 4x_2 \end{cases}$. Describe the behavior of the solution as $t \to \infty$. Identify the type of the equilibrium point at the origin.
Solution: First, find the eigenvalues by setting $\det(A - rI) = 0$.
Next, find eigenvectors.
"I find that the harder I work, the more luck I seem to have." Thomas Jefferson
Section 7.6: Complex Eigenvalues
We learned in Section 3.3 that if $r_{1,2} = \lambda \pm i\mu$ are complex roots of the characteristic equation $ay'' + by' + cy = 0$, then the general solution is ______________________. This idea carries over to systems of D.E. with complex eigenvalues.
We consider again a homogeneous system of n first order linear equations with constant, real coefficients,
$$\begin{cases} x_1' = a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n \\ x_2' = a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n \\ \quad\vdots \\ x_n' = a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n \end{cases} \;\Longrightarrow\; \vec{x}' = A\vec{x}, \quad\text{where}\quad \vec{x}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix} \;\text{and}\; A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}$$
Recall from Section 7.5 that $\vec{x} = \boldsymbol{\xi}e^{rt}$ is a solution of $\vec{x}' = A\vec{x}$, provided r is an eigenvalue and $\boldsymbol{\xi}$ is an eigenvector of A.
The eigenvalues $r_1, \dots, r_n$ are the roots of $\det(A - rI) = 0$, and the corresponding eigenvectors satisfy $(A - rI)\boldsymbol{\xi} = \vec{0}$.
Let $r_1 = \lambda + i\mu$ be a complex eigenvalue of matrix A, and let $\boldsymbol{\xi}_1 = \vec{a} + i\vec{b}$ be a corresponding eigenvector of $r_1$. Then $\vec{x} = \boldsymbol{\xi}_1 e^{r_1 t}$ is a complex-valued solution.
Thus,
$$\vec{x} = \boldsymbol{\xi}e^{rt} = (\vec{a} + i\vec{b})\,e^{(\lambda + i\mu)t} = (\vec{a} + i\vec{b})\,e^{\lambda t}\,e^{i\mu t} = (\vec{a} + i\vec{b})\,e^{\lambda t}(\cos\mu t + i\sin\mu t)$$
$$\text{(distribute)} \;=\; e^{\lambda t}\left(\vec{a}\cos\mu t + i\,\vec{a}\sin\mu t + i\,\vec{b}\cos\mu t + i^2\,\vec{b}\sin\mu t\right) \qquad (\text{where } i^2 = -1)$$
$$=\; e^{\lambda t}\big[\underbrace{(\vec{a}\cos\mu t - \vec{b}\sin\mu t)}_{\vec{u}(t)} \;+\; i\,\underbrace{(\vec{a}\sin\mu t + \vec{b}\cos\mu t)}_{\vec{v}(t)}\big]$$
As a result, $e^{\lambda t}\vec{u}(t)$ and $e^{\lambda t}\vec{v}(t)$ are two distinct real-valued solutions, and the general solution to $\vec{x}' = A\vec{x}$ is:
$$\vec{x} = C_1 e^{\lambda t}\vec{u}(t) + C_2 e^{\lambda t}\vec{v}(t)$$
Ex: Express the general solution of the given system of equations in terms of real-valued functions.
 x = − x1 − 4 x2
a)  1
 x2 = x1 − x2
Solution:
Find eigenvalues: det( A − rI ) =
Next, find eigenvectors for r = −1 + 2i.
$(A - rI)\boldsymbol{\xi} = \vec{0} \;\Rightarrow$
As $t \to \infty$, $\vec{x} \to \vec{0}$. The origin is a spiral point and is asymptotically stable.
 x = 2 x1 − (5 / 2) x2
b)  1
 x2 = (9 / 5) x1 − x2
Solution:
Find eigenvalues: det( A − rI ) =
Next, find eigenvectors for $r = \dfrac{1}{2} + \dfrac{3}{2}i$.
$(A - rI)\boldsymbol{\xi} = \vec{0} \;\Rightarrow$
As $t \to \infty$, $|\vec{x}| \to \infty$. The origin is a spiral point and is unstable.
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
For second order systems, here are all cases:
When eigenvalues are real and
➢ have opposite signs, $\vec{x} \to \infty$ as $t \to \infty$. The origin is a saddle point and is unstable.
➢ distinct and both positive, $\vec{x} \to \infty$ as $t \to \infty$. The origin is a node and is unstable.
➢ distinct and both negative, $\vec{x} \to \vec{0}$ as $t \to \infty$. The origin is a node and is stable.
When eigenvalues are complex:
• with nonzero positive real part, $\vec{x} \to \infty$ as $t \to \infty$. The origin is a spiral point and is unstable.
• with nonzero negative real part, $\vec{x} \to \vec{0}$ as $t \to \infty$. The origin is a spiral point and is asymptotically stable.
A short code sketch classifying these cases is given below.
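The following is a small Python sketch (not from the notes) that applies this classification numerically with NumPy; it assumes the generic situations listed above (distinct, nonzero eigenvalues, and a nonzero real part in the complex case).

```python
import numpy as np

def classify_origin(A, tol=1e-12):
    """Rough classification of the equilibrium at the origin for x' = A x (2x2, generic cases)."""
    r1, r2 = np.linalg.eigvals(A)
    if abs(r1.imag) > tol:                      # complex conjugate pair
        return "spiral point, " + ("unstable" if r1.real > 0 else "asymptotically stable")
    r1, r2 = r1.real, r2.real                   # real eigenvalues
    if r1 * r2 < 0:
        return "saddle point, unstable"
    return "node, " + ("unstable" if r1 > 0 else "asymptotically stable")

print(classify_origin(np.array([[1, 1], [4, -2]])))    # matrix from Ex 1, Section 7.5
print(classify_origin(np.array([[-1, -4], [1, -1]])))  # matrix from Ex a), Section 7.6
```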
"The starting point of all achievement is desire." Napoleon Hill
Section 7.9: Nonhomogeneous Linear Systems using the Variation of Parameter Method
Let's recall Section 3.6, where we learned how to solve a second order nonhomogeneous equation _____________________________. First, we find the solution of the homogeneous equation _________________________ by letting y = __________.
Then we have the characteristic equation ______________________. Solving the characteristic equation, we obtain the fundamental solution set {$y_1$, $y_2$}. Therefore, we have the general solution (or the complementary solution) $y_c = $ ____________________.
To find the particular solution, we let $y_p = $ ____________________. Plug $y_p$, $y_p'$, $y_p''$ into the nonhomogeneous equation to solve for $u_1$ and $u_2$.
As a result, we get
$$Y(t) = y_p = \left(\int \frac{-g \cdot y_2}{W(y_1, y_2)}\,dt\right) y_1 + \left(\int \frac{g \cdot y_1}{W(y_1, y_2)}\,dt\right) y_2.$$
The solution of nonhomogeneous linear systems using the Variation of Parameter Method can be found similarly, but using matrices.
A nonhomogeneous system of equations can be written as $\vec{x}' = P(t)\vec{x} + \vec{g}(t)$, where
$$P(t)_{n\times n} = \begin{pmatrix} p_{11}(t) & p_{12}(t) & \cdots & p_{1n}(t) \\ p_{21}(t) & p_{22}(t) & \cdots & p_{2n}(t) \\ \vdots & & \ddots & \vdots \\ p_{n1}(t) & p_{n2}(t) & \cdots & p_{nn}(t) \end{pmatrix}, \quad \vec{x}(t)_{n\times 1} = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix}, \quad \vec{g}(t)_{n\times 1} = \begin{pmatrix} g_1(t) \\ g_2(t) \\ \vdots \\ g_n(t) \end{pmatrix}.$$
Let $\Psi(t)_{n\times n} = (\vec{x}_1 \;\; \vec{x}_2 \;\; \cdots \;\; \vec{x}_n)$ be a fundamental matrix for the homogeneous system $\vec{x}' = P(t)\vec{x}$.
Then $\Psi'(t) = P\,\Psi$. Also, $\Psi(t)$ is invertible $\iff$ __________.
Using the Variation of Parameter Method (Section 3.6), assume the particular solution of the nonhomogeneous system $\vec{x}' = P(t)\vec{x} + \vec{g}(t)$ has the form $\vec{x} = \Psi(t)\,\vec{u}(t)$, where $\vec{u}(t)$ is a vector to be determined.
Now, we are going to find $\vec{x}$ by plugging $\vec{x} = \Psi(t)\,\vec{u}(t)$ into $\vec{x}' = P\vec{x} + \vec{g}$.
So we get:
$$(\Psi\,\vec{u})' = P\,\Psi\,\vec{u} + \vec{g}$$
Then use the Product Rule on the left-hand side:
$$\Psi'\,\vec{u} + \Psi\,\vec{u}' = P\,\Psi\,\vec{u} + \vec{g}$$
(Replace $\Psi'$ by $P\,\Psi$, as we found earlier that $\Psi'(t) = P\,\Psi$.)
$$P\,\Psi\,\vec{u} + \Psi\,\vec{u}' = P\,\Psi\,\vec{u} + \vec{g}$$
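Cancelling the $P\,\Psi\,\vec{u}$ terms on both sides leads to the usual variation of parameters formula for systems; the following is a compact summary of the remaining steps (using $\Psi$ for the fundamental matrix as above):
$$\Psi\,\vec{u}' = \vec{g} \;\Rightarrow\; \vec{u}' = \Psi^{-1}\vec{g} \;\Rightarrow\; \vec{u}(t) = \int \Psi^{-1}(t)\,\vec{g}(t)\,dt + \vec{c}, \qquad \vec{x} = \Psi(t)\,\vec{u}(t) = \Psi(t)\,\vec{c} + \Psi(t)\int \Psi^{-1}(t)\,\vec{g}(t)\,dt.$$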
Ex: Find the general solution of the system of equations.
 1 1   e−2t 
a) x = 
x+
t 
 4 −2   −2e 
Solution: We start building the solution $\vec{x}$ by finding the eigenvalues and eigenvectors.
• $\det(A - rI) = \begin{vmatrix} 1-r & 1 \\ 4 & -2-r \end{vmatrix} = (1-r)(-2-r) - 4\cdot 1 = r^2 + r - 6 = (r+3)(r-2) = 0 \;\Rightarrow\; r_1 = -3,\; r_2 = 2$
• For $r_1 = -3$:
$$\begin{pmatrix} 1-(-3) & 1 \\ 4 & -2-(-3) \end{pmatrix}\boldsymbol{\xi}_1 = \vec{0} \;\Rightarrow\; \begin{pmatrix} 4 & 1 \\ 4 & 1 \end{pmatrix}\boldsymbol{\xi}_1 = \vec{0} \;\Rightarrow\; \begin{pmatrix} 4 & 1 \\ 0 & 0 \end{pmatrix}\boldsymbol{\xi}_1 = \vec{0} \;\Rightarrow\; \boldsymbol{\xi}_1 = \begin{pmatrix} 1 \\ -4 \end{pmatrix}$$
• For $r_2 = 2$:
$$\begin{pmatrix} 1-2 & 1 \\ 4 & -2-2 \end{pmatrix}\boldsymbol{\xi}_2 = \vec{0} \;\Rightarrow\; \begin{pmatrix} -1 & 1 \\ 4 & -4 \end{pmatrix}\boldsymbol{\xi}_2 = \vec{0} \;\Rightarrow\; \begin{pmatrix} -1 & 1 \\ 0 & 0 \end{pmatrix}\boldsymbol{\xi}_2 = \vec{0} \;\Rightarrow\; \boldsymbol{\xi}_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$
 1  −3t  e−3t 
 1 2 t  e 2 t 
r 2 t
   x1 = 1  e =    e = 
and
x2 = 2  e =    e =  2t 
−3t 
 −4 
 1
 −4e 
e 
Then we find the fundamental matrix  , its inverse  −1 , the matrix integral  −1  g dt , and the final solution x.
r1 t
•
 =  x1 x2  =
 2 −5   csc t 
x +
 ;
 1 −2   sec t 
b) x = 

t 
2
Solution: We start building the solution $\vec{x}$ by finding the eigenvalues and eigenvectors.
• $\det(A - rI) = \begin{vmatrix} 2-r & -5 \\ 1 & -2-r \end{vmatrix} = (2-r)(-2-r) - 1\cdot(-5) = r^2 - 4 + 5 = r^2 + 1 = 0 \;\Rightarrow\; r = \pm i$
1 = 1 

−5 
2−i
 2 − i −5 

  1 = 5    =  5  +  0  i
• For r1 = i : 
−(2 − i )  = 

1
 1 = 0  
 1 = 0  1 = 
   
−2 − i 
0
 2 =
  2 = 2 − i 
 1
 0
 2   −1
−5 

 5   0  
 5   0  
 5
 5
0
0
r t
x1 =  e 1 =   +   i  e(0+ i ) t =   +   i  1 (cos t + i sin t ) =   cos t + i   sin t + i   cos t + i 2   sin t
 2
 2
 −1
 −1
 2   −1 
 2   −1 
 5 

 5 

0
0
   x = C1  x1 + C2  x2 = C1    cos t −   sin t  + C2    sin t +   cos t  .
 −1
 −1
 2 

 2 

Then we find the fundamental matrix $\Psi$, its inverse $\Psi^{-1}$, the matrix integral $\int \Psi^{-1}\vec{g}\,dt$, and the final solution $\vec{x}$.
• $\Psi = (\vec{x}_1 \;\; \vec{x}_2) = $