1 Systems of equations

Tools from linear algebra are crucial for the analysis of systems of linear equations. This
overview begins with the discussion of a simple linear physical system of coupled springs and
masses. In some cases the equations of motion can be solved by simple algebraic tricks. The
attempt to generalize these methods leads to a discussion of eigenvalues and eigenvectors.
To keep the discussion simple, the focus is limited to systems of two equations with two
unknown functions.
These methods can be used to describe the behavior of nonlinear systems near an equilibrium point.
2 Springs and masses
Linear systems of differential equations also arise in mass and spring models. Suppose we
have two masses and three springs in a line connected to two walls. Let xj be the displacement
of mass mj from the equilibrium position. Suppose the springs have constants k1 , k2 , k3 .
    |--- k1 --- [m1] --- k2 --- [m2] --- k3 ---|

From f = ma we get the equations
    m_1 \frac{d^2 x_1}{dt^2} = k_2(x_2 - x_1) - k_1 x_1 + F_1(t),
    m_2 \frac{d^2 x_2}{dt^2} = -k_3 x_2 - k_2(x_2 - x_1) + F_2(t),      (2.1)
where F_j is an external force acting on mass j.
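As a quick sanity check on these equations of motion, here is a minimal numerical sketch. All parameter values are assumed samples, not from the text: unit masses and spring constants, and no forcing. It integrates (2.1) with a velocity-Verlet step and checks that the total energy stays essentially constant:

```python
import numpy as np

# assumed sample parameters; F1 = F2 = 0 (no forcing)
m1, m2 = 1.0, 1.0
k1, k2, k3 = 1.0, 1.0, 1.0

def accel(x1, x2):
    # accelerations from (2.1): m_j x_j'' = spring forces
    a1 = (k2*(x2 - x1) - k1*x1) / m1
    a2 = (-k3*x2 - k2*(x2 - x1)) / m2
    return a1, a2

# velocity-Verlet integration from x1(0) = 1, x2(0) = 0, both masses at rest
dt, steps = 1e-3, 10_000
x1, x2, v1, v2 = 1.0, 0.0, 0.0, 0.0
for _ in range(steps):
    a1, a2 = accel(x1, x2)
    x1 += v1*dt + 0.5*a1*dt*dt
    x2 += v2*dt + 0.5*a2*dt*dt
    b1, b2 = accel(x1, x2)
    v1 += 0.5*(a1 + b1)*dt
    v2 += 0.5*(a2 + b2)*dt

# kinetic plus spring potential energy; initially 0.5*(k1 + k2) = 1.0
E = 0.5*(m1*v1**2 + m2*v2**2 + k1*x1**2 + k2*(x2 - x1)**2 + k3*x2**2)
print(E)  # stays very close to 1.0
```

The symplectic Verlet step is chosen here precisely because it conserves energy well over long runs, which makes energy a convenient correctness check.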
First collect the terms with the same xj to get
    m_1 \frac{d^2 x_1}{dt^2} + (k_1 + k_2)x_1 - k_2 x_2 = F_1(t),
    m_2 \frac{d^2 x_2}{dt^2} - k_2 x_1 + (k_2 + k_3)x_2 = F_2(t).      (2.2)
Let’s make some simplifying assumptions to focus on an easier case. First drop the
external forcing, so F = 0. Also assume that m1 = m2 = m and k1 = k2 = k3 = k. The
system becomes
    m \frac{d^2 x_1}{dt^2} + 2k x_1 - k x_2 = 0,
    m \frac{d^2 x_2}{dt^2} - k x_1 + 2k x_2 = 0.      (2.3)
Adding and subtracting the equations of (2.3) gives
    m \frac{d^2 (x_1 + x_2)}{dt^2} + k(x_1 + x_2) = 0,
    m \frac{d^2 (x_1 - x_2)}{dt^2} + 3k(x_1 - x_2) = 0.      (2.4)
That is, we can decouple the system by introducing new variables
    y = x_1 + x_2, \qquad z = x_1 - x_2.
The equations for y and z are
    m \frac{d^2 y}{dt^2} + k y = 0,
    m \frac{d^2 z}{dt^2} + 3k z = 0.
These are familiar equations, with general solutions
    y(t) = x_1(t) + x_2(t) = \alpha_1 \cos\left(\sqrt{k/m}\, t\right) + \alpha_2 \sin\left(\sqrt{k/m}\, t\right),
    z(t) = x_1(t) - x_2(t) = \beta_1 \cos\left(\sqrt{3k/m}\, t\right) + \beta_2 \sin\left(\sqrt{3k/m}\, t\right).
We can recover the original variables by adding and subtracting these expressions.
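The decoupling can be verified numerically. Below is a sketch with assumed sample values (m = 1, k = 4, and arbitrary integration constants): it builds y and z from the formulas above, recovers x_1 and x_2, and checks the first equation of (2.3) by finite differences.

```python
import numpy as np

# assumed sample parameters and integration constants
m, k = 1.0, 4.0
w1, w2 = np.sqrt(k/m), np.sqrt(3*k/m)
a1, a2, b1, b2 = 1.0, 0.0, 0.5, -0.5

t = np.linspace(0.0, 10.0, 4001)
y = a1*np.cos(w1*t) + a2*np.sin(w1*t)   # y = x1 + x2
z = b1*np.cos(w2*t) + b2*np.sin(w2*t)   # z = x1 - x2

# recover the original variables by adding and subtracting
x1, x2 = (y + z)/2.0, (y - z)/2.0

# residual of the first equation of (2.3): m x1'' + 2k x1 - k x2
dt = t[1] - t[0]
x1dd = np.gradient(np.gradient(x1, dt), dt)
resid = m*x1dd + 2*k*x1 - k*x2
print(np.max(np.abs(resid[2:-2])))  # small (finite-difference error only)
```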
2.1 Systems with 2 × 2 matrices
Returning to the more general problem (2.2), we’d like to simplify the presentation of these
equations using matrix-vector notation.
We will focus on column vectors with two components,

    X = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},

and 2 × 2 matrices. A 2 × 2 matrix has the form

    B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}.
Addition of matrices is componentwise. The zero 2 × 2 matrix is just
    0 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.
Certain matrices and vectors can be multiplied. Suppose
    B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}, \qquad X = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.

Then the product BX is a vector given by

    BX = \begin{pmatrix} b_{11} x_1 + b_{12} x_2 \\ b_{21} x_1 + b_{22} x_2 \end{pmatrix}.
Notice that this is the dot product of the rows of B with the column of X. There is a 2 × 2
multiplicative identity matrix,

    I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
That is, BI = IB = B for all 2 × 2 matrices B.
Define vector functions
    X(t) = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \qquad F(t) = \begin{pmatrix} F_1 \\ F_2 \end{pmatrix},

and constant matrices

    M = \begin{pmatrix} m_1 & 0 \\ 0 & m_2 \end{pmatrix}, \qquad A = \begin{pmatrix} k_1 + k_2 & -k_2 \\ -k_2 & k_2 + k_3 \end{pmatrix}.
With this notation, the system (2.2) can be written as
    (NH) \qquad M \frac{d^2}{dt^2} X(t) + A X(t) = F(t).

If F = 0 the problem is

    (H) \qquad M \frac{d^2}{dt^2} X(t) + A X(t) = 0.
Let’s try to find special solutions of (H) having the form X(t) = \cos(\omega t)V (or X(t) = \sin(\omega t)V), where V is a constant vector,

    V = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}.
Then

    -M \omega^2 \cos(\omega t) V + A \cos(\omega t) V = 0,

or, after division by \cos(\omega t),

    -M \omega^2 V + A V = 0.

Since

    M^{-1} = \begin{pmatrix} 1/m_1 & 0 \\ 0 & 1/m_2 \end{pmatrix},

the equation becomes

    M^{-1} A V = \omega^2 V,

or, using the multiplicative identity I,

    [M^{-1} A - \omega^2 I] V = 0.      (2.5)
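Numerically, the frequencies ω come out of an eigenvalue computation on B = M^{-1}A. Here is a sketch with assumed sample masses and spring constants (not from the text):

```python
import numpy as np

# assumed sample parameters
m1, m2 = 1.0, 2.0
k1, k2, k3 = 3.0, 1.0, 2.0

M = np.diag([m1, m2])
A = np.array([[k1 + k2, -k2],
              [-k2, k2 + k3]])

B = np.linalg.inv(M) @ A
w2, V = np.linalg.eig(B)   # eigenvalues of B are the omega^2 values

# each column of V is a mode shape satisfying B V = omega^2 V
omega = np.sqrt(w2)
print(omega)
```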
Let’s see how our previous example looks in this form. The system (2.3),

    m \frac{d^2 x_1}{dt^2} + 2k x_1 - k x_2 = 0,
    m \frac{d^2 x_2}{dt^2} - k x_1 + 2k x_2 = 0,

first becomes

    \begin{pmatrix} m & 0 \\ 0 & m \end{pmatrix} \frac{d^2}{dt^2} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 2k & -k \\ -k & 2k \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
This is equivalent to

    \frac{d^2}{dt^2} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 2k/m & -k/m \\ -k/m & 2k/m \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
Look for solutions X(t) = \cos(\omega t)V, where V is a constant vector,

    V = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}.
Our computations give
    -\omega^2 \cos(\omega t) \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} + \cos(\omega t) \begin{pmatrix} 2k/m & -k/m \\ -k/m & 2k/m \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
After discarding the cosines, the equation can be written as
    \begin{pmatrix} 2k/m & -k/m \\ -k/m & 2k/m \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \omega^2 \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}.
Based on our previous work the guess
    \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}

seems reasonable. In fact we have

    \begin{pmatrix} 2k/m & -k/m \\ -k/m & 2k/m \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} k/m \\ k/m \end{pmatrix} = (k/m) \begin{pmatrix} 1 \\ 1 \end{pmatrix},
which works with \omega^2 = k/m. Similarly, the attempt

    \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}

leads to

    \begin{pmatrix} 2k/m & -k/m \\ -k/m & 2k/m \end{pmatrix} \begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 3k/m \\ -3k/m \end{pmatrix} = (3k/m) \begin{pmatrix} 1 \\ -1 \end{pmatrix},

which works with \omega^2 = 3k/m.
Letting B = M^{-1}A and \lambda = \omega^2, we have the general problem of understanding for which \lambda the equation

    (B - \lambda I) V = 0      (2.6)

has solution vectors V which are not the zero vector.
Here is a fundamental result from Linear Algebra.
Theorem 2.1. There is a nonzero vector V satisfying the equation (B - \lambda I)V = 0 if and only if \lambda is a root of

    \det(B - \lambda I) = (b_{11} - \lambda)(b_{22} - \lambda) - b_{21} b_{12} = \lambda^2 - (b_{11} + b_{22})\lambda + b_{11} b_{22} - b_{21} b_{12} = 0.
The polynomial det(B − λI) is called the characteristic polynomial of B. The roots λ are
called eigenvalues of B, and the (nonzero) vectors V are called eigenvectors. In our example
with
2k/m −k/m
−k/m 2k/m
we found, by guessing, that the eigenvalues are
λ2 = ω22 = 3k/m,
λ1 = ω12 = k/m,
with corresponding eigenvectors
1
V1 =
,
1
V2 =
1
.
−1
In this example the characteristic polynomial is
    \det(B - \lambda I) = \lambda^2 - (4k/m)\lambda + 3k^2/m^2 = (\lambda - 3k/m)(\lambda - k/m).

The roots are 3k/m and k/m. Once you have the eigenvalues, it is a routine exercise to solve the eigenvalue equation for the eigenvectors V. Warning: things get tricky if \lambda_1 = \lambda_2.
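The guessed eigenpairs can be confirmed with `np.linalg.eig`. A minimal sketch, assuming sample values k = 2 and m = 0.5, so that k/m = 4 and 3k/m = 12:

```python
import numpy as np

k, m = 2.0, 0.5   # assumed sample values
B = np.array([[2*k/m, -k/m],
              [-k/m, 2*k/m]])

lam, V = np.linalg.eig(B)
order = np.argsort(lam)          # sort eigenvalues in increasing order
lam, V = lam[order], V[:, order]

print(lam)                # k/m and 3k/m, i.e. 4 and 12
print(V[:, 0] / V[0, 0])  # proportional to (1, 1)
print(V[:, 1] / V[0, 1])  # proportional to (1, -1)
```

Note that `eig` returns unit-length eigenvectors, so they are rescaled above to match the guessed (1, ±1) form.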