Math 427
Introduction to Dynamical Systems
Winter 2012
Lecture Notes
by
Benjamin C. Wallace
Instructor
Prof. Dan Offin
Queen’s University
Dept. of Mathematics and Statistics
1 Introduction
The goal of this course is to understand the long term behaviour of systems that function according to deterministic laws. For example, given a family of functions $f_\mu : \mathbb{R}^n \to \mathbb{R}^n$ parametrized by $\mu \in \mathbb{R}^l$ and given an initial condition $x^0 = (x^0_1, \ldots, x^0_n) \in \mathbb{R}^n$, we may study the forward orbit of $x^0$ under $f_\mu$, that is, the sequence $(x^m)_{m=0}^{\infty}$, where $x^{m+1} = f_\mu(x^m)$, so that $x^m = f_\mu^m(x^0)$. Similarly, we may wish to study the backward orbit $(x^m)_{m=0}^{-\infty}$. Note that when $f_\mu$ is not invertible, the inverse image $x^{-m}$ is actually the set $f_\mu^{-m}(x^0) = \{y \in \mathbb{R}^n : f_\mu^m(y) = x^0\}$. In this case, there are many possible backwards orbits we could study. We call $x^m$ the state of the system at time $m$. Since $m$ runs over a discrete set, this is a discrete-time system.
In the continuous case, rather than defining a system via iteration of some function, we use a differential equation. We will be dealing with first-order ordinary differential equations (ODEs). For example, for $x$, $x^0$, and $f_\mu$ as before, we may study the system given by $\frac{dx}{dt} = f_\mu(x(t))$. If we choose an initial condition $x^0$, then we get an initial value problem or IVP. For the continuously differentiable $f_\mu$ we consider, such a problem is known to have a unique solution $t \mapsto \Phi_\mu(t, x^0)$. In continuous time, we define the forward orbit of $x^0$ as the set $\{\Phi_\mu(t, x^0) : t \geq 0\}$ and the backwards orbit as the set $\{\Phi_\mu(t, x^0) : t < 0\}$. We will also discuss equilibrium points of such systems, i.e. points $x^*$ at which $f(x^*) = 0$.
Example 1. Let $f_\mu(x) = 1 + \mu x$ for $\mu \neq 0$ and consider the simple discrete-time dynamical system given by the recurrence relation
\[ x_{n+1} = f_\mu(x_n). \]
It is easy to see that $x_n = f_\mu^n(x) = 1 + \mu + \ldots + \mu^{n-1} + \mu^n x$. First we notice that when $\mu \neq 1$, we have $f_\mu\!\left(\frac{1}{1-\mu}\right) = \frac{1}{1-\mu}$. Let us look at some other possibilities.

1. For $|\mu| < 1$, we see that $f_\mu^n(x) \to \frac{1}{1-\mu}$.

2. For $\mu = 1$, $f_\mu^n(x) \to \infty$.

3. For $\mu = -1$,
\[ f_\mu^n(x) = \begin{cases} x & n \text{ even} \\ 1 - x & n \text{ odd.} \end{cases} \]
So if $x = 1 - x = \frac{1}{2}$, the sequence $(f_\mu^n(x))_{n=1}^{\infty}$ is constant at the fixed point $\frac{1}{2} = \frac{1}{1-\mu}$. Otherwise, it diverges but contains two convergent subsequences, one converging to $x$, the other to $1 - x$.

4. For $\mu > 1$, the limit depends on the value of $x$.

(a) When $x > \frac{1}{1-\mu}$, $f_\mu^n(x) \to \infty$.

(b) When $x < \frac{1}{1-\mu}$, $f_\mu^n(x) \to -\infty$.

5. For $\mu < -1$, the sequence again diverges (except at the fixed point $\frac{1}{1-\mu}$).
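As a quick numerical illustration of these cases (a sketch, not part of the original notes; the parameter values and function names are arbitrary choices), one can simply iterate $f_\mu$ and watch the long-term behaviour:

```python
# Iterate the affine map f_mu(x) = 1 + mu*x and observe the long-term behaviour.
# Illustrative sketch; the chosen values of mu and x0 are arbitrary.

def f(mu, x):
    return 1 + mu * x

def orbit(mu, x0, n):
    """Return the forward orbit x0, f(x0), ..., f^n(x0)."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(mu, xs[-1]))
    return xs

for mu in (0.5, -0.5, 1.0, -1.0, 2.0):
    xs = orbit(mu, x0=0.3, n=50)
    fixed = 1 / (1 - mu) if mu != 1 else float("inf")
    print(f"mu = {mu:5.1f}: last iterates {xs[-2:]}, fixed point 1/(1-mu) = {fixed}")

# For |mu| < 1 the iterates approach 1/(1-mu); for mu = -1 they oscillate
# between two values; for |mu| > 1 they diverge (unless x0 is the fixed point).
```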
The following definitions apply to a discrete-time dynamical system $x_{n+1} = f(x_n)$, where $f : \mathbb{R} \to \mathbb{R}$.

Definition. A point $x_0$ is periodic (with period $n$) if $f^n(x_0) = x_0$ and $f^j(x_0) \neq x_0$ for $j = 1, \ldots, n-1$. If $n = 1$, $x_0$ is called a fixed point.
So when $\mu \neq 1$ in the previous example, $\frac{1}{1-\mu}$ is a fixed point of $f_\mu$.
Definition. We say that a point $x$ is forward asymptotic to a fixed point $p$ when $|f^n(x) - p| \to 0$ as $n \to \infty$. The set of points forward asymptotic to $p$ is called the stable set of $p$ and is denoted $W^s(p)$.

In the previous example, for $|\mu| < 1$, every point is forward asymptotic to the fixed point $\frac{1}{1-\mu}$, so $W^s\!\left(\frac{1}{1-\mu}\right) = \mathbb{R}$.

Definition. A point $x$ is backwards asymptotic to a fixed point $p$ when there is a sequence $(x_{-n})_{n=0}^{\infty}$ with $x_0 = x$ and $x_{-n} \in f^{-n}(x)$ for each $n$, such that $|x_{-n} - p| \to 0$ as $n \to \infty$. The set of points backwards asymptotic to $p$ is called the unstable set of $p$ and is denoted $W^u(p)$.
2 Linear Systems
In this section, we consider systems of linear differential equations of the form ẋ(t) = A(t)x(t),
where x(t) = (x1 (t), . . . , xn (t))T ∈ Rn and A(t) is an n × n matrix, for each t. We shall begin
by focusing on the case of autonomous systems, i.e. ones in which A(t) = A. In particular, we
shall study the system
\[ \begin{cases} \dot{x}(t) = Ax(t) \\ x(0) = x_0. \end{cases} \tag{1} \]
Recall that such systems always have a unique solution.
First, suppose A has n distinct real eigenvalues λ1 , . . . , λn with associated linearly independent eigenvectors v1 , . . . , vn . Recall that in this situation, these eigenvectors form a basis of Rn
and can be used to diagonalize A via the change of basis


\[ B^{-1}AB = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix}, \]
where $B = \begin{pmatrix} v_1 & \cdots & v_n \end{pmatrix}$ is an invertible matrix. Thus, letting $y = B^{-1}x$, so that $x = By$, we get
\[ \dot{y}(t) = B^{-1}\dot{x}(t) = B^{-1}Ax(t) = B^{-1}ABy(t). \]
That is, if $y = (y_1, \ldots, y_n)$, we have $\dot{y}_i(t) = \lambda_i y_i(t)$ for $i = 1, \ldots, n$. Thus, for each $i$, we can solve for $y_i(t) = e^{\lambda_i t} y_i(0)$, or
\[ y(t) = \begin{pmatrix} e^{\lambda_1 t} & & \\ & \ddots & \\ & & e^{\lambda_n t} \end{pmatrix} B^{-1} x(0). \]
But then it follows that
\[ x(t) = B \begin{pmatrix} e^{\lambda_1 t} & & \\ & \ddots & \\ & & e^{\lambda_n t} \end{pmatrix} B^{-1} x(0). \]
Alternately setting $x(0) = Be_1, \ldots, Be_n$, where $e_1, \ldots, e_n$ is the standard basis of $\mathbb{R}^n$, we get $n$ linearly independent solutions $x^1, \ldots, x^n$ to the system, given by
\[ x^i(t) = B \begin{pmatrix} e^{\lambda_1 t} & & \\ & \ddots & \\ & & e^{\lambda_n t} \end{pmatrix} e_i = e^{\lambda_i t} v_i. \]
We can form the non-singular matrix of these solutions
\[ X(t) = \begin{pmatrix} v_1 & \cdots & v_n \end{pmatrix} \begin{pmatrix} e^{\lambda_1 t} & & \\ & \ddots & \\ & & e^{\lambda_n t} \end{pmatrix}. \]

Definition. A fundamental matrix solution to the system $\dot{x}(t) = Ax(t)$, $x \in \mathbb{R}^n$, is a mapping $t \mapsto \Phi(t) \in L(\mathbb{R}^n, \mathbb{R}^n)$ such that $\dot{\Phi} = A\Phi$.

So $X$, as defined above, is a fundamental matrix solution to (1).
The fundamental matrix solution X of a linear system ẋ = Ax has the following properties,
which are easily seen from the definition.
1. $\frac{d}{dt}X(t) = AX(t)$.

2. The solution to the IVP $\dot{x} = Ax$ with initial condition $x(0) = x_0$ is given by $x(t) = X(t)X^{-1}(0)x_0$.

3. If $Y(t) = X(t)X^{-1}(0)$, then $Y(t)$ is a fundamental matrix solution to $\dot{x} = Ax$ and $Y(t+s) = Y(t)Y(s)$.

4. With $Y(t)$ as above, $(Y(t))^{-1} = Y(-t)$.
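The following numerical sketch (not part of the original notes; the matrix $A$ is an arbitrary example with distinct real eigenvalues) builds $X(t) = B\,\mathrm{diag}(e^{\lambda_i t})$ from the eigen-decomposition and checks properties 1 and 2 above:

```python
import numpy as np

# Arbitrary example matrix with distinct real eigenvalues (illustrative only).
A = np.array([[1.0, 2.0],
              [0.0, -3.0]])
lam, B = np.linalg.eig(A)          # columns of B are eigenvectors v_1, ..., v_n

def X(t):
    """Fundamental matrix solution X(t) = B diag(e^{lambda_i t})."""
    return B @ np.diag(np.exp(lam * t))

x0 = np.array([1.0, 1.0])
t, h = 0.7, 1e-6

# Property 1: dX/dt = A X(t), checked by a central finite difference.
dX = (X(t + h) - X(t - h)) / (2 * h)
print(np.allclose(dX, A @ X(t), atol=1e-4))        # True

# Property 2: the IVP solution is x(t) = X(t) X(0)^{-1} x0.
print(X(t) @ np.linalg.inv(X(0)) @ x0)
```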
2.1 Complex Eigenvalues
Next, suppose $A$ has a complex eigenvalue $\lambda_j = a_j + i b_j$ with eigenvector $v_j = u_j + i w_j$. As before, we can diagonalize $A$ and obtain the solution
\begin{align*}
x^j(t) &= e^{\lambda_j t} v_j \\
&= e^{(a_j + i b_j)t}(u_j + i w_j) \\
&= e^{a_j t}\big((\cos(b_j t)u_j - \sin(b_j t)w_j) + i(\cos(b_j t)w_j + \sin(b_j t)u_j)\big).
\end{align*}
But by linearity of our system,
\[ x_j^1(t) = e^{a_j t}(\cos(b_j t)u_j - \sin(b_j t)w_j) \]
and
\[ x_j^2(t) = e^{a_j t}(\sin(b_j t)u_j + \cos(b_j t)w_j) \]
are also solutions, and moreover, they are real and linearly independent. We can write these two solutions together as
\[ \begin{pmatrix} x_j^1 & x_j^2 \end{pmatrix} = e^{a_j t} \begin{pmatrix} u_j & w_j \end{pmatrix} \begin{pmatrix} \cos(b_j t) & \sin(b_j t) \\ -\sin(b_j t) & \cos(b_j t) \end{pmatrix}. \]
More generally, if $A$ has both real and complex eigenvalues (but nevertheless has $n$ linearly independent eigenvectors), we get the fundamental matrix solution
\[ X(t) = \begin{pmatrix} v_1 & \cdots & v_n \end{pmatrix} \begin{pmatrix} A_1(t) & & \\ & \ddots & \\ & & A_n(t) \end{pmatrix}, \]
where $A_i(t) = e^{\lambda_i t}$ when $\lambda_i \in \mathbb{R}$, and
\[ A_i(t) = e^{a_i t} \begin{pmatrix} \cos(b_i t) & \sin(b_i t) \\ -\sin(b_i t) & \cos(b_i t) \end{pmatrix} \]
with the column $v_i$ replaced by the pair $\begin{pmatrix} u_i & w_i \end{pmatrix}$ when $\lambda_i = a_i + i b_i \in \mathbb{C} \setminus \mathbb{R}$ and $v_i = u_i + i w_i$.
2.2 The Matrix Exponential
Here we shall give an alternative formulation of the fundamental matrix solution. First, let
us review some basic notions from analysis while establishing notation. We will denote the
Euclidean norm by | · |.
Definition. We define the operator norm $\|\cdot\| : L(\mathbb{R}^n, \mathbb{R}^n) \to \mathbb{R}$ by
\[ \|T\| = \max_{|x| \leq 1} |Tx|. \]
One can check that $\|\cdot\|$ indeed satisfies the properties of a norm.
Lemma 2.1. For any $T \in L(\mathbb{R}^n, \mathbb{R}^n)$ and $x \in \mathbb{R}^n$,
\[ |Tx| \leq \|T\|\,|x|. \]

Proof. The case $x = 0$ is trivial, so we assume $x \neq 0$. Letting $y = \frac{1}{|x|}x$, we see that since $|y| = 1$,
\[ \|T\| \geq |Ty| = \frac{1}{|x|}|Tx|, \]
from which the result follows.
We can now define the matrix exponential.

Definition. We define the matrix exponential by
\[ e^T = \sum_{n=0}^{\infty} \frac{T^n}{n!}. \]
Notice that
\[ \left\| \sum_{n=N+1}^{\infty} \frac{T^n}{n!} \right\| \leq \sum_{n=N+1}^{\infty} \frac{\|T\|^n}{n!} = e^{\|T\|} - \sum_{n=0}^{N} \frac{\|T\|^n}{n!} \to 0 \]
as $N \to \infty$, so the series for $e^T$ is absolutely and uniformly convergent for any $T \in L(\mathbb{R}^n, \mathbb{R}^n)$.
Proposition 2.2 (Properties of the matrix exponential).

1. $e^0 = I$.

2. If $AB = BA$, then $e^{A+B} = e^A e^B$.

3. $e^{A(t+s)} = e^{At}e^{As}$.

4. $\det(e^{At}) \neq 0$.

5. If $B$ is invertible, then $e^{BAB^{-1}} = B e^A B^{-1}$.

6. $\frac{d}{dt}e^{At} = Ae^{At}$.

Proof. Part (2) follows from the fact that the binomial theorem applies to commuting matrices. Part (6) follows from (2) and the uniform convergence of the matrix exponential. The other parts follow quite directly from the definition of the matrix exponential.

Corollary 2.3. The matrix exponential $e^{At}$ is the fundamental matrix solution to $\dot{x} = Ax$.
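As a sanity check (an illustrative sketch, not from the notes; the matrix is arbitrary and scipy.linalg.expm stands in for the series), one can verify the group property and the fact that $e^{At}x_0$ solves the IVP:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -2.0],
              [1.0, -1.0]])        # arbitrary example matrix
x0 = np.array([1.0, 0.5])
t, s, h = 0.4, 0.9, 1e-6

# Group property e^{A(t+s)} = e^{At} e^{As} (Proposition 2.2, part 3).
print(np.allclose(expm(A * (t + s)), expm(A * t) @ expm(A * s)))   # True

# x(t) = e^{At} x0 solves x' = A x: check the derivative by finite differences.
x = lambda tt: expm(A * tt) @ x0
dx = (x(t + h) - x(t - h)) / (2 * h)
print(np.allclose(dx, A @ x(t), atol=1e-4))                        # True
```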
2.3 Generalized Eigenvectors
Recall the following definitions.
Definition. Let $A$ be a matrix. The algebraic multiplicity of an eigenvalue $\lambda$ of $A$ is its multiplicity as a root of the characteristic polynomial $p(x) = \det(A - xI)$ of $A$. The geometric multiplicity of $\lambda$ is the dimension of its associated eigenspace, i.e. $\dim(\mathrm{Ker}(A - \lambda I))$.
In this section we consider the case where A does not have n linearly independent eigenvectors. For instance, let λ1 , . . . , λn be the eigenvalues of A, but suppose one of these, call it λ, has
geometric multiplicity less than its algebraic multiplicity. In this case, we must “enlarge” the
eigenspace of λ.
Definition. If m is the algebraic multiplicity of an eigenvalue λ of a matrix A, then we call a
vector v ∈ Ker((A − λI)m ) a generalized eigenvector. The set of all generalized eigenvectors of
λ is called the generalized eigenspace of λ and is a vector space.
The following lemma is quite easy to see.
Lemma 2.4. For any $n \times n$ matrix $S$, we have
\[ \mathrm{Ker}(S) \subset \mathrm{Ker}(S^2) \subset \ldots \subset \mathrm{Ker}(S^k) = \mathrm{Ker}(S^{k+1}) = \ldots \]
for some $k$.

So for some $k$, we have
\[ \mathrm{Ker}((A - \lambda I)^{k-1}) \subset \mathrm{Ker}((A - \lambda I)^k) = \mathrm{Ker}((A - \lambda I)^{k+1}). \]
For such $k$, we can find $k$ linearly independent non-zero vectors $\xi_1, \ldots, \xi_k$ satisfying
\[ \xi_i \in \mathrm{Ker}((A - \lambda I)^i), \qquad i = 1, \ldots, k. \]
We call such $\xi_1, \ldots, \xi_k$ a Jordan chain. Let us assume for now that $\lambda \in \mathbb{R}$, so that the $\xi_i \in \mathbb{R}^n$. Notice that
\begin{align*}
A\xi_1 &= \lambda \xi_1 \\
A\xi_2 &= \lambda \xi_2 + \xi_1 \\
&\;\;\vdots \\
A\xi_k &= \lambda \xi_k + \xi_{k-1},
\end{align*}
or in matrix form,
\[ A \begin{pmatrix} \xi_1 & \cdots & \xi_k \end{pmatrix} = \begin{pmatrix} \xi_1 & \cdots & \xi_k \end{pmatrix} \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}. \]
Now suppose $\lambda = a + ib \in \mathbb{C}$, so that $\xi_i = u_i + i w_i$ for $i = 1, \ldots, k$. Proceeding as before, we get
\begin{align*}
Au_1 + iAw_1 &= (a + ib)(u_1 + iw_1) = (au_1 - bw_1) + i(bu_1 + aw_1), \\
Au_j + iAw_j &= (a + ib)(u_j + iw_j) + (u_{j-1} + iw_{j-1}) \\
&= (au_j - bw_j + u_{j-1}) + i(bu_j + aw_j + w_{j-1}),
\end{align*}
so that
\begin{align*}
Au_1 &= au_1 - bw_1 \\
Aw_1 &= bu_1 + aw_1 \\
Au_j &= au_j - bw_j + u_{j-1} \\
Aw_j &= bu_j + aw_j + w_{j-1}
\end{align*}
for $j = 2, \ldots, k$. In matrix form, this is
\[ A \begin{pmatrix} u_1 & w_1 & \cdots & u_k & w_k \end{pmatrix} = \begin{pmatrix} u_1 & w_1 & \cdots & u_k & w_k \end{pmatrix} \begin{pmatrix} D & I_2 & & \\ & D & \ddots & \\ & & \ddots & I_2 \\ & & & D \end{pmatrix}, \]
where
\[ D = \begin{pmatrix} a & b \\ -b & a \end{pmatrix}. \]
Matrices of the form
\[ \begin{pmatrix} D & I_2 & & \\ & D & \ddots & \\ & & \ddots & I_2 \\ & & & D \end{pmatrix} \quad \text{or} \quad \lambda I + N = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix} \]
are known as elementary Jordan blocks.
Theorem 2.5 (The Jordan normal form). For any matrix $A$, there is a basis $v_1, \ldots, v_n$ of $\mathbb{R}^n$ made up of real generalized eigenvectors of $A$ and real and imaginary parts of complex generalized eigenvectors of $A$ such that, letting
\[ B = \begin{pmatrix} v_1 & \cdots & v_n \end{pmatrix}, \]
we get
\[ B^{-1}AB = \begin{pmatrix} J_1 & & \\ & \ddots & \\ & & J_k \end{pmatrix}, \]
a block diagonal matrix with each $J_i$ an elementary Jordan block.
Corollary 2.6. Let $A$ be a matrix and let $v_1, \ldots, v_n$ be a basis of $\mathbb{R}^n$ such that $B^{-1}AB$ is in Jordan normal form, where $B = \begin{pmatrix} v_1 & \cdots & v_n \end{pmatrix}$. Then
\[ e^{At} = B \begin{pmatrix} e^{J_1 t} & & \\ & \ddots & \\ & & e^{J_k t} \end{pmatrix} B^{-1}. \]

This corollary is very useful to us because each $e^{J_i t}$ can in fact be computed very easily. For instance, Jordan blocks of real eigenvalues are of the form
\[ J_i = \lambda_i I + N_i, \]
where $N_i$ is a nilpotent matrix, that is, $N_i^m = 0$ for some $m$. Since $\lambda_i I$ and $N_i$ commute, $e^{J_i t} = e^{\lambda_i t} e^{N_i t}$, and $e^{N_i t}$ can be computed by a finite sum.
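For a single real Jordan block this finite sum is easy to check numerically. The sketch below (not from the notes; the eigenvalue and block size are arbitrary) compares the truncated series $e^{\lambda t}\sum_{k < m} (Nt)^k / k!$ with scipy's expm:

```python
import numpy as np
from scipy.linalg import expm

lam, m = -0.5, 3                        # eigenvalue and block size (arbitrary)
J = lam * np.eye(m) + np.eye(m, k=1)    # elementary Jordan block lam*I + N
N = J - lam * np.eye(m)                 # nilpotent part, N^m = 0

def expJ(t):
    """e^{Jt} = e^{lam t} (I + Nt + (Nt)^2/2! + ... + (Nt)^{m-1}/(m-1)!)."""
    S = np.zeros((m, m))
    term = np.eye(m)
    for k in range(m):
        S += term
        term = term @ (N * t) / (k + 1)
    return np.exp(lam * t) * S

t = 1.3
print(np.allclose(expJ(t), expm(J * t)))   # True: the series terminates
```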
2.4 Invariant Subspaces
Definition. Let T : Rn → Rn be a linear map. A subspace U ⊆ Rn is said to be invariant
under T if T U ⊆ U .
For instance, any eigenspace or generalized eigenspace of an eigenvalue of a matrix A is
clearly invariant under A.
Definition. The flow of a linear system ẋ = Ax is the linear mapping given by the fundamental
matrix solution eAt .
We will be especially interested in subspaces invariant under the flow of a linear system.
Theorem 2.7. A subspace U ⊆ Rn is invariant under eAt if and only if it is invariant under
A.
Example 2. Let
\[ A = \begin{pmatrix} -2 & -1 & 0 \\ 1 & -2 & 0 \\ 0 & 0 & 3 \end{pmatrix}. \]
This matrix has eigenvalues $\lambda_1 = -2 + i$ (together with its conjugate $-2 - i$) and $\lambda_2 = 3$. Corresponding to $\lambda_1$ is the complex eigenvector
\[ v = v^1 + i v^2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + i \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \]
and corresponding to $\lambda_2$ is the real eigenvector
\[ v^3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}. \]
We thus get invariant subspaces
\[ U_1 = \mathrm{span}(v^1, v^2) \quad \text{and} \quad U_2 = \mathrm{span}(v^3). \]
Also,
\[ e^{At} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} e^{J_1 t} & 0 \\ 0 & e^{3t} \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \]
where
\[ e^{J_1 t} = e^{-2t} \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix}. \]
Now we can see that eAt acts on the invariant subspace U1 by performing a counterclockwise
rotation combined with an exponential contraction. Hence, the orbit of a point in U1 under
eAt forms a spiral asymptotically approaching the origin. Meanwhile, its action on U2 is an
exponential dilation. That is, points on U2 get “stretched” away from the origin. The action on
any other point in R3 is then a linear combination of these two actions; an exponential stretch
upwards or downwards combined with an exponential contraction and rotation inwards, towards
U2 .
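The following small numerical check (an illustrative sketch, not part of the notes; it uses scipy.linalg.expm rather than the closed form above) confirms this picture: points of $U_1$ spiral in toward the origin while staying in $U_1$, and points of $U_2$ are stretched along $U_2$.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0, -1.0, 0.0],
              [ 1.0, -2.0, 0.0],
              [ 0.0,  0.0, 3.0]])

p1 = np.array([1.0, 2.0, 0.0])   # a point in U1 = span(v^1, v^2) (the xy-plane)
p2 = np.array([0.0, 0.0, 1.0])   # a point in U2 = span(v^3)      (the z-axis)

for t in (0.0, 1.0, 2.0, 3.0):
    q1 = expm(A * t) @ p1
    q2 = expm(A * t) @ p2
    print(f"t={t}: |e^At p1| = {np.linalg.norm(q1):.4f} (z-component {q1[2]:.1e}),"
          f" e^At p2 = {q2}")

# |e^{At} p1| decays like e^{-2t} while q1 stays in the xy-plane;
# e^{At} p2 = (0, 0, e^{3t}) grows along the z-axis.
```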
Definition. We call a function I : Rn → R a conserved quantity or an integral of motion for the
system ẋ = Ax (where x ∈ Rn ) if I is a constant function of time, i.e. the composed function
I ◦ x : R → R is constant.
Definition. A set S is called an invariant set for the system ẋ = Ax if it is invariant under the
action of eAt .
Notice that if I is a conserved quantity, then its level sets are invariant sets.
Definition. Let $A$ be a matrix with eigenvalues $\lambda_j$ and corresponding eigenvectors $v^j$ (when real) or $v^j + iu^j$ (when complex). We define the stable subspace
\[ E^s = \mathrm{span}(v^j, u^j : \mathrm{Re}(\lambda_j) < 0), \]
the unstable subspace
\[ E^u = \mathrm{span}(v^j, u^j : \mathrm{Re}(\lambda_j) > 0), \]
and the center subspace
\[ E^c = \mathrm{span}(v^j, u^j : \mathrm{Re}(\lambda_j) = 0) \]
of $A$.
Theorem 2.8. We have the subspace decomposition
Rn = E s ⊕ E u ⊕ E c .
Proof. This follows directly from the definitions of E s , E u , and E c and the fact that the v j and
uj form a basis of Rn .
Theorem 2.9. The following are equivalent:

1. for all $x_0 \in \mathbb{R}^n$, $\lim_{t \to \infty} e^{At}x_0 = 0$ (respectively, $\lim_{t \to -\infty} e^{At}x_0 = 0$);

2. all eigenvalues of $A$ have negative (respectively, positive) real part; and

3. there are positive constants $a$, $m$, $M$, and $k$ such that
\[ m|t^k|e^{-at}|x_0| \leq |e^{At}x_0| \leq M e^{-at}|x_0| \]
for all $t \geq 0$ (respectively, with $e^{-at}$ replaced by $e^{at}$).
3 Nonlinear Systems
We turn to autonomous nonlinear differential equations. That is, we consider equations of the
form ẋ = f (x), where f : E → Rn is continuously differentiable on the open subset E ⊆ Rn . We
begin by studying the continuity of solutions to an initial value problem in the initial value. So
consider the initial value problem
\[ \begin{cases} \dot{x} = f(x) \\ x(0) = y. \end{cases} \tag{2} \]
Theorem 3.1 (Existence/uniqueness). Let E ⊆ Rn be open and let f : E → Rn be C 1 on E.
Given y ∈ E, there exists a > 0 such that the IVP (2) has a unique solution x : [−a, a] → Rn .
We do not prove this theorem here but note that the proof relies on Banach’s fixed-point
theorem, which we recall below.
Theorem 3.2 (Banach's fixed-point theorem). Let $(X, d)$ be a complete metric space and suppose $T : X \to X$ is a contraction mapping, i.e. there exists a constant $0 \leq k < 1$ such that $d(T(x), T(y)) \leq k\,d(x, y)$ for all $x, y \in X$. Then $T$ has a unique fixed point, i.e. there exists $x^* \in X$ such that $T(x) = x$ if and only if $x = x^*$. Moreover, for any $x_0 \in X$,
\[ x^* = \lim_{n \to \infty} x_n, \]
where $x_n = T(x_{n-1})$.
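A toy illustration of this iteration (a sketch, not from the notes; the contraction is an arbitrary choice): $T(x) = \cos x$ maps $[0, 1]$ into itself with $|T'| \leq \sin 1 < 1$ there, so the iterates converge to its unique fixed point.

```python
import math

# Banach fixed-point iteration for the contraction T(x) = cos(x) on [0, 1].
# Illustrative sketch only.
T = math.cos
x = 1.0
for n in range(100):
    x = T(x)
print(x, abs(math.cos(x) - x))   # x ~ 0.739085..., residual near machine precision
```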
Let
\[ u(t, y) = y + \int_0^t f(u(s, y))\, ds \]
be the solution to the IVP (2). We know that $u(t, y)$ is continuous in $t$ for each initial condition $y$. However, we wish to show next that it is also continuous in $y$. That is, a given point on the curve traced by the solution to the IVP should change very little as a result of a small change in the initial condition. We first prove the following lemma.
Lemma 3.3 (Gronwall's inequality). Assume that $g : \mathbb{R} \to \mathbb{R}$ is continuous and non-negative on an interval $[0, a]$ and suppose that for some $C, K > 0$,
\[ g(t) \leq C + \int_0^t K g(s)\, ds \]
for all $t \in [0, a]$. Then $g(t) \leq C e^{Kt}$ for $t \in [0, a]$.

Proof. Let $G(t) = C + \int_0^t K g(s)\, ds$. Then $g(t) \leq G(t)$ and $G(t) \geq C > 0$ for $t \in [0, a]$, and $G'(t) = K g(t)$. We thus get, for all $t \in [0, a]$,
\[ \frac{G'(t)}{G(t)} = \frac{K g(t)}{G(t)} \leq K \;\Leftrightarrow\; \frac{d}{dt}\ln(G(t)) \leq K \;\Rightarrow\; \ln(G(t)) \leq \ln(G(0)) + Kt \;\Rightarrow\; g(t) \leq G(t) \leq C e^{Kt}. \]
Recall that a function $f : E \to \mathbb{R}^n$, where $E \subseteq \mathbb{R}^m$ is open, is said to be differentiable at a point $x_0 \in E$ if there is a linear map $Df(x_0) : \mathbb{R}^m \to \mathbb{R}^n$ such that
\[ \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0) - Df(x_0)h}{|h|} = 0. \]
In this case, we call the matrix representative of $Df(x_0)$, which consists of the partial derivatives of $f$ at $x_0$, the Jacobian matrix of $f$ at $x_0$. We are now ready to state the conditions under which the solutions of a nonlinear IVP vary smoothly with the initial conditions.
Theorem 3.4 (Smooth dependence on initial data). Let $E \subseteq \mathbb{R}^n$ be open and let $f : E \to \mathbb{R}^n$ be $C^1$ on $E$. Given $x_0 \in E$, there exist $a, \delta > 0$ such that the solution $u(t, y)$ to the IVP (2) is $C^1$ in $(t, y)$ on the region $G = [-a, a] \times N_\delta(x_0) \subseteq \mathbb{R}^{n+1}$.
Proof. We know that $u(t, y) = y + \int_0^t f(u(s, y))\, ds$ is $C^2$ in $t$, since $f$ is $C^1$, and that $\dot{u}(t, y) = f(u(t, y))$, so $\ddot{u}(t, y) = Df(u(t, y)) f(u(t, y))$. Let $y \in N_{\delta/2}(x_0)$ and $h \in \mathbb{R}^n$ with $|h| < \delta/2$, so that $y + h \in N_\delta(x_0)$. We have
\[ u(t, y+h) - u(t, y) = h + \int_0^t \big(f(u(s, y+h)) - f(u(s, y))\big)\, ds, \]
and so letting $g(t) = |u(t, y+h) - u(t, y)|$, we get
\[ g(t) \leq |h| + \int_0^t K\, |u(s, y+h) - u(s, y)|\, ds, \]
where $K = \max_{x \in N_\delta(x_0)} \|Df(x)\|$. Thus, by Gronwall's inequality,
\[ g(t) \leq |h| e^{Kt} \leq |h| e^{Ka}, \qquad t \in [-a, a]. \tag{3} \]
Hence, we see that $y \mapsto u(t, y)$ is continuous for $y \in N_{\delta/2}(x_0)$.

Next, we need to show that
\[ \lim_{h \to 0} \frac{|u(t, y_0 + h) - u(t, y_0) - lh|}{|h|} = 0 \]
for some linear transformation $l : \mathbb{R}^n \to \mathbb{R}^n$. Let $\Phi(t, y_0)$ denote the unique fundamental matrix solution of the linear system
\begin{align*}
\dot{\Phi}(t, y_0) &= Df(u(t, y_0)) \Phi(t, y_0) \\
\Phi(0, y_0) &= I.
\end{align*}
We claim that $\Phi(t, y_0)$ is the required linear map $l$. Letting $u_1(s) = u(s, y_0 + h)$ and $u_0(s) = u(s, y_0)$, we see that
\begin{align*}
|u_1(t) - u_0(t) - \Phi(t, y_0)h| &= \left| h - \Phi(t, y_0)h + \int_0^t \big(f(u_1(s)) - f(u_0(s))\big)\, ds \right| \\
&= \left| \int_0^t \big(f(u_1(s)) - f(u_0(s)) - Df(u_0(s))\Phi(s, y_0)h\big)\, ds \right| \\
&\leq \int_0^t \big| f(u_1(s)) - f(u_0(s)) - Df(u_0(s))\Phi(s, y_0)h \big|\, ds \\
&= \int_0^t \big| R(s)(u_1(s) - u_0(s)) + Df(u_0(s))(u_1(s) - u_0(s) - \Phi(s, y_0)h) \big|\, ds \\
&\leq \int_0^t \big( \varepsilon_0 |h| e^{Ka} + |Df(u_0(s))(u_1(s) - u_0(s) - \Phi(s, y_0)h)| \big)\, ds \quad \text{(by (3))} \\
&\leq \varepsilon_0 |h| e^{Ka} a + \int_0^t \|Df(u_0(s))\|\, |u_1(s) - u_0(s) - \Phi(s, y_0)h|\, ds,
\end{align*}
where $R(s)$ is the remainder term in the Taylor expansion of $f$ about $u_0(s)$ evaluated at $u_1(s)$, and $\varepsilon_0$ bounds $\|R(s)\|$, so that $\varepsilon_0 \to 0$ as $h \to 0$. But by continuity, we have that $\|Df(u_0(s))\| = \|Df(u(s, y_0))\| \leq M$ for some $M$ and for all $(s, y_0) \in [-a, a] \times N_\delta(x_0)$. Thus, by Gronwall's inequality,
\[ |u_1(t) - u_0(t) - \Phi(t, y_0)h| \leq \varepsilon_0 |h| e^{Ka} a\, e^{Mt} \leq \varepsilon_0 |h| e^{(K+M)a} a. \]
Dividing by $|h|$ and letting $h \to 0$ (so that $\varepsilon_0 \to 0$), this shows that $y \mapsto u(t, y)$ is differentiable and has derivative
\[ \left.\frac{\partial}{\partial y} u(t, y)\right|_{y = y_0} = \Phi(t, y_0). \]
Moreover, since $\Phi(t, y_0)$ is continuous in $y_0$, we have that $u(t, y)$ is $C^1$ over $[-a, a] \times N_\delta(x_0)$.
Corollary 3.5 (to the proof). Let $u(t, y)$ be the solution to the IVP
\[ \begin{cases} \dot{x}(t) = f(x(t)) \\ x(0) = y. \end{cases} \]
Then $\Phi(t, y) = \frac{\partial u}{\partial y}(t, y)$ if and only if $\Phi(\cdot, y)$ is the solution of the IVP
\[ \begin{cases} \frac{d}{dt}\Phi(t, y) = Df(u(t, y))\Phi(t, y) \\ \Phi(0, y) = I_{n \times n}. \end{cases} \]
Definition. Let $u(t, y)$ be as above. Then the linear time-varying (LTV) system
\[ \dot{\xi}(t) = Df(u(t, y))\xi(t), \]
where $\xi(t) \in \mathbb{R}^n$, is called the variational equation for $u(t, y)$.
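A numerical sketch of Corollary 3.5 (the example vector field and all names are illustrative choices, not from the notes): integrate the ODE together with its variational equation and compare $\Phi(t, y)$ with a finite-difference derivative of the flow.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Example planar vector field (pendulum-like; arbitrary choice).
def f(x):
    return np.array([x[1], -np.sin(x[0])])

def Df(x):
    return np.array([[0.0, 1.0],
                     [-np.cos(x[0]), 0.0]])

def augmented(t, z):
    """Integrate x' = f(x) together with Phi' = Df(x) Phi (variational equation)."""
    x, Phi = z[:2], z[2:].reshape(2, 2)
    return np.concatenate([f(x), (Df(x) @ Phi).ravel()])

y = np.array([0.8, 0.0])
t1 = 3.0
z0 = np.concatenate([y, np.eye(2).ravel()])
sol = solve_ivp(augmented, (0.0, t1), z0, rtol=1e-10, atol=1e-12)
Phi = sol.y[2:, -1].reshape(2, 2)

# Finite-difference derivative of the flow with respect to the initial condition.
def flow(y0):
    return solve_ivp(lambda t, x: f(x), (0.0, t1), y0, rtol=1e-10, atol=1e-12).y[:, -1]

h = 1e-6
fd = np.column_stack([(flow(y + h * e) - flow(y - h * e)) / (2 * h) for e in np.eye(2)])
print(np.allclose(Phi, fd, atol=1e-4))   # True: Phi(t, y) = du/dy (t, y)
```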
To finish off, let us adapt some definitions from the introduction to the context of continuous-time nonlinear systems.

Definition. Let $u(t, y)$ denote the solution to the system
\[ \begin{cases} \dot{x} = f(x) \\ x(0) = y \end{cases} \]
evaluated at time $t$. If for some $x_0 \in \mathbb{R}^n$ there exists $T > 0$ such that $u(T, x_0) = x_0$, then $x_0$ is called a periodic point, the corresponding trajectory $u(t, x_0)$ is said to be periodic, and the minimal such $T$ is referred to as the period of $x_0$ or of $u(t, x_0)$. If $u(t, x_0) = x_0$ for all $t$, then $x_0$ is called a fixed point.
Definition. If f (x0 ) = 0, then x0 is called an equilibrium point of the vector field f .
Notice that fixed points and equilibrium points are the same; however, we usually think of a fixed point in terms of the solution curves of a given differential equation, while when we talk of equilibrium points we are thinking in terms of the vector field used to define the differential equation.
Definition. If $A$ and $B$ are two subsets of $\mathbb{R}^n$, define their distance
\[ d(A, B) = \inf_{a \in A,\, b \in B} |a - b|. \]
A periodic orbit $u(t, x_0)$ is said to be stable if for each $\varepsilon > 0$, there is a neighbourhood $U$ of the trajectory $\Gamma = \{u(t, x_0) \in \mathbb{R}^n : t \in \mathbb{R}\}$ of $u(t, x_0)$ (i.e. an open set $U$ containing $\Gamma$) such that for all $x \in U$ and all $t \geq 0$, $d(u(t, x), \Gamma) < \varepsilon$. If, moreover, there is a neighbourhood $V$ of $\Gamma$ such that for all $x \in V$, $\lim_{t \to \infty} d(u(t, x), \Gamma) = 0$, then $u(t, x_0)$ is said to be asymptotically stable. If $u(t, x_0)$ is not stable, then we say it is unstable.
Definition. Let $U \subseteq \mathbb{R}^n$ (for example, a trajectory $\{u(t, x) : t \in \mathbb{R}\}$). We define the stable and unstable sets of $U$ respectively by
\begin{align*}
W^s(U) &= \{x \in \mathbb{R}^n : \lim_{t \to \infty} d(\Phi_t(x), U) = 0\}, \\
W^u(U) &= \{x \in \mathbb{R}^n : \lim_{t \to -\infty} d(\Phi_t(x), U) = 0\}.
\end{align*}
3.1 Example of a Non-Autonomous System
Consider the system
\[ \frac{dy}{dt} = (a \cos t + b)y - y^3, \tag{4} \]
where $a, b > 0$. This nonlinear system is non-autonomous, so the results we have so far cannot be directly applied to it. However, we can transform the above system into
\begin{align}
\frac{dy}{dt} &= (a \cos \tau + b)y - y^3 \tag{5} \\
\frac{d\tau}{dt} &= 1 \tag{6}
\end{align}
or
\[ \dot{x}(t) = f(\tau, y) = (1, (a \cos \tau + b)y - y^3), \]
where $x = (\tau, y)$ and the planar vector field $f \in C^1(\mathbb{R}^2)$ is periodic in $\tau$ with period $2\pi$. We thus identify $\tau$ with $\tau + 2k\pi$ for all $k \in \mathbb{Z}$. This is akin to folding a vertical strip of the plane $\mathbb{R}^2$ of width $2\pi$ into a cylinder and considering the vector field $f$ over this cylinder, i.e. the set $\mathbb{T} \times \mathbb{R}$.
Now notice that the curve $y = 0$ (a circular cross-section of our cylinder) is invariant under the flow defined by the system, i.e. if we set the initial condition $y(0) = 0$, then $y(t) = 0$ for all $t$. Now consider the curve $y = a + b + 1$ (another circular cross-section). Along this curve,
\[ \dot{y}(t) = (a \cos t + b)(a + b + 1) - (a + b + 1)^3 < 0, \]
since $a \cos t + b \leq a + b < (a + b + 1)^2$. Thus, the vector field points downward along this curve, i.e. this curve repels the flow downward. But the lower circle $y = 0$ is fixed, hence by continuity, the compact cylindrical region
\[ C = \{(\tau, y) \in \mathbb{T} \times \mathbb{R} : 0 \leq y \leq a + b + 1\} \]
is positively invariant under the flow, i.e. it is invariant for $t \geq 0$.
Next, we will show that there is a unique attracting periodic cycle within $C$. To do this, we study iterations of the map
\begin{align*}
P : [0, a+b+1] &\to [0, a+b+1] \\
y_0 &\mapsto y(2\pi, 0, y_0),
\end{align*}
where we denote by $y(t, \tau_0, y_0)$ the solution to the system (5)--(6) with initial condition $(\tau(0), y(0)) = (\tau_0, y_0)$. Clearly, $P(0) = 0$, so $0$ is a fixed point of $P$. We claim the following:

1. The fixed point $0$ is repelling.

2. There exists a unique attracting fixed point $\xi \in [0, a+b+1]$ for $P$.

3. $W^s(\xi) = (0, a+b+1]$.

To prove these claims, let $\Phi(t, y_0) = \frac{\partial y}{\partial y_0}(t, 0, y_0)$. Then $\Phi$ is the solution to the IVP
\begin{align*}
\dot{\Phi}(t, y_0) &= \big((a \cos t + b) - 3(y(t, 0, y_0))^2\big)\Phi(t, y_0) \\
\Phi(0, y_0) &= 1.
\end{align*}
Then since $y(t, 0, 0) = 0$ and
\[ P'(0) = \frac{\partial y}{\partial y_0}(2\pi, 0, 0) = \Phi(2\pi, 0), \]
we get
\begin{align*}
\frac{\dot{\Phi}(t, 0)}{\Phi(t, 0)} = a \cos t + b
&\Rightarrow \ln \Phi(t, 0) - \ln \Phi(0, 0) = \int_0^t (a \cos s + b)\, ds \\
&\Rightarrow \ln \Phi(t, 0) = \int_0^t (a \cos s + b)\, ds \\
&\Rightarrow P'(0) = \exp \int_0^{2\pi} (a \cos t + b)\, dt = e^{2\pi b} > 1.
\end{align*}
So 0 is a repelling fixed point. It follows directly from the intermediate value theorem that
[0, a + b + 1] contains some other fixed point of P . By similar reasoning, if we can show that
every fixed point in the interior of [0, a + b + 1] must be stable, then it will follow that the stable
fixed point we have found is unique. Otherwise, in a manner similar to the way in which the
unstable fixed point 0 forced the existence of a stable fixed point, another stable fixed point
would lead to another unstable fixed point.
To prove this claim, begin by noticing that $P(\xi) = y(2\pi, 0, \xi)$ is $C^1$, so that $P'(\xi) = y_\xi(2\pi, 0, \xi)$ is continuous. Moreover, $y_\xi(t, 0, \xi)$ is the unique solution to the variational equation
\[ \begin{cases} \frac{d}{dt} y_\xi(t, 0, \xi) = [(a \cos t + b) - 3y^2(t, 0, \xi)]\, y_\xi(t, 0, \xi) \\ y_\xi(0, 0, \xi) = 1. \end{cases} \]
Dividing both sides of the above differential equation by $y_\xi(t, 0, \xi)$ and evaluating the result at a point $\xi_0$ gives us
\[ \frac{d}{dt} \ln y_\xi(t, 0, \xi_0) = (a \cos t + b) - 3y^2(t, 0, \xi_0). \]
Integrating over $[0, 2\pi]$, we get
\[ \ln y_\xi(2\pi, 0, \xi_0) = \int_0^{2\pi} \big((a \cos t + b) - 3y^2(t, 0, \xi_0)\big)\, dt = 2\pi b - \int_0^{2\pi} 3y^2(t, 0, \xi_0)\, dt, \]
and thus
\[ P'(\xi_0) = y_\xi(2\pi, 0, \xi_0) = e^{2\pi b}\, e^{-\int_0^{2\pi} 3y^2(t, 0, \xi_0)\, dt}. \]
To simplify this further, integrate
\[ \frac{d}{dt} \ln y(t, 0, \xi_0) = (a \cos t + b) - y^2(t, 0, \xi_0) \]
to get
\[ \ln y(2\pi, 0, \xi_0) - \ln y(0, 0, \xi_0) = 2\pi b - \int_0^{2\pi} y^2(t, 0, \xi_0)\, dt. \]
So when $\xi_0$ is a fixed point of $P$, the left hand side of this expression is $0$ by periodicity, so
\[ \int_0^{2\pi} y^2(t, 0, \xi_0)\, dt = 2\pi b, \]
and so
\[ P'(\xi_0) = e^{2\pi b} e^{-6\pi b} = e^{-4\pi b} < 1, \]
since $b > 0$.
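These conclusions are easy to check numerically. The sketch below (not from the notes; the values of $a$ and $b$ are arbitrary positive choices) computes the period map $P$ by integrating (4) over $[0, 2\pi]$, estimates $P'(0)$ and $P'(\xi)$ by finite differences, and locates the attracting fixed point by iteration:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b = 1.0, 0.5          # arbitrary positive parameters

def rhs(t, y):
    return (a * np.cos(t) + b) * y - y**3

def P(y0):
    """Period map y0 -> y(2*pi) for the non-autonomous equation (4)."""
    sol = solve_ivp(rhs, (0.0, 2 * np.pi), [y0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

h = 1e-6
# P'(0) should be close to e^{2*pi*b} > 1 (0 is repelling).
print((P(h) - P(0.0)) / h, np.exp(2 * np.pi * b))

# Iterating P from inside the invariant region locates the attracting fixed point xi.
y = a + b + 1
for _ in range(50):
    y = P(y)
print("fixed point ~", y,
      " P'(xi) ~", (P(y + h) - P(y - h)) / (2 * h),
      " e^{-4 pi b} =", np.exp(-4 * np.pi * b))
```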
3.2 Extensibility of Solutions
In this section, we wish to identify the maximal interval on which an initial value problem has
a well-defined solution.
Lemma 3.6. Suppose $x_1$, defined on the open interval $I_1$, and $x_2$, defined on the open interval $I_2$, are solutions to the initial value problem
\[ \begin{cases} \dot{x} = f(x) \\ x(0) = y, \end{cases} \qquad f : E \to \mathbb{R}^n, \; E \subseteq \mathbb{R}^n \text{ open}, \; f \in C^1(E). \]
If there is a point $t_0 \in I_1 \cap I_2$ such that $x_1(t_0) = x_2(t_0)$, then $x_1(t) = x_2(t)$ for all $t \in I_1 \cap I_2$.

Proof. By existence and uniqueness, if $x_1(t_0) = x_2(t_0)$, then there exists a closed interval $[t_0 - a, t_0 + a] \subseteq I_1 \cap I_2$ on which $x_1(t) = x_2(t)$. In particular, $x_1$ and $x_2$ agree on the open interval $(t_0 - a, t_0 + a)$ obtained by removing the endpoints $t_0 \pm a$. Let $I^*$ be the maximal such open interval and suppose, by way of contradiction, that $I^* \neq I_1 \cap I_2$. Writing $I^* = (\alpha, \beta)$, at least one of $\alpha$, $\beta$ lies in $I_1 \cap I_2$, and by continuity $x_1$ and $x_2$ agree there. But then taking this common value as the initial condition for the differential equation $\dot{x} = f(x)$, it follows from existence and uniqueness of solutions that $x_1$ and $x_2$ agree on an open interval about that endpoint. This contradicts the maximality of $I^*$. Hence, we must have $I^* = I_1 \cap I_2$.
3.3 The Flow Operator
Definition. Let $f \in C^1(E)$, where $E \subseteq \mathbb{R}^n$ is open. The flow of the vector field $f$ is the unique solution $\Phi(t, y)$ to the initial value problem
\[ \begin{cases} \dot{x} = f(x) \\ x(0) = y \in E \end{cases} \]
defined on the maximal interval of existence $J(y)$ for this problem. We often write $\Phi_t : E \to E$, where $\Phi_t(y) = \Phi(t, y)$. This mapping is sometimes called the flow operator.
Note that the flow operator is in general nonlinear.
Proposition 3.7. The flow operator satisfies the group composition property
Φ(t, Φ(s, y)) = Φ(t + s, y).
Proof. This follows directly from existence and uniqueness and from the previous lemma.
A familiar example is found in the case of linear systems
\[ \begin{cases} \dot{x} = Ax \\ x(0) = y, \end{cases} \]
where the flow operator is simply the matrix exponential: $\Phi(t, y) = e^{At}y$.
Definition. Suppose U, V ⊆ Rn are open and F : U → V is continuous on U . Then F is called
a homeomorphism if there exists a continuous function G : V → U such that F ◦ G = idV and
G ◦ F = idU . If both F and G are C 1 on their respective domains, then F (and by symmetry,
G) is called a diffeomorphism.
For instance, $x \mapsto x^{1/3}$ is a homeomorphism on $\mathbb{R}$ but not a diffeomorphism, because its derivative is undefined at $0$. Of greater significance is the fact that for $f \in C^1(E)$, the flow operator $\Phi_t : E \to E$ of $\dot{x} = f(x)$ is a diffeomorphism. We have already shown that $\Phi$ is continuously differentiable. Since the inverse of $\Phi_t$ is simply the flow operator $\Phi_{-t}$, it follows that it too is $C^1$.
The following familiar theorem serves as an excellent tool for proving that a given mapping
is a diffeomorphism.
Theorem 3.8 (Inverse function theorem). Suppose that $F : \mathbb{R}^n \to \mathbb{R}^n$ is $C^1$ and let $p \in \mathbb{R}^n$ be such that $F(p) = q \in \mathbb{R}^n$ and such that $DF(p)$ is non-singular. Then there exist neighborhoods $U$ and $V$ of $p$ and $q$, respectively, and a $C^1$ function $G : V \to U$ such that $G \circ F = \mathrm{id}_U$ and $F \circ G = \mathrm{id}_V$.
We do not prove the inverse function theorem. A proof can be obtained via Banach’s fixed
point theorem.
Corollary 3.9. If F (p) = q and DF (p) is non-singular, then with G as in the above, DG(q) =
[DF (p)]−1 .
Proof. Since G ◦ F (x1 , . . . , xn ) = (x1 , . . . , xn ),
I = D(G ◦ F )(p)
= DG(F (p)) · DF (p)
= DG(q) · DF (p),
from which our claim follows.
Let $f \in C^1(E)$ and let $u : W' \to W$ be a diffeomorphism for some open sets $W'$ and $W$. We can "pull back" the differential equation $\dot{x} = f(x)$ to the domain $W'$ via the function $u$ by a simple change of coordinates. Supposing $x = u(y)$, we get
\[ f(x) = \frac{dx}{dt} = \frac{\partial u}{\partial y}(y)\, \frac{dy}{dt}. \]
The resulting expression for $\dot{y}$ is given in terms of a specially named vector field.

Definition. Let $f$ and $u$ be as above. We define the pullback of $f$ to be the vector field defined by
\[ f^*(y) = \left[\frac{\partial u}{\partial y}(y)\right]^{-1} f(u(y)). \]
Let us recall another important theorem.
Theorem 3.10 (Implicit function theorem). Let $f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^m$ be $C^1$ on an open set $\tilde{U} \times \tilde{V}$ and suppose $f(x_0, y_0) = 0$ for some $x_0 \in \mathbb{R}^n$ and $y_0 \in \mathbb{R}^m$. Moreover, assume that $\frac{\partial f}{\partial y}(x_0, y_0)$ (i.e. the matrix of partial derivatives of $f$ with respect to the $y$ variables only) is non-singular. Then there exist neighbourhoods $U$ of $x_0$ and $V$ of $y_0$ and a $C^1$ function $g : U \to V$ such that $f(x, y) = 0$ if and only if $y = g(x)$, for all $(x, y) \in U \times V$.
Proof. Let us define $F : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n \times \mathbb{R}^m$ by $F(x, y) = (x, f(x, y)) = (x, y')$, where $y' = f(x, y)$. Then $F(x_0, y_0) = (x_0, f(x_0, y_0)) = (x_0, 0)$. Now notice that since
\[ DF = \begin{pmatrix} I_{n \times n} & 0 \\ \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \end{pmatrix}, \]
then by the assumption that $\frac{\partial f}{\partial y}$ is non-singular,
\[ \det(DF) = \det(I_{n \times n}) \det\!\left(\frac{\partial f}{\partial y}\right) = \det\!\left(\frac{\partial f}{\partial y}\right) \neq 0. \]
So by the inverse function theorem, there are open sets $U \times V$ about $(x_0, y_0)$ and $U \times V'$ about $(x_0, 0)$ and a smooth function $G : U \times V' \to U \times V$ such that $F \circ G = \mathrm{id}_{U \times V'}$ and $G \circ F = \mathrm{id}_{U \times V}$. Moreover, writing $G(x, y') = (G_1(x, y'), G_2(x, y'))$, where $G_1(x, y') = x$ and $G_2(x, y') = y$, we see that $y' = f(x, y) = 0$ if and only if $y = G_2(x, 0) = g(x)$, where $g : U \to V$ is smooth and $g(x_0) = y_0$.
3.4 The Poincaré Map
Let us consider a vector field $f : E \to \mathbb{R}^n$ which is $C^1$ on the open subset $E \subseteq \mathbb{R}^n$. Also, let $S = \{x \in \mathbb{R}^n : l(x) = h\}$ be a hyperplane in $\mathbb{R}^n$, where $h \in \mathbb{R}$ and $l : \mathbb{R}^n \to \mathbb{R}$ is a linear map. We will call an open subset $\Sigma \subseteq S$ of such a hyperplane a section of $\mathbb{R}^n$ (open in the relative topology of $S$: if $S$ is a subspace of a topological space $(X, \tau)$, the relative topology on $S$ is the topology $\{S \cap U : U \in \tau\}$ consisting of intersections of $S$ with the open sets of $X$), and say that $f$ is transverse to $\Sigma$ if $f(x)$ is never tangent to $\Sigma$ for $x \in \Sigma$. Let us rephrase this a little bit.

We denote a vector at a point $x \in \mathbb{R}^n$ by a pair $(x, v) \in \mathbb{R}^n \times \mathbb{R}^n$ and define the tangent plane of $\Sigma$ at $x$ to be the set
\[ T_x\Sigma = \{(x, v) : l(v) = 0\} \]
of vectors at $x$ that are tangent to $\Sigma$. Thus, for $f$ to be transverse to $\Sigma$ means that $(x, f(x)) \notin T_x\Sigma$ for any $x \in \Sigma$. Equivalently, for any $x \in \Sigma$, we must have $l(f(x)) \neq 0$. Since $f$ is $C^1$, we can assume without loss of generality that $l(f(x)) > 0$ for $x \in \Sigma$: clearly, $l(f(x))$ cannot change sign on a connected $\Sigma$, for otherwise it would vanish somewhere on $\Sigma$.
Now suppose that for some $x \in \Sigma$ and $T > 0$ we have $\Phi_T(x) \in \Sigma$, where $\Phi_t : E \to E$ is the flow of the vector field $f$. Then, given $\varepsilon > 0$, we can find some open neighbourhood $N \subseteq \Sigma$ of $x$ (in the relative topology of $\Sigma$) such that for all $x' \in N$ we have $|\Phi_T(x) - \Phi_T(x')| < \varepsilon$ and $l(f(x')) > 0$. Shrinking $\Sigma$ down to the size of this neighbourhood, we can see that there is a return time mapping $T : \Sigma \to \mathbb{R}$ such that $\Phi_{T(x')}(x') \in \Sigma$ for all $x' \in \Sigma$.

Definition. In the above scenario, we call the mapping $P : \Sigma \to \Sigma$ defined by $P(x) = \Phi_{T(x)}(x)$ the Poincaré mapping on the section $\Sigma$.
Theorem 3.11. The return map $T : \Sigma \to \mathbb{R}$ is continuously differentiable.

Proof. Let $F : \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}$ be the $C^1$ function defined by $F(t, \xi) = l(\Phi_t(\xi)) - h$, so that $F(T, x_0) = 0$. Moreover,
\[ \frac{\partial F}{\partial t}(T, x_0) = l\!\left(\left.\frac{d}{dt}\Phi_t(x_0)\right|_{t = T}\right) = l(f(\Phi_T(x_0))) \neq 0, \]
by transversality. Thus, we can apply the implicit function theorem to get an open neighbourhood $\Sigma' \subseteq \Sigma$ of $x_0$, an open neighbourhood $V \subseteq \mathbb{R}$ of $T$, and a $C^1$ function $T : \Sigma' \to V$ such that for all $(t, \xi) \in V \times \Sigma'$ we have $F(t, \xi) = 0$ if and only if $t = T(\xi)$. In addition, $T(x_0) = T$. In other words, the first return time map $T$ is $C^1$.
We thus have the following important property of the Poincaré map.

Corollary 3.12. The Poincaré map $P$ as above is continuously differentiable.

Proof. With the return time map $T$ as above, $P(x) = \Phi_{T(x)}(x)$ is a composition of $C^1$ functions and so is itself $C^1$.
Theorem 3.13. If $v \in T_{x_0}\Sigma$, then the directional derivative of the Poincaré map at $x_0$ in the direction $v$ is given by
\[ DP(x_0)v = f(P(x_0))\langle \nabla T(x_0), v \rangle + u_\xi(T(x_0), x_0)v, \]
where $u(t, \xi)$ is the solution to the IVP
\[ \begin{cases} \dot{x}(t) = f(x(t)) \\ x(0) = \xi. \end{cases} \]

Proof. This follows by a straightforward application of the chain rule:
\begin{align*}
DP(x_0)v &= \left.\frac{d}{d\varepsilon} P(x_0 + \varepsilon v)\right|_{\varepsilon = 0} \\
&= u_t(T(x_0), x_0)\langle \nabla T(x_0), v \rangle + u_\xi(T(x_0), x_0)v \\
&= f(P(x_0))\langle \nabla T(x_0), v \rangle + u_\xi(T(x_0), x_0)v.
\end{align*}
3.5 Stability of Periodic Solutions
Theorem 3.14. Let P be the Poincaré map corresponding to a periodic trajectory u(t, x0 ) in
the plane R2 . Then u(t, x0 ) is asymptotically stable if and only if P 0 (x) < 1 for some x along
this trajectory.
Recall that if $u(t, \xi)$ is the solution to the IVP
\[ \begin{cases} \dot{x}(t) = f(x(t)) \\ x(0) = \xi, \end{cases} \]
then $u_\xi(t, x_0)$ is the solution to the variational equation
\[ \begin{cases} \frac{d}{dt} u_\xi(t, x_0) = Df(u(t, x_0))\, u_\xi(t, x_0) \\ u_\xi(0, x_0) = I_{n \times n}. \end{cases} \]
Let us denote $\Phi(t, x_0) = u_\xi(t, x_0)$.

Lemma 3.15. $f(u(t, x_0)) = \Phi(t, x_0) f(x_0)$.

Proof. By evaluating the derivative of $f$ with respect to $t$ along the solution $u(t, x_0)$, we get
\[ \frac{d}{dt} f(u(t, x_0)) = Df(u(t, x_0)) \frac{d}{dt} u(t, x_0) = Df(u(t, x_0)) f(u(t, x_0)). \]
It is also clear that $f(u(0, x_0)) = f(x_0)$. Thus, $t \mapsto f(u(t, x_0))$ solves the variational equation $\dot{\xi} = Df(u(t, x_0))\xi$ with initial value $f(x_0)$; since $\Phi(t, x_0)$ is the fundamental matrix solution of this equation, it follows that $f(u(t, x_0)) = \Phi(t, x_0) f(x_0)$.
Corollary 3.16. If $T$ is the period of $x_0$, then $f(x_0) = \Phi(T, x_0) f(x_0)$.
Theorem 3.17. If $\Gamma$ is the periodic orbit of $x_0$ and $\Sigma$ is a section such that $\{x_0\} = \Gamma \cap \Sigma$, then $\Phi(T, x_0)$ has eigenvalues $1, \lambda_1, \ldots, \lambda_{n-1}$, where $\lambda_1, \ldots, \lambda_{n-1}$ are the eigenvalues of $DP(x_0)$.

Proof. That $1$ is an eigenvalue is evidenced directly by the above corollary. To show that the remaining $n-1$ eigenvalues of $\Phi(T, x_0)$ are identical to those of $DP(x_0)$, we choose
\[ \alpha = \{f(x_0), s_1, \ldots, s_{n-1}\} \]
as a basis of $\mathbb{R}^n$, where $s_1, \ldots, s_{n-1}$ is a basis of the tangent space $T_{x_0}\Sigma$ of $\Sigma$ at $x_0$. The vectors in $\alpha$ are indeed linearly independent by transversality of $f(x_0)$ to the tangent space. In this basis, the matrix representative of $\Phi(T, x_0)$ is given in block form by
\[ [\Phi(T, x_0)]_\alpha = \begin{pmatrix} 1 & a \\ 0 & b \end{pmatrix}, \]
where $a = (a_1, \ldots, a_{n-1})$ is a row vector and $b = (b_{ij})_{i,j=1}^{n-1}$; the first column is $(1, 0, \ldots, 0)^T$ because $\Phi(T, x_0)f(x_0) = f(x_0)$. Also in the basis $\alpha$, a tangent vector $v \in T_{x_0}\Sigma$ has the form
\[ [v]_\alpha = \begin{pmatrix} 0 \\ v_\Sigma \end{pmatrix}, \qquad \text{where } v_\Sigma = \begin{pmatrix} v_1 \\ \vdots \\ v_{n-1} \end{pmatrix}. \]
Thus,
\[ [\Phi(T, x_0)]_\alpha [v]_\alpha = \begin{pmatrix} a v_\Sigma \\ b v_\Sigma \end{pmatrix}. \]
Also, since $DP(x_0)v \in T_{x_0}\Sigma$, it follows from Theorem 3.13 that
\[ [DP(x_0)v]_\alpha = \begin{pmatrix} 0 \\ b v_\Sigma \end{pmatrix}. \]
So the eigenvalues of $b$ and of $DP(x_0)$ are the same. But these are precisely the remaining eigenvalues of $\Phi(T, x_0)$.
Definition. The matrix Φ(T, x0 ) as above is called the monodromy matrix of the periodic
solution u(t, x0 ).
Next, we present a useful formula for computing the derivative of the Poincaré map at its corresponding periodic point. We will use Liouville's formula for linear homogeneous systems of the form
\[ \begin{cases} \frac{d\Phi}{dt} = A(t)\Phi(t) \\ \Phi(0) = I_{n \times n}. \end{cases} \]
Liouville's formula tells us that
\[ \det \Phi(t) = \exp \int_0^t \mathrm{tr}(A(s))\, ds. \]
Theorem 3.18. Suppose $f$ is a planar vector field with periodic point $x_0$ of period $T$. Let $\Sigma$ be a local section such that $\{x_0\} = \Gamma \cap \Sigma$, where $\Gamma$ is the periodic orbit of $x_0$. For a corresponding Poincaré map $P : \Sigma \to \Sigma$, we have
\[ P'(x_0) = \exp \int_0^T \mathrm{div}\, f(u(t, x_0))\, dt. \]

Proof. Let $\Phi(t, x_0) = u_\xi(t, x_0)$, where $u(t, x_0)$ is the solution of
\[ \begin{cases} \dot{x}(t) = f(x(t)) \\ x(0) = x_0. \end{cases} \]
Since $P$ is simply a function of one real variable, $P'(x_0)$ is its own (and only) eigenvalue. Thus, the monodromy matrix $M = \Phi(T, x_0)$ has eigenvalues $1$ and $P'(x_0)$. Therefore,
\[ P'(x_0) = \det M = \exp \int_0^T \mathrm{tr}(Df(u(t, x_0)))\, dt. \]
But writing $f = (f_1, f_2)$, so that
\[ Df = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} \end{pmatrix}, \]
we see that
\[ \mathrm{tr}(Df(u(t, x_0))) = \frac{\partial f_1}{\partial x_1}(u(t, x_0)) + \frac{\partial f_2}{\partial x_2}(u(t, x_0)) = \mathrm{div}(f(u(t, x_0))). \]
Example 3. Consider the system
\[ \begin{cases} \dot{x} = f_1(x, y) = x - y - x(x^2 + y^2) \\ \dot{y} = f_2(x, y) = x + y - y(x^2 + y^2). \end{cases} \]
We can pull the vector field $f = (f_1, f_2)$ back to a domain that uses polar coordinates, defined by $(x, y) = u(r, \theta) = (r\cos\theta, r\sin\theta)$, obtaining the vector field $g(r, \theta)$ via the formula
\begin{align*}
g(r, \theta) &= [Du(r, \theta)]^{-1} f(u(r, \theta)) \\
&= \begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix}^{-1} \begin{pmatrix} r\cos\theta - r\sin\theta - r^3\cos\theta \\ r\cos\theta + r\sin\theta - r^3\sin\theta \end{pmatrix} \\
&= \frac{1}{r} \begin{pmatrix} r\cos\theta & r\sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} r\cos\theta - r\sin\theta - r^3\cos\theta \\ r\cos\theta + r\sin\theta - r^3\sin\theta \end{pmatrix} \\
&= \begin{pmatrix} r - r^3 \\ 1 \end{pmatrix}.
\end{align*}
Hence, our system takes on the much simpler form
\[ \begin{cases} \dot{r} = r - r^3 \\ \dot{\theta} = 1. \end{cases} \]
It is no longer even coupled. Moreover, setting the initial condition $r(0) = 1$, we get the solution curve $u(t, 1, \theta(0)) = (1, \theta(0) + t)$, which is clearly periodic, with period $2\pi$ and orbit $\Gamma = S^1$. We can let $\theta(0) = 0$ for convenience. Notice then that $u(t, 1, 0) = (1, t)$, so the angle $\theta$ will always be equal to the time $t$ at any point on this particular trajectory. Hence, we can safely ignore $\theta$.
To study the periodic solution $u(t, 1) = u(t, 1, 0)$, we let $\Sigma = \{(r, 2\pi) : 0 < r < \infty\}$ and let $\tau(r)$ be the return time function to $\Sigma$. Thus, $\tau(1) = 2\pi$. The corresponding Poincaré map is then $P(r) = u_1(\tau(r), r, 0)$ (the radial component of the solution at the return time) and has derivative (evaluated at $r = 1$)
\[ P'(1) = \left.\frac{\partial}{\partial r} u_1(2\pi, r, 0)\right|_{r = 1}. \]
In order to compute this, we shall study the variational equation
\[ \begin{cases} \frac{d}{dt} u_\xi(t, 1) = Dg(u(t, 1))\, u_\xi(t, 1) \\ u_\xi(0, 1) = I, \end{cases} \]
where $\xi = (r_0, \theta_0)$. Writing
\[ u_\xi = \begin{pmatrix} \frac{\partial r}{\partial r_0} & \frac{\partial r}{\partial \theta_0} \\ \frac{\partial \theta}{\partial r_0} & \frac{\partial \theta}{\partial \theta_0} \end{pmatrix}, \]
where $u = (r, \theta)$, and since
\[ Dg(u(t, r_0)) = \begin{pmatrix} 1 - 3r^2(t, r_0) & 0 \\ 0 & 0 \end{pmatrix}, \]
the variational equations reduce to the initial value problem
\[ \begin{cases} \frac{d}{dt} \frac{\partial r}{\partial r_0}(t, 1) = -2 \frac{\partial r}{\partial r_0}(t, 1) \\ \frac{\partial r}{\partial r_0}(0, 1) = 1, \end{cases} \]
which has solution $\frac{\partial r}{\partial r_0}(t, 1) = e^{-2t}$. Thus, $P'(1) = \frac{\partial r}{\partial r_0}(2\pi, 1) = e^{-4\pi} < 1$, so the periodic solution $u(t, 1)$ is asymptotically stable.
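A numerical cross-check of this computation in the original $(x, y)$ coordinates (an illustrative sketch, not part of the notes): since $\dot{\theta} = 1$, the return time from the positive $x$-axis is exactly $2\pi$, so one can integrate for time $2\pi$ and differentiate the returned radius with respect to the initial radius.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, u):
    x, y = u
    r2 = x * x + y * y
    return [x - y - x * r2, x + y - y * r2]

def P(r):
    """Radial return value after time 2*pi, starting from (r, 0)."""
    sol = solve_ivp(f, (0.0, 2 * np.pi), [r, 0.0], rtol=1e-12, atol=1e-14)
    return np.hypot(*sol.y[:, -1])

h = 1e-3
print((P(1 + h) - P(1 - h)) / (2 * h), np.exp(-4 * np.pi))   # both ~3.5e-6
```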
3.6 Lotka-Volterra System
The Lotka-Volterra system is described by the differential equations
\[ \begin{cases} \dot{x} = f_1(x, y) = x(A - By) \\ \dot{y} = f_2(x, y) = y(Cx - D), \end{cases} \]
where $A, B, C, D > 0$. This system is important in modeling two populations, one of prey ($x$) and one of predator ($y$). It is easily seen that $(x^*, y^*) = (D/C, A/B)$ is an equilibrium point for this system. Let us consider the transversal section $\Sigma = \{(x, y^*) : x^* < x < \infty\}$ with return time map $\tau : \Sigma \to \mathbb{R}$. We shall denote a generic point on $\Sigma$ by $q = (x, y^*)$. The Poincaré map on $\Sigma$ is defined by
\[ (P_1(q), P_2(q)) = P(q) = \Phi_{\tau(q)}(q). \]
Note that $P$ really only depends on $x$, so we will abuse notation by writing $(P_1(x), P_2(x)) = P(x) = P(q)$.

We let $\Phi(t, x, y) = D\Phi_t(x, y)$ be the derivative with respect to the initial condition $(x, y)$ of the solution curve $\Phi_t(x, y)$. Then we know that $\Phi(t, x, y)$ satisfies the variational equation
\[ \begin{cases} \frac{d}{dt}\Phi(t, x, y) = Df(\Phi_t(x, y))\Phi(t, x, y) \\ \Phi(0, x, y) = I_{2 \times 2}, \end{cases} \]
where $f = (f_1, f_2)$.
Proposition 3.19.
\[ P_1'(x) = \frac{f_2(q)}{f_2(P(q))} \exp \int_0^{\tau(q)} \mathrm{div}\, f(\Phi_t(q))\, dt. \]
Proof. Begin by writing
\[ P_1'(x)\, f_2(P(q)) = \det \begin{pmatrix} P_1'(x) & f_1(P(q)) \\ 0 & f_2(P(q)) \end{pmatrix}. \]
Recall also that
\[ DP(q) \cdot v = f(\Phi_{\tau(q)}(q)) \langle \nabla \tau(q), v \rangle + D\Phi_{\tau(q)}(q) \cdot v \]
for $v \in \mathbb{R}^2$. Letting $v = (1, 0)$, we get
\[ \begin{pmatrix} P_1'(x) \\ 0 \end{pmatrix} = f(P(q)) \frac{\partial \tau}{\partial x}(q) + \Phi(\tau(q), q) \begin{pmatrix} 1 \\ 0 \end{pmatrix}. \]
Thus,
\begin{align*}
P_1'(x)\, f_2(P(q)) &= \det \begin{pmatrix} f(P(q))\frac{\partial \tau}{\partial x}(q) & f(P(q)) \end{pmatrix} + \det \begin{pmatrix} \Phi(\tau(q), q)\begin{pmatrix} 1 \\ 0 \end{pmatrix} & f(P(q)) \end{pmatrix} \\
&= \det \begin{pmatrix} \Phi(\tau(q), q)\begin{pmatrix} 1 \\ 0 \end{pmatrix} & f(P(q)) \end{pmatrix} \\
&= \det \begin{pmatrix} \Phi(\tau(q), q)\begin{pmatrix} 1 \\ 0 \end{pmatrix} & \Phi(\tau(q), q) f(q) \end{pmatrix} \\
&= \det \Phi(\tau(q), q)\, \det \begin{pmatrix} 1 & f_1(q) \\ 0 & f_2(q) \end{pmatrix} \\
&= f_2(q) \exp \int_0^{\tau(q)} \mathrm{div}\, f(\Phi_t(q))\, dt,
\end{align*}
where the equalities follow respectively from the fact that the determinant of a matrix is multilinear in its columns, linear dependence of the columns of $\begin{pmatrix} f(P(q))\frac{\partial \tau}{\partial x}(q) & f(P(q)) \end{pmatrix}$, Lemma 3.15, the multiplicativity of the determinant, and Liouville's formula.
Since for the Lotka-Volterra system
\[ \mathrm{div}\, f = (A - By) + (Cx - D) = \frac{\dot{x}}{x} + \frac{\dot{y}}{y}, \]
it follows from this proposition that
\[ \frac{f_2(q)}{x} = P_1'(x)\, \frac{f_2(P_1(x), y^*)}{P_1(x)}. \]
Integrating both sides of this equation from $x^*$ to $x$ gives us
\begin{align*}
F(x) - F(x^*) &= \int_{x^*}^{x} \frac{f_2(s, y^*)}{s}\, ds \\
&= \int_{x^*}^{x} \frac{f_2(P_1(s), y^*)}{P_1(s)}\, P_1'(s)\, ds \\
&= \int_{P_1(x^*)}^{P_1(x)} \frac{f_2(u, y^*)}{u}\, du \\
&= F(P_1(x)) - F(P_1(x^*)),
\end{align*}
where $F$ is an antiderivative of $x \mapsto \frac{f_2(x, y^*)}{x}$. Since $P_1(x^*) = x^*$ (the Poincaré map extends continuously to the equilibrium, which it fixes), this implies that $F(x) = F(P_1(x))$. But $f_2(x, y^*) = y^*(Cx - D) > 0$ for $x > x^*$, so $F$ is strictly increasing on $\Sigma$, and it must be the case that $x = P_1(x)$ for $x > x^*$, i.e. the points $x > x^*$ are all periodic.
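A numerical illustration of this conclusion (a sketch; the parameter values and helper names are arbitrary choices, not from the notes): starting on the section $\Sigma$ and integrating until the solution first crosses the line $y = y^*$ again in the same direction, the return point agrees with the starting point, consistent with every point of $\Sigma$ being periodic.

```python
import numpy as np
from scipy.integrate import solve_ivp

A, B, C, D = 1.0, 1.0, 1.0, 2.0          # arbitrary positive parameters
xstar, ystar = D / C, A / B               # equilibrium (x*, y*) = (2, 1)

def f(t, u):
    x, y = u
    return [x * (A - B * y), y * (C * x - D)]

def first_return(x0, tmax=50.0):
    """x-coordinate at the first return to the section y = y*, x > x*."""
    # Step off the section first so the t = 0 crossing is not reported as an event.
    warmup = solve_ivp(f, (0.0, 0.01), [x0, ystar], rtol=1e-10, atol=1e-12)
    crossing = lambda t, u: u[1] - ystar
    crossing.terminal = True
    crossing.direction = 1                # same crossing direction as at t = 0
    sol = solve_ivp(f, (0.01, tmax), warmup.y[:, -1],
                    events=crossing, rtol=1e-10, atol=1e-12)
    return sol.y_events[0][0][0]

for x0 in (2.5, 3.0, 4.0):
    print(x0, first_return(x0))           # return value ~ x0: the orbits are closed
```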
3.7 Continuation of Periodic Solutions
Consider a system $\dot{x} = f(x, \varepsilon)$, where $f : E \times U \to \mathbb{R}^n$ is $C^1$ and $E \subseteq \mathbb{R}^n$ and $U \subseteq \mathbb{R}^m$ are open. We wish to study the qualitative effects of changing the parameter $\varepsilon \in U$. In this section, in particular, we will study the conditions under which periodic solutions for some parameter $\varepsilon = \varepsilon_0$ can be continued as $\varepsilon$ changes. More precisely, suppose $\Phi_t(x, \varepsilon)$ is the flow of this system and suppose that for some $x_0 \in E$ and $T > 0$ we have $\Phi_T(x_0, \varepsilon_0) = x_0$. Then we wish to ask whether we can find functions $x_0 : U \to \mathbb{R}^n$ and $T : U \to \mathbb{R}$ such that $\Phi_{T(\varepsilon)}(x_0(\varepsilon), \varepsilon) = x_0(\varepsilon)$. Another way of phrasing this is by asking what happens to the periodic solution with initial point $x_0$ as $\varepsilon$ is perturbed.

Before we answer this question, we should take care of the more basic concern involving the smooth (i.e. $C^1$) dependence of solutions to this system on the parameter $\varepsilon$. However, we can easily reduce the question of smooth dependence on the parameter to our previous understanding of smooth dependence on the initial condition. Simply let
\[ u = \begin{pmatrix} x \\ \varepsilon \end{pmatrix} \in \mathbb{R}^{n+m}, \quad \text{so that} \quad \dot{u} = \begin{pmatrix} f(x, \varepsilon) \\ 0 \end{pmatrix} = g(u). \]
Then we know that the flow $\Psi_t$ corresponding to the vector field $g$ is smooth in all variables. But
\[ \Psi_t(u) = \Psi_t(x, \varepsilon) = \begin{pmatrix} \Phi_t(x, \varepsilon) \\ \varepsilon \end{pmatrix}, \]
so $\Phi_t(x, \varepsilon)$ is smooth in $(x, \varepsilon)$.
Theorem 3.20. If $f \in C^1(E \times U)$ for $E \subseteq \mathbb{R}^n$ and $U \subseteq \mathbb{R}^m$ open, then the flow $\Phi_t(x, \varepsilon)$ associated to $f$ is smooth in $(x, \varepsilon)$.
In what follows, let us take $\varepsilon_0 = 0$ for simplicity. If $x_0$ is periodic for this parameter, we can construct a transversal section $\Sigma$ to the periodic orbit of $x_0$. It would seem then, by continuity, that small perturbations in $\varepsilon$ should not affect the fact that orbits originating in $\Sigma$ return to $\Sigma$. Of course, their respective return times and return locations may change. However, we are mainly interested in the qualitative dynamics, so in this case the existence of periodic orbits. We make this discussion more precise in the theorem that follows.
Definition. If $(x_0, \varepsilon_0)$ is an equilibrium point of $f$, then it is called elementary if $D_x f(x_0, \varepsilon_0)$ has no zero eigenvalues. If $(x_0, \varepsilon_0)$ is a periodic point of $f$ with period $T$, then it is called elementary if the monodromy matrix evaluated at $(x_0, \varepsilon_0)$, i.e. $D_x \Phi_T(x_0, \varepsilon_0)$, has $1$ (which we know to be an eigenvalue) as a simple eigenvalue (i.e. $1$ has multiplicity one in the characteristic polynomial of the monodromy matrix).

Theorem 3.21. An elementary equilibrium (respectively, periodic) point $(x_0, \varepsilon_0)$ can always be continued smoothly, i.e. there exist neighbourhoods $V$ and $W$ of $x_0$ and $\varepsilon_0$, respectively, and a smooth function $g : W \to V$ such that $g(\varepsilon_0) = x_0$ and $(g(\varepsilon), \varepsilon)$ is an equilibrium (respectively, periodic) point for all $\varepsilon \in W$.

Proof. This follows from the implicit function theorem and Theorem 3.17.
3.8 Van der Pol Oscillator
The van der Pol oscillator, an important example of a dynamical system, is defined by the second-order differential equation
\[ \ddot{x} + \varepsilon(1 - x^2)\dot{x} + \omega^2 x = 0. \]
We can convert this (taking $\omega = 1$) into the system
\[ \begin{cases} \dot{x} = -y \\ \dot{y} = -\varepsilon(1 - x^2)y + x. \end{cases} \]
More generally, we can consider any system given by
\[ \dot{u} = Au + \varepsilon g(u), \]
where $u = (x, y)^T$,
\[ A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \]
and $g = (g_1, g_2)^T$. This last form makes clear the fact that when $\varepsilon = 0$, the system is linear. Thus, if $\Phi_t(u, \varepsilon)$ denotes the flow of the system with parameter $\varepsilon$ evaluated at $u$, then
\[ \Phi_t(u, 0) = e^{At}u = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix} \begin{pmatrix} x_0 \\ y_0 \end{pmatrix}. \]
So for $\varepsilon = 0$, all solutions to the system are periodic with period $2\pi$. We wish to extend this result to the case where $\varepsilon \neq 0$. However, for $\varepsilon = 0$, the monodromy matrix is simply the identity, which does not have $1$ as a simple eigenvalue.
To get around this problem, consider the section $\Sigma = \{(x, 0) : x > 0\}$ and define the parametrized Poincaré map $P(\xi, \varepsilon) = \Phi_{T(\xi, \varepsilon)}(\xi, \varepsilon)$ for $\xi = (x, 0) \in \Sigma$. So we know that $P(\xi, 0) = \xi$. Also, define the displacement function $\delta(\xi, \varepsilon) = x(T(\xi, \varepsilon), \xi, \varepsilon) - \xi$. Since $\delta$ has no simple zeros (it vanishes identically at $\varepsilon = 0$), let us take a look at the Taylor expansion of $\delta$ about $\varepsilon = 0$, which is given by
\[ \delta(\xi, \varepsilon) = \varepsilon\,\delta_\varepsilon(\xi, 0) + O(\varepsilon^2) = \varepsilon\,\Delta(\xi, \varepsilon), \]
where $\Delta(\xi, \varepsilon) = \delta_\varepsilon(\xi, 0) + O(\varepsilon)$ ($\Delta$ is known as a reduced displacement function). If $\Delta$ has simple zeros, then they can be continued by the implicit function theorem. So we want to find $\xi_0$ such that $\delta_\varepsilon(\xi_0, 0) = 0$ and $\delta_{\varepsilon\xi}(\xi_0, 0) \neq 0$. To begin, we take the derivative
\[ \delta_\varepsilon(\xi, 0) = \left.\frac{d}{dt}x(t, \xi, 0)\right|_{t = T(\xi, 0)} T_\varepsilon(\xi, 0) + x_\varepsilon(T(\xi, 0), \xi, 0). \]
Since $\xi \in \Sigma$, we get $\left.\frac{d}{dt}x(t, \xi, 0)\right|_{t = T(\xi, 0)} = -y(T(\xi, 0), \xi, 0) = 0$, and so
\[ \delta_\varepsilon(\xi, 0) = x_\varepsilon(T(\xi, 0), \xi, 0) = \pi_1\!\left(\left.\frac{\partial}{\partial \varepsilon}\Phi_t(\xi, \varepsilon)\right|_{\varepsilon = 0,\; t = 2\pi}\right). \]
To evaluate this, note that $\frac{d}{dt}\Phi_t(\xi, \varepsilon) = f(\Phi_t(\xi, \varepsilon), \varepsilon)$, so
\[ \frac{d}{dt}\frac{\partial}{\partial \varepsilon}\Phi_t(\xi, \varepsilon) = D_u f(\Phi_t(\xi, \varepsilon), \varepsilon)\,\frac{\partial}{\partial \varepsilon}\Phi_t(\xi, \varepsilon) + \frac{\partial f}{\partial \varepsilon}(\Phi_t(\xi, \varepsilon), \varepsilon). \]
But we have $\Phi_t(\xi, \varepsilon) = \xi + \int_0^t f(\Phi_s(\xi, \varepsilon), \varepsilon)\, ds$, and so the initial condition is $\frac{\partial}{\partial \varepsilon}\Phi_0(\xi, \varepsilon) = 0$.

[[Incomplete]]
3.9 Hopf Bifurcation
Consider the system
\[ \begin{cases} \dot{x} = \mu x - y - (x^2 + y^2)x \\ \dot{y} = x + \mu y - (x^2 + y^2)y. \end{cases} \]
It can be seen that as $\mu$ passes through $0$ (from the left), a periodic orbit is born at $r = \sqrt{\mu}$. This kind of phenomenon can be found in a more general setting under some assumptions.

1. Assume the vector field $F(x, \mu) = A(\mu)x + f(x, \mu)$ has an equilibrium point at $x = 0$ for all $\mu$, i.e. $f(0, \mu) = 0$.

2. Suppose the linear map $A(\mu)$ has eigenvalues $\lambda_1(\mu), \overline{\lambda_1(\mu)}, \lambda_3, \ldots, \lambda_n$, where $\lambda_1(\mu) = \alpha(\mu) + i\beta(\mu)$. We assume that $\alpha(0) = 0$, $\alpha'(0) \neq 0$, and $\beta(0) \neq 0$.

3. Finally, we assume the non-resonance conditions $\frac{\lambda_k}{\lambda_1} \notin \mathbb{Z}$ for $k = 3, \ldots, n$.

Theorem 3.22 (Hopf-Andronov). Under the above assumptions, there exists a smooth family of periodic solutions $x(t) = u(t, x_0(\varepsilon), \varepsilon)$, $\mu = \mu(\varepsilon)$, with periods $T(\varepsilon)$, defined for $0 < \varepsilon \leq \varepsilon_0$, such that as $\varepsilon \to 0$,
\begin{align*}
u(t, x_0(\varepsilon), \varepsilon) &\to 0, \\
\mu(\varepsilon) &\to 0, \\
T(\varepsilon) &\to \frac{2\pi}{\beta(0)}.
\end{align*}

[[Proof]]
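A quick numerical look at the example at the start of this section (a sketch, not part of the notes; the chosen values of $\mu$ and the integration time are arbitrary): for $\mu > 0$, trajectories settle onto a periodic orbit of radius close to $\sqrt{\mu}$, while for $\mu < 0$ they decay to the origin.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(mu):
    def f(t, u):
        x, y = u
        r2 = x * x + y * y
        return [mu * x - y - r2 * x, x + mu * y - r2 * y]
    return f

for mu in (-0.2, 0.1, 0.4):
    sol = solve_ivp(rhs(mu), (0.0, 200.0), [0.3, 0.0], rtol=1e-9, atol=1e-12)
    r_final = np.hypot(*sol.y[:, -1])
    print(f"mu = {mu:5.2f}: final radius {r_final:.4f}, sqrt(mu) = "
          f"{np.sqrt(mu) if mu > 0 else 0.0:.4f}")

# For mu > 0 the radius settles near sqrt(mu) (the bifurcating periodic orbit);
# for mu < 0 the origin is asymptotically stable and the radius decays to 0.
```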