ODE Oral Exam Notes 2008
Chaitanya Ekanadham
1 Existence and uniqueness

1.1 Definitions

ODE/IVP, Lipschitz, C^1, local/global solution, method of successive approx.

1.2 Useful Theorems
Theorem 1. Gronwall's lemma
Suppose g, β are nonneg. cnts functions and g(t) ≤ α + ∫_τ^t β(s)g(s)ds on some interval I ⊆ R. Then g(t) ≤ α e^{∫_τ^t β(s)ds} on I.
Depends on: FTOC
Proof idea: Look at the function f(t) = exp(−∫_τ^t β(s)ds) ∫_τ^t β(s)g(s)ds. Since f′(t) ≤ αβ(t) exp(−∫_τ^t β(s)ds), it follows by FTOC that f(t) ≤ ∫_τ^t αβ(s) exp(−∫_τ^s β(r)dr)ds = α(1 − e^{−∫_τ^t β(s)ds}). Bound ∫_τ^t β(s)g(s)ds = e^{∫_τ^t β(s)ds} f(t) above by α(e^{∫_τ^t β(s)ds} − 1), so that g(t) ≤ α + ∫_τ^t β(s)g(s)ds ≤ α e^{∫_τ^t β(s)ds}.
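The lemma can be sanity-checked numerically. Below is a small sketch (not part of the notes) using the illustrative choice α = 1, β ≡ 1, g(t) = e^{t/2} on [0, 1], which satisfies the hypothesis:

```python
import math

# Illustrative instance: α = 1, β ≡ 1, g(t) = e^{t/2} on [0, 1].
# Hypothesis: g(t) ≤ α + ∫_0^t g(s) ds (here 2e^{t/2} - 1 ≥ e^{t/2});
# conclusion (Gronwall): g(t) ≤ α e^t.
alpha, n = 1.0, 1000
h = 1.0 / n
ts = [i * h for i in range(n + 1)]
g = [math.exp(t / 2.0) for t in ts]

def integral_up_to(k):  # trapezoid rule for ∫_0^{t_k} g(s) ds
    return 0.0 if k == 0 else h * (sum(g[:k + 1]) - 0.5 * (g[0] + g[k]))

hyp_ok = all(g[k] <= alpha + integral_up_to(k) + 1e-9 for k in range(n + 1))
concl_ok = all(g[k] <= alpha * math.exp(ts[k]) + 1e-9 for k in range(n + 1))
```

Both flags come out True: the hypothesis holds on the grid, and so does the conclusion.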
Theorem 2. Picard-Lindelöf existence/uniqueness theorem
Suppose f : (I × U) ⊆ (R × R^d) → R^d is continuous in (t, y) and uniformly Lipschitz in y. Then ∀(τ, ξ) ∈ (I × U), the ODE [ẏ = f(t, y) with y(τ) = ξ] has a unique solution on {t ∈ I : |t − τ| < α} for some α > 0.
Depends on: Completeness of R, Gronwall's lemma (for uniqueness)
Proof idea: 5 steps:
1. Choose a, b s.t. {t : |t − τ| < a} ⊆ I and {y : |y − ξ| < b} ⊆ U. Let α = min(a, b/M) where M is an upper bound of |f| on a closed set in I × U (so that the solution can't leave U on a small interval around τ).
2. Set ϕ_0(t) = ξ and ∀n ≥ 0 define ϕ_{n+1}(t) = ξ + ∫_τ^t f(s, ϕ_n(s))ds. The idea is that a fixed point of this iterative process must be a solution to the ODE (verify using FTOC). So it suffices to show that the sequence ϕ_n converges to a limit function ϕ that is a fixed point.
3. Show by induction that ∀k ≥ 0, |ϕ_{k+1}(t) − ϕ_k(t)| ≤ M C^k |t − τ|^{k+1}/(k+1)!, where C is the Lipschitz constant of f. So ϕ_n(t) converges uniformly and absolutely to some cnts ϕ(t) on {t ∈ I : |t − τ| < α}.
4. Show that ϕ(t) is a fixed point of the process, i.e. that |ϕ(t) − ξ − ∫_τ^t f(s, ϕ(s))ds| = 0, by expanding ϕ(t) = lim ϕ_n(t) and using the uniform convergence and Lipschitz properties.
5. Show uniqueness by taking two solutions ϕ, ψ that are fixed points of the iteration and showing that |ϕ(t) − ψ(t)| = 0 using Gronwall's lemma.
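The successive approximations above can be sketched numerically. The following is a minimal illustration (assuming the toy choice f(t, y) = y, τ = 0, ξ = 1, whose exact solution is e^t):

```python
import math

# Successive approximations ϕ_{k+1}(t) = ξ + ∫_τ^t f(s, ϕ_k(s)) ds on a grid,
# with the integral computed by the trapezoid rule.
def picard(f, tau, xi, T, n_grid=500, n_iter=25):
    h = (T - tau) / n_grid
    ts = [tau + i * h for i in range(n_grid + 1)]
    phi = [xi] * (n_grid + 1)  # ϕ_0 ≡ ξ
    for _ in range(n_iter):
        vals = [f(t, p) for t, p in zip(ts, phi)]
        new, acc = [xi], 0.0
        for i in range(n_grid):
            acc += 0.5 * h * (vals[i] + vals[i + 1])
            new.append(xi + acc)
        phi = new
    return ts, phi

# Toy instance f(t, y) = y, τ = 0, ξ = 1: the iterates converge to e^t on [0, 1]
ts, phi = picard(lambda t, y: y, 0.0, 1.0, 1.0)
picard_err = max(abs(p - math.exp(t)) for t, p in zip(ts, phi))
```

After 25 iterations the grid iterate agrees with e^t up to the quadrature error of the trapezoid rule.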
Theorem 3. Cauchy-Peano existence theorem
Suppose f : (I × U) ⊆ (R × R^d) → R^d is continuous in (t, y). Then ∀(τ, ξ) ∈ (I × U), the ODE [ẏ = f(t, y) with y(τ) = ξ] has a (not necessarily unique) solution on {t ∈ I : |t − τ| < α} for some α > 0.
Depends on: Arzela-Ascoli theorem
Proof idea: The basic idea is to define a family of approximate solutions {ϕ_{ε_n}(t)}, extract a convergent subsequence, and show that the limit function is a solution.
1. Define ϕ_ε(t) = ξ + 1_{t>τ} ∫_τ^t f(s, ϕ_ε(s − ε))ds (taking ϕ_ε ≡ ξ for t ≤ τ), for a sequence ε_n → 0.
2. Show that {ϕ_{ε_n}(t)} are uniformly bounded on some interval containing τ. Choose α small s.t. ϕ_ε(t) does not exit U on {t ∈ I : |t − τ| < α} and also s.t. |f| ≤ M on {(t, ϕ_ε(t)) : τ ≤ t ≤ τ + α}.
3. Show that {ϕ_{ε_n}(t)} are equicontinuous by showing that they are unif. Lipschitz (because |ϕ′_ε(t)| ≤ |f| ≤ M).
4. Apply the Arzela-Ascoli theorem to extract a subsequence (ϕ_{ε_{n_j}}(t)) → ϕ(t) ∀t ∈ [τ, τ + α].
5. Show that ϕ(t) is a fixed point of the iterative process using similar tricks to the previous Theorem (use the triangle inequality and an ε, δ argument).
6. Do the same for [τ − α, τ].
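The delayed-argument approximants ϕ_ε can be simulated on a grid; the sketch below (a toy instance, assuming f(t, y) = y, τ = 0, ξ = 1) shows ϕ_ε staying close to the true solution e^t for small ε:

```python
import math

# Delayed-argument approximant: ϕ_ε(t) = ξ + ∫_τ^t f(s, ϕ_ε(s - ε)) ds, with ϕ_ε ≡ ξ
# for t ≤ τ. Because the integrand looks back by ε, the values it needs are always
# already known, so ϕ_ε can be built by marching forward (Euler steps for the integral).
def delayed_approx(f, tau, xi, T, eps, h):
    n = int(round((T - tau) / h))
    d = int(round(eps / h))  # delay measured in grid steps
    ts = [tau + i * h for i in range(n + 1)]
    phi = [xi] * (n + 1)
    for i in range(n):
        lag = phi[i - d] if i - d >= 0 else xi  # ϕ_ε(t_i - ε)
        phi[i + 1] = phi[i] + h * f(ts[i], lag)
    return ts, phi

# Toy instance f(t, y) = y, τ = 0, ξ = 1: for small ε, ϕ_ε tracks e^t on [0, 1]
ts, phi = delayed_approx(lambda t, y: y, 0.0, 1.0, 1.0, eps=0.01, h=0.001)
delay_err = max(abs(p - math.exp(t)) for t, p in zip(ts, phi))
```

The deviation from e^t is of order ε (plus the Euler discretization error), consistent with the compactness argument extracting a convergent subsequence as ε → 0.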
1.3 Important examples
1. Continuity of f alone is not sufficient for uniqueness. Consider the ODE ẏ(t) = |y(t)|^{1/2} on t ∈ [0, 1] with y(0) = 0. y(t) ≡ 0 and y(t) = t^2/4 are both solutions.
2. The Lipschitz condition on f is not necessary (example here).
3. An ODE can have EXACTLY
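Example 1 above can be checked pointwise (an illustrative verification, not in the notes):

```python
# Pointwise check that y ≡ 0 and y = t²/4 both solve ẏ = |y|^{1/2} with y(0) = 0:
ts = [i / 100.0 for i in range(101)]
res_zero = max(abs(0.0 - abs(0.0) ** 0.5) for _ in ts)              # y ≡ 0: ẏ = 0 = |0|^{1/2}
res_quad = max(abs(t / 2.0 - abs(t * t / 4.0) ** 0.5) for t in ts)  # y = t²/4: ẏ = t/2
```

Both residuals vanish (up to floating-point rounding), confirming two distinct solutions through (0, 0).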
2 Properties/extension of solutions

2.1 Definitions

2.2 Useful Theorems
Theorem 1. Continuation of solutions
Suppose y(t) is a solution to the ODE [ẏ = f(t, y) with y(τ) = ξ] on t ∈ (a, b). Let D = I × U be the domain of f. If f is bounded on D, then:
1. lim_{t→a+} y(t) and lim_{t→b−} y(t) exist
2. (a, lim_{t→a+} y(t)) ∈ D and/or (b, lim_{t→b−} y(t)) ∈ D ⇒ y(t) can be continued to the left and/or right, respectively.
Depends on: completeness of R, i.e. Cauchy criterion
Proof idea: Consider two points t1 < t2 ∈ (a, b) approaching a+. |y(t1) − y(t2)| ≤ ∫_{t1}^{t2} |f(s, y(s))|ds ≤ M|t2 − t1| → 0 as they both approach a (likewise for t1, t2 → b−). By the Cauchy criterion, y(t) has a finite limit y_a at a. If (a, y_a) ∈ D, one can verify that the function ỹ(t) extending y(t) to [a, b) also satisfies the integral form of the ODE. Reapplying Cauchy-Peano at the point (a, y_a) gives the extension.
Remark: The only case where continuation fails is if the finite limit is not in Domain(f) or if f is unbounded (i.e. finite-time blowup). See example below.
Theorem 2. (Continuity w.r.t. initial conditions/parameters)
Suppose f : D = (I × U) ⊆ (R × R^d) → R^d is continuous and Lipschitz in (t, y). Suppose ψ(t) is a solution to the ODE [ẏ = f(t, y)] on (a, b). Then ∃δ > 0 s.t. ∀(τ, ξ) ∈ V_δ ≡ {(τ′, ξ′) : τ′ ∈ (a, b), |ξ′ − ψ(τ′)| < δ}, ∃ a unique soln ϕ(t, τ, ξ) on (a, b) with ϕ cnts in (t, τ, ξ) on (a, b) × V_δ.
Depends on: Gronwall, continuation of solution, successive approximation,
uniform limit of cnts fns is cnts
Proof idea: 3 steps:
1. Choose δ′ > 0 s.t. V_{δ′} ⊆ D (one must exist). Choose δ < δ′ e^{−K(b−a)} where K is the Lipschitz constant of f. We know that ∀(τ, ξ) ∈ V_δ, ∃! a soln ϕ(t, τ, ξ) on some small interval containing τ.
2. We can show that |ϕ(t) − ψ(t)| ≤ |ξ − ψ(τ)| + K ∫_τ^t |ϕ(s) − ψ(s)|ds ∀t where ϕ is defined (just expand ϕ and ψ in their recursive integral form). Apply Gronwall's lemma to show that |ϕ − ψ| ≤ δ′ ⇒ ϕ(t, τ, ξ) can be continued throughout (a, b) (since V_{δ′} ⊆ D).
3. Show ϕ(t, τ, ξ) is cnts by showing it is the uniform limit of the sequence of cnts fns defined by: ϕ_0(t, τ, ξ) = ψ(t) − ψ(τ) + ξ, ϕ_{n+1}(t, τ, ξ) = ξ + ∫_τ^t f(s, ϕ_n(s, τ, ξ))ds. Method is similar to before:
(a) Show by induction that ∀n ≥ 0, |ϕ_{n+1}(t) − ϕ_n(t)| ≤ K^{n+1}|t − τ|^{n+1}|ξ − ψ(τ)|/(n+1)!. This shows uniform convergence.
(b) Also NTS that |ϕ_n(t) − ψ(t)| ≤ δ′ so that the ϕ_n(t) are all defined on (a, b) (do this by expanding ϕ_n(t) as a telescoping sum). Thus the uniform limit ϕ(t) satisfies the desired ODE.
Remark: This extends to the case where f = f(t, y, µ) depends on some parameter µ, using V_δ = {(τ′, ξ′, µ′) : τ′ ∈ (a, b), |ξ′ − ψ(τ′)| + |µ′ − µ_0| < δ}.
Theorem 3. (Differentiability w.r.t. initial conditions/parameters)
Let f, D, ψ, ϕ(t, τ, ξ) be as in the previous theorem and suppose also that J(t, y) = [df_i(t, y)/dx_j]_{i,j} exists and is cnts on D. Then ϕ(t, τ, ξ) is C^1 and det[dϕ_i(t, τ, ξ)/dξ_j]_{i,j} = exp(∫_τ^t tr J(s, ϕ(s))ds).
Depends on:
Proof idea: Proof is complicated. Refer to Thm 7.2 in Coddington. Very briefly:
1. To show that dϕ_i(t, τ, ξ)/dξ_j exists, we note that it is the solution to the linear (matrix-valued) ODE [ẏ(t) = J(t, ϕ(t, τ, ξ))y] (plug in the solution, switch the order of differentiation, and apply the chain rule).
2. The det part follows from the next theorem.
Theorem 4. (Properties of matrix-valued solutions to ODEs)
Suppose A(t) is a cnts n×n matrix and that Φ(t) is a matrix-valued function satisfying the matrix ODE [Φ′(t) = A(t)Φ(t)] ∀t ∈ [a, b]. Then [det Φ(t)]′ = (tr A(t)) det Φ(t). As a consequence, det Φ(t) = det Φ(τ) exp(∫_τ^t tr A(s)ds).
Depends on:
Proof idea: Pure computation with some tricks. Use the permutation expansion of det Φ(t) and then take d/dt by repeatedly applying the product rule to get that [det Φ(t)]′ = Σ_{j=1}^n det Φ_j(t), where Φ_j(t) is obtained by replacing the j-th row of Φ(t), [Φ_{j∗}(t)], with [Φ′_{j∗}(t)] = [Σ_{k=1}^n a_{jk}(t)Φ_{k∗}(t)]. Perform row operations to normalize the j-th row to [a_{jj}(t)Φ_{j∗}(t)] ⇒ det Φ_j(t) = a_{jj}(t) det Φ(t), giving the result.
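The determinant identity can be verified numerically; the following sketch integrates Φ′ = A(t)Φ by Euler steps for the hypothetical choice A(t) = [[0, 1], [−1, t]] on [0, 1]:

```python
import math

# Integrate Φ' = A(t)Φ, Φ(0) = I, with the hypothetical choice A(t) = [[0, 1], [-1, t]],
# and compare det Φ(1) with exp(∫_0^1 tr A(s) ds) = e^{1/2}.
def euler_step(t, P, h):  # Φ stored row-major as (a, b, c, d)
    a, b, c, d = P
    return (a + h * c, b + h * d,                        # row 1 of A(t)Φ is (c, d)
            c + h * (-a + t * c), d + h * (-b + t * d))  # row 2 is (-a + tc, -b + td)

n = 10000
h = 1.0 / n
P = (1.0, 0.0, 0.0, 1.0)  # Φ(0) = I
for i in range(n):
    P = euler_step(i * h, P, h)
det_phi = P[0] * P[3] - P[1] * P[2]
expected = math.exp(0.5)  # ∫_0^1 tr A(s) ds = ∫_0^1 s ds = 1/2
```

The computed det Φ(1) matches e^{1/2} up to the discretization error.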
2.3 Important examples

1. Finite-time blowup (where continuation fails): consider the ODE [ẏ = y^2] with y(1) = −1. The solution on (0, ∞) is y(t) = −t^{−1}. But clearly f(t, y) = y^2 is not bounded on (t, y) ∈ [0, 1] × R, so the theorem cannot apply, which makes sense since the solution blows up at t = 0.
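A quick check (illustrative, not in the notes) of the closed form and the blowup:

```python
# y(t) = -1/t solves ẏ = y² with y(1) = -1; it leaves every bounded set as t → 0+.
pts = [2.0 ** -k for k in range(11)]  # t = 1, 1/2, ..., 2^{-10}
residual = max(abs(1.0 / t ** 2 - (-1.0 / t) ** 2) for t in pts)  # ẏ - y² at each t
blowup = max(abs(-1.0 / t) for t in pts)  # |y| already exceeds 1000 by t = 2^{-10}
```

The residual is zero at every sample point, while |y(t)| grows without bound as t decreases toward 0.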
3 Linear/Autonomous ODE

3.1 Definitions

autonomous ODE, linear ODE, critical point, positive/negative attractor, node, saddle, focus, center, stable/unstable manifold, first integral, fundamental matrix, linearly independent solutions
3.2 Critical points in 2D linear systems
Consider the ODE ẋ = Ax where A ∈ R^{2×2} is a nonsingular matrix and x = (x1, x2) : R → R^2. Let λ1, λ2 be the eigenvalues of A and let A = SJS^{−1} be the Jordan decomposition of A. Consider the following cases at the critical point x = 0:
1. Node: λ1, λ2 are real and have the same sign.
(a) A diagonalizable: look at z = S^{−1}x. Then ż = S^{−1}Ax = JS^{−1}x = Jz ⇒ z_j = c_j e^{λ_j t} for j = 1, 2. If the eigenvalues are negative, 0 is a positive attractor since z(t) → 0 ⇒ x(t) = Sz(t) → 0. Otherwise it is a negative attractor. Notice also that we have z2 = C|z1|^α with α = λ2/λ1 > 0.
(b) A not diagonalizable: λ = λ1 = λ2 and solving for z gives z2 = c2 e^{λt} and z1 = c1 e^{λt} + c2 t e^{λt}. If λ < 0 (resp. λ > 0), 0 is again a positive (resp. negative) attractor.
2. Saddle: λ1, λ2 are real and λ2 < 0 < λ1 (WLOG). Solving for z(t) again gives z_j(t) = c_j e^{λ_j t} for j = 1, 2. So z2 = c z1^β with β = λ2/λ1 < 0. So z1 → ±∞ and z2 → 0 as t → ∞. There is a stable subspace E_S = y-axis and an unstable subspace E_U = x-axis with R^2 = E_S ⊕ E_U.
3. Focus: λ1, λ2 = µ ± iω with µ, ω ≠ 0. We can use the same transformation z = S^{−1}x to get complex solutions z_{1,2}(t) = c_{1,2} e^{µt} e^{±iωt}. So x = Sz_1 is also a (possibly complex) solution. Noting that the real/imaginary part of a solution is again a solution (since A is real), we get solutions ℜSz_1 and ℑSz_1. For µ < 0 (resp. µ > 0) these correspond to a spiral going radially inward (resp. outward).
4. Centre: λ_{1,2} = ±iω, ω ≠ 0. Following the same process as above, we have solutions that are oscillatory. Since z_{1,2}(t) are 2π/ω-periodic, it follows that so are the solutions x_{1,2}(t).
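The case analysis above can be condensed into a small classifier based on the trace and determinant of A (a sketch, with hypothetical test matrices):

```python
import cmath

def classify(A):
    """Classify the critical point x = 0 of ẋ = Ax for a nonsingular 2x2 matrix A."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    l1, l2 = (tr + disc) / 2.0, (tr - disc) / 2.0
    if abs(l1.imag) > 1e-12:                        # complex pair µ ± iω
        return "centre" if abs(l1.real) < 1e-12 else "focus"
    if l1.real * l2.real < 0:                       # real eigenvalues of opposite sign
        return "saddle"
    return "node"                                   # real, same sign (A nonsingular)

kind = classify(((-2.0, 0.0), (0.0, -3.0)))         # a node: λ = -2, -3
```

Degenerate borderline cases (zero eigenvalues) are excluded since A is assumed nonsingular.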
3.3 Useful Theorems
Theorem 1. Solutions to autonomous ODE's are phase-invariant
Suppose x(t) solves the autonomous ODE ẋ = f(x). Then ∀t0 ∈ R, x̂(t) = x(t − t0) is also a solution.
Depends on: chain rule
Proof idea: follows directly from the chain rule.
Theorem 2. Let V be the set of solutions to the ODE [ẋ = A(t)x] where t ∈ I, x ∈ R^n, and A(t) : R → R^{n×n} is continuous. Then V is a vector space of dimension n.
Depends on: uniqueness of solutions, linear independence in regular vector spaces
Proof idea: Clearly the basic properties of a vector space hold in V (linear combinations of solutions are also solutions). In addition we can construct a basis for V by taking a basis {ξ_j} for the range of the solution (e.g., the canonical basis of R^n). Fixing τ ∈ I, we have n unique solutions ϕ_j(t) to the ODE solving ϕ_j(τ) = ξ_j. Then we can use the fact that {ξ_j} is a basis to show that V = span{ϕ_j} and that the {ϕ_j} are linearly independent (simply evaluate at τ).
Theorem 3. (Necessary and sufficient conditions for a fundamental matrix)
Suppose M(t) solves the matrix-valued ODE [Ṁ = AM]. Then M is a fundamental matrix ⇔ det M(τ) ≠ 0 for some τ ∈ I.
Depends on: uniqueness, linear algebra, property of the determinant of the solution of matrix-valued ODE's
Proof idea: Let ϕ_j(t) be the n columns of M.
1. ⇒: Suppose M is a f.m., i.e. Σ c_j ϕ_j(t) = 0 ∀t ⇒ c_j = 0 ∀j. Fix τ ∈ I and suppose Σ c_j ϕ_j(τ) = 0. By uniqueness (of the zero solution), we have that Σ c_j ϕ_j(t) = 0 ∀t ∈ I ⇒ c_j = 0 ∀j. So the ϕ_j(τ) are l.i. and so det M(τ) ≠ 0.
2. ⇐: Suppose det M(τ) ≠ 0 for some τ ∈ I. By theorem 4 above we have det M(t) ≠ 0 ∀t ∈ I. Thus the ϕ_j(t)'s must be l.i. and so M is a f.m.
Theorem 4. (Multiplication by nonsingular matrices preserves f.m.'s)
Suppose Φ(t) is a f.m. for the ODE ẏ = A(t)y and C is a nonsingular constant matrix. Then Φ(t)C is a f.m., and every f.m. is of the form Φ(t)C̃ for some nonsingular C̃.
Depends on: linear algebra and matrix calculus, previous theorem
Proof idea:
1. ⇒: (Φ(t)C)′ = A(t)(Φ(t)C) and Φ(t)C is nonsingular since its determinant is the product of 2 nonzero determinants.
2. ⇐: Take 2 f.m.'s Φ1(t) and Φ2(t). Let Ψ(t) = Φ1^{−1}Φ2 so that Φ2 = Φ1Ψ. Differentiating both sides wrt t (using the product rule on the RHS) and using the fact that Φ1 and Φ2 are solutions, we get Ψ′(t) ≡ 0, so Ψ is constant, and it is nonsingular since Φ1 and Φ2 are.
Theorem 5. (Adjoint systems)
Let Φ(t) be a f.m. for the ODE [ẏ = A(t)y]. Then Ψ(t) is a f.m. for the ODE [ẏ = −A(t)∗y] ⇔ Ψ∗Φ = C for some constant nonsingular matrix C.
Depends on: previous theorem
Proof idea: Use the fact that Φ(t)Φ(t)^{−1} = I ⇒ (Φ(t)^{−1})′ = −Φ(t)^{−1}Φ′(t)Φ(t)^{−1} = −Φ(t)^{−1}A(t) ⇒ (Φ∗)^{−1} solves the adjoint system; apply the previous theorem (for both directions).
Remark: If A is antisymmetric (A = −A∗) then we have Φ∗Φ = C ⇒ all solutions have constant norm.
Theorem 6. (Solution for constant coefficient linear ODE)
Suppose A ∈ R^{n×n} is a matrix and τ ∈ I ⊆ R, ξ ∈ R^n. Then ϕ(t) = e^{(t−τ)A}ξ is a solution to the ODE ẏ = Ay with y(τ) = ξ.
Depends on: properties of the matrix exponential
Proof idea: Write out the definition of e^{(t−τ)A} as an infinite series, and differentiate the sum term-by-term.
Remark: e^A is typically computed by writing A = SJS^{−1} ⇒ e^A = S e^J S^{−1} where J is the Jordan form. If J is a Jordan block (with λ on the diagonal and 1's on the superdiagonal) then e^{tJ} has the term t^r e^{λt}/r! on the r-th superdiagonal.
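A minimal sketch of the matrix exponential by truncated power series (2×2 only, adequate when ‖tA‖ is small enough that 30 terms suffice); the rotation generator A = [[0, 1], [−1, 0]] gives e^{tA} = [[cos t, sin t], [−sin t, cos t]]:

```python
import math

def expm2(A, t, terms=30):
    """e^{tA} for a 2x2 matrix A via the truncated power series Σ (tA)^k / k!
    (a sketch for small ‖tA‖, not a production implementation)."""
    S = [[1.0, 0.0], [0.0, 1.0]]  # running sum, starts at I
    P = [[1.0, 0.0], [0.0, 1.0]]  # current term (tA)^k / k!
    for k in range(1, terms):
        P = [[sum(P[i][m] * t * A[m][j] for m in range(2)) / k for j in range(2)]
             for i in range(2)]
        S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return S

# Rotation generator: A = [[0, 1], [-1, 0]] gives e^{tA} = [[cos t, sin t], [-sin t, cos t]]
E = expm2([[0.0, 1.0], [-1.0, 0.0]], 1.0)
```

For larger ‖tA‖ one would use scaling-and-squaring or the Jordan-form recipe from the remark instead of the raw series.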
Theorem 7. (Reduction of order for linear ODE's)
Let A(t) : R → R^{n×n} and suppose {ϕ_j}_{j=1}^m are m linearly independent solutions to the ODE ẏ = A(t)y with m < n. Then we can reduce the problem of finding n linearly independent solutions to the ODE to solving another ODE of dimension n − m.
Depends on: manipulations
Proof idea: 4 steps:
1. Construct the n × n matrix U to have {ϕ_j}_{j=1}^m as the first m columns and the remaining columns as {e_j}_{j=m+1}^n. WLOG we can assume that the upper-left m × m submatrix has nonzero determinant on some interval Ĩ ⊆ I.
2. Make the change of variables x = Uy to get the new ODE U′y + Uy′ = AUy. Expand this out separately for the first m rows and the latter n − m rows. Use the fact that the ϕ_j's are solutions to simplify the equations.
3. The first system will allow you to solve for {y′_j}_{j=1}^m in terms of {ϕ_{ij}, a_{ik}, y_k}_{k=m+1}^n.
4. Plugging these into the second system results in a linear ODE system in the n − m variables {y_j}_{j=m+1}^n.
Theorem 8. (Solution for nonhomogeneous linear ODEs)
Let A(t) : R → R^{n×n} and b(t) : R → R^n. Then if Φ(t) is a f.m. for the linear ODE [ẏ = A(t)y], then ϕ(t) = Φ(t)[Φ^{−1}(τ)ξ + ∫_τ^t Φ^{−1}(s)b(s)ds] is a solution to the ODE [ẏ = A(t)y + b(t), y(τ) = ξ].
Depends on:
Proof idea: Guess a solution of the nonhomogeneous ODE of the form ϕ(t) = Φ(t)γ(t) where γ : R → R^n. Plug in to get the constraint γ′(t) = Φ^{−1}(t)b(t) and integrate to get the desired form of γ(t).
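Variation of parameters can be checked in the scalar case n = 1, where Φ(t) = e^{at}; a sketch with the illustrative problem ẏ = −y + 1, y(0) = 0 (exact solution 1 − e^{−t}):

```python
import math

# Scalar case n = 1: Φ(t) = e^{at}, and variation of parameters gives
# ϕ(t) = e^{a(t-τ)} ξ + ∫_τ^t e^{a(t-s)} b(s) ds.
def vop_solve(a, b, tau, xi, t, n=2000):
    h = (t - tau) / n
    ss = [tau + i * h for i in range(n + 1)]
    vals = [math.exp(a * (t - s)) * b(s) for s in ss]
    integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule
    return math.exp(a * (t - tau)) * xi + integral

# Illustrative problem: ẏ = -y + 1, y(0) = 0, exact solution 1 - e^{-t}
y_vop = vop_solve(-1.0, lambda s: 1.0, 0.0, 0.0, 2.0)
```

The quadrature result agrees with 1 − e^{−2} to well within the trapezoid error.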
3.4 Important examples

1. A time-dependent matrix A(t) can have columns that are linearly independent as functions of t but det A(t) ≡ 0 ∀t. Consider ϕ1(t) = (t, 0)^T and ϕ2(t) = (t^2, 0)^T. The point of the above theorem is that such matrices are not fundamental matrices for ODE's.
2.
3.
4 Linear systems with periodic coefficients

4.1 Definitions:

characteristic/Floquet multipliers + exponents, 'log' of a matrix

4.2 Useful theorems:
Theorem 1. (Factorization of f.m.'s for periodic systems)
Suppose A(t) is a T-periodic matrix. Then ∃ a T-periodic matrix P(t) and a constant matrix R s.t. Φ(t) = P(t)e^{tR} is a f.m. for the ODE [ẏ = A(t)y].
Depends on: Nonsingular conjugation of f.m.'s, invariance to phase-shift for periodic systems
Proof idea: Φ(t) is an f.m. ⇒ Φ(t + T) is an f.m. Therefore Φ(t + T) = Φ(t)C for some nonsingular constant matrix C. Find a 'log' of C, i.e. R s.t. C = e^{TR}, and define P(t) = Φ(t)e^{−tR}. One can check that P(t) = P(t + T) and clearly P(t)e^{tR} = Φ(t).
Remark:
1. One can derive the log of a nonsingular matrix by separately taking the log of each Jordan block. Since the eigenvalues are nonzero, we can always define the complex logarithm log z = ln|z| + i arg z. For diagonal matrices the log is obvious, and for blocks with 1's on the superdiagonal and λ's on the diagonal, write J = (λI)(I + D) where D has zeros everywhere except λ^{−1} along the superdiagonal. Then use log(1 + z) = z − z^2/2 + z^3/3 − ... to compute log(I + D).
2. If we take another f.m. Φ̂(t) s.t. Φ̂(t)C = Φ(t), then applying the same procedure as in the proof gives Φ̂(t + T) = Φ̂(t)[Ce^{TR}C^{−1}]. Thus all f.m.'s give rise to a family of similar matrices (∼ e^{TR}), whose (nonzero) eigenvalues are therefore well-defined; these are called the characteristic multipliers.
Theorem 2. (Properties of Floquet exponents/multipliers)
Suppose A : R → R^{n×n} is T-periodic and let {ρ_j}_{j=1}^n, {λ_j}_{j=1}^n be the Floquet multipliers and Floquet exponents of the ODE [ẏ = A(t)y], respectively. Then Π_{j=1}^n ρ_j = e^{∫_0^T tr(A(s))ds} and Σ_{j=1}^n λ_j = (1/T)∫_0^T tr(A(s))ds (mod 2πi/T).
Depends on: Properties of the determinant of an f.m.
Proof idea: Let Φ(t) be an f.m. of the above ODE with Φ(0) = I. From a previous theorem we have that det Φ(t) = exp(∫_0^t tr(A(s))ds). Evaluating at t = T and noting that det Φ(T) = det(P(T)) det(e^{TR}) = Π_{j=1}^n ρ_j (since P(T) = P(0) = I) gives the first result.
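The identity Π ρ_j = exp(∫_0^T tr A) can be verified numerically by computing the monodromy matrix Φ(T) with Φ(0) = I; a sketch with the hypothetical 2π-periodic choice A(t) = [[sin t, 1], [−1, 0]], for which ∫_0^T tr A(s)ds = ∫_0^{2π} sin s ds = 0:

```python
import math

def A(t):  # hypothetical 2π-periodic coefficient matrix; ∫_0^{2π} tr A(s) ds = 0
    return ((math.sin(t), 1.0), (-1.0, 0.0))

def deriv(t, P):  # right-hand side of Φ' = A(t)Φ, with Φ stored row-major as (a, b, c, d)
    (a11, a12), (a21, a22) = A(t)
    a, b, c, d = P
    return (a11 * a + a12 * c, a11 * b + a12 * d,
            a21 * a + a22 * c, a21 * b + a22 * d)

def rk4_step(t, P, h):  # one classical Runge-Kutta step
    k1 = deriv(t, P)
    k2 = deriv(t + h / 2, tuple(p + h / 2 * k for p, k in zip(P, k1)))
    k3 = deriv(t + h / 2, tuple(p + h / 2 * k for p, k in zip(P, k2)))
    k4 = deriv(t + h, tuple(p + h * k for p, k in zip(P, k3)))
    return tuple(p + h / 6 * (x1 + 2 * x2 + 2 * x3 + x4)
                 for p, x1, x2, x3, x4 in zip(P, k1, k2, k3, k4))

T, n = 2.0 * math.pi, 4000
h, P = T / n, (1.0, 0.0, 0.0, 1.0)  # Φ(0) = I
for i in range(n):
    P = rk4_step(i * h, P, h)
monodromy_det = P[0] * P[3] - P[1] * P[2]  # det Φ(T) = product of Floquet multipliers
```

Since the trace integrates to 0 over a period, the product of multipliers should be e^0 = 1, which the computed determinant confirms.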
Theorem 3. (Linearization of autonomous ODE about a periodic solution)
Suppose ϕ(t) is a T-periodic solution to the autonomous ODE [ẏ = f(y)]. Then 1 is a Floquet multiplier of the ODE [ẏ = (df/dx)|_{ϕ(t)} y] obtained via linearization around ϕ(t).
Depends on: Previous theorem
Proof idea: Note that ϕ̇(t) solves the linearized ODE (by the chain rule), and that the linearized ODE has A(t) = (df/dx)(ϕ(t)) T-periodic. We can complete an f.m. Φ(t) whose first column is ϕ̇(t); since ϕ̇ is T-periodic, Φ(t + T) = Φ(t)C with Ce_1 = e_1 ⇒ 1 is an eigenvalue of C = e^{TR} ⇒ 1 is a Floquet multiplier.
Theorem 4. (Wronskian and linear independence of solutions for special systems)
Suppose {ϕ_j}_{j=1}^n are n solutions to the ODE [L_n y = 0] where L_n = Σ_{j=0}^n a_j d^{n−j}/dt^{n−j} is a linear differential operator. Then {ϕ_j} are linearly independent ⇔ W(ϕ1, ..., ϕn)(t) ≠ 0 ∀t ∈ I.
Depends on: properties of fundamental matrices
Proof idea: First note that the matrix of which the Wronskian is the determinant is a fundamental matrix iff the solutions are LI. By the linearity of the differential operator, the functions {ϕ_j(t)}_{j=1}^n are LI ⇔ the vector-valued functions {ϕ̂_j(t)}_{j=1}^n, with kth component the (k−1)th derivative, are LI ⇔ W(ϕ1, ..., ϕn)(τ) ≠ 0 for some τ ∈ I ⇔ W ≠ 0 ∀t ∈ I.
Theorem 5. (Wronskian gives the equation for a given solution)
Let {ϕ_j}_{j=1}^n be C^n functions on I with W(ϕ1, ..., ϕn)(t) ≠ 0 on I. Then there exists a unique homogeneous differential equation of order n for which the matrix Φ(t) formed by taking the jth column as (ϕ_j, ϕ′_j, ..., ϕ_j^{(n−1)})^T is a fundamental matrix. The ODE is:
(−1)^n W(x, ϕ1, ..., ϕn)/W(ϕ1, ..., ϕn) = 0.
Depends on: previous theorem
Proof idea: Linear independence of the ϕ_j's comes from the previous theorem and the assumption that W(ϕ1, ..., ϕn) ≠ 0 on I. Clearly each ϕ_j solves the equation. Also, an expansion of W(x, ϕ1, ..., ϕn) in the first column shows that it is an nth order ODE of the desired type, i.e. Σ a_j x^{(j)} = 0 with a_n = 1. Uniqueness comes from the fact that the ϕ's are a basis for the solution space, and so the coefficient matrix is determined uniquely (Φ̇(t) = A(t)Φ(t) ⇒ A(t) = Φ̇(t)Φ^{−1}(t)).
Theorem 6. (Nonhomogeneous solutions with Wronskians)
Suppose {ϕ_j(t)}_{j=1}^n are n LI solutions to the system L_n y = 0. Then a solution to the nonhomogeneous system [L_n y = b(t), y(τ) = ξ] is ψ(t) = ψ_h(t) + Σ_{k=1}^n ϕ_k(t) ∫_τ^t [W_k(ϕ1, ..., ϕn)(s)/W(ϕ1, ..., ϕn)(s)] b(s)ds, where ψ_h is the unique solution to the homogeneous system with the same initial condition and W_k is the same as W except the kth column is replaced by [0, ..., 0, 1]^T.
Depends on: previous theorem on nonhomogeneous linear ODE, properties of Wronskians
Proof idea: This is just reinterpreting L_n y = b(t) as a system of ODE's in n dimensions, applying the theorem for nonhomogeneous linear ODE, and using Cramer's representation of the matrix inverse to write the coefficients using the Wronskian.
4.3 Important examples:
1.
2.
3.
5 Stability

Theorem 1. (Stability of solutions to constant coeff linear systems)
Consider the 0 solution to the ODE [ẏ = Ay] for some matrix A ∈ R^{n×n} with eigenvalues {λ_j}_{j=1}^n. Then:
1. ℜλk < 0 ∀k ⇒ the 0 solution is asymptotically stable
2. ℜλk ≤ 0 ∀k and λk with 0 real part are nondefective ⇒ the 0 solution
is stable
3. ∃k s.t. ℜλk > 0 or a defective λk with 0 real part ⇒ the 0 solution is
unstable
Depends on: Jordan form and solution to constant coeff linear systems
Proof idea: This follows directly from the fact that the solution is of the form ϕ(t) = Se^{tJ}S^{−1}ξ where A = SJS^{−1}. Clearly in the first case ‖e^{tJ}‖ → 0, in the second case it stays bounded, and in the last case it diverges.
Theorem 2. (Stability of solutions to perturbed constant coeff linear systems)
Consider the 0 solution to the ODE [ẏ = Ay + B(t)y + f(t, y)] for some matrix A ∈ R^{n×n} with eigenvalues {λ_j}_{j=1}^n. Suppose the following hold:
1. ‖B(t)‖ → 0 as t → ∞
2. |f(t, y)|/|y| → 0 as |y| → 0, uniformly in t
Then if ℜλ_k < 0 ∀k, the 0 solution is asymptotically stable. Furthermore, convergence is exponential, i.e. ∃C, δ, µ s.t. |y(t0)| < δ ⇒ |y(t)| ≤ Ce^{−µ(t−t0)} ∀t ≥ t0.
Depends on: Previous theorem, Duhamel-type formula, Gronwall's Lemma
Proof idea: By the assumptions we can show that ‖e^{tA}‖ ≤ Ce^{−µt} for some µ > 0 with ℜλ_k < −µ ∀k. We can write the solution as:
ϕ(t) = e^{(t−τ)A}ϕ(τ) + ∫_τ^t e^{(t−s)A}[B(s)ϕ(s) + f(s, ϕ(s))]ds.
Choose T s.t. t ≥ T ⇒ ‖B(t)‖ ≤ ε, and choose η s.t. |x| < η ⇒ |f(t, x)| ≤ ε|x|. Take the absolute value of the above expression and bound the bracketed part of the integrand by 2ε|ϕ(s)|. Then apply Gronwall's lemma to the function g(t) = e^{µ(t−τ)}|ϕ(t)| to get: |ϕ(t)| ≤ C|ϕ(τ)|e^{(2εC−µ)(t−τ)}.
Remark: Any solution of the ODE thus goes to 0 exponentially, i.e. lim sup_{t→∞} (log|ϕ(t)|)/t ≤ −µ.
Theorem 3. (Stability of solutions to periodic coeff linear systems)
Consider the 0 solution to the ODE [ẏ = A(t)y] for some periodic matrix A(t) ∈ R^{n×n}. The same exact stability results as above hold if we substitute the Floquet exponents for the eigenvalues of the coeff matrix.
Depends on: Previous theorem, change of variable
Proof idea: We reduce the ODE to a linear constant coeff ODE by guessing a solution of the form y(t) = P(t)x(t), where Φ(t) = P(t)e^{tR} is the decomposition of the f.m. for the original ODE. The result is the ODE [ẋ = Rx], and we apply the above theorem.
Theorem 4. (Stability of solutions to perturbed periodic coeff linear systems)
Consider the 0 solution to the ODE [ẏ = A(t)y + B(t)y + f(t, y)] where the matrix A(t) ∈ R^{n×n} is T-periodic and the same assumptions on B(t) and f(t, y) hold. Then if the characteristic exponents of the system all have negative real part, the 0 solution is asymptotically stable.
Depends on: Previous theorem, decomposition of the fundamental matrix for periodic systems, change of variables
Proof idea: Let Φ(t) = P(t)e^{tR} be the decomposed fundamental matrix for the linear system. Guess a solution y(t) = P(t)x(t) for the nonlinear system. Substituting it in and manipulating (product rule etc.) gives an ODE for x(t) of the form: ẋ = Rx + P^{−1}(t)F(t, P(t)x(t)). Apply the previous theorem.
Theorem 5. (Instability of solutions to perturbed constant coeff linear systems)
Consider the 0 solution to the ODE [ẏ = Ay + B(t)y + f(t, y)] for some matrix A ∈ R^{n×n} with eigenvalues {λ_j}_{j=1}^n, with the same restrictions on B, f. Then if ∃k s.t. ℜλ_k > 0, the 0 solution is unstable.
Depends on: dependencies...
Proof idea: Factor A = SJS^{−1} and take y = S^{−1}x. Then ẏ = Jy + S^{−1}F(t, Sy). Let R^2 and ρ^2 be the sums of squared norms of the components of y corresponding to positive-real-part evals and negative-real-part evals, respectively. Show that Ṙ ≥ (σ/2)R − εR and ρ̇ ≤ ε(R + ρ) ⇒ d/dt(R − ρ) ≥ (σ/4)(R − ρ), which implies exponential growth and thus instability.
Theorem 6. (Relationship between stability of a periodic solution and linearization about the solution)
Consider the ODE [ẋ = F(t, x)] with F being T-periodic. Suppose ϕ(t) is a T-periodic solution. Now consider the linearization about ϕ(t), i.e. [ẏ = (dF/dx)(t, ϕ(t))y + g(t, y)] (a periodic coefficient linear system with O(|y|^2) nonlinear term). Then:
1. If the n characteristic exponents of the linear system have negative real part, then ϕ(t) is Lyapunov asymptotically stable.
2. If F = F(x) is independent of t (autonomous system) and n − 1 char exponents have negative real part, then ϕ(t) is orbitally asymptotically stable.
Depends on: First variation and application of the previous theorem on asymptotic stability
Proof idea:
1. We see this by applying the previous theorem to the linearization (first variation) about ϕ(t), which gives that 0 is an asymptotically stable solution of the first-variation system. In other words, the difference between another solution and ϕ(t) evolves according to the first variation and thus goes to 0 asymptotically, i.e. ϕ(t) is asymptotically Lyapunov stable.
2. Since the system is autonomous, we know that ϕ̇(t) is a periodic solution of the first variation. Denote by Φ(t) a fundamental matrix for the first-variation system. Since Φ(t + T) is also a fundamental matrix, it is related to Φ(t) by a nonsingular matrix whose first column is e_1 = [1, 0, ..., 0]^T. Thus 1 is an eigenvalue of this matrix, i.e. 0 is a characteristic exponent. Clearly we cannot get Lyapunov asymptotic stability since ϕ(t + δ) is a solution which does not converge to ϕ(t). However, we can get asymptotic orbital stability, and even a stronger result:
∃C s.t. |ϕ(t) − x(t + C)| → 0 as t → ∞.
The actual proof is long and complicated; see Coddington pp. 323-327.
Theorem 7. (Lyapunov functions and stability)
Consider the ODE [ẏ = f(t, y)] with f(t, 0) = 0. Then:
1. If we can find a positive definite function V(t, x) (i.e. ∃W(x) cnts s.t. V(t, x) ≥ W(x) > 0 ∀x ∈ D\{0} and t ≥ t0) s.t. ∃ a neighborhood U of 0 on which (L_t V)(t, x) ≡ ∂V/∂t + f(t, x)·∂V/∂x ≤ 0, then 0 is a stable solution.
2. If strict inequality holds (L_t V < 0 on U), then 0 is asymptotically stable.
3. If L_t V is positive definite on U, then 0 is unstable.
Depends on: dependencies...
Proof idea:
1. By assumption V(t, x) ≥ m > 0 on some annulus B(0, R)\B(0, r) centered at 0, and there is a δ s.t. 0 ≤ V(t, x) ≤ m/2 on B(0, δ). If a solution x(t) starts in the δ-ball at t = t0, then V(t, x(t)) − V(t0, x(t0)) ≤ 0 by integrating the assumption on L_t V ⇒ V(t, x(t)) ≤ m/2 ⇒ x(t) ∈ B(0, r). But r is arbitrarily small, so 0 is stable.
2. Suppose there is a trajectory x(t) starting in B(0, a) at time t0 that does not tend to 0. By the previous step there is a δ s.t. once x(t) enters B(0, δ) it can never leave, so we may assume |x(t)| > δ ∀t ≥ t0. By integrating the assumption, V(t, x(t)) − V(t0, x(t0)) < −µ(t − t0) where L_t V < −µ < 0 on the region, which eventually contradicts the positive definiteness of V.
3. Similar argument to the previous step. Take an annulus around 0 (with arbitrarily small outer ring) where V(t, x) is bounded below and above by positive values. Take a trajectory starting in the annulus and integrate out the assumption to get that V(t, x(t)) → +∞.
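A grid check of a strict Lyapunov function (an illustrative, hypothetical system, not from the notes): for ẋ = −y − x^3, ẏ = x − y^3, the candidate V(x, y) = x^2 + y^2 gives L_t V = 2x·ẋ + 2y·ẏ = −2(x^4 + y^4) < 0 away from 0, so item 2 applies:

```python
# Hypothetical planar system ẋ = -y - x³, ẏ = x - y³ with candidate V(x, y) = x² + y².
# Along trajectories L_t V = 2x·ẋ + 2y·ẏ = -2(x⁴ + y⁴), strictly negative away from 0.
grid = [i / 10.0 - 1.0 for i in range(21)]  # lattice on [-1, 1]²
worst = max(2 * x * (-y - x ** 3) + 2 * y * (x - y ** 3)
            for x in grid for y in grid if (x, y) != (0.0, 0.0))
```

The maximum of L_t V over the punctured grid is strictly negative, consistent with asymptotic stability of 0.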
Theorem 9. (Poincare-Bendixson theorem)
Consider the autonomous 2D system [ẋ = f(x)] with f : R^2 → R^2 C^1 on an open set M ⊆ R^2. The asymptotic behavior of the trajectory starting from any point x ∈ M must be one of 3 cases: it either tends to a fixed point, is a periodic orbit, or tends to a periodic orbit.
More formally, let Φ(x, t) : M × R → R^2 be the location of the trajectory starting at x after time t. Let ω(x) = {y ∈ M : ∃(t_n) → ∞ s.t. Φ(x, t_n) → y} (i.e. the set of points 'tended to' by the trajectory). Suppose we are given x ∈ M with ω(x) compact and nonempty. Then if ω(x) does not contain fixed points, it is exactly a periodic orbit in M.
Depends on: lots of things to do with planar geometry
Proof idea: The general approach is as follows:
1. For a given x ∈ M, let T be the trajectory starting from x. Then ω(x) can intersect any transversal of T not more than once.
2. Any ω-limit-point of an ω-limit point lies on a periodic orbit
3. If ω(x) contains a nondegenerate periodic orbit P , then ω(x) = P .
5.1 Important examples
1. A perturbed constant coeff linear system is given by x′′ + x + µx′ + x^2 = 0 where µ is a 'damping term'. Then the nonlinear term is just [x^2, 0]^T and the matrix in the linear term has eigenvalues (−µ ± √(µ^2 − 4))/2, which is ±i for µ = 0.
2. Consider the orbital stability theorem on the 2D ODE system defined by the equation x′′ + f(x)x′ + g(x) = 0, assuming there is some T-periodic solution ϕ(t). The system is autonomous and we know that the sum of the char. exponents λ1 + λ2 = −(1/T)∫_0^T f(ϕ(s))ds and λ1 = 0. It follows that if ∫_0^T f(ϕ(s))ds > 0, then ϕ(t) is asymptotically orbitally stable.
6 Perturbed systems
Theorem 1. (How different is a solution of a 'perturbed' system?)
Consider the perturbed system [ẋ = f_0(t, x) + εf_1(t, x) + ... + ε^m f_m(t, x) + ε^{m+1}R(t, x)] with x(t0) = η. Suppose that f_i is cnts in t and C^{m+1−i} in x for 1 ≤ i ≤ m, and that R is cnts in both arguments. Then:
|x − (x_0(t) + ... + ε^m x_m(t))| ≤ Cε^{m+1} for t ∈ [t0, t0 + h] (where C may depend on h), where x = x_0 + εx_1 + ... .
Depends on: Gronwall's lemma, Duhamel's formula
Proof idea: ẋ_i can be derived by substituting the ε-expansion of x(t) in the original ODE and equating powers of ε. This gives a Duhamel formula for each x_i. To apply Gronwall's lemma to the expression to be bounded, subtract the sum of these Duhamel expressions from that of x(t). Bound the integral using the smoothness assumptions and apply Gronwall.
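The ε-expansion can be illustrated on a solvable example (an assumption, not from the notes): ẋ = −x + εx^2, x(0) = η, a Bernoulli equation with exact solution x(t) = 1/(ε + (1/η − ε)e^t). Matching powers of ε gives x_0 = ηe^{−t} and x_1 = η^2(e^{−t} − e^{−2t}), and the error of the first-order expansion should scale like ε^2:

```python
import math

eta = 0.5  # hypothetical initial condition x(0) = η

def x_true(t, eps):
    # ẋ = -x + ε x², x(0) = η: Bernoulli equation, u = 1/x gives u' = u - ε
    return 1.0 / (eps + (1.0 / eta - eps) * math.exp(t))

def x0(t):  # O(1) term: ẋ₀ = -x₀, x₀(0) = η
    return eta * math.exp(-t)

def x1(t):  # O(ε) term: ẋ₁ = -x₁ + x₀², x₁(0) = 0
    return eta ** 2 * (math.exp(-t) - math.exp(-2.0 * t))

# max error of x₀ + ε x₁ over t ∈ [0, 1], for two values of ε
errs = [max(abs(x_true(t, eps) - (x0(t) + eps * x1(t)))
            for t in [0.01 * i for i in range(101)])
        for eps in (0.1, 0.05)]
```

Halving ε roughly quarters the error, the O(ε^2) behavior claimed by the theorem with m = 1.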
Theorem 2. (How different is a solution of a 'perturbed' nonautonomous system with periodic solution?)
Consider the perturbed system [ẋ = g(t, x) + εh(t, x) = f(t, x, ε)] with g, h T-periodic and f cnts in all arguments, Lipschitz in x. Suppose that the nonperturbed system has a T-periodic solution p(t). If the first variation of the non-perturbed system has no T-periodic solution, then the perturbed system has a T-periodic solution for ε small.
Depends on:
Proof idea:
Theorem 3. (How different is a solution of a 'perturbed' autonomous system with periodic solution?)
Consider the perturbed system [ẋ = g(x) + εh(x, ε) = f(x, ε)] with f cnts in (x, ε) and C^1 in x. Suppose that the nonperturbed system has a T-periodic solution p(t). If 1 is a simple Floquet multiplier of the first variation of the nonperturbed system, there exists a periodic solution of the perturbed system with period T(ε).
Depends on:
Proof idea: