Chapter 2. Existence Theory and Properties of Solutions
This chapter contains some of the most important results of the course. Our first goal is to
prove a theorem that guarantees the existence and uniqueness of a solution to an initial value
problem on some, possibly small, interval. We then investigate the issue of how large this
interval might be. The last section of the chapter provides some insight into how a solution
of an initial value problem changes when the differential equation or initial conditions are
altered.
2.1 Introduction
Consider an nth order differential equation in the form

y^(n) = g(t, y, y′, y″, · · · , y^(n−1)).
It is standard practice to convert such an nth order equation into a first order system by defining

x1 = y
x2 = y′
⋮
xn = y^(n−1).
We will denote vectors in R^n by x = (x1, · · · , xn) so that our scalar equation is now represented in vector form as

dx/dt = x′(t) = (x2, x3, · · · , g(t, x1, x2, · · · , xn)) = f(t, x(t)).
Consequently it suffices to focus upon first order ordinary differential equations

x′(t) = f(t, x(t))    (2.1.1)
where x ∈ R^n and f(t, x) ∈ R^n is defined on an open set U ⊆ R × R^n. A solution of (2.1.1) is a differentiable function

ξ : J → R^n

where J is an open interval in R such that for t ∈ J, (t, ξ(t)) ∈ U, and

ξ′(t) = f(t, ξ(t)).
A solution ξ(t) of the initial value problem (IVP)

x′(t) = f(t, x(t)),  x(t0) = x0    (2.1.2)

is a solution of the differential equation (2.1.1) that also satisfies the initial condition ξ(t0) = x0.
Example 2.1.1
Recall from Example (1.2.3) that the IVP

x′ = x/t + t = f(t, x),  x(0) = x0

has infinitely many solutions if x0 = 0 and no solution if x0 ≠ 0. This suggests that continuity of f(t, x) would be a minimal condition to ensure existence of a solution to an IVP.
Example 2.1.2
Consider

x′ = f(t, x) = x^{1/3}.

By separation of variables we get the family of solutions

ξ(t) = ((2/3)(t + c))^{3/2}.

Now consider the IVP

x′ = x^{1/3},  x(0) = 0.
For each c > 0 we obtain a solution ξc where

ξc(t) = ((2/3)(t − c))^{3/2} for t ≥ c,  and  ξc(t) = 0 for t ≤ c.

Thus we see that continuity of f(t, x) is not enough to ensure uniqueness.
Fig. 2.1.1. There are infinitely many solutions to the IVP in Example 2.1.2.
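As a quick numerical sanity check (not part of the text; the grid points, step size, and tolerance are my own choices), one can verify by difference quotients that each member of the family ξc from Example 2.1.2 really does solve x′ = x^{1/3}, so the IVP x′ = x^{1/3}, x(0) = 0 has many distinct solutions:

```python
def xi(t, c):
    """The solution xi_c(t) of Example 2.1.2: ((2/3)(t - c))^(3/2) for t >= c, else 0."""
    return ((2.0 / 3.0) * (t - c)) ** 1.5 if t >= c else 0.0

def check_solution(c, ts, h=1e-6, tol=1e-4):
    """Compare a centered difference quotient of xi_c with f(x) = x^(1/3)."""
    for t in ts:
        lhs = (xi(t + h, c) - xi(t - h, c)) / (2.0 * h)   # approximate xi_c'(t)
        rhs = xi(t, c) ** (1.0 / 3.0)                      # f(t, xi_c(t))
        if abs(lhs - rhs) > tol:
            return False
    return True

# Two different values of c give two different solutions of the same IVP
# x' = x^{1/3}, x(0) = 0: continuity of f alone does not give uniqueness.
print(check_solution(1.0, [0.5, 2.0, 3.0]))   # True
print(check_solution(2.0, [0.5, 3.0, 4.0]))   # True
```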
Our goal is to prove that under appropriate hypotheses on f(t, x), the initial value problem (2.1.2) has a solution defined on an interval (t0 − ε, t0 + ε), and any two such solutions must agree on their common domain. The above examples suggest that an appropriate notion of smoothness must be assumed of f(t, x). To describe the regularity that will be required we need to introduce some terminology. For x ∈ R^n we denote the sup, or l∞, norm by

|x| = max_{1≤i≤n} |xi|.
Let (X, d) be a metric space and denote the open ball of radius r around x0 by

Br(x0) = {x | d(x, x0) < r}.

B̄r(x0) will denote the closed ball {x | d(x, x0) ≤ r}. Let Jε(t) = (t − ε, t + ε) ⊂ R and assume that f(t, x) : U ⊆ R × R^n → R^n. The existence and uniqueness results we prove are obtained by assuming that f(t, x) satisfies a Lipschitz condition. More precisely, we say that f(t, x) is locally Lipschitz with respect to x if for any (t0, x0) ∈ U there exist L ≥ 0 and ε > 0 so that Jε(t0) × Bε(x0) ⊆ U and

|f(t, x) − f(t, y)| ≤ L|x − y|  for t ∈ Jε(t0) and x, y ∈ Bε(x0).

It is easily verified that if f(t, x) is continuous and the partial derivatives ∂fi/∂xj exist and are continuous on U, then f is locally Lipschitz with respect to the second variable. Previously we saw that the IVP of Example (2.1.2) has infinitely many solutions. Note that the function f(x) = x^{1/3} is not Lipschitz at the origin.
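The failure of the Lipschitz condition at the origin is easy to see numerically (an illustrative check of my own, not from the text): the difference quotient |f(x) − f(0)|/|x − 0| = x^{−2/3} is unbounded as x → 0⁺, so no single constant L can work on any ball around 0.

```python
# Difference quotients of f(x) = x^{1/3} against the base point 0.
# Quotient at x = 10^{-k} is (10^{-k})^{1/3} / 10^{-k} = 10^{2k/3}, which blows up.
quotients = [(10.0 ** -k) ** (1.0 / 3.0) / (10.0 ** -k) for k in range(1, 7)]

print(quotients[0] < quotients[-1])   # True: the quotients grow as x -> 0+
print(quotients[-1] > 1e3)            # True: already 10^4 at x = 10^{-6}
```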
The notion of a contractive mapping is central to many existence arguments. If α satisfies 0 < α < 1 and T : X → X is a mapping, we say T is an α-contraction if

d(T(x), T(y)) ≤ α d(x, y)  for all x, y ∈ X.

If T(p) = p, we call p a fixed point of T. We will denote iterates of T by

T⁰(x) = x, T¹(x) = T(x), T²(x) = T(T¹(x)), · · · , T^n(x) = T(T^{n−1}(x)).
The following lemma is crucial.
Lemma 2.1.1
[Contraction Mapping Lemma]. Let (X, d) be a complete metric space
and T : X → X an α-contraction. Then T has a unique fixed point p. In fact, for any
x ∈ X, the iterates T n (x) converge to p.
Proof.
Define f : X → [0, ∞) by f(x) = d(T(x), x). In other words, f(x) is the distance T moves x. Note that f(p) = 0 if T(p) = p, and observe that f is continuous. Indeed,

f(x) = d(x, T(x))
     ≤ d(x, y) + d(y, T(y)) + d(T(y), T(x))
     ≤ d(x, y) + f(y) + α d(x, y),

and so

f(x) − f(y) ≤ (1 + α) d(x, y).

Interchanging x and y we see

|f(x) − f(y)| ≤ (1 + α) d(x, y).
There are two inequalities satisfied by f. First,

f(T(x)) = d(T(T(x)), T(x)) ≤ α d(T(x), x) = α f(x).    (2.1.3)
For the second inequality note that for x, y ∈ X,

d(x, y) ≤ d(x, T(x)) + d(T(x), T(y)) + d(T(y), y)
        ≤ f(x) + α d(x, y) + f(y),

and so

d(x, y) ≤ (f(x) + f(y)) / (1 − α).    (2.1.4)
Now let x0 be any point in X and xn = T^n(x0). Then from (2.1.3)

f(xn) ≤ α^n f(x0)

and so f(xn) → 0 as n → ∞. It follows from (2.1.4) that for any n, m

d(xn, xm) ≤ (f(xn) + f(xm)) / (1 − α).

For n, m sufficiently large we can make the right hand side as small as we like, and hence {xn} is a Cauchy sequence. Since X is complete, there exists a p ∈ X such that xn → p. Because f is continuous, f(xn) → f(p), and so f(p) = 0, i.e., p is a fixed point of T.

To show uniqueness suppose q is another fixed point. Then f(q) = 0 and from (2.1.4) we see d(p, q) = 0.
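A concrete instance of the lemma (my own illustration, not from the text): on the complete metric space X = [0, 1] with the usual metric, T(x) = cos(x) is an α-contraction with α = sin(1) < 1 by the mean value theorem, so the iterates T^n(x0) converge to the unique fixed point p = cos(p) from any starting point.

```python
import math

def iterate(T, x0, n):
    """Return the n-th iterate T^n(x0)."""
    x = x0
    for _ in range(n):
        x = T(x)
    return x

p = iterate(math.cos, 0.5, 200)       # well past convergence
print(abs(math.cos(p) - p) < 1e-12)   # True: p is (numerically) a fixed point

# The limit is independent of the starting point, as the lemma asserts.
q = iterate(math.cos, 1.0, 200)
print(abs(p - q) < 1e-12)             # True
```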
2.2 Existence and Uniqueness of Solutions
It turns out that continuity of f(t, x) is sufficient to guarantee existence of a solution to the IVP

x′(t) = f(t, x(t)),  x(t0) = x0.
This result is referred to as Peano’s Theorem. Example (2.1.2) in the previous section showed
that we need additional hypotheses on f (t, x) to ensure uniqueness. The condition we need
is Lipschitz continuity.
The next theorem is a first form of our Existence and Uniqueness Theorem.
Theorem 2.2.1
Let f : U ⊆ R × R^n → R^n, U open and f(t, x) continuous and locally Lipschitz with respect to the second variable. The following two statements hold.

(1) Select (t0, x0) ∈ U. For all ε > 0 sufficiently small there is a differentiable function ξ : (t0 − ε, t0 + ε) → R^n such that

(t, ξ(t)) ∈ U,        t ∈ Jε(t0),
ξ′(t) = f(t, ξ(t)),  t ∈ Jε(t0),    (2.2.1)
ξ(t0) = x0.

That is, ξ is a solution of the initial value problem.

(2) If ξ1 : Jε1(t0) → R^n and ξ2 : Jε2(t0) → R^n are two differentiable functions that satisfy (2.2.1), then ξ1 and ξ2 agree on some open interval around t0.
First we need the following lemma.
Lemma 2.2.1
A function ξ : Jε(t0) → R^n is differentiable and satisfies (2.2.1) if and only if for t ∈ Jε(t0), (t, ξ(t)) ∈ U, ξ is continuous, and ξ satisfies

ξ(t) = x0 + ∫_{t0}^{t} f(s, ξ(s)) ds,  t ∈ Jε(t0).    (2.2.2)
Proof.
Let ξ be a differentiable function that satisfies (2.2.1). Since ξ′(t) = f(t, ξ(t)) and f is continuous, ξ′ is continuous. Thus by the Fundamental Theorem of Calculus

∫_{t0}^{t} ξ′(s) ds = ξ(t) − ξ(t0) = ξ(t) − x0 = ∫_{t0}^{t} f(s, ξ(s)) ds

and so ξ(t) satisfies (2.2.2).
Conversely suppose ξ satisfies the conditions of the second part of the lemma. Then clearly ξ(t0) = x0, and by the Fundamental Theorem of Calculus,

ξ′(t) = f(t, ξ(t)).

Thus ξ is differentiable and satisfies the IVP.
The proof of Theorem (2.2.1) is based on the Contraction Mapping Lemma, where the underlying metric space will be a closed subset of BC(Jε(t0); R^n), the space of bounded continuous functions

ξ : Jε(t0) → R^n,

where for ξ1, ξ2 ∈ BC(Jε(t0)),

d(ξ1, ξ2) = sup_{t∈Jε(t0)} |(ξ1 − ξ2)(t)| = ||ξ1 − ξ2||.
Proof of Theorem 2.2.1.
Since f is locally Lipschitz with respect to the second variable, we can find an r > 0 such that [t0 − r, t0 + r] × B̄r(x0) ⊂ U and

|f(t, x) − f(t, y)| ≤ L|x − y|  for all (t, x), (t, y) ∈ [t0 − r, t0 + r] × B̄r(x0).
Since [t0 − r, t0 + r] × B̄r(x0) is compact and f is continuous, there exists an M for which

|f(t, x)| ≤ M  for all (t, x) ∈ [t0 − r, t0 + r] × B̄r(x0).

Choose ε > 0 so small that

ε < r,  εM < r,  εL < 1.
Let X ⊂ BC(Jε(t0); R^n) be the space of continuous functions

ξ : Jε(t0) → B̄r(x0).

Then X is a closed subset of BC(Jε(t0); R^n) and hence is complete. Note that if ξ ∈ X, t ∈ Jε(t0), and ε < r, then we certainly have (t, ξ(t)) ∈ Jε(t0) × B̄r(x0) ⊆ U. For ξ ∈ X, define Tξ on Jε(t0) by

Tξ(t) = x0 + ∫_{t0}^{t} f(s, ξ(s)) ds.
By the Fundamental Theorem of Calculus Tξ is continuous and

|Tξ(t) − x0| = |∫_{t0}^{t} f(s, ξ(s)) ds| ≤ |∫_{t0}^{t} |f(s, ξ(s))| ds| ≤ M|t − t0| < εM < r.

Hence Tξ(t) ∈ Br(x0) and so Tξ ∈ X. Thus T : X → X.
We now show T is a contraction. If ξ, ζ ∈ X,

|Tξ(t) − Tζ(t)| = |x0 + ∫_{t0}^{t} f(s, ξ(s)) ds − x0 − ∫_{t0}^{t} f(s, ζ(s)) ds|
              ≤ |∫_{t0}^{t} |f(s, ξ(s)) − f(s, ζ(s))| ds|
              ≤ L |∫_{t0}^{t} |ξ(s) − ζ(s)| ds|
              ≤ L |∫_{t0}^{t} ||ξ − ζ|| ds|
              = L|t − t0| ||ξ − ζ||
              ≤ εL ||ξ − ζ||.
Thus

sup_{t∈Jε(t0)} |Tξ(t) − Tζ(t)| = ||Tξ − Tζ|| ≤ εL ||ξ − ζ||

and since εL < 1, T is a contraction. Hence T has a fixed point and so there exists ξ ∈ X such that

Tξ(t) = ξ(t) = x0 + ∫_{t0}^{t} f(s, ξ(s)) ds.

By Lemma (2.2.1), this ξ(t) is a solution of (2.2.1).
Before proceeding to the proof of the second statement, note that given (t0, x0) we first choose r such that Kr = [t0 − r, t0 + r] × B̄r(x0) ⊂ U. Once we select ε so that ε < r, εM < r, εL < 1, we can consider the set X ⊆ BC(Jε(t0); R^n) in which T has a fixed point. In this sense the set X may be regarded as a one-parameter family X(ε), and the fixed point, though unique in X(ε), does depend on ε.

To prove the second statement of the theorem suppose ξ1, ξ2 are two solutions of the IVP. The intersection of their domains is an open interval, say (t0 − ε, t0 + ε). Since ξ1(t0) = ξ2(t0) = x0 and ξ1, ξ2 are continuous, we can select ε such that ξ1, ξ2 : Jε(t0) → B̄r(x0). We can further decrease ε if necessary so that ε < r, εM < r and εL < 1. With this choice of ε, we then get that ξ1, ξ2 ∈ X(ε), and since T : X(ε) → X(ε) has a unique fixed point,

ξ1(t) = ξ2(t),  t ∈ Jε(t0).
In summary, we have that for (t0, x0) ∈ U there exists ε > 0 such that the IVP has a solution ξ defined on (t0 − ε, t0 + ε), provided ε < r, εM < r, εL < 1. Note that in showing Tξ(t) ∈ B̄r(x0) we showed all iterates satisfy

|Tξ − x0| ≤ M|t − t0| < Mε.

In particular the graph of the solution to (2.2.1) lies in the region R as depicted in the figure below. Note that if M is large, the graph of the solution may escape the set Kr unless the domain of the solution is restricted as required by the condition εM < r.
Fig. 2.2.1. The graph of T ξ(t).
An improved statement of Theorem 2.2.1 constitutes our main Existence and Uniqueness Theorem. Note that this is a 'local' result in that the time interval on which the solution exists may be small.
Theorem 2.2.2 [Existence and Uniqueness]
Assume f : U ⊆ R × R^n → R^n is continuous and locally Lipschitz with respect to the second variable. If (t0, x0) ∈ U, then there is an ε > 0 such that the IVP

x′ = f(t, x),  x(t0) = x0

has a unique solution on the interval (t0 − ε, t0 + ε).
Proof.
We know that for all sufficiently small ε, the initial value problem has a solution. We need only prove that if ξ1, ξ2 satisfy the IVP on Jε(t0), then ξ1 = ξ2 on Jε(t0).

Let S = {t ∈ (t0 − ε, t0 + ε) | ξ1(t) = ξ2(t)}. S is not empty since ξ1(t0) = ξ2(t0). Since ξ1 and ξ2 are continuous, S is closed in Jε(t0). Let t̂ ∈ S and x̂ = ξ1(t̂) = ξ2(t̂). Then ξ1, ξ2 solve the IVP with initial condition (t̂, x̂). By part (2) of Theorem 2.2.1, ξ1 and ξ2 agree on an open interval J containing t̂, and J ⊆ S. Hence S is open and closed, and since Jε(t0) is connected, S = Jε(t0).
The proof of Theorem (2.2.1) can be used to obtain a sequence of approximations that converge to the solution of the IVP. It is customary to begin the iteration process with x0(t) = x0. Then

x1(t) = Tx0 = x0 + ∫_{t0}^{t} f(s, x0) ds

x2(t) = T²x0 = x0 + ∫_{t0}^{t} f(s, x1(s)) ds

⋮

xn(t) = T^n x0 = x0 + ∫_{t0}^{t} f(s, xn−1(s)) ds.
From our results we know that {xn(t)} converges to a solution of the IVP in some neighborhood of t0. This sequence of approximate solutions is known as the Picard iterates. The usefulness of approximating a solution by this procedure has been somewhat enhanced by the availability of computer algebra systems such as Maple and Mathematica.
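The Picard iterates can also be computed numerically. The sketch below (an illustration of my own; the test equation, grid size, and trapezoid quadrature are assumptions, not from the text) applies the scheme to x′ = x, x(0) = 1, whose iterates are the Taylor partial sums 1 + t + t²/2! + · · · + t^n/n! converging to e^t:

```python
import math

def picard(f, t0, x0, t_end, n_iter, n_grid=2000):
    """Picard iteration on a uniform grid, integrals by the trapezoid rule."""
    ts = [t0 + (t_end - t0) * k / n_grid for k in range(n_grid + 1)]
    xs = [x0] * (n_grid + 1)               # x_0(t) = x0, the customary start
    for _ in range(n_iter):
        new = [x0]
        acc = 0.0
        for k in range(1, n_grid + 1):
            h = ts[k] - ts[k - 1]
            # accumulate int_{t0}^{t_k} f(s, x_{n-1}(s)) ds
            acc += 0.5 * h * (f(ts[k - 1], xs[k - 1]) + f(ts[k], xs[k]))
            new.append(x0 + acc)           # x_n(t_k) = x0 + integral
        xs = new
    return ts, xs

ts, xs = picard(lambda t, x: x, 0.0, 1.0, 1.0, 15)
print(abs(xs[-1] - math.e) < 1e-4)   # True: x_15(1) is already close to e
```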
2.3 Continuation and Maximal Intervals of Existence
Our existence theorem is local in nature in that it provides for the existence of a solution to the IVP

x′(t) = f(t, x(t)),  x(t0) = x0

defined on an interval (t0 − ε, t0 + ε).
Example 2.3.1
The solution of

x′ = x²,  x(0) = 1

is

x(t) = 1/(1 − t).

Here U = R × R. Note that the solution is defined for −∞ < t < 1. As t → 1⁻, the graph of x(t) leaves every closed and bounded subset of U. We will prove a theorem that reflects this general behavior. That is, the solution of an IVP can be defined on an interval (m1, m2) where either m2 = +∞ or the graph of the solution escapes every closed and bounded subset of U as t → m2⁻ (and similarly for m1).
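The finite-time blow-up of Example 2.3.1 is easy to observe numerically (an illustrative sketch of my own; explicit Euler steps of size 10⁻⁵ are an arbitrary choice). Since the exact solution is x(t) = 1/(1 − t), the computed solution should track it and then exceed any bound as t approaches the endpoint t = 1 of the maximal interval:

```python
def euler(f, t0, x0, t_end, h):
    """Explicit Euler integration of x' = f(t, x) from t0 to t_end."""
    t, x = t0, x0
    while t < t_end:
        x += h * f(t, x)
        t += h
    return x

x_09 = euler(lambda t, x: x * x, 0.0, 1.0, 0.9, 1e-5)
print(abs(x_09 - 1.0 / (1.0 - 0.9)) < 0.1)   # True: close to the exact value 10

# Approaching t = 1, the solution leaves every bounded set:
x_0999 = euler(lambda t, x: x * x, 0.0, 1.0, 0.999, 1e-5)
print(x_0999 > 500)   # True (the exact value there is 1000)
```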
Throughout this section we suppose f : U ⊆ R × R^n → R^n, U open, and f(t, x) is continuous on U and locally Lipschitz in x. Suppose ξ(t) is a solution of

x′ = f(t, x),  x(t0) = x0

that is defined for γ < t < δ and (δ, ξ(δ⁻)) ∈ U. Now consider the IVP

x′ = f(t, x),  x(δ) = ξ(δ⁻).

We know this problem has a solution, say ψ(t), defined on δ ≤ t < δ + ε. Define

y(t) = ξ(t) for γ < t < δ,  and  y(t) = ψ(t) for δ ≤ t < δ + ε.
Clearly y(t) is continuous. Moreover,

y(t) = ξ(δ⁻) + ∫_{δ}^{t} f(s, ψ(s)) ds  for δ ≤ t < δ + ε

and

ξ(δ⁻) = x0 + ∫_{t0}^{δ} f(s, ξ(s)) ds.

Hence

y(t) = x0 + ∫_{t0}^{δ} f(s, ξ(s)) ds + ∫_{δ}^{t} f(s, ψ(s)) ds

or

y(t) = x0 + ∫_{t0}^{t} f(s, y(s)) ds,  δ ≤ t < δ + ε.
Since we clearly have

y(t) = x0 + ∫_{t0}^{t} f(s, y(s)) ds,  γ < t < δ,

it follows from Lemma (2.2.1) that

y′(t) = f(t, y(t)),  γ < t < δ + ε,
y(t0) = x0,

and so y(t) is a solution of the IVP that is defined on a larger interval.
The above process is referred to as continuation to the right. In the same way one could construct a continuation to the left. By our uniqueness result any extension of the solution from (γ, δ) to (γ − ε1, δ + ε2) is unique. The geometric interpretation of the continuation process is displayed in Figure 2.3.1.
Fig. 2.3.1. The continuation process.
Definition (2.3.1) Let ξ be a solution of an ordinary differential equation on an interval J. A function ξ̃ is called a continuation of ξ if

i) ξ̃ is defined on an interval J̃ where J ⊂ J̃,
ii) ξ̃ = ξ for t ∈ J, and
iii) ξ̃ satisfies the ordinary differential equation on J̃.
Theorem 2.3.1
Assume f : U ⊆ R × R^n → R^n, U open and f(t, x) continuous and locally Lipschitz with respect to the second variable. Then there exists a solution ξ(t) of the IVP

x′ = f(t, x),  x(t0) = x0

defined on an interval (m1, m2) with the property that if ψ is any other solution of the IVP, the domain of ψ is contained in (m1, m2).
Proof.
Let M denote the set of all intervals on which solutions of the IVP are defined. That M is not empty follows from the Existence Theorem. Let M1 be the set of all left hand endpoints of intervals in M and M2 the set of all right hand endpoints. Take

m1 = inf M1,  m2 = sup M2.
Pick any t̂ ∈ (m1, m2). Then there exists a solution of the IVP whose interval of definition includes t̂, say ξ̂. Define a solution ξ(t) on (m1, m2) by setting ξ(t̂) = ξ̂(t̂). By uniqueness it follows that ξ(t) is well defined and is a solution for all t ∈ (m1, m2).
The interval (m1 , m2 ) is called the maximal interval of existence corresponding to (t0 , x0 ).
Furthermore, the maximal interval must be open (verify this).
Example 2.3.2
Take U to be the right half plane and consider

x′(t) = (1/t²) cos(1/t),  x(t0) = x0.

Then x(t) = c − sin(1/t) and the IVP can be solved for any initial condition (t0, x0), t0 > 0. Note that the maximal interval of existence is (0, ∞) and lim_{t→0⁺} x(t) does not exist.
Example 2.3.3
Consider

x′ = −3x^{4/3} sin(t),  x(t0) = x0.

Solutions are x(t) ≡ 0 and x(t) = (c − cos t)⁻³, where c is determined by the initial data (t0, x0). Nontrivial solutions are defined on (−∞, ∞) only if |c| > 1. Thus the maximal interval of existence may depend on the initial conditions. Moreover, this example and Example (2.3.1) suggest that the graph of a solution tends to infinity at a finite endpoint of the maximal interval of existence. This is indeed the case when f(t, x) is bounded, but the complete story is a bit more involved. The next few theorems address this issue and clarify these suggestions.
Theorem 2.3.2
Assume f : U ⊆ R × R^n → R^n, U open and f(t, x) continuous and locally Lipschitz with respect to the second variable and bounded on U. If ξ(t) is a solution of the IVP

x′(t) = f(t, x),  x(t0) = x0,

defined for γ < t < δ, then the limits

lim_{t→γ⁺} ξ(t),  lim_{t→δ⁻} ξ(t)

exist. If (δ, ξ(δ⁻)), (γ, ξ(γ⁺)) ∈ U, then the solution can be extended to the right and left.
Proof.
Let B be a bound for |f| on U and let t1, t2 ∈ (γ, δ). Then

|ξ(t1) − ξ(t2)| ≤ |∫_{t2}^{t1} |f(s, ξ(s))| ds| ≤ B|t1 − t2|.

If we pick {tn} such that tn → δ⁻, then for any ε > 0,

|ξ(tn) − ξ(tm)| ≤ B|tn − tm| < ε

for all n, m sufficiently large. Hence {ξ(tn)} is Cauchy and so converges. The same estimate shows the limit is independent of the chosen sequence, and thus lim_{t→δ⁻} ξ(t) exists. An identical argument applies for lim_{t→γ⁺} ξ(t).

The second assertion follows immediately from the remarks preceding the definition of continuation.
Compare this theorem with the result of Example (2.3.2), in which f(t, x) = (1/t²) cos(1/t) was not bounded on U. As we observed, the solution did not have a limit at the left hand endpoint of its maximal interval of existence.
Theorem 2.3.3
Assume f : U ⊆ R × R^n → R^n, U open and f(t, x) continuous and locally Lipschitz with respect to the second variable and bounded on U. Let (m1, m2) denote the maximal interval of existence of the solution ξ of the IVP

x′ = f(t, x),  x(t0) = x0.

Then either m2 = ∞ or (m2, ξ(m2⁻)) is on the boundary of U. A similar statement holds for m1.
Proof.
Suppose m2 < ∞. From the previous theorem, ξ(m2⁻) exists, and if (m2, ξ(m2⁻)) ∈ U then the solution could be extended to the right, contradicting maximality. It must follow that (m2, ξ(m2⁻)) lies on the boundary of U. Similarly for m1.
Example 2.3.4
Reconsider the example

x′ = x²,  x(0) = 1.

Here U = R² and ξ(t) = 1/(1 − t). Define

UA = {(t, x) | |t| < ∞, |x| < A}.
The maximal interval of existence is (m1, m2) = (−∞, 1), and as t → m2⁻ the graph of the solution will always meet the boundary of UA, namely when t = 1 − 1/A.

In general, suppose f(t, x) is continuous and locally Lipschitz with respect to the second variable on all of R × R^n and the solution of an IVP has a maximal interval of existence (m1, m2) where m2 < ∞. One may modify the ideas in the previous example and apply Theorem (2.3.2) to conclude that as t → m2⁻ the graph of the solution always meets the boundary |x| = A of the set UA. Since A can be arbitrarily large, the following theorem must follow. (The details are left as an exercise.)
Corollary 2.3.1
Let U = R × R^n and (m1, m2) denote the maximal interval of existence of the IVP. If |m2| < ∞, then

lim_{t→m2⁻} |ξ(t)| = ∞.

(Similarly for m1.)
This corollary provides a method for determining when a solution is global, that is,
defined for all time t. In particular, if f (t, x) is defined on all of R × Rn , then a solution is
global if it does not blow up in finite time. These ideas are illustrated in the next examples.
Example 2.3.5
Consider the equation for the damped, nonlinear pendulum,

y″(t) + αy′ + sin y = 0,  α > 0,
y(0) = y0,  y′(0) = v0.

Rewrite the problem as a first order system with

x1 = y,  x2 = y′.

Then

d/dt (x1, x2) = (x2, −αx2 − sin x1) = f(x),  x(0) = (y0, v0).

Since the partial derivatives ∂fi/∂xj are continuous for all (x1, x2), f is locally Lipschitz. Hence for any initial conditions the IVP has a unique solution. We now show the solution is global, i.e., it exists for all t.
In a standard way, one first multiplies the equation by y′ to get

y′(y″ + αy′ + sin y) = 0

and

d/dt ((1/2)(y′)² − cos y) = −α(y′)² ≤ 0

or

d/dt ((1/2)(y′)² + (1 − cos y)) ≤ 0.

Thus

(1/2)(y′(t))² + (1 − cos y(t)) ≤ (1/2)(y′(0))² + (1 − cos y(0)) = (1/2)v0² + (1 − cos y0).

Let

1 − cos y0 + (1/2)v0² = (1/2)p0²,

and since 1 − cos y ≥ 0 we have

(1/2)(y′)² ≤ (1/2)p0²

or

|y′| ≤ |p0|.

Since

y(t) = y0 + ∫_{0}^{t} y′(s) ds,

it follows that

|y(t)| ≤ |y0| + |t| p0

and so |y(t)| < ∞ for all t.
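The energy estimate above can be checked numerically (a sketch of my own; the RK4 integrator, damping α = 0.25, step size, and time horizon are illustrative choices, not from the text). Along the computed solution, the velocity bound |y′(t)| ≤ p0 with p0² / 2 = E(0) should hold at all times:

```python
import math

def pendulum_rhs(state, alpha=0.25):
    """Right-hand side of the first order system: x1 = y, x2 = y'."""
    x1, x2 = state
    return (x2, -alpha * x2 - math.sin(x1))

def rk4(rhs, state, t_end, h=1e-3):
    """Classical fourth order Runge-Kutta from t = 0 to t = t_end."""
    for _ in range(int(t_end / h)):
        k1 = rhs(state)
        k2 = rhs((state[0] + h/2*k1[0], state[1] + h/2*k1[1]))
        k3 = rhs((state[0] + h/2*k2[0], state[1] + h/2*k2[1]))
        k4 = rhs((state[0] + h*k3[0], state[1] + h*k3[1]))
        state = (state[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
                 state[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    return state

def energy(state):
    """E = (1/2)(y')^2 + (1 - cos y), the quantity shown to be nonincreasing."""
    return 0.5 * state[1] ** 2 + (1.0 - math.cos(state[0]))

y0, v0 = 1.0, 2.0
p0 = math.sqrt(2.0 * energy((y0, v0)))
state = (y0, v0)
ok = True
for _ in range(50):                       # march out to t = 50 in unit blocks
    state = rk4(pendulum_rhs, state, 1.0)
    ok = ok and abs(state[1]) <= p0 + 1e-9
print(ok)   # True: the bound |y'| <= p0 holds over the whole run
```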
Example 2.3.6
Consider the IVP

x″ + α(x, x′)x′ + β(x) = u(t),  x(0) = x0,  x′(0) = v0,

where α, αx, αx′, β, β′ are continuous and α ≥ 0, zβ(z) ≥ 0. We will show that all solutions are global.
First, it is a straightforward matter to verify that the IVP has a local solution for any initial data. If we multiply the differential equation by ξ′(t), where ξ(t) denotes the solution, then

d/dt ((1/2)(ξ′)² + ∫_{0}^{ξ(t)} β(s) ds) = −α(ξ, ξ′)(ξ′)² + u(t)ξ′(t) ≤ uξ′ ≤ (1/2)(u² + (ξ′)²).

Since zβ(z) ≥ 0,

∫_{0}^{ξ} β(s) ds ≥ 0.

Call

F(t) = (1/2)(ξ′)² + ∫_{0}^{ξ(t)} β(s) ds.

Then

F(t) ≥ (1/2)(ξ′)²,

and from the above inequalities we see

F′(t) ≤ (1/2)((ξ′)² + u²) ≤ F(t) + (1/2)u²

or

F′(t) − F(t) ≤ (1/2)u².

Thus

d/dt (e⁻ᵗ F) ≤ (1/2) e⁻ᵗ u²

or

F(t) ≤ eᵗ F(0) + (1/2) eᵗ ∫_{0}^{t} e⁻ˢ u²(s) ds.

Thus we may write

(1/2)(ξ′)² ≤ F(t) ≤ G(t)

or

|ξ′(t)| ≤ H(t),

where G(t), H(t) are functions that are finite for all t. With this bound on the derivative we then get

|ξ(t)| ≤ |x0| + |∫_{0}^{t} |ξ′(s)| ds| < ∞  for all t.
The preceding examples and Theorem(2.3.3) are special cases of the next result.
Theorem 2.3.4
Assume f : U ⊆ R × R^n → R^n, U open and f(t, x) continuous and locally Lipschitz with respect to the second variable. Let ξ(t) be the solution of the IVP

x′ = f(t, x),  x(t0) = x0,

and (m1, m2) its maximal interval of existence. If m2 < ∞ and E is any compact subset of U, then there exists an ε > 0 such that (t, ξ(t)) is not in E if t > m2 − ε (and similarly for m1).
Proof.
Consider the closed set U^c = R^{n+1} − U and let d(E, U^c) = ρ > 0. Now let E* be the set of points within distance ρ/2 of E; then E* is a compact subset of U containing E.

We will assume that (t, ξ(t)) ∈ E for all t ∈ (m1, m2) and obtain a contradiction. To this end, choose M such that |f(t, x)| ≤ M for all (t, x) ∈ E* and select r < ρ/2. Pick any (t̃, x̃) ∈ E and let

Kr = J̄r(t̃) × B̄r(x̃).

Note that if (t, x) ∈ Kr, then max{|t − t̃|, |x − x̃|} ≤ r < ρ/2, and so Kr ⊂ E*. The IVP with initial data (t̃, x̃) has a unique solution that exists on an interval |t − t̃| < ε, where ε < r, εM < r, εL < 1 and L is a Lipschitz constant on the set E*. Moreover, the same M and L will work for any (t̃, x̃) since Kr ⊂ E*. Now select t̂ ∈ (m2 − ε, m2). Then (t̂, ξ(t̂)) ∈ E, so the IVP

x′ = f(t, x),  x(t̂) = ξ(t̂)

has a unique solution ψ(t) that exists on |t − t̂| ≤ ε. Then

ζ(t) = ξ(t) for m1 < t < t̂,  and  ζ(t) = ψ(t) for t̂ ≤ t < t̂ + ε

is a continuation of ξ(t) defined on (m1, t̂ + ε). But

t̂ + ε > (m2 − ε) + ε = m2,

contradicting the maximality of (m1, m2).
2.4 Dependence on Data
In an initial value problem

x′ = f(t, x),  x(t0) = x0,

one might regard t0, x0 and f(t, x) as measured values or inputs in the formulation of a physical model. Consequently it is important to know whether small errors or changes in this data result in small changes in the solutions of the IVP. That is, does the solution depend continuously on (t0, x0) and f(t, x) in some sense?

Denote the solution of the IVP by ξ(t, t0, x0), where

ξ(t0, t0, x0) = x0.

We will show that under reasonable assumptions on f, ξ is continuous in the variables t0, x0, and small changes in f result in small changes in ξ. The following theorem is an indispensable result in the study of differential equations and is central to the results of this section.
Theorem 2.4.1 [Gronwall's Inequality]
Let f1(t), f2(t), p(t) be continuous on [a, b] and p ≥ 0. If

f1(t) ≤ f2(t) + ∫_{a}^{t} p(s) f1(s) ds,  t ∈ [a, b],

then

f1(t) ≤ f2(t) + ∫_{a}^{t} p(s) f2(s) exp[∫_{s}^{t} p(u) du] ds.
Proof.
Define

g(t) = ∫_{a}^{t} p(s) f1(s) ds,

so

g′(t) = p(t) f1(t) ≤ p(t)(f2(t) + ∫_{a}^{t} p(s) f1(s) ds).

We then get

g′(t) − p(t) g(t) ≤ p(t) f2(t),

d/dt (g(t) exp[−∫_{a}^{t} p(u) du]) ≤ p(t) f2(t) exp[−∫_{a}^{t} p(u) du],

g(t) exp[−∫_{a}^{t} p(u) du] ≤ ∫_{a}^{t} p(s) f2(s) exp[−∫_{a}^{s} p(u) du] ds,

and

g(t) ≤ ∫_{a}^{t} p(s) f2(s) exp[∫_{s}^{t} p(u) du] ds.

Now f1(t) ≤ f2(t) + g(t) and so the result follows.
There are some special cases of Gronwall's inequality that should be noted.

(1) If p(t) = k and f2(t) = δ are constant, then Gronwall gives

f1(t) ≤ δ e^{k(t−a)}.

(2) If

f1(t) ≤ k ∫_{a}^{t} f1(s) ds,  k ≥ 0,

then f1(t) ≡ 0.

(3) Suppose |z′(t)| ≤ µ|z(t)| for a ≤ t ≤ b and z(a) = 0. Then

|z(t)| = |∫_{a}^{t} z′(s) ds| ≤ ∫_{a}^{t} |z′(s)| ds ≤ µ ∫_{a}^{t} |z(s)| ds

and so

|z(t)| ≤ µ ∫_{a}^{t} |z(s)| ds.

It follows by (2) that |z(t)| ≡ 0.
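A numerical sanity check of special case (1) (my own illustration; the choice f1(t) = cosh(k(t − a)), the interval, and the trapezoid quadrature are assumptions, not from the text): since cosh x − sinh x = e^{−x} ≤ 1 for x ≥ 0, this f1 satisfies the hypothesis f1(t) ≤ 1 + k ∫_a^t f1(s) ds with δ = 1, and Gronwall then predicts f1(t) ≤ e^{k(t−a)}, i.e., cosh x ≤ e^x:

```python
import math

def check_gronwall(k=2.0, a=0.0, b=1.5, n=3000):
    """Verify hypothesis and conclusion of Gronwall case (1) for f1 = cosh(k(t-a))."""
    f1 = lambda t: math.cosh(k * (t - a))
    h = (b - a) / n
    acc, ok_hyp, ok_bound = 0.0, True, True
    for i in range(1, n + 1):
        t = a + i * h
        acc += 0.5 * h * (f1(t - h) + f1(t))                     # trapezoid integral
        ok_hyp = ok_hyp and f1(t) <= 1.0 + k * acc + 1e-9        # hypothesis with delta = 1
        ok_bound = ok_bound and f1(t) <= math.exp(k * (t - a))   # Gronwall's conclusion
    return ok_hyp and ok_bound

print(check_gronwall())   # True
```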
Theorem 2.4.2
Suppose ξ(t), ψ(t) satisfy

y′ = f(t, y),  y(t0) = y0,
z′ = g(t, z),  z(t0) = z0,

where f, g : U ⊆ R × R^n → R^n are continuous and locally Lipschitz with respect to the second variable with common Lipschitz constant K. If

|f(t, u) − g(t, u)| ≤ ε,  (t, u) ∈ U,

then

|ξ(t) − ψ(t)| ≤ |y0 − z0| e^{K|t−t0|} + (ε/K)(e^{K|t−t0|} − 1).

Proof.
First assume t ≥ t0. Then

ξ(t) − ψ(t) = y0 − z0 + ∫_{t0}^{t} [f(s, ξ(s)) − g(s, ψ(s))] ds
            = y0 − z0 + ∫_{t0}^{t} [f(s, ξ(s)) − f(s, ψ(s))] + [f(s, ψ(s)) − g(s, ψ(s))] ds.
Thus

|ξ(t) − ψ(t)| ≤ |y0 − z0| + ε(t − t0) + K ∫_{t0}^{t} |ξ(s) − ψ(s)| ds.

Now apply Gronwall with

f1 = |ξ − ψ|,  f2 = |y0 − z0| + ε(t − t0),  p = K.

Then

|ξ(t) − ψ(t)| ≤ ε(t − t0) + |y0 − z0| + K ∫_{t0}^{t} (ε(s − t0) + |y0 − z0|) e^{K(t−s)} ds.

Integrating by parts,

K ∫_{t0}^{t} (ε(s − t0) + |y0 − z0|) e^{K(t−s)} ds = |y0 − z0|(e^{K(t−t0)} − 1) − ε(t − t0) + (ε/K)(e^{K(t−t0)} − 1),

and so

|ξ(t) − ψ(t)| ≤ |y0 − z0| e^{K(t−t0)} + (ε/K)(e^{K(t−t0)} − 1).

If t < t0, a similar argument gives

|ξ(t) − ψ(t)| ≤ |y0 − z0| e^{K(t0−t)} + (ε/K)(e^{K(t0−t)} − 1)

and the result follows.
Example 2.4.1
Consider the initial value problems

(1) y′ = f(t, y) = 1 + t² + y² (Riccati's equation),  y(0) = y0,

(2) z′ = g(t, z) = 1 + z²,  z(0) = y0.

Of course problem (2) is easily solved. If we were to approximate the solution of (2) by that of (1) on the set

U = {(t, u) | |t| < 1/2, |u| < 1},

we would like to estimate the error. In the notation of Theorem (2.4.2),

|f(t, u) − g(t, u)| = |t²| < 1/4 = ε.
Also

|∂f/∂u| = |2u| ≤ 2,  |∂g/∂u| = |2u| ≤ 2,

and so we can take the common Lipschitz constant to be K = 2. Then

|y(t) − z(t)| ≤ (ε/K)(e^{K|t−t0|} − 1) ≤ (1/8)(e^{2(1/2)} − 1) ≈ 0.2.

If, however, we were to restrict |t| < 1/4, then ε = sup|t²| = 1/16 and we get a much better approximation,

|y(t) − z(t)| ≤ (1/32)(e^{1/2} − 1) ≈ 0.0203.
Exercises for Chapter 2
1. A solution y = φ(x) to

y″ + sin(x) y′ + (1 + cos(x)) y = 0

is tangent to the x-axis at x = π. Find φ(x).
2. Show that the initial value problem

y′ = 1/(1 + y²),  y(0) = 1

has a unique solution that exists on the whole line.
3. Consider the initial value problem

y″(x) + F′(y) = 0,  y(x0) = y0, y′(x0) = v0.

(a) If F ∈ C²(R), carefully explain why the Fundamental Existence and Uniqueness Theorem guarantees that this initial value problem has a unique solution for any point (x0, y0) ∈ R².

(b) Suppose that F(u) > 0, u ∈ R. Prove that the solution to the initial value problem exists for all x ∈ R.
4. Consider the equation

y′(x) = xy/(1 + y²) + sin(x).

(a) Explain why for each (x0, y0) ∈ R² there is a solution of the differential equation that satisfies y(x0) = y0 and is defined in some neighborhood of x0.

(b) Show that any solution of the differential equation satisfies

|y(x)| ≤ k1 e^{k2 x²}

for constants k1, k2.

(c) Prove that each solution of the differential equation can be extended to all of R.
5. Consider

y″ + q(x) y = 0,  y(x0) = y0, y′(x0) = v0,

where q ∈ C[a, b], x0 ∈ [a, b].

(a) Carefully explain why this problem has a unique solution.

(b) Show that if a solution has a zero in [a, b], it must be simple.
6. Consider the equation

y″ + (1 + a p(x)) y = 0,

where a is a nonnegative constant and p(x) ∈ C(R), |p(x)| ≤ 1. Let D be the domain D = {(x, y) | 0 ≤ x ≤ ρ, 0 ≤ y ≤ 1} and let y = φ(x) denote the solution of the initial value problem

y″ + (1 + a p(x)) y = 0,  y(0) = 0, y′(0) = 1.

Suppose we approximate the solution of the initial value problem by sin(x) on the domain D. Estimate ‖φ(x) − sin(x)‖ for 0 ≤ x ≤ ρ.
7. Estimate the error in using the approximate solution y(x) = e^{−x³/6} on 0 ≤ x ≤ 1/2 for the initial value problem

y″(x) + x y(x) = 0,  y(0) = 1, y′(0) = 0.