Chapter 4
First Order Differential Equations
4.1 First Order Linear Differential Equations
A differential equation is an equation involving one or more derivatives of unknown functions. For instance, the position of a moving particle on a straight line is a function u(t) of time t governed by Newton's law of motion:
m u''(t) = f(t, u(t), u'(t)),
where m is the mass of the particle and f represents the force acting on the particle, which depends on the time t ∈ (a, b), the position u(t), and the velocity u'(t).
A differential equation of an unknown function of one variable is called an ordinary differential equation (abbreviated O.D.E.), while one of a function of more than one variable is called a partial differential equation (abbreviated P.D.E.).
Differential equations appear naturally in diverse areas of science and the humanities, in such problems as the detection of art forgeries, the diagnosis of diabetes, population dynamics, etc.
The order of a D.E. is the order of the highest derivative of the function u(t) that appears in the equation. The above equation is of order 2. In general, an n-th order differential equation is of the form:
F(t, u(t), u'(t), u''(t), ..., u^{(n)}(t)) = 0.
A solution of a differential equation
y^{(n)}(t) = f(t, y', ..., y^{(n−1)})
on an interval (a, b) is a real function y = φ(t) with derivatives φ', ..., φ^{(n)} that satisfies the equation; i.e.,
φ^{(n)}(t) = f(t, φ', ..., φ^{(n−1)})
for all t ∈ (a, b).
Basic problems in D.E. are the following:
(1) (Existence) Does a D.E. have a solution?
(2) (Uniqueness) If yes, how many of them are there?
(3) (Practicality) Determine a solution, and find all of them.
Even if we know that a solution exists, it may not be possible to find one in terms of elementary functions such as polynomials, trigonometric, exponential, logarithmic, or hyperbolic functions. In this case, one can resort to trial-and-error methods, or approximate the solution numerically using computers.
Definition 4.1.1 An O.D.E. F(t, y, y', ..., y^{(n)}) = 0 is said to be linear if F is a linear function of the unknown function y and its derivatives. A general linear differential equation of order n is of the form:
a_n(t)y^{(n)} + a_{n−1}(t)y^{(n−1)} + ··· + a_1(t)y' + a_0(t)y = g(t),
where the a_i(t) are functions of t on the interval I = (a, b). When g(t) ≡ 0 for all t, the equation is called a homogeneous linear D.E.
Thus a first order D.E. is of the form
dy/dt = f(t, y),
where f(t, y) represents the slope of the graph of the solution function y = φ(t) at each point (t, y); i.e., at each point (t, y) in the domain of f, one can draw a short line segment with slope f(t, y). The collection of such segments is called the direction field of the D.E. By plotting as many of them as possible, one can see what the graph of a solution looks like, as the following example shows.
For a first order linear D.E., f(t, y) = −p(t)y + g(t) with continuous functions p(t) and g(t) on (a, b), so we can rewrite it as
dy/dt + p(t)y = g(t).
The equation is homogeneous if g(t) ≡ 0.
Example 4.1.1 Consider dy/dt = −(1/2)y + 3/2 = (3 − y)/2.
On the line y = 3, dy/dt = 0 means the slope of the solutions is 0; on the line y = 2, dy/dt = 1/2 means the slope of the solutions is 1/2; on the line y = 1, dy/dt = 1 means the slope of the solutions is 1; and on the line y = 4, dy/dt = −1/2 means the slope of the solutions is −1/2, etc. These horizontal lines are called isoclines of the solutions. By drawing curves tangent to those slope segments, one can see the solutions, called integral curves:
[Figure: direction field with isoclines y = 1, 2, 3, 4 and integral curves approaching the line y = 3.]
Now, the equation can be rewritten as
1/(y − 3) dy = −(1/2) dt,  for y ≠ 3.
Integrating both sides, we get:
ln|y − 3| = −(1/2)t + c,  c a constant,
|y − 3| = e^c e^{−t/2} = Ce^{−t/2},
y = 3 + Ce^{−t/2}.
If y = 3, it is already a solution with dy/dt = 0, which is contained in the expression for y in the last equation with C = 0. In particular, the curve that passes through the initial condition (0, 2) is y = 3 − e^{−t/2}. □
Note that the solution in this example can be rewritten as
ye^{t/2} − 3e^{t/2} = C.
This is the direct integral of its differentiation:
(y' + (1/2)y)e^{t/2} − (3/2)e^{t/2} = 0,  or  (y' + (1/2)y − 3/2)e^{t/2} = 0.
This shows that the original D.E. can be integrated directly after multiplying by the function e^{t/2}. Such a function is called an integrating factor. How can we find such an integrating factor?
We first consider a homogeneous first order linear D.E. y' + p(t)y = 0. This equation may be rewritten as y'/y = −p(t), whose left side is the derivative of ln|y(t)|; i.e.,
(d/dt) ln|y(t)| = −p(t),  or  ln|y(t)| = −∫ p(t)dt + c,
or  |y(t)| = C exp(−∫ p(t)dt),  or  |y(t) exp(∫ p(t)dt)| = C.
Note that if the absolute value of a continuous function is constant, then so is the function itself. Thus,
y(t) exp(∫ p(t)dt) = C,  or  y(t) = C exp(−∫ p(t)dt),
which is the integral of
(y' + p(t)y) exp(∫ p(t)dt) = 0,
and hence the integrating factor is exp(∫ p(t)dt). This solution is called the general solution of the D.E., since every solution of the D.E. must be of this form. Usually, we look for the specific solution y(t) of the D.E. y'(t) + p(t)y(t) = 0 which takes the value y0 at some initial time t0: y(t0) = y0. For this solution, we integrate both sides of the D.E. from t0 to t:
∫_{t0}^t (d/ds) ln|y(s)| ds = −∫_{t0}^t p(s)ds,
and so,  ln|y(t)| − ln|y(t0)| = ln|y(t)/y(t0)| = −∫_{t0}^t p(s)ds,
or,  |(y(t)/y(t0)) exp(∫_{t0}^t p(s)ds)| = 1,
or,  (y(t)/y(t0)) exp(∫_{t0}^t p(s)ds) = ±1.
But at t = t0 the value is 1, and so the constant is 1. Therefore,
y(t) = y(t0) exp(−∫_{t0}^t p(s)ds) = y0 exp(−∫_{t0}^t p(s)ds).
This is called the particular solution.
Example 4.1.2 Find the solution of the initial value problem:
dy/dt + (sin t)y = 0,   y(0) = 3/2.
The general solution is y(t) = C exp(−∫ sin t dt) = Ce^{cos t}, and the particular solution is
y(t) = (3/2) exp(−∫_0^t sin s ds) = (3/2)e^{cos t − 1}. □
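Solutions like this are easy to sanity-check numerically. The following sketch (Python, with names of our own choosing; not part of the text's toolkit) verifies that y(t) = (3/2)e^{cos t − 1} satisfies both the initial condition and the equation y' + (sin t)y = 0, approximating y' by a central difference.

```python
import math

def y(t):
    # Particular solution of y' + (sin t) y = 0, y(0) = 3/2, from Example 4.1.2
    return 1.5 * math.exp(math.cos(t) - 1.0)

# Check the initial condition
assert abs(y(0.0) - 1.5) < 1e-12

# Check the ODE residual y'(t) + sin(t) y(t) at sample points,
# approximating y' by a central difference
h = 1e-6
for t in [-2.0, -0.5, 0.3, 1.7, 4.0]:
    dydt = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(dydt + math.sin(t) * y(t)) < 1e-6
```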
Example 4.1.3 Find the solution of the initial value problem:
dy/dt + e^{t^2} y = 0,   y(1) = 2.
The particular solution is
y(t) = 2 exp(−∫_1^t e^{s^2} ds). □
Remarks: The solution to Example 4.1.2 is given explicitly, while the one to Example 4.1.3 cannot be expressed in terms of elementary functions. However, they are both equally valid and equally useful, for two reasons. First, there are very simple numerical schemes to evaluate the integral to any degree of accuracy with the aid of a computer. Second, even though the solution to Example 4.1.2 is given explicitly, we still cannot evaluate it at an arbitrary time t without some sort of calculating aid such as a digital computer.
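As an illustration of the first point, the non-elementary integral in Example 4.1.3 can be evaluated with a few lines of code. This is only a sketch; the quadrature rule and the number of subintervals are our own arbitrary choices.

```python
import math

def simpson(f, a, b, n=200):
    # Composite Simpson's rule on [a, b] with an even number n of subintervals
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def y(t):
    # y(t) = 2 exp(-integral from 1 to t of e^{s^2} ds): the integral is not
    # elementary, but is easy to evaluate numerically
    return 2.0 * math.exp(-simpson(lambda s: math.exp(s * s), 1.0, t))

assert abs(y(1.0) - 2.0) < 1e-12   # initial condition y(1) = 2
assert 0.0 < y(1.5) < 2.0          # the solution decays for t > 1
```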
Now we consider the nonhomogeneous equation y'(t) + p(t)y = q(t).
Just as for a homogeneous equation, we multiply both sides by an integrating factor µ(t), giving y'µ(t) + p(t)µ(t)y = q(t)µ(t), and we want the left side to be the derivative of yµ(t). In general, suppose that µ(t) is an integrating factor for a first order linear D.E. y' + p(t)y = q(t). Then
(d/dt)(yµ(t)) = y'µ(t) + µ'(t)y = y'µ(t) + p(t)µ(t)y = q(t)µ(t),
and so it must be that µ'(t) = p(t)µ(t), or µ'(t)/µ(t) = p(t), whose solution is
µ(t) = C exp(∫ p(s)ds) = exp(∫ p(s)ds),
where we took C = 1 since we need only one such solution. Thus the original equation is
(d/dt)(yµ(t)) = (d/dt)(y exp(∫ p(s)ds)) = q(t)µ(t).
By integration,
exp(∫ p(t)dt) y = ∫ q(t)µ(t)dt + C,
y(t) = exp(−∫ p(t)dt)(∫ q(t)µ(t)dt + C),
which is the general solution.
If an initial condition y(t0) = y0 is given, then we take the definite integrals of (d/dt)(yµ(t)) = q(t)µ(t) from t0 to t to get:
µ(t)y − µ(t0)y0 = ∫_{t0}^t q(s)µ(s)ds,
or  y(t) = (1/µ(t)) (µ(t0)y0 + ∫_{t0}^t q(s)µ(s)ds).
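This last formula translates directly into a rough numerical solver once the two integrals are replaced by quadrature. The sketch below uses our own naming and is an illustration, not a production integrator; it relies on the fact that µ(t0) = 1 when µ(t) = exp(∫_{t0}^t p(s)ds).

```python
import math

def integrate(f, a, b, n=400):
    # Composite trapezoidal rule
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def solve_linear_ivp(p, q, t0, y0, t):
    # y(t) = (1/mu(t)) * (mu(t0)*y0 + integral_{t0}^{t} q(s) mu(s) ds),
    # with mu(t) = exp(integral_{t0}^{t} p(s) ds), so that mu(t0) = 1
    mu = lambda s: math.exp(integrate(p, t0, s))
    return (y0 + integrate(lambda s: q(s) * mu(s), t0, t)) / mu(t)

# Check against y' + y = 1, y(0) = 0, whose exact solution is y = 1 - e^{-t}
approx = solve_linear_ivp(lambda t: 1.0, lambda t: 1.0, 0.0, 0.0, 1.0)
assert abs(approx - (1.0 - math.exp(-1.0))) < 1e-4
```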
Example 4.1.4 Find the solution of the initial value problem:
dy/dt − 2ty = t,   y(1) = 2.
The integrating factor is µ(t) = exp(−∫ 2t dt) = e^{−t^2}, and so the equation becomes:
e^{−t^2}(dy/dt − 2ty) = (d/dt)(e^{−t^2} y) = te^{−t^2}.
Thus the general solution is
e^{−t^2} y = ∫ te^{−t^2} dt + C = −(1/2)e^{−t^2} + C,
or  y(t) = −1/2 + Ce^{t^2}.
From the initial value y(1) = 2, the constant C is determined by
2 = y(1) = −1/2 + Ce^1,  or  C = (5/2)e^{−1}.
Thus the particular solution is:
y(t) = −1/2 + (5/2)e^{t^2 − 1}. □
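Again the result can be checked mechanically; the following sketch (our own naming) confirms the initial condition and that the residual of y' − 2ty − t vanishes along the solution.

```python
import math

def y(t):
    # Particular solution of y' - 2ty = t, y(1) = 2, from Example 4.1.4
    return -0.5 + 2.5 * math.exp(t * t - 1.0)

assert abs(y(1.0) - 2.0) < 1e-12

# Residual of y' - 2ty - t, with y' approximated by a central difference
h = 1e-6
for t in [0.0, 0.5, 1.0, 1.5]:
    dydt = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(dydt - 2 * t * y(t) - t) < 1e-5
```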
Example 4.1.5 Find the solution of the initial value problem:
dy/dt + y = 1/(1 + t^2),   y(2) = 3.
The integrating factor is µ(t) = exp(∫ 1 dt) = e^t, and so the equation becomes:
e^t(dy/dt + y) = (d/dt)(e^t y) = e^t/(1 + t^2).
Hence, from the initial condition y(2) = 3,
∫_2^t (d/ds)(e^s y(s))ds = ∫_2^t e^s/(1 + s^2) ds,
e^t y(t) − 3e^2 = ∫_2^t e^s/(1 + s^2) ds,
or  y(t) = e^{−t}(3e^2 + ∫_2^t e^s/(1 + s^2) ds). □

4.2 Separable Equations
A more general class of D.E. than the first order linear D.E. consists of equations of the form f(y) dy/dt = g(t), which are said to be separable. The left side can be written as the derivative of some function F(y) of y with F' = f, so that
(d/dt)F(y(t)) = f(y) dy/dt = g(t).
Thus, the general solution is of the form:
F(y(t)) = ∫ g(t)dt + c,  or  ∫ f(y)dy = ∫ g(t)dt + c.
If an initial condition y(t0) = y0 is given, the particular solution can be found by determining the integration constant c from the general solution, or by integrating the given equation from t0 to t:
F(y(t)) − F(y0) = ∫_{t0}^t g(s)ds,  or  ∫_{y0}^y f(r)dr = ∫_{t0}^t g(s)ds.
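The definite-integral form also suggests a simple numerical scheme: compute the right side by quadrature and solve the left side for y by bisection. The following sketch (function names, brackets, and tolerances are our own choices) tests this on e^y dy/dt = t + t^3 with y(1) = 1, whose exact solution is y(t) = ln(e − 3/4 + t^2/2 + t^4/4).

```python
import math

def integrate(f, a, b, n=400):
    # Composite trapezoidal rule (signed: works for a > b as well)
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def solve_separable(f, g, t0, y0, t, lo, hi, iters=80):
    # Solve integral_{y0}^{y} f(r) dr = integral_{t0}^{t} g(s) ds
    # for y by bisection on the bracket [lo, hi]
    rhs = integrate(g, t0, t)
    G = lambda y: integrate(f, y0, y) - rhs
    assert G(lo) * G(hi) <= 0, "root must be bracketed"
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if G(lo) * G(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Test on e^y dy/dt = t + t^3, y(1) = 1
t = 2.0
y_num = solve_separable(math.exp, lambda s: s + s**3, 1.0, 1.0, t, 0.0, 5.0)
y_exact = math.log(math.e - 0.75 + t * t / 2 + t**4 / 4)
assert abs(y_num - y_exact) < 1e-4
```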
Example 4.2.1 Find the solution of the initial value problem:
e^y dy/dt − (t + t^3) = 0,   y(1) = 1.
Rewrite this as e^y dy = (t + t^3)dt; integrating both sides, we get e^{y(t)} = t^2/2 + t^4/4 + c. Taking logarithms,
y(t) = ln(t^2/2 + t^4/4 + c),
which is the general solution.
(i) By setting t = 1 and y = 1 in the general solution, we get
1 = ln(3/4 + c),  or  c = e − 3/4.
Thus,
y(t) = ln(e − 3/4 + t^2/2 + t^4/4).
(ii) From ∫_1^y e^r dr = ∫_1^t (s + s^3)ds, we get
e^y − e = t^2/2 + t^4/4 − 1/2 − 1/4,  or  y(t) = ln(e − 3/4 + t^2/2 + t^4/4). □

Example 4.2.2 Find the solution of the initial value problem:
dy/dt = 1 + y^2,   y(0) = 0.
Rewrite this as 1/(1 + y^2) dy = dt; integrating both sides, we get
∫_0^y 1/(1 + r^2) dr = ∫_0^t ds,  or  tan^{−1} y = t,  or  y = tan t.
Note that this solution y = tan t goes to ∞ at the finite times t = ±π/2, which is not expected from the original, perfectly nice equation. Thus solutions usually exist only on a finite open interval (a, b), rather than for all time. Moreover, different solutions of the same differential equation usually go to infinity at different times. For example, if the initial condition is y(0) = 1 in this problem, then
∫_1^y 1/(1 + r^2) dr = ∫_0^t ds,
tan^{−1} y − tan^{−1} 1 = t,
y = tan(t + π/4),
whose domain is (−3π/4, π/4). □
Example 4.2.3 Find the solution of the initial value problem:
y dy/dt + (1 + y^2) sin t = 0,   y(0) = 1.
Rewrite this as y/(1 + y^2) dy = −sin t dt; integrating both sides, we get ∫_1^y r/(1 + r^2) dr = −∫_0^t sin s ds, or (1/2)ln(1 + y^2) − (1/2)ln 2 = cos t − 1. Solving this for y,
y(t) = ±(2e^{−4 sin^2(t/2)} − 1)^{1/2}.
Since y(0) > 0, we take y(t) = (2e^{−4 sin^2(t/2)} − 1)^{1/2}, provided 2e^{−4 sin^2(t/2)} ≥ 1, or e^{4 sin^2(t/2)} ≤ 2. Since the logarithm is monotonically increasing, this requires
|t/2| ≤ sin^{−1}(√(ln 2)/2).
Therefore, the solution exists only on the open interval
(−2 sin^{−1}(√(ln 2)/2), 2 sin^{−1}(√(ln 2)/2)).
This means that y(t) simply disappears at t = ±2 sin^{−1}(√(ln 2)/2), without going to infinity.
However, this difficulty can be anticipated from the original equation: if we rewrite the equation as
dy/dt = −(1 + y^2) sin t / y,
then the differential equation is not defined when y(t) = 0. Thus, if a solution y(t) reaches 0 at some time t = a, then we cannot expect it to be defined for t > a. This is exactly what happened here, since y(±a) = 0 for a = 2 sin^{−1}(√(ln 2)/2). □
Example 4.2.4 Find the solution of the initial value problem:
dy/dt = (1 + y)t,   y(0) = −1.
In this case, one cannot write 1/(1 + y) dy = t dt, since y(0) = −1, so that 1 + y(0) = 0. However, y(t) = −1 is already one solution of this initial value problem, and it turns out to be the only one.
In general, for an initial value problem dy/dt = f(y)g(t) with y(t0) = y0, if f(y0) = 0, then we will show later that y(t) = y0 is the only solution of this initial value problem, provided that df/dy exists and is continuous. □
Example 4.2.5 Find the solution of the initial value problem:
(1 + e^y) dy/dt = cos t,   y(π/2) = 3.
The particular solution can be found from
∫_3^y (1 + e^r)dr = ∫_{π/2}^t cos s ds,
y + e^y = 2 + e^3 + sin t.
This equation cannot be solved explicitly for y as a function of t. Thus, the solution to this kind of initial value problem is an implicit solution. However, one can always find y(t) numerically using a digital computer. □
Example 4.2.6 Find all solutions of the D.E.:
dy/dt = −t/y.
The general solution can be found by integrating y dy = −t dt to get
y^2 + t^2 = c^2.
The solution curves are closed, and we cannot solve for y as a single-valued function of t, since the D.E. is not defined when y = 0; nevertheless, the circles t^2 + y^2 = c^2 are perfectly well defined even when y = 0. Hence, we call the circles solution curves of the D.E. □
4.2.1 Population models
It seems impossible to model the growth of a species by a differential equation, since the population of any species always changes by integer amounts, and so the population can never be a differentiable function of time. However, if a given population is very large, then a change in the population by one is very small compared to the given population. Thus, we make the approximation that large populations change continuously, and even differentiably, with time.
Let p(t) denote the population of a given species at time t, and let r(t, p) denote the growth rate, the difference between the birth rate and the death rate. If this population is isolated, that is, there is no net immigration or emigration, then the rate of change of the population is proportional to the current population: i.e., dp(t)/dt = r(t, p)p(t). In the most simplistic model, we assume that the growth rate r = a is a constant. Thus, the differential equation governing the population growth becomes:
dp(t)/dt = ap(t),  a a constant,
which is linear and is known as the Malthusian law of population growth. For an initial value p(t0) = p0, the particular solution is the exponential growth:
p(t) = p0 e^{a(t − t0)}.
Usually, this exponential growth fits very well for small populations over short time periods, but not for large populations over long time periods. This is because it neglects the competition among individual members for the limited living space, natural resources, and food available as the population gets very large. Therefore, the growth rate r(t, p) should be a function of the time t and the current population p(t). We want to choose it so that r ≈ a when p is small, r decreases as p grows larger, and r < 0 when p is sufficiently large. The simplest function with these properties is r(t, p) = a − bp(t) for some positive constant b.
Therefore, the modified equation is
dp(t)/dt = (a − bp(t))p(t) = a(1 − p(t)/K)p(t),
where K = a/b. This equation is known as the Verhulst equation or the logistic equation, and the numbers a and b are called the vital coefficients of the population. It was first introduced in 1837 by the Belgian mathematical biologist Verhulst.
Before deriving the solution, we first look at the main features of the
solution that can be discovered directly from the differential equation itself
by using geometric reasoning, even without solving it. This is important
because the same methods can often be used on more complicated equations
whose solutions are more difficult to obtain.
The graph of the right side of the equation, which is a parabola, is given in the following figure:
[Figure: left, the parabola dp/dt = (a − bp)p, with maximum value aK/4 at p = K/2 and zeros at p = 0 and p = K; right, solution curves p(t) approaching the equilibrium p = K.]
For 0 < p < K, dp(t)/dt > 0 means p is increasing; for p > K, dp(t)/dt < 0 means p is decreasing; and for p = 0 or p = K, dp(t)/dt = 0 means p does not change. Thus the constant solutions p(t) = 0 and p(t) = K are called equilibrium solutions. The points p = 0 and p = K on the p-axis are called equilibrium points or critical points. Based on these observations, we can sketch the graph of the solution p(t) versus t depending on the initial value p(0) = p0, as depicted on the right side of the figure above.
The fundamental theorem of O.D.E., which will be proven later, guarantees that two different solutions never pass through the same point. Thus
while solutions approach the equilibrium solution p(t) = K as t → ∞, they
do not attain this value at any finite time. We refer to p = K as the saturation level, or as the environmental carrying capacity for the given
species.
In many situations it is good enough to have the qualitative information about the solution p(t) shown in the figure above. However, if we wish to have a more detailed description of the logistic growth, then we have to solve the equation.
The logistic equation is separable, and so, for an initial condition p(t0) = p0, the solution satisfies
∫_{p0}^p dr/(ar − br^2) = ∫_{t0}^t ds = t − t0.
Note that in 1/(ar − br^2) = 1/(r(a − br)) = A/r + B/(a − br), the constants A and B are found to be A = 1/a and B = b/a. Thus,
∫_{p0}^p dr/(ar − br^2) = (1/a) ∫_{p0}^p (1/r + b/(a − br)) dr
  = (1/a)(ln(p/p0) + ln|(a − bp0)/(a − bp)|)
  = (1/a) ln[(p/p0)|(a − bp0)/(a − bp)|] = t − t0.
Since the right side of this equation is positive, one can easily show that (a − bp0)/(a − bp) is always positive for t0 < t < ∞. Thus,
a(t − t0) = ln[(p/p0)(a − bp0)/(a − bp)],
or  e^{a(t − t0)} = (p/p0)(a − bp0)/(a − bp),
or  p(a − bp0) = p0(a − bp)e^{a(t − t0)},
or  (a − bp0 + bp0 e^{a(t − t0)}) p(t) = a p0 e^{a(t − t0)},
or  p(t) = a p0 e^{a(t − t0)} / (a − bp0 + bp0 e^{a(t − t0)}) = a p0 / (bp0 + (a − bp0)e^{−a(t − t0)}).
Observe first that as t → ∞,
p(t) → a p0 / (bp0) = a/b = K,
which means that the population always approaches the limiting value a/b regardless of its initial value.
Secondly, p(t) is a monotonically increasing function of time if 0 < p0 < a/b. Moreover, since
d^2p/dt^2 = a dp/dt − 2bp dp/dt = (a − 2bp)p(a − bp),
dp/dt is increasing if p(t) < a/(2b), and decreasing if p(t) > a/(2b). Hence the graph of p(t) must be of the following form:
[Figure: the S-shaped logistic curve p(t), rising from p(t0) with an inflection at p = a/(2b) and approaching the line p = a/b.]
Such a curve is called a logistic or S-shaped curve. These predictions were borne out by an experiment on the protozoan Paramecium caudatum performed by the mathematical biologist G.F. Gause. Starting with five Paramecium placed in a small test tube containing 0.5 cm^3 of nutritive medium, he counted the number of individuals in the tube daily for six days. The population increased at a rate of 230.9% per day when the numbers were low. The number of individuals increased rapidly at first, and then more slowly, until towards the fourth day it attained a maximum level of 375, saturating the test tube. From this data, if the Paramecium caudatum grow according to the logistic equation dp/dt = ap − bp^2, then a = 2.309 and b = 2.309/375. With p(t0) = p(0) = 5, the logistic law predicts that
p(t) = 375/(1 + 74e^{−2.309t}).
The comparison of this prediction with the actual measurements was remarkably good.
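A quick computation (a Python sketch with our own variable names) confirms that the general logistic formula with these coefficients reduces to the stated prediction:

```python
import math

a, b, p0 = 2.309, 2.309 / 375, 5.0

def p(t):
    # Logistic solution p(t) = a p0 / (b p0 + (a - b p0) e^{-a t}), with t0 = 0
    return a * p0 / (b * p0 + (a - b * p0) * math.exp(-a * t))

# The formula reduces to 375 / (1 + 74 e^{-2.309 t})
for t in [0.0, 1.0, 2.0, 4.0]:
    assert abs(p(t) - 375.0 / (1.0 + 74.0 * math.exp(-2.309 * t))) < 1e-9

assert abs(p(0.0) - 5.0) < 1e-12      # initial population of five
assert abs(p(50.0) - 375.0) < 1e-6    # saturation level a/b = 375
```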
Remark: Let p(t) denote the human population of the earth at time t. It was estimated that the earth's human population was increasing at an average rate of 2% per year during the period 1960-1970. On January 1, 1965, the earth's population was estimated to be 3.34 billion people.
The exponential growth predicted by the linear differential equation is
p(t) = (3.34)10^9 e^{0.02(t − 1965)}.
From this prediction, the population of the earth doubles every T years, where
e^{0.02T} = 2.
Solving this for T gives T = 50 ln 2 ≈ 34.6 years. This is in excellent agreement with the observed value. However, in the distant future this model predicts that the earth's population will be 200,000 billion in the year 2515, 1,800,000 billion in the year 2625, and 3,600,000 billion in the year 2660. These are astronomical numbers whose significance is difficult to imagine. The total surface area of the earth is approximately 1,860,000 billion square feet, of which 80% is covered by water. Assuming we are willing to live on boats as well as on land, by the year 2515 each person will have only 9.3 square feet, and by the year 2625 only one square foot per person, etc. Thus, this model seems unreasonable.
In the logistic law of population growth, some ecologists have estimated the natural value of a to be 0.029. Moreover, since the human population was increasing at the rate of 2% per year when p0 = (3.34)10^9 in 1965, from (1/p)(dp/dt) = a − bp we see that
0.02 = 0.029 − b(3.34)10^9,  or  b = (2.695)10^{−12}.
Therefore, according to the logistic law of population growth, the human population of the earth will tend to the limiting value
a/b = 0.029/((2.695)10^{−12}) = 10.76 billion people.
For the population of the earth in the year 2000,
p(2000) = (0.029)(3.34)10^9 / (0.009 + (0.02)e^{−(0.029)35}) = ((29)(3.34)/(9 + 20e^{−1.015})) 10^9 = 5.96 billion people,
which is in excellent agreement with reality!
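Both computations in this remark are easy to reproduce; the sketch below checks the doubling time and the year-2000 prediction.

```python
import math

# Doubling time of exponential growth at 2% per year: e^{0.02 T} = 2
T = math.log(2) / 0.02
assert abs(T - 50 * math.log(2)) < 1e-12
assert abs(T - 34.657) < 1e-3

# Logistic prediction for the year 2000, with a = 0.029, b = 2.695e-12,
# and p0 = 3.34e9 at t0 = 1965
a, b, p0 = 0.029, 2.695e-12, 3.34e9
p2000 = a * p0 / (b * p0 + (a - b * p0) * math.exp(-a * (2000 - 1965)))
assert abs(p2000 - 5.96e9) < 0.01e9   # about 5.96 billion people
assert a / b < 10.8e9                 # limiting value about 10.76 billion
```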
4.2.2 Brachistochrone problem
One of the most famous problems in the history of mathematics is the brachistochrone problem, posed by Johann Bernoulli in 1696 to challenge his contemporary mathematicians, especially his elder brother Jacob Bernoulli. Correct solutions were found by the two Bernoullis, I. Newton, G. Leibniz, and M. L'Hospital.
"Among all the curves joining the peak A of a hill and the base B of the hill, find the curve which gives the shortest possible time of descent from A to B without friction."
This problem is important in the development of mathematics as one of
the forerunners of the calculus of variations.
In solving this problem, we begin with the fundamental principle of optics discovered by Heron, the Alexandrian scientist of the first century A.D.: a light ray travels along the path that takes the shortest time.
It follows that light reflected at a mirror takes the direction making the angle of reflection equal to the angle of incidence, as in the following figure:
[Figure: reflection at a mirror, with points P, Q, Q' and candidate reflection points R, R'.]
It also leads to another well-known principle, Snell's law for the refraction of light: light going into water from the air is deflected. Suppose that the speed of light in the first medium is v1 and in the second medium is v2, and that the angle of incidence is α1 and the angle of refraction is α2, as in the following figure:
[Figure: a ray from P to Q crossing the interface at a point X; the horizontal distances are x and c − x (total horizontal separation c), the vertical distances are a and b, and the angles from the normal are α1 and α2.]
The total time from P to Q is
T(x) = √(a^2 + x^2)/v1 + √(b^2 + (c − x)^2)/v2.
The minimum of T(x) occurs at the point x where
dT/dx = x/(v1 √(a^2 + x^2)) − (c − x)/(v2 √(b^2 + (c − x)^2)) = 0,
∴  sin α1/v1 = sin α2/v2,
which is called Snell's law of refraction.
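Snell's law can also be observed numerically: minimize T(x) directly and check that the two ratios agree at the minimizer. In the sketch below, the speeds and distances are arbitrary sample values of our own choosing.

```python
import math

v1, v2 = 1.0, 0.6          # speeds in the two media (hypothetical values)
a, b, c = 1.0, 1.0, 2.0    # vertical distances and horizontal separation

def T(x):
    # Travel time P -> X -> Q when the crossing point is at horizontal position x
    return math.hypot(a, x) / v1 + math.hypot(b, c - x) / v2

# Minimize the (convex) function T on [0, c] by golden-section search
lo, hi = 0.0, c
phi = (math.sqrt(5) - 1) / 2
for _ in range(200):
    m1, m2 = hi - phi * (hi - lo), lo + phi * (hi - lo)
    if T(m1) < T(m2):
        hi = m2
    else:
        lo = m1
x = 0.5 * (lo + hi)

# At the minimizer, sin(alpha1)/v1 = sin(alpha2)/v2  (Snell's law)
sin1 = x / math.hypot(a, x)
sin2 = (c - x) / math.hypot(b, c - x)
assert abs(sin1 / v1 - sin2 / v2) < 1e-6
```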
Now suppose the light travels through several media layered one over another, with velocities v1, v2, v3, ..., vn and angles of incidence α1, α2, α3, ..., αn, as in the following figure:
[Figure: horizontal layers with speeds v1, v2, v3, ..., vn and angles α1, α2, α3, ..., αn; in the continuous limit, a smooth path through the point (x, y) with speed v and angles α, β.]
By Snell's law, we have
sin α1/v1 = sin α2/v2 = sin α3/v3 = ··· = sin αn/vn = c, a constant.
As the layers get thinner and thinner, the speed eventually changes continuously, and the path of the light becomes a smooth curve such that at each point of the curve the speed and the slope of the tangent line satisfy
sin α/v = c, a constant.
We now return to our problem. By Newton's equation of motion, we get
v = √(2gy),
where g = dv/dt is the gravitational acceleration, v = gt = dy/dt is the speed, and y = (1/2)gt^2 is the distance travelled. Since dy/dx = tan β and
sin α = cos β = 1/sec β = 1/√(1 + tan^2 β) = 1/√(1 + (dy/dx)^2),
we have
c = sin α/v = 1/(√(2gy) √(1 + (dy/dx)^2)),
so
y(1 + (dy/dx)^2) = k,  with k = 1/(2gc^2),
(y/(k − y))(dy/dx)^2 = 1,
(y/(k − y))^{1/2} dy = dx.
Set (y/(k − y))^{1/2} = tan ϕ. Then we get
y = k sin^2 ϕ.
The differential of this is dy = 2k sin ϕ cos ϕ dϕ. From the last equality,
dx = tan ϕ dy = 2k sin^2 ϕ dϕ = k(1 − cos 2ϕ)dϕ.
Finally, by integrating this, we get
x = (k/2)(2ϕ − sin 2ϕ) = a(θ − sin θ),  with a = k/2, θ = 2ϕ,
y = k sin^2 ϕ = (k/2)(1 − cos 2ϕ) = a(1 − cos θ),
which describe a cycloid.
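Using sin α/v = c, the descent time along any curve is ∫ ds/v with v = √(2gy); on the cycloid this integral collapses to √(a/g)·θ, since ds = 2a sin(θ/2)dθ and v = 2√(ga) sin(θ/2). The sketch below (parameter values are arbitrary) compares the cycloid's descent time with that of the straight chord to the same endpoint.

```python
import math

g = 9.8
A = 1.0   # cycloid parameter a (hypothetical value)

# On the cycloid x = a(th - sin th), y = a(1 - cos th):
#   ds = 2a sin(th/2) dth  and  v = sqrt(2 g y) = 2 sqrt(g a) sin(th/2),
# so dT = sqrt(a/g) dth, and the time down to th = pi is pi * sqrt(a/g)
t_cycloid = math.pi * math.sqrt(A / g)

# Straight line from (0, 0) to the same endpoint (a*pi, 2a): with depth y
# proportional to arc length s, T = integral_0^L ds / sqrt(2 g y1 s / L),
# which evaluates to sqrt(2 L^2 / (g y1))
x1, y1 = A * math.pi, 2 * A
L = math.hypot(x1, y1)
t_line = math.sqrt(2 * L * L / (g * y1))

assert t_cycloid < t_line   # the cycloid is the faster path
```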
4.3 Exact Equations
Consider a function y = y(t) satisfying an equation φ(t, y) = y + sin(t + y) = c. By implicit differentiation, we have
(d/dt)φ(t, y) = cos(t + y) + (1 + cos(t + y)) dy/dt = 0.
In the reverse order, if we are given a differential equation
cos(t + y) + (1 + cos(t + y)) dy/dt = 0,
which is of the form
(d/dt)φ(t, y) = 0,  y = y(t),
then we can easily find the solution
φ(t, y) = y + sin(t + y) = c.
In general, the most general first order differential equations that we can solve are of the form
(d/dt)φ(t, y) = 0,  y = y(t),
whose solution is φ(t, y) = c.
Now the question is: for a given differential equation, how can we recognize when it can be put in this form? Note that the function φ(t, y) in the equation has two variables t and y. In general, derivatives of functions of more than one variable will be discussed in Calculus II later.
In this section, we briefly introduce the derivatives of such a function of more than one variable. This involves partial derivatives. Let z = φ(x, y) be a function of two variables x and y on a domain U in R^2. The partial derivative of φ with respect to x is the usual derivative of φ obtained by holding the other variable y constant:
∂φ/∂x (x, y) = φ_x(x, y) = lim_{h→0} (φ(x + h, y) − φ(x, y))/h,
provided the limit exists. The differential of z is then defined as
dz = φ_x(x, y)dx + φ_y(x, y)dy.
It is also known that if the second order mixed partial derivatives are continuous on U, then they satisfy φ_{xy}(x, y) = φ_{yx}(x, y) on U.
For a given function φ(t, y), where y = y(t), the derivative of φ is
(d/dt)φ(t, y(t)) = ∂φ/∂t + (∂φ/∂y)(dy/dt).
Theorem 4.3.1 The differential equation M(t, y) + N(t, y) dy/dt = 0 can be written as (d/dt)φ(t, y) = 0 if and only if there exists a function φ(t, y) such that M(t, y) = ∂φ/∂t and N(t, y) = ∂φ/∂y.
The next question is: given two functions M(t, y) and N(t, y), how do we know that there is a function φ(t, y) such that M(t, y) = ∂φ/∂t and N(t, y) = ∂φ/∂y?
Theorem 4.3.2 Let M(t, y) and N(t, y) be continuous and have continuous partial derivatives on R = (a, b) × (c, d). Then there exists a function φ(t, y) such that M(t, y) = ∂φ/∂t and N(t, y) = ∂φ/∂y if and only if ∂M/∂y = ∂N/∂t in R.
Proof: Suppose that M(t, y) = ∂φ/∂t and N(t, y) = ∂φ/∂y. Then it may be proved, as in an advanced calculus course, that
∂M/∂y = ∂^2φ/∂y∂t = ∂^2φ/∂t∂y = ∂N/∂t.
For the converse, we are looking for a function φ(t, y) such that M(t, y) = ∂φ/∂t and N(t, y) = ∂φ/∂y. Define φ by
φ(t, y) = ∫ M(t, y)dt + h(y),
for some function h of y which is to be found according to
N(t, y) = ∂φ/∂y = ∫ (∂M(t, y)/∂y)dt + h'(y),
or  h'(y) = N(t, y) − ∫ (∂M(t, y)/∂y)dt.
The left hand side h'(y) is a function of y alone, while the right hand side is a function of t and y, which is possible only when
(∂/∂t)(N(t, y) − ∫ (∂M(t, y)/∂y)dt) = 0.
However, since
(∂/∂t)(N(t, y) − ∫ (∂M(t, y)/∂y)dt) = ∂N(t, y)/∂t − ∂M(t, y)/∂y,
we have (∂/∂t)(N(t, y) − ∫ (∂M(t, y)/∂y)dt) = 0 if and only if ∂N/∂t = ∂M/∂y. In particular, if ∂N/∂t ≠ ∂M/∂y, then there is no such function φ. On the other hand, if ∂N/∂t = ∂M/∂y, then we can find
h(y) = ∫ (N(t, y) − ∫ (∂M(t, y)/∂y)dt) dy.
Consequently, M(t, y) = ∂φ/∂t and N(t, y) = ∂φ/∂y for
φ(t, y) = ∫ M(t, y)dt + ∫ (N(t, y) − ∫ (∂M(t, y)/∂y)dt) dy. □
Definition 4.3.1 The differential equation M(t, y) + N(t, y) dy/dt = 0 is said to be exact if ∂M/∂y = ∂N/∂t.
Remarks: (1) The domain discussed in Theorem 4.3.2 can be any region in R^2 which contains no holes.
(2) When we say the solution of an exact differential equation is given by φ(t, y) = c, what we really mean is that the equation φ(t, y) = c can be solved for y as a function of t and c. In most cases, the equation cannot be solved explicitly for y as a function of t. However, a computer may be used to compute y(t) to any desired accuracy.
(3) Practically, from the equations M(t, y) = ∂φ/∂t and N(t, y) = ∂φ/∂y, we have:
φ(t, y) = ∫ M(t, y)dt + h(y),   φ(t, y) = ∫ N(t, y)dy + k(t).
Usually h(y) or k(t) can be determined by inspection.
Example 4.3.1 Find the general solution of
3y + e^t + (3t + cos y) dy/dt = 0.
Solution: Since ∂M/∂y = 3 = ∂N/∂t for M(t, y) = 3y + e^t and N(t, y) = 3t + cos y, the equation is exact.
Method 1: From M(t, y) = ∂φ/∂t,
φ(t, y) = ∫ (3y + e^t)dt + h(y) = e^t + 3ty + h(y),
⟹ N(t, y) = ∂φ/∂y = 3t + h'(y) = 3t + cos y,
⟹ h'(y) = cos y, ⟹ h(y) = sin y + c,
∴ φ(t, y) = e^t + 3ty + sin y = c.
Method 2: From M(t, y) = ∂φ/∂t and N(t, y) = ∂φ/∂y, we get
φ(t, y) = e^t + 3ty + h(y),   φ(t, y) = 3ty + sin y + k(t).
By comparison, we see that h(y) = sin y and k(t) = e^t, so that φ(t, y) = e^t + 3ty + sin y. □
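Exactness and the potential φ can be verified mechanically with finite differences; the following sketch (our own naming) does this for Example 4.3.1 at a few sample points.

```python
import math

# M, N and the potential phi from Example 4.3.1
M = lambda t, y: 3 * y + math.exp(t)
N = lambda t, y: 3 * t + math.cos(y)
phi = lambda t, y: math.exp(t) + 3 * t * y + math.sin(y)

h = 1e-6
for (t, y) in [(0.0, 0.0), (1.0, -0.5), (-0.3, 2.0)]:
    # Exactness: dM/dy = dN/dt
    My = (M(t, y + h) - M(t, y - h)) / (2 * h)
    Nt = (N(t + h, y) - N(t - h, y)) / (2 * h)
    assert abs(My - Nt) < 1e-6
    # phi is a potential: dphi/dt = M and dphi/dy = N
    assert abs((phi(t + h, y) - phi(t - h, y)) / (2 * h) - M(t, y)) < 1e-5
    assert abs((phi(t, y + h) - phi(t, y - h)) / (2 * h) - N(t, y)) < 1e-5
```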
Example 4.3.2 Find the solution of the initial value problem:
3t^2 y + 8ty^2 + (t^3 + 8t^2 y + 12y^2) dy/dt = 0,   y(2) = 1.
Solution: Since ∂M/∂y = 3t^2 + 16ty = ∂N/∂t for M(t, y) = 3t^2 y + 8ty^2 and N(t, y) = t^3 + 8t^2 y + 12y^2, the equation is exact.
From M(t, y) = ∂φ/∂t and N(t, y) = ∂φ/∂y, we get
φ(t, y) = t^3 y + 4t^2 y^2 + h(y),   φ(t, y) = t^3 y + 4t^2 y^2 + 4y^3 + k(t).
By comparison, we see that h(y) = 4y^3 and k(t) = 0, so that φ(t, y) = t^3 y + 4t^2 y^2 + 4y^3 = c. By setting t = 2 and y = 1, we get c = 28. □
Example 4.3.3 Find the solution of the initial value problem:
4t^3 e^{t+y} + t^4 e^{t+y} + 2t + (t^4 e^{t+y} + 2y) dy/dt = 0,   y(0) = 1.
Solution: Since ∂M/∂y = (t^4 + 4t^3)e^{t+y} = ∂N/∂t for M(t, y) = 4t^3 e^{t+y} + t^4 e^{t+y} + 2t and N(t, y) = t^4 e^{t+y} + 2y, the equation is exact.
From N(t, y) = ∂φ/∂y,
φ(t, y) = ∫ (t^4 e^{t+y} + 2y)dy + k(t) = t^4 e^{t+y} + y^2 + k(t),
⟹ M(t, y) = ∂φ/∂t = (4t^3 + t^4)e^{t+y} + k'(t) = 4t^3 e^{t+y} + t^4 e^{t+y} + 2t,
⟹ k'(t) = 2t, ⟹ k(t) = t^2 + c,
∴ φ(t, y) = t^4 e^{t+y} + y^2 + t^2 = c.
By setting t = 0 and y = 1, we get c = 1. □
Sometimes a given equation M(t, y) + N(t, y) dy/dt = 0 is not exact, but by multiplying the equation by some suitable function µ(t, y), called an integrating factor, we obtain an exact equation: i.e., the equation
µ(t, y)M(t, y) + µ(t, y)N(t, y) dy/dt = 0
can be exact. By Theorem 4.3.2, this is exact if and only if
(∂/∂y)(µ(t, y)M(t, y)) = (∂/∂t)(µ(t, y)N(t, y)),
or  M ∂µ/∂y + µ ∂M/∂y = N ∂µ/∂t + µ ∂N/∂t.
There are only two special cases in which we can find an explicit solution of this equation: when µ is either a function of t alone, or a function of y alone. If µ is a function of t alone, then the above equation reduces to
N dµ/dt = µ(∂M/∂y − ∂N/∂t),  or  (1/µ)(dµ/dt) = (1/N)(∂M/∂y − ∂N/∂t),
which is meaningful only when
(1/N)(∂M/∂y − ∂N/∂t) = R(t)
is a function of t alone. In this case, µ(t) = exp(∫ R(t)dt) is an integrating factor. A similar situation occurs if µ is a function of y alone.
In general, the function
(1/N)(∂M/∂y − ∂N/∂t)
is almost always a function of both t and y. Only for very special pairs of functions M and N is it a function of t, or of y, alone. This is the reason why we cannot solve very many differential equations.
Example 4.3.4 Find the general solution of
y2
dy
+ 2yet + (y + et )
= 0.
2
dt
Solution: Since
µ
¶
∂N
1
y + et
1 ∂M
t
t
−
=
((y
+
2e
)
−
e
)
=
= 1 6= 0,
N
∂y
∂t
y + et
y + et
R
the equation is not exact, but has an integrating factor µ(t) = exp( 1dt) =
et . Thus, the equivalent equation
et
y2
dy
+ 2yet + et (y + et )
=0
2
dt
128
Chapter 4.
Differential Equations
is exact. Then, as in the previous cases,

    φ(t, y) = e^t y^2/2 + y e^{2t} + h(y),
    φ(t, y) = e^t y^2/2 + y e^{2t} + k(t).

By comparison, we see that h(y) = 0 = k(t), so that φ(t, y) = e^t y^2/2 + y e^{2t} = c. Since this is a quadratic equation in y, we can solve it for y as a function of t to get

    y(t) = −e^t ± √(e^{2t} + 2c e^{−t}). □
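As a sanity check, one can verify symbolically (assuming sympy) that e^t renders the equation exact, and that the implicit relation φ(t, y) = c indeed collapses to the constant c along the solution branch found above:

```python
import sympy as sp

t, y, c = sp.symbols('t y c')
M = y**2/2 + 2*y*sp.exp(t)        # original M(t, y)
N = y + sp.exp(t)                 # original N(t, y)
mu = sp.exp(t)                    # the integrating factor found above

# After multiplying by mu, the equation becomes exact:
assert sp.simplify(sp.diff(mu*M, y) - sp.diff(mu*N, t)) == 0

# phi(t, y) = e^t y^2/2 + y e^{2t} is constant along the solution branch:
phi = sp.exp(t)*y**2/2 + y*sp.exp(2*t)
Y = -sp.exp(t) + sp.sqrt(sp.exp(2*t) + 2*c*sp.exp(-t))
assert sp.simplify(phi.subs(y, Y) - c) == 0
```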
Example 4.3.5 Use the methods of this section to find the general solution of

    dy/dt + p(t)y = q(t).
Solution: For M(t, y) = p(t)y − q(t) and N(t, y) = 1, we have

    (1/N)(∂M/∂y − ∂N/∂t) = (1/1)(p(t) − 0) = p(t) ≠ 0,

so the equation is not exact, but has an integrating factor µ(t) = exp(∫ p(t) dt).
Thus, the equivalent equation

    µ(t)(p(t)y − q(t)) + µ(t) dy/dt = 0

is exact. Then, as in the previous cases, from ∂φ/∂y = µ(t), we get φ(t, y) = µ(t)y + k(t). Since ∂φ/∂t = µ(t)M(t, y), we get

    µ′(t)y + k′(t) = µ(t)(p(t)y − q(t)).

Since µ′(t) = µ(t)p(t), we get k′(t) = −µ(t)q(t), and so

    φ(t, y) = µ(t)y + k(t) = µ(t)y − ∫ µ(t)q(t) dt = c,

which is the solution obtained in Section 4.1. □
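For a concrete instance of this reduction (assuming sympy; the choices p(t) = 2 and q(t) = e^t are illustrative, not from the text), the integrating-factor formula can be checked directly against the ODE:

```python
import sympy as sp

t, c = sp.symbols('t c')
p, q = sp.Integer(2), sp.exp(t)   # illustrative choices of p(t) and q(t)

mu = sp.exp(sp.integrate(p, t))               # integrating factor e^{2t}
y_general = (sp.integrate(mu*q, t) + c) / mu  # from mu*y - int(mu*q)dt = c

# The formula satisfies y' + p(t) y = q(t):
assert sp.simplify(sp.diff(y_general, t) + p*y_general - q) == 0
```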
4.4 Existence and Uniqueness Theorem
Given an initial value problem dy/dt = f(t, y) with y(t_0) = y_0, how do we know whether there is a solution? If we know it has a solution, how can we find an explicit one? If we have found one, are there any other solutions? If so, how many are there? These questions arise naturally in mathematics.
In some cases (such as linear differential equations), as we have seen in the earlier sections, the existence of a solution of the initial value problem can be established directly by actually solving the problem and exhibiting a formula for the solution. In general, however, there is no method of solving the equation that applies in all cases; for almost all differential equations, finding an explicit solution is practically impossible even if we know that a solution exists.

Therefore, for the general case it is necessary to adopt an indirect approach that establishes the existence of a solution of the initial value problem, but usually does not provide a practical means of finding it explicitly.

Note, however, that in actual applications it is usually more than sufficient to approximate the solution y(t) of the equation to four decimal places, and this can be done quite easily using computers. The following existence and uniqueness theorem guarantees the validity of such computations.
Theorem 4.4.1 [Fundamental Theorem of Ordinary Differential Equations I] Suppose that the differential equation dy/dt = f(t, y) is defined on the rectangle R = [t_0, t_0 + a] × [y_0 − b, y_0 + b], and that f and ∂f/∂y are continuous on R. Let

    M = max_{(t,y)∈R} |f(t, y)|,  α = min(a, b/M).

Then the initial value problem

    dy/dt = f(t, y(t)),  with y(t_0) = y_0, (t_0, y_0) ∈ R,   (4.1)

has a unique solution y(t) on the interval [t_0, t_0 + α]. A similar result holds for t < t_0.
The heart of the proof is in constructing a sequence of functions that converges to a limit function satisfying the initial value problem. Since the individual members of the sequence need not satisfy the desired conditions, and since it is usually impossible to compute more than a few members of the sequence explicitly, the limit function cannot be found explicitly except in very rare cases. Nevertheless, it is possible to show that the sequence in question converges and that its limit has the desired properties. The argument is fairly intricate and depends, in part, on techniques and results usually encountered in a course on advanced calculus. Thus, on a first reading of this book, the reader may skip this part.
The strategy for the proof is in the following three steps:

(1) Construct a sequence of functions y_n(t) which come closer and closer to a solution of (4.1).

(2) Show that the sequence of functions y_n(t) has a limit y(t) on a suitable interval [t_0, t_0 + α].

(3) Prove that the limit y(t) is a solution of (4.1) on the interval [t_0, t_0 + α].
Proof: (1) Construction of a sequence of functions y_n(t): By integrating both sides of equation (4.1), we get

    y(t) = y_0 + ∫_{t_0}^t f(s, y(s)) ds.   (4.2)

Thus, y(t) is a solution of (4.1) if and only if it is a continuous solution of (4.2).
Let us guess a first approximation to a solution of (4.2) to be the constant function y_0(t) = y_0. Then we define

    y_1(t) = y_0 + ∫_{t_0}^t f(s, y_0(s)) ds.

If y_1(t) = y_0, then y(t) = y_0 is indeed a solution of (4.2). If not, we define

    y_2(t) = y_0 + ∫_{t_0}^t f(s, y_1(s)) ds,

and so on. In this manner, we define

    y_{n+1}(t) = y_0 + ∫_{t_0}^t f(s, y_n(s)) ds,

to obtain a sequence {y_n(t)} of functions, called the Picard iterates. It turns out that the Picard iterates always converge, on a suitable interval, to a solution y(t) of (4.2).
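The iteration just described can be sketched symbolically (assuming sympy); the helper picard below returns the n-th Picard iterate as an expression in t:

```python
import sympy as sp

t, s = sp.symbols('t s')

def picard(f, t0, y0, n):
    """Return the n-th Picard iterate for y' = f(t, y), y(t0) = y0."""
    yk = sp.sympify(y0)
    for _ in range(n):
        yk = y0 + sp.integrate(f(s, yk.subs(t, s)), (s, t0, t))
    return sp.expand(yk)

# For y' = 2t with y(0) = 0, the first iterate is already the solution t^2:
assert picard(lambda tt, yy: 2*tt, 0, 0, 1) == t**2
```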
(2) Convergence of the Picard iterates: We cannot expect the Picard iterates to converge for all t. Thus, we first try to find an interval on which all the iterates y_n(t) are uniformly bounded (that is, |y_n(t)| ≤ K for all n and all t in the interval, for some fixed constant K).
Lemma 4.4.2 Choose any two positive numbers a and b, and let R be the rectangle [t_0, t_0 + a] × [y_0 − b, y_0 + b]. Let

    M = max_{(t,y)∈R} |f(t, y)|,  α = min(a, b/M).

Then

    |y_n(t) − y_0| ≤ M(t − t_0),  for t_0 ≤ t ≤ t_0 + α.
This lemma claims that the graph of y_n(t) is sandwiched between the lines y = y_0 + M(t − t_0) and y = y_0 − M(t − t_0), for t_0 ≤ t ≤ t_0 + α. [Figure: two panels showing the case t_0 + α = t_0 + a and the case t_0 + α = t_0 + b/M.] In fact, by the construction of α, the graph of y_n(t) is contained in R = [t_0, t_0 + a] × [y_0 − b, y_0 + b].
Proof: Use induction on n. The lemma is trivially true for n = 0 since y_0(t) = y_0. Suppose it is true for n, so that |y_n(t) − y_0| ≤ M(t − t_0). Then

    |y_{n+1}(t) − y_0| = |∫_{t_0}^t f(s, y_n(s)) ds| ≤ ∫_{t_0}^t |f(s, y_n(s))| ds ≤ M(t − t_0),

for t_0 ≤ t ≤ t_0 + α. □
We can rewrite y_n(t) as

    y_n(t) = y_0(t) + (y_1(t) − y_0(t)) + (y_2(t) − y_1(t)) + ⋯ + (y_n(t) − y_{n−1}(t)).

Thus y_n(t) converges if and only if the series

    (y_1(t) − y_0(t)) + (y_2(t) − y_1(t)) + ⋯ + (y_n(t) − y_{n−1}(t)) + ⋯

converges, for which it suffices that it converges absolutely: i.e., Σ_{n=1}^∞ |y_n(t) − y_{n−1}(t)| < ∞. Observe that

    |y_n(t) − y_{n−1}(t)| = |∫_{t_0}^t [f(s, y_{n−1}(s)) − f(s, y_{n−2}(s))] ds|
                         ≤ ∫_{t_0}^t |f(s, y_{n−1}(s)) − f(s, y_{n−2}(s))| ds
                         = ∫_{t_0}^t |∂f(s, ξ(s))/∂y| |y_{n−1}(s) − y_{n−2}(s)| ds,

where ξ(s) lies between y_{n−1}(s) and y_{n−2}(s), by the mean value theorem. By Lemma 4.4.2, the points (s, ξ(s)) all lie in the rectangle R for s ≤ t_0 + α, and so

    |y_n(t) − y_{n−1}(t)| ≤ L ∫_{t_0}^t |y_{n−1}(s) − y_{n−2}(s)| ds,  t_0 ≤ t ≤ t_0 + α,

where L = max_{(t,y)∈R} |∂f(t, y)/∂y|.
For n = 2 and 3, this inequality becomes:

    |y_2(t) − y_1(t)| ≤ L ∫_{t_0}^t |y_1(s) − y_0| ds ≤ L ∫_{t_0}^t M(s − t_0) ds = LM(t − t_0)^2 / 2,

    |y_3(t) − y_2(t)| ≤ L ∫_{t_0}^t |y_2(s) − y_1(s)| ds ≤ L^2 M ∫_{t_0}^t (s − t_0)^2/2 ds = L^2 M(t − t_0)^3 / 3!,

and, in general,

    |y_n(t) − y_{n−1}(t)| ≤ L^{n−1} M(t − t_0)^n / n!,  t_0 ≤ t ≤ t_0 + α.
Therefore,

    Σ_{n=1}^∞ |y_n(t) − y_{n−1}(t)| ≤ Σ_{n=1}^∞ L^{n−1} M(t − t_0)^n / n!
                                   ≤ Σ_{n=1}^∞ L^{n−1} M α^n / n!
                                   = (M/L) Σ_{n=1}^∞ (Lα)^n / n!
                                   = (M/L)(e^{Lα} − 1) < ∞.
Thus, the Picard iterates y_n(t) converge for all t ∈ [t_0, t_0 + α] to some limit function y(t). □
(3) y(t) satisfies the initial-value problem: We show that y(t) is continuous and satisfies

    y(t) = y_0 + ∫_{t_0}^t f(s, y(s)) ds.

Since

    y_{n+1}(t) = y_0 + ∫_{t_0}^t f(s, y_n(s)) ds,

by taking the limits of both sides we get:

    y(t) = lim_{n→∞} y_{n+1}(t)
         = y_0 + lim_{n→∞} ∫_{t_0}^t f(s, y_n(s)) ds
        ?= y_0 + ∫_{t_0}^t f(s, lim_{n→∞} y_n(s)) ds
         = y_0 + ∫_{t_0}^t f(s, y(s)) ds.
We want to justify the equality in the middle. For this, we show that

    |∫_{t_0}^t f(s, y(s)) ds − ∫_{t_0}^t f(s, y_n(s)) ds| → 0,  as n → ∞.

Note that the graph of y(t) lies in R on [t_0, t_0 + α] since those of the y_n(t) are in R. Hence,

    |∫_{t_0}^t (f(s, y(s)) − f(s, y_n(s))) ds| ≤ ∫_{t_0}^t |f(s, y(s)) − f(s, y_n(s))| ds
                                              ≤ L ∫_{t_0}^t |y(s) − y_n(s)| ds.
Moreover, by the construction of y_n(t) and y(t), we have

    y(s) − y_n(s) = Σ_{k=n+1}^∞ (y_k(s) − y_{k−1}(s)),
so that

    |y(s) − y_n(s)| ≤ M Σ_{k=n+1}^∞ L^{k−1}(s − t_0)^k / k!
                    ≤ M Σ_{k=n+1}^∞ L^{k−1} α^k / k!
                    = (M/L) Σ_{k=n+1}^∞ (Lα)^k / k!,

and therefore

    ∴ |∫_{t_0}^t {f(s, y(s)) − f(s, y_n(s))} ds| ≤ L ∫_{t_0}^t |y(s) − y_n(s)| ds
                                                ≤ Mα Σ_{k=n+1}^∞ (Lα)^k / k! → 0,

as n → ∞, since the last summation is the tail end of the convergent Taylor series of e^{Lα}.
To show y(t) is continuous, we prove that for any ε > 0 we can find δ > 0 such that

    |y(t + h) − y(t)| < ε,  if |h| < δ.

Since we do not know the explicit form of y(t), we cannot compare y(t + h) and y(t) directly. However, note that

    |y(t + h) − y(t)| ≤ |y(t + h) − y_N(t + h)| + |y_N(t + h) − y_N(t)| + |y_N(t) − y(t)|.

Since y_n(t) → y(t), as n → ∞, for t ∈ [t_0, t_0 + α], we can take N so large that

    (M/L) Σ_{k=N+1}^∞ (Lα)^k / k! < ε/3.

Then

    |y(t + h) − y_N(t + h)| < ε/3  and  |y_N(t) − y(t)| < ε/3,

for h sufficiently small so that t + h < t_0 + α. Moreover, since y_N(t) is obtained from N repeated integrations of continuous functions, it is continuous, and so one can take δ > 0 so small that

    |y_N(t + h) − y_N(t)| < ε/3,  for |h| < δ.

Consequently, for |h| < δ,

    |y(t + h) − y(t)| < ε/3 + ε/3 + ε/3 = ε.
(4) Uniqueness of y(t): Suppose that z(t) is another solution. Then

    y(t) = y_0 + ∫_{t_0}^t f(s, y(s)) ds,  and  z(t) = y_0 + ∫_{t_0}^t f(s, z(s)) ds.

Thus, for t_0 ≤ t ≤ t_0 + α,

    |y(t) − z(t)| = |∫_{t_0}^t (f(s, y(s)) − f(s, z(s))) ds|
                  ≤ ∫_{t_0}^t |f(s, y(s)) − f(s, z(s))| ds
                  ≤ L ∫_{t_0}^t |y(s) − z(s)| ds.
Lemma 4.4.3 Let w(t) be a nonnegative function such that

    w(t) ≤ L ∫_{t_0}^t w(s) ds.

Then w(t) is identically zero.
Proof: If differentiation preserved the inequality, so that w′(t) ≤ Lw(t) followed from the hypothesis, then we would have

    0 ≤ e^{−L(t−t_0)} w(t) ≤ w(t_0) ≤ L ∫_{t_0}^{t_0} w(s) ds = 0,

so w(t) = 0 and the proof would be done.
However, differentiation does not preserve inequalities, while integration does. Thus we use the trick of setting u(t) = ∫_{t_0}^t w(s) ds. Then

    u′(t) = w(t) ≤ L ∫_{t_0}^t w(s) ds = Lu(t).

Now this implies 0 ≤ e^{−L(t−t_0)} u(t) ≤ u(t_0) = ∫_{t_0}^{t_0} w(s) ds = 0, for t ≥ t_0, and so u(t) = 0 and 0 ≤ w(t) ≤ L ∫_{t_0}^t w(s) ds = Lu(t) = 0. □
Lemma 4.4.3 implies |y(t) − z(t)| = 0, i.e., y(t) = z(t), for all t ∈ [t_0, t_0 + α]. This completes the proof of Theorem 4.4.1. □
Example 4.4.1 Compute the Picard iterates for the initial value problem:

    y′ = 1 + y^3,  y(1) = 1.

    y_0(t) = 1,
    y_1(t) = 1 + ∫_1^t (1 + 1) ds = 1 + 2(t − 1),
    y_2(t) = 1 + ∫_1^t {1 + [1 + 2(s − 1)]^3} ds
           = 1 + 2(t − 1) + 3(t − 1)^2 + 4(t − 1)^3 + 2(t − 1)^4. □
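The iterate y_2 above can be double-checked symbolically (assuming sympy):

```python
import sympy as sp

t, s = sp.symbols('t s')
y1 = 1 + 2*(t - 1)                                   # first Picard iterate
y2 = 1 + sp.integrate(1 + (y1.subs(t, s))**3, (s, 1, t))
expected = 1 + 2*(t-1) + 3*(t-1)**2 + 4*(t-1)**3 + 2*(t-1)**4
assert sp.expand(y2 - expected) == 0
```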
Example 4.4.2 Compute the Picard iterates for the initial value problem y′ = y, y(0) = 1, and show that they converge to y(t) = e^t.

    y_0(t) = 1,
    y_1(t) = 1 + ∫_0^t 1 ds = 1 + t,
    y_2(t) = 1 + ∫_0^t (1 + s) ds = 1 + t + t^2/2!,
    ⋮
    y_n(t) = 1 + ∫_0^t (1 + s + ⋯ + s^{n−1}/(n − 1)!) ds = 1 + t + t^2/2! + ⋯ + t^n/n!,

which converges to e^t. □
Example 4.4.3 Consider the initial value problem:

    dy/dt = (sin 2t) y^{1/3},  y(0) = 0.

One solution is y(t) = 0. If we ignore the initial condition y(0) = 0 and rewrite the equation as

    (1/y^{1/3}) dy/dt = sin 2t,

we get, by integration,

    (3/2) y^{2/3} = ∫_0^t sin 2s ds = (1 − cos 2t)/2 = sin^2 t,

or

    y(t) = ± √(8/27) sin^3 t,

which gives two other solutions. This non-uniqueness of the solution is due to the fact that the right-hand side of the equation does not have a partial derivative with respect to y at y = 0. □
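The extra solutions can be verified numerically at a few sample points in (0, π), where sin t > 0 (plain Python; the sample points are arbitrary choices):

```python
import math

c = math.sqrt(8/27)
for tv in (0.3, 0.7, 1.2, 2.0, 2.8):   # points in (0, pi), where sin t > 0
    y = c * math.sin(tv)**3             # the candidate solution
    dy = 3 * c * math.sin(tv)**2 * math.cos(tv)   # its derivative y'(t)
    rhs = math.sin(2*tv) * y**(1/3)     # right-hand side of the ODE
    assert abs(dy - rhs) < 1e-12
```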
Example 4.4.4 Consider the initial value problem:

    dy/dt = f(t, y) = t^2 + e^{−y^2},  y(0) = 0.

Choose a = 1/2 and b = 1. Then on the rectangle R = [0, 1/2] × [−1, 1],

    M = max_{(t,y)∈R} (t^2 + e^{−y^2}) = 1 + (1/2)^2 = 5/4.

Thus, for α = min(1/2, 4/5) = 1/2, the solution y(t) exists for t ∈ [0, α] = [0, 1/2], by Theorem 4.4.1, and |y(t)| ≤ 1. □
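The bound M = 5/4 can be confirmed by a brute-force grid search over R (plain Python; the grid resolution is an arbitrary choice):

```python
import math

# exhaustive grid over R = [0, 1/2] x [-1, 1]
M = max((0.5*i/1000)**2 + math.exp(-(-1 + 2*j/1000)**2)
        for i in range(1001) for j in range(1001))
assert abs(M - 5/4) < 1e-9      # maximum attained at t = 1/2, y = 0
alpha = min(0.5, 1/M)           # alpha = min(a, b/M) with a = 1/2, b = 1
assert abs(alpha - 0.5) < 1e-9
```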
Example 4.4.5 Consider the initial value problem:

    dy/dt = f(t, y) = y^3 + e^{−t^2},  y(0) = 1.

Choose a = 1/9 and b = 1. Then on the rectangle R = [0, 1/9] × [0, 2],

    M = max_{(t,y)∈R} (y^3 + e^{−t^2}) = 1 + 2^3 = 9.

Thus, for α = min(1/9, 1/9) = 1/9, the solution y(t) exists for t ∈ [0, α] = [0, 1/9], by Theorem 4.4.1, and 0 ≤ y(t) ≤ 2. □
Example 4.4.6 Consider the initial value problem:

    dy/dt = f(t, y) = 1 + y^2,  y(0) = 0.

On the rectangle R = [0, a] × [−b, b],

    M = max_{(t,y)∈R} (1 + y^2) = 1 + b^2.

Thus, for α = min(a, b/(1 + b^2)), the solution y(t) exists for t ∈ [0, α]. The largest α that we can achieve is the maximum value of b/(1 + b^2), which is 1/2. Thus, Theorem 4.4.1 predicts that y(t) exists for 0 ≤ t ≤ 1/2. However, since the actual solution y(t) = tan t exists on [0, π/2), Theorem 4.4.1 has some limitations. □
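The gap between the guaranteed interval [0, 1/2] and the actual interval [0, π/2) can also be seen numerically. The sketch below integrates the equation with a hand-rolled classical RK4 stepper (an illustrative choice, not from the text) and matches tan t well past t = 1/2:

```python
import math

def rk4(f, t0, y0, t1, n=10000):
    """Integrate y' = f(t, y), y(t0) = y0, up to t1 with n RK4 steps."""
    h, t, y = (t1 - t0)/n, t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h*k1/2)
        k3 = f(t + h/2, y + h*k2/2)
        k4 = f(t + h, y + h*k3)
        y += h*(k1 + 2*k2 + 2*k3 + k4)/6
        t += h
    return y

f = lambda t, y: 1 + y*y
# Agrees with tan(t) at t = 1.5, close to the blow-up at pi/2 ~ 1.5708:
assert abs(rk4(f, 0.0, 0.0, 1.5) - math.tan(1.5)) < 1e-3
```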
Example 4.4.7 Consider the initial value problem:

    dy/dt = f(t, y),  y(t_0) = y_0.

Suppose that |f(t, y)| ≤ K on [t_0, ∞) × ℝ. Then, on the rectangle R = [t_0, t_0 + a] × [y_0 − b, y_0 + b],

    M = max_{(t,y)∈R} |f(t, y)| ≤ K,

for any a > 0 and b > 0. Thus, for α = min(a, b/K), the solution y(t) exists for t ∈ [t_0, t_0 + α]. Now we can make α = min(a, b/K) as large as desired by choosing a and b sufficiently large. Hence, y(t) exists for all t ≥ t_0. □