Lectures on Differential Equations

Philip Korman
Department of Mathematical Sciences
University of Cincinnati
Cincinnati Ohio 45221-0025
Copyright © 2008 by Philip L. Korman
Contents

Introduction

1 First Order Equations
  1.1 Integration by Guess-and-Check
  1.2 First Order Linear Equations
    1.2.1 The Integrating Factor
  1.3 Separable Equations
    1.3.1 Problems
  1.4 Some Special Equations
    1.4.1 Homogeneous Equations
    1.4.2 The Logistic Population Model
    1.4.3 Bernoulli's Equation
    1.4.4∗ Riccati's Equation
    1.4.5∗ Parametric Integration
    1.4.6 Some Applications
  1.5 Exact Equations
  1.6 Existence and Uniqueness of Solution
  1.7 Numerical Solution by Euler's method
    1.7.1 Problems
  1.8∗ The existence and uniqueness theorem

2 Second Order Equations
  2.1 Special Second Order Equations
    2.1.1 y is not present in the equation
    2.1.2 x is not present in the equation
    2.1.3∗ The trajectory of pursuit
  2.2 Linear Homogeneous Equations with Constant Coefficients
    2.2.1 Characteristic equation has two distinct real roots
    2.2.2 Characteristic equation has only one (repeated) real root
  2.3 Characteristic equation has two complex conjugate roots
    2.3.1 Euler's formula
    2.3.2 The General Solution
    2.3.3 Problems
  2.4 Linear Second Order Equations with Variable Coefficients
  2.5 Some Applications of the Theory
    2.5.1 The Hyperbolic Sine and Cosine Functions
    2.5.2 Different Ways to Write a General Solution
    2.5.3 Finding the Second Solution
    2.5.4 Problems
  2.6 Non-homogeneous Equations. The easy case.
  2.7 Non-homogeneous Equations. When one needs something extra.
  2.8 Variation of Parameters
  2.9 Convolution Integral
    2.9.1 Differentiating Integrals
    2.9.2 Yet Another Way to Compute a Particular Solution
  2.10 Applications of Second Order Equations
    2.10.1 Vibrating Spring
    2.10.2 Problems
    2.10.3 Meteor Approaching the Earth
    2.10.4 Damped Oscillations
  2.11 Further Applications
    2.11.1 Forced and Damped Oscillations
    2.11.2 Discontinuous Forcing Term
    2.11.3 Oscillations of a Pendulum
    2.11.4 Sympathetic Oscillations
  2.12 Oscillations of a Spring Subject to a Periodic Force
    2.12.1 Fourier Series
    2.12.2 Vibrations of a Spring Subject to a Periodic Force
  2.13 Euler's Equation
    2.13.1 Preliminaries
    2.13.2 The Important Class of Equations
  2.14 Linear Equations of Order Higher Than Two
    2.14.1 The Polar Form of Complex Numbers
    2.14.2 Linear Homogeneous Equations
    2.14.3 Non-Homogeneous Equations
    2.14.4 Problems

3 Using Infinite Series to Solve Differential Equations
  3.1 Series Solution Near a Regular Point
    3.1.1 Maclauren and Taylor Series
    3.1.2 A Toy Problem
    3.1.3 Using Series When Other Methods Fail
  3.2 Solution Near a Mildly Singular Point
    3.2.1∗ Derivation of J0(x) by differentiation of the equation
  3.3 Moderately Singular Equations
    3.3.1 Problems

4 Laplace Transform
  4.1 Laplace Transform And Its Inverse
    4.1.1 Review of Improper Integrals
    4.1.2 Laplace Transform
    4.1.3 Inverse Laplace Transform
  4.2 Solving The Initial Value Problems
    4.2.1 Step Functions
  4.3 The Delta Function and Impulse Forces
  4.4 Convolution and the Tautochrone curve
  4.5 Distributions
    4.5.1 Problems

5 Linear Systems of Differential Equations
  5.1 The Case of Distinct Eigenvalues
    5.1.1 Review of Vectors and Matrices
    5.1.2 Linear First Order Systems with Constant Coefficients
  5.2 A Pair of Complex Conjugate Eigenvalues
    5.2.1 Complex Valued Functions
    5.2.2 General Solution
  5.3 Exponential of a matrix
    5.3.1 Problems

6 Non-Linear Systems
  6.1 Predator-Prey Interaction
  6.2 Competing species
  6.3 An application to epidemiology
  6.4 Lyapunov stability
    6.4.1 Problems

7 Fourier Series and Boundary Value Problems
  7.1 Fourier series for functions of an arbitrary period
    7.1.1 Even and odd functions
    7.1.2 Further examples and the convergence theorem
  7.2 Fourier cosine and Fourier sine series
  7.3 Two point boundary value problems
    7.3.1 Problems
  7.4 Heat equation and separation of variables
    7.4.1 Separation of variables
  7.5 Laplace's equation
  7.6 The wave equation
    7.6.1 Non-homogeneous problems
    7.6.2 Problems
  7.7 Calculating the Earth's temperature
    7.7.1 The complex form of the Fourier series
    7.7.2 Temperatures inside the Earth and wine cellars
  7.8 Laplace's equation in circular domains
  7.9 Sturm-Liouville problems
    7.9.1 Fourier-Bessel series
    7.9.2 Cooling of a cylindrical tank
  7.10 Green's function
  7.11 Fourier Transform
  7.12 Problems on infinite domains
    7.12.1 Evaluation of some integrals
    7.12.2 Heat equation for −∞ < x < ∞
    7.12.3 Steady state temperatures for the upper half plane
    7.12.4 Using Laplace transform for a semi-infinite string
    7.12.5 Problems

8 Numerical Computations
  8.1 The capabilities of software systems, like Mathematica
  8.2 Solving boundary value problems
  8.3 Solving nonlinear boundary value problems
  8.4 Direction fields

9 Appendix
  .1 The chain rule and its descendants
  .2 Partial fractions
  .3 Eigenvalues and eigenvectors
Introduction
This book is based on several courses that I taught at the University of
Cincinnati. Chapters 1-4 are based on the course “Differential Equations”
for sophomores in science and engineering. These are actual lecture notes,
more informal than many textbooks. While some theoretical material is
either quoted, or just mentioned without proof, I have tried to show all of
the details when doing problems. I have tried to use plain language, and not
to be too wordy. I think an extra word of explanation often has as much
potential to confuse a student as to be helpful. I have also tried not to
overwhelm students with new information. I forgot who said it first: “one
should teach the truth, nothing but the truth, but not the whole truth”.
How useful are differential equations? Here is what Isaac Newton said:
“It is useful to solve differential equations”. And what he knew was just
the beginning. Today differential equations are used widely in science and
engineering. It is also a beautiful subject, and it lets students see Calculus
“in action”.
It is a pleasure to thank Ken Meyer and Dieter Schmidt for constant
encouragement, while I was writing this book. I also wish to thank Ken
for reading the entire book, and making a number of useful suggestions,
like doing Fourier series early, with applications to periodic vibrations and
radio tuning. I wish to thank Roger Chalkley, Tomasz Adamowicz, Dieter
Schmidt, and Ning Zhong for a number of useful comments.
Chapter 1
First Order Equations
1.1 Integration by Guess-and-Check
Many problems in differential equations end with a computation of an integral. One even uses the term “integration of a differential equation” instead
of “solution”. We need to be able to compute integrals quickly.
Recall the product rule:
$$(fg)' = fg' + f'g\,.$$
Example $\int xe^x\,dx$. We need to find the function whose derivative is $xe^x$. If we try the guess $xe^x$, then its derivative
$$(xe^x)' = xe^x + e^x$$
has an extra term $e^x$. To remove this extra term, we subtract $e^x$ from the initial guess, i.e.,
$$\int xe^x\,dx = xe^x - e^x + c\,.$$
By differentiation, we verify that this is correct. Of course, integration by parts may also be used.
Example $\int x\cos 3x\,dx$. Starting with the initial guess $\frac{1}{3}x\sin 3x$, whose derivative is $x\cos 3x + \frac{1}{3}\sin 3x$, we compute
$$\int x\cos 3x\,dx = \frac{1}{3}x\sin 3x + \frac{1}{9}\cos 3x + c\,.$$
We see that the initial guess is the product $f(x)g(x)$, chosen in such a way that $f(x)g'(x)$ gives the integrand.
Example $\int xe^{-5x}\,dx$. Starting with the initial guess $-\frac{1}{5}xe^{-5x}$, we compute
$$\int xe^{-5x}\,dx = -\frac{1}{5}xe^{-5x} - \frac{1}{25}e^{-5x} + c\,.$$

Example $\int x^2\sin 3x\,dx$. The initial guess is $-\frac{1}{3}x^2\cos 3x$. Its derivative
$$\left(-\frac{1}{3}x^2\cos 3x\right)' = x^2\sin 3x - \frac{2}{3}x\cos 3x$$
has an extra term $-\frac{2}{3}x\cos 3x$. To remove this term, we modify our guess: $-\frac{1}{3}x^2\cos 3x + \frac{2}{9}x\sin 3x$. Its derivative
$$\left(-\frac{1}{3}x^2\cos 3x + \frac{2}{9}x\sin 3x\right)' = x^2\sin 3x + \frac{2}{9}\sin 3x$$
still has an extra term $\frac{2}{9}\sin 3x$. So we make the final adjustment
$$\int x^2\sin 3x\,dx = -\frac{1}{3}x^2\cos 3x + \frac{2}{9}x\sin 3x + \frac{2}{27}\cos 3x + c\,.$$
This is easier than integrating by parts twice.
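Each of these guesses is confirmed by differentiation, and that check is easy to automate. Here is a minimal sketch, assuming Python with the sympy library (an assumption made for illustration; no software is required by the text), verifying the last antiderivative:

```python
# Verify the guess-and-check answer for the integral of x^2 sin(3x),
# assuming the sympy library is available.
import sympy as sp

x = sp.symbols('x')
F = (-sp.Rational(1, 3)*x**2*sp.cos(3*x)
     + sp.Rational(2, 9)*x*sp.sin(3*x)
     + sp.Rational(2, 27)*sp.cos(3*x))

# Differentiating the answer should recover the integrand.
print(sp.simplify(sp.diff(F, x)))  # x**2*sin(3*x)
```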
Example $\int x\sqrt{x^2+4}\,dx$. We begin by rewriting this integral as
$$\int x\left(x^2+4\right)^{1/2} dx\,.$$
One usually computes this integral by the substitution $u = x^2+4$, with $du = 2x\,dx$. Forgetting a constant multiple, the integral becomes $\int u^{1/2}\,du$. Ignoring a constant multiple again, this evaluates to $u^{3/2}$. Returning to the original variable, we have our initial guess $\left(x^2+4\right)^{3/2}$. Differentiation
$$\frac{d}{dx}\left(x^2+4\right)^{3/2} = 3x\left(x^2+4\right)^{1/2}$$
gives us the integrand with an extra factor of 3. To fix that, we multiply the initial guess by $\frac{1}{3}$:
$$\int x\sqrt{x^2+4}\,dx = \frac{1}{3}\left(x^2+4\right)^{3/2} + c\,.$$
Example $\int \frac{1}{(x^2+1)(x^2+4)}\,dx$. Instead of using partial fractions, let us try to split the integrand as
$$\frac{1}{x^2+1} - \frac{1}{x^2+4}\,.$$
We see that this is off by a factor. The correct formula is
$$\frac{1}{(x^2+1)(x^2+4)} = \frac{1}{3}\left(\frac{1}{x^2+1} - \frac{1}{x^2+4}\right)\,.$$
Then
$$\int \frac{1}{(x^2+1)(x^2+4)}\,dx = \frac{1}{3}\tan^{-1}x - \frac{1}{6}\tan^{-1}\frac{x}{2} + c\,.$$
Sometimes one can guess the splitting twice, as in the following case.

Example $\int \frac{1}{x^2(1-x^2)}\,dx$.
$$\frac{1}{x^2(1-x^2)} = \frac{1}{x^2} + \frac{1}{1-x^2} = \frac{1}{x^2} + \frac{1}{(1-x)(1+x)} = \frac{1}{x^2} + \frac{1}{2}\frac{1}{1-x} + \frac{1}{2}\frac{1}{1+x}\,.$$
Then (for $|x| < 1$)
$$\int \frac{1}{x^2(1-x^2)}\,dx = -\frac{1}{x} - \frac{1}{2}\ln(1-x) + \frac{1}{2}\ln(1+x) + c\,.$$

1.2 First Order Linear Equations
Background

Suppose we need to find the function $y(x)$ so that
$$y'(x) = x\,.$$
This is a differential equation, because it involves a derivative of the unknown function. This is a first order equation, as it only involves the first derivative. The solution is of course
$$y(x) = \frac{x^2}{2} + c\,, \qquad (2.1)$$
where $c$ is an arbitrary constant. We see that differential equations have infinitely many solutions. The formula (2.1) gives us the general solution.
Then we can select the one that satisfies an extra initial condition. For example, for the problem
$$y'(x) = x\,, \quad y(0) = 5 \qquad (2.2)$$
we begin with the general solution given in formula (2.1), and then evaluate it at $x = 0$:
$$y(0) = c = 5\,.$$
So that $c = 5$, and the solution of the problem (2.2) is
$$y(x) = \frac{x^2}{2} + 5\,.$$
The problem (2.2) is an example of an initial value problem. If the variable $x$ represents time, then the value of $y(x)$ at the initial time is prescribed to be 5. The initial condition may be prescribed at other values of $x$, as in the following example:
$$y' = y\,, \quad y(1) = 2e\,.$$
Here the initial condition is prescribed at $x = 1$; $e$ denotes the Euler number $e \approx 2.718$. Observe that while $y$ and $y'$ are both functions of $x$, we do not spell this out. This problem is still in the Calculus realm. Indeed, we are looking for a function $y(x)$ whose derivative is the same as $y(x)$. This is a property of the function $e^x$, and its constant multiples. I.e., the general solution is
$$y(x) = ce^x\,,$$
and then the initial condition gives
$$y(1) = ce = 2e\,,$$
so that $c = 2$. The solution is then
$$y(x) = 2e^x\,.$$
We see that the main thing is finding the general solution. Selecting $c$ to satisfy the initial condition is usually easy.
Recall from Calculus that
$$\frac{d}{dx}e^{g(x)} = e^{g(x)}g'(x)\,.$$
In case $g(x)$ is an integral, we have
$$\frac{d}{dx}e^{\int p(x)\,dx} = p(x)e^{\int p(x)\,dx}\,, \qquad (2.3)$$
because the derivative of the integral is $p(x)$.
1.2.1 The Integrating Factor

Let us find the general solution of the equation
$$y' + p(x)y = g(x)\,, \qquad (2.4)$$
where $p(x)$ and $g(x)$ are given functions. This is a linear equation, as we have a linear function of $y$ and $y'$. Because we know $p(x)$, we can calculate the function
$$\mu(x) = e^{\int p(x)\,dx}\,,$$
and its derivative
$$\mu'(x) = p(x)e^{\int p(x)\,dx} = p(x)\mu\,. \qquad (2.5)$$
We now multiply the equation (2.4) by $\mu(x)$, giving
$$y'\mu + yp(x)\mu = \mu g(x)\,. \qquad (2.6)$$
Let us use the product rule and the formula (2.5) to calculate the derivative
$$\frac{d}{dx}\left[y\mu\right] = y'\mu + y\mu' = y'\mu + yp(x)\mu\,.$$
So we may rewrite (2.6) in the form
$$\frac{d}{dx}\left[\mu y\right] = \mu g(x)\,. \qquad (2.7)$$
This allows us to compute the general solution. Indeed, we know the function on the right. By integration we express $\mu(x)y(x) = \int \mu(x)g(x)\,dx$, and then solve for $y(x)$.

In practice one needs to memorize the formula for $\mu(x)$, and the form (2.7). Computation of $\mu(x)$ will involve a constant of integration. We will always set $c = 0$, because the method works for any $c$.
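The whole procedure can also be cross-checked against a computer algebra system. A minimal sketch, assuming Python with the sympy library (an illustration, not part of the course):

```python
# Solve y' + 2x y = x, y(0) = 2 (the first worked example below),
# assuming sympy is available; dsolve solves the linear equation symbolically.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

sol = sp.dsolve(sp.Eq(y(x).diff(x) + 2*x*y(x), x), y(x), ics={y(0): 2})
print(sol)  # y(x) = 1/2 + (3/2) exp(-x**2), the same answer found by hand
```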
Example Solve
$$y' + 2xy = x\,, \quad y(0) = 2\,.$$
Here $p(x) = 2x$ and $g(x) = x$. Compute
$$\mu(x) = e^{\int 2x\,dx} = e^{x^2}\,.$$
Equation (2.7) takes the form
$$\frac{d}{dx}\left[e^{x^2}y\right] = xe^{x^2}\,.$$
Integrate both sides, and then perform integration by the substitution $u = x^2$ (or use guess-and-check):
$$e^{x^2}y = \int xe^{x^2}\,dx = \frac{1}{2}e^{x^2} + c\,.$$
Solving for $y$,
$$y(x) = \frac{1}{2} + ce^{-x^2}\,.$$
From the initial condition
$$y(0) = \frac{1}{2} + c = 2\,,$$
i.e., $c = \frac{3}{2}$. Answer: $y(x) = \frac{1}{2} + \frac{3}{2}e^{-x^2}$.
Example Solve
$$y' + \frac{1}{t}y = \cos 2t\,, \quad y(\pi/2) = 1\,.$$
Here the independent variable is $t$, $y = y(t)$, but the method is of course the same. Compute
$$\mu(t) = e^{\int \frac{1}{t}\,dt} = e^{\ln t} = t\,,$$
and then
$$\frac{d}{dt}\left[ty\right] = t\cos 2t\,.$$
Integrate both sides, and perform integration by parts:
$$ty = \int t\cos 2t\,dt = \frac{1}{2}t\sin 2t + \frac{1}{4}\cos 2t + c\,.$$
Divide by $t$:
$$y(t) = \frac{1}{2}\sin 2t + \frac{1}{4}\frac{\cos 2t}{t} + \frac{c}{t}\,.$$
The initial condition gives
$$y(\pi/2) = -\frac{1}{4}\frac{1}{\pi/2} + \frac{c}{\pi/2} = 1\,,$$
i.e., $c = \pi/2 + \frac{1}{4}$, and so the solution is
$$y(t) = \frac{1}{2}\sin 2t + \frac{1}{4}\frac{\cos 2t}{t} + \frac{\pi/2 + \frac{1}{4}}{t}\,. \qquad (2.8)$$
This function $y(t)$ gives us a curve, called the integral curve. The initial condition tells us that $y = 1$ when $t = \pi/2$, i.e., the point $(\pi/2, 1)$ lies on the integral curve. What is the maximal interval on which the solution (2.8) is valid? I.e., starting with the initial $t = \pi/2$, how far can we continue the solution to the left and to the right of the initial $t$? We see from (2.8) that the maximal interval is $(0, \infty)$. At $t = 0$ the solution $y(t)$ is undefined.
Example Solve
$$x\frac{dy}{dx} + 2y = \sin x\,, \quad y(-\pi) = -2\,.$$
Here the equation is not in the form (2.4), for which the theory applies. We divide the equation by $x$:
$$\frac{dy}{dx} + \frac{2}{x}y = \frac{\sin x}{x}\,.$$
Now the equation is in the right form, with $p(x) = \frac{2}{x}$ and $g(x) = \frac{\sin x}{x}$. As before, we compute, using the properties of logarithms,
$$\mu(x) = e^{\int \frac{2}{x}\,dx} = e^{2\ln x} = e^{\ln x^2} = x^2\,.$$
And then
$$\frac{d}{dx}\left[x^2 y\right] = x^2\,\frac{\sin x}{x} = x\sin x\,.$$
Integrate both sides, and perform integration by parts:
$$x^2 y = \int x\sin x\,dx = -x\cos x + \sin x + c\,,$$
giving us the general solution
$$y(x) = -\frac{\cos x}{x} + \frac{\sin x}{x^2} + \frac{c}{x^2}\,.$$
The initial condition implies
$$y(-\pi) = -\frac{1}{\pi} + \frac{c}{\pi^2} = -2\,.$$
Solve for $c$:
$$c = -2\pi^2 + \pi\,.$$
Answer: $y(x) = -\frac{\cos x}{x} + \frac{\sin x}{x^2} + \frac{-2\pi^2 + \pi}{x^2}$. This solution is valid on the interval $(-\infty, 0)$ (that is how far it can be continued to the left and to the right, starting from the initial $x = -\pi$).
Example Solve
$$\frac{dy}{dx} = \frac{1}{y - x}\,, \quad y(1) = 0\,.$$
We have a problem: not only is this equation not in the right form, it is a nonlinear equation, because $\frac{1}{y-x}$ is not a linear function of $y$, no matter what the number $x$ is. We need a little trick. Let us pretend that $dy$ and $dx$ are numbers, and take reciprocals of both sides of the equation:
$$\frac{dx}{dy} = y - x\,,$$
or
$$\frac{dx}{dy} + x = y\,.$$
Let us now think of $y$ as the independent variable, and $x$ as a function of $y$, i.e., $x = x(y)$. Then the last equation is linear, with $p(y) = 1$ and $g(y) = y$. We proceed as usual: $\mu(y) = e^{\int 1\,dy} = e^y$, and
$$\frac{d}{dy}\left[e^y x\right] = ye^y\,.$$
Integrating,
$$e^y x = \int ye^y\,dy = ye^y - e^y + c\,,$$
or
$$x(y) = y - 1 + ce^{-y}\,.$$
To find $c$ we need an initial condition. The original initial condition tells us that $y = 0$ for $x = 1$. For the inverse function $x(y)$ this translates to $x(0) = 1$. So that $c = 2$.

Answer: $x(y) = y - 1 + 2e^{-y}$ (see Figure 1.1).

Figure 1.1: The integral curve $x = y - 1 + 2e^{-y}$, with the initial point marked

The rigorous justification of this method is based on the formula for the derivative of an inverse function, which we recall next. Let $y = y(x)$ be some function, and $y_0 = y(x_0)$. Let $x = x(y)$ be its inverse function. Then $x_0 = x(y_0)$, and we have
$$\frac{dx}{dy}(y_0) = \frac{1}{\frac{dy}{dx}(x_0)}\,.$$

1.3 Separable Equations
Background

Suppose we have a function $F(y)$, and $y$ in turn depends on $x$, i.e., $y = y(x)$. So that in effect $F$ depends on $x$. To differentiate $F$ with respect to $x$ we use the Chain Rule from Calculus:
$$\frac{d}{dx}F(y(x)) = F'(y(x))\,\frac{dy}{dx}\,.$$
The method

Suppose we are given two functions $F(y)$ and $G(x)$, and let us use the corresponding lower case letters to denote their derivatives, i.e., $F'(y) = f(y)$ and $G'(x) = g(x)$. Our goal is to solve the equation (i.e., to find the general solution)
$$f(y)\frac{dy}{dx} = g(x)\,. \qquad (3.1)$$
This is a nonlinear equation.

We begin by rewriting this equation, using the upper case functions:
$$F'(y)\frac{dy}{dx} = G'(x)\,.$$
Using the Chain Rule, we rewrite the equation as
$$\frac{d}{dx}F(y) = \frac{d}{dx}G(x)\,.$$
If the derivatives of two functions are the same, these functions differ by a constant, i.e.,
$$F(y) = G(x) + c\,. \qquad (3.2)$$
This is the desired general solution! If one is lucky, it may be possible to solve this relation for $y$ as a function of $x$. If not, maybe one can solve for $x$ as a function of $y$. If both attempts fail, one can use an implicit plotting routine to draw the integral curves, i.e., the curves given by (3.2).

We now describe a simple procedure which leads from the equation (3.1) to its solution (3.2). Let us pretend that $\frac{dy}{dx}$ is not a notation for a derivative, but a ratio of two numbers $dy$ and $dx$. Clearing the denominator in (3.1),
$$f(y)\,dy = g(x)\,dx\,.$$
We have separated the variables: everything involving $y$ is now on the left, while $x$ appears only on the right. Integrate both sides:
$$\int f(y)\,dy = \int g(x)\,dx\,,$$
which gives us immediately the solution (3.2).
Example Solve
$$\frac{dy}{dx} = x(1 + y^2)\,.$$
To separate the variables, we multiply by $dx$, and divide by $1 + y^2$:
$$\int \frac{dy}{1 + y^2} = \int x\,dx\,.$$
I.e., the general solution is
$$\arctan y = \frac{1}{2}x^2 + c\,,$$
which we can also put into the form
$$y = \tan\left(\frac{1}{2}x^2 + c\right)\,.$$
Example Solve
$$\left(xy^2 + x\right)dx + e^x\,dy = 0\,.$$
This is an example of a differential equation written in differential form. (Dividing through by $e^x\,dx$, we can put it into the familiar form $\frac{dy}{dx} = -\frac{xy^2 + x}{e^x}$, although there is no need to do that.) By factoring, we are able to separate the variables:
$$e^x\,dy = -x(y^2 + 1)\,dx\,;$$
$$\int \frac{dy}{y^2 + 1} = -\int xe^{-x}\,dx\,;$$
$$\tan^{-1} y = xe^{-x} + e^{-x} + c\,.$$
Answer: $y(x) = \tan\left(xe^{-x} + e^{-x} + c\right)$.

Recall that by the Fundamental Theorem of Calculus $\frac{d}{dx}\int_a^x f(t)\,dt = f(x)$, for any constant $a$. The integral $\int_a^x f(t)\,dt$ gives us an anti-derivative of $f(x)$, i.e., we may write $\int f(x)\,dx = \int_a^x f(t)\,dt + c$. Here we can let $c$ be an arbitrary constant, and $a$ fixed, or the other way around.

Example Solve
$$\frac{dy}{dx} = e^{x^2}y^2\,, \quad y(1) = -2\,.$$
Separation of variables
$$\int \frac{dy}{y^2} = \int e^{x^2}\,dx$$
gives on the right an integral that cannot be evaluated in elementary functions. We shall change it to a definite integral, as above. We choose $a = 1$, because the initial condition was given at $x = 1$:
$$\int \frac{dy}{y^2} = \int_1^x e^{t^2}\,dt + c\,;$$
$$-\frac{1}{y} = \int_1^x e^{t^2}\,dt + c\,.$$
When $x = 1$, we have $y = -2$, which means that $c = \frac{1}{2}$. Answer:
$$y(x) = -\frac{1}{\int_1^x e^{t^2}\,dt + \frac{1}{2}}\,.$$
For any $x$, the integral $\int_1^x e^{t^2}\,dt$ can be quickly computed by a numerical integration method, e.g., by the trapezoidal method.
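For instance, the answer just obtained can be evaluated at any $x$ with a few lines of code. A sketch assuming Python (the grid size $n = 1000$ is an arbitrary choice):

```python
# Evaluate y(x) = -1/(integral from 1 to x of e^{t^2} dt + 1/2),
# with the integral approximated by the trapezoidal rule.
import math

def trapezoid(g, a, b, n=1000):
    # trapezoidal approximation of the integral of g over [a, b]
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n))
    return s * h

def y(x):
    return -1.0 / (trapezoid(lambda t: math.exp(t * t), 1.0, x) + 0.5)

print(y(1.0))  # -2.0, recovering the initial condition
print(y(1.5))  # a value that the closed-form answer alone cannot provide
```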
1.3.1 Problems
I. Integrate by Guess-and-Check:

1. $\int xe^{5x}\,dx$.

2. $\int xe^{-\frac{1}{2}x}\,dx$.  Ans. $e^{-x/2}(-4 - 2x) + c$.

3. $\int x\sin 3x\,dx$.

4. $\int x^2\cos 2x\,dx$.  Ans. $\frac{1}{2}x\cos 2x + \left(\frac{1}{2}x^2 - \frac{1}{4}\right)\sin 2x + c$.

5. $\int \frac{x}{\sqrt{x^2+1}}\,dx$.  Ans. $\sqrt{x^2+1} + c$.

6. $\int \frac{x}{(x^2+1)(x^2+2)}\,dx$.  Ans. $\frac{1}{2}\ln\left(x^2+1\right) - \frac{1}{2}\ln\left(x^2+2\right) + c$.

7. $\int \frac{1}{(x^2+1)(x^2+9)}\,dx$.  Ans. $\frac{1}{8}\tan^{-1}x - \frac{1}{24}\tan^{-1}\frac{x}{3} + c$.

8. $\int \frac{(\ln x)^5}{x}\,dx$.  Ans. $\frac{1}{6}(\ln x)^6 + c$.

9. $\int x^2 e^{x^3}\,dx$.  Ans. $\frac{1}{3}e^{x^3} + c$.

10. $\int e^{2x}\sin 3x\,dx$.  Ans. $e^{2x}\left(\frac{2}{13}\sin 3x - \frac{3}{13}\cos 3x\right) + c$.

Hint: Look for the anti-derivative in the form $Ae^{2x}\sin 3x + Be^{2x}\cos 3x$, and determine the constants $A$ and $B$ by differentiation.
II. Find the general solution of the linear problems:

1. $y' + \frac{1}{x}y = \cos x$.  Ans. $y = \frac{\cos x}{x} + \sin x + \frac{c}{x}$.

2. $xy' + 2y = e^{-x}$.  Ans. $y = \frac{c}{x^2} - \frac{(x+1)e^{-x}}{x^2}$.

3. $x^4 y' + 3x^3 y = x^2 e^x$.  Ans. $y = \frac{(x-1)e^x}{x^3} + \frac{c}{x^3}$.

4. $\frac{dy}{dx} = 2x(x^2 + y)$.  Ans. $y = ce^{x^2} - x^2 - 1$.

5. $xy' - 2y = xe^{1/x}$.  Ans. $y = cx^2 - x^2 e^{1/x}$.

6. $y' + 2y = \sin 3x$.  Ans. $y = ce^{-2x} + \frac{2}{13}\sin 3x - \frac{3}{13}\cos 3x$.

7. $xyy' - x = y^2$.
Hint: Set $v = y^2$. Then $v' = 2yy'$, and one obtains a linear equation for $v = v(x)$.  Ans. $y^2 = -2x + cx^2$.

III. Find the solution of the initial value problem, and state the maximum interval on which this solution is valid:

1. $y' + \frac{1}{x}y = \cos x$, $y(\frac{\pi}{2}) = 1$.  Ans. $y = \frac{\cos x + x\sin x}{x}$; $(0, \infty)$.

2. $xy' + (2+x)y = 1$, $y(-2) = 0$.  Ans. $y = \frac{1}{x} + \frac{3e^{-x-2}}{x^2} - \frac{1}{x^2}$; $(-\infty, 0)$.

3. $x(y' - y) = e^x$, $y(-1) = \frac{1}{e}$.  Ans. $y = e^x\ln|x| + e^x$; $(-\infty, 0)$.

4. $(t+2)\frac{dy}{dt} + y = 5$, $y(1) = 1$.  Ans. $y = \frac{5t - 2}{t + 2}$; $(-2, \infty)$.

5. $ty' - 2y = t^4\cos t$, $y(\pi/2) = 0$.  Ans. $y = t^3\sin t + t^2\cos t - \frac{\pi}{2}t^2$; $(-\infty, \infty)$. The solution is valid for all $t$.

6. $t\ln t\,\frac{dr}{dt} + r = 5te^t$, $r(2) = 0$.  Ans. $r = \frac{5e^t - 5e^2}{\ln t}$; $(1, \infty)$.

7. $\frac{dy}{dx} = \frac{1}{y^2 + x}$, $y(2) = 0$.
Hint: Obtain a linear equation for $\frac{dx}{dy}$.  Ans. $x = -2 + 4e^y - 2y - y^2$.

8*. Find a solution ($y = y(t)$) of $y' + y = \sin 2t$ which is a periodic function.
Hint: Look for a solution in the form $y(t) = A\sin 2t + B\cos 2t$, plug this into the equation, and determine the constants $A$ and $B$.  Ans. $y = \frac{1}{5}\sin 2t - \frac{2}{5}\cos 2t$.

9*. Show that the equation $y' + y = \sin 2t$ has no other periodic solutions.
Hint: Consider the equation that the difference of any two solutions satisfies.

10*. For the equation
$$y' + a(x)y = f(x)$$
assume that $a(x) \geq a_0 > 0$, where $a_0$ is a constant, and $f(x) \to 0$ as $x \to \infty$. Show that any solution tends to zero as $x \to \infty$.
Hint: Write the integrating factor as $\mu(x) = e^{\int_0^x a(t)\,dt} \geq e^{a_0 x}$, so that $\mu(x) \to \infty$ as $x \to \infty$. Then express
$$y = \frac{\int_0^x \mu(t)f(t)\,dt + c}{\mu(x)}\,,$$
and use L'Hôpital's rule.

IV. Solve by separating the variables:

1. $\frac{dy}{dx} = \frac{2}{x(y^3 + 1)}$.  Ans. $\frac{y^4}{4} + y - 2\ln x = c$.

2. $e^x\,dx - y\,dy = 0$, $y(0) = -1$.  Ans. $y = -\sqrt{2e^x - 1}$.

3. $(x^2y^2 + y^2)\,dx - xy\,dy = 0$.  Ans. $y = e^{\frac{x^2}{2} + \ln x + c} = cxe^{\frac{x^2}{2}}$.

4. $y'(t) = ty^2(1+t^2)^{-1/2}$, $y(0) = 2$.  Ans. $y = -\frac{2}{2\sqrt{t^2+1} - 3}$.

5. $(y - xy + x - 1)\,dx + x^2\,dy = 0$, $y(1) = 0$.  Ans. $y = \frac{e - xe^{1/x}}{e}$.

6. $y' = e^{x^2}y$, $y(2) = 1$.  Ans. $y = e^{\int_2^x e^{t^2}\,dt}$.

7. $y' = xy^2 + xy$, $y(0) = 2$.  Ans. $y = \frac{2e^{\frac{x^2}{2}}}{3 - 2e^{\frac{x^2}{2}}}$.
1.4 Some Special Equations

1.4.1 Homogeneous Equations
Let $f(t)$ be a given function. If we set here $t = \frac{y}{x}$, we obtain the function $f\left(\frac{y}{x}\right)$. Then $f\left(\frac{y}{x}\right)$ is a function of two variables $x$ and $y$, but it depends on them in a special way. One calls such a function homogeneous. For example, $\frac{y - 4x}{x - y}$ is a homogeneous function, because we can put it into the form
$$\frac{y - 4x}{x - y} = \frac{y/x - 4}{1 - y/x}\,,$$
i.e., here $f(t) = \frac{t - 4}{1 - t}$.

Our goal is to solve the equation
$$\frac{dy}{dx} = f\left(\frac{y}{x}\right)\,. \qquad (4.1)$$
Set $v = \frac{y}{x}$. Since $y$ is a function of $x$, the same is true of $v = v(x)$. Solving for $y$, $y = xv$, and then by the product rule
$$\frac{dy}{dx} = v + x\frac{dv}{dx}\,.$$
Switching to $v$ in (4.1),
$$v + x\frac{dv}{dx} = f(v)\,. \qquad (4.2)$$
This is a separable equation! Indeed, after taking $v$ to the right, we can separate the variables:
$$\int \frac{dv}{f(v) - v} = \int \frac{dx}{x}\,.$$
After solving this for $v(x)$, we can express the original unknown $y = xv(x)$. In practice, one should try to remember the formula (4.2).
Example Solve
$$\frac{dy}{dx} = \frac{x^2 + 3y^2}{2xy}\,, \quad y(1) = -2\,.$$
To see that the equation is homogeneous, we rewrite it as
$$\frac{dy}{dx} = \frac{1}{2}\frac{x}{y} + \frac{3}{2}\frac{y}{x}\,.$$
Set $v = \frac{y}{x}$, or $y = xv$. Observing that $\frac{x}{y} = \frac{1}{v}$, we have
$$v + x\frac{dv}{dx} = \frac{1}{2}\frac{1}{v} + \frac{3}{2}v\,.$$
Simplify:
$$x\frac{dv}{dx} = \frac{1}{2}\frac{1}{v} + \frac{1}{2}v = \frac{1 + v^2}{2v}\,.$$
Separating variables,
$$\int \frac{2v}{1 + v^2}\,dv = \int \frac{dx}{x}\,.$$
We now obtain the general solution by doing the following steps (observe that $\ln c$ is another way to write an arbitrary constant):
$$\ln\left(1 + v^2\right) = \ln x + \ln c = \ln cx\,;$$
$$1 + v^2 = cx\,;$$
$$v = \pm\sqrt{cx - 1}\,;$$
$$y(x) = xv = \pm x\sqrt{cx - 1}\,.$$
From the initial condition
$$y(1) = \pm\sqrt{c - 1} = -2\,.$$
It follows that we need to select "minus", and $c = 5$.

Answer: $y(x) = -x\sqrt{5x - 1}$.
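The answer is easily verified by differentiation; a minimal sketch of such a check, assuming the sympy library is available:

```python
# Check that y = -x*sqrt(5x - 1) satisfies y' = (x^2 + 3y^2)/(2xy), y(1) = -2,
# assuming sympy is available.
import sympy as sp

x = sp.symbols('x', positive=True)
y = -x * sp.sqrt(5*x - 1)

residual = sp.diff(y, x) - (x**2 + 3*y**2) / (2*x*y)
print(sp.simplify(residual))  # 0, so the equation is satisfied
print(y.subs(x, 1))           # -2, the initial condition
```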
We mention next an alternative (equivalent) definition: the function $f(x, y)$ is called homogeneous if
$$f(tx, ty) = f(x, y) \quad \text{for all constants } t\,.$$
If this holds, then setting $t = \frac{1}{x}$, we see that
$$f(x, y) = f(tx, ty) = f\left(1, \frac{y}{x}\right)\,,$$
i.e., $f(x, y)$ is a function of $\frac{y}{x}$, and the old definition applies.
Example Solve
$$\frac{dy}{dx} = \frac{y}{x + \sqrt{xy}}\,, \quad \text{with } x > 0\,.$$
It is more straightforward to use the new definition to verify that the function $f(x, y) = \frac{y}{x + \sqrt{xy}}$ is homogeneous:
$$f(tx, ty) = \frac{ty}{tx + \sqrt{(tx)(ty)}} = \frac{y}{x + \sqrt{xy}} = f(x, y)\,.$$
Letting $y/x = v$, or $y = xv$, we rewrite the equation as
$$v + xv' = \frac{xv}{x + \sqrt{x\cdot xv}} = \frac{v}{1 + \sqrt{v}}\,.$$
We proceed to separate the variables:
$$x\frac{dv}{dx} = \frac{v}{1 + \sqrt{v}} - v = -\frac{v^{3/2}}{1 + \sqrt{v}}\,;$$
$$\int \frac{1 + \sqrt{v}}{v^{3/2}}\,dv = -\int \frac{dx}{x}\,;$$
$$-2v^{-1/2} + \ln v = -\ln x + c\,.$$
The integral on the left was evaluated by performing division, and splitting it into two pieces. Finally, we replace $v$ by $y/x$:
$$-2\sqrt{\frac{x}{y}} + \ln\frac{y}{x} = -\ln x + c\,;$$
$$-2\sqrt{\frac{x}{y}} + \ln y = c\,.$$
We have obtained an implicit representation of a family of solutions.

When separating the variables, we had to assume that $v \neq 0$ (in order to divide by $v^{3/2}$). In case $v = 0$, we obtain another solution: $y = 0$.
1.4.2 The Logistic Population Model

Let $y(t)$ denote the number of rabbits on a tropical island at time $t$. The simplest model of population growth is
$$y' = ay\,, \quad y(0) = y_0\,.$$
This model assumes that initially the number of rabbits was equal to some number $y_0 > 0$, while the rate of change of population, i.e., $y'(t)$, is proportional to the number of rabbits. Here $a$ is a given constant. The population of rabbits grows, which results in a faster and faster rate of growth. One expects an explosive growth. Indeed, solving the equation, we get
$$y(t) = ce^{at}\,.$$
From the initial condition $y(0) = c = y_0$, which gives us $y(t) = y_0 e^{at}$, i.e., exponential growth. This is the notorious Malthusian model of population growth. Is it realistic? Yes, sometimes, for a limited time. If the initial number of rabbits $y_0$ is small, then for a while their number may grow exponentially.

A more realistic model, which can be used for a long time, is the Logistic Model:
$$y' = ay - by^2\,, \quad y(0) = y_0\,.$$
Here $y = y(t)$; $a$, $b$ and $y_0$ are positive constants that are given. If $y_0$ is small, then at first $y(t)$ is small. Then the $by^2$ term is negligible, and we have exponential growth. As $y(t)$ increases, this term is not negligible anymore, and we can expect the rate of growth to get smaller and smaller. (Writing the equation as $y' = (a - by)y$, we can regard the $a - by$ term as the rate of growth.) In case the initial number $y_0$ is large (when $y_0 > \frac{a}{b}$), the quadratic on the right is negative, i.e., $y'(t) < 0$, and the population decreases. If $y_0 = \frac{a}{b}$, we expect that $y(t) = \frac{a}{b}$ for all $t$. We now solve the problem to confirm our guesses.

This problem can be solved by separation of variables. Instead, we use another technique. Divide both sides of the equation by $y^2$:
$$y^{-2}y' = ay^{-1} - b\,.$$
Introduce a new unknown function $v(t) = y^{-1}(t) = \frac{1}{y(t)}$. By the generalized power rule $v' = -y^{-2}y'$, so that we can rewrite the last equation as
$$-v' = av - b\,,$$
or
$$v' + av = b\,.$$
This is a linear equation for $v(t)$! To solve it, we follow the familiar steps, and then we return to the original unknown function $y(t)$:
$$\mu(t) = e^{\int a\,dt} = e^{at}\,;$$
$$\frac{d}{dt}\left[e^{at}v\right] = be^{at}\,;$$
$$e^{at}v = b\int e^{at}\,dt = \frac{b}{a}e^{at} + c\,;$$
$$v = \frac{b}{a} + ce^{-at}\,;$$
$$y(t) = \frac{1}{v} = \frac{1}{\frac{b}{a} + ce^{-at}}\,.$$
To find the constant $c$, we use the initial condition:
$$y(0) = \frac{1}{\frac{b}{a} + c} = y_0\,;$$
$$c = \frac{1}{y_0} - \frac{b}{a}\,;$$
$$y(t) = \frac{1}{\frac{b}{a} + \left(\frac{1}{y_0} - \frac{b}{a}\right)e^{-at}}\,.$$
The problem is solved. Observe that $\lim_{t\to+\infty} y(t) = a/b$, no matter what initial value $y_0$ we take. The number $a/b$ is called the carrying capacity. It tells us the number of rabbits, in the long run, that our island will support. A typical solution curve, called the logistic curve, is given in Figure 1.2.

Figure 1.2: The solution of $y' = 5y - 2y^2$, $y(0) = 0.2$
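The solution formula is easy to explore numerically. A sketch, assuming Python, with the parameters of Figure 1.2:

```python
# Logistic solution y(t) = 1/(b/a + (1/y0 - b/a) e^{-a t}),
# with a = 5, b = 2, y0 = 0.2 as in Figure 1.2.
import math

a, b, y0 = 5.0, 2.0, 0.2

def y(t):
    return 1.0 / (b/a + (1.0/y0 - b/a) * math.exp(-a*t))

print(y(0.0))  # 0.2, the initial value
print(y(2.0))  # already very close to a/b = 2.5, the carrying capacity
```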
1.4.3 Bernoulli's Equation
Let us solve the equation
$$y'(t) = p(t)y(t) + g(t)y^n(t)\,.$$
Here $p(t)$ and $g(t)$ are given functions, and $n$ is a given constant. We see that the logistic model above is just a particular example of Bernoulli's equation. We divide this equation by $y^n$:
$$y^{-n}y' = p(t)y^{1-n} + g(t)\,.$$
Introduce a new unknown function $v(t) = y^{1-n}(t)$. Compute $v' = (1-n)y^{-n}y'$, i.e., $y^{-n}y' = \frac{1}{1-n}v'$, and rewrite the equation as
$$v' = (1-n)p(t)v + (1-n)g(t)\,.$$
This is a linear equation for $v(t)$! After solving for $v(t)$, we calculate $y(t) = v^{\frac{1}{1-n}}(t)$.

Example Solve
$$y' = y + \frac{t}{\sqrt{y}}\,.$$
Writing this equation in the form $y' = y + ty^{-1/2}$, we see that this is a Bernoulli's equation, with $n = -1/2$, i.e., we need to divide through by $y^{-1/2}$. But that is the same as multiplying through by $y^{1/2}$, which we do, obtaining
$$y^{1/2}y' = y^{3/2} + t\,.$$
We now let $v(t) = y^{3/2}$, $v'(t) = \frac{3}{2}y^{1/2}y'$, obtaining a linear equation for $v$, which we solve as usual:
$$\frac{2}{3}v' = v + t\,;$$
$$v' - \frac{3}{2}v = \frac{3}{2}t\,;$$
$$\mu(t) = e^{-\int \frac{3}{2}\,dt} = e^{-\frac{3}{2}t}\,;$$
$$\frac{d}{dt}\left[e^{-\frac{3}{2}t}v\right] = \frac{3}{2}te^{-\frac{3}{2}t}\,;$$
$$e^{-\frac{3}{2}t}v = \int \frac{3}{2}te^{-\frac{3}{2}t}\,dt = -te^{-\frac{3}{2}t} - \frac{2}{3}e^{-\frac{3}{2}t} + c\,;$$
$$v = -t - \frac{2}{3} + ce^{\frac{3}{2}t}\,.$$
Returning to the original variable $y$, we have the answer: $y = \left(-t - \frac{2}{3} + ce^{\frac{3}{2}t}\right)^{2/3}$.
1.4.4∗ Riccati's Equation
Let us try to solve the equation
$$y'(t) + a(t)y(t) + b(t)y^2(t) = c(t)\,.$$
Here $a(t)$, $b(t)$ and $c(t)$ are given functions. In case $c(t) = 0$, this is a Bernoulli's equation, which we can solve. For general $c(t)$ one needs some luck to solve this equation. Namely, one needs to guess a particular solution $p(t)$. Then the substitution $y(t) = p(t) + z(t)$ produces a Bernoulli's equation for $z(t)$, which we can solve.

There is no general way to find a particular solution, which means that one cannot always solve Riccati's equation. Occasionally one can get lucky.

Example Solve
$$y' + y^2 = t^2 - 2t\,.$$
We have a quadratic on the right, which suggests that we look for a solution in the form $y = at + b$. Plugging in, we get a quadratic on the left too. Equating the coefficients of $t^2$, $t$ and the constant terms, we obtain three equations to find $a$ and $b$. In general three equations with two unknowns will not have a solution, but this is a lucky case: $a = -1$, $b = 1$, i.e., $p(t) = -t + 1$ is a particular solution. Substituting $y(t) = -t + 1 + v(t)$ into the equation, and simplifying, we get
$$v' + 2(1-t)v = -v^2\,.$$
This is a Bernoulli's equation. As before, we divide through by $v^2$, and then set $z = \frac{1}{v}$, $z' = -\frac{v'}{v^2}$, to get a linear equation:
$$v^{-2}v' + 2(1-t)v^{-1} = -1\,; \qquad z' - 2(1-t)z = 1\,;$$
$$\mu = e^{-\int 2(1-t)\,dt} = e^{t^2 - 2t}\,;$$
$$\frac{d}{dt}\left[e^{t^2-2t}z\right] = e^{t^2-2t}\,;$$
$$e^{t^2-2t}z = \int e^{t^2-2t}\,dt\,.$$
The last integral cannot be evaluated through elementary functions (Mathematica can evaluate it through a special function, called Erfi). So we leave this integral unevaluated. We then get $z$ from the last formula, after which we express $v$, and finally $y$. We have obtained a family of solutions:
$$y(t) = -t + 1 + \frac{e^{t^2-2t}}{\int e^{t^2-2t}\,dt}\,.$$
(The usual arbitrary constant $c$ is now "inside" the integral.) Another solution: $y = -t + 1$ (corresponding to $v = 0$).
Example Solve
$$y' + 2y^2 = \frac{6}{t^2}\,. \qquad (4.3)$$
This time we look for a solution in the form $y(t) = a/t$. Plugging in, we determine $a = 2$, i.e., $p(t) = 2/t$ is a particular solution ($a = -3/2$ is also a possibility). The substitution $y(t) = 2/t + v(t)$ produces a Bernoulli's equation
$$v' + \frac{8}{t}v + 2v^2 = 0\,.$$
Solving it as before, we obtain $v(t) = \frac{7}{ct^8 - 2t}$, and $v = 0$. Solutions:
$$y(t) = \frac{2}{t} + \frac{7}{ct^8 - 2t}\,, \quad \text{and also} \quad y = \frac{2}{t}\,.$$
Let us outline an alternative approach to this problem. Set $y = \frac{1}{z}$, where $z = z(t)$ is a new unknown function. Plugging into (4.3), then clearing the denominators, we have
$$-\frac{z'}{z^2} + \frac{2}{z^2} = \frac{6}{t^2}\,;$$
$$-z' + 2 = \frac{6z^2}{t^2}\,.$$
This is a homogeneous equation, which can be solved as before.

Think about which other equations this neat trick would work for. (But do not try to publish your findings; this was done by Jacobi around two hundred years ago.)

There are some important ideas that we have learned in this subsection. Knowledge of one particular solution may help to "crack open" the equation, and get all of its solutions. Also, the form of this particular solution depends on the function on the right hand side of the equation.
1.4.5∗ Parametric Integration
Let us solve the problem (here $y = y(x)$)
$$y = \sqrt{1 - y'^2}\,, \quad y(0) = 1\,.$$
Unlike the previous problems, this equation is not solved for the derivative $y'(x)$. Solving for $y'(x)$, and then separating the variables, one may indeed find the solution. Instead, let us assume that
$$y'(x) = \sin t\,,$$
where $t$ is a parameter. From the equation,
$$y = \sqrt{1 - \sin^2 t} = \sqrt{\cos^2 t} = \cos t\,,$$
if we assume that $\cos t > 0$. Recall differentials: $dy = y'(x)\,dx$, or
$$dx = \frac{dy}{y'(x)} = \frac{-\sin t\,dt}{\sin t} = -dt\,,$$
i.e.,
$$x = -t + c\,.$$
We have obtained a family of solutions in parametric form:
$$x = -t + c\,, \quad y = \cos t\,.$$
Solving for $t$, $t = -x + c$, i.e., $y = \cos(-x + c)$. From the initial condition $c = 0$, giving us the solution $y = \cos x$. This solution is valid on infinitely many disjoint intervals where $\cos x \geq 0$ (because we see from the equation that $y \geq 0$). This problem admits another solution: $y = 1$.

For the equation
$$y'^5 + y' = x$$
we do not have the option of solving for $y'(x)$. Parametric integration appears to be the only way to solve it. We let $y'(x) = t$, so that from the equation $x = t^5 + t$, and $dx = (5t^4 + 1)\,dt$. Then
$$dy = y'(x)\,dx = t(5t^4 + 1)\,dt\,.$$
I.e., $\frac{dy}{dt} = t(5t^4 + 1)$, which gives $y = \frac{5}{6}t^6 + \frac{1}{2}t^2 + c$. We have obtained a family of solutions in parametric form:
$$x = t^5 + t\,, \quad y = \frac{5}{6}t^6 + \frac{1}{2}t^2 + c\,.$$

1.4.6 Some Applications
Differential equations arise naturally in geometric and physical problems.

Example Find all positive decreasing functions $y = f(x)$ with the following property: the area of the triangle formed by the vertical line going down from the curve, the $x$-axis and the tangent line to this curve is constant, equal to $a > 0$.

[Figure: The triangle formed by the tangent line, the line $x = x_0$, and the $x$-axis]

Let $(x_0, f(x_0))$ be an arbitrary point on the graph of $y = f(x)$. Draw the triangle in question, formed by the vertical line $x = x_0$, the $x$-axis, and the tangent line to this curve. The tangent line intersects the $x$-axis at some point $x_1$ to the right of $x_0$, because $f(x)$ is decreasing. The slope of the tangent line is $f'(x_0)$, so that the point-slope equation of the tangent line is
$$y = f(x_0) + f'(x_0)(x - x_0)\,.$$
At $x_1$, we have $y = 0$, i.e.,
$$0 = f(x_0) + f'(x_0)(x_1 - x_0)\,.$$
Solve this for $x_1$: $x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}$. It follows that the horizontal side of our triangle is $-\frac{f(x_0)}{f'(x_0)}$, while the vertical side is $f(x_0)$. The area of the right triangle is then
$$-\frac{1}{2}\frac{f^2(x_0)}{f'(x_0)} = a\,.$$
(Observe that $f'(x_0) < 0$, so that the area is positive.) The point $x_0$ was arbitrary, so we replace it by $x$, and then we replace $f(x)$ by $y$, and $f'(x)$ by $y'$:
$$-\frac{1}{2}\frac{y^2}{y'} = a\,; \quad \text{or} \quad -\frac{y'}{y^2} = \frac{1}{2a}\,.$$
We solve this differential equation by taking antiderivatives of both sides:
$$\frac{1}{y} = \frac{1}{2a}x + c\,.$$
Answer: $y(x) = \frac{2a}{x + 2ac}$. This is a family of hyperbolas. One of them is $y = \frac{2a}{x}$.
Example A tank holding 10 L (liters) is originally filled with water. A salt-water mixture is pumped into the tank at a rate of 2 L per minute. This mixture contains 0.3 kg of salt per liter. The excess fluid is flowing out of the tank at the same rate (i.e., 2 L per minute). How much salt does the tank contain after 4 minutes?

Let $t$ be the time (in minutes) since the mixture started flowing, and let $y(t)$ denote the amount of salt in the tank at time $t$. The derivative $y'(t)$ gives the rate of change of salt per minute. The salt is pumped in at a rate of 0.6 kg per minute. The density of salt at time $t$ is $\frac{y(t)}{10}$ (i.e., each liter of solution in the tank contains $\frac{y(t)}{10}$ kg of salt). So the salt flows out at the rate $2\,\frac{y(t)}{10} = 0.2\,y(t)$ kg/min. The difference of these two rates is $y'(t)$. I.e.,
$$y' = 0.6 - 0.2y\,.$$
This is a linear differential equation. Initially there was no salt in the tank, i.e., $y(0) = 0$ is our initial condition. Solving this equation together with the initial condition, we have $y(t) = 3 - 3e^{-0.2t}$. After 4 minutes we have $y(4) = 3 - 3e^{-0.8} \approx 1.65$ kg of salt in the tank.

Now suppose a patient has food poisoning, and doctors are pumping water in to flush his stomach out. One can compute similarly the weight of poison left in the stomach at time $t$.
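A two-line computation confirms the number above; a sketch assuming Python:

```python
# y(t) = 3 - 3 e^{-0.2 t} solves y' = 0.6 - 0.2 y, y(0) = 0.
import math

y = lambda t: 3.0 - 3.0 * math.exp(-0.2 * t)
print(y(4.0))  # about 1.65 kg of salt after 4 minutes
```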
1.5 Exact Equations
Let us begin by recalling partial derivatives. If a function $f(x) = x^2 + a$ depends on a parameter $a$, then $f'(x) = 2x$. If $g(x) = x^2 + y^2$, with a parameter $y$, we have $\frac{dg}{dx} = 2x$. Another way to denote this derivative is $g_x = 2x$. We can also regard $g$ as a function of two variables, $g = g(x, y) = x^2 + y^2$. Then the partial derivative with respect to $x$ is computed by regarding $y$ to be a parameter: $g_x = 2x$. Alternative notation: $\frac{\partial g}{\partial x} = 2x$. Similarly, the partial derivative with respect to $y$ is $g_y = \frac{\partial g}{\partial y} = 2y$. It gives us the rate of change of $g$ in $y$, when $x$ is kept fixed.

The equation (here $y = y(x)$)
$$y^2 + 2xyy' = 0$$
can be easily solved if we rewrite it in the equivalent form
$$\frac{d}{dx}\left[xy^2\right] = 0\,.$$
Then $xy^2 = c$, and the general solution is
$$y(x) = \pm\frac{c}{\sqrt{x}}\,.$$
We wish to play the same game for general equations of the form
$$M(x, y) + N(x, y)\,y'(x) = 0\,. \qquad (5.1)$$
Here the functions $M(x, y)$ and $N(x, y)$ are given. In the above example, $M = y^2$ and $N = 2xy$.

Definition The equation (5.1) is called exact if there is a function $\psi(x, y)$ (with continuous derivatives up to second order), so that we can rewrite (5.1) in the form
$$\frac{d}{dx}\psi(x, y) = 0\,. \qquad (5.2)$$
The general solution of the exact equation is
$$\psi(x, y) = c\,. \qquad (5.3)$$
There are two natural questions: when is the equation (5.1) exact, and if it is exact, how does one find $\psi(x, y)$?
Theorem 1 Assume that the functions $M$, $N$, $M_y$ and $N_x$ are continuous in some disc $D: (x - x_0)^2 + (y - y_0)^2 < r^2$. Then the equation (5.1) is exact in $D$ if and only if the following partial derivatives are equal:
$$M_y = N_x \quad \text{for all points } (x, y) \text{ in } D\,. \qquad (5.4)$$
This theorem says two things: if the equation is exact, then the partials are equal, and conversely, if the partials are equal, then the equation is exact.

Proof: 1. Assume the equation (5.1) is exact, i.e., it can be written in the form (5.2). Performing the differentiation in (5.2), we write it as
$$\psi_x + \psi_y y' = 0\,.$$
But this is the same equation as (5.1), i.e.,
$$\psi_x = M\,, \quad \psi_y = N\,.$$
Taking the second partials,
$$\psi_{xy} = M_y\,, \quad \psi_{yx} = N_x\,.$$
We know from Calculus that $\psi_{xy} = \psi_{yx}$, therefore $M_y = N_x$.

2. Assume that $M_y = N_x$. We will show that the equation (5.1) is then exact by producing $\psi(x, y)$. We have just seen that $\psi(x, y)$ must satisfy
$$\psi_x = M(x, y)\,, \quad \psi_y = N(x, y)\,. \qquad (5.5)$$
Take the anti-derivative in $x$ of the first equation:
$$\psi(x, y) = \int M(x, y)\,dx + h(y)\,, \qquad (5.6)$$
where $h(y)$ is an arbitrary function of $y$. To determine $h(y)$, plug the last formula into the second equation of (5.5):
$$\psi_y = \int M_y(x, y)\,dx + h'(y) = N(x, y)\,,$$
or
$$h'(y) = N(x, y) - \int M_y(x, y)\,dx \equiv p(x, y)\,. \qquad (5.7)$$
Observe that we have denoted by $p(x, y)$ the right side of the last equation. It turns out that $p(x, y)$ does not really depend on $x$! Indeed,
$$\frac{\partial}{\partial x}p(x, y) = N_x - M_y = 0\,,$$
because it was given to us that $M_y = N_x$. So $p(x, y)$ is a function of $y$ only; let us denote it $p(y)$. The equation (5.7) takes the form
$$h'(y) = p(y)\,.$$
We integrate this to determine $h(y)$, which we then plug into (5.6) to get $\psi(x, y)$. ♦

The equation
$$M(x, y)\,dx + N(x, y)\,dy = 0$$
is an alternative form of (5.1).
Example Solve
$$e^x\sin y + y^3 - (3x - e^x\cos y)\frac{dy}{dx} = 0\,.$$
Here $M(x, y) = e^x\sin y + y^3$, $N(x, y) = -3x + e^x\cos y$. Compute
$$M_y = e^x\cos y + 3y^2\,, \quad N_x = e^x\cos y - 3\,.$$
The partials are not the same; the equation is not exact, and our theory does not apply.

Example Solve (for $x > 0$)
$$\left(\frac{y}{x} + 6x\right)dx + (\ln x - 2)\,dy = 0\,.$$
Here $M(x, y) = \frac{y}{x} + 6x$ and $N(x, y) = \ln x - 2$. Compute
$$M_y = \frac{1}{x} = N_x\,,$$
and so the equation is exact. To find $\psi(x, y)$, we observe that the equations (5.5) take the form
$$\psi_x = \frac{y}{x} + 6x\,, \quad \psi_y = \ln x - 2\,.$$
Take the anti-derivative in $x$ of the first equation:
$$\psi(x, y) = y\ln x + 3x^2 + h(y)\,,$$
where $h(y)$ is an arbitrary function of $y$. Plug this into the second equation:
$$\psi_y = \ln x + h'(y) = \ln x - 2\,,$$
i.e.,
$$h'(y) = -2\,.$$
Integrating, $h(y) = -2y$, and so $\psi(x, y) = y\ln x + 3x^2 - 2y$, giving us the general solution
$$y\ln x + 3x^2 - 2y = c\,.$$
We can solve this equation for $y$: $y(x) = \frac{c - 3x^2}{\ln x - 2}$. Observe that when solving for $h(y)$, we chose the constant of integration to be zero, because at the next step we set $\psi(x, y)$ equal to $c$, an arbitrary constant.
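Both the exactness test and the construction of $\psi(x, y)$ are mechanical, and can be scripted. A sketch for the example just solved, assuming the sympy library is available:

```python
# Exactness test M_y = N_x and construction of psi for
# (y/x + 6x) dx + (ln x - 2) dy = 0, assuming sympy is available.
import sympy as sp

x, y = sp.symbols('x y', positive=True)
M = y/x + 6*x
N = sp.log(x) - 2

print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)))  # 0, so the equation is exact

# psi = int M dx + h(y), where h'(y) = N - d/dy(int M dx)
part = sp.integrate(M, x)
h = sp.integrate(N - sp.diff(part, y), y)
print(sp.expand(part + h))  # 3*x**2 + y*log(x) - 2*y
```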
Example Find the constant $b$ for which the equation
$$\left(ye^{2xy} + x\right)dx + bxe^{2xy}\,dy = 0$$
is exact, and then solve the equation with that $b$.

Setting the partials $M_y$ and $N_x$ equal, we have
$$e^{2xy} + 2xye^{2xy} = be^{2xy} + 2bxye^{2xy}\,.$$
We see that $b = 1$. When $b = 1$ the equation becomes
$$\left(ye^{2xy} + x\right)dx + xe^{2xy}\,dy = 0\,,$$
and we know that it is exact. We look for $\psi(x, y)$ as before:
$$\psi_x = ye^{2xy} + x\,, \quad \psi_y = xe^{2xy}\,.$$
Just for practice, let us begin this time with the second equation. Taking the anti-derivative in $y$ of the second equation,
$$\psi(x, y) = \frac{1}{2}e^{2xy} + h(x)\,,$$
where $h(x)$ is an arbitrary function of $x$. Plugging this into the first equation,
$$\psi_x = ye^{2xy} + h'(x) = ye^{2xy} + x\,.$$
This tells us that $h'(x) = x$, $h(x) = \frac{1}{2}x^2$, and then $\psi(x, y) = \frac{1}{2}e^{2xy} + \frac{1}{2}x^2$.

Answer: $\frac{1}{2}e^{2xy} + \frac{1}{2}x^2 = c$, or $y = \frac{1}{2x}\ln\left(2c - x^2\right)$.

Exact equations are connected with conservative vector fields. Recall that a vector field $\mathbf{F}(x, y) = \langle M(x, y), N(x, y)\rangle$ is called conservative if there is a function $\psi(x, y)$, called the potential, such that $\mathbf{F}(x, y) = \nabla\psi(x, y)$. Recalling that $\nabla\psi(x, y) = \langle\psi_x, \psi_y\rangle$, we have $\psi_x = M$ and $\psi_y = N$, the same relations that we had for exact equations.
1.6 Existence and Uniqueness of Solution

Let us consider a general initial value problem
$$y' = f(x, y)\,, \quad y(x_0) = y_0\,,$$
with a given function $f(x, y)$, and given numbers $x_0$ and $y_0$. Let us ask two basic questions: is there a solution of this problem, and if there is, is the solution unique?

Theorem 2 Assume that the functions $f(x, y)$ and $f_y(x, y)$ are continuous in some neighborhood of the initial point $(x_0, y_0)$. Then there exists a solution, and there is only one solution. The solution $y = y(x)$ is defined on some interval $(x_1, x_2)$ that includes $x_0$.

One sees that the conditions of this theorem are not too restrictive, so that the theorem tends to apply, providing us with the existence and uniqueness of the solution. But not always!

Example Solve
$$y' = \sqrt{y}\,, \quad y(0) = 0\,.$$
The function $f(x, y) = \sqrt{y}$ is continuous (for $y \geq 0$), but its partial derivative in $y$, $f_y(x, y) = \frac{1}{2\sqrt{y}}$, is not even defined at the initial point $(0, 0)$. The theorem does not apply. One checks that the function $y = \frac{x^2}{4}$ solves our initial value problem (for $x \geq 0$). But here is another solution: $y = 0$. (Having two different solutions of the same initial value problem is like having two prima donnas in the same theater.)

Observe that the theorem guarantees existence of the solution only on some interval (it is not "happily ever after").
Example Solve for $y = y(t)$
$$y' = y^2\,, \quad y(0) = 1\,.$$
Here $f(t, y) = y^2$ and $f_y(t, y) = 2y$ are continuous functions. The theorem applies. By separation of variables, we determine the solution $y(t) = \frac{1}{1-t}$. As time $t$ approaches 1, this solution disappears, by going to infinity. This is sometimes called "blow up in finite time".
1.7 Numerical Solution by Euler's method

We have learned a number of techniques for solving differential equations; however, the sad truth is that most equations cannot be solved. Even a simple looking equation like
$$y' = x + y^3$$
is totally out of reach. Fortunately, if you need a specific solution, say the one satisfying the initial condition $y(0) = 1$, it can be easily approximated (we know that such a solution exists, and is unique).

In general we shall deal with the problem
$$y' = f(x, y)\,, \quad y(x_0) = y_0\,.$$
Here the function $f(x, y)$ is given (in the example above we had $f(x, y) = x + y^3$), and the initial condition prescribes that the solution is equal to a given number $y_0$ at a given point $x_0$. Fix a step size $h$, and let $x_1 = x_0 + h$, $x_2 = x_0 + 2h, \ldots, x_n = x_0 + nh$. We will approximate $y(x_n)$, the value of the solution at $x_n$. We call this approximation $y_n$. To go from the point $(x_n, y_n)$ to the point $(x_{n+1}, y_{n+1})$ on the graph of the solution $y(x)$, we use the tangent line approximation:
$$y_{n+1} \approx y_n + y'(x_n)(x_{n+1} - x_n) = y_n + y'(x_n)h = y_n + f(x_n, y_n)h\,.$$
(We have expressed $y'(x_n) = f(x_n, y_n)$ from the differential equation.) The resulting formula is easy to implement; it is just one loop, starting with $(x_0, y_0)$.

One continues the computations until the point $x_n$ goes as far as you want. Decreasing the step size $h$ will improve the accuracy. Smaller $h$'s will require more steps, but with the power of modern computers that is not a problem, particularly for simple examples, like the one above. In that example $x_0 = 0$, $y_0 = 1$. If we choose $h = 0.05$, then $x_1 = 0.05$, and
$$y_1 = y_0 + f(x_0, y_0)h = 1 + (0 + 1^3)\,0.05 = 1.05\,.$$
Continuing, we have $x_2 = 0.1$, and
$$y_2 = y_1 + f(x_1, y_1)h = 1.05 + (0.05 + 1.05^3)\,0.05 \approx 1.11\,.$$
Then $x_3 = 0.15$, and
$$y_3 = y_2 + f(x_2, y_2)h = 1.11 + (0.1 + 1.11^3)\,0.05 \approx 1.18\,.$$
If you need to approximate the solution on the interval $(0, 0.4)$, you will need to make 5 more steps. Of course, it is better to program a computer.
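Here is what such a program might look like: a short sketch in Python (the choice of language is an illustration, not the text's), reproducing the three steps above:

```python
# Euler's method for y' = f(x, y), y(x0) = y0: repeat y <- y + h*f(x, y).
def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: x + y**3  # the example y' = x + y^3
for n in (1, 2, 3):
    print(euler(f, 0.0, 1.0, 0.05, n))  # 1.05, then about 1.11, then about 1.18
```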
Euler's method uses the tangent line approximation, i.e., the first two terms of the Taylor series. One can use more terms of the Taylor series, and develop more sophisticated methods (which is done in books on numerical methods). But here is a question: if it is so easy to compute a numerical approximation of the solution, why bother learning analytical solutions? The reason is that we seek not just to solve a differential equation, but to understand it. What happens if the initial condition changes? The equation may include some parameters; what happens if they change? What happens to solutions in the long term?
1.7.1 Problems
I. Determine if the equation is homogeneous, and if it is, solve it:

1. $\frac{dy}{dx} = \frac{y + 2x}{x}$.  Ans. $y = cx + 2x\ln x$.

2. $\frac{dy}{dx} = \frac{x^2 - xy + y^2}{x^2}$.  Ans. $y = \frac{-x + cx + x\ln x}{c + \ln x}$.

3. $\frac{dy}{dx} = \frac{y^2 + 2x}{y}$.

4. $\frac{dy}{dx} = \frac{y^2 + 2xy}{x^2}$, $y(1) = 2$.  Ans. $y = \frac{2x^2}{3 - 2x}$.

5. $xy' - y = x\tan\frac{y}{x}$.  Ans. $\sin\frac{y}{x} = cx$.

6. $y' = \frac{x^2 + y^2}{xy}$, $y(1) = -2$.  Ans. $y = -\sqrt{x^2\ln x^2 + 4x^2}$.

7. $y' = \frac{y + x^{-1/2}y^{3/2}}{\sqrt{xy}}$, with $x > 0$, $y > 0$.  Ans. $2\sqrt{\frac{y}{x}} = \ln x + c$.
II. Solve the following Bernoulli's equations:

1. $y' - \frac{1}{x}y = y^2$, $y(2) = -2$.  Ans. $y = -\frac{2x}{x^2 - 2}$.

2. $\frac{dy}{dx} = \frac{y^2 + 2x}{y}$.

3. $y' + x\sqrt[3]{y} = 3y$.  Ans. $y = \left(\frac{x}{3} + \frac{1}{6} + ce^{2x}\right)^{3/2}$, and $y = 0$.
Hint: When dividing the equation by $\sqrt[3]{y}$, one needs to check if $y = 0$ is a solution, which indeed it is.

4. $y' + xy = y^3$.  Ans. $y = \pm\sqrt{-1 - 2x + ce^{2x}}$.

5. The equation
$$\frac{dy}{dx} = \frac{y^2 + 2x}{y}$$
could not be solved in the last problem set, because it is not homogeneous. Can you solve it now?  Ans. $y = \pm\sqrt{ce^{2x} - 2x - 1}$.
III. Determine if the equation is exact, and if it is, solve it:

1. $(2x + 3x^2y)\,dx + (x^3 - 3y^2)\,dy = 0$.  Ans. $x^2 + x^3y - y^3 = c$.

2. $(x + \sin y)\,dx + (x\cos y - 2y)\,dy = 0$.  Ans. $\frac{1}{2}x^2 + x\sin y - y^2 = c$.

3. $\frac{x}{x^2 + y^4}\,dx + \frac{2y^3}{x^2 + y^4}\,dy = 0$.  Ans. $x^2 + y^4 = c$.

4. Find a simpler solution for the preceding problem.

5. $(6xy - \cos y)\,dx + (3x^2 + x\sin y + 1)\,dy = 0$.  Ans. $3x^2y - x\cos y + y = c$.

6. $(2x - y)\,dx + (2y - x)\,dy = 0$, $y(1) = 2$.  Ans. $x^2 + y^2 - xy = 3$.

7. Find the value of $b$ for which the following equation is exact, and then solve the equation, using that value of $b$:
$$(ye^{xy} + 2x)\,dx + bxe^{xy}\,dy = 0\,.$$
Ans. $b = 1$, $y = \frac{1}{x}\ln\left(c - x^2\right)$.

IV 1. Use parametric integration to solve
$$y'^3 + y' = x\,.$$
Ans. $x = t^3 + t$, $y = \frac{3}{4}t^4 + \frac{1}{2}t^2 + c$.

2. Use parametric integration to solve
$$y = \ln\left(1 + y'^2\right)\,.$$
Ans. $x = 2\tan^{-1}t + c$, $y = \ln(1 + t^2)$. Another solution: $y = 0$.

3. Use parametric integration to solve
$$y' + \sin(y') = x\,.$$
Ans. $x = t + \sin t$, $y = \frac{1}{2}t^2 + t\sin t + \cos t + c$.

4. A tank has 100 L of water-salt mixture, which initially contains 10 kg of salt. Water is flowing in at a rate of 5 L per minute. The new mixture flows out at the same rate. How much salt remains in the tank after an hour?  Ans. 0.5 kg.

5. A tank has 100 L of water-salt mixture, which initially contains 10 kg of salt. A water-salt mixture is flowing in at a rate of 3 L per minute, and each liter of it contains 0.1 kg of salt. The new mixture flows out at the same rate. How much salt remains in the tank after $t$ minutes?  Ans. 10 kg.
6. Water is being pumped into a patient's stomach at a rate of 0.5 L per minute to flush out 300 grams of alcohol poisoning. The excess fluid is flowing out at the same rate. The stomach holds 3 L. The patient can be discharged when the amount of poison drops to 50 grams. How long should this procedure last?

7. Find all curves $y = f(x)$ with the following property: if you draw a tangent line at any point $(x, f(x))$ on this curve, and continue the tangent line until it intersects the $x$-axis, then the point of intersection is $\frac{x}{2}$.  Ans. $y = cx^2$.

8. Find all positive decreasing functions $y = f(x)$ with the following property: in the triangle formed by the vertical line going down from the curve, the $x$-axis and the tangent line to this curve, the sum of the two sides adjacent to the right angle is constant, equal to $a > 0$.  Ans. $y - a\ln y = x + c$.

9. (i) Apply Euler's method to
$$y' = x(1 + y)\,, \quad y(0) = 1\,.$$
Take $h = 0.25$ and do four steps, obtaining an approximation for $y(1)$.
(ii) Apply Euler's method to the same problem: take $h = 0.2$ and do five steps, obtaining another approximation for $y(1)$.
(iii) Solve the above problem analytically, and determine which one of the two approximations is better.

10*. (From the Putnam competition, 2009) Show that any solution of
$$y' = \frac{x^2 - y^2}{x^2(y^2 + 1)}$$
satisfies $\lim_{x\to\infty} y(x) = \infty$.
Hint: Using "partial fractions", rewrite the equation as
$$y' = \frac{1 + 1/x^2}{y^2 + 1} - \frac{1}{x^2}\,.$$
Assume, on the contrary, that $y(x)$ is bounded when $x$ is large. Then $y'(x)$ exceeds a positive constant for all large $x$, and therefore $y(x)$ tends to infinity, a contradiction (observe that $\frac{1}{x^2}$ becomes negligible for large $x$).

11. Solve
$$x(y' - e^y) + 2 = 0\,.$$
Hint: Divide the equation by $e^y$, then set $v = e^{-y}$, obtaining a linear equation for $v = v(x)$.  Ans. $y = -\ln\left(x + cx^2\right)$.

12. Solve the integral equation
$$y(x) = \int_1^x y(t)\,dt + x + 1\,.$$
Hint: Differentiate the equation, and also evaluate $y(1)$.  Ans. $y = 3e^{x-1} - 1$.
V 1. Find two solutions of the initial value problem
$$y' = (y - 1)^{1/3}\,, \quad y(1) = 1\,.$$
Is it good to have two solutions of the same initial value problem? What "went wrong"? (I.e., why does the existence and uniqueness theorem not apply?)

2. Find all $y_0$ for which the following problem has a unique solution:
$$y' = \frac{x}{y^2 - 2x}\,, \quad y(2) = y_0\,.$$
Hint: Apply the existence and uniqueness theorem.  Ans. All $y_0$ except $\pm 2$.

3. Show that the function $y = \frac{x|x|}{4}$ solves the problem
$$y' = \sqrt{|y|}\,, \quad y(0) = 0$$
for all $x$.
Hint: Consider separately the cases when $x > 0$, $x < 0$, and $x = 0$.
1.8∗ The existence and uniqueness theorem
For the initial value problem
$$y' = f(x, y)\,, \quad y(x_0) = y_0 \qquad (8.1)$$
we shall prove a more general existence and uniqueness theorem than the one stated before. We define a rectangular box $B$ around the initial point $(x_0, y_0)$ to be the set of points $(x, y)$ satisfying $x_0 - a \leq x \leq x_0 + a$ and $y_0 - b \leq y \leq y_0 + b$, for some positive $a$ and $b$. It is known from Calculus that in case $f(x, y)$ is continuous on $B$, it is bounded on $B$, i.e., for some constant $M > 0$
$$|f(x, y)| \leq M\,, \quad \text{for all points } (x, y) \text{ in } B\,. \qquad (8.2)$$

Theorem 3 Assume that $f(x, y)$ is continuous on $B$, and for some constant $L > 0$
$$|f(x, y_1) - f(x, y_2)| \leq L|y_1 - y_2|\,, \qquad (8.3)$$
for any two points $(x, y_1)$ and $(x, y_2)$ in $B$. Then the initial value problem (8.1) has a unique solution, which is defined for $x$ on the interval $\left(x_0 - \frac{b}{M}, x_0 + \frac{b}{M}\right)$.

Proof: We shall prove the existence of solutions first, and we shall assume that $x > x_0$ (the case when $x < x_0$ is similar). Integrating the equation in (8.1) over the interval $(x_0, x)$, we convert the initial value problem (8.1) into an equivalent integral equation
$$y(x) = y_0 + \int_{x_0}^x f(t, y(t))\,dt\,. \qquad (8.4)$$
(If $y(x)$ solves (8.4), then $y(x_0) = y_0$, and by differentiation $y' = f(x, y)$.) By (8.2), $-M \leq f(t, y(t)) \leq M$, and so any solution of (8.4) lies between two straight lines:
$$y_0 - M(x - x_0) \leq y(x) \leq y_0 + M(x - x_0)\,.$$
For $x_0 \leq x \leq x_0 + \frac{b}{M}$ these lines stay in the box $B$. We denote $\varphi(x) = y_0 + M(x - x_0)$, and call this function a supersolution, while $\psi(x) = y_0 - M(x - x_0)$ is called a subsolution.
1. An easier case: Let us make the additional assumption that f(x, y) is increasing in y, i.e., if y2 > y1, then f(x, y2) > f(x, y1) for any two points (x, y1) and (x, y2) in B. We shall construct a solution of (8.1) as a limit of the sequence of iterates ψ(x), y1(x), y2(x), . . . , yn(x), . . . , defined as follows:
y1(x) = y0 + ∫_{x0}^{x} f(t, ψ(t)) dt ,
y2(x) = y0 + ∫_{x0}^{x} f(t, y1(t)) dt , . . . , yn(x) = y0 + ∫_{x0}^{x} f(t, y_{n−1}(t)) dt .
We claim that for all x on the interval x0 < x ≤ x0 + b/M the following inequalities hold:
ψ(x) ≤ y1(x) ≤ y2(x) ≤ . . . ≤ yn(x) ≤ . . . .
Indeed, we have f(t, ψ(t)) ≥ −M, and so
y1(x) = y0 + ∫_{x0}^{x} f(t, ψ(t)) dt ≥ y0 − M(x − x0) = ψ(x) ,
giving us the first of these inequalities. Then
y2(x) − y1(x) = ∫_{x0}^{x} [f(t, y1(t)) − f(t, ψ(t))] dt ≥ 0 ,
using the preceding inequality and the monotonicity of f(x, y). So y1(x) ≤ y2(x), and the other inequalities are established similarly. Next, we claim that for all x in the interval x0 < x ≤ x0 + b/M all of these iterates lie below the supersolution ϕ(x), i.e., we have
(8.5)    ψ(x) ≤ y1(x) ≤ y2(x) ≤ . . . ≤ yn(x) ≤ . . . ≤ ϕ(x) .
Indeed, we have f(t, ψ(t)) ≤ M, and so
y1(x) = y0 + ∫_{x0}^{x} f(t, ψ(t)) dt ≤ y0 + M(x − x0) = ϕ(x) .
It follows that y1(x) stays in the box B for x0 < x ≤ x0 + b/M, and then
y2(x) = y0 + ∫_{x0}^{x} f(t, y1(t)) dt ≤ y0 + M(x − x0) = ϕ(x) ,
and so on for all yn(x).
At each x in (x0, x0 + b/M) the sequence {yn(x)} is non-decreasing and bounded above by the number ϕ(x). Hence, this sequence has a limit, which we denote by y(x). By the monotone convergence theorem, we may pass to the limit in
yn(x) = y0 + ∫_{x0}^{x} f(t, y_{n−1}(t)) dt ,
concluding that y(x) gives the desired solution of the integral equation (8.4).
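This iteration scheme is easy to try numerically. Below is a minimal Python sketch of the idea (my own illustration, with the assumed example f(x, y) = x(1 + y), y(0) = 1 from Problem 9, and starting, for simplicity, from the constant function y0 rather than the subsolution ψ):

```python
import numpy as np

f = lambda x, y: x * (1 + y)
x = np.linspace(0.0, 1.0, 201)       # grid on the interval of existence
yn = np.full_like(x, 1.0)            # initial iterate: the constant y0 = 1
for n in range(30):
    g = f(x, yn)
    # cumulative trapezoidal approximation of the integral in the scheme
    integral = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2 * np.diff(x))))
    yn = 1.0 + integral              # y_{n+1}(x) = y0 + int_{x0}^{x} f(t, y_n(t)) dt
print(yn[-1], 2*np.exp(0.5) - 1)     # iterates at x = 1 approach the exact value
```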
2. The general case. Define g(x, y) = f(x, y) + Ay. If we choose the constant A large enough, then the new function g(x, y) will be increasing in y. Indeed, using our condition (8.3),
g(x, y2) − g(x, y1) = f(x, y2) − f(x, y1) + A(y2 − y1) ≥ −L(y2 − y1) + A(y2 − y1) = (A − L)(y2 − y1) > 0 ,
for any two points (x, y1) and (x, y2) in B, provided that A > L and y2 > y1. We now consider an equivalent equation
y' + Ay = f(x, y) + Ay = g(x, y) .
Multiplying both sides by the integrating factor e^{Ax}, we put this into the form
d/dx [ e^{Ax} y ] = e^{Ax} g(x, y) .
Set z(x) = e^{Ax} y(x); then y(x) = e^{−Ax} z(x), and the new unknown function z(x) satisfies
(8.6)    z' = e^{Ax} g(x, e^{−Ax} z), z(x0) = e^{A x0} y0 .
The function e^{Ax} g(x, e^{−Ax} z) is increasing in z. The easier case applies, so that the solution z(x) of (8.6) exists. But then y(x) = e^{−Ax} z(x) exists.
Finally, we prove the uniqueness of the solution. Let z(x) be another solution of (8.4) on the interval (x0, x0 + b/M), i.e.,
z(x) = y0 + ∫_{x0}^{x} f(t, z(t)) dt .
Subtracting this from (8.4), we have
y(x) − z(x) = ∫_{x0}^{x} [f(t, y(t)) − f(t, z(t))] dt .
Assume first that x is in [x0, x0 + 1/(2L)]. Then, using the condition (8.3), we estimate
|y(x) − z(x)| ≤ ∫_{x0}^{x} |f(t, y(t)) − f(t, z(t))| dt ≤ L ∫_{x0}^{x} |y(t) − z(t)| dt
≤ L(x − x0) max_{[x0, x0+1/(2L)]} |y(x) − z(x)| ≤ (1/2) max_{[x0, x0+1/(2L)]} |y(x) − z(x)| .
It follows that
max_{[x0, x0+1/(2L)]} |y(x) − z(x)| ≤ (1/2) max_{[x0, x0+1/(2L)]} |y(x) − z(x)| .
But then max_{[x0, x0+1/(2L)]} |y(x) − z(x)| = 0, and so y(x) = z(x) on [x0, x0 + 1/(2L)].
Let x1 = x0 + 1/(2L). We have just proved that y(x) = z(x) on [x0, x0 + 1/(2L)], and in particular y(x1) = z(x1). Repeating (if necessary) the same argument on [x1, x1 + 1/(2L)], and so on, we will eventually conclude that y(x) = z(x) on (x0, x0 + b/M). ♦
Chapter 2
Second Order Equations
2.1 Special Second Order Equations
Probably the simplest second order equation is
y''(x) = 0 .
Taking the antiderivative,
y'(x) = c1 .
We denoted the arbitrary constant by c1, because we expect another arbitrary constant to make an appearance. Indeed, taking another antiderivative, we get the general solution
y(x) = c1 x + c2 .
We see that general solutions for second order equations depend on two arbitrary constants.
A general second order equation for the unknown function y = y(x) can often be written as
y'' = f(x, y, y') ,
where f is a given function of its three variables. We cannot expect all such equations to be solvable, as we could not even solve all first order equations. In this section we study special second order equations that are reducible to first order equations, which greatly increases their chances of being solved.
2.1.1 y is not present in the equation
Let us solve for y(t) the equation
t y'' − y' = t² .
We see derivatives of y in this equation, but not y itself. We denote y'(t) = v(t), and v(t) is our new unknown function. Clearly, y''(t) = v'(t), and the equation becomes
t v' − v = t² .
This is a first order equation for v(t)! It is actually a linear first order equation, so we solve it as before. Once we have v(t), we determine the solution y(t) by integration. Details:
v' − (1/t) v = t ;
µ(t) = e^{−∫ (1/t) dt} = e^{−ln t} = 1/t ;
d/dt [ (1/t) v ] = 1 ;
(1/t) v = t + c1 ;
y' = v = t² + c1 t ;
y(t) = t³/3 + c1 t²/2 + c2 .
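As a cross-check (mine, not the book's), sympy's dsolve recovers the same two-parameter family; note that sympy's constant may absorb the factor 1/2:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
sol = sp.dsolve(sp.Eq(t*y(t).diff(t, 2) - y(t).diff(t), t**2), y(t))
print(sol)   # y(t) = C1 + C2*t**2 + t**3/3
```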
Let us solve the following equation for y(x):
y'' + 2x (y')² = 0 .
Again, y is missing in the equation. By setting y' = v, with y'' = v', we obtain a first order equation. This time the equation for v(x) is not linear, but we can separate the variables. Here are the details:
v' + 2x v² = 0 ;
dv/dx = −2x v² .
This equation has a solution v = 0, giving y = c. Assuming v ≠ 0, we continue:
∫ dv/v² = − ∫ 2x dx ;
−1/v = −x² − c1 ;
y' = v = 1/(x² + c1) .
Let us now assume that c1 > 0. Then
y(x) = ∫ dx/(x² + c1) = (1/√c1) arctan(x/√c1) + c2 .
If c1 = 0 or c1 < 0, we get two different formulas for the solution! Indeed, in case c1 = 0, we have y' = 1/x², and integration gives y = −1/x + c3, the second family of solutions. In case c1 < 0, we rename √(−c1) as c1, and write
y' = 1/(x² − c1²) = (1/(2c1)) ( 1/(x − c1) − 1/(x + c1) ) .
Integrating, we get the third family of solutions,
y = (1/(2c1)) ln |x − c1| − (1/(2c1)) ln |x + c1| + c4 .
And the fourth family of solutions is y = c.
2.1.2 x is not present in the equation
Let us solve for y(x) the equation
y'' + y (y')³ = 0 .
All three functions appearing in the equation are functions of x, but x itself is not present in the equation.
On the curve y = y(x) the slope y' is a function of x, but it is also a function of y. We therefore set y' = v(y), and v(y) will be our new unknown function. By the chain rule,
y''(x) = d/dx v(y) = v'(y) dy/dx = v' v ,
and our equation takes the form
v' v + y v³ = 0 .
This is a first order equation! To solve it, we begin by factoring:
v ( v' + y v² ) = 0 .
If the first factor is zero, y' = v = 0, we obtain a family of solutions y = c. Setting the second factor to zero,
dv/dy + y v² = 0 ,
we have a separable equation. Separating the variables,
− ∫ dv/v² = ∫ y dy ;
1/v = y²/2 + c1 = (y² + 2c1)/2 ;
dy/dx = v = 2/(y² + 2c1) .
To find y(x) we need to solve another first order equation. Again, we separate the variables:
∫ (y² + 2c1) dy = ∫ 2 dx ;
y³/3 + 2c1 y = 2x + c2 .
This gives us the second family of solutions.
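The second family is given implicitly; a short sympy sketch (my own check, not the book's) verifies that it does satisfy y'' + y(y')³ = 0, computing y' and y'' by implicit differentiation:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = sp.Function('y')(x)
F = y**3/3 + 2*c1*y - 2*x - c2                       # implicit solution F = 0
yp = sp.solve(sp.Eq(F.diff(x), 0), y.diff(x))[0]     # y' = 2/(y^2 + 2*c1)
ypp = yp.diff(x).subs(y.diff(x), yp)                 # y'' along the curve
print(sp.simplify(ypp + y*yp**3))                    # 0
```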
2.1.3 ∗ The trajectory of pursuit
A car is moving along a highway (the x axis) with a constant speed a. A
drone is flying in the skies (at a point (x, y) which depends on time t) with
a constant speed v. Find the trajectory for the drone so that the tangent
line always passes through the car. Assume that the drone starts at a point
(x0 , y0 ), and the car starts at x0 .
If X gives the position of the car, then
(1.1)    dy/dx = − y/(X − x) .
We have X = x0 + at. Then we rewrite (1.1) as
x0 + at − x = −y dx/dy .
Differentiate this formula with respect to y, and simplify:
(1.2)    dt/dy = − (1/a) y d²x/dy² .
On the other hand, v = ds/dt, and ds = √(dx² + dy²), so that dt = (1/v) ds = (1/v) √(dx² + dy²), and then
(1.3)    dt/dy = − (1/v) √( (dx/dy)² + 1 ) .
(Observe that dt/dy < 0, so that the minus is needed in front of the square root.)
Comparing (1.2) with (1.3), and writing x'(y) = dx/dy, x''(y) = d²x/dy², we arrive at the equation of motion for the drone:
y x''(y) = (a/v) √( x'²(y) + 1 ) .
[Figure: the pursuit curve y = f(x), with the drone starting at (x0, y0), its current position (x, y), and the car at X on the x-axis.]
The unknown function x(y) is not present in this equation. We therefore set x'(y) = p(y), with x''(y) = p'(y), and separate the variables:
dp/dy = (a/(v y)) √(p² + 1) ;
∫ dp/√(p² + 1) = (a/v) ∫ dy/y .
The integral on the left is computed by the substitution p = tan θ, giving
ln ( p + √(p² + 1) ) = (a/v) (ln y + ln c) .
We write this as
(1.4)    p + √(p² + 1) = c y^{a/v}   (with a new c) .
Then
√(p² + 1) = c y^{a/v} − p ,
and squaring both sides gives
1 = c² y^{2a/v} − 2 c y^{a/v} p .
We solve this for p = x'(y):
x'(y) = (1/2) c y^{a/v} − (1/(2c)) y^{−a/v} .
The constant c we determine from (1.4). At t = 0, p = x'(y0) = 0, and so c = y0^{−a/v}. Then
x'(y) = (1/2) (y/y0)^{a/v} − (1/2) (y/y0)^{−a/v} .
Integrating, and using that x(y0) = x0, we finally obtain (assuming v ≠ a)
x(y) = y0/(2(1 + a/v)) [ (y/y0)^{1+a/v} − 1 ] − y0/(2(1 − a/v)) [ (y/y0)^{1−a/v} − 1 ] + x0 .
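A quick numerical check of the final formula (my own, with assumed values a = 1, v = 2, y0 = 1), confirming that it satisfies the equation of motion y x'' = (a/v) √(x'² + 1):

```python
import numpy as np

a, v, y0 = 1.0, 2.0, 1.0
k = a / v
xp  = lambda y: 0.5*(y/y0)**k - 0.5*(y/y0)**(-k)       # x'(y) from the text
xpp = lambda y: 0.5*(k/y)*((y/y0)**k + (y/y0)**(-k))   # differentiated by hand
y = np.linspace(0.2, 3.0, 50)
print(np.max(np.abs(y*xpp(y) - k*np.sqrt(xp(y)**2 + 1))))   # ~1e-16
```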
2.2 Linear Homogeneous Equations with Constant Coefficients
We wish to find solutions y = y(t) of the equation
a y'' + b y' + c y = 0 ,
where a, b and c are given numbers. This is arguably the most important class of differential equations, because it arises when applying Newton's second law of motion. If y(t) denotes the displacement of an object at time t, then this equation relates the displacement to the velocity y'(t) and the acceleration y''(t). The equation is linear, because we only multiply y and its derivatives by constants, and add. The word homogeneous refers to the right hand side of this equation being zero.
Observe that if y(t) is a solution, so is 2y(t). Indeed, plug this function into the equation:
a(2y)'' + b(2y)' + c(2y) = 2 (a y'' + b y' + c y) = 0 .
The same argument shows that c1 y(t) is a solution for any constant c1. If y1(t) and y2(t) are two solutions, a similar argument shows that y1(t) + y2(t) and y1(t) − y2(t) are also solutions. More generally, c1 y1(t) + c2 y2(t) is a solution for any c1 and c2. (This is called a linear combination of the two solutions.) Indeed,
a (c1 y1 + c2 y2)'' + b (c1 y1 + c2 y2)' + c (c1 y1 + c2 y2) = c1 (a y1'' + b y1' + c y1) + c2 (a y2'' + b y2' + c y2) = 0 .
We now try to find a solution of the form y = e^{rt}, where r is a constant to be determined. We have y' = r e^{rt} and y'' = r² e^{rt}, so that plugging into the equation gives
a (r² e^{rt}) + b (r e^{rt}) + c e^{rt} = e^{rt} (a r² + b r + c) = 0 .
Dividing by the positive quantity e^{rt}, we get
a r² + b r + c = 0 .
This is a quadratic equation for r, called the characteristic equation. If r is a root (solution) of this equation, then e^{rt} solves our differential equation. When solving a quadratic equation, it is possible to encounter two real roots, one (repeated) real root, or two complex conjugate roots. We will look at these cases in turn.
2.2.1 Characteristic equation has two distinct real roots
Call the roots r1 and r2, with r2 ≠ r1. Then e^{r1 t} and e^{r2 t} are two solutions, and their linear combination gives us the general solution
y(t) = c1 e^{r1 t} + c2 e^{r2 t} .
As we have two constants to play with, one can prescribe two additional conditions for the solution to satisfy.
Example Solve
y'' + 4y' + 3y = 0, y(0) = 2, y'(0) = −1 .
We prescribe that at time zero the displacement is 2, and the velocity is −1. These two conditions are usually referred to as initial conditions, and together with the differential equation they form an initial value problem. The characteristic equation is
r² + 4r + 3 = 0 .
Solving it (say by factoring as (r + 1)(r + 3) = 0), we get its roots r1 = −1 and r2 = −3. The general solution is then
y(t) = c1 e^{−t} + c2 e^{−3t} .
Then y(0) = c1 + c2. Compute y'(t) = −c1 e^{−t} − 3c2 e^{−3t}, so that y'(0) = −c1 − 3c2. The initial conditions tell us that
c1 + c2 = 2
−c1 − 3c2 = −1 .
We have two equations for the two unknowns c1 and c2. We find c1 = 5/2 and c2 = −1/2 (say by adding the equations). Answer: y(t) = (5/2) e^{−t} − (1/2) e^{−3t}.
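For readers who like to verify such computations by machine, here is a sympy sketch (an outside cross-check, not the book's method) that solves the same initial value problem:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t, 2) + 4*y(t).diff(t) + 3*y(t), 0)
sol = sp.dsolve(ode, y(t), ics={y(0): 2, y(t).diff(t).subs(t, 0): -1})
print(sol)   # y(t) = 5*exp(-t)/2 - exp(-3*t)/2
```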
Example Solve
y'' − 4y = 0 .
The characteristic equation is
r² − 4 = 0 .
Its roots are r1 = −2 and r2 = 2. The general solution is then
y(t) = c1 e^{−2t} + c2 e^{2t} .
More generally, for the equation
y'' − a² y = 0   (a is a given constant)
the general solution is
y(t) = c1 e^{−at} + c2 e^{at} .
This should become automatic, because such equations appear often.
Example Find the constant a, so that the solution of the initial value problem
9y'' − y = 0, y(0) = 2, y'(0) = a
is bounded as t → ∞, and find that solution.
We begin by writing down (automatically!) the general solution
y(t) = c1 e^{−t/3} + c2 e^{t/3} .
Compute y'(t) = −(1/3) c1 e^{−t/3} + (1/3) c2 e^{t/3}, and then the initial conditions give
y(0) = c1 + c2 = 2
y'(0) = −(1/3) c1 + (1/3) c2 = a .
Solving this system of two equations for c1 and c2 (by multiplying the second equation through by 3 and adding it to the first equation), we get c2 = 1 + (3/2) a and c1 = 1 − (3/2) a. The solution of the initial value problem is then
y(t) = (1 − (3/2) a) e^{−t/3} + (1 + (3/2) a) e^{t/3} .
In order for this solution to stay bounded as t → ∞, the coefficient in front of e^{t/3} must be zero, i.e., 1 + (3/2) a = 0, and a = −2/3. The solution then becomes y(t) = 2 e^{−t/3}.
Finally, observe that if r1 and r2 are roots of the characteristic equation, then we can factor the characteristic polynomial as
(2.1)    a r² + b r + c = a (r − r1)(r − r2) .
2.2.2 Characteristic equation has only one (repeated) real root
This is the case when we get r2 = r1, solving the characteristic equation. We still have one solution, y1(t) = e^{r1 t}. Of course, any constant multiple of this solution is also a solution, but to form a general solution we need another truly different solution, as we saw in the preceding case. It turns out that y2(t) = t e^{r1 t} is that second solution, and the general solution is then
y(t) = c1 e^{r1 t} + c2 t e^{r1 t} .
To justify that y2(t) = t e^{r1 t} is a solution, we observe that in this case formula (2.1) becomes
a r² + b r + c = a (r − r1)² .
Square out the quadratic on the right as a r² − 2 a r1 r + a r1². Because it is equal to the quadratic on the left, the coefficients of both polynomials in r², r, and the constant terms are the same. We equate the coefficients in r:
(2.2)    b = −2 a r1 .
To plug y2(t) into the equation, we compute its derivatives: y2'(t) = e^{r1 t} + r1 t e^{r1 t} = e^{r1 t}(1 + r1 t), and similarly y2''(t) = e^{r1 t}(2 r1 + r1² t). Then
a y2'' + b y2' + c y2 = a e^{r1 t}(2 r1 + r1² t) + b e^{r1 t}(1 + r1 t) + c t e^{r1 t} = e^{r1 t}(2 a r1 + b) + t e^{r1 t}(a r1² + b r1 + c) = 0 .
In the last line the first bracket is zero because of (2.2), and the second bracket is zero because r1 is a root of the characteristic equation.
Example 9y'' + 6y' + y = 0.
The characteristic equation
9r² + 6r + 1 = 0
has a double root r = −1/3. The general solution is then
y(t) = c1 e^{−t/3} + c2 t e^{−t/3} .
Example Solve
y'' − 4y' + 4y = 0, y(0) = 1, y'(0) = −2 .
The characteristic equation
r² − 4r + 4 = 0
has a double root r = 2. The general solution is then
y(t) = c1 e^{2t} + c2 t e^{2t} .
Here y'(t) = 2c1 e^{2t} + c2 e^{2t} + 2c2 t e^{2t}, and from the initial conditions
y(0) = c1 = 1
y'(0) = 2c1 + c2 = −2 .
From the first equation c1 = 1, and then c2 = −4. Answer: y(t) = e^{2t} − 4t e^{2t}.
2.3 Characteristic equation has two complex conjugate roots
2.3.1 Euler's formula
Recall the Maclaurin formula
e^z = 1 + z + z²/2! + z³/3! + z⁴/4! + z⁵/5! + . . . .
Plug in z = iθ, where i = √−1 is the imaginary unit, and θ is a real number. Calculating the powers, and separating the real and imaginary parts, we have
e^{iθ} = 1 + iθ + (iθ)²/2! + (iθ)³/3! + (iθ)⁴/4! + (iθ)⁵/5! + . . .
= 1 + iθ − θ²/2! − iθ³/3! + θ⁴/4! + iθ⁵/5! + . . .
= (1 − θ²/2! + θ⁴/4! + . . .) + i (θ − θ³/3! + θ⁵/5! + . . .)
= cos θ + i sin θ .
We have derived Euler's formula:
(3.1)    e^{iθ} = cos θ + i sin θ .
Replacing θ by −θ, we have
(3.2)    e^{−iθ} = cos(−θ) + i sin(−θ) = cos θ − i sin θ .
Adding the last two formulas, we express
(3.3)    cos θ = (e^{iθ} + e^{−iθ})/2 .
Subtracting from (3.1) the formula (3.2), and dividing by 2i,
(3.4)    sin θ = (e^{iθ} − e^{−iθ})/(2i) .
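Euler's formula is easy to test numerically; a two-line Python check (mine, for illustration):

```python
import cmath, math

theta = 0.7
print(cmath.exp(1j*theta))               # (0.7648...+0.6442...j)
print(math.cos(theta), math.sin(theta))  # matching real and imaginary parts
```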
2.3.2 The General Solution
Recall that to solve the equation
a y'' + b y' + c y = 0
we begin with solving the characteristic equation
a r² + b r + c = 0 .
Complex roots come in conjugate pairs: if p + iq is one root, then p − iq is the other. These roots are of course different, so that we have two solutions, z1 = e^{(p+iq)t} and z2 = e^{(p−iq)t}. The problem with these solutions is that they are complex-valued. If we add z1 + z2, we get another solution. If we divide this new solution by 2, we get yet another solution. I.e., the function y1(t) = (z1 + z2)/2 is a solution of our equation, and similarly the function y2(t) = (z1 − z2)/(2i) is another solution. Using the formula (3.3), compute
y1(t) = (e^{(p+iq)t} + e^{(p−iq)t})/2 = e^{pt} (e^{iqt} + e^{−iqt})/2 = e^{pt} cos qt .
This is a real-valued solution of our equation! Similarly,
y2(t) = (e^{(p+iq)t} − e^{(p−iq)t})/(2i) = e^{pt} (e^{iqt} − e^{−iqt})/(2i) = e^{pt} sin qt
is our second solution. The general solution is then
y(t) = c1 e^{pt} cos qt + c2 e^{pt} sin qt .
Example Solve y'' + 4y' + 5y = 0.
The characteristic equation
r² + 4r + 5 = 0
can be solved quickly by completing the square:
(r + 2)² + 1 = 0 ;  (r + 2)² = −1 ;  r + 2 = ±i ;  r = −2 ± i .
Here p = −2, q = 1, and the general solution is y(t) = c1 e^{−2t} cos t + c2 e^{−2t} sin t.
Example Solve y'' + y = 0.
The characteristic equation
r² + 1 = 0
has roots ±i. Here p = 0 and q = 1, and the general solution is y(t) = c1 cos t + c2 sin t. More generally, for the equation
y'' + a² y = 0   (a is a given constant)
the general solution is
y(t) = c1 cos at + c2 sin at .
This should become automatic, because such equations appear often.
Example Solve
y'' + 4y = 0, y(π/3) = 2, y'(π/3) = −4 .
The general solution is
y(t) = c1 cos 2t + c2 sin 2t .
Compute y'(t) = −2c1 sin 2t + 2c2 cos 2t. From the initial conditions,
y(π/3) = c1 cos(2π/3) + c2 sin(2π/3) = −(1/2) c1 + (√3/2) c2 = 2
y'(π/3) = −2c1 sin(2π/3) + 2c2 cos(2π/3) = −√3 c1 − c2 = −4 .
This gives c1 = √3 − 1, c2 = √3 + 1. Answer:
y(t) = (√3 − 1) cos 2t + (√3 + 1) sin 2t .
2.3.3 Problems
I. Solve the second order equations, with y missing:
1. 2y' y'' = 1.   Ans. y = ±(2/3)(x + c1)^{3/2} + c2.
2. x y'' + y' = x.   Ans. y = x²/4 + c1 ln x + c2.
3. y'' + y' = x².   Ans. y = x³/3 − x² + 2x + c1 e^{−x} + c2.
4. x y'' + 2y' = (y')², y(1) = 0, y'(1) = 1.   Ans. y = 2 tan⁻¹ x − π/2.
II. Solve the second order equations, with x missing:
1. y y'' − 3(y')³ = 0.   Ans. 3(y ln y − y) + c1 y = −x + c2, and y = c.
2. y y'' + (y')² = 0.   Ans. y² = c1 + c2 x (this includes the y = c family).
3. y'' = 2y y', y(0) = 0, y'(0) = 1.   Ans. y = tan x.
4∗. y'' y = 2x (y')², y(0) = 1, y'(0) = −4.
Hint: Write
y'' y − (y')² = (2x − 1)(y')² ;  (y'' y − (y')²)/(y')² = 2x − 1 ;  (−y/y')' = 2x − 1 .
Integrating, and using the initial conditions,
−y/y' = x² − x + 1/4 = (2x − 1)²/4 .
Ans. y = e^{4x/(2x−1)}.
III. Solve the linear second order equations, with constant coefficients:
1. y'' + 4y' + 3y = 0.   Ans. y = c1 e^{−t} + c2 e^{−3t}.
2. y'' − 3y' = 0.   Ans. y = c1 + c2 e^{3t}.
3. 2y'' + y' − y = 0.   Ans. y = c1 e^{−t} + c2 e^{t/2}.
4. y'' − 3y = 0.   Ans. y = c1 e^{−√3 t} + c2 e^{√3 t}.
5. 3y'' − 5y' − 2y = 0.
6. y'' − 9y = 0, y(0) = 3, y'(0) = 3.   Ans. y = e^{−3t} + 2e^{3t}.
7. y'' + 5y' = 0, y(0) = −1, y'(0) = −10.   Ans. y = −3 + 2e^{−5t}.
8. y'' + y' − 6y = 0, y(0) = −2, y'(0) = 3.   Ans. y = −(7/5) e^{−3t} − (3/5) e^{2t}.
9. 4y'' − y = 0.
10. 3y'' − 2y' − y = 0, y(0) = 1, y'(0) = −3.   Ans. y = 3e^{−t/3} − 2e^{t}.
11. 3y'' − 2y' − y = 0, y(0) = 1, y'(0) = a. Then find the value of a for which the solution is bounded, as t → ∞.   Ans. a = −1/3.
IV. Solve the linear second order equations, with constant coefficients:
1. y'' + 6y' + 9y = 0.   Ans. y = c1 e^{−3t} + c2 t e^{−3t}.
2. 4y'' − 4y' + y = 0.   Ans. y = c1 e^{t/2} + c2 t e^{t/2}.
3. y'' − 2y' + y = 0, y(0) = 0, y'(0) = −2.   Ans. y = −2t e^{t}.
4. 9y'' − 6y' + y = 0, y(0) = 1, y'(0) = −2.
V. Using Euler's formula, compute: 1. e^{iπ}  2. e^{−iπ/2}  3. e^{i 3π/4}  4. e^{2πi}  5. √2 e^{i 9π/4}
6. Show that sin 2θ = 2 sin θ cos θ, and cos 2θ = cos² θ − sin² θ.
Hint: Begin with e^{i2θ} = (cos θ + i sin θ)². Apply Euler's formula on the left, and square out on the right. Then equate the real and imaginary parts.
7∗. Show that sin 3θ = 3 cos² θ sin θ − sin³ θ, and also cos 3θ = −3 sin² θ cos θ + cos³ θ.
Hint: Begin with e^{i3θ} = (cos θ + i sin θ)³. Apply Euler's formula on the left, and "cube out" on the right. Then equate the real and imaginary parts.
VI. Solve the linear second order equations, with constant coefficients:
1. y'' + 4y' + 8y = 0.   Ans. y = c1 e^{−2t} cos 2t + c2 e^{−2t} sin 2t.
2. y'' + 16y = 0.   Ans. y = c1 cos 4t + c2 sin 4t.
3. y'' − 4y' + 5y = 0, y(0) = 1, y'(0) = −2.   Ans. y = e^{2t} cos t − 4e^{2t} sin t.
4. y'' + 4y = 0, y(0) = −2, y'(0) = 0.   Ans. y = −2 cos 2t.
5. 9y'' + y = 0, y(0) = 0, y'(0) = 5.   Ans. y = 15 sin(t/3).
6. y'' − y' + y = 0.   Ans. y = e^{t/2} ( c1 cos(√3 t/2) + c2 sin(√3 t/2) ).
7. 4y'' + 8y' + 5y = 0, y(π) = 0, y'(π) = 4.   Ans. y = −8e^{π−t} cos(t/2).
8. y'' + y = 0, y(π/4) = 0, y'(π/4) = −1.   Ans. y = − sin(t − π/4).
VII.
1. Consider the equation (y = y(t))
y'' + b y' + c y = 0
with positive constants b and c. Show that all of its solutions tend to zero, as t → ∞.
2. Consider the equation
y'' + b y' − c y = 0
with positive constants b and c. Assume that some solution is bounded, as t → ∞. Show that this solution tends to zero, as t → ∞.
2.4 Linear Second Order Equations with Variable Coefficients
Linear Systems
Recall that a system of two equations (here the numbers a, b, c, d, g and h are given, while x and y are unknowns)
a x + b y = g
c x + d y = h
has a unique solution if and only if the determinant of the system is non-zero, i.e., ad − bc ≠ 0. This is justified by explicitly solving the system:
x = (d g − b h)/(a d − b c) ,  y = (a h − c g)/(a d − b c) .
It is also easy to justify that the determinant is zero, i.e., ad − bc = 0, if and only if its columns are proportional, i.e., a = γb and c = γd, for some constant γ.
General Theory
We consider an initial value problem for linear second order equations:
(4.1)    y'' + p(t) y' + g(t) y = f(t), y(t0) = α, y'(t0) = β .
We are given the coefficient functions p(t) and g(t), and the function f(t). The constants t0, α and β are also given, i.e., at some initial "time" t = t0 the values of the solution and its derivative are prescribed. Is there a solution to this problem? If so, is the solution unique, and how far can it be continued?
Theorem 4 Assume that the coefficient functions p(t), g(t) and f(t) are continuous on some interval that includes t0. Then the problem (4.1) has a solution, and only one solution. This solution can be continued to the left and to the right of the initial point t0, so long as the coefficient functions remain continuous.
If the coefficient functions are continuous for all t, then the solution can be continued for all t, −∞ < t < ∞. This is better than what we had for first order equations. Why? Because the equation here is linear. Linearity pays!
Corollary Let z(t) be a solution of (4.1) with the same initial data: z(t0) = α and z'(t0) = β. Then z(t) = y(t) for all t.
Let us now study the homogeneous equation (for y = y(t))
(4.2)    y'' + p(t) y' + g(t) y = 0 ,
with given coefficient functions p(t) and g(t). Although the equation looks relatively simple, its analytical solution is totally out of reach, in general. (One has to either solve it numerically, or use infinite series.) In this section we shall study some theoretical aspects. In particular, we shall prove that a linear combination of two solutions that are not constant multiples of one another gives the general solution (a fact that we had used intuitively for equations with constant coefficients).
We shall need the concept of the Wronskian determinant of two functions y1(t) and y2(t), or Wronskian, for short:
W(t) = y1(t) y2'(t) − y1'(t) y2(t) .
Sometimes the Wronskian is written as W(y1, y2)(t) to stress its dependence on y1(t) and y2(t). For example,
W(cos 2t, sin 2t)(t) = cos 2t (sin 2t)' − (cos 2t)' sin 2t = 2 cos² 2t + 2 sin² 2t = 2 .
Given the Wronskian and one of the functions, one can determine the other one.
Example If f(t) = t, and W(f, g)(t) = t² e^t, find g(t).
Solution: Here f'(t) = 1, and so
W(f, g)(t) = t g'(t) − g(t) = t² e^t .
This is a linear first order equation for g(t). We solve it as usual, obtaining
g(t) = t e^t + c t .
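The same linear equation can be handed to sympy (my own cross-check of the example, with output possibly grouped differently):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
g = sp.Function('g')
sol = sp.dsolve(sp.Eq(t*g(t).diff(t) - g(t), t**2*sp.exp(t)), g(t))
print(sol)   # g(t) = C1*t + t*exp(t)
```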
If g(t) = c f(t), with some constant c, then we compute that W(f, g)(t) = 0 for all t. The converse statement is not true. For example, the functions f(t) = t² and
g(t) = t² if t ≥ 0,  g(t) = −t² if t < 0
are not constant multiples of one another, but W(f, g)(t) = 0. This is seen by computing the Wronskian separately in case t ≥ 0, and for t < 0.
Theorem 5 Let y1(t) and y2(t) be two solutions of (4.2), and let W(t) be their Wronskian. Then
(4.3)    W(t) = c e^{−∫ p(t) dt} ,
where c is some constant.
This is a remarkable fact. Even though we do not know y1(t) and y2(t), we can compute their Wronskian.
Proof: We differentiate the Wronskian W(t) = y1(t) y2'(t) − y1'(t) y2(t):
W' = y1 y2'' + y1' y2' − y1' y2' − y1'' y2 = y1 y2'' − y1'' y2 .
Because y1 is a solution of (4.2), we have y1'' = −p(t) y1' − g(t) y1, and similarly y2'' = −p(t) y2' − g(t) y2. With these formulas, we continue:
W' = y1 (−p(t) y2' − g(t) y2) − (−p(t) y1' − g(t) y1) y2 = −p(t) (y1 y2' − y1' y2) = −p(t) W .
We have obtained a linear first order equation for W(t). Solving it by separation of variables, we get (4.3). ♦
Corollary We see from (4.3) that either W(t) = 0 for all t (when c = 0), or else W(t) is never zero (in case c ≠ 0).
Theorem 6 Let y1(t) and y2(t) be two solutions of (4.2), and let W(t) be their Wronskian. Then W(t) = 0 if and only if y1(t) and y2(t) are constant multiples of each other.
We just saw that if two functions are constant multiples of each other, then their Wronskian is zero, while the converse (opposite) statement is not true in general. But if these functions happen to be solutions of (4.2), then the converse is true.
Proof: Assume that the Wronskian of two solutions y1(t) and y2(t) is zero. In particular, it is zero at any point t0, i.e., y1(t0) y2'(t0) − y1'(t0) y2(t0) = 0. When a 2 × 2 determinant is zero, its columns are proportional. Let us assume the second column is γ times the first, where γ is some number, i.e., y2(t0) = γ y1(t0) and y2'(t0) = γ y1'(t0). Assume, first, that γ ≠ 0. Consider the function z(t) = y2(t)/γ. It is a solution of the homogeneous equation (4.2), and it has the same initial values as y1(t): z(t0) = y1(t0) and z'(t0) = y1'(t0). It follows that z(t) = y1(t), i.e., y2(t) = γ y1(t). In case γ = 0, a similar argument shows that y2(t) = 0, i.e., y2(t) = 0 · y1(t). ♦
Definition We say that two solutions y1(t) and y2(t) of (4.2) form a fundamental set if, for any other solution z(t), we can find two constants c1⁰ and c2⁰ so that z(t) = c1⁰ y1(t) + c2⁰ y2(t). In other words, the linear combination c1 y1(t) + c2 y2(t) gives us all solutions of (4.2).
Theorem 7 Let y1(t) and y2(t) be two solutions of (4.2) that are not constant multiples of one another. Then they form a fundamental set.
Proof: Let y(t) be any solution of the equation (4.2). Let us try to find the constants c1 and c2 so that z(t) = c1 y1(t) + c2 y2(t) satisfies the same initial conditions as y(t), i.e.,
(4.4)    z(t0) = c1 y1(t0) + c2 y2(t0) = y(t0)
         z'(t0) = c1 y1'(t0) + c2 y2'(t0) = y'(t0) .
This is a system of two linear equations for c1 and c2. The determinant of this system is just the Wronskian of y1(t) and y2(t), evaluated at t0. This determinant is not zero, because y1(t) and y2(t) are not constant multiples of one another. (This determinant is W(t0). If W(t0) = 0, then W(t) = 0 for all t, by the Corollary to Theorem 5, and then by Theorem 6, y1(t) and y2(t) would have to be constant multiples of one another.) It follows that the 2 × 2 system (4.4) has a unique solution c1 = c1⁰, c2 = c2⁰. The function z(t) = c1⁰ y1(t) + c2⁰ y2(t) is then a solution of the same equation (4.2), satisfying the same initial conditions as y(t). By the Corollary to Theorem 4, y(t) = z(t) for all t. So any solution y(t) is a particular case of the general solution c1 y1(t) + c2 y2(t). ♦
Finally, we mention that two functions are called linearly independent if they are not constant multiples of one another. So two solutions y1(t) and y2(t) form a fundamental set if and only if they are linearly independent.
[Figure 2.1: The hyperbolic cosine function y = cosh t.]
2.5 Some Applications of the Theory
We give some practical applications of the theory from the last section. But first, we recall the functions sinh t and cosh t.
2.5.1 The Hyperbolic Sine and Cosine Functions
One defines
cosh t = (e^t + e^{−t})/2   and   sinh t = (e^t − e^{−t})/2 .
In particular, cosh 0 = 1, sinh 0 = 0. Observe that cosh t is an even function, while sinh t is odd. Compute:
d/dt cosh t = sinh t   and   d/dt sinh t = cosh t .
[Figure 2.2: The hyperbolic sine function y = sinh t.]
These formulas are similar to those for cosine and sine. By squaring out, one sees that
cosh² t − sinh² t = 1, for all t.
We see that the derivatives and the algebraic properties of the new functions are similar to those for cosine and sine. (There are other similar formulas.) However, the graphs of sinh t and cosh t look totally different: they are not periodic, and they are unbounded; see Figures 2.1 and 2.2.
2.5.2 Different Ways to Write a General Solution
For the equation
(5.1)    y'' − a² y = 0
you remember that the functions e^{−at} and e^{at} form a fundamental set, and y(t) = c1 e^{−at} + c2 e^{at} is the general solution. But y = sinh at is also a solution, because y' = a cosh at and y'' = a² sinh at = a² y. Similarly, cosh at is a solution. It is not a constant multiple of sinh at, so that together they form another fundamental set, and we have another form of the general solution of (5.1):
y = c1 cosh at + c2 sinh at .
This is not a "new" general solution; it can be reduced to the old one by expressing cosh at and sinh at through exponentials. However, the new form is useful.
Example Solve: y'' − 4y = 0, y(0) = 0, y'(0) = −5.
We write the general solution as y(t) = c1 cosh 2t + c2 sinh 2t. Using that cosh 0 = 1 and sinh 0 = 0, we have y(0) = c1 = 0. With c1 = 0, we update the solution: y(t) = c2 sinh 2t. Now y'(t) = 2c2 cosh 2t, and y'(0) = 2c2 = −5, i.e., c2 = −5/2. Answer: y(t) = −(5/2) sinh 2t.
Yet another form of the general solution of (5.1) is
y(t) = c1 e^{−a(t−t0)} + c2 e^{a(t−t0)} ,
where t0 is any number.
Example Solve: y'' − 9y = 0, y(2) = −1, y'(2) = 9.
We select t0 = 2, writing the general solution as
y(t) = c1 e^{−3(t−2)} + c2 e^{3(t−2)} .
Compute y'(t) = −3c1 e^{−3(t−2)} + 3c2 e^{3(t−2)}, and use the initial conditions:
c1 + c2 = −1
−3c1 + 3c2 = 9 .
We see that c1 = −2 and c2 = 1. Answer: y(t) = −2e^{−3(t−2)} + e^{3(t−2)}.
We can write the general solution, centered at t0, for other simple equations as well. For the equation
y'' + a² y = 0
the functions cos a(t − t0) and sin a(t − t0) are both solutions, for any value of t0, and they form a fundamental set (because they are not constant multiples of one another). We can then write the general solution as
y = c1 cos a(t − t0) + c2 sin a(t − t0) .
Example Solve: y'' + 4y = 0, y(π/5) = 2, y'(π/5) = −6.
We write the general solution as
y = c1 cos 2(t − π/5) + c2 sin 2(t − π/5) .
Using the initial conditions, we quickly compute c1 and c2. Answer: y = 2 cos 2(t − π/5) − 3 sin 2(t − π/5).
2.5.3 Finding the Second Solution
Next we solve the Legendre equation (for −1 < t < 1)
(1 − t²) y'' − 2t y' + 2y = 0 .
It has a solution y = t. (Lucky!) We need to find the other one. We divide the equation by 1 − t², to put it into the form (4.2) from the previous section:
y'' − (2t/(1 − t²)) y' + (2/(1 − t²)) y = 0 .
Call the other solution y(t). By Theorem 5, we can calculate the Wronskian of the two solutions:
W(t, y) = t y' − y = c e^{∫ 2t/(1−t²) dt} .
We shall set c = 1 here, because we need just one solution, different from t. I.e.,
t y' − y = e^{∫ 2t/(1−t²) dt} = e^{−ln(1−t²)} = 1/(1 − t²) .
This is a linear equation, which we solve as usual:
y' − (1/t) y = 1/(t(1 − t²)) ;  µ(t) = e^{−∫ (1/t) dt} = e^{−ln t} = 1/t ;
d/dt [ (1/t) y ] = 1/(t²(1 − t²)) ;  y = t ∫ dt/(t²(1 − t²)) .
We have computed the last integral before by guess-and-check, so that
y = t ∫ dt/(t²(1 − t²)) = −1 − (1/2) t ln(1 − t) + (1/2) t ln(1 + t) .
We set the constant of integration c = 0, because we need just one solution other than t. Answer: y(t) = c1 t + c2 ( −1 + (1/2) t ln((1 + t)/(1 − t)) ).
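A symbolic spot-check (my own) that the second solution really satisfies the Legendre equation:

```python
import sympy as sp

t = sp.symbols('t')
y = -1 + t/2*sp.log((1 + t)/(1 - t))
print(sp.simplify((1 - t**2)*y.diff(t, 2) - 2*t*y.diff(t) + 2*y))   # 0
```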
2.5.4 Problems
I. 1. Find the Wronskians of the following functions:
(i) f(t) = e^{3t}, g(t) = e^{−t/2}.   Ans. −(7/2) e^{5t/2}.
(ii) f(t) = e^{2t}, g(t) = t e^{2t}.   Ans. e^{4t}.
(iii) f(t) = e^t cos 3t, g(t) = e^t sin 3t.   Ans. 3e^{2t}.
2. If f(t) = t², and the Wronskian is W(f, g)(t) = t⁵ e^t, find g(t).
Ans. g(t) = t³ e^t − t² e^t + c t².
3. Let y1(t) and y2(t) be any two solutions of
y'' − t² y = 0 .
Show that W(y1(t), y2(t))(t) = constant.
II. For the following equations one solution is given. Using Wronskians, find the second solution, and the general solution.
1. y'' − 2y' + y = 0,  y1(t) = e^t.
2. (t − 2) y'' − t y' + 2y = 0,  y1(t) = e^t.   Ans. y = c1 e^t + c2 (−t² + 2t − 2).
3. t y'' + 2y' + t y = 0,  y1(t) = (sin t)/t.   Ans. y = c1 (sin t)/t + c2 (cos t)/t.
III. Solve the problem, using the hyperbolic sine and cosine:
1. y'' − 4y = 0, y(0) = 0, y'(0) = −1/3.   Ans. y = −(1/6) sinh 2t.
2. y'' − 9y = 0, y(0) = 2, y'(0) = 0.   Ans. y = 2 cosh 3t.
3. y'' − y = 0, y(0) = −3, y'(0) = 5.   Ans. y = −3 cosh t + 5 sinh t.
IV. Solve the problem, by using the general solution centered at the initial point:
1. y'' + y = 0, y(π/8) = 0, y'(π/8) = 3.   Ans. y = 3 sin(t − π/8).
2. y'' + 4y = 0, y(π/4) = 0, y'(π/4) = 4.   Ans. y = 2 sin 2(t − π/4) = 2 sin(2t − π/2) = −2 cos 2t.
3. y'' − 2y' − 3y = 0, y(1) = 1, y'(1) = 7.   Ans. y = 2e^{3(t−1)} − e^{−(t−1)}.
V. 1. Show that the functions y1 = t and y2 = sin t cannot both be solutions of
y'' + p(t) y' + g(t) y = 0 ,
no matter what p(t) and g(t) are.
Hint: Consider W(y1, y2)(0), then use Theorems 4 (Corollary) and 5.
2. Assume that the functions y1 = 1 and y2 = cos t are both solutions of some equation
y'' + p(t) y' + g(t) y = 0 .
What is the equation?
Hint: Use Theorem 4 to determine p(t), then argue that g(t) must be zero.
Ans. The equation is
y'' − cot t · y' = 0 .
2.6 Non-homogeneous Equations. The easy case.
We wish to solve the non-homogeneous equation
(6.1)    y'' + p(t) y' + g(t) y = f(t) .
Here the coefficient functions p(t), g(t) and f(t) are given. The corresponding homogeneous equation is
(6.2)    y'' + p(t) y' + g(t) y = 0 .
Suppose we know the general solution of the homogeneous equation: y(t) = c1 y1(t) + c2 y2(t). Our goal is to get the general solution of the non-homogeneous equation. Suppose we can somehow find a particular solution Y(t) of the non-homogeneous equation, i.e.,
(6.3)    Y'' + p(t) Y' + g(t) Y = f(t) .
From the equation (6.1) we subtract the equation (6.3):
(y − Y)'' + p(t)(y − Y)' + g(t)(y − Y) = 0 .
We see that the function v = y − Y is a solution of the homogeneous equation (6.2), and therefore in general v(t) = c1 y1(t) + c2 y2(t). We now express y = Y + v, i.e.,
y(t) = Y(t) + c1 y1(t) + c2 y2(t) .
In words: the general solution of the non-homogeneous equation is equal to the sum of any particular solution of the non-homogeneous equation and the general solution of the corresponding homogeneous equation.
Example Solve y'' + 9y = −4 cos 2t.
The general solution of the corresponding homogeneous equation y'' + 9y = 0 is y(t) = c1 sin 3t + c2 cos 3t. We look for a particular solution in the form Y(t) = A cos 2t. Plug it in, then simplify:
−4A cos 2t + 9A cos 2t = −4 cos 2t ;  5A cos 2t = −4 cos 2t ,
i.e., A = −4/5, and Y(t) = −(4/5) cos 2t. Answer: y(t) = −(4/5) cos 2t + c1 sin 3t + c2 cos 3t.
This was an easy example: the y' term was missing. If the y' term was present, we would need to look for Y(t) in the form Y(t) = A cos 2t + B sin 2t.
Prescription 1 If the right side of the equation (6.1) has the form a cos αt + b sin αt, look for a particular solution in the form Y(t) = A cos αt + B sin αt. More generally, if the right side of the equation has the form (a t² + b t + c) cos αt + (e t² + g t + h) sin αt, look for a particular solution in the form Y(t) = (A t² + B t + C) cos αt + (E t² + G t + H) sin αt. Even more generally, if the polynomials are of higher power, we make the corresponding adjustment.
Example Solve y'' + 3y' + 9y = −4 cos 2t + sin 2t.
We look for a particular solution in the form Y(t) = A cos 2t + B sin 2t. Plug it in, then combine the like terms:
−4A cos 2t − 4B sin 2t + 3(−2A sin 2t + 2B cos 2t) + 9(A cos 2t + B sin 2t) = −4 cos 2t + sin 2t ;
(5A + 6B) cos 2t + (−6A + 5B) sin 2t = −4 cos 2t + sin 2t .
Equating the corresponding coefficients,
5A + 6B = −4
−6A + 5B = 1 .
Solving this system, A = −26/61 and B = −19/61, i.e., Y(t) = −(26/61) cos 2t − (19/61) sin 2t. The general solution of the corresponding homogeneous equation
y'' + 3y' + 9y = 0
is y = c1 e^{−3t/2} sin(3√3 t/2) + c2 e^{−3t/2} cos(3√3 t/2). Answer:
y(t) = −(26/61) cos 2t − (19/61) sin 2t + c1 e^{−3t/2} sin(3√3 t/2) + c2 e^{−3t/2} cos(3√3 t/2) .
Example Solve y'' + 2y' + y = t − 1.
On the right we see a linear polynomial. We look for a particular solution in the form Y(t) = A t + B. Plug this in:
2A + A t + B = t − 1 .
Equating the corresponding coefficients, we have A = 1, and 2A + B = −1, i.e., B = −3. So Y(t) = t − 3. The general solution of the corresponding homogeneous equation
y'' + 2y' + y = 0
is y = c1 e^{−t} + c2 t e^{−t}. Answer: y(t) = t − 3 + c1 e^{−t} + c2 t e^{−t}.
Example Solve y'' + y' − 2y = t².
On the right we have a quadratic polynomial (two of its coefficients happened to be zero). We look for a particular solution as a quadratic, Y(t) = A t² + B t + C. Plug this in:
2A + 2A t + B − 2(A t² + B t + C) = t² .
Equating the coefficients in t², t, and the constant terms, we have
−2A = 1
2A − 2B = 0
2A + B − 2C = 0 .
From the first equation A = −1/2, from the second one B = −1/2, and from the third C = A + (1/2)B = −3/4. So Y(t) = −(1/2)t² − (1/2)t − 3/4. The general solution of the corresponding homogeneous equation is y(t) = c1 e^{−2t} + c2 e^{t}.
Answer: y(t) = −(1/2)t² − (1/2)t − 3/4 + c1 e^{−2t} + c2 e^{t}.
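The bookkeeping in such computations can be delegated to a computer algebra system; a small sympy sketch (mine, not the book's method) for the last example:

```python
import sympy as sp

t, A, B, C = sp.symbols('t A B C')
Y = A*t**2 + B*t + C
residual = sp.expand(Y.diff(t, 2) + Y.diff(t) - 2*Y - t**2)
print(sp.solve([residual.coeff(t, k) for k in range(3)], [A, B, C]))
# {A: -1/2, B: -1/2, C: -3/4}
```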
Prescription 2 If the right side of the equation (6.1) is a polynomial of degree n: a0 tⁿ + a1 tⁿ⁻¹ + . . . + a_{n−1} t + a_n, look for a particular solution as a polynomial of degree n: A0 tⁿ + A1 tⁿ⁻¹ + . . . + A_{n−1} t + A_n, with the coefficients to be determined.
And on to the final possibility.
Prescription 3 If the right side of the equation (6.1) is a polynomial of degree n times an exponential: (a0 tⁿ + a1 tⁿ⁻¹ + . . . + a_{n−1} t + a_n) e^{αt}, look for a particular solution as a polynomial of degree n times the same exponential: (A0 tⁿ + A1 tⁿ⁻¹ + . . . + A_{n−1} t + A_n) e^{αt}, with the coefficients to be determined.
Example Solve y'' + y = t e^{−2t}.
We look for a particular solution in the form Y(t) = (A t + B) e^{−2t}. Compute Y'(t) = A e^{−2t} − 2(A t + B) e^{−2t} = (−2A t + A − 2B) e^{−2t}, and Y''(t) = −2A e^{−2t} − 2(−2A t + A − 2B) e^{−2t} = (4A t − 4A + 4B) e^{−2t}. Plug Y(t) in:
4A t e^{−2t} − 4A e^{−2t} + 4B e^{−2t} + A t e^{−2t} + B e^{−2t} = t e^{−2t} .
Equate the coefficients in t e^{−2t}, and in e^{−2t}:
5A = 1
−4A + 5B = 0 ,
which gives A = 1/5 and B = 4/25, so that Y(t) = ((1/5)t + 4/25) e^{−2t}.
Answer: y(t) = ((1/5)t + 4/25) e^{−2t} + c1 sin t + c2 cos t.
Example Solve y'' − 4y = t² + 3e^t, y(0) = 0, y'(0) = 2.
Reasoning as in the beginning of this section, we can find Y(t) as a sum of two pieces, Y(t) = Y1(t) + Y2(t), where Y1(t) is any particular solution of
y'' − 4y = t² ,
and Y2(t) is any particular solution of
y'' − 4y = 3e^t .
Using our prescriptions, Y1(t) = −(1/4)t² − 1/8 and Y2(t) = −e^t. The general solution is y(t) = −(1/4)t² − 1/8 − e^t + c1 sinh 2t + c2 cosh 2t. We then find c2 = 9/8 and c1 = 3/2. Answer: y(t) = −(1/4)t² − 1/8 − e^t + (3/2) sinh 2t + (9/8) cosh 2t.
2.7 Non-homogeneous Equations. When one needs something extra.
Example Solve y'' + y = sin t.
We try Y = A sin t + B cos t, according to Prescription 1. Plugging it in, we get
0 = sin t ,
which is impossible. Why did we strike out? Because A sin t and B cos t are solutions of the corresponding homogeneous equation. Let us multiply the initial guess by t, and try Y = A t sin t + B t cos t. Calculate Y' = A sin t + A t cos t + B cos t − B t sin t and Y'' = 2A cos t − A t sin t − 2B sin t − B t cos t, then plug Y into our equation and simplify:
2A cos t − A t sin t − 2B sin t − B t cos t + A t sin t + B t cos t = sin t ;
2A cos t − 2B sin t = sin t .
We conclude that A = 0 and B = −1/2, so that Y = −(1/2) t cos t. Answer: y = −(1/2) t cos t + c1 sin t + c2 cos t.
This example prompts us to change the strategy. We now begin by
solving the corresponding homogeneous equation. The Prescriptions from
the previous section are now the Initial Guesses for the particular solution.
The Whole Story If any of the functions appearing in the Initial Guess is a solution of the corresponding homogeneous equation, multiply the entire Initial Guess by t, and look at the new functions. If some of them are still solutions of the corresponding homogeneous equation, multiply the entire Initial Guess by t². This is guaranteed to work. (Of course, if none of the functions appearing in the Initial Guess is a solution of the corresponding homogeneous equation, then the Initial Guess works.)
In the preceding example, the Initial Guess involved the functions sin t
and cos t, both solutions of the corresponding homogeneous equation. After
we multiplied the Initial Guess by t, the new functions t sin t and t cos t are
not solutions of the corresponding homogeneous equation, and the new guess
works.
Example Solve y'' + 4y' = 2t − 5.
The fundamental set of the corresponding homogeneous equation consists of the functions y1(t) = 1 and y2(t) = e^{−4t}. The Initial Guess, according to Prescription 2, Y(t) = A t + B, is a linear combination of the functions t and 1, and the second of these functions is a solution of the corresponding homogeneous equation. We multiply the Initial Guess by t, obtaining Y(t) = t(A t + B) = A t² + B t. This is a linear combination of t² and t, neither of which is a solution of the corresponding homogeneous equation. We plug it in:
2A + 4(2A t + B) = 2t − 5 .
Equating the coefficients in t, and the constant terms, we have
8A = 2
2A + 4B = −5 ,
i.e., A = 1/4 and B = −11/8. The particular solution is Y(t) = t²/4 − (11/8)t. Answer: y(t) = t²/4 − (11/8)t + c1 + c2 e^{−4t}.
Example Solve y'' + 2y' + y = t e^{−t}.
The fundamental set of the corresponding homogeneous equation consists of the functions y1(t) = e^{−t} and y2(t) = t e^{−t}. The Initial Guess, according to Prescription 3, (A t + B) e^{−t} = A t e^{−t} + B e^{−t}, is a linear combination of the same two functions. We multiply the Initial Guess by t: t(A t + B) e^{−t} = A t² e^{−t} + B t e^{−t}. The new guess is a linear combination of the functions t² e^{−t} and t e^{−t}. The first of these functions is not a solution of the corresponding homogeneous equation, but the second one is. Therefore, we multiply the Initial Guess by t²: Y = t²(A t + B) e^{−t} = A t³ e^{−t} + B t² e^{−t}. It is convenient to write Y = (A t³ + B t²) e^{−t}, and we plug it in:
(6A t + 2B) e^{−t} − 2(3A t² + 2B t) e^{−t} + (A t³ + B t²) e^{−t} + 2[ (3A t² + 2B t) e^{−t} − (A t³ + B t²) e^{−t} ] + (A t³ + B t²) e^{−t} = t e^{−t} .
We divide both sides by e^{−t}, and simplify:
6A t = t
2B = 0 ,
i.e., A = 1/6 and B = 0. Answer: y(t) = (1/6) t³ e^{−t} + c1 e^{−t} + c2 t e^{−t}.
2.8 Variation of Parameters
We now present a more general way to find a particular solution of the non-homogeneous equation
(8.1)    y'' + p(t) y' + g(t) y = f(t) .
We assume that the corresponding homogeneous equation
(8.2)    y'' + p(t) y' + g(t) y = 0
has a fundamental solution set y1(t) and y2(t) (so that c1 y1(t) + c2 y2(t) is the general solution of (8.2)). We now look for a particular solution of (8.1) in the form
(8.3)    Y(t) = u1(t) y1(t) + u2(t) y2(t) ,
with some functions u1(t) and u2(t), which we shall choose to satisfy the following two equations:
(8.4)    u1'(t) y1(t) + u2'(t) y2(t) = 0
         u1'(t) y1'(t) + u2'(t) y2'(t) = f(t) .
This is a system of two linear equations for u1'(t) and u2'(t). Its determinant W(t) = y1(t) y2'(t) − y1'(t) y2(t) is the Wronskian of y1(t) and y2(t). We know that W(t) ≠ 0 for all t, because y1(t) and y2(t) form a fundamental solution set. By Cramer's rule (or by elimination) we solve:
(8.5)    u1'(t) = − f(t) y2(t)/W(t) ,  u2'(t) = f(t) y1(t)/W(t) .
We then compute u1(t) and u2(t) by integration. It turns out that Y(t) = u1(t) y1(t) + u2(t) y2(t) is then a particular solution of the non-homogeneous equation (8.1). To verify this, we compute the derivatives of Y(t), and plug them into our equation (8.1):
Y'(t) = u1'(t) y1(t) + u2'(t) y2(t) + u1(t) y1'(t) + u2(t) y2'(t) = u1(t) y1'(t) + u2(t) y2'(t) .
The first two terms have disappeared, thanks to the first formula in (8.4). Next:
Y''(t) = u1'(t) y1'(t) + u2'(t) y2'(t) + u1(t) y1''(t) + u2(t) y2''(t) = f(t) + u1(t) y1''(t) + u2(t) y2''(t) ,
by using the second formula in (8.4). Then
Y'' + p Y' + g Y = f(t) + u1 y1'' + u2 y2'' + p(u1 y1' + u2 y2') + g(u1 y1 + u2 y2) = f(t) + u1 (y1'' + p y1' + g y1) + u2 (y2'' + p y2' + g y2) = f(t) ,
which means that Y(t) is a particular solution. (Both brackets are zero, because y1(t) and y2(t) are solutions of the homogeneous equation.)
In practice, one begins by writing down the formulas (8.5).
Example y'' + y = tan t.
The fundamental set of the corresponding homogeneous equation is y1 = sin t and y2 = cos t. Their Wronskian is
W(t) = sin t (cos t)' − (sin t)' cos t = −sin² t − cos² t = −1 ,
and the formulas (8.5) give
u1'(t) = tan t cos t = sin t ,
u2'(t) = −tan t sin t = −sin² t/cos t .
Integrating, u1(t) = −cos t. Here we set the constant of integration to zero, because we only need one particular solution. Integrating the second formula,
u2(t) = −∫ (sin² t/cos t) dt = −∫ ((1 − cos² t)/cos t) dt = ∫ (−sec t + cos t) dt = −ln |sec t + tan t| + sin t .
We have a particular solution
Y(t) = −cos t sin t + (−ln |sec t + tan t| + sin t) cos t = −cos t ln |sec t + tan t| .
Answer: y(t) = −cos t ln |sec t + tan t| + c1 sin t + c2 cos t.
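A one-line symbolic check (mine) that this Y(t) really solves y'' + y = tan t:

```python
import sympy as sp

t = sp.symbols('t')
Y = -sp.cos(t)*sp.log(sp.sec(t) + sp.tan(t))
print(sp.simplify(Y.diff(t, 2) + Y - sp.tan(t)))   # 0
```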
2.9 Convolution Integral
2.9.1 Differentiating Integrals
If g(t, s) is a function of two variables, then the integral ∫_a^b g(t, s) ds depends on the parameter t (s is a dummy variable). We differentiate this integral as follows:
d/dt ∫_a^b g(t, s) ds = ∫_a^b g_t(t, s) ds ,
where g_t(t, s) denotes the partial derivative in t. To differentiate the integral ∫_a^t g(s) ds, we use the Fundamental Theorem of Calculus:
d/dt ∫_a^t g(s) ds = g(t) .
The integral ∫_a^t g(t, s) ds depends on the parameter t, and it also has t as its upper limit. One can show that
d/dt ∫_a^t g(t, s) ds = ∫_a^t g_t(t, s) ds + g(t, t) ,
i.e., in effect we combine the previous two formulas. Now let z(t) and f(t) be some functions; then the last formula gives
(9.1)    d/dt ∫_a^t z(t − s) f(s) ds = ∫_a^t z'(t − s) f(s) ds + z(0) f(t) .
2.9.2 Yet Another Way to Compute a Particular Solution
We consider again the non-homogeneous equation
(9.2)    y'' + p y' + g y = f(t) ,
where p and g are given numbers, and f(t) is a given function. Let z(t) denote the solution of the corresponding homogeneous equation, satisfying z(0) = 0 and z'(0) = 1, i.e., z(t) satisfies
(9.3)    z'' + p z' + g z = 0, z(0) = 0, z'(0) = 1 .
Then we can write a particular solution of (9.2) as a convolution integral:
(9.4)    Y(t) = ∫_0^t z(t − s) f(s) ds .
To justify this formula, we compute the derivatives of Y(t), by using the formula (9.1), and the initial conditions z(0) = 0 and z'(0) = 1:
Y'(t) = ∫_0^t z'(t − s) f(s) ds + z(0) f(t) = ∫_0^t z'(t − s) f(s) ds ;
Y''(t) = ∫_0^t z''(t − s) f(s) ds + z'(0) f(t) = ∫_0^t z''(t − s) f(s) ds + f(t) .
Then
Y''(t) + p Y'(t) + g Y(t) = ∫_0^t [ z''(t − s) + p z'(t − s) + g z(t − s) ] f(s) ds + f(t) = f(t) .
Here the integral is zero, because z(t) satisfies the homogeneous equation at all values of its argument, including t − s.
Example Let us now revisit the equation
y'' + y = tan t .
Solving
z'' + z = 0, z(0) = 0, z'(0) = 1 ,
we get z(t) = sin t. Then
Y(t) = ∫_0^t sin(t − s) tan s ds .
Writing sin(t − s) = sin t cos s − cos t sin s, and integrating, one obtains the old solution, plus an extra term sin t, which can be absorbed into the general solution.
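A numerical version of the same computation (my own sketch): evaluate the convolution integral (9.4) by the trapezoidal rule, and compare it with the closed form sin t − cos t ln|sec t + tan t| obtained by direct integration.

```python
import numpy as np

t = 1.0
s = np.linspace(0.0, t, 2001)
f = np.sin(t - s) * np.tan(s)
Y = np.sum((f[1:] + f[:-1]) / 2 * np.diff(s))              # formula (9.4)
exact = np.sin(t) - np.cos(t)*np.log(1/np.cos(t) + np.tan(t))
print(Y, exact)   # the two values agree to several digits
```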
What are the advantages of the integral formula (9.4)? It is certainly easy
to remember, and it gives us one of the simplest examples of the Duhamel
Principle, one of the fundamental ideas of Mathematical Physics.
2.10 Applications of Second Order Equations
We present several examples. You will find many more applications in your science and engineering textbooks. The importance of differential equations was clear already to Newton, the great scientist and co-inventor of calculus.
2.10.1 Vibrating Spring
Let y = y(t) be the displacement of a spring from its natural position. Its motion is governed by Newton's second law
m a = f .
The acceleration is a = y''(t). We assume that the only force f acting on the spring is its own restoring force, which by Hooke's law is f = −k y, for small displacements. Here the physical constant k > 0 describes the stiffness (or the hardness) of the spring. We have
m y'' = −k y .
Dividing both sides by the mass m of the spring, and denoting k/m = ω² (or ω = √(k/m)), we obtain
y'' + ω² y = 0 .
The general solution, y(t) = c1 cos ωt + c2 sin ωt, gives us the harmonic motion of the spring. Let us write the solution y(t) as
y(t) = √(c1² + c2²) ( (c1/√(c1² + c2²)) cos ωt + (c2/√(c1² + c2²)) sin ωt ) = A ( (c1/A) cos ωt + (c2/A) sin ωt ) ,
where we denoted A = √(c1² + c2²). Observe that (c1/A)² + (c2/A)² = 1, which means that we can find an angle δ such that cos δ = c1/A and sin δ = c2/A. Then our solution takes the form
y(t) = A (cos ωt cos δ + sin ωt sin δ) = A cos(ωt − δ) .
I.e., the harmonic motion is just a shifted cosine curve of amplitude A = √(c1² + c2²). It is a periodic function of period 2π/ω. We see that the larger ω is, the smaller is the period. So ω gives us the frequency of oscillations. The constants c1 and c2 can be computed, once the initial displacement y(0) and the initial velocity y'(0) are given.
Example Solving the initial value problem
y'' + 4y = 0, y(0) = 3, y'(0) = −8 ,
one gets y(t) = 3 cos 2t − 4 sin 2t. This solution is a periodic function, with the amplitude 5, the frequency 2, and the period π.
The equation
(10.1)    y'' + ω² y = f(t)
models the case when an external force, whose acceleration is equal to f(t), is applied to the spring. Indeed, the equation of motion is now
m y'' = −k y + m f(t) ,
from which we get (10.1), dividing by m. Let us consider the case of a periodic forcing term:
(10.2)    y'' + ω² y = a sin νt ,
where a > 0 is the amplitude of the external force, and ν is the forcing frequency. If ν ≠ ω, we look for a particular solution of this non-homogeneous equation in the form Y(t) = A sin νt. Plugging in gives A = a/(ω² − ν²). The general solution of (10.2), which is
y(t) = (a/(ω² − ν²)) sin νt + c1 cos ωt + c2 sin ωt ,
is a superposition of harmonic motion and a response term to the external force. We see that the solution is still bounded, although not periodic anymore (it is called "quasiperiodic").
[Figure 2.3: The graph of the secular term y = −(1/4) t cos 2t, oscillating between the lines y = −t/4 and y = t/4.]
A very important case is when ν = ω, i.e., when the forcing frequency is the same as the natural frequency. Then the particular solution has the form Y(t) = A t sin νt + B t cos νt, i.e., we will have unbounded solutions as time t increases. This is the case of resonance, when a bounded external force produces an unbounded response. Large displacements will break the spring. Resonance is a serious engineering concern.
Example y'' + 4y = sin 2t, y(0) = 0, y'(0) = 1.
Both the natural and the forcing frequencies are equal to 2. The fundamental set of the corresponding homogeneous equation consists of sin 2t and cos 2t. We search for a particular solution in the form Y(t) = A t sin 2t + B t cos 2t. As before, we compute Y(t) = −(1/4) t cos 2t. Then the general solution is y(t) = −(1/4) t cos 2t + c1 sin 2t + c2 cos 2t. Using the initial conditions, we calculate c1 = 5/8 and c2 = 0, so that y(t) = −(1/4) t cos 2t + (5/8) sin 2t. The term −(1/4) t cos 2t introduces oscillations, whose amplitude (1/4) t increases without bound with time t. (It is customary to call this unbounded term a secular term, which seems to imply that the harmonic terms are divine.)
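A quick numerical confirmation (my own) that this resonant solution satisfies the equation:

```python
import numpy as np

t = np.linspace(0.0, 30.0, 200)
y   = -t*np.cos(2*t)/4 + 5*np.sin(2*t)/8
ypp = t*np.cos(2*t) - 1.5*np.sin(2*t)            # y'', differentiated by hand
print(np.max(np.abs(ypp + 4*y - np.sin(2*t))))   # ~1e-14
```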
2.10.2 Problems
I. Solve the non-homogeneous equations:
1. 2y'' − 3y' + y = 2 sin t.   Ans. y = c1 e^{t/2} + c2 e^{t} + (3/5) cos t − (1/5) sin t.
2. y'' + 4y' + 5y = −sin 2t + 3 cos 2t.   Ans. y = (23/65) sin 2t + (11/65) cos 2t + c1 e^{−2t} cos t + c2 e^{−2t} sin t.
3. y'' − 2y' + y = t + 2.   Ans. y = t + 4 + c1 e^t + c2 t e^t.
4. y'' + 4y = t² − 3t + 1.   Ans. y = (1/8)(2t² − 6t + 1) + c1 cos 2t + c2 sin 2t.
5. y'' − 4y = t e^{3t}, y(0) = 0, y'(0) = 1.   Ans. y = (1/5) t e^{3t} − (6/25) e^{3t} − (13/50) e^{−2t} + (1/2) e^{2t}.
6. 4y'' + 8y' + 5y = 2t − sin 3t.   Ans. y = (2/5) t − 16/25 + (31/1537) sin 3t + (24/1537) cos 3t + c1 e^{−t} cos(t/2) + c2 e^{−t} sin(t/2).
7. y'' + y = 2e^{4t} + t².   Ans. y = (2/17) e^{4t} + t² − 2 + c1 cos t + c2 sin t.
8. y'' − y' = 2 sin t − cos 2t.   Ans. y = cos t − sin t + (1/5) cos 2t + (1/10) sin 2t + c1 + c2 e^t.
II. Solve the non-homogeneous equations:
1. y'' + y = 2 cos t.   Ans. y = t sin t + c1 cos t + c2 sin t.
2. y'' + y' − 6y = −e^{2t}.   Ans. y = −(1/5) t e^{2t} + c1 e^{−3t} + c2 e^{2t}.
3. y'' + 2y' + y = 2e^{3t}.   Ans. y = (1/8) e^{3t} + c1 e^{−t} + c2 t e^{−t}.
4. y'' − 2y' + y = t e^t.   Ans. y = (1/6) t³ e^t + c1 e^t + c2 t e^t.
5. y'' − 4y' = 2 − cos t.   Ans. y = −t/2 + (1/17) cos t + (4/17) sin t + c1 e^{4t} + c2.
6. 2y'' − y' − y = t + 1.   Ans. y = −t + c1 e^{−t/2} + c2 e^t.
III. Write down the form in which one should look for a particular solution, but DO NOT compute the coefficients:
1. y'' − 4y' + 4y = 3t e^{2t} + sin 4t − t².
2. y'' − 9y' + 9y = −e^{3t} + t e^{4t} + sin t − 4 cos t.
IV. Find a particular solution, by using variation of parameters, and then write down the general solution:
1. y'' − 2y' + y = e^t/(1 + t²).   Ans. y = c1 e^t + c2 t e^t − (1/2) e^t ln(1 + t²) + t e^t tan⁻¹ t.
2. y'' − 2y' + y = 2e^t.   Ans. y = t² e^t + c1 e^t + c2 t e^t.
3. y'' + 2y' + y = e^{−t}/t.   Ans. y = −t e^{−t} + t e^{−t} ln t + c1 e^{−t} + c2 t e^{−t}.
4. y'' + y = sec t.   Ans. y = c1 cos t + c2 sin t + ln(cos t) cos t + t sin t.
5. y'' + 2y' + y = 2e^{3t}.   Ans. y = (1/8) e^{3t} + c1 e^{−t} + c2 t e^{−t}.
V. Verify that the functions y1(t) and y2(t) form a fundamental solution set for the corresponding homogeneous equation, and then use variation of parameters to find the general solution.
1. t^2 y'' − 2y = t^3 − 1.  y1(t) = t^2, y2(t) = t^{−1}.
Hint: begin by putting the equation into the right form.
Ans. y = 1/2 + (1/4) t^3 + c1 t^2 + c2 (1/t).
2. t y'' − (1 + t) y' + y = t^2 e^{3t}.  y1(t) = t + 1, y2(t) = e^t.
Answer. y = (1/12) e^{3t} (2t − 1) + c1 (t + 1) + c2 e^t.
3∗. (3t^3 + t) y'' + 2y' − 6t y = 4 − 12t^2.  y1(t) = 1/t, y2(t) = t^2 + 1.
Hint: Use Mathematica to compute the integrals.
Answer. y = 2t + c1 (1/t) + c2 (t^2 + 1).
VI.
1. Use the convolution integral to solve
y'' + y = t^2, y(0) = 0, y'(0) = 1 .
Answer. y = t^2 − 2 + 2 cos t + sin t.
2. Show that y(t) = ∫_0^t ((t − s)^{n−1}/(n − 1)!) f(s) ds gives a solution of the n-th order equation
y^{(n)} = f(t) .
(I.e., this formula lets you compute n consecutive anti-derivatives at once.)
Hint: Use (9.1).
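A sketch (mine) of Problem 1, assuming the convolution formula of Section 2.9 in the form y_p(t) = ∫_0^t sin(t − s) f(s) ds for y'' + y = f(t), with sin t added to match y'(0) = 1:

import sympy as sp

t, s = sp.symbols('t s')
y = sp.integrate(sp.sin(t - s)*s**2, (s, 0, t)) + sp.sin(t)
print(sp.simplify(y))                               # t**2 - 2 + 2*cos(t) + sin(t)
print(sp.simplify(sp.diff(y, t, 2) + y - t**2))     # 0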
VII.
1. A spring has natural frequency ω = 2. Its initial displacement is −1, and its initial velocity is 2. Find its displacement y(t) at any time t. What is the amplitude of the oscillations?
Answer. y(t) = sin 2t − cos 2t, A = √2.
2. A spring of mass 2 lb is hanging from the ceiling, and its stiffness constant is k = 18. Initially the spring is pushed up 3 inches, and is given an initial velocity of 2 inch/sec, directed downward. Find the displacement of the spring y(t) at any time t, and the amplitude of the oscillations. Assume that the y axis is directed down from the equilibrium position.
Answer. y(t) = (2/3) sin 3t − 3 cos 3t, A = √85/3.
3. A spring has natural frequency ω = 3. An outside force, with acceleration f(t) = 2 cos γt, is applied to the spring. Here γ is a constant, γ ≠ 3. Find the displacement of the spring y(t) at any time t. What happens to the amplitude of oscillations in case γ is close to 3?
Answer. y(t) = (2/(9 − γ^2)) cos γt + c1 cos 3t + c2 sin 3t.
4. Assume that γ = 3 in the preceding problem. Find the displacement of the spring y(t) at any time t. What happens to the spring in the long run?
Answer. y(t) = (1/3) t sin 3t + c1 cos 3t + c2 sin 3t.
5. Consider the dissipative (or damped) motion of a spring
y'' + αy' + 9y = 0 .
Write down the solution, assuming that α < 6. What is the smallest value of the dissipation constant α that will prevent the spring from oscillating?
Answer. No oscillations for α ≥ 6.
6. Consider forced vibrations of a dissipative spring
y'' + αy' + 9y = sin 3t .
Write down the general solution for
(i) α = 0;  (ii) α ≠ 0.
What does friction do to the resonance?
2.10.3 Meteor Approaching the Earth
Let r = r(t) be the distance of the meteor from the center of the earth. The motion of the meteor due to the Earth's gravitation is governed by Newton's law of gravitation
(10.3)  m r'' = −mMG/r^2 .
Here m is the mass of the meteor, M denotes the mass of the earth, and G is the universal gravitational constant. Let a be the radius of the Earth. If an object is sitting on the Earth's surface, then r = a and r'' = −g, so that from (10.3)
g = MG/a^2 .
I.e., MG = ga^2, and we can rewrite (10.3) as
(10.4)  r'' = −g a^2/r^2 .
We could solve this equation by letting r' = v(r), because the independent variable t is missing. Instead, let us multiply both sides of the equation by r', and write it as
r' r'' + g (a^2/r^2) r' = 0 ;
(d/dt) [ (1/2) r'^2 − g a^2/r ] = 0 ;
(10.5)  (1/2) r'^2(t) − g a^2/r(t) = c .
I.e., the energy of the meteor, E(t) = (1/2) r'^2(t) − g a^2/r(t), remains constant at all time. (That is why the gravitational force field is called conservative.) We can now express r'(t) = −√(2c + 2g a^2/r(t)), and calculate the motion of the meteor r(t) by separation of variables. However, as we are not riding on the meteor, this seems to be not worth the effort. What really concerns us is the velocity of the impact, as the meteor hits the Earth.
Let us assume that the meteor "begins" its journey with zero velocity, r'(0) = 0, and at a distance so large that we may assume r(0) = ∞. Then the energy of the meteor at time t = 0 is zero, E(0) = 0. But the energy remains constant at all time, so that the energy at the time of impact is also zero. At the time of impact, r = a, and we denote the velocity of impact by v, i.e., r' = v. Then from (10.5)
(1/2) v^2 − g a^2/a = 0 ,
i.e.,
v = √(2ga) .
Food for thought: the velocity of impact is the same as would be achieved by free fall from height a.
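Numerically, with g ≈ 9.8 m/s^2 and a ≈ 6.37 · 10^6 m, this gives v ≈ 11.2 km/s, which is the familiar escape velocity of the Earth. A one-line check (my own illustration):

import math

g = 9.8       # m/s^2
a = 6.37e6    # m, approximate radius of the Earth
v = math.sqrt(2*g*a)
print(f"v = {v:.0f} m/s (about {v/1000:.1f} km/s)")   # roughly 11200 m/s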
Let us now re-visit the harmonic oscillations of a spring:
y'' + ω^2 y = 0 .
Similarly to the meteor case, multiply this equation by y':
y' y'' + ω^2 y y' = 0 ;
(d/dt) [ (1/2) y'^2 + (1/2) ω^2 y^2 ] = 0 ;
E(t) = (1/2) y'^2 + (1/2) ω^2 y^2 = constant .
With the energy E(t) conserved, no wonder the motion of the spring was periodic.
2.10.4 Damped Oscillations
We add an extra term to our model of spring motion:
m y'' = −k y − k1 y' ,
where k1 is another positive constant. I.e., we assume there is an additional force, which is directed opposite, and is proportional, to the velocity of motion y'. This can be either air resistance or friction. Denoting k1/m = α and k/m = ω^2, we have
(10.6)  y'' + αy' + ω^2 y = 0 .
Let us see what effect the extra term αy' has on the energy of the spring, E(t) = (1/2) y'^2 + (1/2) ω^2 y^2. We differentiate the energy, express y'' = −αy' − ω^2 y from the equation (10.6), and obtain
E'(t) = y' y'' + ω^2 y y' = y' (−αy' − ω^2 y) + ω^2 y y' = −α y'^2 .
We see that the energy decreases for all time. This is an example of dissipative motion. We expect the amplitude of oscillations to decrease with time. We call α the dissipation coefficient.
To solve the equation (10.6), we write down its characteristic equation
r^2 + αr + ω^2 = 0 .
Its roots are (−α ± √(α^2 − 4ω^2))/2. There are several cases to consider.
(i) α^2 − 4ω^2 < 0. (I.e., the dissipation coefficient α is small.) The roots are complex. If we denote α^2 − 4ω^2 = −q^2, the roots are −α/2 ± i q/2. The general solution
y(t) = c1 e^{−(α/2)t} sin(q/2)t + c2 e^{−(α/2)t} cos(q/2)t
exhibits damped oscillations (the amplitude of the oscillations tends to zero, as t → ∞).
(ii) α^2 − 4ω^2 = 0. There is a double real root −α/2. The general solution
y(t) = c1 e^{−(α/2)t} + c2 t e^{−(α/2)t}
tends to zero as t → ∞, without oscillating.
(iii) α^2 − 4ω^2 > 0. The roots are real and distinct. If we denote α^2 − 4ω^2 = q^2, the roots are r1 = (−α + q)/2 and r2 = (−α − q)/2. Both are negative, because q < α. The general solution
y(t) = c1 e^{r1 t} + c2 e^{r2 t}
tends to zero as t → ∞, without oscillating. We see that large enough dissipation "kills" the oscillations.
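The regime is decided by the sign of the discriminant α^2 − 4ω^2 alone. A small sketch (my own, with ω = 3, matching Problem 5 above, where α = 6 is the critical value):

def damping_regime(alpha, omega):
    # classify y'' + alpha y' + omega^2 y = 0 by its discriminant
    disc = alpha**2 - 4*omega**2
    if disc < 0:
        return "underdamped: decaying oscillations"
    elif disc == 0:
        return "critically damped: no oscillations"
    return "overdamped: no oscillations"

for alpha in (1, 6, 10):
    print(alpha, damping_regime(alpha, 3))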
2.11 Further Applications
2.11.1 Forced and Damped Oscillations
It turns out that even a little damping is enough to avoid resonance. We consider the model
(11.1)  y'' + αy' + ω^2 y = sin νt .
Our theory tells us to look for a particular solution in the form Y(t) = A1 cos νt + A2 sin νt. Once the constants A1 and A2 are determined, we can use trigonometric identities to put this solution into the form
(11.2)  Y(t) = A sin(νt − γ) ,
with constants A and γ depending on A1 and A2. So, let us look for a particular solution directly in the form (11.2). We transform the forcing term as a linear combination of sin(νt − γ) and cos(νt − γ):
sin νt = sin((νt − γ) + γ) = sin(νt − γ) cos γ + cos(νt − γ) sin γ .
We now plug Y(t) into the equation (11.1):
−Aν^2 sin(νt − γ) + Aαν cos(νt − γ) + Aω^2 sin(νt − γ) = sin(νt − γ) cos γ + cos(νt − γ) sin γ .
Equating the coefficients of sin(νt − γ) and of cos(νt − γ), we have
(11.3)  A(ω^2 − ν^2) = cos γ ,  Aαν = sin γ .
We now square both of these equations and add:
A^2 (ω^2 − ν^2)^2 + A^2 α^2 ν^2 = 1 ,
which allows us to calculate A:
A = 1/√((ω^2 − ν^2)^2 + α^2 ν^2) .
To calculate γ, we divide the second equation in (11.3) by the first:
tan γ = αν/(ω^2 − ν^2), i.e., γ = tan^{−1} (αν/(ω^2 − ν^2)) .
We have computed a particular solution
Y(t) = (1/√((ω^2 − ν^2)^2 + α^2 ν^2)) sin(νt − γ), where γ = tan^{−1} (αν/(ω^2 − ν^2)) .
We now make a physically reasonable assumption that the damping coefficient α is small. Then the characteristic equation of the homogeneous equation corresponding to (11.1),
r^2 + αr + ω^2 = 0 ,
has a pair of complex roots −α/2 ± iβ, where β = √(4ω^2 − α^2)/2. The general solution of (11.1) is then
y(t) = c1 e^{−(α/2)t} cos βt + c2 e^{−(α/2)t} sin βt + (1/√((ω^2 − ν^2)^2 + α^2 ν^2)) sin(νt − γ) .
The first two terms of the solution are called the transient oscillations, because they quickly tend to zero as time goes on ("sic transit gloria mundi"), so that the last term Y(t) describes the oscillations in the long run. We see that the oscillations of Y(t) are bounded, no matter what the frequency ν of the forcing term is. The resonance is gone! Moreover, the largest amplitude occurs not at ν = ω, but at a slightly smaller value of ν. Indeed, the maximal amplitude happens when the quantity in the denominator, i.e., (ω^2 − ν^2)^2 + α^2 ν^2, is the smallest. This quantity is a quadratic in ν^2. Its minimum occurs when ν^2 = ω^2 − α^2/2, i.e., when ν = √(ω^2 − α^2/2).
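A quick numerical confirmation of the location of the peak (a sketch of mine; the values ω = 3 and α = 0.4 are chosen arbitrarily):

import numpy as np

omega, alpha = 3.0, 0.4
nu = np.linspace(0.1, 6.0, 100_000)
A = 1/np.sqrt((omega**2 - nu**2)**2 + alpha**2*nu**2)   # amplitude of Y(t)
print(nu[np.argmax(A)])                # numerically found peak
print(np.sqrt(omega**2 - alpha**2/2))  # predicted peak, about 2.9866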
2.11.2 Discontinuous Forcing Term
We now consider equations with a jumping force function. A simple function with a jump at some number c is the Heaviside step function
u_c(t) = 0 if t ≤ c, and u_c(t) = 1 if t > c .
Example Solve for t > 0 the problem
y'' + 4y = f(t), y(0) = 0, y'(0) = 3 ,
where f(t) = 0 if t ≤ π/4, and f(t) = t + 1 if t > π/4. (We see that no external force is applied before the time t = π/4, and the force is equal to t + 1 afterwards. The forcing function can be written as f(t) = u_{π/4}(t)(t + 1).)
The problem naturally breaks down into two parts. When t ≤ π/4, we are solving
y'' + 4y = 0 .
Its general solution is y(t) = c1 cos 2t + c2 sin 2t. From the initial conditions we find that c1 = 0, c2 = 3/2, i.e.,
(11.4)  y(t) = (3/2) sin 2t, for t ≤ π/4 .
At later times, when t > π/4, our equation is
(11.5)  y'' + 4y = t + 1 .
But what are the new initial conditions at the time t = π/4? Clearly, we can get them from (11.4):
(11.6)  y(π/4) = 3/2, y'(π/4) = 0 .
The general solution of (11.5) is y(t) = (1/4) t + 1/4 + c1 cos 2t + c2 sin 2t. Calculating c1 and c2 from the initial conditions in (11.6), we find that y(t) = (1/4) t + 1/4 + (1/8) cos 2t + (5/4 − π/16) sin 2t. Answer:
y(t) = (3/2) sin 2t, if t ≤ π/4 ;
y(t) = (1/4) t + 1/4 + (1/8) cos 2t + (5/4 − π/16) sin 2t, if t > π/4 .
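The two pieces match smoothly at t = π/4: they have equal values and equal first derivatives there, which can be confirmed symbolically (a SymPy sketch of mine):

import sympy as sp

t = sp.symbols('t')
y_left  = sp.Rational(3, 2)*sp.sin(2*t)
y_right = t/4 + sp.Rational(1, 4) + sp.Rational(1, 8)*sp.cos(2*t) \
          + (sp.Rational(5, 4) - sp.pi/16)*sp.sin(2*t)

c = sp.pi/4
print(sp.simplify(y_left.subs(t, c) - y_right.subs(t, c)))                 # 0
print(sp.simplify((sp.diff(y_left, t) - sp.diff(y_right, t)).subs(t, c))) # 0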
2.11.3 Oscillations of a Pendulum
Assume that a small ball of mass m is attached to one end of a rigid rod of length l, while the other end of the rod is attached to the ceiling. Assume also that the mass of the rod itself is so small that it can be neglected. Clearly, the ball will move on an arch of a circle of radius l. Let θ = θ(t) be the angle that the pendulum makes with the vertical line at the time t. We assume that θ > 0 if the pendulum is to the left of the vertical line, and θ < 0 to the right of it. If the pendulum moves by an angle of θ radians, it covers the distance lθ = lθ(t). It follows that lθ'(t) gives the velocity, and lθ''(t) the acceleration. The force of gravity acts on the mass, but only the projection of this force on the tangent line to the circle is active, which is mg cos(π/2 − θ) = mg sin θ. Newton's second law of motion gives
m l θ''(t) = −mg sin θ .
(Minus, because the force works to decrease θ, when θ > 0.) If we denote g/l = ω^2, we have the pendulum equation
θ''(t) + ω^2 sin θ(t) = 0 .
If the oscillation angle θ(t) is small, then sin θ(t) ≈ θ(t), giving us again the harmonic oscillations, this time as a model of small oscillations of a pendulum.
2.11.4 Sympathetic Oscillations
Suppose we have two pendulums hanging from the ceiling, and they are coupled (connected) through a weightless spring. Let x1 denote the angle the left pendulum makes with the vertical, which we take to be positive if the pendulum is to the left of the vertical, and x1 < 0 if the pendulum is to the right of the vertical. Let x2 be the angle the right pendulum makes with the vertical, with the same assumptions on its sign. We assume that x1 and x2 are small in absolute value, which means that each pendulum separately can be modeled by a harmonic oscillator. For coupled pendulums the model is
(11.7)  x1'' + ω^2 x1 = −k(x1 − x2) ,
        x2'' + ω^2 x2 = k(x1 − x2) ,
where k > 0 is a physical constant, which measures how stiff the coupling spring is. Indeed, if x1 > x2, then the coupling spring is extended, so that the spring tries to contract, and in doing so it pulls back the left pendulum, while accelerating the right pendulum. (Correspondingly, the forcing term is negative in the first equation, and positive in the second one.) In case x1 < x2, the spring is compressed, and as it tries to expand, it accelerates the first (left) pendulum, and slows down the second (right) pendulum. We solve the system (11.7) together with the simple initial conditions
(11.8)  x1(0) = a, x1'(0) = 0, x2(0) = 0, x2'(0) = 0 ,
which correspond to the first pendulum beginning with a small given displacement angle a and no initial velocity, while the second pendulum is at rest.
We now add the equations in (11.7), and call z1 = x1 + x2. We get
z1'' + ω^2 z1 = 0, z1(0) = a, z1'(0) = 0 .
The solution of this problem is, of course, z1(t) = a cos ωt. Subtracting the second equation from the first, and calling z2 = x1 − x2, we have z2'' + ω^2 z2 = −2k z2, or
z2'' + (ω^2 + 2k) z2 = 0, z2(0) = a, z2'(0) = 0 .
Denoting ω^2 + 2k = ω1^2, i.e., ω1 = √(ω^2 + 2k), we have z2(t) = a cos ω1 t. Clearly, z1 + z2 = 2x1, i.e.,
(11.9)  x1 = (z1 + z2)/2 = (a cos ωt + a cos ω1 t)/2 = a cos((ω1 − ω)t/2) cos((ω1 + ω)t/2) ,
using a trig identity for the last step. Similarly,
(11.10)  x2 = (z1 − z2)/2 = (a cos ωt − a cos ω1 t)/2 = a sin((ω1 − ω)t/2) sin((ω1 + ω)t/2) .
We now analyze the solutions given by the formulas (11.9) and (11.10). If k is small, i.e., if the coupling is weak, then ω1 is close to ω, and so their difference ω1 − ω is small. It follows that both cos((ω1 − ω)t/2) and sin((ω1 − ω)t/2) change very slowly with time t. We now rewrite the solutions as
x1 = A cos((ω1 + ω)t/2), and x2 = B sin((ω1 + ω)t/2) ,
where we regard A = a cos((ω1 − ω)t/2) and B = a sin((ω1 − ω)t/2) as slowly varying amplitudes. So we may think of the pendulums as oscillating with the frequency (ω1 + ω)/2, and with slowly varying amplitudes A and B. Observe that when cos((ω1 − ω)t/2) is zero, and the first pendulum is at rest, we have |sin((ω1 − ω)t/2)| = 1, so that the amplitude of the second pendulum takes its largest possible value. We have therefore a complete exchange of energy: when one of the pendulums is doing the maximal work, the other one is resting. We have "sympathy" between the pendulums. Observe also that A^2 + B^2 = a^2. This means that at all times t, the point of amplitudes (A(t), B(t)) lies on a circle of radius a.
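A short numerical sketch (mine) of the formulas (11.9) and (11.10), confirming the identity A^2 + B^2 = a^2 (the values a = 1, ω = 3, k = 0.1 are arbitrary):

import numpy as np

a, omega, k = 1.0, 3.0, 0.1
omega1 = np.sqrt(omega**2 + 2*k)
t = np.linspace(0, 200, 2000)

x1 = a*np.cos((omega1 - omega)*t/2)*np.cos((omega1 + omega)*t/2)
x2 = a*np.sin((omega1 - omega)*t/2)*np.sin((omega1 + omega)*t/2)

A = a*np.cos((omega1 - omega)*t/2)   # slowly varying amplitudes
B = a*np.sin((omega1 - omega)*t/2)
print(np.allclose(A**2 + B**2, a**2))   # True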
2.12 Oscillations of a Spring Subject to a Periodic Force
2.12.1 Fourier Series
For vectors in three dimensions the central notion is that of the scalar product (also known as the "inner product" or "dot product"). Namely, if x = (x1, x2, x3) and y = (y1, y2, y3) are column vectors, then their scalar product is
(x, y) = x1 y1 + x2 y2 + x3 y3 .
The scalar product can be used to compute the length of a vector, ||x|| = √(x, x), or the angle θ between the vectors x and y:
cos θ = (x, y)/(||x|| ||y||) .
In particular, the vectors x and y are orthogonal if and only if (x, y) = 0. If i, j and k are the unit coordinate vectors, then (x, i) = x1, (x, j) = x2 and (x, k) = x3. Writing x = x1 i + x2 j + x3 k, we express
(12.1)  x = (x, i) i + (x, j) j + (x, k) k .
This formula gives probably the simplest example of a Fourier series.
We shall now consider functions f(t) that are periodic, with period 2π. Such functions show us everything they have got on any interval of length 2π, so let us consider them on the interval (−π, π). Given two functions f(t) and g(t), we define their scalar product as
(f, g) = ∫_{−π}^{π} f(t) g(t) dt .
We call the functions orthogonal if (f, g) = 0. For example, (sin t, cos t) = ∫_{−π}^{π} sin t cos t dt = 0, so that sin t and cos t are orthogonal. (Observe that orthogonality of these functions has nothing to do with the angle at which their graphs intersect.) The notion of scalar product allows us to define the norm of a function:
||f|| = √(f, f) = √(∫_{−π}^{π} f^2(t) dt) .
For example,
||sin t|| = √(∫_{−π}^{π} sin^2 t dt) = √(∫_{−π}^{π} (1/2 − (1/2) cos 2t) dt) = √π .
Similarly, for any positive integer n, ||sin nt|| = √π, ||cos nt|| = √π, and ||1|| = √(2π).
We now consider the infinite set of functions
1, cos t, cos 2t, . . . , cos nt, . . . , sin t, sin 2t, . . . , sin nt, . . . .
They are all mutually orthogonal. This is because
(1, cos nt) = ∫_{−π}^{π} cos nt dt = 0;
(1, sin nt) = ∫_{−π}^{π} sin nt dt = 0;
(cos nt, cos mt) = ∫_{−π}^{π} cos nt cos mt dt = 0, for all n ≠ m;
(sin nt, sin mt) = ∫_{−π}^{π} sin nt sin mt dt = 0, for all n ≠ m;
(sin nt, cos mt) = ∫_{−π}^{π} sin nt cos mt dt = 0, for any n and m .
The last three integrals are computed by using trig identities. If we divide these functions by their norms, we obtain an orthonormal set of functions
1/√(2π), cos t/√π, cos 2t/√π, . . . , cos nt/√π, . . . , sin t/√π, sin 2t/√π, . . . , sin nt/√π, . . . ,
which is similar to the vectors i, j and k. Similarly to the formula (12.1) for vectors, we decompose an arbitrary function f(t) as
f(t) = α0 (1/√(2π)) + Σ_{n=1}^∞ [ αn (cos nt/√π) + βn (sin nt/√π) ] ,
where
α0 = (f(t), 1/√(2π)) = (1/√(2π)) ∫_{−π}^{π} f(t) dt;
αn = (f(t), cos nt/√π) = (1/√π) ∫_{−π}^{π} f(t) cos nt dt;
βn = (f(t), sin nt/√π) = (1/√π) ∫_{−π}^{π} f(t) sin nt dt .
It is customary to denote a0 = α0/√(2π), an = αn/√π, and bn = βn/√π, so that the Fourier series takes the final form
f(t) = a0 + Σ_{n=1}^∞ (an cos nt + bn sin nt) ,
with
a0 = (1/(2π)) ∫_{−π}^{π} f(t) dt;
an = (1/π) ∫_{−π}^{π} f(t) cos nt dt;
bn = (1/π) ∫_{−π}^{π} f(t) sin nt dt .
Example Let f(t) be a function of period 2π, which is equal to t + π on the interval (−π, π). If you look at the graph of this function, you will see why it has a scary name: the saw-tooth function. It is not defined at the points ±π, ±3π, ±5π, . . ., but this does not affect the integrals that we need to compute. Compute
a0 = (1/(2π)) ∫_{−π}^{π} (t + π) dt = (1/(4π)) (t + π)^2 |_{−π}^{π} = π .
Using guess-and-check,
an = (1/π) ∫_{−π}^{π} (t + π) cos nt dt = (1/π) [ (t + π)(sin nt/n) + cos nt/n^2 ]_{−π}^{π} = 0 ,
because sin nπ = 0, and cosine is an even function. Similarly,
bn = (1/π) ∫_{−π}^{π} (t + π) sin nt dt = (1/π) [ −(t + π)(cos nt/n) + sin nt/n^2 ]_{−π}^{π} = −(2/n) cos nπ = (2/n)(−1)^{n+1} .
Observe that cos nπ is equal to 1 for even n, and to −1 for odd n, i.e., cos nπ = (−1)^n. We have obtained a Fourier series for the function f(t), which is defined on (−∞, ∞) (with the exception of the points ±π, ±3π, . . .). Restricting to the interval (−π, π), we have
t + π = π + Σ_{n=1}^∞ (2/n)(−1)^{n+1} sin nt, for −π < t < π .
It might look like we did not accomplish much by expressing a simple function t + π through an infinite series. However, we can now express solutions of differential equations through Fourier series.
The saw-tooth function
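Partial sums of this series indeed converge to t + π inside (−π, π), although slowly; a quick numerical experiment (mine; the number of terms N is chosen arbitrarily):

import numpy as np

t = np.linspace(-3, 3, 7)     # sample points inside (-pi, pi)
N = 2000
S = np.pi + sum((2.0/n)*(-1)**(n + 1)*np.sin(n*t) for n in range(1, N + 1))
print(np.max(np.abs(S - (t + np.pi))))   # small; slowest near the jumps at ±π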
2.12.2 Vibrations of a Spring Subject to a Periodic Force
As before, the model is
y'' + ω^2 y = f(t) ,
where y = y(t) is the displacement, ω > 0 is a constant (the "natural frequency"), and f(t) is a given function of period 2π (the "external force"). This equation also models oscillations in electrical circuits. Expressing f(t) by its Fourier series, we have
y'' + ω^2 y = a0 + Σ_{n=1}^∞ (an cos nt + bn sin nt) .
Let us assume that ω ≠ n, for any integer n (to avoid resonance). According to our theory, we look for a particular solution in the form Y(t) = A0 + Σ_{n=1}^∞ (An cos nt + Bn sin nt). Plugging this in, we find
Y(t) = a0/ω^2 + Σ_{n=1}^∞ [ (an/(ω^2 − n^2)) cos nt + (bn/(ω^2 − n^2)) sin nt ] .
The general solution is then
y(t) = a0/ω^2 + Σ_{n=1}^∞ [ (an/(ω^2 − n^2)) cos nt + (bn/(ω^2 − n^2)) sin nt ] + c1 cos ωt + c2 sin ωt .
We see that the coefficients of the n-th harmonics, i.e., of cos nt and sin nt, are large, provided that the natural frequency ω is selected close to n. That is basically what happens when one turns the tuning knob on a radio set. (The knob controls ω, while your favorite station broadcasts at a frequency n.)
2.13 Euler's Equation
2.13.1 Preliminaries
What is the meaning of 3^{√2}? Or, more generally, what is the definition of the function t^r, where r is any real number? Here it is: t^r = e^{ln t^r} = e^{r ln t}. We see that the function t^r is defined only for t > 0. The function |t|^r is defined for all t ≠ 0, but what is the derivative of this function?
More generally, if f(t) is a differentiable function, then the function f(|t|) is differentiable for all t ≠ 0 (because |t| is not differentiable at t = 0). Let us define the step function: sign(t) = 1 if t > 0, and sign(t) = −1 if t < 0. Observe that
(d/dt) |t| = sign(t), for all t ≠ 0 ,
as follows by considering separately the cases t > 0 and t < 0. By the Chain Rule, we have
(13.1)  (d/dt) f(|t|) = f'(|t|) sign(t), for all t ≠ 0 .
In particular, (d/dt) |t|^r = r |t|^{r−1} sign(t). Observe also that t sign(t) = |t|, and (sign(t))^2 = 1, for all t ≠ 0.
2.13.2 The Important Class of Equations
Euler's equation has the form (y = y(t))
t^2 y'' + a t y' + b y = 0 ,
where a and b are given numbers. We look for a solution in the form y = t^r, with the constant r to be determined. Assume first that t > 0. Plugging in, we get
t^2 r(r − 1) t^{r−2} + a t r t^{r−1} + b t^r = 0 .
Dividing by the positive quantity t^r gives us the characteristic equation
(13.2)  r(r − 1) + ar + b = 0 .
This is a quadratic equation, and so there are three possibilities with respect to its roots.
Case 1 There are two real and distinct roots r1 ≠ r2. Then t^{r1} and t^{r2} are two solutions, which are not constant multiples of each other, and so the general solution (valid for t > 0) is
y(t) = c1 t^{r1} + c2 t^{r2} .
If r1 is either an integer, or a fraction with an odd denominator, then t^{r1} is also defined for t < 0. If the same is true for r2, then the above general solution is valid for all t. For other r1 or r2, this solution is not even defined for t < 0. In such a case, using the differentiation formula (13.1), we see that y(t) = |t|^{r1} gives a solution of Euler's equation which is valid for all t ≠ 0. Just plug it in, and observe that sign(t) sign(t) = 1, and t sign(t) = |t|. So, if we need a general solution valid for all t ≠ 0, we can use
y(t) = c1 |t|^{r1} + c2 |t|^{r2} .
Example t^2 y'' + 2t y' − 2y = 0.
The characteristic equation
r(r − 1) + 2r − 2 = 0
has roots r1 = −2 and r2 = 1. Solution: y(t) = c1 t^{−2} + c2 t, valid for all t ≠ 0. Another general solution valid for all t ≠ 0 is y(t) = c1 |t|^{−2} + c2 |t| = c1 t^{−2} + c2 |t|. This is a truly different solution! Why such unusual complexity? If one divides the equation by t^2, then the functions p(t) = 2/t and g(t) = −2/t^2 from our general theory are both discontinuous at t = 0. We have a singularity at t = 0, and in general, the solution y(t) is not defined at t = 0 (as we saw in the above example). However, when solving initial value problems, it does not matter which form of the general solution one takes. For example, if we prescribe some initial conditions at t = −1, then both forms of the general solution are valid only on the interval (−∞, 0), and on that interval both forms are equivalent.
We now turn to the cases of equal roots, and of complex roots. We could proceed similarly to the linear equations with constant coefficients. Instead, we make a change of the independent variable from t to a new variable s, by letting t = e^s, or s = ln t. By the chain rule
dy/dt = (dy/ds)(ds/dt) = (dy/ds)(1/t) .
Using the product rule, and then the chain rule,
d^2y/dt^2 = (d/dt)[(dy/ds)(1/t)] = (d^2y/ds^2)(ds/dt)(1/t) − (dy/ds)(1/t^2) = (d^2y/ds^2)(1/t^2) − (dy/ds)(1/t^2) .
Then Euler’s equation becomes
d2 y dy
dy
−
+ a + by = 0 .
2
ds
ds
ds
This is a linear equations with constant coefficients! We know how to solve
it, for any a and b. Let us use primes again to denote the derivatives in s.
Then we have
(13.3)
y 00 + (a − 1)y 0 + by = 0 .
Its characteristic equation
(13.4)
r 2 + (a − 1)r + b = 0
94
CHAPTER 2. SECOND ORDER EQUATIONS
is exactly the same as (13.2).
We now return to Euler's equation, and to its characteristic equation (13.2).
Case 2 r1 is a double root of the characteristic equation (13.2), i.e., r1 is a double root of (13.4). Then y = c1 e^{r1 s} + c2 s e^{r1 s} is the general solution of (13.3). Returning to the original variable t, by substituting s = ln t:
y(t) = c1 t^{r1} + c2 t^{r1} ln t .
This is the general solution of Euler's equation, valid for t > 0. More generally,
y(t) = c1 |t|^{r1} + c2 |t|^{r1} ln |t|
gives us the general solution of Euler's equation, valid for all t ≠ 0.
Case 3 p ± iq are complex roots of the characteristic equation (13.2). Then y = c1 e^{ps} sin qs + c2 e^{ps} cos qs is the general solution of (13.3). Returning to the original variable t, by substituting s = ln t, we get the general solution of Euler's equation
y(t) = c1 t^p sin(q ln t) + c2 t^p cos(q ln t) ,
valid for t > 0. Replacing t by |t| gives the general solution of Euler's equation, valid for all t ≠ 0.
Example t^2 y'' − 3t y' + 4y = 0, t > 0.
The characteristic equation r(r − 1) − 3r + 4 = 0 has a double root r = 2. The general solution: y = c1 t^2 + c2 t^2 ln t.
Example t^2 y'' − 3t y' + 4y = 0, y(1) = 4, y'(1) = 7.
Using the general solution from the preceding problem, we calculate c1 = 4 and c2 = −1. Answer: y = 4t^2 − t^2 ln t.
Example t^2 y'' + t y' + 4y = 0, y(−1) = 0, y'(−1) = 3.
The characteristic equation r(r − 1) + r + 4 = 0 has a pair of complex roots ±2i. Here p = 0, q = 2, and the general solution, valid for both positive and negative t, is
y(t) = c1 sin(2 ln |t|) + c2 cos(2 ln |t|) .
From the first initial condition, y(−1) = c2 = 0, so that y(t) = c1 sin(2 ln |t|). To use the second initial condition, we need to differentiate y(t) at t = −1. Observe that for negative t, |t| = −t, and so y(t) = c1 sin(2 ln(−t)). Then y'(t) = c1 cos(2 ln(−t)) (2/t), and y'(−1) = −2c1 = 3, i.e., c1 = −3/2. Answer: y(t) = −(3/2) sin(2 ln |t|).
Example t^2 y'' − 3t y' + 4y = t − 2.
This is a non-homogeneous equation. We look for a particular solution in the form Y = At + B. Plugging this in, we compute Y = t − 1/2. The fundamental set of the corresponding homogeneous equation is given by t^2 and t^2 ln t, as we saw above. The general solution is y = t − 1/2 + c1 t^2 + c2 t^2 ln t.
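As a check of the last example (a SymPy sketch of mine, written for t > 0):

import sympy as sp

t = sp.symbols('t', positive=True)
c1, c2 = sp.symbols('c1 c2')
y = t - sp.Rational(1, 2) + c1*t**2 + c2*t**2*sp.log(t)
lhs = t**2*sp.diff(y, t, 2) - 3*t*sp.diff(y, t) + 4*y
print(sp.simplify(lhs - (t - 2)))   # 0, for any c1 and c2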
2.14 Linear Equations of Order Higher Than Two
2.14.1 The Polar Form of Complex Numbers
For a complex number x + iy, one can use the point (x, y) to represent it. This turns the usual plane into the complex plane. The point (x, y) can also be identified by its polar coordinates (r, θ). We shall always take r > 0. Then
z = x + iy = r cos θ + i r sin θ = r(cos θ + i sin θ)
gives us the polar form of a complex number z. Using Euler's formula, we can also write z = r e^{iθ}. For example, −2i = 2e^{i 3π/2}, because the point (0, −2) has polar coordinates (2, 3π/2). Similarly, 1 + i = √2 e^{i π/4}, and −1 = e^{iπ} (real numbers are just a particular case of complex ones). There are infinitely many ways to represent complex numbers using polar coordinates: z = r e^{i(θ + 2πm)}, where m is any integer (positive or negative). We now compute the n-th root(s) of z:
z^{1/n} = r^{1/n} e^{i(θ/n + 2πm/n)}, m = 0, 1, . . . , n − 1 .
Here r^{1/n} is the positive n-th root of the positive number r. (The "high school" n-th root.) As m varies from 0 to n − 1, we get different answers, and then the roots repeat themselves. There are n complex n-th roots of any complex number (and in particular, of any real number). All of the roots lie on the circle of radius r^{1/n}, and the difference in polar angle between any two neighbors is 2π/n.
Example Solve the equation: z^4 + 16 = 0.
We need the four complex roots of −16 = 16 e^{i(π + 2πm)}. We have
(−16)^{1/4} = 2 e^{i(π/4 + πm/2)}, m = 0, 1, 2, 3 .
When m = 0, we have 2e^{iπ/4} = 2(cos π/4 + i sin π/4) = √2 + i√2. We compute the other roots similarly. They come in two complex conjugate pairs: √2 ± i√2 and −√2 ± i√2. In the complex plane, they all lie on the circle of radius 2, and the difference in polar angle between any two neighbors is π/2. (Figure: the four complex fourth roots of −16 on the circle of radius 2.)
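Numerically, the four roots can be cross-checked with NumPy (a sketch of mine):

import numpy as np

roots = np.roots([1, 0, 0, 0, 16])   # coefficients of z^4 + 16
print(np.sort_complex(roots))        # ±sqrt(2) ± i sqrt(2)
print(np.abs(roots))                 # all four have modulus 2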
Example Solve the equation: z^3 + 8 = 0.
We need the three complex roots of −8. One of them is z1 = −2 = 2e^{iπ}, and the other two lie on the circle of radius 2, at an angle 2π/3 away, i.e., z2 = 2e^{iπ/3} = 1 + √3 i, and z3 = 2e^{−iπ/3} = 1 − √3 i. (Alternatively, the root z = −2 is easy to guess. Then, using long division, one can factor z^3 + 8 = (z + 2)(z^2 − 2z + 4), and set the second factor to zero, to find the other two roots.)
2.14.2 Linear Homogeneous Equations
Let us consider the fourth order equation
(14.1)  a0 y'''' + a1 y''' + a2 y'' + a3 y' + a4 y = 0 ,
with given numbers a0, a1, a2, a3 and a4. Equations of even higher orders can be considered similarly. Again, we search for a solution in the form y(t) = e^{rt}, with a constant r to be determined. Plugging in, and dividing by the positive exponential e^{rt}, we obtain the characteristic equation
(14.2)  a0 r^4 + a1 r^3 + a2 r^2 + a3 r + a4 = 0 .
If we have an equation of order n,
(14.3)  a0 y^{(n)} + a1 y^{(n−1)} + . . . + a_{n−1} y' + a_n y = 0 ,
then the corresponding characteristic equation is
a0 r^n + a1 r^{n−1} + . . . + a_{n−1} r + a_n = 0 .
The Fundamental Theorem of Algebra says that any polynomial of degree n has n roots in the complex plane, counted according to their multiplicity (i.e., a double root is counted as two roots, and so on). The characteristic equation (14.2) has four complex roots.
The theory is similar to the second order case. We need four different solutions of (14.1), i.e., every solution is not a linear combination of the other three (for the equation (14.3) we need n different solutions). Every root of the characteristic equation must "pull its weight". If the root is simple, it brings in one solution; if it is repeated twice, then two solutions. (Three solutions, if the root is repeated three times, and so on.) The following cases may occur for the n-th order equation (14.3).
Case 1 r1 is a simple real root. Then it brings e^{r1 t} into the fundamental set.
Case 2 r1 is a real root repeated s times. Then it brings the following s solutions into the fundamental set: e^{r1 t}, t e^{r1 t}, . . . , t^{s−1} e^{r1 t}.
Case 3 p + iq and p − iq are simple complex roots. They contribute e^{pt} cos qt and e^{pt} sin qt.
Case 4 p + iq and p − iq are repeated s times each. They bring the following 2s solutions into the fundamental set: e^{pt} cos qt and e^{pt} sin qt, t e^{pt} cos qt and t e^{pt} sin qt, . . . , t^{s−1} e^{pt} cos qt and t^{s−1} e^{pt} sin qt.
Example y'''' − y = 0. The characteristic equation is
r^4 − 1 = 0 .
We solve it by factoring:
(r − 1)(r + 1)(r^2 + 1) = 0 .
The roots are −1, 1, −i, i. The general solution: y(t) = c1 e^{−t} + c2 e^t + c3 cos t + c4 sin t.
Example y''' − 3y'' + 3y' − y = 0. The characteristic equation is
r^3 − 3r^2 + 3r − 1 = 0 .
This is a cubic equation. You probably did not study how to solve it by Cardano's formula. Fortunately, you may recognize that the quantity on the left is an exact cube:
(r − 1)^3 = 0 .
The root r = 1 is repeated 3 times. The general solution: y(t) = c1 e^t + c2 t e^t + c3 t^2 e^t.
Let us suppose you did not know the formula for the cube of a difference. Then one can guess that r = 1 is a root. This means that the cubic polynomial can be factored, with one factor being r − 1. The other factor is then found by long division; it is a quadratic polynomial, whose roots are easy to find.
Example y''' − y'' + 3y' + 5y = 0. The characteristic equation is
r^3 − r^2 + 3r + 5 = 0 .
We need to guess a root. The procedure for guessing a root is a simple one: try r = 0, r = ±1, r = ±2, and then give up. We see that r = −1 is a root. It follows that the first factor is r + 1, and the second one is found by long division:
(r + 1)(r^2 − 2r + 5) = 0 .
The roots of the quadratic give us r2 = 1 − 2i and r3 = 1 + 2i. The general solution: y(t) = c1 e^{−t} + c2 e^t cos 2t + c3 e^t sin 2t.
Example y'''' + 2y'' + y = 0. The characteristic equation is
r^4 + 2r^2 + 1 = 0 .
It can be solved by factoring:
(r^2 + 1)^2 = 0 .
(Or one can set z = r^2, and obtain a quadratic equation for z.) The roots are −i, i, each repeated twice. The general solution: y(t) = c1 cos t + c2 sin t + c3 t cos t + c4 t sin t.
Example y'''' + 16y = 0. The characteristic equation is
r^4 + 16 = 0 .
Its roots are the four complex roots of −16, computed earlier, i.e., √2 ± i√2 and −√2 ± i√2. The general solution:
y(t) = c1 e^{√2 t} cos(√2 t) + c2 e^{√2 t} sin(√2 t) + c3 e^{−√2 t} cos(√2 t) + c4 e^{−√2 t} sin(√2 t) .
Example y^{(5)} + 9y''' = 0. The characteristic equation is
r^5 + 9r^3 = 0 .
Factoring r^3(r^2 + 9) = 0, we see that the roots are 0, 0, 0, −3i, 3i. The general solution: y(t) = c1 + c2 t + c3 t^2 + c4 cos 3t + c5 sin 3t.
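The last answer is easy to confirm symbolically (a SymPy sketch of mine):

import sympy as sp

t, c1, c2, c3, c4, c5 = sp.symbols('t c1 c2 c3 c4 c5')
y = c1 + c2*t + c3*t**2 + c4*sp.cos(3*t) + c5*sp.sin(3*t)
print(sp.simplify(sp.diff(y, t, 5) + 9*sp.diff(y, t, 3)))   # 0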
2.14.3 Non-Homogeneous Equations
The theory is parallel to the second order case. Again, we need a particular solution.
Example y^{(5)} + 9y''' = 3t − sin 2t.
We have just found the general solution of the corresponding homogeneous equation. We produce a particular solution in the form Y(t) = Y1(t) + Y2(t), where Y1(t) is a particular solution of y^{(5)} + 9y''' = 3t, and Y2(t) is a particular solution of y^{(5)} + 9y''' = −sin 2t. We guess that Y1(t) = At^4, and compute A = 1/72; and that Y2(t) = B cos 2t, with B = −1/40. We have Y(t) = (1/72) t^4 − (1/40) cos 2t.
Answer: y(t) = (1/72) t^4 − (1/40) cos 2t + c1 + c2 t + c3 t^2 + c4 cos 3t + c5 sin 3t.
2.14.4 Problems
I. Solve the non-homogeneous equations with a discontinuous forcing function.
1. y'' + 9y = f(t), where f(t) = 0 for t < π and f(t) = t for t > π; y(0) = 0, y'(0) = −2.
Answer:
y(t) = −(2/3) sin 3t, if t ≤ π ;
y(t) = (1/9) t + (π/9) cos 3t − (17/27) sin 3t, if t > π .
2. y'' + y = f(t), where f(t) = 0 for t < π and f(t) = t for t > π; y(0) = 2, y'(0) = 0.
II. Find the general solution, valid for t > 0.
1. t^2 y'' − 2t y' + 2y = 0.  Ans. y = c1 t + c2 t^2.
2. t^2 y'' + t y' + 4y = 0.  Ans. y = c1 cos(2 ln t) + c2 sin(2 ln t).
3. t^2 y'' + 5t y' + 4y = 0.  Ans. y = c1 t^{−2} + c2 t^{−2} ln t.
4. t^2 y'' + 5t y' + 5y = 0.  Ans. y = c1 t^{−2} cos(ln t) + c2 t^{−2} sin(ln t).
5. t^2 y'' − 3t y' = 0.  Ans. y = c1 + c2 t^4.
6. y'' + (1/4) t^{−2} y = 0.  Ans. y = c1 √t + c2 √t ln t.
III. Find the general solution, valid for all t ≠ 0.
1. t^2 y'' + t y' + 4y = 0.  Ans. y = c1 cos(2 ln |t|) + c2 sin(2 ln |t|).
2. 2t^2 y'' − t y' + y = 0.  Ans. y = c1 √|t| + c2 t.
3. 2t^2 y'' − t y' + y = t^2 − 3.
Hint: look for a particular solution in the form Y = At^2 + Bt + C.
Ans. y = (1/3) t^2 − 3 + c1 √|t| + c2 t.
4. 2t^2 y'' − t y' + y = t^3. Hint: look for a particular solution in the form Y = At^3.
Ans. y = (1/10) t^3 + c1 √|t| + c2 t.
IV. Solve the initial value problems.
1. t^2 y'' − 2t y' + 2y = 0, y(1) = 2, y'(1) = 5.  Ans. y = −t + 3t^2.
2. t^2 y'' − 3t y' + 4y = 0, y(−1) = 1, y'(−1) = 2.  Ans. y = t^2 − 4t^2 ln |t|.
3. t^2 y'' + 3t y' − 3y = 0, y(−1) = 1, y'(−1) = 2.  Ans. y = −(3/4) t^{−3} − (1/4) t.
4. t^2 y'' − t y' + 5y = 0, y(1) = 0, y'(1) = 2.  Ans. y = t sin(2 ln t).
V. Solve the polynomial equations.
1. r^3 − 1 = 0.  Ans. The roots are 1, e^{i 2π/3}, e^{i 4π/3}.
2. r^3 + 27 = 0.  Ans. −3, 3/2 − (3√3/2) i, 3/2 + (3√3/2) i.
3. r^4 − 16 = 0.  Ans. ±2 and ±2i.
4. r^3 − 3r^2 + r + 1 = 0.  Ans. 1, 1 − √2, 1 + √2.
5. r^4 + 1 = 0.  Ans. e^{i π/4}, e^{i 3π/4}, e^{i 5π/4}, e^{i 7π/4}.
6. r^4 + 4 = 0.  Ans. 1 + i, 1 − i, −1 + i, −1 − i.
7. r^4 + 8r^2 + 16 = 0.  Ans. 2i and −2i are both double roots.
8. r^4 + 5r^2 + 4 = 0.  Ans. ±i and ±2i.
VI. Find the general solution.
1. y''' − y = 0.  Ans. y = c1 e^t + c2 e^{−t/2} cos(√3 t/2) + c3 e^{−t/2} sin(√3 t/2).
2. y''' − 3y'' + y' + y = 0.  Ans. y = c1 e^t + c2 e^{(1−√2)t} + c3 e^{(1+√2)t}.
3. y^{(4)} − 8y'' + 16y = 0.  Ans. y = c1 e^{−2t} + c2 e^{2t} + c3 t e^{−2t} + c4 t e^{2t}.
4. y^{(4)} + 8y'' + 16y = 0.  Ans. y = c1 cos 2t + c2 sin 2t + c3 t cos 2t + c4 t sin 2t.
5. y^{(4)} + y = 0.
Ans. y = c1 e^{t/√2} cos(t/√2) + c2 e^{t/√2} sin(t/√2) + c3 e^{−t/√2} cos(t/√2) + c4 e^{−t/√2} sin(t/√2).
6. y''' − y = t^2.  Ans. y = −t^2 + c1 e^t + c2 e^{−t/2} cos(√3 t/2) + c3 e^{−t/2} sin(√3 t/2).
7. y^{(6)} − y'' = 0.  Ans. y = c1 + c2 t + c3 e^{−t} + c4 e^t + c5 cos t + c6 sin t.
8. y^{(8)} − y^{(6)} = sin t.
Ans. y = (1/2) sin t + c1 + c2 t + c3 t^2 + c4 t^3 + c5 t^4 + c6 t^5 + c7 e^{−t} + c8 e^t.
9. y'''' + 4y = 4t^2 − 1.  Ans. y = t^2 − 1/4 + c1 e^t cos t + c2 e^t sin t + c3 e^{−t} cos t + c4 e^{−t} sin t.
VII. Solve the following initial value problems.
1. y''' + 4y' = 0, y(0) = 1, y'(0) = −1, y''(0) = 2.
Ans. y = 3/2 − (1/2) cos 2t − (1/2) sin 2t.
2. y'''' + 4y = 0, y(0) = 1, y'(0) = −1, y''(0) = 2, y'''(0) = 3.
Ans. y = −(1/8) e^t (cos t − 5 sin t) + (3/8) e^{−t} (3 cos t − sin t).
3. y''' + 8y = 0, y(0) = 0, y'(0) = 1, y''(0) = −2.
Ans. y = −(1/3) e^{−2t} + (1/3) e^t cos √3 t.
VIII.
1. Find a linear differential equation of the lowest possible order which has the following functions as its solutions: 1, e^{−2t}, sin t.
Ans. y'''' + 2y''' + y'' + 2y' = 0.
2. Solve the following nonlinear equation:
2y' y''' − 3 (y'')^2 = 0 .
Hint: Write it as
y'''/y'' = (3/2) (y''/y') ,
then integrate, to get
y'' = c1 (y')^{3/2} .
Let y' = v, and obtain a first order equation.
Ans. y = 1/(c1 t + c2) + c3, and also y = c4 t + c5.
Chapter 3
Using Infinite Series to Solve Differential Equations
3.1 Series Solution Near a Regular Point
3.1.1 Maclaurin and Taylor Series
Infinitely differentiable functions can often be represented by a series
(1.1)  f(x) = a0 + a1 x + a2 x^2 + · · · + an x^n + · · · .
Letting x = 0, we see that a0 = f(0). If we differentiate (1.1), and then let x = 0, we have a1 = f'(0). Differentiating (1.1) twice, and then letting x = 0, gives a2 = f''(0)/2. Continuing this way, we see that an = f^{(n)}(0)/n!, giving us the Maclaurin series
(1.2)  f(x) = f(0) + f'(0) x + (f''(0)/2!) x^2 + · · · + (f^{(n)}(0)/n!) x^n + · · · = Σ_{n=0}^∞ (f^{(n)}(0)/n!) x^n .
It turns out there is a number R, such that the Maclaurin series converges inside the interval (−R, R), and diverges when |x| > R. R is called the radius of convergence. For some f(x) we have R = ∞ (for example, for sin x, cos x, e^x), while for some series R = 0; in general, 0 ≤ R ≤ ∞.
When we apply (1.2) to some specific functions, we get:
sin x = x − x^3/3! + x^5/5! − x^7/7! + · · · = Σ_{n=0}^∞ (−1)^n x^{2n+1}/(2n + 1)! ;
cos x = 1 − x^2/2! + x^4/4! − x^6/6! + · · · = Σ_{n=0}^∞ (−1)^n x^{2n}/(2n)! ;
e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + · · · = Σ_{n=0}^∞ x^n/n! ;
1/(1 − x) = 1 + x + x^2 + · · · + x^n + · · · .
The last series, called the geometric series, converges on the interval (−1, 1), i.e., R = 1.
The Maclaurin series gives an approximation of f(x), for x close to zero. For example, sin x ≈ x gives a reasonably good approximation for |x| small. If we add one more term of the Maclaurin series, sin x ≈ x − x^3/6, then, say on the interval (−0.3, 0.3), this approximation is excellent.
If one needs a Maclaurin series for sin x^2, one begins with the series for sin x, and replaces each x by x^2, obtaining
sin x^2 = x^2 − x^6/3! + x^{10}/5! − · · · = Σ_{n=0}^∞ (−1)^n x^{4n+2}/(2n + 1)! .
One can split a Maclaurin series into even and odd powers, writing it in the form
Σ_{n=0}^∞ (f^{(n)}(0)/n!) x^n = Σ_{n=0}^∞ (f^{(2n)}(0)/(2n)!) x^{2n} + Σ_{n=0}^∞ (f^{(2n+1)}(0)/(2n + 1)!) x^{2n+1} .
In the following series only the odd powers have non-zero coefficients:
Σ_{n=1}^∞ ((1 − (−1)^n)/n) x^n = 2x + (2/3) x^3 + (2/5) x^5 + · · · = Σ_{n=0}^∞ (2/(2n + 1)) x^{2n+1} = 2 Σ_{n=0}^∞ (1/(2n + 1)) x^{2n+1} .
All of the series above were centered at 0. We can replace zero by any number a, obtaining the Taylor series
f(x) = f(a) + f'(a)(x − a) + (f''(a)/2!)(x − a)^2 + · · · + (f^{(n)}(a)/n!)(x − a)^n + · · · = Σ_{n=0}^∞ (f^{(n)}(a)/n!)(x − a)^n .
It converges on an interval (a − R, a + R), centered at a. The radius of convergence satisfies 0 ≤ R ≤ ∞, as before. The Taylor series allows us to approximate f(x) for x close to a. For example, one can usually expect (although it is not always true) that f(x) ≈ f(2) + f'(2)(x − 2) + (f''(2)/2!)(x − 2)^2 for x close to 2, say for 1.8 < x < 2.2.
3.1.2 A Toy Problem
Let us begin with the equation (here y = y(x))
y'' + y = 0 ,
for which we know the general solution. Let us denote by y1(x) the solution of the initial value problem
(1.3)  y'' + y = 0, y(0) = 1, y'(0) = 0 .
By y2(x) we denote the solution of the same equation, together with the initial conditions y(0) = 0, y'(0) = 1. Clearly, y1(x) and y2(x) are not constant multiples of each other. Therefore, they form a fundamental set, giving us the general solution y(x) = c1 y1(x) + c2 y2(x).
Let us now compute y1(x), the solution of (1.3). From the initial conditions, we already know the first two terms of its Maclaurin series
y(x) = y(0) + y'(0) x + (y''(0)/2!) x^2 + · · · + (y^{(n)}(0)/n!) x^n + · · · .
To get more terms, we need to compute the derivatives of y(x) at zero. From the equation (1.3), y''(0) = −y(0) = −1. We now differentiate the equation (1.3): y''' + y' = 0, and then set x = 0: y'''(0) = −y'(0) = 0. Differentiating again, we have y''''(0) = −y''(0) = 1. On the next step, y^{(5)}(0) = −y'''(0) = 0. We see that all derivatives of odd order vanish, while derivatives of even order alternate between 1 and −1. We write down the Maclaurin series:
y1(x) = 1 − x^2/2! + x^4/4! − · · · = cos x .
Similarly, we compute the series for y2(x):
y2(x) = x − x^3/3! + x^5/5! − · · · = sin x .
We shall solve equations with variable coefficients,
(1.4)  P(x) y'' + Q(x) y' + R(x) y = 0 ,
where the continuous functions P(x), Q(x) and R(x) are given. We shall always denote by y1(x) the solution of (1.4) satisfying the initial conditions y(0) = 1, y'(0) = 0, and by y2(x) the solution of (1.4) satisfying the initial conditions y(0) = 0, y'(0) = 1. If one needs to solve (1.4), together with the given initial conditions
y(0) = α, y'(0) = β ,
then the solution is
y(x) = α y1(x) + β y2(x) .
3.1.3 Using Series When Other Methods Fail
Let us try to find the general solution of the equation
y'' + x y' + 2y = 0 .
This equation has variable coefficients, and none of the previous methods apply here.
We shall derive a formula for y^{(n)}(0), and use it to calculate y1(x) and y2(x). From the equation we have y''(0) = −2y(0). We now differentiate the equation:
y''' + x y'' + 3y' = 0 ,
which gives y'''(0) = −3y'(0). Differentiate the equation again:
y'''' + x y''' + 4y'' = 0 ,
and get y''''(0) = −4y''(0). It is clear that, in general, we get the recursive relation
y^{(n)}(0) = −n y^{(n−2)}(0), n = 3, 4, . . . .
Let us begin with the computation of y2(x), for which we use the initial conditions y(0) = 0, y'(0) = 1. Then from the recursive relation:
y''(0) = −2y(0) = 0;
y'''(0) = −3y'(0) = −3 · 1;
y''''(0) = −4y''(0) = 0.
It is clear that all derivatives of even order are zero. Let us continue with the derivatives of odd order:
y^{(5)}(0) = −5y'''(0) = (−1)^2 5 · 3 · 1;
y^{(7)}(0) = −7y^{(5)}(0) = (−1)^3 7 · 5 · 3 · 1;
and in general,
y^{(2n+1)}(0) = (−1)^n (2n + 1)(2n − 1) · · · 3 · 1 .
Then the Maclaurin series for y2(x) is
y2(x) = Σ_{n=0}^∞ (y^{(n)}(0)/n!) x^n = x + Σ_{n=1}^∞ (−1)^n ((2n + 1)(2n − 1) · · · 3 · 1/(2n + 1)!) x^{2n+1} = x + Σ_{n=1}^∞ (−1)^n (1/(2n (2n − 2) · · · 4 · 2)) x^{2n+1} .
One can also write this solution as y2(x) = Σ_{n=0}^∞ (−1)^n (1/(2^n n!)) x^{2n+1} .
To compute y1(x), we use the initial conditions y(0) = 1, y'(0) = 0. As before, we see from the recursive relation that all derivatives of odd order vanish, while the even ones satisfy
y^{(2n)}(0) = (−1)^n 2n (2n − 2) · · · 4 · 2, for n = 1, 2, 3, . . . .
This leads to y1(x) = 1 + Σ_{n=1}^∞ (−1)^n (1/((2n − 1)(2n − 3) · · · 3 · 1)) x^{2n}. The general solution:
y(x) = c1 [ 1 + Σ_{n=1}^∞ (−1)^n (1/((2n − 1)(2n − 3) · · · 3 · 1)) x^{2n} ] + c2 Σ_{n=0}^∞ (−1)^n (1/(2^n n!)) x^{2n+1} .
We shall need a formula for repeated differentiation of a product of two functions. Starting with the product rule (fg)' = f'g + fg', we have
(fg)'' = f''g + 2f'g' + fg'';
(fg)''' = f'''g + 3f''g' + 3f'g'' + fg''',
and in general
(1.5)  (fg)^{(n)} = f^{(n)} g + n f^{(n−1)} g' + (n(n−1)/2) f^{(n−2)} g'' + · · · + (n(n−1)/2) f'' g^{(n−2)} + n f' g^{(n−1)} + f g^{(n)} .
This formula is similar to the binomial formula for the expansion of (x + y)^n. Using summation notation, we can write it as
(fg)^{(n)} = Σ_{k=0}^n C(n, k) f^{(n−k)} g^{(k)} ,
where C(n, k) = n!/(k!(n − k)!) are the binomial coefficients. (Convention: f^{(0)} = f.) The formula (1.5) simplifies considerably in case f(x) = x, or f(x) = x^2:
(x g)^{(n)} = n g^{(n−1)} + x g^{(n)} ;
(x^2 g)^{(n)} = n(n − 1) g^{(n−2)} + 2n x g^{(n−1)} + x^2 g^{(n)} .
We shall use series to solve linear second order equations with variable coefficients,
P(x) y'' + Q(x) y' + R(x) y = 0 ,
where the continuous functions P(x), Q(x) and R(x) are given. A number a is called a regular point if P(a) ≠ 0. If P(a) = 0, then x = a is a singular point.
Example (2 + x^2) y'' − x y' + 4y = 0. For this equation any a is a regular point. Let us find a series solution centered at a = 0, i.e., the Maclaurin series for the solution y(x). We differentiate both sides of the equation n times. When we use the formula (1.5) to differentiate the first term, only three terms are non-zero, because the derivatives of order three and higher of 2 + x^2 are zero. When we differentiate x y', only two terms survive. We have
(n(n − 1)/2) 2 y^{(n)} + n (2x) y^{(n+1)} + (2 + x^2) y^{(n+2)} − n y^{(n)} − x y^{(n+1)} + 4 y^{(n)} = 0 .
We set here x = 0. Several terms vanish. Combining the like terms:
2 y^{(n+2)}(0) + (n^2 − 2n + 4) y^{(n)}(0) = 0 ,
which gives us the recursive relation
y^{(n+2)}(0) = −((n^2 − 2n + 4)/2) y^{(n)}(0) .
This relation is too involved to yield a neat formula for y^{(n)}(0) as a function of n. However, it can be used to crank out the derivatives at zero, as many as you wish. To compute y1(x), we use the initial conditions y(0) = 1 and y'(0) = 0. We see from the recursive relation that all derivatives of odd order are zero. Setting n = 0 in the recursive relation, we have
y''(0) = −2y(0) = −2 .
When n = 2:
y''''(0) = −2y''(0) = 4 .
Using these derivatives in the Maclaurin series, we have
y1(x) = 1 − x^2 + (1/6) x^4 + · · · .
To compute y2(x), we use the initial conditions y(0) = 0 and y'(0) = 1. We see from the recursive relation that all derivatives of even order are zero, while for n = 1 we get
y'''(0) = −(3/2) y'(0) = −3/2 .
Setting n = 3, we have
y^{(5)}(0) = −(7/2) y'''(0) = 21/4 .
Using these derivatives in the Maclaurin series, we conclude
y2(x) = x − (1/4) x^3 + (7/160) x^5 + · · · .
The general solution:
y(x) = c1 y1(x) + c2 y2(x) = c1 (1 − x^2 + (1/6) x^4 + · · ·) + c2 (x − (1/4) x^3 + (7/160) x^5 + · · ·) .
Suppose we wish to solve the above equation together with the initial conditions y(0) = −2, y'(0) = 3. Then y(0) = c1 y1(0) + c2 y2(0) = c1 = −2, and y'(0) = c1 y1'(0) + c2 y2'(0) = c2 = 3. It follows that y(x) = −2y1(x) + 3y2(x). If we need to approximate y(x) near x = 0, say on the interval (−0.3, 0.3), then
y(x) ≈ −2 (1 − x^2 + (1/6) x^4) + 3 (x − (1/4) x^3 + (7/160) x^5)
will provide an excellent approximation.
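The recursive relation is well suited for a computer: it can crank out as many Maclaurin coefficients y^{(n)}(0)/n! as desired. A sketch (mine), in exact arithmetic:

from fractions import Fraction
from math import factorial

def maclaurin_coeffs(y0, dy0, N=6):
    # d[n] holds y^(n)(0) for (2 + x^2) y'' - x y' + 4 y = 0
    d = [Fraction(0)]*(N + 1)
    d[0], d[1] = Fraction(y0), Fraction(dy0)
    for n in range(N - 1):
        d[n + 2] = -Fraction(n*n - 2*n + 4, 2)*d[n]
    return [d[n]/factorial(n) for n in range(N + 1)]

print(maclaurin_coeffs(1, 0))   # y1: 1, 0, -1, 0, 1/6, ...
print(maclaurin_coeffs(0, 1))   # y2: 0, 1, 0, -1/4, 0, 7/160, ...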
Example Approximate the general solution of Airy's equation
y'' − x y = 0
near x = 1.
We need to compute the Taylor series about a = 1: y(x) = Σ_{n=0}^∞ (y^{(n)}(1)/n!)(x − 1)^n. From the equation, y''(1) = y(1). To get higher derivatives, we differentiate the equation n times, and then set x = 1:
y^{(n+2)} − n y^{(n−1)} − x y^{(n)} = 0;
y^{(n+2)}(1) = n y^{(n−1)}(1) + y^{(n)}(1), n = 1, 2, . . . .
To compute y1(x), we use the initial conditions y(1) = 1, y'(1) = 0. Then y''(1) = y(1) = 1. Setting n = 1 in the recursive relation: y^{(3)}(1) = y(1) + y'(1) = 1. When n = 2, y^{(4)}(1) = 2y'(1) + y''(1) = 1. Then y^{(5)}(1) = 3y''(1) + y'''(1) = 4. We have
y1(x) = 1 + (x − 1)^2/2 + (x − 1)^3/6 + (x − 1)^4/24 + (x − 1)^5/30 + · · · .
To compute y2(x), we use the initial conditions y(1) = 0, y'(1) = 1. Then y''(1) = y(1) = 0. Setting n = 1 in the recursive relation: y^{(3)}(1) = y(1) + y'(1) = 1. When n = 2, y^{(4)}(1) = 2y'(1) + y''(1) = 2. Then y^{(5)}(1) = 3y''(1) + y'''(1) = 1. We have
y2(x) = x − 1 + (x − 1)^3/6 + (x − 1)^4/12 + (x − 1)^5/120 + · · · .
3.2 Solution Near a Mildly Singular Point
Again we consider the equation
P(x) y'' + Q(x) y' + R(x) y = 0 ,
with given functions P, Q and R that are continuous near a point a, at which we want to compute a series solution y(x) = Σ_{n=0}^∞ (y^{(n)}(a)/n!)(x − a)^n. If P(a) = 0, we have a problem: we cannot compute y''(a) from the equation (and we have the same problem for higher derivatives). However, if a is a simple root of P(x), it turns out that we can proceed much as before. Namely, we assume that P(x) = (x − a) P1(x), with P1(a) ≠ 0. Dividing the equation by P1(x), and calling q(x) = Q(x)/P1(x), r(x) = R(x)/P1(x), we put it into the form
(x − a) y'' + q(x) y' + r(x) y = 0 .
The functions q(x) and r(x) are continuous near a. In case a = 0, we have
(2.1)  x y'' + q(x) y' + r(x) y = 0 .
For this equation we cannot expect to obtain two linearly independent solutions by prescribing y1(0) = 1, y1'(0) = 0 or y2(0) = 0, y2'(0) = 1, the way we did before, because the equation is singular at x = 0 (the functions q(x)/x and r(x)/x are discontinuous at x = 0, and so the existence and uniqueness theorem does not apply).
Example Let us try to solve: x y'' − y' = 0, y(0) = 0, y'(0) = 1.
Multiplying through by x, we obtain an Euler equation, which we solve to get the general solution y(x) = c1 x^2 + c2. Here y'(0) = 0, so the initial value problem has no solution. However, if we change the initial conditions, and consider the problem x y'' − y' = 0, y(0) = 1, y'(0) = 0, then we have infinitely many solutions y = 1 + c1 x^2.
We therefore lower our expectations, and will be satisfied to compute just one series solution of (2.1). It turns out that in most cases it is possible to calculate a series solution of the form Σ_{n=0}^∞ an x^n, starting with a0 = 1.
Example Find a series solution of x y'' + 3y' − 2y = 0.
It is convenient to multiply the equation by x:
x^2 y'' + 3x y' − 2x y = 0 .
We let y = Σ_{n=0}^∞ an x^n. Then y' = Σ_{n=1}^∞ an n x^{n−1} and y'' = Σ_{n=2}^∞ an n(n − 1) x^{n−2}. Observe that each differentiation "kills" a term. Plugging this series into the equation:
Σ_{n=2}^∞ an n(n − 1) x^n + Σ_{n=1}^∞ 3an n x^n − Σ_{n=0}^∞ 2an x^{n+1} = 0 .
The third series is not "lined up" with the other two. We therefore replace n by n − 1 in that series, obtaining
Σ_{n=0}^∞ 2an x^{n+1} = Σ_{n=1}^∞ 2a_{n−1} x^n .
We have
(2.2)  Σ_{n=2}^∞ an n(n − 1) x^n + Σ_{n=1}^∞ 3an n x^n − Σ_{n=1}^∞ 2a_{n−1} x^n = 0 .
We shall use the following fact: if Σ_{n=1}^∞ bn x^n = 0 for all x, then bn = 0 for all n = 1, 2, . . .. We now combine the three series in (2.2) into one, so that we can set all of the resulting coefficients to zero. The x term is present in the second and the third series, but not in the first. However, we can start the first series at n = 1, because at n = 1 its coefficient is zero. I.e., we have
Σ_{n=1}^∞ an n(n − 1) x^n + Σ_{n=1}^∞ 3an n x^n − Σ_{n=1}^∞ 2a_{n−1} x^n = 0 .
Now for all n ≥ 1 the x^n term is present in all three series, so that we can combine these series into one. We therefore just "lift" the coefficients:
an n(n − 1) + 3an n − 2a_{n−1} = 0 .
We solve for an:
an = (2/(n(n + 2))) a_{n−1}, n ≥ 1 .
Starting with a0 = 1, we compute a1 = 2/(1 · 3), a2 = (2/(2 · 4)) a1 = 2^2/((1 · 2)(3 · 4)) = 2^3/(2! 4!), a3 = (2/(3 · 5)) a2 = 2^4/(3! 5!), . . . , an = 2^{n+1}/(n! (n + 2)!).
Answer: y(x) = 1 + Σ_{n=1}^∞ (2^{n+1}/(n! (n + 2)!)) x^n = Σ_{n=0}^∞ (2^{n+1}/(n! (n + 2)!)) x^n .
Example Find a series solution of x y'' − 3y' − 2y = 0.
This is a small modification of the preceding problem, so we quickly see that the recurrence relation takes the form
(2.3)  an = (2/(n(n − 4))) a_{n−1}, n ≥ 1 .
If we start with a0 = 1, and proceed as before, then at n = 4 the denominator is zero, and the computation stops! To avoid the trouble at n = 4, we look for a solution in the form y = Σ_{n=4}^∞ an x^n, starting with a4 = 1, and using the above recurrence relation for n ≥ 5. (Plugging y = Σ_{n=4}^∞ an x^n into the equation shows that an must satisfy (2.3).) Compute: a5 = (2/(5 · 1)) a4 = 2/(5 · 1), a6 = (2/(6 · 2)) a5 = 2^2/(6 · 5 · 2 · 1) = 24 · 2^2/(6! 2!), . . . , an = 24 · 2^{n−4}/(n! (n − 4)!).
Answer: y(x) = x^4 + 24 Σ_{n=5}^∞ (2^{n−4}/(n! (n − 4)!)) x^n .
Our experience with the previous two problems can be summarized as follows.
Theorem 8 Consider the problem (2.1). If q(0) is not a non-positive integer (i.e., q(0) is not equal to 0, −1, −2, . . .), one can find a series solution of the form y = Σ_{n=0}^∞ an x^n, starting with a0 = 1. In case q(0) = −k, where k is a non-negative integer, one can find a series solution of the form y = Σ_{n=k+1}^∞ an x^n, starting with a_{k+1} = 1.
Example x^2 y'' + x y' + (x^2 − ν^2) y = 0.
This is the Bessel equation, of great importance in Mathematical Physics! It depends on a real parameter ν. It is also called Bessel's equation of order ν, and its solutions are called Bessel's functions of order ν. We see that a = 0 is not a mildly singular point, for ν ≠ 0. (Zero is a double root of x^2.) But in case ν = 0, we can cancel x, putting Bessel's equation of order zero into the form
(2.4)  x y'' + y' + x y = 0 ,
where a = 0 is a mildly singular point. We now find the series solution, centered at a = 0, i.e., the Maclaurin series for y(x).
We put the equation back into the form
x^2 y'' + x y' + x^2 y = 0 ,
and look for a solution in the form Σ_{n=0}^∞ an x^n. Plug this in:
Σ_{n=2}^∞ an n(n − 1) x^n + Σ_{n=1}^∞ an n x^n + Σ_{n=0}^∞ an x^{n+2} = 0 .
In the last series we replace n by n − 2:
Σ_{n=2}^∞ an n(n − 1) x^n + Σ_{n=1}^∞ an n x^n + Σ_{n=2}^∞ a_{n−2} x^n = 0 .
None of the series has a constant term. The x term is present only in the second series. Its coefficient is a1, and so
a1 = 0 .
The terms x^n, starting with n = 2, are present in all series, so that
an n(n − 1) + an n + a_{n−2} = 0 ,
Figure 3.1: The graph of Bessel's function J0(x)
or
an = −a_{n−2}/n^2 .
This recurrence relation tells us that all odd coefficients are zero, a_{2n+1} = 0, while for the even ones
a_{2n} = −a_{2n−2}/(2n)^2 = −a_{2n−2}/(2^2 n^2) .
Starting with a0 = 1, we compute a2 = −1/(2^2 1^2), a4 = −a2/(2^2 2^2) = (−1)^2 1/(2^4 (1 · 2)^2), a6 = −a4/(2^2 3^2) = (−1)^3 1/(2^6 (1 · 2 · 3)^2), and in general,
a_{2n} = (−1)^n 1/(2^{2n} (n!)^2) .
We then have
y = 1 + Σ_{n=1}^∞ a_{2n} x^{2n} = Σ_{n=0}^∞ (−1)^n (1/(2^{2n} (n!)^2)) x^{2n} .
We have obtained Bessel's function of order zero of the first kind; the customary notation is J0(x) = Σ_{n=0}^∞ (−1)^n (1/(2^{2n} (n!)^2)) x^{2n}.
3.2.1∗ Derivation of J0(x) by differentiation of the equation
We differentiate our equation (2.4) n times, and then set x = 0:
n y^{(n+1)} + x y^{(n+2)} + y^{(n+1)} + n y^{(n−1)} + x y^{(n)} = 0;
n y^{(n+1)}(0) + y^{(n+1)}(0) + n y^{(n−1)}(0) = 0 .
(It is not always true that x y^{(n+2)} → 0 as x → 0. However, in case of the initial conditions y(0) = 1, y'(0) = 0 that is true, as was justified in the author's paper [5].) We get the recursive relation
y^{(n+1)}(0) = −(n/(n + 1)) y^{(n−1)}(0) .
We begin with the initial conditions y(0) = 1, y'(0) = 0. Then all derivatives of odd order vanish, while
y^{(2n)}(0) = −((2n − 1)/2n) y^{(2n−2)}(0) = (−1)^2 ((2n − 1)/2n)((2n − 3)/(2n − 2)) y^{(2n−4)}(0) = . . . = (−1)^n ((2n − 1)(2n − 3) · · · 3 · 1/(2n (2n − 2) · · · 2)) y(0) = (−1)^n ((2n − 1)(2n − 3) · · · 3 · 1/(2^n n!)) .
Then
y(x) = Σ_{n=0}^∞ (y^{(2n)}(0)/(2n)!) x^{2n} = Σ_{n=0}^∞ (−1)^n (1/(2^{2n} (n!)^2)) x^{2n} .
We have again obtained Bessel's function of order zero of the first kind, J0(x) = Σ_{n=0}^∞ (−1)^n (1/(2^{2n} (n!)^2)) x^{2n}.
In case of the initial conditions y(0) = 0 and y'(0) = 1, the problem has no solution (the recurrence relation above is not valid in this case, because the relation x y^{(n+2)} → 0 as x → 0 is not true here). In fact, the second solution of Bessel's equation cannot be continuously differentiable at x = 0. Indeed, the Wronskian of two solutions is equal to c e^{−∫(1/x) dx} = c/x, i.e., W(y1, y2) = y1 y2' − y1' y2 = c/x. The solution J0(x) is bounded at x = 0, and it has a bounded derivative at x = 0. Therefore, the other solution must be discontinuous at x = 0. It turns out that the other solution, called Bessel's function of the second kind, and denoted Y0(x), has a ln x term in its series representation; see, e.g., [1].
3.3 Moderately Singular Equations
We shall deal only with the Maclaurin series, i.e., we take a = 0, the general case being similar. We consider the equation
(3.1)    x^2 y'' + x p(x) y' + q(x) y = 0 ,
where p(x) and q(x) are given infinitely differentiable functions that can be represented by their Maclaurin series
p(x) = p(0) + p'(0) x + (1/2) p''(0) x^2 + · · · ;
q(x) = q(0) + q'(0) x + (1/2) q''(0) x^2 + · · · .
If it happens that q(0) = 0, then q(x) has a factor of x, and we can divide the equation (3.1) by x, to obtain a mildly singular equation. So the difference from the preceding section is that we now allow the case q(0) ≠ 0. Observe also that in case p(x) and q(x) are constants, the equation (3.1) is Euler's equation, which we have studied before. This connection with Euler's equation is the “guiding light” of the theory that follows.
We change to a new unknown function v(x), by letting y(x) = x^r v(x), with a constant r to be specified. With y' = r x^{r−1} v + x^r v' and y'' = r(r−1) x^{r−2} v + 2r x^{r−1} v' + x^r v'', we plug y in:
(3.2)    x^{r+2} v'' + x^{r+1} v' (2r + p(x)) + x^r v [ r(r−1) + r p(x) + q(x) ] = 0 .
We now choose r to satisfy the following characteristic equation:
(3.3)    r(r−1) + r p(0) + q(0) = 0 .
The quantity in the square bracket in (3.2) is then
r p'(0) x + (r/2) p''(0) x^2 + · · · + q'(0) x + (1/2) q''(0) x^2 + · · · ,
i.e., it has a factor of x. We take this factor out, and divide the equation (3.2) by x^{r+1}, obtaining a mildly singular equation, which we analyzed in the previous section.
Example 2x^2 y'' − x y' + (1 + x) y = 0. To put it into the right form, we divide by 2:
x^2 y'' − (1/2) x y' + (1/2 + (1/2) x) y = 0 .
Here p(x) = −1/2 and q(x) = 1/2 + (1/2) x. The characteristic equation is then
r(r−1) − (1/2) r + 1/2 = 0 .
Its roots are r = 1/2 and r = 1.
Case r = 1/2. We know that the substitution y = x^{1/2} v will produce a mildly singular equation for v(x). The resulting equation for v could be derived from scratch, but it is easier to use (3.2):
x^{5/2} v'' + (1/2) x^{3/2} v' + (1/2) x^{3/2} v = 0 ,
or, dividing by x^{3/2}, we have a mildly singular equation
x v'' + (1/2) v' + (1/2) v = 0 .
We multiply this equation by 2x, for convenience,
2x^2 v'' + x v' + x v = 0 ,
and look for a solution in the form v(x) = Σ_{n=0}^∞ a_n x^n. Plugging this in, we get as before
Σ_{n=2}^∞ 2 a_n n(n−1) x^n + Σ_{n=1}^∞ a_n n x^n + Σ_{n=0}^∞ a_n x^{n+1} = 0 .
To line up the powers, we shift n → n − 1 in the last series. The first series we may begin at n = 1 instead of n = 2, because the coefficient at n = 1 is zero. We then have
Σ_{n=1}^∞ 2 a_n n(n−1) x^n + Σ_{n=1}^∞ a_n n x^n + Σ_{n=1}^∞ a_{n−1} x^n = 0 .
We can now combine these series into one series. Setting its coefficients to zero, we have
2 a_n n(n−1) + a_n n + a_{n−1} = 0 ,
which gives us a recurrence relation
a_n = − a_{n−1} / (n(2n−1)) .
Starting with a_0 = 1, we compute a_1 = −1/(1·1), a_2 = −a_1/(2·3) = (−1)^2 / ((1·2)(1·3)), a_3 = −a_2/(3·5) = (−1)^3 / ((1·2·3)(1·3·5)), and in general
a_n = (−1)^n / (n! · 1·3·5 · · · (2n−1)) .
We have computed the first solution
y_1(x) = x^{1/2} v(x) = x^{1/2} [ 1 + Σ_{n=1}^∞ (−1)^n x^n / (n! · 1·3·5 · · · (2n−1)) ] .
Case r = 1. We set y = x v. The resulting equation for v from (3.2) is
x^3 v'' + (3/2) x^2 v' + (1/2) x^2 v = 0 ,
or, dividing by x^2, we have a mildly singular equation
x v'' + (3/2) v' + (1/2) v = 0 .
We multiply this equation by 2x, for convenience,
2x^2 v'' + 3x v' + x v = 0 ,
and look for a solution in the form v(x) = Σ_{n=0}^∞ a_n x^n. Plugging this in, we get as before
Σ_{n=2}^∞ 2 a_n n(n−1) x^n + Σ_{n=1}^∞ 3 a_n n x^n + Σ_{n=0}^∞ a_n x^{n+1} = 0 .
As before, we can start the first series at n = 1, and make a shift n → n − 1 in the last series:
Σ_{n=1}^∞ 2 a_n n(n−1) x^n + Σ_{n=1}^∞ 3 a_n n x^n + Σ_{n=1}^∞ a_{n−1} x^n = 0 .
Setting the coefficient of x^n to zero,
2 a_n n(n−1) + 3 a_n n + a_{n−1} = 0 ,
gives us a recurrence relation
a_n = − a_{n−1} / (n(2n+1)) .
Starting with a_0 = 1, we compute a_1 = −1/(1·3), a_2 = −a_1/(2·5) = (−1)^2 / ((1·2)(1·3·5)), a_3 = −a_2/(3·7) = (−1)^3 / ((1·2·3)(1·3·5·7)), and in general
a_n = (−1)^n / (n! · 1·3·5 · · · (2n+1)) .
We now have the second solution
y_2(x) = x [ 1 + Σ_{n=1}^∞ (−1)^n x^n / (n! · 1·3·5 · · · (2n+1)) ] .
The general solution is, of course, y(x) = c_1 y_1 + c_2 y_2.
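The recurrence relations above are easy to check numerically. The following sketch (assuming NumPy is installed) builds the coefficients of y_1 from a_n = −a_{n−1}/(n(2n−1)) and verifies by finite differences that y_1 nearly satisfies 2x^2 y'' − x y' + (1 + x) y = 0:

```python
import numpy as np

# coefficients from the recurrence a_n = -a_{n-1}/(n(2n-1)), with a_0 = 1
a = [1.0]
for n in range(1, 25):
    a.append(-a[-1] / (n*(2*n - 1)))

def y1(x):
    # y_1(x) = x^{1/2} * sum_n a_n x^n
    return np.sqrt(x) * sum(an * x**n for n, an in enumerate(a))

x, h = 0.8, 1e-4                      # evaluation point, finite-difference step
yp  = (y1(x + h) - y1(x - h)) / (2*h)
ypp = (y1(x + h) - 2*y1(x) + y1(x - h)) / h**2
print(2*x**2*ypp - x*yp + (1 + x)*y1(x))   # ~ 0, up to the differencing error
```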
Example x^2 y'' + x y' + (x^2 − 1/9) y = 0.
This is Bessel's equation of order 1/3. Here p(x) = 1 and q(x) = x^2 − 1/9. The characteristic equation
r(r−1) + r − 1/9 = 0
has roots r = −1/3 and r = 1/3.
Case r = −1/3. We set y = x^{−1/3} v. Compute y' = −(1/3) x^{−4/3} v + x^{−1/3} v', and y'' = (4/9) x^{−7/3} v − (2/3) x^{−4/3} v' + x^{−1/3} v''. Plugging this in and simplifying, we get a mildly singular equation
x v'' + (1/3) v' + x v = 0 .
We multiply by 3x:
3x^2 v'' + x v' + 3x^2 v = 0 ,
and look for a solution in the form v(x) = Σ_{n=0}^∞ a_n x^n. Plugging this in, we get as before
Σ_{n=2}^∞ 3 a_n n(n−1) x^n + Σ_{n=1}^∞ a_n n x^n + Σ_{n=0}^∞ 3 a_n x^{n+2} = 0 .
We shift n → n − 2 in the last series:
Σ_{n=2}^∞ 3 a_n n(n−1) x^n + Σ_{n=1}^∞ a_n n x^n + Σ_{n=2}^∞ 3 a_{n−2} x^n = 0 .
The x term is present only in the second series. Its coefficient must be zero, i.e.,
(3.4)    a_1 = 0 .
The term x^n, with n ≥ 2, is present in all three series. Setting its coefficient to zero,
3 a_n n(n−1) + a_n n + 3 a_{n−2} = 0 ,
which gives us the recurrence relation
a_n = − 3 a_{n−2} / (n(3n−2)) .
We see that all odd coefficients are zero (because of (3.4)), while, starting with a_0 = 1, we compute the even ones:
a_{2n} = − 3 a_{2n−2} / (2n(6n−2)) ;
a_{2n} = (−1)^n 3^n / ((2·4 · · · 2n)(4·10 · · · (6n−2))) .
We have the first solution
y_1(x) = x^{−1/3} [ 1 + Σ_{n=1}^∞ (−1)^n 3^n x^{2n} / ((2·4 · · · 2n)(4·10 · · · (6n−2))) ] .
Case r = 1/3. We set y = x^{1/3} v. Compute y' = (1/3) x^{−2/3} v + x^{1/3} v', and y'' = −(2/9) x^{−5/3} v + (2/3) x^{−2/3} v' + x^{1/3} v''. Plugging this in and simplifying, we get a mildly singular equation
x v'' + (5/3) v' + x v = 0 .
We multiply by 3x:
3x^2 v'' + 5x v' + 3x^2 v = 0 ,
and look for a solution in the form v(x) = Σ_{n=0}^∞ a_n x^n. Plugging this in, we conclude as before that a_1 = 0, together with the recurrence relation
a_{2n} = − 3 a_{2n−2} / (2n(6n+2)) = − a_{2n−2} / (2n(2n + 2/3)) .
We then have the second solution
y_2(x) = x^{1/3} [ 1 + Σ_{n=1}^∞ (−1)^n x^{2n} / ((2·4 · · · 2n)((2 + 2/3)(4 + 2/3) · · · (2n + 2/3))) ] .
3.3.1 Problems
I. Find the Maclaurin series of the following functions, and state their radius of convergence.
1. sin x^2 .
2. 1/(1 + x^2) .
3. e^{−x^3} .
II. 1. Find the Taylor series of f(x) centered at a.
(i) f(x) = sin x, a = π/2.   (ii) f(x) = e^x, a = 1.   (iii) f(x) = 1/x, a = 1.
2. Show that
Σ_{n=1}^∞ ((1 + (−1)^n)/n^2) x^n = (1/2) Σ_{n=1}^∞ (1/n^2) x^{2n} .
3. Show that
Σ_{n=0}^∞ ((n+3)/(n!(n+1))) x^{n+1} = Σ_{n=1}^∞ ((n+2)/((n−1)! n)) x^n .
4. Expand the n-th derivative: [(x^2 + x) g(x)]^{(n)} .
5. Find the n-th derivative: [(x^2 + x) e^{2x}]^{(n)} .
III. Find the general solution, using power series centered at a (find the recurrence relation, and two linearly independent solutions).
1. y'' − x y' − y = 0, a = 0.
Answer. y_1(x) = Σ_{n=0}^∞ x^{2n} / (2^n n!),  y_2(x) = Σ_{n=0}^∞ (2^n n! / (2n+1)!) x^{2n+1}.
2. y'' − x y' + 2y = 0, a = 0.
Answer. y_1(x) = 1 − x^2,  y_2(x) = x − (1/6) x^3 − (1/120) x^5 − · · ·.
3. y'' − x y' − y = 0, a = 1.
Answer. y_1(x) = 1 + (1/2)(x−1)^2 + (1/6)(x−1)^3 + (1/6)(x−1)^4 + · · ·,
y_2(x) = (x−1) + (1/2)(x−1)^2 + (1/2)(x−1)^3 + (1/4)(x−1)^4 + · · ·.
4. (x^2 + 1) y'' + x y' + y = 0, a = 0.
Answer. The recurrence relation: y^{(n+2)}(0) = −(n^2 + 1) y^{(n)}(0).
y_1(x) = 1 − (1/2) x^2 + (5/24) x^4 − · · ·,  y_2(x) = x − (1/3) x^3 + (1/6) x^5 − · · ·.
IV. 1. Find the solution of the initial value problem, using power series centered at 2:
y'' − 2x y = 0,  y(2) = 1, y'(2) = 0 .
Answer. y = 1 + 2(x−2)^2 + (1/3)(x−2)^3 + (2/3)(x−2)^4 + · · ·.
2. Find the solution of the initial value problem, using power series centered at −1:
y'' + x y = 0,  y(−1) = 2, y'(−1) = −3 .
V. Find one series solution of the following mildly singular equations (here a = 0).
1. 2x y'' + y' + x y = 0.
Answer. y = 1 − x^2/(2·3) + x^4/(2·4·3·7) − x^6/(2·4·6·3·7·11) + · · · = 1 + Σ_{n=1}^∞ (−1)^n x^{2n} / (2^n n! · 3·7·11 · · · (4n−1)).
2. x y'' + y' − y = 0.
Answer. y = Σ_{n=0}^∞ x^n / (n!)^2 .
3. x y'' + 2y' + y = 0.
Answer. y = Σ_{n=0}^∞ (−1)^n x^n / (n!(n+1)!) .
4. x y'' + y' − 2x y = 0.
Answer. y = 1 + Σ_{n=1}^∞ x^{2n} / (2^n (n!)^2) .
VI. 1. Find one series solution in the form Σ_{n=5}^∞ a_n x^n of the following mildly singular equation (here a = 0):
x y'' − 4y' + y = 0 .
Answer. y = x^5 + 120 Σ_{n=6}^∞ ((−1)^{n−5} / (n!(n−5)!)) x^n .
2. Find one series solution of the following mildly singular equation (here a = 0):
x y'' − 2y' − 2y = 0 .
3. Find one series solution of the following mildly singular equation (here a = 0):
x y'' + y = 0 .
Hint: Look for a solution in the form Σ_{n=1}^∞ a_n x^n, starting with a_1 = 1.
Answer. y = Σ_{n=1}^∞ ((−1)^{n−1} / (n!(n−1)!)) x^n .
4. Recall that J_0(x) is a solution of
x y'' + y' + x y = 0 .
Show that the “energy” E(x) = y'(x)^2 + y(x)^2 is a decreasing function. Conclude that each maximum value of J_0(x) is greater than the absolute value of the minimum value that follows it, which in turn is larger than the next maximum value, and so on (see the graph of J_0(x)).
5. Show that the absolute value of the slope decreases at each consecutive root of J_0(x).
VII. 1. Verify that the Bessel equation of order 1/2,
x^2 y'' + x y' + (x^2 − 1/4) y = 0 ,
has a moderate singularity at zero. Write down the characteristic equation, and find its roots.
(i) Corresponding to the root r = 1/2, perform a change of variables y = x^{1/2} v to obtain a mildly singular equation for v(x). Find one solution of that equation, to obtain one of the solutions of the Bessel equation.
Answer. y = x^{1/2} [ 1 + Σ_{n=1}^∞ (−1)^n x^{2n} / (2n+1)! ] = x^{−1/2} [ x + Σ_{n=1}^∞ (−1)^n x^{2n+1} / (2n+1)! ] = x^{−1/2} sin x.
(ii) Corresponding to the root r = −1/2, perform a change of variables y = x^{−1/2} v to obtain a mildly singular equation for v(x). Find one solution of that equation, to obtain the second solution of the Bessel equation.
Answer. y = x^{−1/2} cos x.
(iii) Find the general solution.
Answer. y = c_1 x^{−1/2} sin x + c_2 x^{−1/2} cos x.
2. Find the general solution of the Bessel equation of order 3/2:
x^2 y'' + x y' + (x^2 − 9/4) y = 0 .
Chapter 4
Laplace Transform
4.1 Laplace Transform And Its Inverse
4.1.1 Review of Improper Integrals
The mechanics of computing integrals involving infinite limits is similar to that for integrals with finite end-points. For example,
∫_0^∞ e^{−2t} dt = − (1/2) e^{−2t} |_0^∞ = 1/2 .
Here we did not plug in the upper limit t = ∞, but rather computed the limit as t → ∞ (the limit is zero). This is an example of a convergent integral. On the other hand, the integral
∫_1^∞ (1/t) dt = ln t |_1^∞
is divergent, because ln t has an infinite limit as t → ∞. When computing improper integrals, we use the same techniques of integration, in essentially the same way. For example,
∫_0^∞ t e^{−2t} dt = ( − (1/2) t e^{−2t} − (1/4) e^{−2t} ) |_0^∞ = 1/4 .
Here the anti-derivative is computed by guess-and-check (or by integration by parts). The limit at infinity is computed by L'Hôpital's rule to be zero.
4.1.2 Laplace Transform
Let the function f(t) be defined on the interval [0, ∞). Let s > 0 be a positive parameter. We define the Laplace transform of f(t) as
F(s) = ∫_0^∞ e^{−st} f(t) dt = L(f(t)),
provided that this integral converges. It is customary to use the corresponding capital letter to denote the Laplace transform (i.e., the Laplace transform of g(t) is denoted by G(s), of h(t) by H(s), etc.). We also use the “operator” notation for the Laplace transform: L(f(t)).
We now build up a collection of Laplace transforms.
L(1) = ∫_0^∞ e^{−st} dt = − e^{−st}/s |_0^∞ = 1/s ;
L(t) = ∫_0^∞ e^{−st} t dt = [ − e^{−st} t/s − e^{−st}/s^2 ] |_0^∞ = 1/s^2 .
Using integration by parts,
L(t^n) = ∫_0^∞ e^{−st} t^n dt = − e^{−st} t^n / s |_0^∞ + (n/s) ∫_0^∞ e^{−st} t^{n−1} dt = (n/s) L(t^{n−1}) .
With this recurrence relation, we now compute L(t^2) = (2/s) L(t) = 2/s^3, L(t^3) = (3/s) L(t^2) = 3!/s^4, and in general
L(t^n) = n! / s^{n+1} .
The next class of functions are the exponentials e^{at}, where a is some number:
L(e^{at}) = ∫_0^∞ e^{−st} e^{at} dt = − (1/(s−a)) e^{−(s−a)t} |_0^∞ = 1/(s−a) , provided that s > a.
Here we had to assume that s > a, to obtain a convergent integral.
Next we observe that for any constants c_1 and c_2,
(1.1)    L(c_1 f(t) + c_2 g(t)) = c_1 F(s) + c_2 G(s),
because a similar property holds for integrals (and the Laplace transform is an integral). This expands considerably the set of functions for which we can write down the Laplace transform. For example,
L(cosh at) = L((1/2) e^{at} + (1/2) e^{−at}) = (1/2) (1/(s−a)) + (1/2) (1/(s+a)) = s/(s^2 − a^2) , for s > a .
Similarly,
L(sinh at) = a/(s^2 − a^2) , for s > a .
The Laplace transforms of sin at and cos at we compute in tandem. First, by the formula for exponentials,
L(e^{iat}) = 1/(s − ia) = (s + ia)/((s − ia)(s + ia)) = (s + ia)/(s^2 + a^2) = s/(s^2 + a^2) + i a/(s^2 + a^2) .
Now cos at = Re(e^{iat}) and sin at = Im(e^{iat}), and it follows that
L(cos at) = s/(s^2 + a^2) , and L(sin at) = a/(s^2 + a^2) .
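These table entries can be confirmed symbolically. A sketch, assuming SymPy is available (SymPy may return algebraically equivalent rearrangements):

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

print(sp.laplace_transform(t**3, t, s, noconds=True))         # 6/s**4
print(sp.laplace_transform(sp.cosh(a*t), t, s, noconds=True)) # s/(s**2 - a**2)
print(sp.laplace_transform(sp.sin(a*t), t, s, noconds=True))  # a/(a**2 + s**2)
```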
If c is some number, then
L(e^{ct} f(t)) = ∫_0^∞ e^{−st} e^{ct} f(t) dt = ∫_0^∞ e^{−(s−c)t} f(t) dt = F(s − c) .
This is called a shift formula:
(1.2)    L(e^{ct} f(t)) = F(s − c) .
For example,
L(e^{5t} sin 3t) = 3/((s−5)^2 + 9) .
Another example:
L(e^{−2t} cosh 3t) = (s+2)/((s+2)^2 − 9) .
In the last example c = −2, so that s − c = s + 2. Similarly,
L(e^t t^5) = 5!/(s−1)^6 .
4.1.3 Inverse Laplace Transform
This is just going from F(s) back to f(t). We denote it as f(t) = L^{−1}(F(s)). We have
L^{−1}(c_1 F(s) + c_2 G(s)) = c_1 f(t) + c_2 g(t) .
This is just the formula (1.1), read backwards. Each of the formulas for the Laplace transform leads to the corresponding formula for its inverse:
L^{−1}(1/s^{n+1}) = t^n / n! ;
L^{−1}(s/(s^2 + a^2)) = cos at ;
L^{−1}(1/(s^2 + a^2)) = (1/a) sin at ;
L^{−1}(1/(s − a)) = e^{at} ,
and so on. To compute L^{−1}, one often uses partial fractions, as well as the inverse of the shift formula (1.2),
(1.3)    L^{−1}(F(s − c)) = e^{ct} f(t) ,
which we shall also call the shift formula.
Example Find L^{−1}((3s − 5)/(s^2 + 4)).
Breaking this fraction into a sum of two fractions, we have
L^{−1}((3s − 5)/(s^2 + 4)) = 3 L^{−1}(s/(s^2 + 4)) − 5 L^{−1}(1/(s^2 + 4)) = 3 cos 2t − (5/2) sin 2t .
Example Find L^{−1}(2/(s − 5)^4).
We recognize that a shift by 5 is performed in the function 2/s^4. We begin by inverting this function, L^{−1}(2/s^4) = t^3/3, and then “pay” for the shift according to the shift formula (1.3):
L^{−1}(2/(s − 5)^4) = e^{5t} t^3/3 .
Example Find L^{−1}((s + 7)/(s^2 − s − 6)).
We factor the denominator, and then use partial fractions:
(s + 7)/(s^2 − s − 6) = (s + 7)/((s − 3)(s + 2)) = 2/(s − 3) − 1/(s + 2) ,
which gives
L^{−1}((s + 7)/(s^2 − s − 6)) = 2 e^{3t} − e^{−2t} .
Example Find L^{−1}((2s − 1)/(s^2 + 2s + 5)).
One cannot factor the denominator, so we complete the square:
(2s − 1)/(s^2 + 2s + 5) = (2s − 1)/((s + 1)^2 + 4) = (2(s + 1) − 3)/((s + 1)^2 + 4) ,
and then we adjust the numerator, so that it involves the same shift as the denominator. Without the shift, we have the function (2s − 3)/(s^2 + 4), whose inverse Laplace transform is 2 cos 2t − (3/2) sin 2t. By the shift formula,
L^{−1}((2s − 1)/(s^2 + 2s + 5)) = 2 e^{−t} cos 2t − (3/2) e^{−t} sin 2t .
2
Solving The Initial Value Problems
Integrating by parts,
L(f 0 (t)) =
Z
0
∞
h
i
∞
e−st f 0 (t) dt = e−st f (t) |0 +s
Z
0
∞
e−st f (t) dt .
Let us assume that f (t) does not grow too fast, as t → ∞, i.e., |f (t)| ≤ beat,
for some positive constants a and b. If we now choose s > a, then the limit
as t → ∞ is zero, while the lower limit gives −f (0). We then have
L(f 0 (t)) = −f (0) + sF (s) .
(2.1)
To compute the Laplace transform of f 00 (t), we use the formula (2.1) twice
(2.2) L(f 00 (t)) = L((f 0 (t))0 ) = −f 0 (0)+sL(f 0 (t)) = −f 0 (0)−sf (0)+s2 F (s) .
In general
(2.3)
L(f (n) (t)) = −f (n−1) (0) − sf (n−2) (0) − · · · − sn−1 f (0) + sn F (s) .
Example Solve y'' + 3y' + 2y = 0, y(0) = −1, y'(0) = 4.
We apply the Laplace transform to both sides of the equation. Using the linearity of the Laplace transform, and that L(0) = 0, we have
L(y'') + 3 L(y') + 2 L(y) = 0 .
We now apply the formulas (2.1), (2.2), and then use our initial conditions:
− y'(0) − s y(0) + s^2 Y(s) + 3 (− y(0) + s Y(s)) + 2 Y(s) = 0 ;
− 4 + s + s^2 Y(s) + 3 (1 + s Y(s)) + 2 Y(s) = 0 ;
(s^2 + 3s + 2) Y(s) + s − 1 = 0 .
We now solve for Y(s):
Y(s) = (1 − s)/(s^2 + 3s + 2) .
To get the solution, it remains to find the inverse Laplace transform y(t) = L^{−1}(Y(s)). For that we factor the denominator, and use partial fractions:
(1 − s)/(s^2 + 3s + 2) = (1 − s)/((s + 1)(s + 2)) = 2/(s + 1) − 3/(s + 2) .
Answer: y(t) = 2 e^{−t} − 3 e^{−2t} .
Of course, the problem could also be solved without using the Laplace transform, and the same is true for all other problems that we shall consider. The Laplace transform is an alternative solution method; in addition, it provides a tool that can be used in more involved situations, for example, for partial differential equations.
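The whole computation of the last example can also be scripted. A sketch assuming SymPy; the transformed equation below is exactly the one displayed above, with y(0) = −1 and y'(0) = 4 already substituted:

```python
import sympy as sp

t, s, Y = sp.symbols('t s Y', positive=True)

# (s^2 Y + s - 4) + 3(s Y + 1) + 2 Y = 0, from (2.1)-(2.2) with the data above
eq = (s**2*Y + s - 4) + 3*(s*Y + 1) + 2*Y
Ysol = sp.solve(eq, Y)[0]                  # (1 - s)/(s**2 + 3*s + 2)
print(sp.inverse_laplace_transform(Ysol, s, t))
# 2*exp(-t) - 3*exp(-2*t)   (times Heaviside(t) in SymPy's convention)
```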
Example Solve y'' − 4y' + 5y = 0, y(0) = 1, y'(0) = −2.
We apply the Laplace transform to both sides of the equation. Using our initial conditions, we get
2 − s + s^2 Y(s) − 4 (−1 + s Y(s)) + 5 Y(s) = 0 ;
Y(s) = (s − 6)/(s^2 − 4s + 5) .
To invert the Laplace transform, we complete the square in the denominator, and then produce the same shift in the numerator:
Y(s) = (s − 6)/(s^2 − 4s + 5) = ((s − 2) − 4)/((s − 2)^2 + 1) .
Answer: y(t) = e^{2t} cos t − 4 e^{2t} sin t.
Example Solve
y'' + ω^2 y = 5 cos 2t, ω ≠ 2,  y(0) = 1, y'(0) = 0 .
We have a spring with natural frequency ω, subjected to an external force of frequency 2. We apply the Laplace transform to both sides of the equation. Using our initial conditions, we get
− s + s^2 Y(s) + ω^2 Y(s) = 5s/(s^2 + 4) ;
Y(s) = 5s/((s^2 + 4)(s^2 + ω^2)) + s/(s^2 + ω^2) .
The second term is easy to invert. To find the inverse Laplace transform of the first term, we use guess-and-check (or partial fractions):
s/((s^2 + 4)(s^2 + ω^2)) = (1/(ω^2 − 4)) ( s/(s^2 + 4) − s/(s^2 + ω^2) ) .
Answer: y(t) = (5/(ω^2 − 4)) (cos 2t − cos ωt) + cos ωt.
When ω approaches 2, the amplitude of the oscillations gets large. To treat the case of resonance, when ω = 2, we need one more formula.
We differentiate in s both sides of the formula
F(s) = ∫_0^∞ e^{−st} f(t) dt ,
to obtain
F'(s) = − ∫_0^∞ e^{−st} t f(t) dt = − L(t f(t)) ,
or
L(t f(t)) = − F'(s) .
For example,
(2.4)    L(t sin 2t) = − (d/ds) (2/(s^2 + 4)) = 4s/(s^2 + 4)^2 .
Example Solve (a case of resonance)
y'' + 4y = 5 cos 2t,  y(0) = 0, y'(0) = 0 .
Using the Laplace transform, we have
s^2 Y(s) + 4 Y(s) = 5s/(s^2 + 4) ;
Y(s) = 5s/(s^2 + 4)^2 .
Then, using (2.4), y(t) = (5/4) t sin 2t. We see that the amplitude of the oscillations (which is equal to (5/4) t) tends to infinity with time.
Example Solve
y'''' − y = 0,  y(0) = 1, y'(0) = 0, y''(0) = 1, y'''(0) = 0 .
Applying the Laplace transform, then using our initial conditions, we have
− y'''(0) − s y''(0) − s^2 y'(0) − s^3 y(0) + s^4 Y(s) − Y(s) = 0 ;
(s^4 − 1) Y(s) = s^3 + s = s(s^2 + 1) ;
Y(s) = s(s^2 + 1)/(s^4 − 1) = s(s^2 + 1)/((s^2 − 1)(s^2 + 1)) = s/(s^2 − 1) .
I.e., y(t) = cosh t.
4.2.1 Step Functions
Sometimes an external force acts only over some finite time interval. One uses step functions to model such forces. The basic step function is the Heaviside function u_c(t), defined for any positive constant c by
u_c(t) = { 0 if t < c ;  1 if t ≥ c } .
[Graph: the Heaviside step function u_c(t), equal to 0 for t < c, jumping to 1 at t = c.]
(Oliver Heaviside, 1850–1925, was a self-taught English electrical engineer.) Using u_c(t), we can build up other step functions. For example, the function u_2(t) − u_4(t) is equal to 1 for 2 ≤ t < 4, and is zero otherwise. Then, for example, (u_2(t) − u_4(t)) t^2 gives a force which is equal to t^2 for 2 ≤ t < 4, and is zero for other t. We have
L(u_c(t)) = ∫_0^∞ e^{−st} u_c(t) dt = ∫_c^∞ e^{−st} dt = e^{−cs}/s ;
L^{−1}(e^{−cs}/s) = u_c(t) .
Similarly, we shall compute the Laplace transform of the following “shifted” function, which “begins” at t = c (it is zero for 0 < t < c):
L(u_c(t) f(t − c)) = ∫_0^∞ e^{−st} u_c(t) f(t − c) dt = ∫_c^∞ e^{−st} f(t − c) dt .
In the last integral we change the variable t → z, by setting t − c = z. We have dt = dz, and the integral becomes
∫_0^∞ e^{−s(c+z)} f(z) dz = e^{−cs} ∫_0^∞ e^{−sz} f(z) dz = e^{−cs} F(s) .
We have established another pair of shift formulas:
(2.5)    L(u_c(t) f(t − c)) = e^{−cs} F(s) ;  L^{−1}(e^{−cs} F(s)) = u_c(t) f(t − c) .
Example Solve
y'' + 9y = u_2(t) − u_4(t),  y(0) = 1, y'(0) = 0 .
Here the forcing term is equal to 1 for 2 ≤ t < 4, and is zero for other t. Taking the Laplace transform, then solving for Y(s), we have
s^2 Y(s) − s + 9 Y(s) = e^{−2s}/s − e^{−4s}/s ;
Y(s) = s/(s^2 + 9) + e^{−2s} · 1/(s(s^2 + 9)) − e^{−4s} · 1/(s(s^2 + 9)) .
Using guess-and-check (or partial fractions),
1/(s(s^2 + 9)) = (1/9)(1/s) − (1/9) s/(s^2 + 9) ,
and therefore
L^{−1}(1/(s(s^2 + 9))) = 1/9 − (1/9) cos 3t .
Using (2.5), we conclude
y(t) = cos 3t + u_2(t) [ 1/9 − (1/9) cos 3(t − 2) ] − u_4(t) [ 1/9 − (1/9) cos 3(t − 4) ] .
Observe that the solution undergoes changes in behavior at t = 2 and at t = 4.
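The solution is easy to evaluate (and plot) numerically. A sketch assuming NumPy is available; the helper u(t, c) implements the Heaviside step u_c(t):

```python
import numpy as np

def u(t, c):                       # the Heaviside step u_c(t)
    return np.where(t >= c, 1.0, 0.0)

t = np.linspace(0, 10, 1001)
y = (np.cos(3*t)
     + u(t, 2)*(1 - np.cos(3*(t - 2)))/9
     - u(t, 4)*(1 - np.cos(3*(t - 4)))/9)
# y follows cos(3t) until t = 2, then changes character at t = 2 and t = 4
```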
4.3 The Delta Function and Impulse Forces
Imagine a rod which is so thin that we can consider it to be one-dimensional, and so long that we assume it to extend for −∞ < t < ∞ along the t axis. Assume that the function ρ(t) gives the density of the rod. If we subdivide the rod, using the points t_1, t_2, . . . at a distance Δt apart, then the mass of the piece i can be approximated by ρ(t_i) Δt, and Σ_i ρ(t_i) Δt gives an approximation of the total mass. Passing to the limit, we get the exact value of the mass, m = lim_{Δt→0} Σ_i ρ(t_i) Δt = ∫_{−∞}^∞ ρ(t) dt. Assume now that the rod is moved to a new position in the (t, y) plane, with each point (t, 0) moved to (t, f(t)), where f(t) is a given function. What is the work needed for this move? For the piece i, the work is approximated by f(t_i) ρ(t_i) Δt. (We assume that the gravity g = 1 on our private planet.) The total work is then W = lim_{Δt→0} Σ_i f(t_i) ρ(t_i) Δt = ∫_{−∞}^∞ ρ(t) f(t) dt.
Now assume that the rod has unit mass, m = 1, and the entire mass is pushed into a single point t = 0. The resulting distribution of mass is called the delta distribution or the delta function, and is denoted δ(t). It has the following properties.
(i) δ(t) = 0 for t ≠ 0.
(ii) ∫_{−∞}^∞ δ(t) dt = 1 (unit mass).
(iii) ∫_{−∞}^∞ δ(t) f(t) dt = f(0).
The last formula holds because work is needed only to move the mass 1 at t = 0, the distance of f(0). Observe that δ(t) is not a usual function, like
the ones studied in Calculus. (If a usual function is equal to zero, except at one point, its integral is zero, over any interval.) One can think of δ(t) as the limit of the functions
f_ε(t) = { 1/(2ε) if −ε < t < ε ;  0 for other t }
as ε → 0. (Observe that ∫_{−∞}^∞ f_ε(t) dt = 1.)
For any number t_0, the function δ(t − t_0) gives a translation of the delta function, with unit mass concentrated at t = t_0. Correspondingly, its properties are
(i) δ(t − t_0) = 0 for t ≠ t_0.
(ii) ∫_{−∞}^∞ δ(t − t_0) dt = 1 .
(iii) ∫_{−∞}^∞ δ(t − t_0) f(t) dt = f(t_0).
Using the properties (i) and (iii), we compute the Laplace transform, for any t_0 ≥ 0,
L(δ(t − t_0)) = ∫_0^∞ δ(t − t_0) e^{−st} dt = ∫_{−∞}^∞ δ(t − t_0) e^{−st} dt = e^{−s t_0} .
In particular,
L(δ(t)) = 1 .
Correspondingly, L^{−1}(1) = δ(t), and L^{−1}(e^{−s t_0}) = δ(t − t_0).
Of course, other physical quantities can also be concentrated at a point.
In the following example we consider forced vibrations of a spring, with the
external force concentrated at t = 2. In other words, an external impulse is
applied at t = 2.
Example Solve the initial value problem
y'' + 2y' + 5y = 6 δ(t − 2),  y(0) = 0, y'(0) = 0 .
Applying the Laplace transform, then solving for Y(s),
(s^2 + 2s + 5) Y(s) = 6 e^{−2s} ;
Y(s) = 6 e^{−2s}/(s^2 + 2s + 5) = e^{−2s} · 6/((s + 1)^2 + 4) .
[Figure 4.1: Spring's response to an impulse force.]
Recalling that by a shift formula L^{−1}(1/((s + 1)^2 + 4)) = (1/2) e^{−t} sin 2t, we conclude, using (2.5), that
y(t) = 3 u_2(t) e^{−(t−2)} sin 2(t − 2) .
Before the time t = 2, the external force is zero. Coupled with zero initial conditions, this leaves the spring at rest for t ≤ 2. An impulse force at t = 2 sets the spring in motion, but the vibrations quickly die down, because of heavy damping; see Figure 4.1.
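One way to check this numerically is to replace the impulse by a tall, narrow rectangular pulse of area 6 and integrate the equation; as the pulse narrows, the computed solution approaches the closed form. A sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-2                 # pulse width; 6*delta(t-2) ~ rectangle of height 6/eps

def rhs(t, z):
    force = 6.0/eps if 2.0 <= t <= 2.0 + eps else 0.0
    return [z[1], force - 2*z[1] - 5*z[0]]

sol = solve_ivp(rhs, [0, 8], [0.0, 0.0], max_step=1e-3)
t = sol.t
exact = 3*np.where(t >= 2, np.exp(-(t - 2))*np.sin(2*(t - 2)), 0.0)
print(np.abs(sol.y[0] - exact).max())    # small, and shrinking as eps -> 0
```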
4.4 Convolution and the Tautochrone curve
The problem
y'' + y = 0,  y(0) = 0, y'(0) = 1
has solution y = sin t. If we now add a forcing term g(t), and take zero initial conditions,
y'' + y = g(t),  y(0) = 0, y'(0) = 0 ,
then the solution is
(4.1)    y(t) = ∫_0^t sin(t − v) g(v) dv,
as we saw in the section on convolution integrals. Motivated by this formula, we now define the concept of the convolution of two functions f(t) and g(t):
f ∗ g = ∫_0^t f(t − v) g(v) dv .
The formula (4.1) can then be written as
y(t) = sin t ∗ g(t) .
Another example:
t ∗ t^2 = ∫_0^t (t − v) v^2 dv = t ∫_0^t v^2 dv − ∫_0^t v^3 dv = t^4/3 − t^4/4 = t^4/12 .
If you compute t^2 ∗ t, the answer is the same. More generally,
g ∗ f = f ∗ g .
Indeed, making a change of variables v → u, by letting u = t − v, we express
g ∗ f = ∫_0^t g(t − v) f(v) dv = − ∫_t^0 g(u) f(t − u) du = ∫_0^t f(t − u) g(u) du = f ∗ g .
It turns out that the Laplace transform of a convolution is equal to the product of the Laplace transforms:
L(f ∗ g) = F(s) G(s) .
Indeed,
L(f ∗ g) = ∫_0^∞ e^{−st} ( ∫_0^t f(t − v) g(v) dv ) dt = ∬_D e^{−st} f(t − v) g(v) dv dt ,
where the double integral on the right hand side is taken over the region D of the tv plane, which is the infinite wedge 0 < v < t in the first quadrant. We now evaluate this double integral by using the reverse order of repeated integrations:
(4.2)    ∬_D e^{−st} f(t − v) g(v) dv dt = ∫_0^∞ g(v) ( ∫_v^∞ e^{−st} f(t − v) dt ) dv .
For the integral in the brackets, we make a change of variables t → u, by letting u = t − v:
∫_v^∞ e^{−st} f(t − v) dt = ∫_0^∞ e^{−s(v+u)} f(u) du = e^{−sv} F(s) ,
and then the right hand side of (4.2) is equal to F (s)G(s).
[The infinite wedge D: the region 0 < v < t in the first quadrant of the (t, v) plane.]
This gives a useful formula:
L^{−1}(F(s) G(s)) = (f ∗ g)(t) .
For example,
L^{−1}(s^2/(s^2 + 4)^2) = cos 2t ∗ cos 2t = ∫_0^t cos 2(t − v) cos 2v dv .
Using that
cos 2(t − v) = cos 2t cos 2v + sin 2t sin 2v ,
we conclude
L^{−1}(s^2/(s^2 + 4)^2) = cos 2t ∫_0^t cos^2 2v dv + sin 2t ∫_0^t sin 2v cos 2v dv = (1/2) t cos 2t + (1/4) sin 2t .
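The convolution integral above can be confirmed symbolically. A sketch, assuming SymPy is available:

```python
import sympy as sp

t, v = sp.symbols('t v', positive=True)
conv = sp.integrate(sp.cos(2*(t - v))*sp.cos(2*v), (v, 0, t))
print(sp.simplify(conv - (t*sp.cos(2*t)/2 + sp.sin(2*t)/4)))   # 0
```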
Example Consider vibrations of a spring at resonance:
y'' + y = −3 cos t,  y(0) = 0, y'(0) = 0 .
Taking the Laplace transform, we compute
Y(s) = −3 s/(s^2 + 1)^2 .
Then
y(t) = −3 sin t ∗ cos t = − (3/2) t sin t .
The Tautochrone curve
[Figure: the tautochrone curve. A particle starts from rest at a point (x, y) in the first quadrant, passes through intermediate positions (x_1, v), and slides down to the bottom at (0, 0).]
Assume a particle slides down a curve through the origin in the first quadrant of the xy plane, under the influence of the force of gravity. We wish to find a curve such that the time T it takes to reach the bottom at (0, 0) is the same, regardless of the starting point (x, y). (The initial velocity at the starting point is assumed to be zero.) This historical curve, the tautochrone (which means, loosely, “the same time” in Greek), was found by Christiaan Huygens in 1673. He was motivated by the construction of a clock pendulum whose period is independent of its amplitude.
Let (x_1, v) be any intermediate position of the particle, v < y. Let s = f(v) be the length of the curve from (0, 0) to (x_1, v). Of course the length s depends also on the time t, and ds/dt gives the speed of the particle. The kinetic energy of the particle is due to the decrease of its potential energy (m is the mass of the particle):
(1/2) m (ds/dt)^2 = m g (y − v) .
By the chain rule, ds/dt = (ds/dv)(dv/dt) = f'(v) dv/dt, giving us
(1/2) f'(v)^2 (dv/dt)^2 = g(y − v) ;
f'(v) dv/dt = − √(2g) √(y − v) .
(The minus sign is needed because the function v(t) is decreasing.) We separate the variables, and integrate:
∫_0^y f'(v)/√(y − v) dv = ∫_0^T √(2g) dt = √(2g) T .
(During the time T, the particle descends from v = y to v = 0.) To find the function f', we need to solve an integral equation. We can rewrite it as
(4.3)    y^{−1/2} ∗ f'(y) = √(2g) T .
Recall that in one of our homework problems we had L(t^{−1/2}) = √(π/s), or in terms of the variable y,
L(y^{−1/2}) = √(π/s) .
We now apply the Laplace transform to the equation (4.3):
(4.4)    √(π/s) L(f'(y)) = √(2g) T (1/s) .
We now solve for L(f'(y)), and put the answer in the form
L(f'(y)) = (T/π) √(2g) √(π/s) = √a √(π/s) ,
where we denote a = (T^2/π^2) 2g. Using the formula L(y^{−1/2}) = √(π/s) again,
(4.5)    f'(y) = √a y^{−1/2} .
We have ds = √(dx^2 + dy^2), and so f'(y) = ds/dy = √(1 + (dx/dy)^2). We use this in (4.5):
(4.6)    √(1 + (dx/dy)^2) = √a (1/√y) .
This is a first order equation. We could solve it for dx/dy, and then separate the variables. But it is easier to use the parametric integration technique. To do that, we solve this equation for y,
(4.7)    y = a / (1 + (dx/dy)^2) ,
and set
(4.8)    dx/dy = (1 + cos θ)/sin θ ,
where θ is a parameter. Using (4.8) in (4.7), we express
y = a sin^2 θ / (sin^2 θ + (1 + cos θ)^2) = a sin^2 θ / (2 + 2 cos θ) = (a/2) (1 − cos^2 θ)/(1 + cos θ) = (a/2)(1 − cos θ) .
Observe that dy = (a/2) sin θ dθ, and then from (4.8)
dx = ((1 + cos θ)/sin θ) dy = (a/2)(1 + cos θ) dθ .
We compute x by integration, obtaining
x = (a/2)(θ + sin θ),  y = (a/2)(1 − cos θ) .
We have obtained a parametric representation of the tautochrone. This curve was studied in Calculus, under the name of cycloid.
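Since f'(y) = √a y^{−1/2}, the descent time from a starting height y_0 is T = (1/√(2g)) ∫_0^{y_0} f'(v)/√(y_0 − v) dv · √(2g) = ∫_0^{y_0} √(a/v)/√(2g(y_0 − v)) dv, and the defining property of the tautochrone is that this value does not depend on y_0. A numerical sanity check, as a sketch assuming SciPy (quad copes with the integrable endpoint singularities):

```python
import numpy as np
from scipy.integrate import quad

g, a = 9.8, 1.0

def descent_time(y0):
    integrand = lambda v: np.sqrt(a/v) / np.sqrt(2*g*(y0 - v))
    return quad(integrand, 0, y0)[0]

for y0 in [0.1, 0.5, 2.0]:
    print(y0, descent_time(y0))   # each ~ pi*sqrt(a/(2g)) ~ 0.7097
```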
4.5 Distributions
A function f(t) converts a number t into a number f(t). (Sometimes we do not plug in all real t, but restrict f(t) to its domain.) Functionals convert functions into numbers. Actually, only “nice” functions may be plugged into functionals.
A function ϕ(t) is said to be of compact support if it is equal to zero outside of some interval (a, b). Functions that are infinitely differentiable and of compact support are called test functions. We shall reserve the notation ϕ(t) exclusively for test functions.
Definition A distribution is a linear functional on test functions. Notation: (f, ϕ). (A test function ϕ(t) goes in, a number (f, ϕ) is the output.)
Example Let f(t) be any continuous function. Define a distribution
(5.9)    (f, ϕ) = ∫_{−∞}^∞ f(t) ϕ(t) dt .
Convergence is not a problem here, because the integrand vanishes outside of some interval. This functional is linear, because
(f, c_1 ϕ_1 + c_2 ϕ_2) = ∫_{−∞}^∞ f(t) (c_1 ϕ_1 + c_2 ϕ_2) dt = c_1 ∫_{−∞}^∞ f(t) ϕ_1 dt + c_2 ∫_{−∞}^∞ f(t) ϕ_2 dt = c_1 (f, ϕ_1) + c_2 (f, ϕ_2) ,
for any two constants c_1 and c_2, and any two test functions ϕ_1(t) and ϕ_2(t). We see that any “usual” function can be viewed as a distribution, i.e., the formula (5.9) lets us consider f(t) in the sense of distributions.
Example The delta distribution. Define
(δ(t), ϕ) = ϕ(0) .
(Compare this with (5.9), and the intuitive formula ∫_{−∞}^∞ δ(t) ϕ(t) dt = ϕ(0) that we had before.) We see that in the realm of distributions the delta function δ(t) sits next to the usual functions, as an equal member of the club.
Assume f(t) is a differentiable function. Viewing f'(t) as a distribution, we have
(f', ϕ) = ∫_{−∞}^∞ f'(t) ϕ(t) dt = − ∫_{−∞}^∞ f(t) ϕ'(t) dt = −(f, ϕ') ,
using an integration by parts. Motivated by this, we now define the derivative of any distribution f by
(f', ϕ) = −(f, ϕ') .
In particular,
(δ', ϕ) = −(δ, ϕ') = −ϕ'(0) .
(I.e., δ' is another distribution. We know how it acts on test functions.) Similarly, (δ'', ϕ) = −(δ', ϕ') = ϕ''(0), and in general
(δ^{(n)}, ϕ) = (−1)^n ϕ^{(n)}(0) .
We see that all distributions are infinitely differentiable! In particular, all functions are infinitely differentiable, if we view them as distributions.
Example The Heaviside function H(t) has a jump at t = 0. Clearly, H(t) is not differentiable at t = 0. But in the sense of distributions,
H'(t) = δ(t) .
Indeed,
(H'(t), ϕ) = −(H(t), ϕ') = − ∫_0^∞ ϕ'(t) dt = ϕ(0) = (δ(t), ϕ) .
Example The function |t| is not differentiable at t = 0. But in the sense of distributions,
|t|' = 2H(t) − 1 .
Indeed,
(|t|', ϕ) = −(|t|, ϕ') = − ∫_{−∞}^0 (−t) ϕ'(t) dt − ∫_0^∞ t ϕ'(t) dt .
Integrating by parts in both integrals, we continue:
(|t|', ϕ) = − ∫_{−∞}^0 ϕ(t) dt + ∫_0^∞ ϕ(t) dt = (2H(t) − 1, ϕ) .
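Both of these distributional derivatives are built into computer algebra systems. A sketch, assuming SymPy is available:

```python
import sympy as sp

t = sp.symbols('t', real=True)
print(sp.diff(sp.Heaviside(t), t))   # DiracDelta(t)
print(sp.diff(sp.Abs(t), t))         # sign(t), which equals 2*Heaviside(t) - 1
```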
4.5.1 Problems
I. Find the Laplace transform of the following functions.
1. 5 + 2t^3 − e^{−4t}.   Ans. 5/s + 12/s^4 − 1/(s + 4).
2. 2 sin 3t − t^3.   Ans. 6/(s^2 + 9) − 6/s^4.
3. cosh 2t − e^{4t}.
4. e^{2t} cos 3t.   Ans. (s − 2)/((s − 2)^2 + 9).
5. (t^3 − 3t)/t.   Ans. 2/s^3 − 3/s.
6. e^{−3t} t^4.   Ans. 24/(s + 3)^5.
II. Find the inverse Laplace transform of the following functions.
1. 2/(s^2 + 4) − 1/s^3.   Ans. sin 2t − t^2/2.
2. s/(s^2 − 9) − 2/(s + 3).
3. 1/(s^2 + s).   Ans. 1 − e^{−t}.
4. 1/(s^3 + s).   Ans. 1 − cos t.
5. 1/(s^2 − 3s).   Ans. (1/3) e^{3t} − 1/3.
6. 1/((s^2 + 1)(s^2 + 4)).   Ans. (1/3) sin t − (1/6) sin 2t.
7. 1/(s^2 + 2s + 10).   Ans. (1/3) e^{−t} sin 3t.
8. 1/(s^2 + s − 2).   Ans. (1/3) e^t − (1/3) e^{−2t}.
9. s/(s^2 + s + 1).   Ans. e^{−t/2} [ cos (√3/2) t − (1/√3) sin (√3/2) t ].
10. (s − 1)/(s^2 − s − 2).   Ans. (1/3) e^{2t} + (2/3) e^{−t}.
11. s/(4s^2 − 4s + 5).   Ans. e^{t/2} ( (1/4) cos t + (1/8) sin t ).
III. Using the Laplace transform, solve the following initial value problems.
1. y'' + 3y' + 2y = 0, y(0) = −1, y'(0) = 2.   Ans. y = −e^{−2t}.
2. y'' + 2y' + 5y = 0, y(0) = 1, y'(0) = −2.   Ans. y = (1/2) e^{−t} (2 cos 2t − sin 2t).
3. y'' + y = sin 2t, y(0) = 0, y'(0) = 1.   Ans. y = (5/3) sin t − (1/3) sin 2t.
4. y'' + 2y' + 2y = e^t, y(0) = 0, y'(0) = 1.   Ans. y = (1/5) e^t − (1/5) e^{−t} (cos t − 3 sin t).
5. y'''' − y = 0, y(0) = 0, y'(0) = 1, y''(0) = 0, y'''(0) = 0.   Ans. y = (1/2) sin t + (1/2) sinh t.
6. y'''' − 16y = 0, y(0) = 0, y'(0) = 2, y''(0) = 0, y'''(0) = 8.   Ans. y = sinh 2t.
IV.
1. (a) Let s > 0. Show that
∫_{−∞}^∞ e^{−sx^2} dx = √(π/s) .
Hint: Denote I = ∫_{−∞}^∞ e^{−sx^2} dx. Then
I^2 = ( ∫_{−∞}^∞ e^{−sx^2} dx ) ( ∫_{−∞}^∞ e^{−sy^2} dy ) = ∫_{−∞}^∞ ∫_{−∞}^∞ e^{−s(x^2 + y^2)} dA .
This is a double integral over the entire xy plane. Evaluate it using polar coordinates, to obtain I^2 = π/s.
(b) Show that L(t^{−1/2}) = √(π/s).
Hint: L(t^{−1/2}) = ∫_0^∞ t^{−1/2} e^{−st} dt. Make a change of variables t → x, by letting x = t^{1/2}. Then
L(t^{−1/2}) = 2 ∫_0^∞ e^{−sx^2} dx = ∫_{−∞}^∞ e^{−sx^2} dx = √(π/s) .
2. Show that
∫_0^∞ (sin x / x) dx = π/2 .
Hint: Consider f(t) = ∫_0^∞ (sin tx / x) dx, and calculate the Laplace transform F(s) = (π/2)(1/s).
3. Solve the system of differential equations
dx/dt = 2x − y, x(0) = 4 ;
dy/dt = −x + 2y, y(0) = −2 .
Ans. x(t) = e^t + 3 e^{3t}, y(t) = e^t − 3 e^{3t}.
4. Solve the following non-homogeneous system of differential equations
x' = 2x − 3y + t, x(0) = 0 ;
y' = −2x + y, y(0) = 1 .
Ans. x(t) = e^{−t} − (9/16) e^{4t} + (1/16)(−7 + 4t),  y(t) = e^{−t} + (3/8) e^{4t} + (1/8)(−3 + 4t).
V.
1. A function f(t) is equal to 1 for 1 ≤ t ≤ 5, and is equal to 0 for all other t ≥ 0. Represent f(t) as a difference of two step functions, and find its Laplace transform.
Ans. f(t) = u_1(t) − u_5(t), F(s) = e^{−s}/s − e^{−5s}/s.
2. Find the Laplace transform of t^2 − 2u_4(t).   Ans. F(s) = 2/s^3 − 2 e^{−4s}/s.
3. Sketch the graph of the function u_2(t) − 2u_3(t), and find its Laplace transform.
4. A function g(t) is equal to 1 for 0 ≤ t ≤ 5, and is equal to −2 for t > 5. Represent g(t) using step functions, and find its Laplace transform.
Ans. g(t) = 1 − 3u_5(t), G(s) = 1/s − 3 e^{−5s}/s.
5. Find the inverse Laplace transform of (1/s^2)(2 e^{−s} − 3 e^{−4s}).
Ans. 2 u_1(t)(t − 1) − 3 u_4(t)(t − 4).
6. Find the inverse Laplace transform of e^{−2s} (3s − 1)/(s^2 + 4).
Ans. u_2(t) [ 3 cos 2(t − 2) − (1/2) sin 2(t − 2) ].
7. Find the inverse Laplace transform of e^{−s} · 1/(s^2 + s − 6).
Ans. u_1(t) [ (1/5) e^{2t−2} − (1/5) e^{−3t+3} ].
8. Find the inverse Laplace transform of e^{−(π/2)s} · 1/(s^2 + 2s + 5), and simplify the answer.
Ans. −(1/2) u_{π/2}(t) e^{−t+π/2} sin 2t.
9. Solve y'' + y = 2u_1(t) − u_5(t), y(0) = 0, y'(0) = 2.
10. Solve y'' + 3y' + 2y = u_2(t), y(0) = 0, y'(0) = −1.
Ans. y(t) = e^{−2t} − e^{−t} + u_2(t) [ 1/2 − e^{−(t−2)} + (1/2) e^{−2(t−2)} ].
VI.
1. Show that L(u_c'(t)) = L(δ(t − c)).
(This problem shows that u_c'(t) = δ(t − c).)
2. Find the Laplace transform of δ(t − 4) − 2u_4(t).
3. Solve y'' + y = δ(t − π), y(0) = 0, y'(0) = 2.
Ans. y(t) = 2 sin t − u_π(t) sin t.
4. Solve y'' + 2y' + 10y = δ(t − π), y(0) = 0, y'(0) = 0.
Ans. y(t) = −(1/3) u_π(t) e^{−t+π} sin 3t.
5. Solve 4y'' + y = δ(t), y(0) = 0, y'(0) = 0.
Ans. y(t) = (1/2) sin (1/2) t.
6. Solve 4y'' + 4y' + 5y = δ(t − 2π), y(0) = 0, y'(0) = 1.
Ans. y(t) = e^{−t/2} sin t + (1/4) u_{2π}(t) e^{−(t−2π)/2} sin t.
VII.
1. Show that sin t ∗ 1 = 1 − cos t. (Observe that sin t ∗ 1 ≠ sin t.)
2. Show that f(t) ∗ δ(t) = f(t), for any f(t). (So the delta function plays the role of unity for convolution.)
3. Find the convolution t ∗ sin at.   Ans. (at − sin at)/a^2.
4. Find the convolution cos t ∗ cos t.   Ans. (1/2) t cos t + (1/2) sin t.
Hint: cos(t − v) = cos t cos v + sin t sin v.
5. Using convolutions, find the inverse Laplace transforms of the following functions.
(a) 1/(s^3 (s^2 + 1)).   Ans. (t^2/2) ∗ sin t = t^2/2 + cos t − 1.
(b) s/((s + 1)(s^2 + 9)).   Ans. −(1/10) e^{−t} + (1/10) cos 3t + (3/10) sin 3t.
(c) s/((s^2 + 1)^2).   Ans. (1/2) t sin t.
(d) 1/((s^2 + 9)^2).   Ans. (1/54) sin 3t − (1/18) t cos 3t.
6. Solve the following initial value problem at resonance:
y'' + 9y = cos 3t,  y(0) = 0, y'(0) = 0.
Ans. y(t) = (1/6) t sin 3t.
7. Solve the initial value problem with a given forcing term g(t):
y'' + 4y = g(t),  y(0) = 0, y'(0) = 0.
Ans. y(t) = (1/2) ∫_0^t sin 2(t − v) g(v) dv.
VIII.
1. Find the second derivative of |t| in the sense of distributions.   Ans. 2δ(t).
2. Find f(t), such that f^{(n)}(t) = δ(t).
Ans. f(t) = { 0 if t < 0 ;  t^{n−1}/(n−1)! if t > 0 }.
3. Let f(t) = { t^2 if t < 0 ;  t^2 + 5 if t > 0 }. Show that f'(t) = 2t + 5δ(t).
Hint: f(t) = t^2 + 5H(t).
Chapter 5
Linear Systems of Differential Equations
5.1 The Case of Distinct Eigenvalues
5.1.1 Review of Vectors and Matrices
Recall that given two vectors C_1 = [a_1, a_2, a_3]^T and C_2 = [b_1, b_2, b_3]^T, we can add them as C_1 + C_2 = [a_1 + b_1, a_2 + b_2, a_3 + b_3]^T, or multiply by a constant x: x C_1 = [x a_1, x a_2, x a_3]^T. More generally, we can compute the linear combinations x_1 C_1 + x_2 C_2 = [x_1 a_1 + x_2 b_1, x_1 a_2 + x_2 b_2, x_1 a_3 + x_2 b_3]^T, for any two constants x_1 and x_2. We shall be dealing only with square matrices, like A = [[a_{11}, a_{12}, a_{13}], [a_{21}, a_{22}, a_{23}], [a_{31}, a_{32}, a_{33}]] (written row-by-row). We shall view A as a row of column vectors, A = [C_1 C_2 C_3], where C_1 = [a_{11}, a_{21}, a_{31}]^T, C_2 = [a_{12}, a_{22}, a_{32}]^T, and C_3 = [a_{13}, a_{23}, a_{33}]^T. The product of the matrix A and of the vector x = [x_1, x_2, x_3]^T is defined as the vector
A x = C_1 x_1 + C_2 x_2 + C_3 x_3 .
(This definition is equivalent to the more traditional one that you might have seen before.) We get
A x = [a_{11} x_1 + a_{12} x_2 + a_{13} x_3,  a_{21} x_1 + a_{22} x_2 + a_{23} x_3,  a_{31} x_1 + a_{32} x_2 + a_{33} x_3]^T .
Two vectors C1 and C2 are called linearly dependent if one of them is a
constant multiple of the other, i.e., C2 = aC1 , for some number a. (Zero
vector is linearly dependent with any other.) Linearly dependent C1 and
C2 go along the same line. If the vectors C1 and C2 do not go along the
same line, they are linearly independent. Three vectors C1 , C2 and C3 are
called linearly dependent if one of them is a linear combination of the others,
e.g., if C3 = aC1 + bC2 , for some numbers a, b. This means that C3 lies
in the plane determined by C1 and C2 , so that all three vectors lie in the
same plane. If C1 , C2 and C3 do not lie in the same plane, they are linearly
independent.
A system of 3 equations with 3 unknowns,
(1.1)    a_{11} x_1 + a_{12} x_2 + a_{13} x_3 = b_1
         a_{21} x_1 + a_{22} x_2 + a_{23} x_3 = b_2
         a_{31} x_1 + a_{32} x_2 + a_{33} x_3 = b_3 ,
can be written in the matrix form
A x = b ,
where A is the 3 × 3 matrix above, and b = [b_1, b_2, b_3]^T is the given vector of the right hand sides. Recall that the system (1.1) has a unique solution for any vector b if and only if the columns of the matrix A are linearly independent. (In that case, the determinant |A| ≠ 0, and the inverse matrix A^{−1} exists.)
5.1.2 Linear First Order Systems with Constant Coefficients
We wish to find the functions x_1(t), x_2(t) and x_3(t), which solve the following system of equations, with given constant coefficients a_{11}, . . . , a_{33}:
(1.2)    x_1' = a_{11} x_1 + a_{12} x_2 + a_{13} x_3
         x_2' = a_{21} x_1 + a_{22} x_2 + a_{23} x_3
         x_3' = a_{31} x_1 + a_{32} x_2 + a_{33} x_3 ,
subject to the given initial conditions
x_1(t_0) = α, x_2(t_0) = β, x_3(t_0) = γ .
We shall write this system in matrix notation:
(1.3)    x' = A x,  x(t_0) = x_0 ,
where x(t) = [x_1(t), x_2(t), x_3(t)]^T is the unknown vector function, A is the 3 × 3 matrix above, and x_0 = [α, β, γ]^T is the vector of initial conditions. Indeed, on the left in (1.2) we have the components of the vector x'(t) = [x_1'(t), x_2'(t), x_3'(t)]^T, while on the right we see the components of the vector A x.
Let us observe that given two vector functions y(t) and z(t) which are solutions of (1.3), their linear combination c_1 y(t) + c_2 z(t) is also a solution of (1.3), for any constants c_1 and c_2. This is because our system is linear.
We now search for a solution of (1.3) in the form
(1.4)    x(t) = e^{λt} ξ ,
with a constant λ, and a vector ξ whose entries are independent of t. Plugging this into (1.3), we have
λ e^{λt} ξ = A e^{λt} ξ ;
A ξ = λ ξ .
So if λ is an eigenvalue of A, and ξ the corresponding eigenvector, then (1.4) gives us a solution of the problem (1.3). Observe that this is true for any square n × n matrix A. Let λ_1, λ_2 and λ_3 be the eigenvalues of our 3 × 3 matrix A. There are several cases to consider.
Case 1 The eigenvalues of A are real and distinct. Recall that the corresponding eigenvectors ξ_1, ξ_2 and ξ_3 are then linearly independent. We know that e^{λ_1 t} ξ_1, e^{λ_2 t} ξ_2 and e^{λ_3 t} ξ_3 are solutions of our system, so that their linear combination
(1.5)    x(t) = c_1 e^{λ_1 t} ξ_1 + c_2 e^{λ_2 t} ξ_2 + c_3 e^{λ_3 t} ξ_3
gives the general solution of our system. We need to determine c_1, c_2, and c_3 to satisfy the initial conditions, i.e.,
(1.6)    x(t_0) = c_1 e^{λ_1 t_0} ξ_1 + c_2 e^{λ_2 t_0} ξ_2 + c_3 e^{λ_3 t_0} ξ_3 = x_0 .
This is a system of three linear equations with three unknowns c_1, c_2, and c_3. The matrix of this system is non-singular, because its columns are linearly independent (observe that these columns are constant multiples of the linearly independent vectors ξ_1, ξ_2 and ξ_3). Therefore we can find a unique solution triple c̄_1, c̄_2, and c̄_3 of the system (1.6). Then x(t) = c̄_1 e^{λ_1 t} ξ_1 + c̄_2 e^{λ_2 t} ξ_2 + c̄_3 e^{λ_3 t} ξ_3 is the desired solution of our initial value problem (1.3).
Example Solve
x' = [[2, 1], [1, 2]] x,  x(0) = [−1, 2]^T .
This is a 2 × 2 system, so that we have only two terms in (1.5). We compute the eigenvalue λ_1 = 1, with the corresponding eigenvector ξ_1 = [−1, 1]^T, and λ_2 = 3, with the corresponding eigenvector ξ_2 = [1, 1]^T. The general solution is then
x(t) = c_1 e^t [−1, 1]^T + c_2 e^{3t} [1, 1]^T ,
or in components
x_1(t) = −c_1 e^t + c_2 e^{3t}
x_2(t) = c_1 e^t + c_2 e^{3t} .
Turning to the initial conditions,
x_1(0) = −c_1 + c_2 = −1
x_2(0) = c_1 + c_2 = 2 .
We calculate c_1 = 3/2, c_2 = 1/2. Answer:
x_1(t) = −(3/2) e^t + (1/2) e^{3t}
x_2(t) = (3/2) e^t + (1/2) e^{3t} .
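The eigenvalue method translates directly into a few lines of code. A sketch assuming NumPy: the eigenvectors go into the columns of a matrix V, and the constants c are read off from the initial condition.

```python
import numpy as np

A  = np.array([[2.0, 1.0], [1.0, 2.0]])
x0 = np.array([-1.0, 2.0])

lam, V = np.linalg.eig(A)      # eigenvalues 1, 3; eigenvectors in the columns of V
c = np.linalg.solve(V, x0)     # constants c_i from x(0) = sum_i c_i xi_i

def x(t):
    return V @ (c * np.exp(lam*t))

print(x(1.0))   # matches (-3/2 e^t + 1/2 e^3t, 3/2 e^t + 1/2 e^3t) at t = 1
```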
Case 2 The eigenvalue λ_1 is double, λ_3 ≠ λ_1, but λ_1 has two linearly independent eigenvectors ξ_1 and ξ_2. The general solution is given again by the formula (1.5), with λ_2 replaced by λ_1. We see from (1.6) that we again get a linear system which has a unique solution, because its matrix has linearly independent columns. (Linearly independent eigenvectors are the key here!)
Example Solve
x' = [[2, 1, 1], [1, 2, 1], [1, 1, 2]] x,  x(0) = [1, 0, −4]^T .
We calculate that the matrix has a double eigenvalue λ_1 = 1, which has two linearly independent eigenvectors ξ_1 = [−1, 0, 1]^T and ξ_2 = [0, −1, 1]^T. The other eigenvalue is λ_3 = 4, with the corresponding eigenvector ξ_3 = [1, 1, 1]^T. The general solution is then
x(t) = c_1 e^t [−1, 0, 1]^T + c_2 e^t [0, −1, 1]^T + c_3 e^{4t} [1, 1, 1]^T ,
or in components
x_1(t) = −c_1 e^t + c_3 e^{4t}
x_2(t) = −c_2 e^t + c_3 e^{4t}
x_3(t) = c_1 e^t + c_2 e^t + c_3 e^{4t} .
Using the initial conditions, we calculate c_1 = −2, c_2 = −1, and c_3 = −1. Answer:
x_1(t) = 2 e^t − e^{4t}
x_2(t) = e^t − e^{4t}
x_3(t) = −3 e^t − e^{4t} .
Recall that if an n × n matrix A is symmetric (i.e., a_{ij} = a_{ji} for all i and j), then all of its eigenvalues are real. The eigenvalues of a symmetric matrix may be repeated, but there is always a full set of n linearly independent eigenvectors, so one can solve the initial value problem (1.3) for any system with a symmetric matrix.
Case 3 The eigenvalue λ_1 has multiplicity two (i.e., it is a double root of the characteristic equation), λ_3 ≠ λ_1, but λ_1 has only one linearly independent eigenvector ξ. We have one solution e^{λ_1 t} ξ. By analogy with second order equations, we try t e^{λ_1 t} ξ for the second solution. However, this vector function is a scalar multiple of the first solution, i.e., it is linearly dependent with it. We modify our guess:
(1.7)    x(t) = t e^{λ_1 t} ξ + e^{λ_1 t} η .
It turns out one can choose a vector η to obtain a second linearly independent solution. Plugging this into our system (1.3), and using that A ξ = λ_1 ξ,
e^{λ_1 t} ξ + λ_1 t e^{λ_1 t} ξ + λ_1 e^{λ_1 t} η = λ_1 t e^{λ_1 t} ξ + e^{λ_1 t} A η .
Cancelling a pair of terms, and dividing by e^{λ_1 t}, we simplify this to
(1.8)    (A − λ_1 I) η = ξ .
Even though the matrix A − λ_1 I is singular (its determinant is zero), it can be shown that the linear system (1.8) always has a solution η, called a generalized eigenvector. Using this η in (1.7) gives us the second linearly independent solution, corresponding to λ = λ_1.
Example Solve
x' = [[1, −1], [1, 3]] x .
This matrix has a double eigenvalue λ_1 = λ_2 = 2, and only one eigenvector ξ = [1, −1]^T. We have one solution x_1(t) = e^{2t} [1, −1]^T. The system (1.8) to determine the vector η = [η_1, η_2]^T takes the form
−η_1 − η_2 = 1
η_1 + η_2 = −1 .
We throw away the second equation, because it is a multiple of the first. The first equation has infinitely many solutions, but all we need is just one solution that is not a multiple of ξ. So we set η_2 = 0, which gives η_1 = −1. We have computed the second linearly independent solution
x_2(t) = t e^{2t} [1, −1]^T + e^{2t} [−1, 0]^T .
The general solution is then
x(t) = c_1 e^{2t} [1, −1]^T + c_2 ( t e^{2t} [1, −1]^T + e^{2t} [−1, 0]^T ) .
5.2 A Pair of Complex Conjugate Eigenvalues
5.2.1 Complex Valued Functions
Recall that one differentiates complex valued functions much the same way as the real ones. For example,
(d/dt) e^{it} = i e^{it} ,
where i = √(−1) is treated the same way as any other constant. Any complex valued function f(t) can be written in the form f(t) = u(t) + i v(t), where u(t) and v(t) are real valued functions. It follows from the definition of the derivative that f'(t) = u'(t) + i v'(t). For example, using Euler's formula,
(d/dt) e^{it} = (d/dt)(cos t + i sin t) = − sin t + i cos t = i (cos t + i sin t) = i e^{it} .
Any complex valued vector function x(t) can also be written as x(t) = u(t) + i v(t), where u(t) and v(t) are real valued vector functions. Again, we have x'(t) = u'(t) + i v'(t). If x(t) is a solution of our system (1.3), then
u'(t) + i v'(t) = A (u(t) + i v(t)) .
Separating the real and imaginary parts, we see that both u(t) and v(t) are real valued solutions of our system.
5.2.2 General Solution
So assume that the matrix A has a pair of complex conjugate eigenvalues λ + iμ and λ − iμ. They need to contribute two linearly independent solutions. The eigenvector corresponding to λ + iμ is complex valued, which we can write as ξ + iη, where ξ and η are real valued vectors. Then x(t) = e^{(λ+iμ)t} (ξ + iη) is a solution of our system. To get two real valued solutions, we take the real and imaginary parts of this solution. We have
x(t) = e^{λt} (cos μt + i sin μt)(ξ + iη) = e^{λt} (cos μt ξ − sin μt η) + i e^{λt} (sin μt ξ + cos μt η) ,
so that
u(t) = e^{λt} (cos μt ξ − sin μt η)
v(t) = e^{λt} (sin μt ξ + cos μt η)
are the two real valued solutions. In case of a 2 × 2 matrix (when there are no other eigenvalues), the general solution is x(t) = c_1 u(t) + c_2 v(t).
Example Solve
x' = [[1, −2], [2, 1]] x,  x(0) = [2, 1]^T .
We calculate the eigenvalues λ_1 = 1 + 2i and λ_2 = 1 − 2i. The eigenvector corresponding to λ_1 is [i, 1]^T, so that we have a complex valued solution e^{(1+2i)t} [i, 1]^T. We rewrite it as
e^t (cos 2t + i sin 2t) [i, 1]^T = e^t [−sin 2t, cos 2t]^T + i e^t [cos 2t, sin 2t]^T .
The real and imaginary parts give us two linearly independent solutions, so that the general solution is
x(t) = c_1 e^t [−sin 2t, cos 2t]^T + c_2 e^t [cos 2t, sin 2t]^T .
In components,
x_1(t) = −c_1 e^t sin 2t + c_2 e^t cos 2t
x_2(t) = c_1 e^t cos 2t + c_2 e^t sin 2t .
From the initial conditions,
x_1(0) = c_2 = 2
x_2(0) = c_1 = 1 ,
so that c_1 = 1, and c_2 = 2. Answer:
x_1(t) = −e^t sin 2t + 2 e^t cos 2t
x_2(t) = e^t cos 2t + 2 e^t sin 2t .
We see from the form of the solutions that if all eigenvalues of the matrix A are either negative or have negative real parts, then lim_{t→∞} x(t) = 0 (i.e., all components of the vector x(t) tend to zero).
5.3 Exponential of a matrix
In matrix notation a linear system, with a 2 × 2 or 3 × 3 matrix A,
(3.1)    x' = A x,  x(0) = x_0 ,
looks like a single equation. In case A and x_0 are constants, the solution of (3.1) is
(3.2)    x(t) = e^{At} x_0 .
In order to write the solution of our system in the form (3.2), we define the notion of the exponential of a matrix. First, we define powers of a matrix: A^2 = A · A, A^3 = A^2 · A, and so on. Starting with the Maclaurin series
e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + · · · = Σ_{n=0}^∞ x^n/n! ,
we define (I is the identity matrix)
e^A = I + A + A^2/2! + A^3/3! + A^4/4! + · · · = Σ_{n=0}^∞ A^n/n! .
So e^A is the sum of infinitely many matrices, i.e., each entry of e^A is an infinite series. It can be shown that all of these series are convergent for any matrix A, so that we can compute e^A for any square matrix A.
Example Let A = [[a, 0], [0, b]], where a and b are constants. Then A^n = [[a^n, 0], [0, b^n]], and we have
e^A = [[1 + a + a^2/2! + a^3/3! + · · ·, 0], [0, 1 + b + b^2/2! + b^3/3! + · · ·]] = [[e^a, 0], [0, e^b]] .
Example Let A = [[0, −1], [1, 0]]. Then for any constant t,
e^{At} = [[1 − t^2/2! + t^4/4! − · · ·, −t + t^3/3! − t^5/5! + · · ·], [t − t^3/3! + t^5/5! − · · ·, 1 − t^2/2! + t^4/4! − · · ·]] = [[cos t, −sin t], [sin t, cos t]] .
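Numerical libraries compute the matrix exponential directly. A sketch, assuming SciPy is available, confirming that e^{At} is the rotation matrix just computed:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0], [1.0, 0.0]])
t = 0.7
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
print(np.allclose(expm(A*t), R))   # True
```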
The last example lets us write the solution of the system
(3.3)    x_1' = −x_2,  x_1(0) = α
         x_2' = x_1,  x_2(0) = β
in the form
[x_1(t), x_2(t)]^T = e^{At} [α, β]^T = [[cos t, −sin t], [sin t, cos t]] [α, β]^T ,
which is a rotation of the initial vector [α, β]^T by an angle t, counterclockwise. We see that the integral curves of our system are circles in the (x_1, x_2) plane. This makes sense, because the velocity vector [x_1', x_2']^T = [−x_2, x_1]^T is always perpendicular to the position vector [x_1, x_2]^T.
Observe that the system (3.3) is equivalent to the equation
x_1'' + x_1 = 0 ,
i.e., a harmonic oscillator.

a11 a12 a13


Recall that we can write a square matrix A =  a21 a22 a23  as a row
a31 a32 a33




a11
a21




of column vectors A = [C1 C2 C3 ], where C1 =  a12 , C2 =  a22 , and
a13
a23




a31
x1




C3 =  a32 . Then the product of the matrix A and of vector x =  x2 
a33
x3
is defined as the vector
Ax = C1 x1 + C2 x2 + C3 x3 .
If B is another 3 × 3 matrix, whose columns are the vectors K1 ,K2 and K3 ,
we define the product of the matrices
AB = A [K1 K2 K3 ] = [AK1 AK2 AK3 ] .
(This definition is equivalent to the traditional one, which is giving the ij
element of the product (AB)ij =
3
X
k=1
aik bkj .) In general, AB 6= BA.


λ1 0 0


Let Λ be a diagonal matrix Λ =  0 λ2 0 . Compute the product
0 0 λ3
 





λ1
0
0
 





A Λ = A  0  A  λ2  A  0  = [λ1 C1 λ2 C2 λ3 C3 ] .
0
0
λ3
I.e., multiplying a matrix A from the right by a diagonal matrix Λ results
in columns of A being multiplied by the corresponding entries of Λ.
Assume now that the matrix A has 3 linearly independent eigenvectors
E1 ,E2 , and E3 , i.e., AE1 = λ1 E1 , AE2 = λ2 E2 , and AE3 = λ3 E3 , and
the eigenvalues λ1 , λ2 and λ3 are not necessarily different. Form a matrix
S = [E1 E2 E3 ]. Observe that S has an inverse matrix S −1 . We have
A S = [A E1 A E2 A E3 ] = [λ1 E1 λ2 E2 λ3 E3 ] = S Λ .
Multiplying both sides from the left by S −1 , we have
(3.4)
S −1 A S = Λ .
Similarly,
(3.5)    A = S Λ S^{−1} .
One refers to the formulas (3.4) and (3.5) as the diagonalization of the matrix A. We see that a matrix with a complete set of 3 linearly independent eigenvectors can be diagonalized.
If A is diagonalizable, i.e., the formula (3.5) holds, then A^2 = A A = S Λ S^{−1} S Λ S^{−1} = S Λ^2 S^{−1}, and in general A^n = S Λ^n S^{−1}. We then have (for any real scalar t)
e^{At} = Σ_{n=0}^∞ A^n t^n/n! = Σ_{n=0}^∞ S Λ^n t^n S^{−1}/n! = S ( Σ_{n=0}^∞ Λ^n t^n/n! ) S^{−1} = S e^{Λt} S^{−1} .
The following important theorem holds for any n × n matrix.
Theorem 1 Assume that all eigenvalues of the matrix A are either negative or have negative real parts. Then all solutions of the system
x' = A x
tend to zero as t → ∞.
We can prove this theorem in the case when A is diagonalizable. Indeed, then we have
x(t) = e^{At} x(0) = S e^{Λt} S^{−1} x(0) .
The matrix e^{Λt} is a diagonal one, whose entries e^{λ_1 t}, e^{λ_2 t}, . . . , e^{λ_n t} tend to zero as t → ∞, and so x(t) → 0 as t → ∞.
5.3.1 Problems
I. Solve the following systems of differential equations:
1. x' = [[3, 1], [1, 3]] x, x(0) = [1, −3]^T. Answer:
x_1(t) = 2 e^{2t} − e^{4t}
x_2(t) = −2 e^{2t} − e^{4t} .
2. x' = [[0, 1], [1, 0]] x, x(0) = [2, 1]^T. Check your answer by reducing this system to a second order equation for x_1(t). Answer:
x_1(t) = (1/2) e^{−t} + (3/2) e^t
x_2(t) = −(1/2) e^{−t} + (3/2) e^t .


−2 2 3
0




3. x0 =  −2 3 2  x, x(0) =  1  . Answer:
−4 2 5
2
x1 (t) = −3et − 2e2t + 5e3t
x2 (t) = −4e2t + 5e3t
x3 (t) = −3et + 5e3t .




4 0 1
−2




4. x0 =  −2 1 0  x, x(0) =  5  . Answer:
−2 0 1
0
x1 (t) = 2e2t − 4e3t
x2 (t) = 5et − 4e2t + 4e3t
x3 (t) = −4e2t + 4e3t .




−2
2 1 1




5. x0 =  1 2 1  x, x(0) =  3  . Answer:
2
1 1 2
x1 (t) = −3et + e4t
x2 (t) = 2et + e4t
x3 (t) = et + e4t .
"
#
"
#
0 −2
−2
6. x =
x, x(0) =
. Check your answer by reducing this
2
0
1
system to a second order equation for x1 (t). Answer:
0
x1 (t) = −2 cos 2t − sin 2t
x2 (t) = cos 2t − 2 sin 2t .
7. x' = [[3, −2], [2, 3]] x, x(0) = [0, 1]^T. Answer:
x_1(t) = −e^{3t} sin 2t
x_2(t) = e^{3t} cos 2t .
8. x' = [[1, 2, 2], [−1, 1, 0], [0, −2, −1]] x, x(0) = [−1, 1, 2]^T. Answer:
x_1(t) = −cos t + 5 sin t
x_2(t) = −e^t + 2 cos t + 3 sin t
x_3(t) = e^t + cos t − 5 sin t .
II.
1. Show that all solutions of the system x' = [[−a, b], [−b, −a]] x, with positive constants a and b, satisfy
lim_{t→∞} x_1(t) = 0, and lim_{t→∞} x_2(t) = 0 .
2. Write the equation (here b and c are constants, y = y(t))
y'' + b y' + c y = 0
as a system of two first order equations, by letting x_1(t) = y(t), x_2(t) = y'(t). Compute the eigenvalues of the matrix of this system. Show that all solutions tend to zero as t → ∞, provided that b and c are positive constants.
3. Consider the system
x_1' = a x_1 + b x_2
x_2' = c x_1 + d x_2
with given constants a, b, c and d. Assume that a + d < 0 and ad − bc > 0. Show that all solutions tend to zero as t → ∞ (i.e., x_1(t) → 0 and x_2(t) → 0, as t → ∞).
Hint: Show that the eigenvalues of the matrix of this system are either negative or have negative real parts.
III.
1. Let A = [[0, 1], [0, 0]]. Show that
e^{At} = [[1, t], [0, 1]] .

0 1 0


2. Let A =  0 0 1 . Show that
0 0 0
eAt

1 t 21 t2


= 0 1 t .
0 0 1



0 −1 0


3. Let A =  1
0 0 . Show that
0
0 2
eAt


cos t − sin t 0


=  sin t
cos t 0  .
0
0 e2t
Chapter 6
Non-Linear Systems
6.1 Predator-Prey Interaction
In 1925, Vito Volterra’s future son-in-law, biologist Umberto D’Ancona, told
him of the following puzzle. During World War I, when ocean fishing almost
ceased, the ratio of predators (like sharks) to prey (like tuna) had increased.
Why did sharks benefit more from the decreased fishing? (While the object
of fishing is tuna, sharks are also caught in the nets.)
The Lotka-Volterra Equations
Let x(t) and y(t) give respectively the numbers of prey (tuna) and predators
(sharks) as functions of time t. Let us assume that in the absence of sharks
tuna will obey the Malthusian model
x'(t) = a x(t) ,

with some growth rate a > 0. (I.e., it would grow exponentially: x(t) = x(0)e^{at}.) In the absence of tuna the number of sharks will decrease exponentially,

y'(t) = −c y(t) ,

with some c > 0, because their other prey is less nourishing. Clearly, the
presence of sharks will decrease the rate of growth of tuna, while tuna is
good for sharks. So the model is
(1.1)    x'(t) = a x(t) − b x(t) y(t)
         y'(t) = −c y(t) + d x(t) y(t) ,
with two more given positive constants b and d. The x(t)y(t) term is proportional to the number of encounters between sharks and tuna. These
encounters decrease the growth rate of tuna, and increase the growth rate
of sharks. Notice that both equations are nonlinear, and we assume that
x(t) > 0 and y(t) > 0. This is the famous Lotka-Volterra model. Alfred
J. Lotka was an American mathematician, who developed similar ideas at
about the same time. A fascinating story of Vito Volterra’s life and work,
and of life in Italy in the first half of the 20th century, is told in the very
nice book by Judith R. Goodstein [4].
Analysis of the Model
Remember energy being constant for the vibrating spring? We have something similar here. It turns out that
(1.2)    a ln y(t) − b y(t) + c ln x(t) − d x(t) = C = constant.
To justify that, let us introduce the function F (x, y) = a ln y−by+c ln x−dx.
We wish to show that F (x(t), y(t)) = constant. Using the chain rule, and
expressing the derivatives from our equations (1.1), we have
(d/dt) F(x(t), y(t)) = Fx x' + Fy y' = c x'(t)/x(t) − d x'(t) + a y'(t)/y(t) − b y'(t)
= c(a − by(t)) − d(ax(t) − bx(t)y(t)) + a(−c + dx(t)) − b(−cy(t) + dx(t)y(t)) = 0 ,
proving that F (x(t), y(t)) does not change with time t.
We assume that the initial numbers of both sharks and tuna are known:
(1.3)    x(0) = x0 ,   y(0) = y0 ,
i.e., x0 and y0 are given numbers. The Lotka-Volterra system, together with the initial conditions (1.3), determines both populations (x(t), y(t)) at all times. (The existence and uniqueness theorem holds for systems too.) Letting t = 0 in
(1.2), we calculate the value of C
(1.4)    C0 = a ln y0 − b y0 + c ln x0 − d x0 .
In the xy-plane, the solution (x(t), y(t)) gives a parametric curve, with time
t being the parameter. The same curve is described by the implicit relation
(1.5)    a ln y − b y + c ln x − d x = C0 .
[Figure 6.1: The integral curves of the Lotka-Volterra system. Horizontal axis: Prey; vertical axis: Predator.]
This curve is just a level curve of the function F (x, y) = a ln y−by+c ln x−dx,
introduced earlier. How does the graph of z = F (x, y) look? Like a mountain
with a single peak, because F (x, y) is a sum of a function of y, a ln y − by,
and of a function of x, c ln x − dx, and both of these functions are concave
(down). It is clear that all level lines of F (x, y) are closed curves. Following
the closed curve (1.5), one can see how dramatically the relative fortunes of
sharks and tuna change, just as a result of their interaction. We present a
picture of three integral curves, computed by Mathematica in the case when
a = 0.7, b = 0.5, c = 0.3 and d = 0.2.
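A minimal sketch of how such curves can be checked numerically, assuming Python with NumPy and SciPy (the text's own figure was computed by Mathematica):

    import numpy as np
    from scipy.integrate import solve_ivp

    a, b, c, d = 0.7, 0.5, 0.3, 0.2          # the parameter values used above

    def lotka_volterra(t, z):
        x, y = z
        return [a*x - b*x*y, -c*y + d*x*y]

    sol = solve_ivp(lotka_volterra, (0, 60), [1.0, 1.0],
                    rtol=1e-10, dense_output=True)
    ts = np.linspace(0, 60, 4000)
    xs, ys = sol.sol(ts)

    # F(x, y) from (1.2) should be constant along the orbit
    F = a*np.log(ys) - b*ys + c*np.log(xs) - d*xs
    print(F.max() - F.min())                 # expected: nearly zero

Plotting xs against ys traces out one of the closed curves of Figure 6.1.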
Properties of the Solutions
All solutions are closed curves, and there is a dot (rest point) in the middle.
When x0 = c/d and y0 = a/b, i.e., when we start at the point (c/d, a/b), we
calculate from the Lotka-Volterra equations that x'(0) = 0 and y'(0) = 0.
The solution is then x(t) = c/d and y(t) = a/b for all times. In the xy-plane
this gives us the point (c/d, a/b), called the rest point. All other solutions
(x(t), y(t)) are periodic, because they represent closed curves. I.e., there is
a number T , a period, so that x(t + T ) = x(t) and y(t + T ) = y(t). This
period changes from curve to curve, and it is larger the further the solution
curve is from the rest point. (The monotonicity of the period was proved
only in the mid-1980s by Franz Rothe [6] and Jorg Waldvogel [10].)
Divide the first of the Lotka-Volterra equations by x(t), and then integrate over the period T:

x'(t)/x(t) = a − b y(t) ;

∫_0^T x'(t)/x(t) dt = aT − b ∫_0^T y(t) dt .

But ∫_0^T x'(t)/x(t) dt = ln x(t) |_0^T = 0, because x(T) = x(0) by periodicity. It follows that

(1/T) ∫_0^T y(t) dt = a/b .

Similarly, we derive

(1/T) ∫_0^T x(t) dt = c/d .
We have a remarkable fact: the averages of both x(t) and y(t) are the same
for all solutions. Moreover, these averages are equal to the coordinates of
the rest point.
The Effect of Fishing
Extensive fishing will decrease the growth rate of both tuna and sharks. The
new model is
x'(t) = (a − α) x(t) − b x(t) y(t)
y'(t) = −(c + β) y(t) + d x(t) y(t) ,
where α and β are two more given positive constants, related to the intensity
of fishing. (There are other ways to model fishing.) As before, we compute
the average numbers of both tuna and sharks:

(1/T) ∫_0^T x(t) dt = (c + β)/d ,   (1/T) ∫_0^T y(t) dt = (a − α)/b .
We see an increase for the average number of tuna, and a decrease for the
sharks, as a result of a moderate amount of fishing (assuming that α < a).
Conversely, decreased fishing increases the numbers of sharks, giving us
an explanation of U. D’Ancona’s data. This result is known as Volterra’s
principle. It applies also to insecticide treatments. If such a treatment
destroys both pests and their predators, it may result in an increase of the
number of pests!
Biologists have questioned the validity of both the Lotka-Volterra model,
and of the way we have accounted for fishing (perhaps, they cannot accept
the idea of two simple differential equations ruling the oceans). In fact,
it has recently become more common to model fishing using the system
x'(t) = a x(t) − b x(t) y(t) − h1(t)
y'(t) = −c y(t) + d x(t) y(t) − h2(t) ,
with some given positive functions h1 (t) and h2 (t).
I imagine both sharks and tuna do not think much of the Lotka-Volterra equations. But we do, because this system shows that dramatic changes may occur just from the interaction of the species, and not as a result of any external events. You can read more on the Lotka-Volterra model in the nice book by M.W. Braun [2]. That book has other applications of differential equations, e.g., to the detection of art forgeries, to epidemiology, and to theories of war.
6.2 Competing species
When studying the logistic population model (here x = x(t) > 0 is the
number of rabbits)
x' = x(a − x) ,
we were able to analyze the behavior of solutions, even without solving this
equation. Indeed, the quadratic polynomial x(a−x) is positive for 0 < x < a,
and then x0 (t) > 0 and so x(t) is an increasing function. The same quadratic
is negative for x > a, and so x(t) is decreasing in this range. If x(0) = a,
then x(t) = a for all t by the uniqueness part of the existence and uniqueness
theorem. So that the solution curves look as in the figure below. Here a is
the carrying capacity, which is the maximum sustainable number of rabbits.
[Figure: Solution curves for the logistic model, rising toward the carrying capacity a (horizontal axis t, vertical axis x).]
If x(0) is small, the solution grows exponentially at first, and then the
rate of growth gets smaller and smaller, which is a typical logistic curve.
The point x = a is the rest point, and we can describe the situation by the
following one-dimensional picture
[Phase-line diagram: the logistic equation has a stable fixed point x = a, and an unstable one x = 0.]
Suppose that we also have a population of deer, whose number y(t) at the time t is modeled by a logistic equation

y' = y(d − y) ,
with the carrying capacity d > 0. Suppose that rabbits and deer live on
the same tropical island, and that they compete (for food, water and hiding
places). The Lotka-Volterra model of their interaction is
(2.1)    x' = x (a − x − by)
         y' = y (d − cx − y) .
The first equation tells us that the presence of deer effectively decreases the
carrying capacity of rabbits, and the positive constant b quantifies this influence. The positive constant c describes how bad the presence of rabbits is
for deer. What predictions will follow from the model (2.1)? And how do we
analyze this model, since solving the nonlinear system (2.1) is not possible?
We shall analyze the system (2.1) together with the initial conditions
x(0) = α, y(0) = β .
The given numbers α > 0 and β > 0 determine the initial point (α, β).
Similarly to the case of a single logistic equation, we look at the points
where the right hand sides of (2.1) are zero (but x > 0 and y > 0). I.e.,
(2.2)    a − x − by = 0
         d − cx − y = 0 .
These equations give us two straight lines, the null-clines of the system
(2.1). Both of these lines have negative slopes in the (x, y) plane. In the
first quadrant (the region of interest, where x > 0 and y > 0) these lines
may intersect either once, or not at all, depending on the values of a,b,c and
d, and that will determine the long-run prediction.
We shall denote by I the null-cline a − x − by = 0. Above this straight
line we have a − x − by < 0, which implies that x'(t) < 0 and the motion
is to the left in the xy plane. Below the null-cline I, the motion is to the
right. We denote by II the null-cline d − cx − y = 0. Above II, y'(t) < 0 and
the motion is down, while below II the point (x(t), y(t)) moves up. (E.g.,
if a point lies above both I and II, the motion is to the left and down, in
the “southwest” direction. If a point is above I but below II, the motion is
northwest, etc.) The system (2.1) has the trivial solution (0, 0) (i.e., x = 0
and y = 0), and two semi-trivial solutions (a, 0) and (0, d). The solution
(a, 0) is when the deer go extinct, while (0, d) means there are no rabbits.
Observe that the semi-trivial solution (a, 0) is the x-intercept of the null-cline I, while the second semi-trivial solution (0, d) is the y-intercept of the null-cline II. The long-run behavior of solutions will depend on whether the
null clines intersect in the first quadrant or not. We consider the following
cases.
[Figure: Non-intersecting null-clines I and II, with the semi-trivial solutions circled; the regions 1, 2, 3 are marked.]
Case 1: The null clines do not intersect (in the first quadrant). Assume
first that I is on top of II, i.e., a/b > d and a > d/c. The null clines divide the
first quadrant into three regions. In the region 1 the motion is southwest,
in the region 2 southeast, and in 3 northeast. On the cline I the motion
is due south, and on II due north. Regardless of the initial point (α, β),
all trajectories tend to the semi-trivial solution (a, 0), as t → ∞. (The
trajectories starting in the region 3 will enter the region 2, and then tend to
(a, 0). The trajectories starting in the region 2 will stay in that region and
tend to (a, 0). The trajectories starting in the region 1 will tend to (a, 0) by
either staying in this region, or through the region 2.) This case corresponds
to the extinction of deer, and the maximum sustainable number of rabbits.
In case II is on top of I, i.e., a/b < d and a < d/c, a similar analysis shows
that all solutions tend to the semi-trivial solution (0, d), as t → ∞. So that
the species whose null-cline is on top, wins the competition, and drives the
other one to extinction.
[Figure: Intersecting null-clines, with the semi-trivial solutions on the inside (circled); the regions 1, 2, 3, 4 are marked.]
Case 2: The null clines intersect (in the first quadrant). Their point of intersection ( (a − bd)/(1 − bc) , (d − ac)/(1 − bc) ) is a solution of our system (2.1), a rest point. Its stability depends on which of the following sub-cases holds.
Sub-case a: The semi-trivial solutions are on the inside, i.e., a < d/c
and d < a/b. The null clines divide the first quadrant into four regions. In
all four regions the motion is toward the rest point. Solutions starting in the
region 2 will always stay in this region. Indeed, on the boundary between
the regions 1 and 2 the motion is due south, and on the border between
the regions 2 and 3, the trajectories travel due east. So that solutions starting in the region 2 stay in this region, and they tend to the rest point ( (a − bd)/(1 − bc) , (d − ac)/(1 − bc) ). Similarly, solutions starting in the region 4 never leave this region, and they tend to the same rest point. If a solution starts in the region 1 it may stay in this region, and tend to the rest point, or it may enter either the region 2 or 4, and then tend to the rest point. Similarly, solutions which begin in 3 will tend to the rest point either by staying in this region, or through the regions 2 or 4 (depending on the initial conditions). This is the case of co-existence. For any initial point (α, β), we have

lim_{t→∞} x(t) = (a − bd)/(1 − bc) ,   and   lim_{t→∞} y(t) = (d − ac)/(1 − bc) .
[Figure: Intersecting null-clines, with the semi-trivial solutions on the outside (circled); the regions 1, 2, 3, 4 are marked.]
Sub-case b: The semi-trivial solutions are on the outside, i.e., a > d/c and d > a/b. In the regions 2 and 4 the motion is now away from the rest
point. On the lower boundary of the region 2 the motion is due north, and
on the upper border the motion is due west. So that trajectories starting in
the region 2, stay in this region, and tend to the semitrivial solution (0, d).
Similarly, solutions starting in the region 4, stay in this region, and tend
to the semitrivial solution (a, 0). A typical solution starting in the region 1
will either enter the region 2 and tend to (0, d), or it will enter the region
4 and tend to (a, 0) (this will depend on the initial conditions). The same
conclusion holds for the region 3. We have competitive exclusion in this
sub-case.
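A sketch of a numerical experiment illustrating the two sub-cases, assuming Python with NumPy and SciPy (the two parameter sets are illustrative choices of mine, not taken from the text):

    import numpy as np
    from scipy.integrate import solve_ivp

    def competing(a, b, c, d, x0, y0, T=200.0):
        # the competing-species system (2.1)
        f = lambda t, z: [z[0]*(a - z[0] - b*z[1]), z[1]*(d - c*z[0] - z[1])]
        sol = solve_ivp(f, (0, T), [x0, y0], rtol=1e-9)
        return sol.y[0, -1], sol.y[1, -1]    # the state at time T

    # Sub-case a (a < d/c and d < a/b): co-existence
    print(competing(a=3, b=0.5, c=0.5, d=3, x0=0.1, y0=4.0))
    # expected: approximately (2, 2), the rest point

    # Sub-case b (a > d/c and d > a/b): competitive exclusion
    print(competing(a=3, b=2.0, c=2.0, d=3, x0=1.0, y0=1.1))
    # expected: approximately (0, 3), the second species wins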
What is the reason behind the drastic difference in the above sub-cases?
The second equation in (2.1) tells us that the effective carrying capacity of
the second species is d − cx < d. For the first species, the effective carrying
capacity is a − by, and in the sub-case a (when bd < a) we have
a − by > a − bd > 0 ,
so that the first species will survive assuming the number of the second
species is at its maximum sustainable level. Similarly one shows that the
second species will survive assuming the number of the first species is at its
maximum sustainable level. The species do not affect each other too badly,
so that they can co-exist.
6.3 An application to epidemiology
Suppose a small group of people comes down with an infectious disease. Will this cause an epidemic? What are the parameters that public health officials should try to control? We shall analyze a way to model the spread of an infectious disease.
Let I(t) be the number of infected people at time t. Let S(t) be the
number of susceptible people, the ones at risk of catching the disease. The
following model was proposed in 1927 by W.O. Kermack and A.G. McKendrick
(3.1)    dS/dt = −rSI
         dI/dt = rSI − γI ,
with some positive constants r and γ. The first equation reflects the fact
that the number of susceptible people decreases, as some people catch the
infection (and so they join the group of infected people). The rate of decrease
is proportional to the number of “encounters” between the infected and
susceptible people, which in turn is proportional to the product SI. The
first term in the second equation tells us that the number of infected people
would increase at exactly the same rate, if it was not for the second term.
The second term, −γI, says that the infection rate is not that bad, because
some infected people are removed from the population (people who died from
the disease, people who have recovered and developed immunity, and sick
people who are isolated from others). The coefficient γ is called the removal
rate. To the equations (3.1) we add initial conditions
(3.2)    S(0) = S0 ,   I(0) = I0 ,
with given numbers S0 - the initial number of susceptible people, and I0 - the
initial number of infected people. Solving the equations (3.1) and (3.2) will
give us (S(t), I(t)), which is a parametric curve in (S, I) plane. Alternatively,
this curve can be described, if we express I as a function of S.
We express

dI/dS = (dI/dt)/(dS/dt) = (rSI − γI)/(−rSI) = −1 + γ/(rS) .

We integrate this equation (we denote γ/r = ρ) to obtain

(3.3)    I = I0 + S0 − ρ ln S0 − S + ρ ln S .
For different values of (S0 , I0 ), we get a different integral curve from (3.3), but all these curves attain their maximum value at S = ρ, see the picture. The motion on these curves is from right to left, because we see from the first equation in (3.1) that S(t) is a decreasing function of time t. We see that if the initial point (S0 , I0 ) satisfies S0 > ρ, then the function I(t) grows at first, and then declines. That is the case when an epidemic occurs. If, on the other hand, the initial number of susceptible people is below ρ, then the number of infected people I(t) declines to zero, i.e., the initial outbreak has been successfully contained. The number ρ is called the threshold value.
[Figure: The solution curves of the system (3.1) in the (S, I) plane; each curve attains its maximum at S = ρ.]
To avoid an epidemic, one tries to increase the threshold value ρ (ρ = γ/r) by increasing the removal rate γ, which is achieved by isolating sick people. Notice also the following harsh conclusion: if a disease kills people quickly, then the removal rate is high, and such a disease may be easier to contain.
In some cases it is easy to estimate the number of people who will get sick during an epidemic. Assume that I0 is so small that we can set it to zero, while S0 is a little larger than the threshold value ρ, i.e., S0 = ρ + ν, where ν > 0 is a small value. As time t → ∞, I(t) → 0, while S(t) → S∞ , its final number. We then conclude from (3.3) that

(3.4)    S∞ − ρ ln S∞ = S0 − ρ ln S0 .

The function g(S) = S − ρ ln S attains its global minimum at S = ρ (this is where I in (3.3) is maximal). Such a function is symmetric with respect to ρ for S close to ρ, as can be seen from the three-term Taylor expansion g(S) ≈ ρ − ρ ln ρ + (S − ρ)^2/(2ρ). It follows from (3.4) that the points S0 and S∞ must be equidistant from ρ, i.e., S∞ = ρ − ν. The total number of people who will get sick during an epidemic is then equal to S0 − S∞ = 2ν.
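A sketch of a numerical check of this estimate, assuming Python with SciPy (the rates are illustrative):

    from scipy.integrate import solve_ivp

    r, gamma = 0.002, 1.0                # threshold rho = gamma/r = 500
    rho = gamma/r
    nu = 50.0                            # start just above the threshold

    def kermack_mckendrick(t, z):
        S, I = z
        return [-r*S*I, r*S*I - gamma*I]

    sol = solve_ivp(kermack_mckendrick, (0, 1000), [rho + nu, 0.01], rtol=1e-9)
    print(sol.y[0, -1])                  # final S: close to rho - nu = 450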
6.4 Lyapunov stability
We consider a nonlinear system of equations (the unknown functions are
x(t) and y(t))
(4.1)    x' = f(x, y) ,   x(0) = α
         y' = g(x, y) ,   y(0) = β ,
with the given functions f (x, y) and g(x, y), and the given numbers α and
β. A point (x0 , y0 ) is called a rest point if
f (x0 , y0 ) = g(x0 , y0 ) = 0 .
If we solve the system (4.1) with the initial data x(0) = x0 and y(0) = y0 ,
then x(t) = x0 and y(t) = y0 , i.e., our system is at rest for all time. Now
suppose the initial conditions are perturbed from (x0 , y0 ). Will our system
come back to rest at (x0 , y0 ) ?
A differentiable function L(x, y) is called a Lyapunov function at (x0 , y0 )
if
L(x0 , y0 ) = 0
L(x, y) > 0, for (x, y) in some neighborhood of (x0 , y0 ) .
How do the level lines
(4.2)    L(x, y) = c
look? If c = 0, then (4.2) is satisfied only at (x0 , y0 ). If c > 0, the level lines
are closed curves around (x0 , y0 ), and the smaller c is, the closer the level
line is to (x0 , y0 ).
Along a solution (x(t), y(t)) of our system (4.1), the Lyapunov function
is a function of t: L(x(t), y(t)). Now assume that for all solutions (x(t), y(t))
starting near (x0 , y0 ), we have
(4.3)    (d/dt) L(x(t), y(t)) < 0   for all t > 0 .
Then (x(t), y(t)) → (x0 , y0 ), as t → ∞, i.e., the rest point (x0 , y0 ) is asymptotically stable. (As t increases, the solution point (x(t), y(t)) moves to the
level lines that are closer and closer to (x0 , y0 ).) Using the chain rule, we
rewrite (4.3) as
(4.4)    (d/dt) L(x(t), y(t)) = Lx x' + Ly y' = Lx f(x, y) + Ly g(x, y) < 0 .
I.e., the rest point (x0 , y0 ) is asymptotically stable, provided that
Lx (x, y)f (x, y) + Ly (x, y)g(x, y) < 0,
for all (x, y) near (x0 , y0 ) .
One usually assumes that (x0 , y0 ) = (0, 0), which can always be accomplished by declaring the point (x0 , y0 ) to be the origin. Then L(x, y) = ax^2 + by^2 , with positive a and b, is often a good choice of a Lyapunov function.
Example The system

x' = −2x + x y^2
y' = −y − 3x^2 y
has a rest point (0, 0). With L(x, y) = a x^2 + b y^2 , (4.4) takes the form

(d/dt) L(x(t), y(t)) = 2axx' + 2byy' = 2ax(−2x + xy^2) + 2by(−y − 3x^2 y) .
If we choose a = 3 and b = 1, this simplifies to

(d/dt) L(x(t), y(t)) = −12x^2 − 2y^2 < 0 .
It follows that all solutions of this system tend to (0, 0) as t → ∞. One
says that the domain of attraction of the rest point (0, 0) is the entire xy
plane. The rest point (0, 0) is asymptotically stable.
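A sketch of how such a computation can be verified symbolically, assuming Python with SymPy:

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    f = -2*x + x*y**2              # right-hand sides of the system
    g = -y - 3*x**2*y
    L = 3*x**2 + y**2              # the Lyapunov function chosen above

    # dL/dt along solutions, by the chain rule (4.4)
    dLdt = sp.expand(sp.diff(L, x)*f + sp.diff(L, y)*g)
    print(dLdt)                    # expected: -12*x**2 - 2*y**2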
Example The system

x' = −2x + y^4
y' = −y + x^5
has a rest point (0, 0). If we drop the nonlinear terms, then the equations
x' = −2x and y' = −y have all solutions tending to zero as t → ∞. For
small |x| and |y|, the nonlinear terms are negligible. Therefore, we expect
asymptotic stability. With L(x, y) = x^2 + y^2 , we compute

(d/dt) L(x(t), y(t)) = 2xx' + 2yy' = 2x(−2x + y^4) + 2y(−y + x^5)
= −4x^2 − 2y^2 + 2xy^4 + 2x^5 y < −2(x^2 + y^2) + 2xy^4 + 2x^5 y < 0 ,
provided that (x, y) belongs to a disc Bδ : x^2 + y^2 < δ^2 , with a sufficiently
small δ, as can be seen by using the polar coordinates. Hence any solution,
with the initial point (x(0), y(0)) in Bδ , tends to zero as t → ∞. The rest
point (0, 0) is asymptotically stable. Its domain of attraction is Bδ , with δ
sufficiently small.
Example The system

x' = −y + y (x^2 + y^2)
y' = x − x (x^2 + y^2)

has a rest point (0, 0). Multiply the first equation by x, the second one by y, and add the results. Obtain:

x x' + y y' = 0 ,
(d/dt) (x^2 + y^2) = 0 ,
x^2 + y^2 = c^2 .
The solution curves (x(t), y(t)) are circles around the origin. (If x(0) = a and y(0) = b, then x^2(t) + y^2(t) = a^2 + b^2 .) If a solution starts near (0, 0), it
stays near (0, 0), but it does not tend to (0, 0). We say that the rest point
(0, 0) is stable, although it is not asymptotically stable.
Example The system

x' = −y + x (x^2 + y^2)
y' = x + y (x^2 + y^2)

has a rest point (0, 0). Again, multiply the first equation by x, the second one by y, and add the results. Obtain:

x x' + y y' = (x^2 + y^2)^2 .

Denoting ρ = x^2 + y^2 , we rewrite this as

(1/2) dρ/dt = ρ^2 .
We have dρ/dt > 0, i.e., the function ρ = ρ(t) is increasing. Solutions spiral
out of the rest point (0, 0). We say that the rest point (0, 0) is unstable.
6.4.1 Problems
I.
1. (i) Show that the rest point (0, 0) is asymptotically stable for the system

x1' = −2x1 + x2 + x1 x2
x2' = x1 − 2x2 + x1^3 .

(ii) Find the general solution of the corresponding linear system

x1' = −2x1 + x2
x2' = x1 − 2x2 ,
and discuss its behavior as t → ∞.
2. The equation (for y = y(t))
y' = y^2 (1 − y)
has rest points y = 0 and y = 1. Discuss their stability.
Answer: y = 1 is asymptotically stable, y = 0 is not.
3. (i) Convert the nonlinear equation

y'' + f(y) y' + y = 0

into a system, by letting y = x1 , and y' = x2 .

Answer:
x1' = x2
x2' = −x1 − f(x1) x2 .

(ii) Show that the rest point (0, 0) of this system is asymptotically stable, provided that f(x1) > 0 for all x1 . Hint: L = x1^2 + x2^2 . What does this imply
II.
1. Show that for any solution of the system

x' = x (5 − x − 2y)
y' = y (2 − 3x − y)

we have lim_{t→∞} (x(t), y(t)) = (5, 0).
2. Show that for any solution of the system

x' = x (2 − x − (1/2) y)
y' = y (3 − x − y)

we have lim_{t→∞} (x(t), y(t)) = (1, 2).
3. Find lim_{t→∞} (x(t), y(t)) for the initial value problem

x' = x (3 − x − y) ,   x(0) = 5/2
y' = y (4 − 2x − y) ,   y(0) = 1/4 .

What if the initial conditions are x(0) = 0.1 and y(0) = 3 ?

Answer. (3, 0). For the other initial conditions: (0, 4).
Chapter 7
Fourier Series and Boundary Value Problems
7.1 Fourier series for functions of an arbitrary period
Recall that in Chapter 2 we considered Fourier series for functions of period
2π. Suppose now that f (x) has a period 2L, where L > 0 is any number.
Consider an auxiliary function g(t) = f( (L/π) t ). Then g(t) has period 2π, and we can represent it by the Fourier series

f( (L/π) t ) = g(t) = a0 + Σ_{n=1}^∞ (an cos nt + bn sin nt) ,
with

a0 = (1/2π) ∫_{−π}^{π} g(t) dt = (1/2π) ∫_{−π}^{π} f( (L/π) t ) dt ,

an = (1/π) ∫_{−π}^{π} g(t) cos nt dt = (1/π) ∫_{−π}^{π} f( (L/π) t ) cos nt dt ,

bn = (1/π) ∫_{−π}^{π} g(t) sin nt dt = (1/π) ∫_{−π}^{π} f( (L/π) t ) sin nt dt .

We now set

x = (L/π) t ,   or   t = (π/L) x .

Then

(1.1)    f(x) = a0 + Σ_{n=1}^∞ ( an cos (nπ/L) x + bn sin (nπ/L) x ) ,
and making the change of variables t → x, by t = (π/L) x and dt = (π/L) dx, we express the coefficients as

a0 = (1/2L) ∫_{−L}^{L} f(x) dx ,

an = (1/L) ∫_{−L}^{L} f(x) cos (nπ/L) x dx ,

bn = (1/L) ∫_{−L}^{L} f(x) sin (nπ/L) x dx .
The formula (1.1) gives us the desired Fourier series. Observe that we use the
values of f (x) only on the interval (−L, L), when computing the coefficients.
Example Let f (x) be a function of period 6, which on the interval (−3, 3)
is equal to x.
Here L = 3 and f (x) = x on the interval (−3, 3). We have
f(x) = a0 + Σ_{n=1}^∞ ( an cos (nπ/3) x + bn sin (nπ/3) x ) .
The functions x and x cos (nπ/3) x are odd, and so

a0 = (1/6) ∫_{−3}^{3} x dx = 0 ,   an = (1/3) ∫_{−3}^{3} x cos (nπ/3) x dx = 0 .

The function x sin (nπ/3) x is even, giving us

bn = (1/3) ∫_{−3}^{3} x sin (nπ/3) x dx = (2/3) ∫_{0}^{3} x sin (nπ/3) x dx
   = (2/3) [ −(3/(nπ)) x cos (nπ/3) x + (9/(n^2 π^2)) sin (nπ/3) x ] |_0^3
   = −(6/(nπ)) cos nπ = (6/(nπ)) (−1)^{n+1} ,

because cos nπ = (−1)^n . We conclude that
f(x) = Σ_{n=1}^∞ (6/(nπ)) (−1)^{n+1} sin (nπ/3) x .
Restricting to the interval (−3, 3), we have

x = Σ_{n=1}^∞ (6/(nπ)) (−1)^{n+1} sin (nπ/3) x ,   for −3 < x < 3 .
Outside of the interval (−3, 3) this Fourier series converges not to x, but to
the periodic extension of x, i.e., to the function f (x) we have started with.
We see that it is enough to know f (x) on the interval (−L, L), in order
to compute its Fourier coefficients. If f (x) is defined only on (−L, L), we
can still represent it by its Fourier series (1.1).
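A sketch of a numerical check of such an expansion, assuming Python with NumPy, comparing a truncated sum against f(x) = x on (−3, 3):

    import numpy as np

    def partial_sum(x, N=200):
        # truncated Fourier sine series of f(x) = x on (-3, 3)
        n = np.arange(1, N + 1)
        coeff = (6.0/(n*np.pi))*(-1.0)**(n + 1)
        return (coeff*np.sin(np.outer(x, n)*np.pi/3)).sum(axis=1)

    x = np.array([-2.0, 0.5, 2.0])
    print(partial_sum(x))   # approaches x itself as N grows
    print(x)                # (convergence is slow, since the terms decay like 1/n)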
7.1.1 Even and odd functions
Our computations in the preceding example were aided by the nice properties of even and odd functions, which we review next.
Recall that a function f (x) is called even if
f (−x) = f (x)
for all x .
Examples include cos x, x^2 , x^4 , and in general x^{2n} , for any even power 2n. The graph of an even function is symmetric with respect to the y axis. It follows that

∫_{−L}^{L} f(x) dx = 2 ∫_0^L f(x) dx
for any even function f (x), and any constant L. A function f (x) is called
odd if
f (−x) = −f (x) for all x .
Examples include sin x, tan x, x, x^3 , and in general x^{2n+1} , for any odd power 2n + 1. (We see that the even functions "eat" the minus sign, while the odd ones "pass it through".) The graph of an odd function is symmetric with respect to the origin. It follows that

∫_{−L}^{L} f(x) dx = 0
for any odd function f (x), and any constant L. Products of even and odd
functions are either even or odd:
even · even = even
even · odd = odd
odd · odd = even .
If f(x) is even, then bn = 0 for all n (as integrals of odd functions), and so the Fourier series (1.1) becomes

f(x) = a0 + Σ_{n=1}^∞ an cos (nπ/L) x ,

with

a0 = (1/L) ∫_0^L f(x) dx ,   an = (2/L) ∫_0^L f(x) cos (nπ/L) x dx .
If f(x) is odd, then a0 = 0 and an = 0 for all n, and the Fourier series (1.1) becomes

f(x) = Σ_{n=1}^∞ bn sin (nπ/L) x ,

with

bn = (2/L) ∫_0^L f(x) sin (nπ/L) x dx .
[Figure: The periodic extension of |x| as a function of period 2 (a triangular wave).]
Example Let f (x) be a function of period 2, which on the interval (−1, 1)
is equal to |x|.
Here L = 1 and f (x) = |x| on the interval (−1, 1). The function f (x) is
even, so that
f(x) = a0 + Σ_{n=1}^∞ an cos nπx .
Observing that |x| = x on the interval (0, 1), we compute the coefficients
a0 = ∫_0^1 x dx = 1/2 ,
an = 2 ∫_0^1 x cos nπx dx = 2 [ x sin nπx/(nπ) + cos nπx/(n^2 π^2) ] |_0^1 = (2(−1)^n − 2)/(n^2 π^2) .

We have

|x| = 1/2 + Σ_{n=1}^∞ ((2(−1)^n − 2)/(n^2 π^2)) cos nπx ,   for −1 < x < 1 .
Outside of the interval (−1, 1), this Fourier series converges to the periodic
extension of |x|, i.e., to the function f (x).
7.1.2 Further examples and the convergence theorem
Even and odd functions are very special. A “general” function is neither
even nor odd.
Example On the interval (−2, 2) represent the function

f(x) = { 1 for −2 < x ≤ 0 ;  x for 0 < x < 2 }

by its Fourier series. This function is neither even nor odd (and also it is not continuous). Here L = 2, and the Fourier series is
f(x) = a0 + Σ_{n=1}^∞ ( an cos (nπ/2) x + bn sin (nπ/2) x ) .
Compute

a0 = (1/4) ∫_{−2}^0 1 dx + (1/4) ∫_0^2 x dx = 1 ,
where we broke the interval of integration into two pieces, according to the
definition of f (x). Similarly
an = (1/2) ∫_{−2}^0 cos (nπ/2) x dx + (1/2) ∫_0^2 x cos (nπ/2) x dx = 2((−1)^n − 1)/(n^2 π^2) ,

bn = (1/2) ∫_{−2}^0 sin (nπ/2) x dx + (1/2) ∫_0^2 x sin (nπ/2) x dx = (−1 − (−1)^n)/(nπ) .
On the interval (−2, 2) we have

f(x) = 1 + Σ_{n=1}^∞ [ (2((−1)^n − 1)/(n^2 π^2)) cos (nπ/2) x + ((−1 − (−1)^n)/(nπ)) sin (nπ/2) x ] .
The quantity (−1)^n − 1 is equal to zero if n is even, and to −2 if n is odd, while −1 − (−1)^n is equal to zero if n is odd, and to −2 if n is even. Writing the odd n in the form n = 2k − 1, and the even n in the form n = 2k, with k = 1, 2, 3, . . ., we can then rewrite the Fourier series as

f(x) = 1 − Σ_{k=1}^∞ [ (4/((2k−1)^2 π^2)) cos ((2k−1)π/2) x + (1/(kπ)) sin kπx ] ,   for −2 < x < 2 .
Outside of the interval (−2, 2), this series converges to an extension of f (x)
as a function of period 4.
Example On the interval (−π, π) find the Fourier series of f(x) = 2 sin x + sin^2 x.
Here L = π, and the Fourier series takes the form
f(x) = a0 + Σ_{n=1}^∞ (an cos nx + bn sin nx) .
Let us spell out several terms of this series
f (x) = a0 + a1 cos x + a2 cos 2x + a3 cos 3x + . . . + b1 sin x + b2 sin 2x + . . . .
Using the trig formula sin^2 x = 1/2 − (1/2) cos 2x, we write

f(x) = 1/2 − (1/2) cos 2x + 2 sin x .
This is the desired Fourier series! Here a0 = 1/2, a2 = −1/2, b1 = 2, and all
other coefficients are zero. We see that this function is its own Fourier series.
Example On the interval (−2π, 2π) find the Fourier series of f(x) = 2 sin x + sin^2 x.
This time L = 2π, and the Fourier series has the form

f(x) = a0 + Σ_{n=1}^∞ ( an cos (n/2) x + bn sin (n/2) x ) .
As before, we rewrite f(x):

f(x) = 1/2 − (1/2) cos 2x + 2 sin x .

And again this is the desired Fourier series! This time a0 = 1/2, a4 = −1/2, b2 = 2, and all other coefficients are zero.
To discuss the convergence properties of Fourier series, we need a concept
of piecewise smooth functions. These are functions that are continuous and
differentiable, except for discontinuities at some isolated points. In case a
discontinuity happens at some point x0 , we assume that the limit from the
left f (x0 −) exists, as well as the limit from the right f (x0 +). (I.e., at a
point of discontinuity either f(x0) is not defined, or f(x0−) ≠ f(x0+).)
Theorem 1 Let f(x) be a piecewise smooth function of period 2L. Then its Fourier series

a0 + Σ_{n=1}^∞ ( an cos (nπ/L) x + bn sin (nπ/L) x )

converges to f(x) at any point x where f(x) is continuous. If f(x) has a jump at x, the Fourier series converges to

( f(x−) + f(x+) ) / 2 .
We see that at jump points, the Fourier series tries to be fair, and it
converges to the average of the limits from the left and right.
Let now f (x) be defined on [−L, L]. Let us extend it as a function of
period 2L. Unless it so happens that f (−L) = f (L), the new function will
have jumps at x = −L and x = L. Then the previous theorem implies the
next one.
Theorem 2 Let f(x) be a piecewise smooth function on [−L, L]. Let x be a point inside (−L, L). Then its Fourier series

a0 + Σ_{n=1}^∞ ( an cos (nπ/L) x + bn sin (nπ/L) x )

converges to f(x) at any point x where f(x) is continuous. If f(x) has a discontinuity at x, the Fourier series converges to

( f(x−) + f(x+) ) / 2 .

At both end points x = −L and x = L, the Fourier series converges to

( f(−L+) + f(L−) ) / 2 .
7.2 Fourier cosine and Fourier sine series
Suppose a function f (x) is defined on the interval (0, L). How do we represent f (x) by a Fourier series? We can compute Fourier series for a function
defined on (−L, L), but f (x) “lives” only on (0, L). We can extend f (x) as
an arbitrary function on (−L, 0) (i.e., draw randomly any graph on (−L, 0)).
This gives us a function defined on (−L, L), which we represent by its Fourier
series, and use this series only on the interval (0, L), where f (x) lives. So
that there are infinitely many ways to represent f (x) by a Fourier series on
the interval (0, L). However, two of these Fourier series stand out, the ones
when the extension produces either an even or an odd function.
Let f(x) be defined on the interval (0, L). We define its even extension to the interval (−L, L) as follows:

fe(x) = { f(x) for 0 < x < L ;  f(−x) for −L < x < 0 } .
I.e., we reflect the graph of f (x) with respect to the y axis. The function
fe (x) is even on (−L, L), and as we saw before, its Fourier series has the
form

fe(x) = a0 + Σ_{n=1}^∞ an cos (nπ/L) x ,
with the coefficients

a0 = (1/L) ∫_0^L fe(x) dx = (1/L) ∫_0^L f(x) dx ,

an = (2/L) ∫_0^L fe(x) cos (nπ/L) x dx = (2/L) ∫_0^L f(x) cos (nπ/L) x dx ,
because on the interval of integration (0, L), fe (x) = f (x). We now restrict
the series to the interval (0, L), obtaining
(2.1)    f(x) = a0 + Σ_{n=1}^∞ an cos (nπ/L) x ,   for 0 < x < L ,

with

(2.2)    a0 = (1/L) ∫_0^L f(x) dx ,

(2.3)    an = (2/L) ∫_0^L f(x) cos (nπ/L) x dx .
The series (2.1), with the coefficients computed using the formulas (2.2) and
(2.3), is called a Fourier cosine series.
Where is fe (x) now? It disappeared. We used it as an artifact of construction, like scaffolding.
Example On the interval (0, 3) find the Fourier cosine series of f (x) = x+2.
Compute

a0 = (1/3) ∫_0^3 (x + 2) dx = 7/2 ,

an = (2/3) ∫_0^3 (x + 2) cos (nπ/3) x dx = 6((−1)^n − 1)/(n^2 π^2) .
Answer

x + 2 = 7/2 + Σ_{n=1}^∞ (6((−1)^n − 1)/(n^2 π^2)) cos (nπ/3) x
      = 7/2 − 12 Σ_{k=1}^∞ (1/((2k−1)^2 π^2)) cos ((2k−1)π/3) x .
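A sketch of a numerical check of these coefficients, assuming Python with NumPy and SciPy:

    import numpy as np
    from scipy.integrate import quad

    L = 3.0
    f = lambda x: x + 2

    a0, _ = quad(lambda x: f(x)/L, 0, L)
    print(a0)                                    # expected: 7/2

    for n in (1, 2, 3):
        an, _ = quad(lambda x: (2/L)*f(x)*np.cos(n*np.pi*x/L), 0, L)
        print(an, 6*((-1)**n - 1)/(n*np.pi)**2)  # numerical vs. closed form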
Consider again f(x), which is defined on the interval (0, L). We now define its odd extension to the interval (−L, L) as follows:

fo(x) = { f(x) for 0 < x < L ;  −f(−x) for −L < x < 0 } .
I.e., we reflect the graph of f(x) with respect to the origin. If f(0) ≠ 0, this extension is discontinuous at x = 0. The function fo(x) is odd on (−L, L), and as we saw before, its Fourier series has only the sine terms

fo(x) = Σ_{n=1}^∞ bn sin (nπ/L) x ,
with the coefficients

bn = (2/L) ∫_0^L fo(x) sin (nπ/L) x dx = (2/L) ∫_0^L f(x) sin (nπ/L) x dx ,
because on the interval of integration (0, L), fo (x) = f (x). We again restrict
the series to the interval (0, L), obtaining
(2.4)    f(x) = Σ_{n=1}^∞ bn sin (nπ/L) x ,   for 0 < x < L ,

with

(2.5)    bn = (2/L) ∫_0^L f(x) sin (nπ/L) x dx .
The series (2.4), with the coefficients computed using the formula (2.5), is
called a Fourier sine series.
Example On the interval (0, 3) find the Fourier sine series of f (x) = x + 2.
Compute

bn = (2/3) ∫_0^3 (x + 2) sin (nπ/3) x dx = (4 − 10(−1)^n)/(nπ) .
We conclude

x + 2 = Σ_{n=1}^∞ ((4 − 10(−1)^n)/(nπ)) sin (nπ/3) x ,   for 0 < x < 3 .
Clearly this series does not converge to f (x) at the end points x = 0 and
x = 3 of our interval (0, 3). But inside (0, 3) we do have convergence.
Fourier sine and cosine series were developed by using Fourier series on (−L, L). It follows that inside (0, L) both of these series converge to f(x) at points of continuity, and to ( f(x−) + f(x+) )/2 if f(x) is discontinuous at x. At both end points x = 0 and x = L, the Fourier sine series converges to 0, while the Fourier cosine series converges to f(0+) and f(L−), respectively.
7.3 Two point boundary value problems
We shall need to find solutions y = y(x) of the problem

(3.1)    y'' + λy = 0 ,   0 < x < L
         y(0) = y(L) = 0
on some interval (0, L). Here λ is some number. Unlike initial value problems, where we prescribe the values of the solution and its derivative at one
point, here we prescribe that the solution vanishes at x = 0 and at x = L,
which are the end-points (the boundary points) of the interval (0, L). Of
course, y(x) = 0 is a solution of our problem, which is called a trivial solution. We wish to find non-trivial solutions. What are the values of the
parameter λ, for which non-trivial solutions are possible?
The form of the general solution depends on whether λ is positive, negative or zero, i.e., we have three cases to consider.
Case 1. λ < 0. We may write λ = −ω^2 , with some ω > 0 (ω = √(−λ)), and the equation takes the form

y'' − ω^2 y = 0 .
Its general solution is y = c1 e^{−ωx} + c2 e^{ωx} . The boundary conditions

y(0) = c1 + c2 = 0
y(L) = e^{−ωL} c1 + e^{ωL} c2 = 0
give us two equations to determine c1 and c2 . From the first equation c2 =
−c1 , and then from the second equation c1 = 0. So that c1 = c2 = 0, i.e.,
the only solution is y = 0, the trivial solution.
Case 2. λ = 0. The equation takes the form
y'' = 0 .
Its general solution is y = c1 + c2 x. The boundary conditions
y(0) = c1 = 0
y(L) = c1 + c2 L = 0
give us c1 = c2 = 0. We struck out again in our search for a non-trivial
solution.
Case 3. λ > 0. We may write λ = ω^2 , with some ω > 0 (ω = √λ), and the equation takes the form

y'' + ω^2 y = 0 .
Its general solution is y = c1 cos ωx+c2 sin ωx. The first boundary condition
y(0) = c1 = 0
tells us that c1 = 0. We update the general solution: y = c2 sin ωx. The
second boundary condition gives
y(L) = c2 sin ωL = 0 .
One possibility for this product to be zero is c2 = 0. This leads again to the trivial solution. What saves us is that sin ωL = 0 for some "lucky" ω's, namely when ωL = nπ, or ωn = nπ/L, and λn = ωn^2 = n^2 π^2/L^2 , n = 1, 2, 3, . . .. The corresponding solutions are c2 sin (nπ/L) x, or we can simply write sin (nπ/L) x, because a constant multiple of a solution is also a solution.
To recapitulate, non-trivial solutions occur for infinitely many λ's, λn = n^2 π^2/L^2 , called the eigenvalues; the corresponding solutions sin (nπ/L) x are called the eigenfunctions.
Next, we search for non-trivial solutions of the problem
y'' + λy = 0 ,   0 < x < L
y'(0) = y'(L) = 0 ,
in which the boundary conditions are different. As before, we see that in
case λ < 0 there are no non-trivial solutions. The case λ = 0 turns out to be
different: any non-zero constant is a non-trivial solution. So that λ0 = 0 is
an eigenvalue, and y0 = 1 the corresponding eigenfunction. In case λ > 0 we get, as before, infinitely many eigenvalues λn = n^2 π^2/L^2 , with the corresponding eigenfunctions yn = cos (nπ/L) x, n = 1, 2, 3, . . ..
7.3.1 Problems
I. 1. Is the integral ∫_{−1}^{3/2} tan^{15} x dx positive or negative?

Hint: Consider first ∫_{−1}^{1} tan^{15} x dx.

Ans. Positive.
2. Show that any function can be written as a sum of an even function and
an odd function.
Hint: f(x) = ( f(x) + f(−x) )/2 + ( f(x) − f(−x) )/2 .
3. Let g(x) = x on the interval (0, 3).
(i) Find the even extension of g(x).
Ans. ge (x) = |x|, defined on (−3, 3).
(ii) Find the odd extension of g(x).
Ans. go (x) = x, defined on (−3, 3).
4. Let h(x) = −x3 on the interval (0, 5). Find its even and odd extensions,
and state the interval on which they are defined.
5. Let f (x) = x2 on the interval (0, 1). Find its even and odd extensions,
and state the interval on which they are defined.
Answer. fo (x) = x|x|, defined on (−1, 1).
6. Assume that f(x) has period 2π. Show that the function ∫_0^x f(t) dt is also 2π-periodic, if and only if ∫_0^{2π} f(t) dt = 0.

7. Assume that f(x) has period T. Show that for any constant a

∫_a^{T+a} f'(x) e^{f(x)} dx = 0 .
II. Find the Fourier series of a given function over the indicated interval.
1. f(x) = sin x cos x + cos^2 2x on (−π, π).

Ans. f(x) = 1/2 + (1/2) cos 4x + (1/2) sin 2x.
2. f(x) = sin x cos x + cos^2 2x on (−2π, 2π).

Ans. f(x) = 1/2 + (1/2) cos 4x + (1/2) sin 2x.
3. f(x) = sin x cos x + cos^2 2x on (−π/2, π/2).

Ans. f(x) = 1/2 + (1/2) cos 4x + (1/2) sin 2x.
4. f(x) = x + x^2 on (−π, π).

Ans. f(x) = π^2/3 + Σ_{n=1}^∞ ( (4(−1)^n/n^2) cos nx + (2(−1)^{n+1}/n) sin nx ) .

5. (i) f(x) = { 1 for 0 < x < π ;  −1 for −π < x < 0 }  on (−π, π).

Ans. f(x) = (4/π) ( sin x + (1/3) sin 3x + (1/5) sin 5x + (1/7) sin 7x + · · · ) .

(ii) Set x = π/2 in the last series, to conclude that

π/4 = 1 − 1/3 + 1/5 − 1/7 + · · · .
6. f(x) = 1 − |x| on (−2, 2).

Answer. f(x) = Σ_{n=1}^∞ (4/(n^2 π^2)) (1 − (−1)^n) cos (nπ/2) x .
7. f(x) = x|x| on (−1, 1).

Answer. f(x) = Σ_{n=1}^∞ ( (−2(n^2 π^2 − 2)(−1)^n − 4)/(n^3 π^3) ) sin nπx .
8. Let f(x) = { 1 for −1 < x < 0 ;  0 for 0 < x < 1 }  on (−1, 1). Sketch the graphs of f(x) and of its Fourier series. Then calculate the Fourier series of f(x) on (−1, 1).

Answer. f(x) = 1/2 − Σ_{k=1}^∞ (2/((2k − 1)π)) sin (2k − 1)πx .
9. Let f(x) = { x for −2 < x < 0 ;  −1 for 0 < x < 2 }  on (−2, 2). Sketch the graphs of f(x) and of its Fourier series. Then calculate the Fourier series of f(x) on (−2, 2).
III. Find the Fourier cosine series of a given function over the indicated
interval.
1. f(x) = cos 3x − sin^2 3x on (0, π).

Answer. f(x) = −1/2 + cos 3x + (1/2) cos 6x.
2. f(x) = cos 3x − sin^2 3x on (0, π/3).

Answer. f(x) = −1/2 + cos 3x + (1/2) cos 6x.
3. f(x) = x on (0, 2).

Answer. f(x) = 1 + Σ_{n=1}^∞ ((−4 + 4(−1)^n)/(n^2 π^2)) cos (nπ/2) x .
4. f(x) = sin x on (0, 2).

Hint: sin ax cos bx = (1/2) sin (a − b)x + (1/2) sin (a + b)x.

Answer. f(x) = (1/2)(1 − cos 2) + Σ_{n=1}^∞ ( 4((−1)^n cos 2 − 1)/(n^2 π^2 − 4) ) cos (nπ/2) x .
5. f(x) = sin^4 x on (0, π/2).

Answer. f(x) = 3/8 − (1/2) cos 2x + (1/8) cos 4x.
IV. Find the Fourier sine series of a given function over the indicated interval.
1. f(x) = 5 sin x cos x on (0, π).

Answer. f(x) = (5/2) sin 2x.

2. f(x) = 1 on (0, 3).

Answer. f(x) = Σ_{n=1}^∞ (2/(nπ)) (1 − (−1)^n) sin (nπ/3) x .

3. f(x) = x on (0, 2).
Answer. f(x) = Σ_{n=1}^∞ (4/(nπ)) (−1)^{n+1} sin (nπ/2) x .
4. f(x) = sin x on (0, 2).

Hint: sin ax sin bx = (1/2) cos (a − b)x − (1/2) cos (a + b)x.

Answer. f(x) = Σ_{n=1}^∞ (−1)^{n+1} (2nπ sin 2/(n^2 π^2 − 4)) sin (nπ/2) x .
5. f(x) = sin^3 x on (0, π).

Answer. (3/4) sin x − (1/4) sin 3x.
6. f(x) = { x for 0 < x < π/2 ;  π − x for π/2 < x < π }  on (0, π).

Answer. f(x) = Σ_{n=1}^∞ (4/(π n^2)) sin (nπ/2) sin nx .
7. f(x) = x − 1 on (0, 3).

Answer. f(x) = − Σ_{n=1}^∞ ((2 + 4(−1)^n)/(nπ)) sin (nπ/3) x .
V.

1. Find the eigenvalues and the eigenfunctions of

y'' + λy = 0 ,   0 < x < L ,   y'(0) = y(L) = 0 .

Answer. λn = π^2 (n + 1/2)^2 / L^2 ,   yn = cos (π(n + 1/2)/L) x .
2. Find the eigenvalues and the eigenfunctions of
y'' + λy = 0 ,   0 < x < L ,   y(0) = y'(L) = 0 .
3. Find the eigenvalues and the eigenfunctions of
y'' + λy = 0 ,   y(x) is a 2π periodic function .

Answer. λ0 = 0 with y0 = 1, and λn = n^2 with yn = an cos nx + bn sin nx.
4∗. Show that the fourth order problem (with a > 0)

y'''' − a^4 y = 0 ,   0 < x < 1 ,   y(0) = y'(0) = y(1) = y'(1) = 0

has non-trivial solutions (eigenfunctions) if and only if a satisfies

cos a = 1/cosh a .

Show graphically that there are infinitely many such a's, and calculate the corresponding eigenfunctions.

Hint: The general solution is y(x) = c1 cos ax + c2 sin ax + c3 cosh ax + c4 sinh ax. From the boundary conditions obtain two equations for c3 and c4.
7.4 Heat equation and separation of variables
Suppose we have a rod of length L, which is so thin that we may assume it
to be one dimensional, extending along the x-axis, for 0 ≤ x ≤ L. Assume
that the surface of the rod is insulated, so that heat can travel only to the
left or to the right along the rod. We wish to determine the temperature
u = u(x, t) at any point x of the rod, and at any time t. Consider an element
of the rod of length ∆x, extending over the sub-interval (x, x + ∆x). The
amount of heat (in calories) that this element holds is
c u(x, t)∆x .
Indeed, the amount of heat must be proportional to the temperature u =
u(x, t), the length ∆x, and there must be a physical constant c > 0, which
reflects the rod’s ability to store heat (and c also makes the physical units
right, so that the product is in calories). The rate of change of heat is (here
ut (x, t) is the partial derivative in t)
c ut(x, t)∆x .
On the other hand, the change in heat occurs because of the heat flow at
the end points of the interval (x, x + ∆x). Because this interval is small, the
function u(x, t) is likely to be monotone over this interval, so let us assume
that u(x, t) is increasing in x over (x, x + ∆x) (think of t as fixed). At the
right end point x + ∆x, heat flows into our element, because to the right of
this point the temperatures are higher. The heat flow per unit time is
c1 ux(x + ∆x, t) ,
i.e., it is proportional to how steeply the temperatures increase (c1 > 0 is
another physical constant). At the left end x, we lose
c1 ux (x, t)
of heat per unit time. The balance equation is
c ut(x, t)∆x = c1 ux(x + ∆x, t) − c1 ux(x, t) .
We now divide by c∆x, and call c1/c = k:

ut(x, t) = k ( ux(x + ∆x, t) − ux(x, t) ) / ∆x .

And finally, we let ∆x → 0, obtaining the heat equation

ut = k uxx .
This is a partial differential equation, or PDE for short.
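The difference quotient in the balance equation above is exactly what an explicit finite-difference scheme computes. A minimal sketch, assuming Python with NumPy (the grid sizes, k, and the initial temperature are illustrative, with the end points held at 0 degrees as in the problem below):

    import numpy as np

    k, L = 1.0, 1.0
    nx = 50
    dx = L/nx
    dt = 0.4*dx*dx/k            # small enough for stability of the explicit scheme

    x = np.linspace(0, L, nx + 1)
    u = np.sin(np.pi*x)         # an illustrative initial temperature f(x)

    for _ in range(200):
        # the balance equation: u_t is approximately k (u_x(x+dx) - u_x(x))/dx
        u[1:-1] += dt*k*(u[2:] - 2*u[1:-1] + u[:-2])/dx**2
        u[0] = u[-1] = 0.0      # end points kept at 0 degrees

    print(u.max())              # decays roughly like e^{-k pi^2 t}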
Suppose now that initially, i.e., at time t = 0, the temperatures of the
rod could be obtained from a given function f (x), while the temperatures
at the end points x = 0 and x = L are kept at 0 degrees Celsius at all times
t (one can think that the end points are kept on ice). To determine the
temperature u(x, t) at all points x, and all times t, we need to solve
(4.1)    ut = k uxx                 for 0 < x < L, and t > 0
         u(x, 0) = f(x)             for 0 < x < L
         u(0, t) = u(L, t) = 0      for t > 0 .
Here the second line represents the initial condition, and the third line gives
the boundary conditions.
7.4.1 Separation of variables
We search for solution of (4.1) in the form u(x, t) = F (x)G(t), with the
functions F (x) and G(t) to be determined. From the equation (4.1)
F(x) G'(t) = k F''(x) G(t) .
Divide by k F(x) G(t):

G'(t)/(k G(t)) = F''(x)/F(x) .

On the left we have a function of t only, while on the right we have a function of x only. In order for them to be the same, they must both be equal to the same constant, which we denote by −λ:

G'(t)/(k G(t)) = F''(x)/F(x) = −λ .
This gives us two differential equations for F(x) and G(t):

(4.2)    G'(t)/(k G(t)) = −λ ,

and

F''(x) + λ F(x) = 0 .
From the boundary conditions
u(0, t) = F (0)G(t) = 0 .
This implies that F (0) = 0 (if we set G(t) = 0, we would get u = 0, which
does not satisfy the initial condition in (4.1)). Similarly, we have F (L) = 0,
using the other boundary condition. So that to determine F (x), we have
(4.3)    F''(x) + λ F(x) = 0 ,   F(0) = F(L) = 0 .
We have studied this problem before. Nontrivial solutions occur only for λ = λn = n^2 π^2/L^2 . The corresponding solutions are

Fn(x) = sin (nπ/L) x   (and their multiples) .

We now plug λ = λn = n^2 π^2/L^2 into the equation (4.2):

(4.4)    G'(t)/G(t) = −k n^2 π^2/L^2 .
Solving these equations for all n,

Gn(t) = bn e^{−k n^2 π^2 t / L^2} ,
where the bn's are arbitrary constants. We have constructed infinitely many functions

un(x, t) = Gn(t) Fn(x) = bn e^{−k n^2 π^2 t / L^2} sin (nπ/L) x ,
which satisfy the PDE in (4.1), and the boundary conditions. By linearity,
their sum

(4.5)    u(x, t) = Σ_{n=1}^∞ un(x, t) = Σ_{n=1}^∞ bn e^{−k n^2 π^2 t / L^2} sin (nπ/L) x
also satisfies the PDE in (4.1), and the boundary conditions. We now turn
to the initial condition
(4.6)    u(x, 0) = Σ_{n=1}^∞ bn sin (nπ/L) x = f(x) .
We need to represent f(x) by its Fourier sine series, for which we choose

(4.7)    bn = (2/L) ∫_0^L f(x) sin (nπ/L) x dx .
Conclusion: the series u(x, t) = Σ_{n=1}^∞ bn e^{−k n^2 π^2 t / L^2} sin (nπ/L) x, with the bn's computed using (4.7), gives the solution of our problem (4.1). We see that going from the Fourier sine series of f(x) to the solution of our problem involves just putting in the additional factors e^{−k n^2 π^2 t / L^2} .
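A sketch of this recipe in code, assuming Python with NumPy and SciPy (the series is truncated at N terms):

    import numpy as np
    from scipy.integrate import quad

    def heat_solution(f, k, L, x, t, N=50):
        # truncated series (4.5), with the coefficients bn from (4.7)
        total = 0.0
        for n in range(1, N + 1):
            bn, _ = quad(lambda s: (2/L)*f(s)*np.sin(n*np.pi*s/L), 0, L)
            total += bn*np.exp(-k*(n*np.pi/L)**2*t)*np.sin(n*np.pi*x/L)
        return total

    # the second example below: k = 2, L = 3, f(x) = x - 1
    print(heat_solution(lambda s: s - 1.0, k=2.0, L=3.0, x=1.5, t=0.1))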
Example Solve

ut = 5 uxx                          for 0 < x < 2π, and t > 0
u(x, 0) = 2 sin x − 3 sin x cos x   for 0 < x < 2π
u(0, t) = u(2π, t) = 0              for t > 0 .

Here k = 5, and L = 2π. The Fourier sine series has the form Σ_{n=1}^∞ bn sin (n/2) x. Writing

2 sin x − 3 sin x cos x = 2 sin x − (3/2) sin 2x ,

we see that this function is its own Fourier sine series, with b2 = 2, b4 = −3/2, and all other coefficients equal to zero. Solution:

u(x, t) = 2 e^{−5 (2^2 π^2/(2π)^2) t} sin x − (3/2) e^{−5 (4^2 π^2/(2π)^2) t} sin 2x = 2 e^{−5t} sin x − (3/2) e^{−20t} sin 2x .
We see that by the time t = 1, the first term of the solution totally dominates
the second one.
Example Solve

ut = 2 uxx                  for 0 < x < 3, and t > 0
u(x, 0) = x − 1             for 0 < x < 3
u(0, t) = u(3, t) = 0       for t > 0 .

Here k = 2, and L = 3. We begin by calculating the Fourier sine series

x − 1 = Σ_{n=1}^∞ bn sin (nπ/3) x ,

with

bn = (2/3) ∫_0^3 (x − 1) sin (nπ/3) x dx = −(2 + 4(−1)^n)/(nπ) .
Solution:

u(x, t) = − Σ_{n=1}^∞ ((2 + 4(−1)^n)/(nπ)) e^{−2 n^2 π^2 t / 9} sin (nπ/3) x .
What is the value of this solution? Again, very quickly (by the time t = 1)
the n = 1 term dominates all others, and we have
u(x, t) ≈ (2/π) e^{−2π^2 t/9} sin (π/3) x .
Initial temperatures were negative for 0 < x < 1, and positive for 1 < x <
3. Very quickly, temperatures became positive at all points, because the
first harmonic sin (π/3) x > 0 on the interval (0, 3). Then temperatures tend
exponentially to zero, while retaining the shape of the first harmonic.
Let us now solve the problem
(4.8)    ut = k uxx                   for 0 < x < L, and t > 0
         u(x, 0) = f(x)               for 0 < x < L
         ux(0, t) = ux(L, t) = 0      for t > 0 .
Compared with (4.1), the boundary conditions are now different. This
time we assume that the rod is insulated at the end points x = 0 and x = L.
We saw above that the flux at x = 0 (i.e., amount of heat flowing per
unit time) is proportional to ux(0, t). Since there is no heat flow at all t, ux(0, t) = 0, and similarly ux(L, t) = 0. We expect that in the long run the temperatures inside the rod will average out, and be equal to the average of the initial temperatures, (1/L) ∫_0^L f(x) dx.
As before, we search for solution in the form u(x, t) = F (x)G(t). Separating variables, we see that G(t) still satisfies (4.4), while for F (x) we
have
F''(x) + λ F(x) = 0 ,   F'(0) = F'(L) = 0 .
We have studied this problem before. Nontrivial solutions occur only for λ = λ0 = 0, and λ = λn = n^2 π^2/L^2 . The corresponding solutions are

F0(x) = 1 ,   Fn(x) = cos (nπ/L) x   (and their multiples) .
Solving (4.4) for n = 0, and for all n = 1, 2, 3, . . .,

G0 = a0 ,   Gn(t) = an e^{−k n^2 π^2 t / L^2} ,
where a0 and the an's are arbitrary constants. We have constructed infinitely many functions

u0(x, t) = G0(t) F0(x) = a0 ,   un(x, t) = Gn(t) Fn(x) = an e^{−k n^2 π^2 t / L^2} cos (nπ/L) x ,
which satisfy the PDE in (4.8), and the boundary conditions. By linearity,
their sum

(4.9)    u(x, t) = u0(x, t) + Σ_{n=1}^∞ un(x, t) = a0 + Σ_{n=1}^∞ an e^{−k n^2 π^2 t / L^2} cos (nπ/L) x
also satisfies the PDE in (4.8), and the boundary conditions. We now turn
to the initial condition
u(x, 0) = a0 + Σ_{n=1}^∞ an cos (nπ/L) x = f(x) .
We need to represent f(x) by its Fourier cosine series, for which we choose

(4.10)    a0 = (1/L) ∫_0^L f(x) dx ,   an = (2/L) ∫_0^L f(x) cos (nπ/L) x dx .

Conclusion: the series u(x, t) = a0 + Σ_{n=1}^∞ an e^{−k n^2 π^2 t / L^2} cos (nπ/L) x, with the an's computed using (4.10), gives the solution of our problem (4.8). Again, going from the Fourier cosine series of f(x) to the solution of our problem involves just putting in the additional factors e^{−k n^2 π^2 t / L^2} . As t → ∞, u(x, t) → a0 , which is equal to the average of the initial temperatures.
Example Solve

ut = 3 uxx                          for 0 < x < π/2, and t > 0
u(x, 0) = 2 cos^2 x − 3 cos^2 2x    for 0 < x < π/2
ux(0, t) = ux(π/2, t) = 0           for t > 0 .

Here k = 3, and L = π/2. The Fourier cosine series has the form a0 + Σ_{n=1}^∞ an cos 2n x. Writing

2 cos^2 x − 3 cos^2 2x = −1/2 + cos 2x − (3/2) cos 4x ,

we see that this function is its own Fourier cosine series, with a0 = −1/2, a1 = 1, a2 = −3/2, and all other coefficients equal to zero. Solution:

u(x, t) = −1/2 + e^{−12t} cos 2x − (3/2) e^{−48t} cos 4x .
Example Solve

ut = 3 uxx − a u             for 0 < x < π, and t > 0
u(x, 0) = 2 cos x + x^2      for 0 < x < π
ux(0, t) = ux(π, t) = 0      for t > 0 ,
where a is a positive constant. The extra term −au is an example of a lower
order term. Its physical significance is that the rod is no longer insulated,
and heat freely radiates through its side.
We set e^{at} u(x, t) = v(x, t), or

u(x, t) = e^{−at} v(x, t) .
We have ut = −a e^{−at} v + e^{−at} vt , and uxx = e^{−at} vxx . We conclude that v(x, t)
satisfies the problem
vt = 3 vxx                   for 0 < x < π, and t > 0
v(x, 0) = 2 cos x + x^2      for 0 < x < π
vx(0, t) = vx(π, t) = 0      for t > 0 ,
which is of the type we considered above. Because 2 cos x is its own Fourier
cosine series on the interval (0, π), we shall expand x^2 in a Fourier cosine series, and then add 2 cos x. We have

x^2 = a0 + Σ_{n=1}^∞ an cos n x ,
where

a0 = (1/π) ∫_0^π x^2 dx = π^2/3 ,   an = (2/π) ∫_0^π x^2 cos nx dx = 4(−1)^n/n^2 .

Then

2 cos x + x^2 = π^2/3 − 2 cos x + Σ_{n=2}^∞ (4(−1)^n/n^2) cos n x ,

and so

v(x, t) = π^2/3 − 2 e^{−3t} cos x + Σ_{n=2}^∞ (4(−1)^n/n^2) e^{−3n^2 t} cos n x .

Answer: u(x, t) = (π^2/3) e^{−at} − 2 e^{−(3+a)t} cos x + Σ_{n=2}^∞ (4(−1)^n/n^2) e^{−(3n^2+a)t} cos n x .
7.5 Laplace's equation
We now study heat conduction in a thin two-dimensional rectangular plate:
0 ≤ x ≤ L, 0 ≤ y ≤ M . Assume that both sides of the plate are insulated, so
that heat travels only in the xy plane. Let u(x, y, t) denote the temperature
at a point (x, y) and a time t > 0. It is natural to expect that the heat
equation in two dimensions takes the form
(5.1)    ut = k (uxx + uyy) .
Indeed, one can derive (5.1) similarly to the way we derived the one-dimensional
heat equation. The boundary of our plate consists of four line segments. Let
us assume that the side lying on the x-axis is kept at 1 degree Celsius, i.e.,
u(x, 0) = 1 for 0 ≤ x ≤ L, while the other three sides are kept at 0 degrees Celsius (they are kept on ice). I.e., u(x, M ) = 0 for 0 ≤ x ≤ L, and
u(0, y) = u(L, y) = 0 for 0 ≤ y ≤ M . The heat will flow from the warmer
side toward the three sides on ice. While the heat will continue its flow
indefinitely, eventually the temperatures will stabilize (we can expect temperatures close to 1 near the warm side, and close to 0 near the icy sides).
Stable temperatures do not change with time, i.e., u = u(x, y). Then ut = 0,
and the equation (5.1) takes the form
(5.2)    uxx + uyy = 0 .
This is the Laplace equation, one of the three main equations of Mathematical Physics. Mathematicians use the notation: ∆u = uxx + uyy , while
engineers seem to prefer ∇2 u = uxx + uyy . The latter notation has to do
with the fact that computing the divergence of the gradient of u(x, y) gives
∇ · ∇u = uxx + uyy . Solutions of the Laplace equation are called harmonic
functions.
To find the steady state temperatures u = u(x, y) for our example, we
need to solve the problem
uxx + uyy = 0             for 0 < x < L, and 0 < y < M
u(x, 0) = 1               for 0 < x < L
u(x, M) = 0               for 0 < x < L
u(0, y) = u(L, y) = 0     for 0 < y < M .
We again apply the separation of variables technique, looking for solution
in the form
u(x, y) = F (x)G(y)
with the functions F (x) and G(y) to be determined. Plugging this into the
Laplace equation, we get
F''(x) G(y) = −F(x) G''(y) .
Divide both sides by F(x) G(y):

F''(x)/F(x) = −G''(y)/G(y) .

On the left we have a function of x only, while on the right we have a function of y only. In order for them to be the same, they must both be equal to the same constant, which we denote by −λ:

F''(x)/F(x) = −G''(y)/G(y) = −λ .
From this, and using the boundary conditions of our problem, we obtain
(5.3)    F'' + λF = 0 ,   F(0) = F(L) = 0 ,

(5.4)    G'' − λG = 0 ,   G(M) = 0 .
Nontrivial solutions of (5.3) occur at λ = λn = n^2 π^2/L^2 , and they are Fn(x) = Bn sin (nπ/L) x. We solve (5.4) with λ = n^2 π^2/L^2 , obtaining Gn(y) = sinh (nπ/L)(y − M). We conclude that

un(x, y) = Fn(x) Gn(y) = Bn sin (nπ/L) x sinh (nπ/L)(y − M)
satisfies the Laplace equation and the three zero boundary conditions. The
same is true for their sum
u(x, y) = Σ_{n=1}^∞ Fn(x) Gn(y) = Σ_{n=1}^∞ Bn sinh (nπ/L)(y − M) sin (nπ/L) x .
It remains to satisfy the boundary condition at the warm side
u(x, 0) = Σ_{n=1}^∞ Bn sinh (nπ/L)(−M) sin (nπ/L) x = 1 .
We need to choose Bn so that Bn sinh (nπ/L)(−M) is the Fourier sine series coefficient of f(x) = 1, i.e.,

Bn sinh (nπ/L)(−M) = (2/L) ∫_0^L sin (nπ/L) x dx = 2(1 − (−1)^n)/(nπ) ,
204CHAPTER 7. FOURIER SERIES AND BOUNDARY VALUE PROBLEMS
which gives
Bn = −
2 (1 − (−1)n )
.
nπ sinh nπM
L
Answer:
u(x, y) = −
∞
X
2 (1 − (−1)n )
n=1
nπ sinh
sinh
nπM
L
nπ
nπ
(y − M ) sin
x.
L
L
Example Solve

uxx + uyy = 0    for 0 < x < 1, and 0 < y < 2
u(x, 0) = 0      for 0 < x < 1
u(x, 2) = 0      for 0 < x < 1
u(0, y) = 0      for 0 < y < 2
u(1, y) = y      for 0 < y < 2 .
We look for a solution u(x, y) = F(x)G(y). After separating the variables, it is convenient to use λ (instead of −λ) to denote the common value of the two ratios:

F''(x)/F(x) = −G''(y)/G(y) = λ .

Using the boundary conditions we conclude that

G'' + λG = 0,   G(0) = G(2) = 0 ,
F'' − λF = 0,   F(0) = 0 .

The first equation has non-trivial solutions at λ = λn = n²π²/4, and they are Gn(y) = Bn sin(nπy/2). We then solve the second equation with λ = n²π²/4, obtaining Fn(x) = sinh(nπx/2). We conclude that

un(x, y) = Fn(x)Gn(y) = Bn sinh(nπx/2) sin(nπy/2)
satisfies the Laplace equation and the three zero boundary conditions. The
same is true for their sum
u(x, y) = Σ_{n=1}^∞ Fn(x)Gn(y) = Σ_{n=1}^∞ Bn sinh(nπx/2) sin(nπy/2) .
It remains to satisfy the boundary condition at the fourth side

u(1, y) = Σ_{n=1}^∞ Bn sinh(nπ/2) sin(nπy/2) = y .

We need to choose Bn so that Bn sinh(nπ/2) is the Fourier sine series coefficient of y, i.e.,

Bn sinh(nπ/2) = ∫_0^2 y sin(nπy/2) dy = 4(−1)^{n+1}/(nπ) ,

which gives

Bn = 4(−1)^{n+1} / (nπ sinh(nπ/2)) .

Answer:

u(x, y) = Σ_{n=1}^∞ [4(−1)^{n+1} / (nπ sinh(nπ/2))] sinh(nπx/2) sin(nπy/2) .
Our computations in the above examples were aided by the fact that three out of the four boundary conditions were zero (homogeneous). The most general boundary value problem has the form

(5.5)
uxx + uyy = 0      for 0 < x < L, and 0 < y < M
u(x, 0) = f1(x)    for 0 < x < L
u(x, M) = f2(x)    for 0 < x < L
u(0, y) = g1(y)    for 0 < y < M
u(L, y) = g2(y)    for 0 < y < M ,
with given functions f1 (x), f2 (x), g1 (y) and g2 (y). Because this problem is
linear, we can break it into four sub-problems, each time keeping one of the
boundary conditions, and setting the other three to zero. Namely, we look
for solution in the form
u(x, y) = u1 (x, y) + u2 (x, y) + u3 (x, y) + u4 (x, y) ,
where u1 is found by solving

uxx + uyy = 0      for 0 < x < L, and 0 < y < M
u(x, 0) = f1(x)    for 0 < x < L
u(x, M) = 0        for 0 < x < L
u(0, y) = 0        for 0 < y < M
u(L, y) = 0        for 0 < y < M ,

u2 is computed from

uxx + uyy = 0      for 0 < x < L, and 0 < y < M
u(x, 0) = 0        for 0 < x < L
u(x, M) = f2(x)    for 0 < x < L
u(0, y) = 0        for 0 < y < M
u(L, y) = 0        for 0 < y < M ,

u3 is computed from

uxx + uyy = 0      for 0 < x < L, and 0 < y < M
u(x, 0) = 0        for 0 < x < L
u(x, M) = 0        for 0 < x < L
u(0, y) = g1(y)    for 0 < y < M
u(L, y) = 0        for 0 < y < M ,

and u4 from

uxx + uyy = 0      for 0 < x < L, and 0 < y < M
u(x, 0) = 0        for 0 < x < L
u(x, M) = 0        for 0 < x < L
u(0, y) = 0        for 0 < y < M
u(L, y) = g2(y)    for 0 < y < M .

We solve each of these four problems, using separation of variables, as before.
7.6 The wave equation
We consider vibrations of a guitar string. We assume that the motion of the string is transverse, i.e., it goes only up and down (and not sideways). Let u(x, t) denote the displacement of the string at a point x and time t. The motion of an element of the string (x, x + ∆x) is governed by Newton's law

ma = f .

The acceleration a = utt(x, t). If ρ denotes the density of the string, then m = ρ∆x. We assume that the internal tension is the only force acting on this element, and we also assume that the magnitude of the tension T is constant throughout the string. Our final assumption is that the slope of the string ux(x, t) is small for all x and t. Observe that ux(x, t) = tan θ, see the figure. Then the vertical component of the force at the right end of our element is

T sin θ ≈ T tan θ = T ux(x + ∆x, t) ,

because for small angles θ, sin θ ≈ tan θ ≈ θ. At the left end, the vertical component of the force is T ux(x, t), and we have

ρ∆x utt(x, t) = T ux(x + ∆x, t) − T ux(x, t) .

We divide both sides by ρ∆x, and denote T/ρ = c²:

utt(x, t) = c² [ux(x + ∆x, t) − ux(x, t)] / ∆x .

Letting ∆x → 0, we obtain the wave equation

utt(x, t) = c² uxx(x, t) .

[Figure: Forces acting on an element of a string]
We consider now vibrations of a string of length L, which is fixed at the end points x = 0 and x = L, with given initial displacement f(x), and given initial velocity g(x):

utt = c² uxx             for 0 < x < L, and t > 0
u(0, t) = u(L, t) = 0    for t > 0
u(x, 0) = f(x)           for 0 < x < L
ut(x, 0) = g(x)          for 0 < x < L .
As before, we search for the solution in the form u(x, t) = F(x)G(t):

F(x)G''(t) = c² F''(x)G(t) ,

G''(t)/(c²G(t)) = F''(x)/F(x) = −λ .

Using the boundary conditions, we get

F'' + λF = 0,   F(0) = F(L) = 0 ,
G'' + λc²G = 0 .

Nontrivial solutions of the first equation occur at λ = λn = n²π²/L², and they are Fn(x) = sin(nπx/L). The second equation takes the form

G'' + (n²π²/L²) c² G = 0 .

Its general solution is

Gn(t) = bn cos(nπct/L) + Bn sin(nπct/L) ,

where bn and Bn are arbitrary constants. The function

u(x, t) = Σ_{n=1}^∞ Fn(x)Gn(t) = Σ_{n=1}^∞ [bn cos(nπct/L) + Bn sin(nπct/L)] sin(nπx/L)

satisfies the wave equation and the boundary conditions. It remains to satisfy the initial conditions. Compute

u(x, 0) = Σ_{n=1}^∞ bn sin(nπx/L) = f(x) ,

for which we choose

(6.1)  bn = (2/L) ∫_0^L f(x) sin(nπx/L) dx ,

and

ut(x, 0) = Σ_{n=1}^∞ Bn (nπc/L) sin(nπx/L) = g(x) ,

for which we choose

(6.2)  Bn = (2/(nπc)) ∫_0^L g(x) sin(nπx/L) dx .

Answer:

u(x, t) = Σ_{n=1}^∞ [bn cos(nπct/L) + Bn sin(nπct/L)] sin(nπx/L) ,

with the bn's computed by (6.1), and the Bn's by (6.2).
We see that the motion of the string is periodic in time, similar to the
harmonic motion of a spring that we studied before. This is understandable,
because we did not account for dissipation of energy in our model.
Example Solve

utt = 9uxx               for 0 < x < π, and t > 0
u(0, t) = u(π, t) = 0    for t > 0
u(x, 0) = 2 sin x        for 0 < x < π
ut(x, 0) = 0             for 0 < x < π .

Here c = 3 and L = π. Because g(x) = 0, all Bn = 0, while the bn's are the Fourier sine series coefficients of 2 sin x, i.e., b1 = 2, and all other bn = 0.

Answer. u(x, t) = 2 cos 3t sin x.
To interpret this answer, we use a trig identity to write

u(x, t) = sin(x − 3t) + sin(x + 3t) .

The graph of y = sin(x − 3t) in the xy-plane is obtained by translating the graph of sin x by 3t units to the right, i.e., we have a wave traveling to the right with speed 3. Similarly, the graph of sin(x + 3t) is a wave traveling to the left with speed 3. The solution is the superposition of these two waves. For the general wave equation, c gives the wave speed.
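A quick sanity check in Mathematica (mine, not part of the text): the answer should satisfy the equation and all four side conditions.

u[x_, t_] = 2 Cos[3 t] Sin[x];
Simplify[D[u[x, t], {t, 2}] - 9 D[u[x, t], {x, 2}]]        (* returns 0 *)
{u[0, t], u[Pi, t], u[x, 0], Derivative[0, 1][u][x, 0]}    (* {0, 0, 2 Sin[x], 0} *)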
7.6.1 Non-homogeneous problems
Let us solve

utt − 4uxx = x     for 0 < x < 3, and t > 0
u(0, t) = 1        for t > 0
u(3, t) = 2        for t > 0
u(x, 0) = 0        for 0 < x < 3
ut(x, 0) = 1       for 0 < x < 3 .
This problem does not fit the above pattern, because it is non-homogeneous. Indeed, the x term on the right makes the equation non-homogeneous, and the boundary conditions are non-homogeneous (non-zero) too. We look for a solution in the form

u(x, t) = U(x) + v(x, t) .

We ask the function U(x) to take care of all of our problems, i.e., to satisfy

−4U'' = x ,   U(0) = 1, U(3) = 2 .

Integrating twice, we find

U(x) = −x³/24 + 17x/24 + 1 .
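One can also let Mathematica do this small integration; a sketch of a check (mine, not in the text):

DSolve[{-4 U''[x] == x, U[0] == 1, U[3] == 2}, U[x], x]
(* returns U(x) -> 1 + 17 x/24 - x^3/24 *)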
Then the function v(x, t) = u(x, t) − U(x) satisfies

vtt − 4vxx = 0                          for 0 < x < 3, and t > 0
v(0, t) = 0                             for t > 0
v(3, t) = 0                             for t > 0
v(x, 0) = −U(x) = x³/24 − 17x/24 − 1    for 0 < x < 3
vt(x, 0) = 1                            for 0 < x < 3 .
This is a homogeneous problem, of the type we considered before! Here c = 2 and L = 3. Separation of variables, as above, gives

v(x, t) = Σ_{n=1}^∞ [bn cos(2nπt/3) + Bn sin(2nπt/3)] sin(nπx/3) ,

with

bn = (2/3) ∫_0^3 (x³/24 − 17x/24 − 1) sin(nπx/3) dx = −2/(nπ) + [(27 + 8n²π²)/(2n³π³)] (−1)ⁿ ,

Bn = (1/(nπ)) ∫_0^3 sin(nπx/3) dx = (3 − 3(−1)ⁿ)/(n²π²) .

Answer:

u(x, t) = −x³/24 + 17x/24 + 1
  + Σ_{n=1}^∞ { [−2/(nπ) + ((27 + 8n²π²)/(2n³π³)) (−1)ⁿ] cos(2nπt/3) + [(3 − 3(−1)ⁿ)/(n²π²)] sin(2nπt/3) } sin(nπx/3) .
In the non-homogeneous wave equation

(6.3)  utt − c² uxx = f(x, t)

the term f(x, t) gives the acceleration due to an external force applied to the string. Indeed, the ma = f equation for an element of the string takes the form

ρ∆x utt = T (ux(x + ∆x, t) − ux(x, t)) + ρ∆x f(x, t) .

Dividing by ρ∆x, and letting ∆x → 0 (as before), we obtain (6.3).

Non-homogeneous problems for the heat equation are solved similarly.
Example Let us now solve the problem

ut − 2uxx = 1    for 0 < x < 1, and t > 0
u(x, 0) = x      for 0 < x < 1
u(0, t) = 0      for t > 0
u(1, t) = 3      for t > 0 .
Again, we look for a solution in the form

u(x, t) = U(x) + v(x, t) ,

with U(x) satisfying

−2U'' = 1 ,   U(0) = 0, U(1) = 3 .

Integrating, we calculate

U(x) = −x²/4 + 13x/4 .

The function v(x, t) = u(x, t) − U(x) then satisfies

vt − 2vxx = 0                        for 0 < x < 1, and t > 0
v(x, 0) = x − U(x) = x²/4 − 9x/4     for 0 < x < 1
v(0, t) = v(1, t) = 0                for t > 0 .
To solve the last problem, we begin by expanding the initial temperature in its Fourier sine series

x²/4 − 9x/4 = Σ_{n=1}^∞ bn sin nπx ,

with

bn = 2 ∫_0^1 (x²/4 − 9x/4) sin nπx dx = [−1 + (1 + 4n²π²)(−1)ⁿ] / (n³π³) .

Then

v(x, t) = Σ_{n=1}^∞ {[−1 + (1 + 4n²π²)(−1)ⁿ] / (n³π³)} e^{−2n²π²t} sin nπx .

Answer:

u(x, t) = −x²/4 + 13x/4 + Σ_{n=1}^∞ {[−1 + (1 + 4n²π²)(−1)ⁿ] / (n³π³)} e^{−2n²π²t} sin nπx .
7.6.2 Problems
I. Solve the following problems, and explain their physical significance.
1. ut = 2uxx                          for 0 < x < π, and t > 0
u(x, 0) = sin x − 3 sin x cos x       for 0 < x < π
u(0, t) = u(π, t) = 0                 for t > 0 .
Answer. u(x, t) = e^{−2t} sin x − (3/2) e^{−8t} sin 2x.

2. ut = 2uxx                          for 0 < x < 2π, and t > 0
u(x, 0) = sin x − 3 sin x cos x       for 0 < x < 2π
u(0, t) = u(2π, t) = 0                for t > 0 .
Answer. u(x, t) = e^{−2t} sin x − (3/2) e^{−8t} sin 2x.

3. ut = 5uxx                          for 0 < x < 2, and t > 0
u(x, 0) = x                           for 0 < x < 2
u(0, t) = u(2, t) = 0                 for t > 0 .
Answer. u(x, t) = Σ_{n=1}^∞ [4(−1)^{n+1}/(nπ)] e^{−5n²π²t/4} sin(nπx/2).

4. ut = 3uxx                          for 0 < x < π, and t > 0
u(x, 0) = x for 0 < x < π/2, and π − x for π/2 < x < π
u(0, t) = u(π, t) = 0                 for t > 0 .
Answer. u(x, t) = Σ_{n=1}^∞ [4/(πn²)] e^{−3n²t} sin(nπ/2) sin nx.
5. ut = uxx                           for 0 < x < 3, and t > 0
u(x, 0) = x + 2                       for 0 < x < 3
u(0, t) = u(3, t) = 0                 for t > 0 .
Answer. u(x, t) = Σ_{n=1}^∞ [(4 − 10(−1)ⁿ)/(nπ)] e^{−n²π²t/9} sin(nπx/3).

6. ut = uxx                           for 0 < x < 3, and t > 0
u(x, 0) = x + 2                       for 0 < x < 3
ux(0, t) = ux(3, t) = 0               for t > 0 .
Answer. u(x, t) = 7/2 + Σ_{n=1}^∞ [6(−1 + (−1)ⁿ)/(n²π²)] e^{−n²π²t/9} cos(nπx/3).

7. ut = 2uxx                          for 0 < x < π, and t > 0
u(x, 0) = cos⁴x                       for 0 < x < π
ux(0, t) = ux(π, t) = 0               for t > 0 .
Answer. u(x, t) = 3/8 + (1/2) e^{−8t} cos 2x + (1/8) e^{−32t} cos 4x.

8. ut = 3uxx + u                      for 0 < x < 2, and t > 0
u(x, 0) = 1 − x                       for 0 < x < 2
ux(0, t) = ux(2, t) = 0               for t > 0 .
Answer. u(x, t) = Σ_{k=1}^∞ [8/(π²(2k−1)²)] e^{(1 − 3(2k−1)²π²/4) t} cos((2k−1)πx/2).

9. uxx + uyy = 0                      for 0 < x < 2, and 0 < y < 3
u(x, 0) = u(x, 3) = 5                 for 0 < x < 2
u(0, y) = u(2, y) = 5                 for 0 < y < 3 .

10. uxx + uyy = 0                     for 0 < x < 2, and 0 < y < 3
u(x, 0) = u(x, 3) = 5                 for 0 < x < 2
u(0, y) = u(2, y) = 0                 for 0 < y < 3 .

11. uxx + uyy = 0                     for 0 < x < π, and 0 < y < 1
u(x, 0) = 0                           for 0 < x < π
u(x, 1) = 3 sin 2x                    for 0 < x < π
u(0, y) = u(π, y) = 0                 for 0 < y < 1 .
Answer. u(x, y) = (3/sinh 2) sin 2x sinh 2y.

12. uxx + uyy = 0                     for 0 < x < 2π, and 0 < y < 2
u(x, 0) = sin x                       for 0 < x < 2π
u(x, 2) = 0                           for 0 < x < 2π
u(0, y) = 0                           for 0 < y < 2
u(2π, y) = y                          for 0 < y < 2 .
Answer. u(x, y) = −(1/sinh 2) sinh(y − 2) sin x + Σ_{n=1}^∞ [4(−1)^{n+1}/(nπ sinh nπ²)] sinh(nπx/2) sin(nπy/2).

13. utt − 4uxx = 0                    for 0 < x < π, and t > 0
u(x, 0) = sin 2x                      for 0 < x < π
ut(x, 0) = −4 sin 2x                  for 0 < x < π
u(0, t) = u(π, t) = 0                 for t > 0 .
Answer. u(x, t) = cos 4t sin 2x − sin 4t sin 2x.

14. utt − 4uxx = 0                    for 0 < x < 1, and t > 0
u(x, 0) = 0                           for 0 < x < 1
ut(x, 0) = x                          for 0 < x < 1
u(0, t) = u(1, t) = 0                 for t > 0 .
Answer. u(x, t) = Σ_{n=1}^∞ [(−1)^{n+1}/(n²π²)] sin 2nπt sin nπx.

15. utt − 4uxx = 0                    for 0 < x < 1, and t > 0
u(x, 0) = −3                          for 0 < x < 1
ut(x, 0) = x                          for 0 < x < 1
u(0, t) = u(1, t) = 0                 for t > 0 .
Answer. u(x, t) = Σ_{n=1}^∞ [(6/(nπ))((−1)ⁿ − 1) cos 2nπt + ((−1)^{n+1}/(n²π²)) sin 2nπt] sin nπx.
II. Solve the following non-homogeneous problems. You may leave the integrals unevaluated.
1. ut = 2uxx       for 0 < x < π, and t > 0
u(x, 0) = 0        for 0 < x < π
u(0, t) = 0        for t > 0
u(π, t) = 1        for t > 0 .
Hint: U(x) = x/π.

2. ut = 2uxx + 4x  for 0 < x < 1, and t > 0
u(x, 0) = 0        for 0 < x < 1
u(0, t) = 0        for t > 0
u(1, t) = 0        for t > 0 .
Hint: U(x) = (1/3)(x − x³).

3. utt = 4uxx + x  for 0 < x < 4, and t > 0
u(x, 0) = x        for 0 < x < 4
ut(x, 0) = 0       for 0 < x < 4
u(0, t) = 1        for t > 0
u(4, t) = 3        for t > 0 .
Hint: U(x) = 1 + 7x/6 − x³/24.

7.7 Calculating the Earth's temperature

7.7.1 The complex form of the Fourier series
Recall that a function f(x) defined on (−L, L) can be represented by the Fourier series

f(x) = a0 + Σ_{n=1}^∞ [an cos(nπx/L) + bn sin(nπx/L)] ,

with

a0 = (1/(2L)) ∫_{−L}^{L} f(x) dx ,
an = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx ,
bn = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx .

Using Euler's formula,

f(x) = a0 + Σ_{n=1}^∞ [ an (e^{inπx/L} + e^{−inπx/L})/2 + bn (e^{inπx/L} − e^{−inπx/L})/(2i) ] .

Combining the like terms, we rewrite this as

(7.1)  f(x) = Σ_{n=−∞}^∞ cn e^{inπx/L} ,

where c0 = a0, and for n > 0

cn = an/2 − i bn/2 ,   c_{−n} = an/2 + i bn/2 .

We see that c̄m = c_{−m}, for any integer m, where the bar denotes the complex conjugate.

Using the formulas for an and bn, and Euler's formula, we have for n > 0

cn = (1/L) ∫_{−L}^{L} f(x) [cos(nπx/L) − i sin(nπx/L)]/2 dx = (1/(2L)) ∫_{−L}^{L} f(x) e^{−inπx/L} dx .

The formula for c_{−n} is exactly the same. The series (7.1), with the coefficients

cn = (1/(2L)) ∫_{−L}^{L} f(x) e^{−inπx/L} dx ,   n = 0, ±1, ±2, · · · ,

is called the complex form of the Fourier series.
Example Find the complex form of the Fourier series of

f(x) = −1 for −2 < x < 0,  f(x) = 1 for 0 < x < 2,   on (−2, 2).

With help from Euler's formula, we compute

cn = (1/4) ∫_{−2}^{0} (−1) e^{−inπx/2} dx + (1/4) ∫_{0}^{2} e^{−inπx/2} dx = (i/(nπ)) (−1 + (−1)ⁿ) ,

and so

f(x) = Σ_{n=−∞}^∞ (i/(nπ)) (−1 + (−1)ⁿ) e^{inπx/2} ,

where the n = 0 term is absent (here c0 = a0 = 0).
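The coefficients above can be checked symbolically in Mathematica; the following is a sketch of mine, not part of the text.

c[n_] = 1/4 Integrate[-Exp[-I n Pi x/2], {x, -2, 0}] + 1/4 Integrate[Exp[-I n Pi x/2], {x, 0, 2}];
Simplify[c[n], Element[n, Integers]]   (* I (-1 + (-1)^n)/(n Pi) *)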
7.7.2 Temperatures inside the Earth and wine cellars
Suppose the average daily temperature is given by the function f(t), 0 ≤ t ≤ 365. We assume this function to be periodic, with the period T = 365. What is the temperature x cm inside the Earth, when t = today? We assume that x is not large, so that we may ignore the geothermal effects. We direct the x-axis inside the Earth, with x = 0 corresponding to the Earth's surface, and solve the heat equation for u = u(x, t):

(7.2)  ut = kuxx ,  u(0, t) = f(t)   for x > 0, and −∞ < t < ∞ .

Geologists tell us that k = 2 · 10⁻³ cm²/sec. Observe that, unlike what we had before, the "initial condition" is now prescribed along the t-axis, and the "evolution" happens along the x-axis. This is sometimes referred to as a sideways heat equation. We represent f(t) by its complex Fourier series (L = T/2 for T-periodic functions):

f(t) = Σ_{n=−∞}^∞ cn e^{i2nπt/T} .

Similarly, we expand the solution u = u(x, t):

u(x, t) = Σ_{n=−∞}^∞ un(x) e^{i2nπt/T} .

The coefficients un(x) are now complex valued functions of x. Plugging this into (7.2),

(7.3)  un'' = pn² un ,   un(0) = cn ,   with pn² = 2inπ/(kT) .

Depending on whether n is positive or negative, we have

2in = (1 ± i)² |n| ,

and then

pn = (1 ± i)qn ,   with qn = √(|n|π/(kT)) .

(It is plus in case n > 0, and minus for n < 0.) The solution of (7.3) is

un(x) = an e^{(1±i)qn x} + bn e^{−(1±i)qn x} .

We must set an = 0 here, to avoid solutions whose absolute value becomes infinite as x → ∞. Then

un(x) = cn e^{−(1±i)qn x} .

Similarly, u0(x) = c0. We have

(7.4)  u(x, t) = Σ_{n=−∞}^∞ cn e^{−qn x} e^{i[2nπt/T ∓ qn x]} .

We write for n > 0

cn = |cn| e^{iγn} ,

and transform (7.4) as follows:

u(x, t) = c0 + Σ_{n=1}^∞ e^{−qn x} [ cn e^{i(2nπt/T − qn x)} + c_{−n} e^{−i(2nπt/T − qn x)} ]
  = c0 + Σ_{n=1}^∞ e^{−qn x} [ cn e^{i(2nπt/T − qn x)} + conjugate of cn e^{i(2nπt/T − qn x)} ]
  = c0 + Σ_{n=1}^∞ 2|cn| e^{−qn x} cos(2nπt/T + γn − qn x) .

On the last step we used that c̄n = c_{−n}, and that z + z̄ = 2 Re(z).

We see that the amplitude of the n-th wave is damped exponentially with x, and that this damping increases with n, i.e.,

u(x, t) ≈ c0 + 2|c1| e^{−q1 x} cos(2πt/T + γ1 − q1 x) .

We see that when x changes, the cosine term is a shift of the function cos(2πt/T), i.e., a wave. If we select x so that γ1 − q1 x = −π, we have a complete phase shift, i.e., the coolest temperatures occur in summer, and the warmest in winter. This x is a good depth for a wine cellar. Not only are the seasonal variations very small at that depth, but they will also counteract any influence of air flow into the cellar.

The material of this section is based on the book of A. Sommerfeld [8]. I became aware of this application through the wonderful lectures of Henry P. McKean at Courant Institute, NYU, in the late seventies.
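For a rough feel for the numbers, here is a back-of-the-envelope sketch in Mathematica (mine, not from the text), assuming γ1 = 0 so that the full phase shift occurs at x = π/q1:

k = 2.*10^-3;            (* cm^2/sec, as quoted above *)
T = 365.*24*3600;        (* one year, in seconds *)
q1 = Sqrt[Pi/(k T)];     (* damping rate, per cm *)
Pi/q1                    (* cellar depth in cm; roughly 4.5 meters *)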
7.8 Laplace's equation in circular domains
Polar coordinates (r, θ) will be appropriate for circular domains, and it turns out that

(8.1)  ∆u = uxx + uyy = urr + (1/r) ur + (1/r²) uθθ .

To justify (8.1), we begin by writing

u(x, y) = u(r(x, y), θ(x, y)) ,

with

(8.2)  r = r(x, y) = √(x² + y²) ,   θ = θ(x, y) = arctan(y/x) .

By the chain rule,

ux = ur rx + uθ θx ,
uxx = urr rx² + 2urθ rx θx + uθθ θx² + ur rxx + uθ θxx .

Similarly,

uyy = urr ry² + 2urθ ry θy + uθθ θy² + ur ryy + uθ θyy ,

and so

uxx + uyy = urr (rx² + ry²) + 2urθ (rx θx + ry θy) + uθθ (θx² + θy²) + ur (rxx + ryy) + uθ (θxx + θyy) .

Straightforward differentiation, using (8.2), shows that

rx² + ry² = 1 ,   rx θx + ry θy = 0 ,   θx² + θy² = 1/r² ,   rxx + ryy = 1/r ,   θxx + θyy = 0 ,

and the formula (8.1) follows.

We now consider a circular plate: x² + y² < R², or r < R in polar coordinates (R > 0 is its radius). The boundary of the plate consists of the points (R, θ), with 0 ≤ θ ≤ 2π. Assume that the temperatures at the boundary points u(R, θ) are prescribed by a given function f(θ), of period 2π. What are the steady state temperatures u(r, θ) inside the plate? We assume that the function u(r, θ) has period 2π in θ.

We need to solve

∆u = urr + (1/r) ur + (1/r²) uθθ = 0   for r < R ,
u(R, θ) = f(θ) .
We perform separation of variables, looking for the solution in the form u(r, θ) = F(r)G(θ). Plugging this into the equation,

F''(r)G(θ) + (1/r) F'(r)G(θ) = −(1/r²) F(r)G''(θ) .

We multiply both sides by r², and divide by F(r)G(θ):

[r² F''(r) + r F'(r)] / F(r) = −G''(θ)/G(θ) = λ .

This gives

G'' + λG = 0 ,   G(θ) is 2π-periodic ,
r² F''(r) + r F'(r) − λF(r) = 0 .

The first problem has non-trivial solutions when λ = λn = n², and when λ = λ0 = 0, and they are

Gn(θ) = An cos nθ + Bn sin nθ ,   G0 = A0 ,

with arbitrary constants A0, An and Bn. Then the second equation becomes

r² F''(r) + r F'(r) − n² F(r) = 0 .

This is an Euler equation! Its general solution is

(8.3)  F(r) = c1 rⁿ + c2 r⁻ⁿ .

We have to select c2 = 0, to avoid infinite temperatures at r = 0, i.e., we select Fn(r) = rⁿ. When n = 0, the general solution is F(r) = c1 ln r + c2. Again, we select c1 = 0, and F0(r) = 1. The function

u(r, θ) = F0(r)G0(θ) + Σ_{n=1}^∞ Fn(r)Gn(θ) = A0 + Σ_{n=1}^∞ rⁿ (An cos nθ + Bn sin nθ)

satisfies the Laplace equation for r < R. Turning to the boundary condition,

u(R, θ) = A0 + Σ_{n=1}^∞ Rⁿ (An cos nθ + Bn sin nθ) = f(θ) .

Expand f(θ) in its Fourier series

(8.4)  f(θ) = a0 + Σ_{n=1}^∞ (an cos nθ + bn sin nθ) .

Then we need to select A0 = a0, An Rⁿ = an and Bn Rⁿ = bn, i.e., An = an/Rⁿ and Bn = bn/Rⁿ. We conclude that

u(r, θ) = a0 + Σ_{n=1}^∞ (r/R)ⁿ (an cos nθ + bn sin nθ) .
Example Solve

∆u = 0       for x² + y² < 4 ,
u = x² − 3y  on x² + y² = 4 .

Using that x = 2 cos θ and y = 2 sin θ on the boundary, we have

x² − 3y = 4 cos²θ − 6 sin θ = 2 + 2 cos 2θ − 6 sin θ .

Then

u(r, θ) = 2 + 2 (r/2)² cos 2θ − 6 (r/2) sin θ = 2 + (1/2) r² cos 2θ − 3r sin θ .

In Cartesian coordinates this solution is (using that cos 2θ = cos²θ − sin²θ)

u(x, y) = 2 + (1/2)(x² − y²) − 3y .
Next, we solve the exterior problem

∆u = urr + (1/r) ur + (1/r²) uθθ = 0   for r > R ,
u(R, θ) = f(θ) .

Physically, we have a plate with the disc r < R removed. Outside this disc the plate is so large that we can think of it as extending to infinity. We follow the same steps, and in (8.3) we select c1 = 0 to avoid infinite temperatures as r → ∞. We conclude that

u(r, θ) = a0 + Σ_{n=1}^∞ (R/r)ⁿ (an cos nθ + bn sin nθ) ,

with the coefficients from the Fourier series of f(θ) in (8.4).
7.9 Sturm-Liouville problems
Let us recall the eigenvalue problem

y'' + λy = 0 ,   0 < x < L ,
y(0) = y(L) = 0

on some interval (0, L). Its eigenfunctions sin(nπx/L), n = 1, 2, · · ·, are the building blocks of the Fourier sine series on (0, L). These eigenfunctions are orthogonal on (0, L), i.e., ∫_0^L sin(nπx/L) sin(mπx/L) dx = 0 for any m ≠ n. Similarly, the eigenfunctions of

y'' + λy = 0 ,   0 < x < L ,
y'(0) = y'(L) = 0 ,

namely 1, cos(nπx/L), n = 1, 2, · · ·, give rise to the Fourier cosine series on (0, L). It turns out that, under some conditions, the solutions of an eigenvalue problem lead to their own type of Fourier series on (0, L).

On an interval (a, b) we consider the eigenvalue problem

(9.1)  (p(x)y')' + λr(x)y = 0 ,

together with the boundary conditions

(9.2)  αy(a) + βy'(a) = 0 ,   γy(b) + δy'(b) = 0 .

The given differentiable function p(x) and continuous function r(x) are assumed to be positive on [a, b]. The boundary conditions are called separated, with the first one at the left end x = a, and the other one at the right end x = b. The constants α, β, γ and δ are given; however, we cannot allow both constants in the same condition to be zero, i.e., we assume that α² + β² ≠ 0 and γ² + δ² ≠ 0. Recall that by eigenfunctions we mean non-trivial solutions of (9.1), satisfying the boundary conditions in (9.2).

Theorem 1 Assume that y(x) is an eigenfunction corresponding to an eigenvalue λ, while z(x) is an eigenfunction corresponding to an eigenvalue µ, and λ ≠ µ. Then y(x) and z(x) are orthogonal on [a, b] with weight r(x), i.e.,

∫_a^b y(x)z(x) r(x) dx = 0 .
Proof: The eigenfunction z(x) satisfies

(9.3)  (p(x)z')' + µr(x)z = 0 ,
αz(a) + βz'(a) = 0 ,   γz(b) + δz'(b) = 0 .

We take the equation (9.1) multiplied by z(x), and subtract from it the equation (9.3) multiplied by y(x), obtaining

(p(x)y')' z(x) − (p(x)z')' y(x) + (λ − µ)r(x)y(x)z(x) = 0 .

We can rewrite this as

[p(y'z − yz')]' + (λ − µ)r(x)y(x)z(x) = 0 .

Integrate this over [a, b]:

(9.4)  p(y'z − yz') |_a^b + (λ − µ) ∫_a^b y(x)z(x) r(x) dx = 0 .

We shall show that

(9.5)  p(b)(y'(b)z(b) − y(b)z'(b)) = 0 ,
(9.6)  p(a)(y'(a)z(a) − y(a)z'(a)) = 0 .

This means that the first term in (9.4) is zero. Then the second term in (9.4) is also zero, and therefore ∫_a^b y(x)z(x) r(x) dx = 0, because λ − µ ≠ 0.

We shall justify (9.5); the proof of (9.6) is similar. Consider first the case when δ = 0. Then the corresponding boundary conditions simplify to read y(b) = 0, z(b) = 0, and (9.5) follows. In the other case, when δ ≠ 0, we can express y'(b) = −(γ/δ) y(b), z'(b) = −(γ/δ) z(b), and then

y'(b)z(b) − y(b)z'(b) = −(γ/δ) y(b)z(b) + (γ/δ) y(b)z(b) = 0 .
Theorem 2 The eigenvalues of (9.1) and (9.2) are real numbers.

Proof: Assume, on the contrary, that an eigenvalue λ is not real, i.e., λ̄ ≠ λ, and the corresponding eigenfunction y(x) is complex valued. Taking the complex conjugate of (9.1) and (9.2), we get

(p(x)ȳ')' + λ̄r(x)ȳ = 0 ,
αȳ(a) + βȳ'(a) = 0 ,   γȳ(b) + δȳ'(b) = 0 .

It follows that λ̄ is also an eigenvalue, and ȳ is the corresponding eigenfunction. By the preceding theorem,

∫_a^b y(x)ȳ(x) r(x) dx = ∫_a^b |y(x)|² r(x) dx = 0 .

The second integral involves a non-negative function, and it can be zero only if y(x) ≡ 0. But an eigenfunction cannot be zero. We have a contradiction, which was caused by the assumption that λ is not real. So, only real eigenvalues are possible. ♦
Example On the interval (0, π) we consider the eigenvalue problem

y'' + λy = 0 ,
y(0) = 0 ,   y'(π) − y(π) = 0 .

Case 1. λ < 0. We may write λ = −k², with k > 0. The general solution is y = c1 e^{−kx} + c2 e^{kx}. Using the boundary conditions, we compute c1 = c2 = 0, i.e., y = 0, and we have no negative eigenvalues.

Case 2. λ = 0. The general solution is y = c1 x + c2. Again we have c1 = c2 = 0, i.e., λ = 0 is not an eigenvalue.

Case 3. λ > 0. We may write λ = k², with k > 0. The general solution is y = c1 cos kx + c2 sin kx, and c1 = 0 by the first boundary condition. The second boundary condition implies that

c2 (k cos kπ − sin kπ) = 0 .

We need c2 ≠ 0 to get a non-trivial solution; therefore the quantity in the bracket must be zero, which implies that

tan kπ = k .

This equation has infinitely many solutions 0 < k1 < k2 < k3 < · · ·, as can be seen by drawing the graphs of y = k and y = tan kπ in the ky-plane. We obtain infinitely many eigenvalues λi = ki², and the corresponding eigenfunctions yi = sin ki x, i = 1, 2, 3, · · ·. (Observe that the −ki's are also solutions of this equation, but they lead to the same eigenvalues and eigenfunctions.) Using that tan ki π = ki, we verify that for all i ≠ j

∫_0^π sin ki x sin kj x dx = [1/(ki² − kj²)] (kj cos kj π sin ki π − ki cos ki π sin kj π) = 0 ,

which serves to illustrate the first theorem above.
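The roots ki are easy to approximate numerically; for instance, in Mathematica (a sketch of mine, not in the text; the starting points n + 0.4 are rough guesses placed before each pole of tan kπ):

Table[k /. FindRoot[Tan[k Pi] == k, {k, n + 0.4}], {n, 1, 3}]
(* approximately {1.29, 2.37, 3.41} *)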
As we mentioned before, the eigenfunctions yi(x) of (9.1) and (9.2) allow us to do Fourier series, i.e., we can represent

f(x) = Σ_{j=1}^∞ cj yj(x) .

To find the coefficients, we multiply both sides by yi(x)r(x), and integrate over [a, b]:

∫_a^b f(x)yi(x)r(x) dx = Σ_{j=1}^∞ cj ∫_a^b yj(x)yi(x)r(x) dx .

By the first theorem above, for all j ≠ i the integrals on the right are zero. So the sum on the right is equal to ci ∫_a^b yi²(x)r(x) dx. Therefore

(9.7)  ci = [∫_a^b f(x)yi(x)r(x) dx] / [∫_a^b yi²(x)r(x) dx] .

For the above example, the Fourier series is

f(x) = Σ_{i=1}^∞ ci sin ki x ,

with

ci = [∫_0^π f(x) sin ki x dx] / [∫_0^π sin² ki x dx] .

Using symbolic software, like Mathematica, it is easy to compute the ki's approximately, and the integrals for the ci's.
7.9.1 Fourier-Bessel series

Let F = F(r) be a solution of the following eigenvalue problem on the interval (0, R):

(9.8)  F'' + (1/r) F' + λF = 0 ,
F'(0) = 0 ,   F(R) = 0 .

Observe that we can rewrite the equation in the form (9.1) as

(rF')' + λrF = 0 ,

so that any two eigenfunctions of (9.8) are orthogonal with weight r.

We shall reduce the equation in (9.8) to Bessel's equation of order zero. To this end, we make a change of variables r → x, by letting r = x/√λ. By the chain rule,

Fr = Fx √λ ,   Frr = Fxx λ .

Then the problem (9.8) becomes

λFxx + (√λ/x) √λ Fx + λF = 0 ,
Fx(0) = 0 ,   F(√λ R) = 0 .

We divide the equation by λ, and use primes again to denote the derivatives in x:

F'' + (1/x) F' + F = 0 ,
F'(0) = 0 ,   F(√λ R) = 0 .

This equation is Bessel's equation of order zero. The Bessel function J0(x), which we considered before, satisfies this equation, as well as the condition F'(0) = 0. Recall that the function J0(x) has infinitely many positive roots r1 < r2 < r3 < · · ·. In order to satisfy the second boundary condition, we need

√λ R = ri ,

which, after returning to the original variable r (observe that the solution is F(√λ r)), gives us the eigenvalues and eigenfunctions of (9.8):

λi = ri²/R² ,   Fi(r) = J0(ri r / R) .

The Fourier-Bessel series is then the expansion using the eigenfunctions Fi(r),

(9.9)  f(r) = Σ_{i=1}^∞ ci J0(ri r / R) ,

and the ci are (in view of (9.7))

(9.10)  ci = [∫_0^R f(r) J0(ri r/R) r dr] / [∫_0^R J0²(ri r/R) r dr] .
7.9.2 Cooling of a cylindrical tank

Suppose we have a cylindrical tank x² + y² ≤ R², 0 ≤ z ≤ H, whose temperatures are independent of z, i.e., u = u(x, y, t). The heat equation is then

ut = k∆u = k (uxx + uyy) .

We assume that the boundary of the cylinder is kept at zero temperature, while the initial temperatures are prescribed to be f(r). Because the initial temperatures do not depend on θ, it is natural to expect that u is independent of θ, i.e., u = u(r, t). Then

∆u = urr + (1/r) ur + (1/r²) uθθ = urr + (1/r) ur .

We need to solve the following problem

ut = k (urr + (1/r) ur)          for r < R, and t > 0
ur(0, t) = 0 ,   u(R, t) = 0     for t > 0
u(r, 0) = f(r) ,

with a given function f(r). We have added the condition ur(0, t) = 0, because we expect the temperatures to have a critical point in the middle of the tank, for all times t. We separate variables, writing u(r, t) = F(r)G(t). We plug this into our equation, then divide both sides by kF(r)G(t):

F(r)G'(t) = k (F''(r) + (1/r) F'(r)) G(t) ,

G'(t)/(kG(t)) = (F''(r) + (1/r) F'(r)) / F(r) = −λ ,

which gives

F'' + (1/r) F' + λF = 0 ,   F'(0) = 0 ,   F(R) = 0 ,

(9.11)  G'(t)/(kG(t)) = −λ .

The eigenvalue problem for F(r) we have solved before, with λi = ri²/R² and Fi(r) = J0(ri r/R). Using λ = λi in (9.11), we compute

Gi(t) = ci e^{−k ri² t / R²} .

The function

(9.12)  u(r, t) = Σ_{i=1}^∞ ci e^{−k ri² t / R²} J0(ri r / R)

satisfies our equation, and the boundary conditions. The initial condition

u(r, 0) = Σ_{i=1}^∞ ci J0(ri r / R) = f(r)

tells us that we must choose the ci's to be the coefficients of the Fourier-Bessel series, given by (9.10). Conclusion: the series in (9.12), with the ci's computed by (9.10), gives the solution to our problem.
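Here is a Mathematica sketch (mine, not part of the text) computing the first few coefficients (9.10) numerically; the radius R = 1 and the sample data f(r) = 1 − r² are my choices for illustration.

R = 1.; f[r_] := 1 - r^2;
root[i_] := N[BesselJZero[0, i]];       (* i-th positive root of J0 *)
c[i_] := NIntegrate[f[r] BesselJ[0, root[i] r/R] r, {r, 0, R}]/
         NIntegrate[BesselJ[0, root[i] r/R]^2 r, {r, 0, R}];
Table[c[i], {i, 1, 3}]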
7.10 Green's function
We wish to solve the non-homogeneous boundary value problem

(10.1)  (p(x)y')' + r(x)y = f(x) ,
y(a) = 0 ,   y(b) = 0 .

Here p(x), r(x) and f(x) are given functions, r(x) > 0 on [a, b]. We shall consider the corresponding homogeneous equation

(10.2)  (p(x)y')' + r(x)y = 0 ,

and the corresponding homogeneous boundary value problem

(10.3)  (p(x)y')' + r(x)y = 0 ,
y(a) = 0 ,   y(b) = 0 .

Recall the concept of the Wronskian determinant of two functions y1(x) and y2(x), or Wronskian, for short:

W(x) = det [ y1(x)  y2(x) ; y1'(x)  y2'(x) ] = y1(x)y2'(x) − y1'(x)y2(x) .

Lemma 7.10.1 Let y1(x) and y2(x) be any two solutions of the homogeneous equation (10.2). Then p(x)W(x) is a constant.

Proof: We need to show that (p(x)W(x))' = 0. Compute

(p(x)W(x))' = y1' p(x)y2' + y1 (p(x)y2')' − y1' p(x)y2' − (p(x)y1')' y2
  = y1 (p(x)y2')' − (p(x)y1')' y2 = −r(x)y1 y2 + r(x)y1 y2 = 0 .

On the last step we expressed (p(x)y1')' = −r(x)y1 and (p(x)y2')' = −r(x)y2, by using the equation (10.2), which both y1 and y2 satisfy. ♦
We make the following fundamental assumption: the homogeneous boundary value problem (10.3) has only the trivial solution y = 0. Define y1(x) to be a non-trivial solution of the homogeneous equation (10.2) together with the condition y(a) = 0 (which can be achieved, e.g., by adding a second initial condition y'(a) = 1). By our fundamental assumption, y1(b) ≠ 0. Similarly, we define y2(x) to be a non-trivial solution of the homogeneous equation (10.2) together with the condition y(b) = 0. By the fundamental assumption, y2(a) ≠ 0. The functions y1(x) and y2(x) form a fundamental set of the homogeneous equation (10.2) (they are not constant multiples of one another). To find a solution of the non-homogeneous equation (10.1), we use the variation of parameters method, i.e., we look for a solution in the form

(10.4)  y(x) = u1(x)y1(x) + u2(x)y2(x) ,

with the functions u1(x) and u2(x) to be chosen. We shall require these functions to satisfy

(10.5)  u1(b) = 0 ,   u2(a) = 0 .

Then y(x) in (10.4) satisfies our boundary conditions y(a) = y(b) = 0. We now put the equation (10.1) into the form we considered before:

y'' + (p'(x)/p(x)) y' + (r(x)/p(x)) y = f(x)/p(x) .

Then by the formulas we had,

(10.6)  u1'(x) = − y2(x)f(x)/(p(x)W(x)) = − y2(x)f(x)/K ,
(10.7)  u2'(x) = y1(x)f(x)/(p(x)W(x)) = y1(x)f(x)/K ,

where W is the Wronskian of y1(x) and y2(x), and K denotes the constant that p(x)W(x) is equal to.

Integrating (10.7), and using the condition u2(a) = 0, we get

u2(x) = ∫_a^x y1(ξ)f(ξ)/K dξ .

Similarly, integrating (10.6), and using the condition u1(b) = 0, we get

u1(x) = ∫_x^b y2(ξ)f(ξ)/K dξ .

Using these functions in (10.4), we get the solution of our problem

(10.8)  y(x) = y1(x) ∫_x^b y2(ξ)f(ξ)/K dξ + y2(x) ∫_a^x y1(ξ)f(ξ)/K dξ .

It is customary to define Green's function

(10.9)  G(x, ξ) = { y1(x)y2(ξ)/K   for a ≤ x ≤ ξ ;   y2(x)y1(ξ)/K   for ξ ≤ x ≤ b } ,

so that the solution (10.8) can be written as

(10.10)  y(x) = ∫_a^b G(x, ξ)f(ξ) dξ .
Example Find Green's function and the solution of the problem

y'' + y = f(x) ,
y(0) = 0 ,   y(1) = 0 .

The function sin(x − a) solves the corresponding homogeneous equation, for any constant a. Therefore, we may take y1(x) = sin x and y2(x) = sin(x − 1). Compute

W = y1(x)y2'(x) − y1'(x)y2(x) = sin x cos(x − 1) − cos x sin(x − 1) = sin 1 ,

which follows by using that cos(x − 1) = cos x cos 1 + sin x sin 1 and sin(x − 1) = sin x cos 1 − cos x sin 1. Then

G(x, ξ) = { sin x sin(ξ − 1)/sin 1   for 0 ≤ x ≤ ξ ;   sin(x − 1) sin ξ/sin 1   for ξ ≤ x ≤ 1 } ,

and the solution is

y(x) = ∫_0^1 G(x, ξ)f(ξ) dξ .
Example Find Green's function and the solution of the problem

x²y'' − 2xy' + 2y = f(x) ,
y(1) = 0 ,   y(2) = 0 .

The corresponding homogeneous equation

x²y'' − 2xy' + 2y = 0

is an Euler equation, whose general solution is y(x) = c1 x + c2 x². We then find y1(x) = x − x², and y2(x) = 2x − x². Compute

W = y1(x)y2'(x) − y1'(x)y2(x) = (x − x²)(2 − 2x) − (1 − 2x)(2x − x²) = x² .

Turning to the construction of G(x, ξ), we observe that our equation is not in the form we considered above. To put it into the right form, we divide the equation by x²:

y'' − (2/x) y' + (2/x²) y = f(x)/x² ,

and then multiply this equation by the integrating factor µ = e^{∫(−2/x) dx} = e^{−2 ln x} = 1/x², obtaining

( y'/x² )' + (2/x⁴) y = f(x)/x⁴ .

Here p(x) = 1/x², and K = p(x)W(x) = 1. Then

G(x, ξ) = { (x − x²)(2ξ − ξ²)   for 1 ≤ x ≤ ξ ;   (2x − x²)(ξ − ξ²)   for ξ ≤ x ≤ 2 } ,

and the solution is

y(x) = ∫_1^2 G(x, ξ) f(ξ)/ξ⁴ dξ .
Finally, we observe that the same construction works for general separated boundary conditions (9.2). If y1 (x) and y2 (x) are solutions of the
homogeneous equation satisfying the boundary conditions at x = a and
x = b respectively, then the formula (10.9) gives the Green’s function.
7.11 Fourier Transform
Recall the complex form of the Fourier series. A function f(x) defined on (−L, L) can be represented by the series

(11.1)  f(x) = Σ_{n=−∞}^∞ cn e^{inπx/L} ,

with the coefficients

(11.2)  cn = (1/(2L)) ∫_{−L}^{L} f(ξ) e^{−inπξ/L} dξ ,   n = 0, ±1, ±2, · · · .

We use (11.2) in (11.1):

(11.3)  f(x) = Σ_{n=−∞}^∞ (1/(2L)) ∫_{−L}^{L} f(ξ) e^{inπ(x−ξ)/L} dξ .

Now assume that the interval (−∞, ∞) along some axis, which we call the s-axis, is subdivided into pieces, using the subdivision points sn = nπ/L. The length of each interval is ∆s = π/L. We rewrite (11.3) as

(11.4)  f(x) = (1/(2π)) Σ_{n=−∞}^∞ ∫_{−L}^{L} f(ξ) e^{i sn (x−ξ)} dξ ∆s ,

so that we can regard this as a Riemann sum of a certain integral in s over the interval (−∞, ∞). Let now L → ∞. Then ∆s → 0, and the Riemann sum in (11.4) converges to

f(x) = (1/(2π)) ∫_{−∞}^∞ ∫_{−∞}^∞ f(ξ) e^{is(x−ξ)} dξ ds .

This formula is known as the Fourier integral. Our derivation of it is made rigorous in more advanced books. We rewrite this integral as

(11.5)  f(x) = (1/√(2π)) ∫_{−∞}^∞ e^{isx} [ (1/√(2π)) ∫_{−∞}^∞ f(ξ) e^{−isξ} dξ ] ds .

We define the Fourier transform by

F(s) = (1/√(2π)) ∫_{−∞}^∞ f(ξ) e^{−isξ} dξ .

The inverse Fourier transform is then

f(x) = (1/√(2π)) ∫_{−∞}^∞ e^{isx} F(s) ds .

As with the Laplace transform, we use capital letters to denote Fourier transforms. We shall also use the operator notation F(s) = F(f(x)).

Example Let f(x) = 1 for |x| ≤ a, and f(x) = 0 for |x| > a. Then, using Euler's formula,

F(s) = (1/√(2π)) ∫_{−a}^{a} e^{−isξ} dξ = (1/√(2π)) (e^{ias} − e^{−ias})/(is) = √(2/π) sin(as)/s .

Assume that f(x) → 0, as x → ±∞. Using integration by parts,

F(f'(x)) = (1/√(2π)) ∫_{−∞}^∞ f'(ξ) e^{−isξ} dξ = is (1/√(2π)) ∫_{−∞}^∞ f(ξ) e^{−isξ} dξ = isF(s) .

It follows that

F(f''(x)) = isF(f'(x)) = −s²F(s) .
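Mathematica has a built-in FourierTransform; note that its default kernel is e^{+isx}, while the text uses e^{−isx}, so for an even function such as the box above the two agree. A sketch (mine, not from the text), with a = 1 as a sample choice:

FourierTransform[Piecewise[{{1, Abs[x] <= 1}}], x, s]
(* Sqrt[2/Pi] Sin[s]/s *)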
7.12 Problems on infinite domains

7.12.1 Evaluation of some integrals
The following integral occurs often:

(12.1)  I = ∫_{−∞}^∞ e^{−x²} dx = √π .

Indeed,

I² = (∫_{−∞}^∞ e^{−x²} dx)(∫_{−∞}^∞ e^{−y²} dy) = ∫_{−∞}^∞ ∫_{−∞}^∞ e^{−x²−y²} dx dy = ∫_0^{2π} ∫_0^∞ e^{−r²} r dr dθ = π ,

and (12.1) follows. We used polar coordinates to evaluate the double integral.

We shall show that for any x

(12.2)  F(x) = ∫_0^∞ e^{−z²} cos xz dz = (√π/2) e^{−x²/4} .

Using integration by parts, we compute (d denotes the differential)

F'(x) = ∫_0^∞ e^{−z²} (−z sin xz) dz = (1/2) ∫_0^∞ sin xz d(e^{−z²}) = −(x/2) ∫_0^∞ e^{−z²} cos xz dz ,

i.e.,

(12.3)  F'(x) = −(x/2) F(x) .

Also,

(12.4)  F(0) = √π/2 ,

in view of (12.1). Solving the differential equation (12.3) together with the initial condition (12.4), we justify (12.2).

The last integral we need is just a Laplace transform (a is a constant):

(12.5)  ∫_0^∞ e^{−sy} cos as ds = y/(y² + a²) .
7.12.2 Heat equation for −∞ < x < ∞
We solve the initial value problem

(12.6)  ut = kuxx ,   −∞ < x < ∞, t > 0 ,
u(x, 0) = f(x) ,   −∞ < x < ∞ .

Here u(x, t) gives the temperature at a point x and time t of an infinite bar. The initial temperatures are described by the given function f(x), and k > 0 is a given constant. The Fourier transform F(u(x, t)) = U(s, t) depends on s and on t, which we regard here as a parameter. Observe that F(ut(x, t)) = Ut(s, t) (as follows by writing the definition of the Fourier transform, and differentiating in t). Applying the Fourier transform, we have

Ut = −ks²U ,
U(s, 0) = F(s) .

Integrating this (we now regard s as a parameter),

U(s, t) = F(s) e^{−ks²t} .

To get the solution, we apply the inverse Fourier transform:

u(x, t) = (1/√(2π)) ∫_{−∞}^∞ e^{isx} F(s) e^{−ks²t} ds = (1/(2π)) ∫_{−∞}^∞ ∫_{−∞}^∞ e^{is(x−ξ)−ks²t} f(ξ) dξ ds
  = (1/(2π)) ∫_{−∞}^∞ [ ∫_{−∞}^∞ e^{is(x−ξ)−ks²t} ds ] f(ξ) dξ .

We denote by K the integral in the brackets, i.e., K = ∫_{−∞}^∞ e^{is(x−ξ)−ks²t} ds. To evaluate K, we use Euler's formula:

K = ∫_{−∞}^∞ [cos s(x − ξ) + i sin s(x − ξ)] e^{−ks²t} ds = 2 ∫_0^∞ cos s(x − ξ) e^{−ks²t} ds ,

because cos s(x − ξ) is an even function of s, and sin s(x − ξ) is odd. To evaluate the last integral, we make a change of variables s → z, by setting

√(kt) s = z ,

and then use the integral (12.2):

K = (2/√(kt)) ∫_0^∞ e^{−z²} cos((x − ξ)z/√(kt)) dz = (√π/√(kt)) e^{−(x−ξ)²/(4kt)} .

With K evaluated, we conclude

(12.7)  u(x, t) = (1/(2√(πkt))) ∫_{−∞}^∞ e^{−(x−ξ)²/(4kt)} f(ξ) dξ .

Let us now assume that the initial temperature is positive on some small interval (−ε, ε), and is zero outside of this interval. Then u(x, t) > 0 for all x ∈ (−∞, ∞) and t > 0, i.e., not only do the temperatures become positive far from the heat source, this happens practically instantaneously! This is known as infinite propagation speed. It points to an imperfection of our model. Observe, however, that the temperatures given by (12.7) are negligible for large |x|.
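Formula (12.7) is easy to evaluate numerically. A Mathematica sketch (mine, not from the text), with k = 1 and box initial data f = 1 on (−1/10, 1/10), zero elsewhere, both sample choices:

k = 1.;
u[x_, t_] := 1/(2 Sqrt[Pi k t]) NIntegrate[Exp[-(x - s)^2/(4 k t)], {s, -0.1, 0.1}];
u[1., 0.1]   (* small but already positive, far from the heat source *)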
7.12.3 Steady state temperatures for the upper half plane
We solve the boundary value problem
(12.8)  uxx + uyy = 0 ,   −∞ < x < ∞, y > 0 ,
u(x, 0) = f(x) ,   −∞ < x < ∞ .

Here u(x, y) gives the steady state temperature at a point (x, y) of an infinite plate, occupying the upper half of the xy-plane. The given function f(x) provides the prescribed temperatures at the boundary of the plate. We are interested in the solution that is bounded as y → ∞. (Without this assumption the solution is not unique: if u(x, y) is a solution of (12.8), so is u(x, y) + cy, for any constant c.)

Applying the Fourier transform in x, U(s, y) = (1/√(2π)) ∫_{−∞}^∞ u(ξ, y) e^{−isξ} dξ, we have (observe that F(uyy(x, y)) = Uyy(s, y))

(12.9)  Uyy − s²U = 0 ,
U(s, 0) = F(s) .

The general solution of the equation in (12.9) is

U(s, y) = c1 e^{−sy} + c2 e^{sy} .

When s > 0, we select c2 = 0, so that U (and therefore u) is bounded as y → ∞. Then c1 = F(s), giving us

U(s, y) = F(s) e^{−sy}   when s > 0 .

When s < 0, we select c1 = 0 to get a bounded solution. Then c2 = F(s), giving us

U(s, y) = F(s) e^{sy}   when s < 0 .

Combining both cases, we conclude that the bounded solution of (12.9) is

U(s, y) = F(s) e^{−|s|y} .

It remains to compute the inverse Fourier transform:

u(x, y) = (1/√(2π)) ∫_{−∞}^∞ e^{isx−|s|y} F(s) ds = (1/(2π)) ∫_{−∞}^∞ f(ξ) [ ∫_{−∞}^∞ e^{is(x−ξ)−|s|y} ds ] dξ .

We evaluate the integral in the brackets by using Euler's formula, the fact that cos s(x − ξ) is even in s and sin s(x − ξ) is odd in s, and the formula (12.5) on the last step:

∫_{−∞}^∞ e^{is(x−ξ)−|s|y} ds = 2 ∫_0^∞ e^{−sy} cos s(x − ξ) ds = 2y/((x − ξ)² + y²) .

The solution of (12.8), known as Poisson's formula, is

u(x, y) = (y/π) ∫_{−∞}^∞ f(ξ)/((x − ξ)² + y²) dξ .

7.12.4 Using Laplace transform for a semi-infinite string
A string extending for 0 < x < ∞ is initially at rest. The left end x = 0 undergoes periodic vibrations given by A sin ωt, with given constants A and ω. Find the displacements u(x, t) at any point x > 0 and time t, assuming that the displacements are bounded.

We need to solve the problem

utt = c²uxx                  for x > 0
u(x, 0) = ut(x, 0) = 0       for x > 0
u(0, t) = A sin ωt .

We take the Laplace transform of the equation in the t variable, i.e., U(x, s) = L(u(x, t)). Using the initial conditions,

(12.10)  s²U = c²Uxx ,   U(0, s) = Aω/(s² + ω²) .

The general solution of this equation is

U(x, s) = c1 e^{sx/c} + c2 e^{−sx/c} .

To get a solution bounded as x → +∞, we select c1 = 0. Then c2 = Aω/(s² + ω²) by the boundary condition in (12.10), and we have

U(x, s) = e^{−xs/c} Aω/(s² + ω²) .

Taking the inverse Laplace transform, we have the answer

u(x, t) = A u_{x/c}(t) sin(ω(t − x/c)) ,

where u_{x/c}(t) is the Heaviside step function. We see that at any point x, the solution is zero for t < x/c (the time it takes the signal to travel). For t > x/c, the motion is identical with that at x = 0, but is delayed in time by the amount x/c.
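The last inversion can also be done by Mathematica; a sketch of a check (mine, not from the text; HeavisideTheta is Mathematica's name for the step function, and the result may appear in an equivalent rewritten form):

InverseLaplaceTransform[Exp[-x s/c] A w/(s^2 + w^2), s, t]
(* A HeavisideTheta[t - x/c] Sin[w (t - x/c)] *)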
7.12.5 Problems
I. Find the complex form of the Fourier series for the following functions on the given interval.

1. f(x) = x on (−2, 2).
Answer. x = Σ_{n=−∞}^∞ [2i(−1)ⁿ/(nπ)] e^{inπx/2}.

2. f(x) = eˣ on (−1, 1).
Answer. eˣ = Σ_{n=−∞}^∞ [(−1)ⁿ (1 + inπ)(e − 1/e) / (2(1 + n²π²))] e^{inπx}.

3. f(x) = sin²x on (−π, π).
Answer. −(1/4) e^{−i2x} + 1/2 − (1/4) e^{i2x}.

4. f(x) = sin 2x cos 2x on (−π/2, π/2).
Answer. (i/4) e^{−i4x} − (i/4) e^{i4x}.

5. Suppose a real valued function f(x) is represented by its complex Fourier series on (−L, L):

f(x) = Σ_{n=−∞}^∞ cn e^{inπx/L} .

Show that c̄n = c_{−n} for all n.
II. Solve the following problems on circular domains, and describe their physical significance.

1. ∆u = 0, r < 3 ;  u(3, θ) = 4 cos²θ .
Answer. u = 2 + (2/9) r² cos 2θ = 2 + (2/9)(x² − y²).

2. ∆u = 0, r > 3 ;  u(3, θ) = 4 cos²θ .
Answer. u = 2 + (18/r²) cos 2θ = 2 + 18 (x² − y²)/(x² + y²)².

3. ∆u = 0, r < 2 ;  u(2, θ) = y² .
Answer. u = 2 − (1/2)(x² − y²).

4. ∆u = 0, r > 2 ;  u(2, θ) = y² .
Answer. u = 2 − 8 (x² − y²)/(x² + y²)².

5. ∆u = 0, r < 1 ;  u(1, θ) = cos⁴θ .

6. ∆u = 0, r < 1 ;  u(1, θ) = θ .
Hint: Extend f(θ) = θ as a 2π-periodic function, equal to θ on [0, 2π]. Then

an = (1/π) ∫_{−π}^{π} f(θ) cos nθ dθ = (1/π) ∫_0^{2π} f(θ) cos nθ dθ = (1/π) ∫_0^{2π} θ cos nθ dθ ,

and compute a0 and the bn's similarly.
Answer. u = π − Σ_{n=1}^∞ (2/n) rⁿ sin nθ.

7. ∆u = 0, r > 3 ;  u(3, θ) = θ + 2 .
Answer. u = π + 2 − 2 Σ_{n=1}^∞ (3ⁿ/n) r⁻ⁿ sin nθ.
III.

1. Find the eigenvalues and the eigenfunctions of

y'' + λy = 0 ,   y(0) + y'(0) = 0 ,   y(π) + y'(π) = 0 .

Answer. λn = n², yn = sin nx − n cos nx; and also λ = −1, with y = e^{−x}.

2. Identify the eigenvalues graphically, and find the eigenfunctions of

y'' + λy = 0 ,   y(0) + y'(0) = 0 ,   y(π) = 0 .

3. Find the eigenvalues and the eigenfunctions of (a is a constant)

y'' + ay' + λy = 0 ,   y(0) = y(L) = 0 .

Answer. λn = a²/4 + n²π²/L² ,  yn = e^{−ax/2} sin(nπx/L).

4. (i) Find the eigenvalues and the eigenfunctions of

x²y'' + 3xy' + λy = 0 ,   y(1) = y(e) = 0 .

Answer. λn = 1 + n²π² ,  yn = x⁻¹ sin(nπ ln x).

(ii) Put this equation into the form (p(x)y')' + λr(x)y = 0, and verify that the eigenfunctions are orthogonal with weight r(x).

Hint: Divide the equation by x², and verify that x³ is the integrating factor, so that p(x) = x³ and r(x) = x.

5. Show that the eigenfunctions of

y'''' + λy = 0 ,   y(0) = y'(0) = y(L) = y'(L) = 0 ,

corresponding to different eigenvalues, are orthogonal on (0, L).

Hint: y''''z − yz'''' = d/dx (y'''z − y''z' + y'z'' − yz''').

6. Consider the eigenvalue problem

(p(x)y')' + λr(x)y = 0 ,   αy(0) + βy'(0) = 0 ,   γy(π) + δy'(π) = 0 .

Assume that the given functions p(x) and r(x) are positive, while α and β are non-zero constants of different sign, and γ and δ are non-zero constants of the same sign. Show that all eigenvalues are positive.

Hint: Multiply the equation by y(x) and integrate over (0, π). Do an integration by parts.
IV. Find Green's function and the solution of the following problems.

1. y'' + y = f(x) ,   a < x < b ,
y(a) = 0 ,   y(b) = 0 .
Answer. G(x, ξ) = { sin(x − a) sin(ξ − b)/sin(b − a)   for a ≤ x ≤ ξ ;   sin(x − b) sin(ξ − a)/sin(b − a)   for ξ ≤ x ≤ b } .

2. y'' + y = f(x) ,   0 < x < 2 ,
y(0) = 0 ,   y'(2) + y(2) = 0 .
Hint: y1(x) = sin x, y2(x) = −sin(x − 2) + cos(x − 2).

3. x²y'' + 4xy' + 2y = f(x) ,   1 < x < 2 ,
y(1) = 0 ,   y(2) = 0 .
Answer. G(x, ξ) = { (x⁻¹ − x⁻²)(ξ⁻¹ − 2ξ⁻²)   for 1 ≤ x ≤ ξ ;   (ξ⁻¹ − ξ⁻²)(x⁻¹ − 2x⁻²)   for ξ ≤ x ≤ 2 } ,
y(x) = ∫_1^2 G(x, ξ) ξ² f(ξ) dξ.

V.

1. Find the Fourier transform of the function f(x) = 1 − |x| for |x| ≤ 1, and 0 for |x| > 1.
Answer. F(s) = √(2/π) (1 − cos s)/s².

2. Find the Fourier transform of the function f(x) = e^{−|x|}.
Answer. F(s) = √(2/π) 1/(s² + 1).

3. Find the Fourier transform of the function f(x) = e^{−x²/2}.
Answer. F(s) = e^{−s²/2}. (Hint: use the formula (12.2).)

4. Show that for any constant a
(i) F(f(x)e^{iax}) = F(s − a).
(ii) F(f(ax)) = (1/a) F(s/a)   (a ≠ 0).

5. Find a non-trivial solution of the boundary value problem

uxx + uyy = 0 ,   −∞ < x < ∞, y > 0 ,
u(x, 0) = 0 ,   −∞ < x < ∞ .

This example shows that physical intuition may fail in an unbounded domain.

6. Use Poisson's formula to solve

uxx + uyy = 0 ,   −∞ < x < ∞, y > 0 ,
u(x, 0) = f(x) ,   −∞ < x < ∞ ,

where f(x) = 1 for |x| ≤ 1, and 0 for |x| > 1.
Answer. u(x, y) = (1/π) [ tan⁻¹((x + 1)/y) − tan⁻¹((x − 1)/y) ] .
Chapter 8
Numerical Computations
8.1 The capabilities of software systems, like Mathematica

Mathematica uses the command DSolve to solve differential equations analytically (i.e., by a formula). This is not always possible, but Mathematica does seem to know the solution methods we have studied in Chapters 1 and 2. For example, to solve the equation

(1.1)  y' = 2y − sin²x ,   y(0) = 0.3 ,

we enter the commands

sol = DSolve[{y'[x] == 2 y[x] - Sin[x]^2, y[0] == 0.3}, y[x], x]
z[x_] = y[x] /. sol[[1]]
Plot[z[x], {x, 0, 1}]

Mathematica returns the solution y(x) = −0.125 cos(2x) + 0.175e^{2x} + 0.125 sin(2x) + 0.25, and its graph, which is given in Figure 8.1.

If you are new to Mathematica, do not worry about its syntax now. Try to solve other equations, by making obvious modifications to the above commands. Notice that Mathematica has chosen the point (0, 0.4) to originate the axes. If one needs the general solution, the command is

DSolve[y'[x] == 2 y[x] - Sin[x]^2, y[x], x]

Mathematica returns

y(x) → e^{2x} c[1] + (1/8)(−Cos[2x] + Sin[2x] + 2) .
[Figure 8.1: The solution of the equation (1.1)]
Observe that c[1] is Mathematica's way to write an arbitrary constant c, and that the answer is returned as a "replacement rule" (and that was the reason for an extra command in the preceding example). Second (and higher) order equations are solved similarly. To solve the following resonant problem

y'' + 4y = 8 sin 2t ,   y(0) = 0 ,   y'(0) = −2 ,

we enter

DSolve[{y''[t] + 4 y[t] == 8 Sin[2 t], y[0] == 0, y'[0] == -2}, y[t], t] // Simplify

and Mathematica returns the solution y(t) = −2t cos 2t, which implies unbounded oscillations.

When we try to use the DSolve command to solve the nonlinear equation

y' = 2y³ − sin²x ,

Mathematica thinks for a while, and then it throws this equation back at us. It cannot do it, and most likely, nobody can. However, we can use Euler's method to compute a numerical approximation of any solution, if an initial condition is provided, e.g., the solution of

(1.2)  y' = 2y³ − sin²x ,   y(0) = 0.3 .

Mathematica can also compute the numerical approximation of this solution. Instead of Euler's method it uses a more sophisticated method. The command is NDSolve. We enter the following commands:

sol = NDSolve[{y'[x] == 2 y[x]^3 - Sin[x]^2, y[0] == 0.3}, y, {x, 0, 3}]
z[x_] = y[x] /. sol[[1]]
Plot[z[x], {x, 0, 1}, AxesLabel -> {"x", "y"}]
[Figure 8.2: The solution of the equation (1.2)]
Mathematica produces the graph of the solution. Mathematica returns the solution as an interpolation function, i.e., after computing the values of the solution at a sequence of points, it joins the points on the graph by a smooth curve. The solution function (it is z(x) in the way we did it) and its derivatives can be evaluated at any point. It is practically indistinguishable from the exact solution. When one uses the NDSolve command to solve the problem (1.1) and plot the solution, one obtains a graph practically identical to Figure 8.1.

The NDSolve command can also be used for systems of differential equations. For example, let x = x(t) and y = y(t) be solutions of

(1.3)  x' = −y + y² ,   x(0) = 0.2 ,
       y' = x ,   y(0) = 0.3 .

Once the solution is computed, the solution functions x = x(t), y = y(t) define a parametric curve in the xy-plane, which we draw. The commands and output are given in Figure 8.3. The first command tells Mathematica: "forget everything". This is a good practice with heavy usage.

Clear["`*"]
sol = NDSolve[{{x'[t] == -y[t] + y[t]^2, y'[t] == x[t]},
    {x[0] == 0.2, y[0] == 0.3}}, {x, y}, {t, 0, 20}]
ParametricPlot[{x[t] /. sol[[1, 1]], y[t] /. sol[[1, 2]]}, {t, 0, 20}]

[Figure 8.3: The solution of the system (1.3)]

If you play
with other initial conditions, in which |x(0)| and |y(0)| are small, you will
discover that the rest point (0, 0) is a center.
8.2 Solving boundary value problems
Given the functions a(x) and f(x), we wish to find the solution y = y(x) of the following boundary value problem

(2.1)  y'' + a(x)y = f(x) ,   a < x < b ,
y(a) = y(b) = 0 .

The general solution of the equation in (2.1) is of course

(2.2)  y(x) = Y(x) + c1 y1(x) + c2 y2(x) ,

where Y(x) is any particular solution, and y1(x), y2(x) are two solutions of the corresponding homogeneous equation

(2.3)  y'' + a(x)y = 0 ,

which are not constant multiples of one another. To compute y1(x), we shall use the NDSolve command to solve the homogeneous equation (2.3) with the initial conditions

(2.4)  y1(a) = 0 ,   y1'(a) = 1 .

To compute y2(x), we shall solve (2.3) with the initial conditions

(2.5)  y2(b) = 0 ,   y2'(b) = −1 .

(Mathematica has no problem solving the equation "backwards" on (a, b).) To find Y(x), we can solve the equation in (2.1) with any initial conditions, say Y(a) = 0, Y'(a) = 1. We have computed the general solution (2.2). It remains to pick the constants c1 and c2 to satisfy the boundary conditions. Using (2.4),

y(a) = Y(a) + c1 y1(a) + c2 y2(a) = Y(a) + c2 y2(a) = 0 ,

i.e., c2 = −Y(a)/y2(a). We assume here that y2(a) ≠ 0; otherwise our problem (2.1) is not solvable for general f(x). Similarly, using (2.5),

y(b) = Y(b) + c1 y1(b) + c2 y2(b) = Y(b) + c1 y1(b) = 0 ,

i.e., c1 = −Y(b)/y1(b). The solution of our problem (2.1) is

y(x) = Y(x) − (Y(b)/y1(b)) y1(x) − (Y(a)/y2(a)) y2(x) .

Mathematica's subroutine, or module, to produce this solution, called lin, is given in Figure 8.4. Here a = 0 and b = 1.

For example, entering the commands in Figure 8.5 produces the graph of the solution of

(2.6)  y'' + eˣy = −3x + 1 ,   0 < x < 1 ,
y(0) = y(1) = 0 ,

which is given in Figure 8.6.
Clear["`*"]
lin := Module[{s1, s2, s3, y1, y2, Y},
  s1 = NDSolve[{y''[x] + a[x] y[x] == 0, y[0] == 0, y'[0] == 1}, y, {x, 0, 1}];
  s2 = NDSolve[{y''[x] + a[x] y[x] == 0, y[1] == 0, y'[1] == -1}, y, {x, 0, 1}];
  s3 = NDSolve[{y''[x] + a[x] y[x] == f[x], y[0] == 0, y'[0] == 1}, y, {x, 0, 1}];
  y1[x_] = y[x] /. s1[[1]];
  y2[x_] = y[x] /. s2[[1]];
  Y[x_] = y[x] /. s3[[1]];
  z[x_] := Y[x] - Y[1]/y1[1] y1[x] - Y[0]/y2[0] y2[x];
]

Figure 8.4: The solution module for the problem (2.1)
Figure 8.4: The solution module for the problem (2.1)
a@x_D = ã ^ x;
f@x_D = -3 x + 1;
lin
Plot@z@xD, 8x, 0, 1<, AxesLabel ® 8"x", "z"<D
Figure 8.5: Solving the problem (2.6)
[Figure 8.6: Solution of the problem (2.6)]
8.3 Solving nonlinear boundary value problems
Review of Newton's method

Suppose we wish to solve the equation

(3.1)  f(x) = 0

with a given function f(x). For example, in case f(x) = e^{2x} − x − 1, the equation

e^{2x} − x − 1 = 0

has a solution on the interval (0, 1) (as can be seen by drawing f(x) in Mathematica), but this solution cannot be expressed by a formula. Newton's method produces a sequence of iterates {xn} to approximate a solution of (3.1). If the iterate xn has already been computed, we use the linear approximation

f(x) ≈ f(xn) + f'(xn)(x − xn) .

We then replace (3.1) by

f(xn) + f'(xn)(x − xn) = 0 ,

solve this equation for x, and declare this solution x to be the next approximation, i.e.,

x_{n+1} = xn − f(xn)/f'(xn) ,   n = 0, 1, 2, · · · , beginning with some x0 .

Newton's method does not always converge, but when it does, the convergence is usually super fast. Let us denote by x* the (true) solution of (3.1). Then |xn − x*| gives the error on the n-th step. Under some conditions on f(x) it can be shown that

|x_{n+1} − x*| < c|xn − x*|² ,

with some constant c > 0. Let us suppose that c = 1 and |x0 − x*| = 0.1. Then |x1 − x*| < |x0 − x*|² = 0.1² = 0.01, |x2 − x*| < |x1 − x*|² < 0.01² = 0.0001, and |x3 − x*| < |x2 − x*|² < 0.0001² = 0.00000001. We see that x3 is practically an exact solution!
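The iteration is one line in Mathematica; a sketch of mine, not from the text, with the starting point x0 = 1 chosen arbitrarily:

f[x_] = Exp[2 x] - x - 1;
newton[x_] := x - f[x]/f'[x];
FixedPointList[newton, 1., 10]   (* successive Newton iterates *)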
A class of nonlinear boundary value problems

We shall solve the problem

(3.2)  y'' + g(y) = e(x) ,   a < x < b ,
y(a) = y(b) = 0 ,

with given functions g(y) and e(x). We shall produce a sequence of iterates {yn(x)} to approximate a solution of (3.2). If the iterate yn(x) has already been computed, we use the linear approximation

g(y) ≈ g(yn) + g'(yn)(y − yn) ,

and replace (3.2) with the linear problem

(3.3)  y'' + g(yn(x)) + g'(yn(x))(y − yn(x)) = e(x) ,   a < x < b ,
y(a) = y(b) = 0 .

The solution of this problem we declare to be our next approximation, y_{n+1}(x). We rewrite (3.3) as

y'' + a(x)y = f(x) ,   a < x < b ,
y(a) = y(b) = 0 ,

with the known coefficient functions

a(x) = g'(yn(x)) ,
f(x) = −g(yn(x)) + g'(yn(x))yn(x) + e(x) ,

and call on the procedure lin from the preceding section to solve (3.3), and produce y_{n+1}(x).
Example We have solved the problem

(3.4)  y'' + y³ = 2 sin 4πx − x ,   0 < x < 1 ,
y(0) = y(1) = 0 .

The commands are given in Figure 8.7. (The procedure lin has been executed before these commands.) We started with y0(x) = 1 (yold[x] = 1 in Mathematica). We did five iterations of Newton's method. The solution (the function z[x]) is plotted in Figure 8.8.

e[x_] = 2 Sin[4 Pi x] - x;
yold[x_] = 1;
g[y_] = y^3;
st = 5;
For[i = 1, i <= st, i++,
  a[x_] = g'[yold[x]];
  f[x_] = e[x] - g[yold[x]] + g'[yold[x]] yold[x];
  lin;
  yold[x_] = z[x];
]

Figure 8.7: Solving the problem (3.4)
[Figure 8.8: Solution of the problem (3.4)]
The resulting solution is very accurate, and we have verified it by the following independent calculation. We used Mathematica to calculate the slope of the solution at zero, z'[0] ≈ 0.00756827, and then solved the equation in (3.4) with the initial conditions y(0) = 0, y'(0) = 0.00756827 (using the NDSolve command). The graph of this solution y(x) is identical to the one in Figure 8.8.
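A sketch of this verification (the equation and data come from (3.4); NDSolve and Plot are standard Mathematica commands, while the variable name ysol is our own):

ysol = NDSolve[{y''[x] + y[x]^3 == 2 Sin[4 Pi x] - x, y[0] == 0, y'[0] == 0.00756827}, y, {x, 0, 1}];
Plot[Evaluate[y[x] /. ysol], {x, 0, 1}]   (* should reproduce Figure 8.8 *)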
8.4 Direction fields

The equation (y = y(x))

(4.1)  y' = cos 2y + 2 cos 2x
cannot be solved analytically (like most equations). If we add an initial condition, we can find the corresponding solution, using the NDSolve command. But this is just one solution. Can we visualize a bigger picture?

The right-hand side of the equation (4.1) gives us the slope of the solution passing through the point (x, y) (if cos 2y + 2 cos 2x > 0, the solution is increasing at (x, y)). The vector <1, cos 2y + 2 cos 2x> is called the direction vector. If the solution is increasing at (x, y), this vector points up, and the faster the rate of increase, the larger the magnitude of the direction vector. If we plot the direction vectors at many points, the result is called the direction field, which can tell us at a glance how the various solutions behave. In Figure 8.9 the direction field is plotted using Mathematica’s command VectorFieldPlot.

Figure 8.9: The direction field for the equation (4.1)
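(VectorFieldPlot comes from an older add-on package; in recent versions of Mathematica the built-in VectorPlot produces a similar picture. A sketch, with the plotting ranges chosen to match Figure 8.9:)

VectorPlot[{1, Cos[2 y] + 2 Cos[2 x]}, {x, 0, 6}, {y, 0, 5}]   (* direction vectors <1, cos 2y + 2 cos 2x> *)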
How will the solution of (4.1) with the initial condition y(0) = 1 behave? Imagine a particle placed at the initial point (0, 1). The direction field, or “wind”, will take it a little down, but soon the direction of motion will be up. After a while, a strong downdraft will take the particle much lower, but eventually it will be going up again. In Figure 8.10 we give the actual solution of

(4.2)  y' = cos 2y + 2 cos 2x,  y(0) = 1 ,

produced using the NDSolve command.

Figure 8.10: The solution of the initial value problem (4.2)
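A sketch of the commands that produce such a curve (the range 0 ≤ x ≤ 3 matches Figure 8.10; the variable name sol is our own):

sol = NDSolve[{y'[x] == Cos[2 y[x]] + 2 Cos[2 x], y[0] == 1}, y, {x, 0, 3}];
Plot[Evaluate[y[x] /. sol], {x, 0, 3}]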
Chapter 9
Appendix
.1 The chain rule and its descendants
The chain rule

d/dx f(g(x)) = f'(g(x)) g'(x)

lets us differentiate the composition of the functions f(x) and g(x). In particular, if f(x) = x^r with a constant r, then f'(x) = r x^{r−1}, and we conclude that

(1.1)  d/dx [g(x)]^r = r [g(x)]^{r−1} g'(x) ,

which is the generalized power rule. In case f(x) = e^x, we have

(1.2)  d/dx e^{g(x)} = e^{g(x)} g'(x) .

In case f(x) = ln x, we conclude that

(1.3)  d/dx ln g(x) = g'(x)/g(x) .

These are children of the chain rule. These formulas should be memorized separately, even though they can be derived from the chain rule. If g(x) = ax, with a constant a, then by (1.2)

d/dx e^{ax} = a e^{ax} .

This grandchild of the chain rule should also be memorized separately. For example, d/dx e^{−x} = −e^{−x}.
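All of these are easy to confirm in Mathematica, whose D command differentiates with respect to the indicated variable; for instance:

D[g[x]^r, x]   (* r g[x]^(r - 1) g'[x], formula (1.1) *)
D[Exp[g[x]], x]   (* E^g[x] g'[x], formula (1.2) *)
D[Log[g[x]], x]   (* g'[x]/g[x], formula (1.3) *)
D[Exp[a x], x]   (* a E^(a x) *)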
The chain rule also lets us justify the following integration formulas (a and b are constants):

∫ dx/(ax + b) = (1/a) ln |ax + b| + c ,

∫ dx/(x² + a²) = (1/a) tan^{−1}(x/a) + c ,

∫ g'(x)/g(x) dx = ln |g(x)| + c .
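These can also be checked in Mathematica (which omits the absolute values, since it integrates over the complex numbers):

Integrate[1/(a x + b), x]   (* Log[a x + b]/a *)
Integrate[1/(x^2 + a^2), x]   (* ArcTan[x/a]/a *)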
.2 Partial fractions
This method is needed for both computing integrals and inverse Laplace transforms. We have

(x + 1)/(x² + x − 2) = (x + 1)/((x − 1)(x + 2)) = A/(x − 1) + B/(x + 2) .
The first step is to factor the denominator. We now look for the constants A
and B so that the original fraction is equal to the sum of two simpler ones.
Adding the fractions on the right, we need

(x + 1)/((x − 1)(x + 2)) = (A(x + 2) + B(x − 1))/((x − 1)(x + 2)) .

We need to arrange for the numerators to be the same:

A(x + 2) + B(x − 1) = x + 1 ;
(A + B)x + 2A − B = x + 1 .

Equating the coefficients of the two linear polynomials gives

A + B = 1
2A − B = 1 .
We calculate A = 2/3, B = 1/3. Conclusion:

(x + 1)/(x² + x − 2) = (2/3)/(x − 1) + (1/3)/(x + 2) .
Our next example,

(s² − 1)/(s³ + s² + s) = (s² − 1)/(s(s² + s + 1)) = A/s + (Bs + C)/(s² + s + 1) ,

involves a quadratic factor in the denominator that cannot be factored (an irreducible quadratic). Adding the fractions on the right, we need
A(s² + s + 1) + s(Bs + C) = s² − 1 .

Equating the coefficients of the two quadratic polynomials gives

A + B = 1
A + C = 0
A = −1 ,

i.e., A = −1, B = 2, C = 1. Conclusion:
(s² − 1)/(s³ + s² + s) = −1/s + (2s + 1)/(s² + s + 1) .
The denominator of the next example,

(s − 1)/((s + 3)²(s² + 3)) ,

involves a product of a square of a linear factor and an irreducible quadratic. The way to proceed is:

(s − 1)/((s + 3)²(s² + 3)) = A/(s + 3) + B/(s + 3)² + (Cs + D)/(s² + 3) .

As before, we calculate A = −1/12, B = −1/3, C = D = 1/12. Conclusion:

(s − 1)/((s + 3)²(s² + 3)) = −1/(12(s + 3)) − 1/(3(s + 3)²) + (s + 1)/(12(s² + 3)) .
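Mathematica’s built-in Apart command performs such decompositions automatically, which makes it easy to check the examples above; a sketch:

Apart[(s - 1)/((s + 3)^2 (s^2 + 3)), s]   (* reproduces the decomposition above *)
Apart[(x + 1)/(x^2 + x - 2), x]   (* 2/(3 (x - 1)) + 1/(3 (x + 2)) *)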
.3 Eigenvalues and eigenvectors
The vector z = (1, −1)^T is very special for the matrix B = [3 1; 1 3] (we write matrices row by row, with rows separated by semicolons, and column vectors as transposed row vectors). We have

Bz = [3 1; 1 3] (1, −1)^T = (2, −2)^T = 2 (1, −1)^T = 2z ,
i.e., Bz = 2z. We say that z is an eigenvector of B, corresponding to the eigenvalue 2. In general, we say that a vector x ≠ 0 is an eigenvector of a square matrix A, corresponding to an eigenvalue λ, if

(3.4)  Ax = λx .
If A is 2 × 2, then in components x = (x1, x2)^T ≠ (0, 0)^T. In case A is 3 × 3, then x = (x1, x2, x3)^T ≠ (0, 0, 0)^T. If c ≠ 0 is any constant, then

A(cx) = cAx = cλx = λ(cx) ,

which implies that cx is also an eigenvector of the matrix A, corresponding to the eigenvalue λ. In particular, c(1, −1)^T gives us eigenvectors of B, corresponding to the eigenvalue λ = 2.
We now rewrite (3.4) in the form

(3.5)  (A − λI)x = 0 ,

where I is the identity matrix. This is a homogeneous system of linear equations. To have non-zero solutions, its determinant must be zero:

(3.6)  |A − λI| = 0 .

This is a polynomial equation for λ, called the characteristic equation. If the matrix A is 2 × 2, this is a quadratic equation, and it has two roots, λ1 and λ2. In case A is 3 × 3, this is a cubic equation with three roots, λ1, λ2 and λ3, and so on for larger matrices. To calculate the eigenvector corresponding to λ1, we solve the system

(A − λ1 I)x = 0 ,

and proceed similarly for the other eigenvalues.
Example Let B = [3 1; 1 3]. The characteristic equation

|B − λI| = |3 − λ, 1; 1, 3 − λ| = (3 − λ)² − 1 = 0
has the roots λ1 = 2 and λ2 = 4 (because 3 − λ = ±1, giving λ = 2 and λ = 4). We already know that c(1, −1)^T are the eigenvectors for λ1 = 2, so let us compute the eigenvectors for λ2 = 4. We need to solve the system (B − 4I)x = 0 for x = (x1, x2)^T, i.e.,

−x1 + x2 = 0
x1 − x2 = 0 .

The second equation is superfluous, and the first one gives x1 = x2. If we let x2 = 1, then x1 = 1, so that (1, 1)^T, and more generally c(1, 1)^T, gives us the eigenvectors corresponding to λ2 = 4.
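Such computations are easy to check in Mathematica (the order and scaling of the returned eigenvectors may differ from ours):

B = {{3, 1}, {1, 3}};
Eigenvalues[B]   (* the eigenvalues 4 and 2 *)
Eigenvectors[B]   (* multiples of (1, 1) and (1, -1) *)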
If an eigenvalue λ has multiplicity two (i.e., it is a double root of the characteristic equation), then it may have either two linearly independent eigenvectors, or only one. In case there are two, x1 and x2, then Ax1 = λx1 and Ax2 = λx2, and then for any constants c1 and c2

A(c1 x1 + c2 x2) = c1 Ax1 + c2 Ax2 = c1 λx1 + c2 λx2 = λ(c1 x1 + c2 x2) ,

i.e., any non-zero combination c1 x1 + c2 x2 is an eigenvector corresponding to λ.


Example Let A = [2 1 1; 1 2 1; 1 1 2]. The characteristic equation

|A − λI| = |2 − λ, 1, 1; 1, 2 − λ, 1; 1, 1, 2 − λ| = −λ³ + 6λ² − 9λ + 4 = 0
is a cubic equation, so we need to guess a root: λ1 = 1 is a root. We then factor

λ³ − 6λ² + 9λ − 4 = (λ − 1)(λ² − 5λ + 4) .

Setting the second factor to zero, we find the other two roots, λ2 = 1 and λ3 = 4. Turning to the eigenvectors, let us begin with the simple eigenvalue λ3 = 4. We need to solve the system (A − 4I)x = 0 for x = (x1, x2, x3)^T, i.e.,

−2x1 + x2 + x3 = 0
x1 − 2x2 + x3 = 0
x1 + x2 − 2x3 = 0 .
The third equation is superfluous, because adding the first two gives the
negative of the third. We are left with
−2x1 + x2 + x3 = 0
x1 − 2x2 + x3 = 0 .
We have more variables to play with than equations to satisfy. We are free to set x3 = 1, and then solve the two equations for x1 and x2, obtaining x1 = 1 and x2 = 1. Conclusion: c(1, 1, 1)^T gives us the eigenvectors corresponding to λ3 = 4.
To find the eigenvectors of the double eigenvalue λ1 = 1, we need to
solve the system (A − I) x = 0, i.e.,
x1 + x2 + x3 = 0
x1 + x2 + x3 = 0
x1 + x2 + x3 = 0 .
Throwing away both the second and the third equations, we are left with
x1 + x2 + x3 = 0 .
Now both x2 and x3 are free variables. Letting x3 = 1 and x2 = 0, we calculate x1 = −1, i.e., (−1, 0, 1)^T is an eigenvector. If we let x3 = 0 and x2 = 1, we calculate x1 = −1, i.e., (−1, 1, 0)^T is an eigenvector. Conclusion:

c1 (−1, 0, 1)^T + c2 (−1, 1, 0)^T ,

with arbitrary constants c1 and c2, gives us all eigenvectors corresponding to λ1 = 1, i.e., the eigenspace of λ1 = 1.
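The same check for this 3 × 3 example (Mathematica may return any basis of the two-dimensional eigenspace of λ = 1, not necessarily the vectors above):

A = {{2, 1, 1}, {1, 2, 1}, {1, 1, 2}};
Eigenvalues[A]   (* 4, 1, 1 *)
Eigenvectors[A]   (* (1, 1, 1) and a basis of the plane x1 + x2 + x3 == 0 *)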
For the matrix A = [1 4; −4 −7] we have λ1 = λ2 = −3, but there is only one linearly independent eigenvector: c(−1, 1)^T.
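In Mathematica, Eigenvectors signals such a defective matrix by padding its answer with zero vectors; a sketch:

Eigenvectors[{{1, 4}, {-4, -7}}]   (* a multiple of (-1, 1) together with a zero vector: only one true eigenvector *)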