Notes on Diffy Qs
Differential Equations for Engineers
by Jiří Lebl
May 6, 2022
(version 6.3)
Typeset in LaTeX.
Copyright ©2008–2022 Jiří Lebl
This work is dual licensed under the Creative Commons Attribution-Noncommercial-Share Alike 4.0
International License and the Creative Commons Attribution-Share Alike 4.0 International License.
To view a copy of these licenses, visit https://creativecommons.org/licenses/by-nc-sa/4.0/
or https://creativecommons.org/licenses/by-sa/4.0/ or send a letter to Creative Commons
PO Box 1866, Mountain View, CA 94042, USA.
You can use, print, duplicate, and share this book as much as you want. You can base your own notes
on it and reuse parts if you keep the license the same. You can assume the license is either the
CC-BY-NC-SA or the CC-BY-SA, whichever is compatible with what you wish to do; your derivative
works must use at least one of the licenses. Derivative works must be prominently marked as such.
During the writing of this book, the author was in part supported by NSF grants DMS-0900885 and
DMS-1362337.
The date is the main identifier of the version. The major version / edition number is raised only if there
have been substantial changes. The edition number started at 5, that is, version 5.0, as it was not kept
track of before.
See https://www.jirka.org/diffyqs/ for more information (including contact information).
The LaTeX source for the book is available for possible modification and customization at github:
https://github.com/jirilebl/diffyqs
Contents

Introduction
   0.1 Notes about these notes
   0.2 Introduction to differential equations
   0.3 Classification of differential equations

1 First order equations
   1.1 Integrals as solutions
   1.2 Slope fields
   1.3 Separable equations
   1.4 Linear equations and the integrating factor
   1.5 Substitution
   1.6 Autonomous equations
   1.7 Numerical methods: Euler's method
   1.8 Exact equations
   1.9 First order linear PDE

2 Higher order linear ODEs
   2.1 Second order linear ODEs
   2.2 Constant coefficient second order linear ODEs
   2.3 Higher order linear ODEs
   2.4 Mechanical vibrations
   2.5 Nonhomogeneous equations
   2.6 Forced oscillations and resonance

3 Systems of ODEs
   3.1 Introduction to systems of ODEs
   3.2 Matrices and linear systems
   3.3 Linear systems of ODEs
   3.4 Eigenvalue method
   3.5 Two-dimensional systems and their vector fields
   3.6 Second order systems and applications
   3.7 Multiple eigenvalues
   3.8 Matrix exponentials
   3.9 Nonhomogeneous systems

4 Fourier series and PDEs
   4.1 Boundary value problems
   4.2 The trigonometric series
   4.3 More on the Fourier series
   4.4 Sine and cosine series
   4.5 Applications of Fourier series
   4.6 PDEs, separation of variables, and the heat equation
   4.7 One-dimensional wave equation
   4.8 D'Alembert solution of the wave equation
   4.9 Steady state temperature and the Laplacian
   4.10 Dirichlet problem in the circle and the Poisson kernel

5 More on eigenvalue problems
   5.1 Sturm–Liouville problems
   5.2 Higher order eigenvalue problems
   5.3 Steady periodic solutions

6 The Laplace transform
   6.1 The Laplace transform
   6.2 Transforms of derivatives and ODEs
   6.3 Convolution
   6.4 Dirac delta and impulse response
   6.5 Solving PDEs with the Laplace transform

7 Power series methods
   7.1 Power series
   7.2 Series solutions of linear second order ODEs
   7.3 Singular points and the method of Frobenius

8 Nonlinear systems
   8.1 Linearization, critical points, and equilibria
   8.2 Stability and classification of isolated critical points
   8.3 Applications of nonlinear systems
   8.4 Limit cycles
   8.5 Chaos

A Linear algebra
   A.1 Vectors, mappings, and matrices
   A.2 Matrix algebra
   A.3 Elimination
   A.4 Subspaces, dimension, and the kernel
   A.5 Inner product and projections
   A.6 Determinant

B Table of Laplace Transforms

Further Reading

Solutions to Selected Exercises

Index
Introduction
0.1 Notes about these notes
Note: A section for the instructor.
This book originated from my class notes for Math 286 at the University of Illinois at
Urbana-Champaign (UIUC) in Fall 2008 and Spring 2009. It is a first course on differential
equations for engineers. Using this book, I also taught Math 285 at UIUC, Math 20D at
University of California, San Diego (UCSD), and Math 4233 at Oklahoma State University
(OSU). Normally these courses are taught with Edwards and Penney, Differential Equations
and Boundary Value Problems: Computing and Modeling [EP], or Boyce and DiPrima’s
Elementary Differential Equations and Boundary Value Problems [BD], and this book aims to
be more or less a drop-in replacement. Other books I used as sources of information
and inspiration are E.L. Ince’s classic (and inexpensive) Ordinary Differential Equations [I],
Stanley Farlow’s Differential Equations and Their Applications [F], now available from Dover,
Berg and McGregor’s Elementary Partial Differential Equations [BM], and William Trench’s
free book Elementary Differential Equations with Boundary Value Problems [T]. See the Further
Reading chapter at the end of the book.
0.1.1 Organization
The organization of this book to some degree requires chapters be done in order. Later
chapters can be dropped. The dependence of the material covered is roughly given by the
following diagram:

[Chapter dependency diagram with nodes: Introduction, Appendix A, Chapter 1, Chapter 2, Chapter 3, Chapter 4, Chapter 5, Chapter 6, Chapter 7, Chapter 8.]
There are a few references in chapters 4 and 5 to chapter 3 (some linear algebra), but
these references are not essential and can be skimmed over, so chapter 3 can safely be
dropped, while still covering chapters 4 and 5. Chapter 6 does not depend on chapter 4
except that the PDE section 6.5 makes a few references to chapter 4, although it could, in
theory, be covered separately. The more in-depth appendix A on linear algebra can replace
the short review § 3.2 for a course that combines linear algebra and ODE.
0.1.2 Typical types of courses
Several typical types of courses can be run with the book. There are the two original
courses at UIUC, both of which cover ODE as well as some PDE. The first is the 4-hours-a-week
semester course (Math 286 at UIUC):
Introduction (0.2), chapter 1 (1.1–1.7), chapter 2, chapter 3, chapter 4 (4.1–4.9), chapter 5 (or
6 or 7 or 8).
The second course at UIUC runs at 3 hours a week (Math 285 at UIUC):
Introduction (0.2), chapter 1 (1.1–1.7), chapter 2, chapter 4 (4.1–4.9), (and maybe chapter 5,
6, or 7).
A semester-long course at 3 hours a week that doesn't cover either systems or PDE
will cover, beyond the introduction, chapter 1, chapter 2, chapter 6, and chapter 7 (with
sections skipped as above). On the other hand, a typical course that covers systems will
probably need to skip Laplace and power series and cover chapter 1, chapter 2, chapter 3,
and chapter 8.
If sections need to be skipped in the beginning, a good core of the sections on single
ODE is: 0.2, 1.1–1.4, 1.6, 2.1, 2.2, 2.4–2.6.
The complete book can be covered at a reasonably fast pace in approximately 76
lectures (without appendix A) or 86 lectures (with appendix A replacing § 3.2). This is
not accounting for exams, review, or time spent in a computer lab. A two-quarter or a
two-semester course can be easily run with the material. For example (with some sections
perhaps strategically skipped):
Semester 1: Introduction, chapter 1, chapter 2, chapter 6, chapter 7.
Semester 2: Chapter 3, chapter 8, chapter 4, chapter 5.
A combined course on ODE with linear algebra can run as:
Introduction, chapter 1 (1.1–1.7), chapter 2, appendix A, chapter 3 (w/o § 3.2), (possibly
chapter 8).
The chapter on the Laplace transform (chapter 6), the chapter on Sturm–Liouville
(chapter 5), the chapter on power series (chapter 7), and the chapter on nonlinear systems
(chapter 8), are more or less interchangeable and can be treated as “topics”. If chapter 8
is covered, it may be best to place it right after chapter 3, and chapter 5 is best covered
right after chapter 4. If time is short, the first two sections of chapter 7 make a reasonable
self-contained unit.
0.1.3 Computer resources
The book’s website https://www.jirka.org/diffyqs/ contains the following resources:
1. Interactive SAGE demos.
2. Online WeBWorK homeworks (using either your own WeBWorK installation or
Edfinity) for most sections, customized for this book.
3. The PDFs of the figures used in this book.
I taught the UIUC courses using IODE (https://faculty.math.illinois.edu/iode/).
IODE is a free software package that works with Matlab (proprietary) or Octave (free
software). The graphs in the book were made with the Genius software (see https:
//www.jirka.org/genius.html). I use Genius in class to show these (and other) graphs.
The LaTeX source of the book is also available for possible modification and customization
at github (https://github.com/jirilebl/diffyqs).
0.1.4 Acknowledgments
Firstly, I would like to acknowledge Rick Laugesen. I used his handwritten class notes
the first time I taught Math 286. My organization of this book through chapter 5, and
the choice of material covered, is heavily influenced by his notes. Many examples and
computations are taken from his notes. I am also heavily indebted to Rick for all the advice
he has given me, not just on teaching Math 286. For spotting errors and other suggestions,
I would also like to acknowledge (in no particular order): John P. D’Angelo, Sean Raleigh,
Jessica Robinson, Michael Angelini, Leonardo Gomes, Jeff Winegar, Ian Simon, Thomas
Wicklund, Eliot Brenner, Sean Robinson, Jannett Susberry, Dana Al-Quadi, Cesar Alvarez,
Cem Bagdatlioglu, Nathan Wong, Alison Shive, Shawn White, Wing Yip Ho, Joanne Shin,
Gladys Cruz, Jonathan Gomez, Janelle Louie, Navid Froutan, Grace Victorine, Paul Pearson,
Jared Teague, Ziad Adwan, Martin Weilandt, Sönmez Şahutoğlu, Pete Peterson, Thomas
Gresham, Prentiss Hyde, Jai Welch, Simon Tse, Andrew Browning, James Choi, Dusty
Grundmeier, John Marriott, Jim Kruidenier, Barry Conrad, Wesley Snider, Colton Koop,
Sarah Morse, Erik Boczko, Asif Shakeel, Chris Peterson, Nicholas Hu, Paul Seeburger,
Jonathan McCormick, David Leep, William Meisel, Shishir Agrawal, Tom Wan, Andres
Valloud, Martin Irungu, Justin Corvino, and probably others I have forgotten. Finally, I
would like to acknowledge NSF grants DMS-0900885 and DMS-1362337.
0.2 Introduction to differential equations
Note: more than 1 lecture, §1.1 in [EP], chapter 1 in [BD]
0.2.1 Differential equations
The laws of physics are generally written down as differential equations. Therefore, all
of science and engineering use differential equations to some degree. Understanding
differential equations is essential to understanding almost anything you will study in your
science and engineering classes. You can think of mathematics as the language of science,
and differential equations are one of the most important parts of this language as far as
science and engineering are concerned. As an analogy, suppose all your classes from now
on were given in Swahili. It would be important to first learn Swahili, or you would have a
very tough time getting a good grade in your classes.
You have already seen many differential equations, perhaps without knowing it. And
you even solved simple differential equations when you took calculus. Let us see an
example you may not have seen:
$$\frac{dx}{dt} + x = 2\cos t. \qquad (1)$$
Here $x$ is the dependent variable and $t$ is the independent variable. Equation (1) is a basic
example of a differential equation. It is an example of a first order differential equation, since
it involves only the first derivative of the dependent variable. This equation arises from
Newton's law of cooling where the ambient temperature oscillates with time.
0.2.2 Solutions of differential equations
Solving the differential equation means finding $x$ in terms of $t$. That is, we want to find a
function of $t$, which we call $x$, such that when we plug $x$, $t$, and $\frac{dx}{dt}$ into (1), the equation
holds; that is, the left-hand side equals the right-hand side. It is the same idea as it would
be for a normal (algebraic) equation of just $x$ and $t$. We claim that
$$x = x(t) = \cos t + \sin t$$
is a solution. How do we check? We simply plug $x$ into equation (1)! First we need to
compute $\frac{dx}{dt}$. We find that $\frac{dx}{dt} = -\sin t + \cos t$. Now let us compute the left-hand side of (1):
$$\underbrace{(-\sin t + \cos t)}_{\frac{dx}{dt}} + \underbrace{(\cos t + \sin t)}_{x} = 2\cos t.$$
Yay! We got precisely the right-hand side. But there is more! We claim $x = \cos t + \sin t + e^{-t}$
is also a solution. Let us try,
$$\frac{dx}{dt} = -\sin t + \cos t - e^{-t}.$$
We plug into the left-hand side of (1),
$$\underbrace{(-\sin t + \cos t - e^{-t})}_{\frac{dx}{dt}} + \underbrace{(\cos t + \sin t + e^{-t})}_{x} = 2\cos t.$$
And it works yet again!
So there can be many different solutions. For this equation all solutions can be written
in the form
$$x = \cos t + \sin t + C e^{-t},$$
for some constant $C$. Different constants $C$ will give different solutions, so there are really
infinitely many possible solutions. See Figure 1 for the graph of a few of these solutions.
We will see how we find these solutions a few lectures from now.
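In fact, checking such a claim is easy to automate. Here is a quick symbolic verification, a minimal sketch in Python using the sympy library (our tooling choice, not something the book relies on):

```python
import sympy as sp

t, C = sp.symbols("t C")
x = sp.cos(t) + sp.sin(t) + C * sp.exp(-t)  # the claimed family of solutions

# Plug x into the left-hand side of (1); it should equal 2*cos(t) for every C.
lhs = sp.diff(x, t) + x
print(sp.simplify(lhs))  # prints: 2*cos(t)
```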
Solving differential equations can be quite hard. There is no general method that solves
every differential equation. We will generally focus on how to get exact formulas for
solutions of certain differential equations, but we will also spend a little bit of time on
getting approximate solutions. And we will spend some time on understanding the
equations without solving them.

[Figure 1: A few solutions of $\frac{dx}{dt} + x = 2\cos t$.]

Most of this book is dedicated to ordinary differential equations or ODEs, that is, equations
with only one independent variable, where derivatives are only with respect to this one
variable. If there are several independent variables, we get partial differential equations or PDEs.
Even for ODEs, which are very well understood, it is not a simple question of turning
a crank to get answers. When you can find exact solutions, they are usually preferable
to approximate solutions. It is important to understand how such solutions are found.
Although in real applications you will leave much of the actual calculations to computers,
you need to understand what they are doing. It is often necessary to simplify or transform
your equations into something that a computer can understand and solve. You may even
need to make certain assumptions and changes in your model to achieve this.
To be a successful engineer or scientist, you will be required to solve problems in your
job that you never saw before. It is important to learn problem solving techniques, so that
you may apply those techniques to new problems. A common mistake is to expect to learn
some prescription for solving all the problems you will encounter in your later career. This
course is no exception.
0.2.3 Differential equations in practice
So how do we use differential equations in science and engineering? First, we have some
real-world problem we wish to understand. We make some simplifying assumptions and
create a mathematical model. That is, we translate the real-world situation into a set of
differential equations. Then we apply mathematics to get some sort of a mathematical
solution. There is still something left to do. We have to interpret the results. We have to
figure out what the mathematical solution says about the real-world problem we started with.

[Diagram: Real-world problem → (abstract) → Mathematical model → (solve) → Mathematical solution → (interpret) → Real-world problem.]
Learning how to formulate the mathematical model and how to interpret the results is
what your physics and engineering classes do. In this course, we will focus mostly on the
mathematical analysis. Sometimes we will work with simple real-world examples so that
we have some intuition and motivation about what we are doing.
Let us look at an example of this process. One of the most basic differential equations is
the standard exponential growth model. Let $P$ denote the population of some bacteria on
a Petri dish. We assume that there is enough food and enough space. Then the rate of
growth of bacteria is proportional to the population; a large population grows quicker.
Let $t$ denote time (say in seconds) and $P$ the population. Our model is
$$\frac{dP}{dt} = kP,$$
for some positive constant $k > 0$.
Example 0.2.1: Suppose there are 100 bacteria at time 0 and 200 bacteria 10 seconds later.
How many bacteria will there be 1 minute from time 0 (in 60 seconds)?
First we need to solve the equation. We claim that a solution is given by
$$P(t) = C e^{kt},$$
where $C$ is a constant. Let us try:
$$\frac{dP}{dt} = C k e^{kt} = kP.$$
And it really is a solution.
OK, now what? We do not know $C$, and we do not know $k$. But we know something.
We know $P(0) = 100$, and we know $P(10) = 200$. Let us plug these conditions in and see
what happens.
$$100 = P(0) = C e^{k \cdot 0} = C, \qquad 200 = P(10) = 100\, e^{k \cdot 10}.$$

[Figure 2: Bacteria growth in the first 60 seconds.]
Therefore, $2 = e^{10k}$ or $\frac{\ln 2}{10} = k \approx 0.069$. So
$$P(t) = 100\, e^{(\ln 2)t/10} \approx 100\, e^{0.069t}.$$
At one minute, $t = 60$, the population is $P(60) = 6400$. See Figure 2.
Let us talk about the interpretation of the results. Does our solution mean that there
must be exactly 6400 bacteria on the plate at 60s? No! We made assumptions that might
not be true exactly, just approximately. If our assumptions are reasonable, then there
will be approximately 6400 bacteria. Also, in real life $P$ is a discrete quantity, not a real
number. However, our model has no problem saying that for example at 61 seconds,
$P(61) \approx 6859.35$.
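These numbers are easy to reproduce; here is a quick numeric check (a Python sketch we add for illustration):

```python
import math

k = math.log(2) / 10               # growth rate fitted from 2 = e^(10k)
P = lambda t: 100 * math.exp(k * t)

print(P(60))  # 6400.0, since the population doubles 6 times: 100 * 2**6
print(P(61))  # approximately 6859.35
```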
Normally, the $k$ in $P' = kP$ is known, and we want to solve the equation for different
initial conditions. What does that mean? Take $k = 1$ for simplicity. Suppose we want to
solve the equation $\frac{dP}{dt} = P$ subject to $P(0) = 1000$ (the initial condition). Then the solution
turns out to be (exercise)
$$P(t) = 1000\, e^{t}.$$
We call $P(t) = C e^{t}$ the general solution, as every solution of the equation can be written
in this form for some constant $C$. We need an initial condition to find out what $C$ is, in
order to find the particular solution we are looking for. Generally, when we say "particular
solution," we just mean some solution.
0.2.4 Four fundamental equations
A few equations appear often and it is useful to just memorize what their solutions are. Let
us call them the four fundamental equations. Their solutions are reasonably easy to guess
by recalling properties of exponentials, sines, and cosines. They are also simple to check,
which is something that you should always do. No need to wonder if you remembered the
solution correctly.
The first such equation is
$$\frac{dy}{dx} = ky,$$
for some constant $k > 0$. Here $y$ is the dependent and $x$ the independent variable. The
general solution for this equation is
$$y(x) = C e^{kx}.$$
We saw above that this function is a solution, although we used different variable names.
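As with all four fundamental equations, the check is mechanical; for example, in sympy (a sketch we add, with $k$ taken positive as in the text):

```python
import sympy as sp

x, C = sp.symbols("x C")
k = sp.symbols("k", positive=True)
y = C * sp.exp(k * x)  # claimed general solution of dy/dx = k*y

# dy/dx - k*y should simplify to 0 identically.
print(sp.simplify(sp.diff(y, x) - k * y))  # prints: 0
```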
Next,
$$\frac{dy}{dx} = -ky,$$
for some constant $k > 0$. The general solution for this equation is
$$y(x) = C e^{-kx}.$$
Exercise 0.2.1: Check that the $y$ given is really a solution to the equation.
Next, take the second order differential equation
$$\frac{d^2 y}{dx^2} = -k^2 y,$$
for some constant $k > 0$. The general solution for this equation is
$$y(x) = C_1 \cos(kx) + C_2 \sin(kx).$$
Since the equation is a second order differential equation, we have two constants in our
general solution.
Exercise 0.2.2: Check that the $y$ given is really a solution to the equation.
Finally, consider the second order differential equation
$$\frac{d^2 y}{dx^2} = k^2 y,$$
for some constant $k > 0$. The general solution for this equation is
$$y(x) = C_1 e^{kx} + C_2 e^{-kx},$$
or
$$y(x) = D_1 \cosh(kx) + D_2 \sinh(kx).$$
For those that do not know, cosh and sinh are defined by
$$\cosh x = \frac{e^x + e^{-x}}{2}, \qquad \sinh x = \frac{e^x - e^{-x}}{2}.$$
They are called the hyperbolic cosine and hyperbolic sine. These functions are sometimes
easier to work with than exponentials. They have some nice familiar properties such as
$\cosh 0 = 1$, $\sinh 0 = 0$, and $\frac{d}{dx} \cosh x = \sinh x$ (no that is not a typo) and $\frac{d}{dx} \sinh x = \cosh x$.
Exercise 0.2.3: Check that both forms of the $y$ given are really solutions to the equation.
Example 0.2.2: In equations of higher order, you get more constants you must solve for
to get a particular solution. The equation $\frac{d^2 y}{dx^2} = 0$ has the general solution $y = C_1 x + C_2$;
simply integrate twice and don't forget about the constant of integration. Consider the
initial conditions $y(0) = 2$ and $y'(0) = 3$. We plug in our general solution and solve for the
constants:
$$2 = y(0) = C_1 \cdot 0 + C_2 = C_2, \qquad 3 = y'(0) = C_1.$$
In other words, $y = 3x + 2$ is the particular solution we seek.
An interesting note about cosh: The graph of cosh is the exact shape of a hanging chain.
This shape is called a catenary. Contrary to popular belief this is not a parabola. If you
invert the graph of cosh, it is also the ideal arch for supporting its weight. For example, the
Gateway Arch in Saint Louis is an inverted graph of cosh; if it were just a parabola it might
fall. The formula used in the design is inscribed inside the arch:
$$y = -127.7 \,\text{ft} \cdot \cosh(x / 127.7 \,\text{ft}) + 757.7 \,\text{ft}.$$
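Plugging $x = 0$ into the formula gives the height at the center of the arch (a one-line check; the code below is our addition):

```python
import math

y = lambda x: -127.7 * math.cosh(x / 127.7) + 757.7  # heights in feet

print(y(0))  # 630.0 feet at the center, since cosh(0) = 1
```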
0.2.5 Exercises
Exercise 0.2.4: Show that $x = e^{4t}$ is a solution to $x''' - 12x'' + 48x' - 64x = 0$.
Exercise 0.2.5: Show that $x = e^{t}$ is not a solution to $x''' - 12x'' + 48x' - 64x = 0$.
Exercise 0.2.6: Is $y = \sin t$ a solution to $\left(\frac{dy}{dt}\right)^2 = 1 - y^2$? Justify.
Exercise 0.2.7: Let $y'' + 2y' - 8y = 0$. Now try a solution of the form $y = e^{rx}$ for some (unknown)
constant $r$. Is this a solution for some $r$? If so, find all such $r$.
Exercise 0.2.8: Verify that $x = C e^{-2t}$ is a solution to $x' = -2x$. Find $C$ to solve for the initial
condition $x(0) = 100$.
Exercise 0.2.9: Verify that $x = C_1 e^{-t} + C_2 e^{2t}$ is a solution to $x'' - x' - 2x = 0$. Find $C_1$ and $C_2$
to solve for the initial conditions $x(0) = 10$ and $x'(0) = 0$.
Exercise 0.2.10: Find a solution to $(x')^2 + x^2 = 4$ using your knowledge of derivatives of functions
that you know from basic calculus.
Exercise 0.2.11: Solve:
a) $\frac{dA}{dt} = -10A$, $A(0) = 5$
b) $\frac{dH}{dx} = 3H$, $H(0) = 1$
c) $\frac{d^2 y}{dx^2} = 4y$, $y(0) = 0$, $y'(0) = 1$
d) $\frac{d^2 x}{dy^2} = -9x$, $x(0) = 1$, $x'(0) = 0$
Exercise 0.2.12: Is there a solution to $y' = y$, such that $y(0) = y(1)$?
Exercise 0.2.13: The population of city X was 100 thousand 20 years ago, and the population of
city X was 120 thousand 10 years ago. Assuming constant growth, you can use the exponential
population model (like for the bacteria). What do you estimate the population is now?
Exercise 0.2.14: Suppose that a football coach gets a salary of one million dollars now, and a raise
of 10% every year (so an exponential model, like the population of bacteria). Let $s$ be the salary in
millions of dollars, and $t$ is time in years.
a) What is $s(0)$ and $s(1)$?
b) Approximately how many years will it take for the salary to be 10 million?
c) Approximately how many years will it take for the salary to be 20 million?
d) Approximately how many years will it take for the salary to be 30 million?
Note: Exercises with numbers 101 and higher have solutions in the back of the book.
Exercise 0.2.101: Show that $x = e^{-2t}$ is a solution to $x'' + 4x' + 4x = 0$.
Exercise 0.2.102: Is $y = x^2$ a solution to $x^2 y'' - 2y = 0$? Justify.
Exercise 0.2.103: Let $xy'' - y' = 0$. Try a solution of the form $y = x^r$. Is this a solution for some
$r$? If so, find all such $r$.
Exercise 0.2.104: Verify that $x = C_1 e^{t} + C_2$ is a solution to $x'' - x' = 0$. Find $C_1$ and $C_2$ so that $x$
satisfies $x(0) = 10$ and $x'(0) = 100$.
Exercise 0.2.105: Solve $\frac{d\varphi}{ds} = 8\varphi$ and $\varphi(0) = -9$.
Exercise 0.2.106: Solve:
a) $\frac{dx}{dt} = -4x$, $x(0) = 9$
b) $\frac{d^2 x}{dt^2} = -4x$, $x(0) = 1$, $x'(0) = 2$
c) $\frac{dp}{dq} = 3p$, $p(0) = 4$
d) $\frac{d^2 T}{dx^2} = 4T$, $T(0) = 0$, $T'(0) = 6$
0.3 Classification of differential equations
Note: less than 1 lecture or left as reading, §1.3 in [BD]
There are many types of differential equations, and we classify them into different
categories based on their properties. Let us quickly go over the most basic classification.
We already saw the distinction between ordinary and partial differential equations:
• Ordinary differential equations (ODE) are equations where the derivatives are taken
with respect to only one variable. That is, there is only one independent variable.
• Partial differential equations (PDE) are equations that depend on partial derivatives
of several variables. That is, there are several independent variables.
Let us see some examples of ordinary differential equations:
$$\frac{dy}{dt} = ky, \qquad \text{(Exponential growth)}$$
$$\frac{dy}{dt} = k(A - y), \qquad \text{(Newton's law of cooling)}$$
$$m \frac{d^2 x}{dt^2} + c \frac{dx}{dt} + kx = f(t). \qquad \text{(Mechanical vibrations)}$$
And of partial differential equations:
$$\frac{\partial y}{\partial t} + c \frac{\partial y}{\partial x} = 0, \qquad \text{(Transport equation)}$$
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}, \qquad \text{(Heat equation)}$$
$$\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}. \qquad \text{(Wave equation in 2 dimensions)}$$
If there are several equations working together, we have a so-called system of differential
equations. For example,
$$y' = x, \qquad x' = y$$
is a simple system of ordinary differential equations. Maxwell's equations for electromagnetics,
$$\nabla \cdot \vec{D} = \rho, \qquad \nabla \cdot \vec{B} = 0, \qquad \nabla \times \vec{E} = -\frac{\partial \vec{B}}{\partial t}, \qquad \nabla \times \vec{H} = \vec{J} + \frac{\partial \vec{D}}{\partial t},$$
are a system of partial differential equations. The divergence operator $\nabla \cdot$ and the curl
operator $\nabla \times$ can be written out in partial derivatives of the functions involved in the $x$, $y$,
and $z$ variables.
The next bit of information is the order of the equation (or system). The order is simply
the order of the largest derivative that appears. If the highest derivative that appears is
the first derivative, the equation is of first order. If the highest derivative that appears is
the second derivative, then the equation is of second order. For example, Newton's law
of cooling above is a first order equation, while the mechanical vibrations equation is a
second order equation. The equation governing transversal vibrations in a beam,
$$a^4 \frac{\partial^4 y}{\partial x^4} + \frac{\partial^2 y}{\partial t^2} = 0,$$
is a fourth order partial differential equation. It is fourth order as at least one derivative is
the fourth derivative. It does not matter that the derivative in $t$ is only of second order.
In the first chapter, we will start attacking first order ordinary differential equations,
that is, equations of the form $\frac{dy}{dx} = f(x, y)$. In general, lower order equations are easier to
work with and have simpler behavior, which is why we start with them.
We also distinguish how the dependent variables appear in the equation (or system).
In particular, we say an equation is linear if the dependent variable (or variables) and their
derivatives appear linearly, that is, only as first powers, they are not multiplied together,
and no other functions of the dependent variables appear. In other words, the equation is
a sum of terms, where each term is some function of the independent variables or some
function of the independent variables multiplied by a dependent variable or its derivative.
Otherwise, the equation is called nonlinear. For example, an ordinary differential equation
is linear if it can be put into the form
$$a_n(x) \frac{d^n y}{dx^n} + a_{n-1}(x) \frac{d^{n-1} y}{dx^{n-1}} + \cdots + a_1(x) \frac{dy}{dx} + a_0(x) y = b(x). \qquad (2)$$
The functions $a_0, a_1, \ldots, a_n$ are called the coefficients. The equation is allowed to depend
arbitrarily on the independent variable. So
$$e^x \frac{d^2 y}{dx^2} + \sin(x) \frac{dy}{dx} + x^2 y = \frac{1}{x} \qquad (3)$$
is still a linear equation as $y$ and its derivatives only appear linearly.
All the equations and systems above as examples are linear. It may not be immediately
obvious for Maxwell's equations unless you write out the divergence and curl in terms of
partial derivatives. Let us see some nonlinear equations. For example, Burgers' equation,
$$\frac{\partial y}{\partial t} + y \frac{\partial y}{\partial x} = \nu \frac{\partial^2 y}{\partial x^2},$$
is a nonlinear second order partial differential equation. It is nonlinear because $y$ and $\frac{\partial y}{\partial x}$
are multiplied together. The equation
$$\frac{dx}{dt} = x^2 \qquad (4)$$
is a nonlinear first order differential equation as there is a second power of the dependent
variable $x$.
A linear equation may further be called homogeneous if all terms depend on the dependent
variable. That is, if no term is a function of the independent variables alone. Otherwise,
the equation is called nonhomogeneous or inhomogeneous. For example, the exponential
growth equation, the wave equation, or the transport equation above are homogeneous.
The mechanical vibrations equation above is nonhomogeneous as long as $f(t)$ is not the
zero function. Similarly, if the ambient temperature $A$ is nonzero, Newton's law of cooling
is nonhomogeneous. A homogeneous linear ODE can be put into the form
$$a_n(x) \frac{d^n y}{dx^n} + a_{n-1}(x) \frac{d^{n-1} y}{dx^{n-1}} + \cdots + a_1(x) \frac{dy}{dx} + a_0(x) y = 0.$$
Compare to (2) and notice there is no function $b(x)$.
If the coefficients of a linear equation are actually constant functions, then the equation
is said to have constant coefficients. The coefficients are the functions multiplying the
dependent variable(s) or one of its derivatives, not the function $b(x)$ standing alone. A
constant coefficient nonhomogeneous ODE is an equation of the form
$$a_n \frac{d^n y}{dx^n} + a_{n-1} \frac{d^{n-1} y}{dx^{n-1}} + \cdots + a_1 \frac{dy}{dx} + a_0 y = b(x),$$
where $a_0, a_1, \ldots, a_n$ are all constants, but $b$ may depend on the independent variable $x$. The
mechanical vibrations equation above is a constant coefficient nonhomogeneous second
order ODE. The same nomenclature applies to PDEs, so the transport equation, heat
equation and wave equation are all examples of constant coefficient linear PDEs.
Finally, an equation (or system) is called autonomous if the equation does not depend on
the independent variable. For autonomous ordinary differential equations, the independent
variable is then thought of as time. An autonomous equation means an equation that does
not change with time. For example, Newton's law of cooling is autonomous, and so is equation
(4). On the other hand, mechanical vibrations or (3) are not autonomous.
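Software can reproduce part of this classification. For instance, sympy's classify_ode reports which solution methods apply to an ODE (a sketch we add; the exact tuple of hints depends on the sympy version):

```python
import sympy as sp

t = sp.symbols("t")
x = sp.Function("x")

# Equation (4): dx/dt = x^2 -- first order, nonlinear, autonomous.
eq = sp.Eq(x(t).diff(t), x(t)**2)

hints = sp.classify_ode(eq, x(t))
print("separable" in hints)   # True
print("1st_linear" in hints)  # False: the equation is not linear
```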
0.3.1 Exercises
Exercise 0.3.1: Classify the following equations. Are they ODE or PDE? Is it an equation or a
system? What is the order? Is it linear or nonlinear, and if it is linear, is it homogeneous, constant
coefficient? If it is an ODE, is it autonomous?
a) $\sin(t) \frac{d^2 x}{dt^2} + \cos(t) x = t^2$
b) $\frac{\partial u}{\partial x} + 3 \frac{\partial u}{\partial y} = xy$
c) $y'' + 3y + 5x = 0$, $x'' + x - y = 0$
d) $\frac{\partial^2 u}{\partial t^2} + u \frac{\partial^2 u}{\partial s^2} = 0$
e) $x'' + tx^2 = t$
f) $\frac{d^4 x}{dt^4} = 0$
Exercise 0.3.2: If $\vec{u} = (u_1, u_2, u_3)$ is a vector, we have the divergence $\nabla \cdot \vec{u} = \frac{\partial u_1}{\partial x} + \frac{\partial u_2}{\partial y} + \frac{\partial u_3}{\partial z}$
and curl $\nabla \times \vec{u} = \left( \frac{\partial u_3}{\partial y} - \frac{\partial u_2}{\partial z},\ \frac{\partial u_1}{\partial z} - \frac{\partial u_3}{\partial x},\ \frac{\partial u_2}{\partial x} - \frac{\partial u_1}{\partial y} \right)$. Notice that the curl of a vector is still a vector. Write
out Maxwell's equations in terms of partial derivatives and classify the system.
Exercise 0.3.3: Suppose $F$ is a linear function, that is, $F(x, y) = ax + by$ for constants $a$ and $b$.
What is the classification of equations of the form $F(y', y) = 0$?
Exercise 0.3.4: Write down an explicit example of a third order, linear, nonconstant coefficient,
nonautonomous, nonhomogeneous system of two ODE such that every derivative that could appear,
does appear.
Exercise 0.3.101: Classify the following equations. Are they ODE or PDE? Is it an equation or a
system? What is the order? Is it linear or nonlinear, and if it is linear, is it homogeneous, constant
coefficient? If it is an ODE, is it autonomous?
a) $\frac{\partial^2 v}{\partial x^2} + 3 \frac{\partial^2 v}{\partial y^2} = \sin(x)$
b) $\frac{dx}{dt} + \cos(t) x = t^2 + t + 1$
c) $\frac{d^7 F}{dx^7} = 3F(x)$
d) $y'' + 8y' = 1$
e) $x'' + tyx' = 0$, $y'' + txy = 0$
f) $\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial s^2} + u^2$
Exercise 0.3.102: Write down the general zeroth order linear ordinary differential equation. Write
down the general solution.
Exercise 0.3.103: For which $k$ is $\frac{dx}{dt} + x^k = t^{k+2}$ linear? Hint: there are two answers.
Chapter 1
First order equations

1.1 Integrals as solutions
Note: 1 lecture (or less), §1.2 in [EP], covered in §1.2 and §2.1 in [BD]
A first order ODE is an equation of the form
$$\frac{dy}{dx} = f(x, y),$$
or just
$$y' = f(x, y).$$
In general, there is no simple formula or procedure one can follow to find solutions. In the
next few lectures we will look at special cases where solutions are not difficult to obtain. In
this section, let us assume that $f$ is a function of $x$ alone, that is, the equation is
$$y' = f(x). \qquad (1.1)$$
We could just integrate (antidifferentiate) both sides with respect to $x$,
$$\int y'(x)\, dx = \int f(x)\, dx + C,$$
that is,
$$y(x) = \int f(x)\, dx + C.$$
This $y(x)$ is actually the general solution. So to solve (1.1), we find some antiderivative of
$f(x)$ and then we add an arbitrary constant to get the general solution.
Now is a good time to discuss a point about calculus notation and terminology.
Calculus textbooks muddy the waters by talking about the integral as primarily the
so-called indefinite integral. The indefinite integral is really the antiderivative (in fact the
whole one-parameter family of antiderivatives). There really exists only one integral and
that is the definite integral. The only reason for the indefinite integral notation is that we
can always write an antiderivative as a (definite) integral. That is, by the fundamental
theorem of calculus we can always write $\int f(x)\, dx + C$ as
$$\int_{x_0}^{x} f(t)\, dt + C.$$
Hence the terminology to integrate when we may really mean to antidifferentiate. Integration
is just one way to compute the antiderivative (and it is a way that always works, see the
following examples). Integration is defined as the area under the graph; it only happens to
also compute antiderivatives. For the sake of consistency, we will keep using the indefinite
integral notation when we want an antiderivative, and you should always think of the
definite integral as a way to write it.
Example 1.1.1: Find the general solution of $y' = 3x^2$.
Elementary calculus tells us that the general solution must be $y = x^3 + C$. Let us check
by differentiating: $y' = 3x^2$. We got precisely our equation back.
Normally, we also have an initial condition such as $y(x_0) = y_0$ for some two numbers
$x_0$ and $y_0$ ($x_0$ is usually 0, but not always). We can then write the solution as a definite
integral in a nice way. Suppose our problem is $y' = f(x)$, $y(x_0) = y_0$. Then the solution is
$$y(x) = \int_{x_0}^{x} f(s)\, ds + y_0. \qquad (1.2)$$
Let us check! We compute $y' = f(x)$, via the fundamental theorem of calculus, and
by Jupiter, $y$ is a solution. Is it the one satisfying the initial condition? Well, $y(x_0) =
\int_{x_0}^{x_0} f(x)\, dx + y_0 = y_0$. It is!
Do note that the definite integral and the indefinite integral (antidifferentiation) are
completely different beasts. The definite integral always evaluates to a number. Therefore,
(1.2) is a formula we can plug into the calculator or a computer, and it will be happy to
calculate specific values for us. We will easily be able to plot the solution and work with it
just like with any other function. It is not so crucial to always find a closed form for the
antiderivative.
Example 1.1.2: Solve
$$y' = e^{-x^2}, \qquad y(0) = 1.$$
By the preceding discussion, the solution must be
$$y(x) = \int_0^x e^{-s^2}\, ds + 1.$$
Here is a good way to make fun of your friends taking second semester calculus. Tell them
to find the closed form solution. Ha ha ha (bad math joke). It is not possible (in closed
form). There is absolutely nothing wrong with writing the solution as a definite integral.
This particular integral is in fact very important in statistics.
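And a computer is perfectly happy to evaluate the definite-integral solution numerically; a sketch using scipy's quadrature routine (our choice of tool):

```python
import numpy as np
from scipy.integrate import quad

def y(x):
    # y(x) = 1 + the integral of e^(-s^2) from s = 0 to s = x
    value, _err = quad(lambda s: np.exp(-s**2), 0, x)
    return 1 + value

print(y(1))  # approximately 1.7468
print(y(2))  # approximately 1.8821
```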
Using this method, we can also solve equations of the form
$$y' = f(y).$$
Let us write the equation in Leibniz notation,
$$\frac{dy}{dx} = f(y).$$
Now we use the inverse function theorem from calculus to switch the roles of $x$ and $y$ to
obtain
$$\frac{dx}{dy} = \frac{1}{f(y)}.$$
What we are doing seems like algebra with $dx$ and $dy$. It is tempting to just do algebra with
$dx$ and $dy$ as if they were numbers. And in this case it does work. Be careful, however, as
this sort of hand-waving calculation can lead to trouble, especially when more than one
independent variable is involved. At this point, we can simply integrate,
$$x(y) = \int \frac{1}{f(y)}\, dy + C.$$
Finally, we try to solve for $y$.
Example 1.1.3: Previously, we guessed $y' = ky$ (for some $k > 0$) has the solution $y = Ce^{kx}$.
We can now find the solution without guessing. First we note that $y = 0$ is a solution.
Henceforth, we assume $y \neq 0$. We write
$$\frac{dx}{dy} = \frac{1}{ky}.$$
We integrate to obtain
$$x(y) = x = \frac{1}{k} \ln |y| + D,$$
where $D$ is an arbitrary constant. Now we solve for $y$ (actually for $|y|$),
$$|y| = e^{kx - kD} = e^{-kD} e^{kx}.$$
If we replace $e^{-kD}$ with an arbitrary constant $C$, we can get rid of the absolute value bars
(which we can do as $D$ was arbitrary). In this way, we also incorporate the solution $y = 0$.
We get the same general solution as we guessed before, $y = Ce^{kx}$.
Example 1.1.4: Find the general solution of $y' = y^2$.
First we note that $y = 0$ is a solution. We can now assume that $y \neq 0$. Write
$$\frac{dx}{dy} = \frac{1}{y^2}.$$
We integrate to get
$$x = \frac{-1}{y} + C.$$
We solve for $y$: $y = \frac{1}{C - x}$. So the general solution is
$$y = \frac{1}{C - x} \qquad \text{or} \qquad y = 0.$$
Note the singularities of the solution. If for example $C = 1$, then the solution "blows up"
as we approach $x = 1$. See Figure 1.1. Generally, it is hard to tell from just looking at the
equation itself how the solution is going to behave. The equation $y' = y^2$ is very nice and
defined everywhere, but the solution is only defined on some interval $(-\infty, C)$ or $(C, \infty)$.
Usually when this happens we only consider one of these the solution. For example, if
we impose a condition $y(0) = 1$, then the solution is $y = \frac{1}{1-x}$, and we would consider this
solution only for $x$ on the interval $(-\infty, 1)$. In the figure, it is the left side of the graph.
[Figure 1.1: Plot of $y = \frac{1}{1-x}$.]
Classical problems leading to differential equations solvable by integration are problems
dealing with velocity, acceleration and distance. You have surely seen these problems
before in your calculus class.
Example 1.1.5: Suppose a car drives at a speed $e^{t/2}$ meters per second, where $t$ is time in
seconds. How far did the car get in 2 seconds (starting at $t = 0$)? How far in 10 seconds?
Let $x$ denote the distance the car traveled. The equation is
$$x' = e^{t/2}.$$
We just integrate this equation to get that
$$x(t) = 2e^{t/2} + C.$$
We still need to figure out $C$. We know that when $t = 0$, then $x = 0$. That is, $x(0) = 0$. So
$$0 = x(0) = 2e^{0/2} + C = 2 + C.$$
Thus $C = -2$ and
$$x(t) = 2e^{t/2} - 2.$$
Now we just plug in to get where the car is at 2 and at 10 seconds. We obtain
$$x(2) = 2e^{2/2} - 2 \approx 3.44 \text{ meters}, \qquad x(10) = 2e^{10/2} - 2 \approx 294 \text{ meters}.$$
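(A quick check of the arithmetic in Python, our addition:)

```python
import math

x = lambda t: 2 * math.exp(t / 2) - 2  # distance traveled after t seconds

print(x(2))   # 2e - 2, approximately 3.44 meters
print(x(10))  # 2e^5 - 2, approximately 294.83 meters
```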
Example 1.1.6: Suppose that the car accelerates at a rate of $t^2$ m/s$^2$. At time $t = 0$ the car is
at the 1 meter mark and is traveling at 10 m/s. Where is the car at time $t = 10$?
Well this is actually a second order problem. If $x$ is the distance traveled, then $x'$ is the
velocity, and $x''$ is the acceleration. The equation with initial conditions is
$$x'' = t^2, \qquad x(0) = 1, \qquad x'(0) = 10.$$
What if we say $x' = v$? Then we have the problem
$$v' = t^2, \qquad v(0) = 10.$$
Once we solve for $v$, we can integrate and find $x$.
Exercise 1.1.1: Solve for $v$, and then solve for $x$. Find $x(10)$ to answer the question.
1.1.1 Exercises
Exercise 1.1.2: Solve $\frac{dy}{dx} = x^2 + x$ for $y(1) = 3$.
Exercise 1.1.3: Solve $\frac{dy}{dx} = \sin(5x)$ for $y(0) = 2$.
Exercise 1.1.4: Solve $\frac{dy}{dx} = \frac{1}{x^2 - 1}$ for $y(0) = 0$.
Exercise 1.1.5: Solve $y' = y^3$ for $y(0) = 1$.
Exercise 1.1.6 (little harder): Solve $y' = (y-1)(y+1)$ for $y(0) = 3$.
Exercise 1.1.7: Solve $\frac{dy}{dx} = \frac{1}{y+1}$ for $y(0) = 0$.
Exercise 1.1.8 (harder): Solve $y'' = \sin x$ for $y(0) = 0$, $y'(0) = 2$.
Exercise 1.1.9: A spaceship is traveling at the speed $2t^2 + 1$ km/s ($t$ is time in seconds). It is pointing
directly away from earth and at time $t = 0$ it is 1000 kilometers from earth. How far from earth is it
at one minute from time $t = 0$?
Exercise 1.1.10: Solve $\frac{dx}{dt} = \sin(t^2) + t$, $x(0) = 20$. It is OK to leave your answer as a definite
integral.
Exercise 1.1.11: A dropped ball accelerates downwards at a constant rate 9.8 meters per second
squared. Set up the differential equation for the height above ground $h$ in meters. Then supposing
$h(0) = 100$ meters, how long does it take for the ball to hit the ground?
Exercise 1.1.12: Find the general solution of $y' = e^x$, and then $y' = e^y$.
Exercise 1.1.101: Solve $\frac{dy}{dx} = e^x + x$ and $y(0) = 10$.
Exercise 1.1.102: Solve $x' = \frac{1}{x^2}$, $x(1) = 1$.
Exercise 1.1.103: Solve $x' = \frac{1}{\cos(x)}$, $x(0) = \frac{\pi}{4}$.
Exercise 1.1.104: Sid is in a car traveling at speed $10t + 70$ miles per hour away from Las Vegas,
where $t$ is in hours. At $t = 0$, Sid is 10 miles away from Vegas. How far from Vegas is Sid 2 hours
later?
Exercise 1.1.105: Solve $y' = y^n$, $y(0) = 1$, where $n$ is a positive integer. Hint: You have to consider
different cases.
Exercise 1.1.106: The rate of change of the volume of a snowball that is melting is proportional
to the surface area of the snowball. Suppose the snowball is perfectly spherical. The volume (in
centimeters cubed) of a ball of radius $r$ centimeters is $(4/3)\pi r^3$. The surface area is $4\pi r^2$. Set up the
differential equation for how the radius $r$ is changing. Then, suppose that at time $t = 0$ minutes,
the radius is 10 centimeters. After 5 minutes, the radius is 8 centimeters. At what time $t$ will the
snowball be completely melted?
Exercise 1.1.107: Find the general solution to $y'''' = 0$. How many distinct constants do you need?
1.2 Slope fields
Note: 1 lecture, §1.3 in [EP], §1.1 in [BD]
As we said, the general first order equation we are studying looks like
$$y' = f(x, y).$$
A lot of the time, we cannot simply solve these kinds of equations explicitly. It would
be nice if we could at least figure out the shape and behavior of the solutions, or find
approximate solutions.
1.2.1 Slope fields
The equation $y' = f(x, y)$ gives you a slope at each point in the $(x, y)$-plane. And this is
the slope a solution $y(x)$ would have at $x$ if its value was $y$. In other words, $f(x, y)$ is the
slope of a solution whose graph runs through the point $(x, y)$. At a point $(x, y)$, we plot
a short line with the slope $f(x, y)$. For example, if $f(x, y) = xy$, then at the point $(2, 1.5)$ we
draw a short line of slope $xy = 2 \times 1.5 = 3$. So, if $y(x)$ is a solution and $y(2) = 1.5$, then the
equation mandates that $y'(2) = 3$. See Figure 1.2.
[Figure 1.2: The slope $y' = xy$ at $(2, 1.5)$.]
To get an idea of how solutions behave, we draw such lines at lots of points in the plane,
not just the point $(2, 1.5)$. We would ideally want to see the slope at every point, but that is
just not possible. Usually we pick a grid of points fine enough so that it shows the behavior,
but not too fine so that we can still recognize the individual lines. We call this picture the
slope field of the equation. See Figure 1.3 for the slope field of the
equation $y' = xy$. Usually in practice, one does not do this by hand, but has a computer do
the drawing.
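For instance, a computer-drawn slope field can be produced along the following lines, a sketch using numpy and matplotlib (our tooling choice; the book's own figures were made with the Genius software):

```python
import numpy as np
import matplotlib.pyplot as plt

f = lambda x, y: x * y  # right-hand side of y' = xy

# A 20-by-20 grid of points in the window [-3, 3] x [-3, 3].
X, Y = np.meshgrid(np.linspace(-3, 3, 20), np.linspace(-3, 3, 20))
S = f(X, Y)

# At each grid point draw a short segment of slope f(x, y), no arrowheads.
dx = 1 / np.sqrt(1 + S**2)
dy = S * dx
plt.quiver(X, Y, dx, dy, angles="xy", headlength=0, headaxislength=0,
           pivot="mid")
plt.title("Slope field of y' = xy")
plt.show()
```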
Suppose we are given a specific initial condition $y(x_0) = y_0$. A solution, that is, the
graph of the solution, would be a curve that follows the slopes we drew. For a few sample
solutions, see Figure 1.4. It is easy to roughly sketch (or at least imagine) possible solutions
in the slope field, just from looking at the slope field itself. You simply sketch a line that
roughly fits the little line segments and goes through your initial condition.
[Figure 1.3: Slope field of $y' = xy$.]

[Figure 1.4: Slope field of $y' = xy$ with a graph of solutions satisfying $y(0) = 0.2$, $y(0) = 0$, and $y(0) = -0.2$.]
By looking at the slope field we get a lot of information about the behavior of solutions
without having to solve the equation. For example, in Figure 1.4 we see what the solutions
do when the initial conditions are $y(0) > 0$, $y(0) = 0$ and $y(0) < 0$. A small change in the
initial condition causes quite different behavior. We see this behavior just from the slope
field and imagining what solutions ought to do.
We see a different behavior for the equation $y' = -y$. The slope field and a few solutions
are in Figure 1.5. If we think of moving from left to right (perhaps $x$ is
time and time is usually increasing), then we see that no matter what $y(0)$ is, all solutions
tend to zero as $x$ tends to infinity. Again that behavior is clear from simply looking at the
slope field itself.
1.2.2 Existence and uniqueness
We wish to ask two fundamental questions about the problem
$$y' = f(x, y), \qquad y(x_0) = y_0.$$
(i) Does a solution exist?
(ii) Is the solution unique (if it exists)?
[Figure 1.5: Slope field of $y' = -y$ with a graph of a few solutions.]
What do you think is the answer? The answer seems to be yes to both, does it not? Well,
pretty much. But there are cases when the answer to either question can be no.
Since generally the equations we encounter in applications come from real life situations,
it seems logical that a solution always exists. It also has to be unique if we believe our
universe is deterministic. If the solution does not exist, or if it is not unique, we have
probably not devised the correct model. Hence, it is good to know when things go wrong
and why.
Example 1.2.1: Attempt to solve:
$$y' = \frac{1}{x}, \qquad y(0) = 0.$$
Integrate to find the general solution $y = \ln |x| + C$. The solution does not exist at $x = 0$.
See Figure 1.6. The equation may have been written as the seemingly
harmless $xy' = 1$.
Example 1.2.2: Solve:
$$y' = 2\sqrt{|y|}, \qquad y(0) = 0.$$
See Figure 1.7. Note that $y = 0$ is a solution. But another solution is
the function
$$y(x) = \begin{cases} x^2 & \text{if } x \geq 0, \\ -x^2 & \text{if } x < 0. \end{cases}$$
It is hard to tell by staring at the slope field that the solution is not unique. Is there any
hope? Of course there is. We have the following theorem, known as Picard's theorem*.

* Named after the French mathematician Charles Émile Picard (1856–1941)
[Figure 1.6: Slope field of $y' = 1/x$.]

[Figure 1.7: Slope field of $y' = 2\sqrt{|y|}$ with two solutions satisfying $y(0) = 0$.]
Theorem 1.2.1 (Picard's theorem on existence and uniqueness). If $f(x, y)$ is continuous (as a
function of two variables) and $\frac{\partial f}{\partial y}$ exists and is continuous near some $(x_0, y_0)$, then a solution to
$$y' = f(x, y), \qquad y(x_0) = y_0,$$
exists (at least for $x$ in some small interval) and is unique.
Note that the problems $y' = 1/x$, $y(0) = 0$ and $y' = 2\sqrt{|y|}$, $y(0) = 0$ do not satisfy the
hypothesis of the theorem. Even if we can use the theorem, we ought to be careful about
this existence business. It is quite possible that the solution only exists for a short while.
Example 1.2.3: For some constant $A$, solve:
$$y' = y^2, \qquad y(0) = A.$$
We know how to solve this equation. First assume that $A \neq 0$, so $y$ is not equal to zero at
least for some $x$ near 0. So $x' = 1/y^2$, so $x = -1/y + C$, so $y = \frac{1}{C - x}$. If $y(0) = A$, then $C = 1/A$ so
$$y = \frac{1}{1/A - x}.$$
If $A = 0$, then $y = 0$ is a solution.
For example, when $A = 1$ the solution "blows up" at $x = 1$. Hence, the solution does
not exist for all $x$ even if the equation is nice everywhere. The equation $y' = y^2$ certainly
looks nice.
For most of this course we will be interested in equations where existence and uniqueness
holds, and in fact holds "globally" unlike for the equation $y' = y^2$.
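The blow-up is visible numerically as well; a short sketch with scipy's solve_ivp (our choice of solver):

```python
from scipy.integrate import solve_ivp

# y' = y^2, y(0) = 1; the exact solution y = 1/(1 - x) blows up at x = 1.
sol = solve_ivp(lambda x, y: y**2, (0, 0.999), [1.0], rtol=1e-8)

print(sol.y[0, -1])  # roughly 1000, matching the exact value 1/(1 - 0.999)
```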
1.2.3 Exercises
Exercise 1.2.1: Sketch slope field for $y' = e^{x-y}$. How do the solutions behave as $x$ grows? Can you
guess a particular solution by looking at the slope field?
Exercise 1.2.2: Sketch slope field for $y' = x^2$.
Exercise 1.2.3: Sketch slope field for $y' = y^2$.
Exercise 1.2.4: Is it possible to solve the equation $y' = \frac{xy}{\cos x}$ for $y(0) = 1$? Justify.
Exercise 1.2.5: Is it possible to solve the equation $y' = y\sqrt{|x|}$ for $y(0) = 0$? Is the solution unique?
Justify.
Exercise 1.2.6: Match equations $y' = 1 - x$, $y' = x - 2y$, $y' = x(1 - y)$ to slope fields. Justify.

[Three slope fields, labeled a), b), c), not shown.]
Exercise 1.2.7 (challenging): Take $y' = f(x, y)$, $y(0) = 0$, where $f(x, y) > 1$ for all $x$ and $y$.
If the solution exists for all $x$, can you say what happens to $y(x)$ as $x$ goes to positive infinity?
Explain.
Exercise 1.2.8 (challenging): Take $(y - x)y' = 0$, $y(0) = 0$.
a) Find two distinct solutions.
b) Explain why this does not violate Picard's theorem.
Exercise 1.2.9: Suppose $y' = f(x, y)$. What will the slope field look like, explain and sketch an
example, if you know the following about $f(x, y)$:
a) $f$ does not depend on $y$.
b) $f$ does not depend on $x$.
c) $f(t, t) = 0$ for any number $t$.
d) $f(x, 0) = 0$ and $f(x, 1) = 1$ for all $x$.
Exercise 1.2.10: Find a solution to $y' = |y|$, $y(0) = 0$. Does Picard's theorem apply?
Exercise 1.2.11: Take an equation $y' = (y - 2x)g(x, y) + 2$ for some function $g(x, y)$. Can you
solve the problem for the initial condition $y(0) = 0$, and if so what is the solution?
Exercise 1.2.12 (challenging): Suppose y' = f(x, y) is such that f(x, 1) = 0 for every x, f is continuous, and ∂f/∂y exists and is continuous for every x and y.

a) Guess a solution given the initial condition y(0) = 1.

b) Can graphs of two solutions of the equation for different initial conditions ever intersect?

c) Given y(0) = 0, what can you say about the solution? In particular, can y(x) > 1 for any x? Can y(x) = 1 for any x? Why or why not?
Exercise 1.2.101: Sketch the slope field of y' = y³. Can you visually find the solution that satisfies y(0) = 0?

Exercise 1.2.102: Is it possible to solve y' = xy for y(0) = 0? Is the solution unique?

Exercise 1.2.103: Is it possible to solve y' = x/(x² − 1) for y(1) = 0?

Exercise 1.2.104: Match equations y' = sin x, y' = cos y, y' = y cos(x) to slope fields. Justify.

[Three slope fields labeled a), b), c).]

Exercise 1.2.105 (tricky): Suppose

f(y) = 0 if y > 0,   and   f(y) = 1 if y ≤ 0.

Does y' = f(y), y(0) = 0 have a continuously differentiable solution? Does Picard apply? Why, or why not?

Exercise 1.2.106: Consider an equation of the form y' = f(x) for some continuous function f, and an initial condition y(x₀) = y₀. Does a solution exist for all x? Why or why not?

1.3 Separable equations
Note: 1 lecture, §1.4 in [EP], §2.2 in [BD]
When a differential equation is of the form y' = f(x), we can just integrate: y = ∫ f(x) dx + C. Unfortunately this method no longer works for the general form of the equation y' = f(x, y). Integrating both sides yields

y = ∫ f(x, y) dx + C.

Notice the dependence on y in the integral.

1.3.1 Separable equations

We say a differential equation is separable if we can write it as

y' = f(x)g(y),

for some functions f(x) and g(y). Let us write the equation in the Leibniz notation

dy/dx = f(x)g(y).

Then we rewrite the equation as

dy/g(y) = f(x) dx.

Both sides look like something we can integrate. We obtain

∫ dy/g(y) = ∫ f(x) dx + C.

If we can find closed form expressions for these two integrals, we can, perhaps, solve for y.
Example 1.3.1: Take the equation

y' = xy.

Note that y = 0 is a solution. We will remember that fact and assume y ≠ 0 from now on, so that we can divide by y. Write the equation as dy/dx = xy. Then

∫ dy/y = ∫ x dx + C.

We compute the antiderivatives to get

ln |y| = x²/2 + C,

or

|y| = e^{x²/2 + C} = e^{x²/2} e^C = D e^{x²/2},

where D > 0 is some constant. Because y = 0 is also a solution, and because of the absolute value, we can write

y = D e^{x²/2}

for any number D (including zero or negative).

We check:

y' = D x e^{x²/2} = x (D e^{x²/2}) = xy.

Yay!
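If a computer algebra system is available, it can confirm the computation; here is a sketch assuming SymPy is installed (the symbol names are our own):

```python
# Ask SymPy for the general solution of y' = x*y and compare with the above.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
sol = sp.dsolve(sp.Eq(y(x).diff(x), x * y(x)), y(x))
print(sol)  # Eq(y(x), C1*exp(x**2/2)), i.e. y = D e^{x^2/2} with D = C1
```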
We should be a little bit more careful with this method. You may be worried that we integrated in two different variables. We seemingly did a different operation to each side. Let us work through this method more rigorously. Take

dy/dx = f(x)g(y).

We rewrite the equation as follows. Note that y = y(x) is a function of x and so is dy/dx:

(1/g(y)) (dy/dx) = f(x).

We integrate both sides with respect to x:

∫ (1/g(y)) (dy/dx) dx = ∫ f(x) dx + C.

We use the change of variables formula (substitution) on the left-hand side:

∫ (1/g(y)) dy = ∫ f(x) dx + C.

And we are done.
1.3.2 Implicit solutions

We sometimes get stuck even if we can do the integration. Consider the separable equation

y' = xy/(y² + 1).

We separate variables,

((y² + 1)/y) dy = (y + 1/y) dy = x dx.
We integrate to get

y²/2 + ln |y| = x²/2 + C,

or perhaps the easier looking expression (where D = 2C)

y² + 2 ln |y| = x² + D.
It is not easy to find the solution explicitly as it is hard to solve for y. We, therefore, leave the solution in this form and call it an implicit solution. It is still easy to check that an implicit solution satisfies the differential equation. In this case, we differentiate with respect to x, and remember that y is a function of x, to get

y' (2y + 2/y) = 2x.

Multiply both sides by y and divide by 2(y² + 1) and you will get exactly the differential equation. We leave this computation to the reader.

If you have an implicit solution, and you want to compute values for y, you might have to be tricky. You might get multiple solutions y for each x, so you have to pick one. Sometimes you can graph x as a function of y, and then flip your paper. Sometimes you have to do more.

Computers are also good at some of these tricks. More advanced mathematical software usually has some way of plotting solutions to implicit equations. For example, for C = 0, if you plot all the points (x, y) that are solutions to y² + 2 ln |y| = x², you find the two curves in Figure 1.8 on the following page. This is not quite a graph of a function. For each x there are two choices of y. To find a function you would have to pick one of these two curves. You pick the one that satisfies your initial condition if you have one. For example, the top curve satisfies the condition y(1) = 1. So for each C we really got two solutions. As you can see, computing values from an implicit solution can be somewhat tricky. But sometimes, an implicit solution is the best we can do.
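As an illustration of what such software does, here is a sketch (assuming NumPy and Matplotlib are available) that draws the implicit solution as the zero level set of G(x, y) = y² + 2 ln |y| − x²:

```python
# Plot the curve y^2 + 2*ln|y| = x^2 as the level set G(x, y) = 0.
import numpy as np
import matplotlib.pyplot as plt

X, Y = np.meshgrid(np.linspace(-5, 5, 400), np.linspace(-5, 5, 400))
with np.errstate(divide='ignore'):      # ln|y| diverges along y = 0
    G = Y**2 + 2 * np.log(np.abs(Y)) - X**2
plt.contour(X, Y, G, levels=[0])        # one curve above and one below y = 0
plt.xlabel('x'); plt.ylabel('y')
plt.show()
```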
The equation above also has the solution y = 0. So the general solution is

y² + 2 ln |y| = x² + C,   and   y = 0.

These outlying solutions such as y = 0 are sometimes called singular solutions.

1.3.3 Examples of separable equations

Example 1.3.2: Solve x² y' = 1 − x² + y² − x² y², y(1) = 0.

Factor the right-hand side:

x² y' = (1 − x²)(1 + y²).
[Figure 1.8: The implicit solution y² + 2 ln |y| = x² to y' = xy/(y² + 1).]
Separate variables, integrate, and solve for y:

y'/(1 + y²) = (1 − x²)/x²,
y'/(1 + y²) = 1/x² − 1,
arctan(y) = −1/x − x + C,
y = tan(−1/x − x + C).

Solve for the initial condition, 0 = tan(−2 + C), to get C = 2 (or C = 2 + π, or C = 2 + 2π, etc.). The particular solution we seek is, therefore,

y = tan(−1/x − x + 2).

Example 1.3.3: Bob made a cup of coffee, and Bob likes to drink his coffee only once it reaches 60 degrees Celsius and will not burn him. Initially, at time t = 0 minutes, Bob measured the temperature and the coffee was 89 degrees Celsius. One minute later, Bob measured the coffee again and it was 85 degrees. The temperature of the room (the ambient temperature) is 22 degrees. When should Bob start drinking?

Let T be the temperature of the coffee in degrees Celsius, and let A be the ambient (room) temperature, also in degrees Celsius. Newton's law of cooling states that the rate at which the temperature of the coffee is changing is proportional to the difference between the ambient temperature and the temperature of the coffee. That is,

dT/dt = k(A − T),
for some constant k. For our setup A = 22, T(0) = 89, T(1) = 85. We separate variables and integrate (let C and D denote arbitrary constants):

(1/(T − A)) (dT/dt) = −k,
ln(T − A) = −kt + C   (note that T − A > 0),
T − A = D e^{−kt},
T = A + D e^{−kt}.

That is, T = 22 + D e^{−kt}. We plug in the first condition: 89 = T(0) = 22 + D, and hence D = 67. So T = 22 + 67 e^{−kt}. The second condition says 85 = T(1) = 22 + 67 e^{−k}. Solving for k, we get k = −ln((85 − 22)/67) ≈ 0.0616. Now we solve for the time t that gives us a temperature of 60 degrees. Namely, we solve

60 = 22 + 67 e^{−0.0616 t}

to get t = −ln((60 − 22)/67)/0.0616 ≈ 9.21 minutes. So Bob can begin to drink the coffee at just over 9 minutes from the time Bob made it. That is probably about the amount of time it took us to calculate how long it would take. See Figure 1.9.
[Figure 1.9: Graphs of the coffee temperature function T(t). On the left, horizontal lines are drawn at temperatures 60, 85, and 89. Vertical lines are drawn at t = 1 and t = 9.21. Notice that the temperature of the coffee hits 85 at t = 1, and 60 at t ≈ 9.21. On the right, the graph is over a longer period of time, with a horizontal line at the ambient temperature 22.]
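The arithmetic above is easy to redo on a computer; a minimal sketch (plain Python, standard library only; the variable names are ours):

```python
# Newton's law of cooling: T(t) = A + D e^{-kt}, with A = 22, T(0) = 89, T(1) = 85.
import math

A, D = 22.0, 89.0 - 22.0              # D = 67 from the first measurement
k = -math.log((85.0 - A) / D)         # from the second measurement, k ≈ 0.0616
t = -math.log((60.0 - A) / D) / k     # time at which T(t) = 60
print(k, t)                           # ≈ 0.0616 and ≈ 9.21 minutes
```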
Example 1.3.4: Find the general solution to y' = −xy²/3 (including singular solutions).

First note that y = 0 is a solution (a singular solution). Now assume that y ≠ 0, so that we may divide by y²:

(−3/y²) y' = x,
3/y = x²/2 + C,
y = 3/(x²/2 + C) = 6/(x² + 2C).

So the general solution is

y = 6/(x² + 2C),   and   y = 0.

1.3.4 Exercises

Exercise 1.3.1: Solve y' = x/y.

Exercise 1.3.2: Solve y' = x² y.

Exercise 1.3.3: Solve dx/dt = (x² − 1) t, for x(0) = 0.

Exercise 1.3.4: Solve dx/dt = x sin(t), for x(0) = 1.

Exercise 1.3.5: Solve dy/dx = xy + x + y + 1. Hint: Factor the right-hand side.

Exercise 1.3.6: Solve xy' = y + 2x² y, where y(1) = 1.

Exercise 1.3.7: Solve dy/dx = (y² + 1)/(x² + 1), for y(0) = 1.

Exercise 1.3.8: Find an implicit solution for dy/dx = (x² + 1)/(y² + 1), for y(0) = 1.

Exercise 1.3.9: Find an explicit solution for y' = x e^{−y}, y(0) = 1.

Exercise 1.3.10: Find an explicit solution for xy' = e^{−y}, for y(1) = 1.

Exercise 1.3.11: Find an explicit solution for y' = y e^{−x²}, y(0) = 1. It is alright to leave a definite integral in your answer.

Exercise 1.3.12: Suppose a cup of coffee is at 100 degrees Celsius at time t = 0, it is at 70 degrees at t = 10 minutes, and it is at 50 degrees at t = 20 minutes. Compute the ambient temperature.

Exercise 1.3.101: Solve y' = 2xy.

Exercise 1.3.102: Solve x' = 3xt² − 3t², x(0) = 2.

Exercise 1.3.103: Find an implicit solution for x' = 1/(3x² + 1), x(0) = 1.

Exercise 1.3.104: Find an explicit solution to xy' = y², y(1) = 1.
Exercise 1.3.105: Find an implicit solution to y' = sin(x)/cos(y).

Exercise 1.3.106: Take Example 1.3.3 with the same numbers: 89 degrees at t = 0, 85 degrees at t = 1, and ambient temperature of 22 degrees. Suppose these temperatures were measured with precision of ±0.5 degrees. Given this imprecision, the time it takes the coffee to cool to (exactly) 60 degrees is also only known in a certain range. Find this range. Hint: Think about what kind of error makes the cooling time longer and what kind makes it shorter.

Exercise 1.3.107: A population x of rabbits on an island is modeled by x' = x − (1/1000) x², where the independent variable is time in months. At time t = 0, there are 40 rabbits on the island.

a) Find the solution to the equation with the initial condition.

b) How many rabbits are on the island in 1 month, 5 months, 10 months, 15 months (round to the nearest integer)?

1.4 Linear equations and the integrating factor
Note: 1 lecture, §1.5 in [EP], §2.1 in [BD]
One of the most important types of equations we will learn how to solve are the so-called linear equations. In fact, the majority of the course is about linear equations. In this section we focus on the first order linear equation. A first order equation is linear if we can put it into the form

y' + p(x)y = f(x).   (1.3)

The word “linear” means linear in y and y'; no higher powers nor functions of y or y' appear. The dependence on x can be more complicated.

Solutions of linear equations have nice properties. For example, the solution exists wherever p(x) and f(x) are defined, and has the same regularity (read: it is just as nice). But most importantly for us right now, there is a method for solving linear first order equations.

The trick is to rewrite the left-hand side of (1.3) as a derivative of a product of y with another function. To this end we find a function r(x) such that

r(x) y' + r(x) p(x) y = d/dx [ r(x) y ].

This is the left-hand side of (1.3) multiplied by r(x). So if we multiply (1.3) by r(x), we obtain

d/dx [ r(x) y ] = r(x) f(x).

Now we integrate both sides. The right-hand side does not depend on y and the left-hand side is written as a derivative of a function. Afterwards, we solve for y. The function r(x) is called the integrating factor and the method is called the integrating factor method.

We are looking for a function r(x) such that if we differentiate it, we get the same function back multiplied by p(x). That seems like a job for the exponential function! Let

r(x) = e^{∫ p(x) dx}.

We compute:

y' + p(x) y = f(x),
e^{∫ p(x) dx} y' + e^{∫ p(x) dx} p(x) y = e^{∫ p(x) dx} f(x),
d/dx [ e^{∫ p(x) dx} y ] = e^{∫ p(x) dx} f(x),
e^{∫ p(x) dx} y = ∫ e^{∫ p(x) dx} f(x) dx + C,
y = e^{−∫ p(x) dx} ( ∫ e^{∫ p(x) dx} f(x) dx + C ).
Of course, to get a closed form formula for y, we need to be able to find a closed form formula for the integrals appearing above.

Example 1.4.1: Solve

y' + 2xy = e^{x − x²},   y(0) = −1.

First note that p(x) = 2x and f(x) = e^{x − x²}. The integrating factor is r(x) = e^{∫ p(x) dx} = e^{x²}. We multiply both sides of the equation by r(x) to get

e^{x²} y' + 2x e^{x²} y = e^{x − x²} e^{x²},
d/dx [ e^{x²} y ] = e^{x}.

We integrate:

e^{x²} y = e^{x} + C,
y = e^{x − x²} + C e^{−x²}.

Next, we solve for the initial condition: −1 = y(0) = 1 + C, so C = −2. The solution is

y = e^{x − x²} − 2 e^{−x²}.
Note that we do not care which antiderivative we take when computing e^{∫ p(x) dx}. You can always add a constant of integration, but those constants will not matter in the end.

Exercise 1.4.1: Try it! Add a constant of integration to the integral in the integrating factor and show that the solution you get in the end is the same as what we got above.

Advice: Do not try to remember the formula itself, that is way too hard. It is easier to remember the process and repeat it.
Since we cannot always evaluate the integrals in closed form, it is useful to know how to write the solution in definite integral form. A definite integral is something that you can plug into a computer or a calculator. Suppose we are given

y' + p(x) y = f(x),   y(x₀) = y₀.

Look at the solution and write the integrals as definite integrals:

y(x) = e^{−∫_{x₀}^{x} p(s) ds} ( ∫_{x₀}^{x} e^{∫_{x₀}^{t} p(s) ds} f(t) dt + y₀ ).   (1.4)

You should be careful to properly use dummy variables here. If you now plug such a formula into a computer or a calculator, it will be happy to give you numerical answers.
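As a sketch of how one plugs (1.4) into a computer, the following (assuming SciPy is available; the helper names are ours) evaluates the formula by numerical quadrature for Example 1.4.1, where the closed form answer is known and can be compared:

```python
# Evaluate formula (1.4) numerically for y' + 2x y = e^{x - x^2}, y(0) = -1.
import math
from scipy.integrate import quad

p = lambda s: 2 * s
f = lambda t: math.exp(t - t**2)
x0, y0 = 0.0, -1.0

def y(x):
    P = lambda u: quad(p, x0, u)[0]            # P(u) = integral of p from x0 to u
    integrand = lambda t: math.exp(P(t)) * f(t)
    return math.exp(-P(x)) * (quad(integrand, x0, x)[0] + y0)

exact = lambda x: math.exp(x - x**2) - 2 * math.exp(-x**2)
print(y(1.0), exact(1.0))                      # both ≈ 0.2642
```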
Exercise 1.4.2: Check that y(x₀) = y₀ in formula (1.4).

Exercise 1.4.3: Write the solution of the following problem as a definite integral, but try to simplify as far as you can. You will not be able to find the solution in closed form.

y' + y = e^{x² − x},   y(0) = 10.
Remark 1.4.1: Before we move on, we should note some interesting properties of linear equations. First, for the linear initial value problem y' + p(x)y = f(x), y(x₀) = y₀, there is always an explicit formula (1.4) for the solution. Second, it follows from the formula (1.4) that if p(x) and f(x) are continuous on some interval (a, b), then the solution y(x) exists and is differentiable on (a, b). Compare with the simple nonlinear example we have seen previously, y' = y², and compare to Theorem 1.2.1.
Example 1.4.2: Let us discuss a common simple application of linear equations. This type
of problem is used often in real life. For example, linear equations are used in figuring out
the concentration of chemicals in bodies of water (rivers and lakes).
A 100 liter tank contains 10 kilograms of salt dissolved in 60 liters of water. Solution of water and salt (brine) with concentration of 0.1 kilograms per liter is flowing in at the rate of 5 liters a minute. The solution in the tank is well stirred and flows out at a rate of 3 liters a minute. How much salt is in the tank when the tank is full?

[Diagram: a tank holding 60 L of water and 10 kg of salt, with inflow 5 L/min at 0.1 kg/L and outflow 3 L/min.]

Let us come up with the equation. Let x denote the kilograms of salt in the tank, and let t denote the time in minutes. For a small change Δt in time, the change in x (denoted Δx) is approximately

Δx ≈ (rate in × concentration in)Δt − (rate out × concentration out)Δt.
Dividing through by Δt and taking the limit Δt → 0, we see that

dx/dt = (rate in × concentration in) − (rate out × concentration out).

In our example,

rate in = 5,
concentration in = 0.1,
rate out = 3,
concentration out = x/volume = x/(60 + (5 − 3)t).

Our equation is, therefore,

dx/dt = (5 × 0.1) − 3 (x/(60 + 2t)).

Or in the form (1.3),

dx/dt + (3/(60 + 2t)) x = 0.5.
Let us solve. The integrating factor is

r(t) = exp( ∫ (3/(60 + 2t)) dt ) = exp( (3/2) ln(60 + 2t) ) = (60 + 2t)^{3/2}.

We multiply both sides of the equation by r(t) to get

(60 + 2t)^{3/2} (dx/dt) + (60 + 2t)^{3/2} (3/(60 + 2t)) x = 0.5 (60 + 2t)^{3/2},
d/dt [ (60 + 2t)^{3/2} x ] = 0.5 (60 + 2t)^{3/2},
(60 + 2t)^{3/2} x = ∫ 0.5 (60 + 2t)^{3/2} dt + C,
x = (60 + 2t)^{−3/2} ∫ ((60 + 2t)^{3/2}/2) dt + C (60 + 2t)^{−3/2},
x = (60 + 2t)^{−3/2} (1/10) (60 + 2t)^{5/2} + C (60 + 2t)^{−3/2},
x = (60 + 2t)/10 + C (60 + 2t)^{−3/2}.
We need to find C. We know that at t = 0, x = 10. So

10 = x(0) = 60/10 + C (60)^{−3/2} = 6 + C (60)^{−3/2},

or

C = 4 (60^{3/2}) ≈ 1859.03.

We are interested in x when the tank is full. The tank is full when 60 + 2t = 100, or when t = 20. So

x(20) = (60 + 40)/10 + C (60 + 40)^{−3/2} ≈ 10 + 1859.03 (100)^{−3/2} ≈ 11.86.

[Figure 1.10: Graph of the solution x, kilograms of salt in the tank, at time t.]

See Figure 1.10 for the graph of x over t. The concentration when the tank is full is approximately 0.1186 kg/liter, and we started with 1/6, or about 0.167 kg/liter.
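A quick numeric sanity check of this answer (a sketch assuming SciPy's solve_ivp is available):

```python
# Integrate dx/dt = 0.5 - 3x/(60 + 2t), x(0) = 10, up to t = 20 (tank full).
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, x: 0.5 - 3 * x / (60 + 2 * t),
                (0, 20), [10.0], rtol=1e-8)
print(sol.y[0, -1])   # ≈ 11.86 kg of salt, matching the closed form answer
```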
1.4.1 Exercises

In the exercises, feel free to leave your answer as a definite integral if a closed form solution cannot be found. If you can find a closed form solution, you should give that.
Exercise 1.4.4: Solve y' + xy = x.

Exercise 1.4.5: Solve y' + 6y = e^{x}.

Exercise 1.4.6: Solve y' + 3x² y = sin(x) e^{−x³}, with y(0) = 1.

Exercise 1.4.7: Solve y' + cos(x) y = cos(x).

Exercise 1.4.8: Solve (1/(x² + 1)) y' + xy = 3, with y(0) = 0.

Exercise 1.4.9: Suppose there are two lakes located on a stream. Clean water flows into the first lake, then the water from the first lake flows into the second lake, and then water from the second lake flows further downstream. The in and out flow from each lake is 500 liters per hour. The first lake contains 100 thousand liters of water and the second lake contains 200 thousand liters of water. A truck with 500 kg of toxic substance crashes into the first lake. Assume that the water is being continually mixed perfectly by the stream.

a) Find the concentration of toxic substance as a function of time in both lakes.

b) When will the concentration in the first lake be below 0.001 kg per liter?

c) When will the concentration in the second lake be maximal?

Exercise 1.4.10: Newton's law of cooling states that dx/dt = −k(x − A), where x is the temperature, t is time, A is the ambient temperature, and k > 0 is a constant. Suppose that A = A₀ cos(ωt) for some constants A₀ and ω. That is, the ambient temperature oscillates (for example night and day temperatures).

a) Find the general solution.

b) In the long term, will the initial conditions make much of a difference? Why or why not?
Exercise 1.4.11: Initially 5 grams of salt are dissolved in 20 liters of water. Brine with concentration
of salt 2 grams of salt per liter is added at a rate of 3 liters a minute. The tank is mixed well and is
drained at 3 liters a minute. How long does the process have to continue until there are 20 grams of
salt in the tank?
Exercise 1.4.12: Initially a tank contains 10 liters of pure water. Brine of unknown (but constant)
concentration of salt is flowing in at 1 liter per minute. The water is mixed well and drained at 1
liter per minute. In 20 minutes there are 15 grams of salt in the tank. What is the concentration of
salt in the incoming brine?
Exercise 1.4.101: Solve y' + 3x² y = x².

Exercise 1.4.102: Solve y' + 2 sin(2x) y = 2 sin(2x), y(π/2) = 3.

Exercise 1.4.103: Suppose a water tank is being pumped out at 3 L/min. The water tank starts at 10 L of clean water. Water with toxic substance is flowing into the tank at 2 L/min, with concentration 20t g/L at time t. When the tank is half empty, how many grams of toxic substance are in the tank (assuming perfect mixing)?
Exercise 1.4.104: There is bacteria on a plate and a toxic substance is being added that slows down the rate of growth of the bacteria. That is, suppose that dP/dt = (2 − 0.1t)P. If P(0) = 1000, find the population at t = 5.

Exercise 1.4.105: A cylindrical water tank has water flowing in at I cubic meters per second. Let A be the area of the cross section of the tank in square meters. Suppose water is flowing out from the bottom of the tank at a rate proportional to the height of the water level. Set up the differential equation for h, the height of the water, introducing and naming constants that you need. You should also give the units for your constants.
1.5 Substitution
Note: 1 lecture, can safely be skipped, §1.6 in [EP], not in [BD]
Just as when solving integrals, one method to try is to change variables to end up with
a simpler equation to solve.
1.5.1 Substitution

The equation

y' = (x − y + 1)²

is neither separable nor linear. What can we do? How about trying to change variables, so that in the new variables the equation is simpler. We use another variable v, which we treat as a function of x. Let us try

v = x − y + 1.

We need to figure out y' in terms of v', v, and x. We differentiate (in x) to obtain v' = 1 − y'. So y' = 1 − v'. We plug this into the equation to get

1 − v' = v².
In other words, v' = 1 − v². Such an equation we know how to solve by separating variables:

(1/(1 − v²)) dv = dx.

So

(1/2) ln |(v + 1)/(v − 1)| = x + C,

or

(v + 1)/(v − 1) = e^{2x + 2C},   or   (v + 1)/(v − 1) = D e^{2x},

for some constant D. Note that v = 1 and v = −1 are also solutions.

Now we need to “unsubstitute” to obtain

(x − y + 2)/(x − y) = D e^{2x},

and also the two solutions x − y + 1 = 1 or y = x, and x − y + 1 = −1 or y = x + 2. We solve the first equation for y:

x − y + 2 = (x − y) D e^{2x},
x − y + 2 = D x e^{2x} − y D e^{2x},
−y + y D e^{2x} = D x e^{2x} − x − 2,
y (−1 + D e^{2x}) = D x e^{2x} − x − 2,
y = (D x e^{2x} − x − 2)/(D e^{2x} − 1).

Note that D = 0 gives y = x + 2, but no value of D gives the solution y = x.
Substitution in differential equations is applied in much the same way that it is applied
in calculus. You guess. Several different substitutions might work. There are some general
patterns to look for. We summarize a few of these in a table.
When you see        Try substituting

y y'                v = y²
y² y'               v = y³
(cos y) y'          v = sin y
(sin y) y'          v = cos y
y' e^{y}            v = e^{y}
Usually you try to substitute in the “most complicated” part of the equation with the
hopes of simplifying it. The table above is just a rule of thumb. You might have to modify
your guesses. If a substitution does not work (it does not make the equation any simpler),
try a different one.
1.5.2 Bernoulli equations

There are some forms of equations where there is a general rule for substitution that always works. One such example is the so-called Bernoulli equation∗:

y' + p(x) y = q(x) yⁿ.

This equation looks a lot like a linear equation except for the yⁿ. If n = 0 or n = 1, then the equation is linear and we can solve it. Otherwise, the substitution v = y^{1−n} transforms the Bernoulli equation into a linear equation. Note that n need not be an integer.

Example 1.5.1: Solve

xy' + y(x + 1) + xy⁵ = 0,   y(1) = 1.

The equation is a Bernoulli equation with p(x) = (x + 1)/x and q(x) = −1. We substitute

v = y^{1−5} = y^{−4},   v' = −4 y^{−5} y'.

In other words, (−1/4) y⁵ v' = y'. So

xy' + y(x + 1) + xy⁵ = 0,
(−xy⁵/4) v' + y(x + 1) + xy⁵ = 0,

∗ There are several things called Bernoulli equations, this is just one of them. The Bernoullis were a prominent Swiss family of mathematicians. These particular equations are named for Jacob Bernoulli (1654–1705).
(−x/4) v' + y^{−4}(x + 1) + x = 0,
(−x/4) v' + v(x + 1) + x = 0,

and finally

v' − (4(x + 1)/x) v = 4.

The equation is now linear. We can use the integrating factor method. In particular, we use formula (1.4). We assume that x > 0, so |x| = x. This assumption is OK, as our initial condition is at x = 1 > 0. Let us compute the integrating factor. Here p(s) from formula (1.4) is −4(s + 1)/s:

e^{∫₁ˣ p(s) ds} = exp( ∫₁ˣ (−4(s + 1)/s) ds ) = e^{−4x − 4 ln(x) + 4} = e^{−4x + 4} x^{−4} = e^{−4x + 4}/x⁴,

e^{−∫₁ˣ p(s) ds} = e^{4x + 4 ln(x) − 4} = e^{4x − 4} x⁴.

We now plug in to (1.4):

v(x) = e^{−∫₁ˣ p(s) ds} ( ∫₁ˣ e^{∫₁ᵗ p(s) ds} · 4 dt + 1 )
     = e^{4x − 4} x⁴ ( ∫₁ˣ 4 (e^{−4t + 4}/t⁴) dt + 1 ).

The integral in this expression is not possible to find in closed form. As we said before, it is perfectly fine to have a definite integral in our solution. Now “unsubstitute”:

y^{−4} = e^{4x − 4} x⁴ ( 4 ∫₁ˣ (e^{−4t + 4}/t⁴) dt + 1 ),

y = e^{−x + 1} / ( x ( 4 ∫₁ˣ (e^{−4t + 4}/t⁴) dt + 1 )^{1/4} ).

1.5.3 Homogeneous equations
Another type of equations we can solve by substitution are the so-called homogeneous equations. Suppose that we can write the differential equation as

y' = F(y/x).

Here we try the substitution

v = y/x,   and therefore   y' = v + xv'.

We note that the equation is transformed into

v + xv' = F(v),   or   xv' = F(v) − v,   or   v'/(F(v) − v) = 1/x.

Hence an implicit solution is

∫ (1/(F(v) − v)) dv = ln |x| + C.

Example 1.5.2: Solve

x² y' = y² + xy,   y(1) = 1.

We put the equation into the form y' = (y/x)² + y/x. We substitute v = y/x to get the separable equation

xv' = v² + v − v = v²,

which has a solution

∫ (1/v²) dv = ln |x| + C,
−1/v = ln |x| + C,
v = −1/(ln |x| + C).

We unsubstitute:

y/x = −1/(ln |x| + C),
y = −x/(ln |x| + C).

We want y(1) = 1, so

1 = y(1) = −1/(ln |1| + C) = −1/C.

Thus C = −1, and the solution we are looking for is

y = −x/(ln |x| − 1).

1.5.4 Exercises
Hint: Answers need not always be in closed form.
Exercise 1.5.1: Solve y' + y(x² − 1) + xy⁶ = 0, with y(1) = 1.

Exercise 1.5.2: Solve 2y y' + 1 = y² + x, with y(0) = 1.

Exercise 1.5.3: Solve y' + xy = y⁴, with y(0) = 1.

Exercise 1.5.4: Solve y y' + x = √(x² + y²).

Exercise 1.5.5: Solve y' = (x + y − 1)².

Exercise 1.5.6: Solve y' = (x² − y²)/(xy), with y(1) = 2.

Exercise 1.5.101: Solve xy' + y + y² = 0, y(1) = 2.

Exercise 1.5.102: Solve xy' + y + x = 0, y(1) = 1.

Exercise 1.5.103: Solve y² y' = y³ − 3x, y(0) = 2.

Exercise 1.5.104: Solve 2y y' = e^{y² − x²} + 2x.
1.6 Autonomous equations
Note: 1 lecture, §2.2 in [EP], §2.5 in [BD]
Consider problems of the form

dx/dt = f(x),

where the derivative of solutions depends only on x (the dependent variable). Such equations are called autonomous equations. If we think of t as time, the naming comes from the fact that the equation is independent of time.

We return to the cooling coffee problem (Example 1.3.3). Newton's law of cooling says

dx/dt = k(A − x),

where x is the temperature, t is time, k is some positive constant, and A is the ambient temperature. See Figure 1.11 for an example with k = 0.3 and A = 5.

Note the solution x = A (in the figure x = 5). We call these constant solutions the equilibrium solutions. The points on the x-axis where f(x) = 0 are called critical points. The point x = A is a critical point. In fact, each critical point corresponds to an equilibrium solution. Note also, by looking at the graph, that the solution x = A is “stable” in that small perturbations in x do not lead to substantially different solutions as t grows. If we change the initial condition a little bit, then as t → ∞ we get x(t) → A. We call such a critical point stable. In this simple example it turns out that all solutions in fact go to A as t → ∞. If a critical point is not stable, we say it is unstable.

[Figure 1.11: The slope field and some solutions of x' = 0.3 (5 − x).]

[Figure 1.12: The slope field and some solutions of x' = 0.1 x (5 − x).]

Consider now the logistic equation

dx/dt = kx(M − x),
for some positive k and M. This equation is commonly used to model population if we know the limiting population M, that is, the maximum sustainable population. The logistic equation leads to less catastrophic predictions on world population than x' = kx. In the real world there is no such thing as negative population, but we will still consider negative x for the purposes of the math.

See Figure 1.12 on the preceding page for an example, x' = 0.1x(5 − x). There are two critical points, x = 0 and x = 5. The critical point at x = 5 is stable, while the critical point at x = 0 is unstable.

It is not necessary to find the exact solutions to talk about the long term behavior of the solutions. From the slope field above of x' = 0.1x(5 − x), we see that

lim_{t→∞} x(t) = 5 if x(0) > 0,
lim_{t→∞} x(t) = 0 if x(0) = 0,
lim_{t→∞} x(t) DNE or is −∞ if x(0) < 0.

Here DNE means “does not exist.” From just looking at the slope field we cannot quite decide what happens if x(0) < 0. It could be that the solution does not exist for t all the way to ∞. Think of the equation x' = x²; we have seen that solutions only exist for some finite period of time. The same can happen here. In our example equation above it turns out that the solution does not exist for all time, but to see that we would have to solve the equation. In any case, the solution does go to −∞, but it may get there rather quickly.

If we are interested only in the long term behavior of the solution, we would be doing unnecessary work if we solved the equation exactly. We could draw the slope field, but it is easier to just look at the phase diagram or phase portrait, which is a simple way to visualize the behavior of autonomous equations. In this case there is one dependent variable x. We draw the x-axis, we mark all the critical points, and then we draw arrows in between. Since x is the dependent variable, we draw the axis vertically, as it appears in the slope field diagrams above. If f(x) > 0, we draw an up arrow. If f(x) < 0, we draw a down arrow. To figure this out, we could just plug in some x between the critical points; f(x) will have the same sign at all x between two critical points as long as f(x) is continuous. For example, f(6) = −0.6 < 0, so f(x) < 0 for x > 5, and the arrow above x = 5 is a down arrow. Next, f(1) = 0.4 > 0, so f(x) > 0 whenever 0 < x < 5, and the arrow points up. Finally, f(−1) = −0.6 < 0, so f(x) < 0 when x < 0, and the arrow points down.
[Phase diagram: a vertical x-axis with critical points marked at x = 5 and x = 0, a down arrow above x = 5, an up arrow between 0 and 5, and a down arrow below x = 0.]
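The sign test is easy to automate; a tiny sketch in plain Python for f(x) = 0.1 x (5 − x), sampling one point below, between, and above the critical points:

```python
# Determine the phase-diagram arrows from the sign of f at sample points.
f = lambda x: 0.1 * x * (5 - x)
for x in [-1, 1, 6]:
    print(f"f({x}) = {f(x):+.1f}  ->  arrow {'up' if f(x) > 0 else 'down'}")
```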
Armed with the phase diagram, it is easy to sketch the solutions approximately: As time t moves from left to right, the graph of a solution goes up if the arrow is up, and it goes down if the arrow is down.

Exercise 1.6.1: Try sketching a few solutions simply from looking at the phase diagram. Check with the preceding graphs if you are getting the same type of curves.
Once we draw the phase diagram, we classify critical points as stable or unstable∗.

[Diagram: an unstable critical point with arrows pointing away from it, and a stable critical point with arrows pointing toward it.]
Since any mathematical model we cook up will only be an approximation to the real world, unstable points are generally bad news.

Let us think about the logistic equation with harvesting. Suppose an alien race really likes to eat humans. They keep a planet with humans on it and harvest the humans at a rate of h million humans per year. Suppose x is the number of humans in millions on the planet and t is time in years. Let M be the limiting population when no harvesting is done. The number k > 0 is a constant depending on how fast humans multiply. Our equation becomes

dx/dt = kx(M − x) − h.

We expand the right-hand side and set it to zero:

kx(M − x) − h = −kx² + kMx − h = 0.

Solving for the critical points, let us call them A and B, we get

A = (kM + √((kM)² − 4hk))/(2k),   B = (kM − √((kM)² − 4hk))/(2k).
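Here is a quick numeric check of these formulas (plain Python; the three values of h are our choice, matching the cases discussed next: distinct roots, a double root, and no real roots):

```python
# Critical points of kx(M - x) - h for M = 8, k = 0.1, and several h.
import math

k, M = 0.1, 8.0
for h in [1.0, 1.6, 2.0]:
    disc = (k * M)**2 - 4 * h * k
    if disc < 0:
        print(f"h = {h}: no real critical points")
    else:
        A = (k * M + math.sqrt(disc)) / (2 * k)
        B = (k * M - math.sqrt(disc)) / (2 * k)
        print(f"h = {h}: A = {A:.2f}, B = {B:.2f}")  # 6.45/1.55, then 4/4
```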
Exercise 1.6.2: Sketch a phase diagram for the different possibilities. Note that these possibilities are A > B, or A = B, or A and B both complex (i.e., no real solutions). Hint: Fix some simple k and M and then vary h.

For example, let M = 8 and k = 0.1. When h = 1, then A and B are distinct and positive. The slope field we get is in Figure 1.13 on the next page. As long as the population starts above B, which is approximately 1.55 million, then the population will not die out. It will in fact tend towards A ≈ 6.45 million. If ever some catastrophe happens and the population drops below B, humans will die out, and the fast food restaurant serving them will go out of business.
∗ Unstable points with one of the arrows pointing towards the critical point are sometimes called semistable.
[Figure 1.13: The slope field and some solutions of x' = 0.1 x (8 − x) − 1.]

[Figure 1.14: The slope field and some solutions of x' = 0.1 x (8 − x) − 1.6.]

When h = 1.6, then A = B = 4. There is only one critical point, and it is unstable. When the population starts above 4 million it will tend towards 4 million. If it ever drops below 4 million, humans will die out on the planet. This scenario is not one that we (as the human fast food proprietor) want to be in. A small perturbation of the equilibrium state and we are out of business. There is no room for error. See Figure 1.14.

Finally, if we are harvesting at 2 million humans per year, there are no critical points. The population will always plummet towards zero, no matter how well stocked the planet starts. See Figure 1.15.

[Figure 1.15: The slope field and some solutions of x' = 0.1 x (8 − x) − 2.]
1.6.1 Exercises

Exercise 1.6.3: Consider x' = x².

a) Draw the phase diagram, find the critical points, and mark them stable or unstable.

b) Sketch typical solutions of the equation.

c) Find lim_{t→∞} x(t) for the solution with the initial condition x(0) = −1.

Exercise 1.6.4: Consider x' = sin x.

a) Draw the phase diagram for −4π ≤ x ≤ 4π. On this interval mark the critical points stable or unstable.

b) Sketch typical solutions of the equation.

c) Find lim_{t→∞} x(t) for the solution with the initial condition x(0) = 1.

Exercise 1.6.5: Suppose f(x) is positive for 0 < x < 1, it is zero when x = 0 and x = 1, and it is negative for all other x.

a) Draw the phase diagram for x' = f(x), find the critical points, and mark them stable or unstable.

b) Sketch typical solutions of the equation.

c) Find lim_{t→∞} x(t) for the solution with the initial condition x(0) = 0.5.

Exercise 1.6.6: Start with the logistic equation dx/dt = kx(M − x). Suppose we modify our harvesting. That is, we will only harvest an amount proportional to the current population. In other words, we harvest hx per unit of time for some h > 0 (similar to the earlier example with h replaced with hx).

a) Construct the differential equation.

b) Show that if kM > h, then the equation is still logistic.

c) What happens when kM < h?

Exercise 1.6.7: A disease is spreading through the country. Let x be the number of people infected. Let the constant S be the number of people susceptible to infection. The infection rate dx/dt is proportional to the product of already infected people, x, and the number of susceptible but uninfected people, S − x.

a) Write down the differential equation.

b) Supposing x(0) > 0, that is, some people are infected at time t = 0, what is lim_{t→∞} x(t)?

c) Does the solution to part b) agree with your intuition? Why or why not?
Exercise 1.6.101: Let x' = (x − 1)(x − 2) x².

a) Sketch the phase diagram and find critical points.

b) Classify the critical points.

c) If x(0) = 0.5, then find lim_{t→∞} x(t).

Exercise 1.6.102: Let x' = e^{−x}.

a) Find and classify all critical points.

b) Find lim_{t→∞} x(t) given any initial condition.

Exercise 1.6.103: Assume that a population of fish in a lake satisfies dx/dt = kx(M − x). Now suppose that fish are continually added at A fish per unit of time.

a) Find the differential equation for x.

b) What is the new limiting population?

Exercise 1.6.104: Suppose dx/dt = (x − α)(x − β) for two numbers α < β.

a) Find the critical points, and classify them.

For b), c), d), find lim_{t→∞} x(t) based on the phase diagram.

b) x(0) < α,

c) α < x(0) < β,

d) β < x(0).
1.7 Numerical methods: Euler's method
Note: 1 lecture, can safely be skipped, §2.4 in [EP], §8.1 in [BD]
Unless f(x, y) is of a special form, it is generally very hard if not impossible to get a nice formula for the solution of the problem

y' = f(x, y),   y(x₀) = y₀.

If the equation can be solved in closed form, we should do that. But what if we have an equation that cannot be solved in closed form? What if we want to find the value of the solution at some particular x? Or perhaps we want to produce a graph of the solution to inspect the behavior. In this section we will learn about the basics of numerical approximation of solutions.
The simplest method for approximating a solution is Euler's method∗. It works as follows: Take x₀ and compute the slope k = f(x₀, y₀). The slope is the change in y per unit change in x. Follow the line for an interval of length h on the x-axis. Hence if y = y₀ at x₀, then we say that y₁ (the approximate value of y at x₁ = x₀ + h) is y₁ = y₀ + hk. Rinse, repeat! Let k = f(x₁, y₁), and then compute x₂ = x₁ + h, and y₂ = y₁ + hk. Now compute x₃ and y₃ using x₂ and y₂, etc. Consider the equation y' = y²/3, y(0) = 1, and h = 1. Then x₀ = 0 and y₀ = 1. We compute

x₁ = x₀ + h = 0 + 1 = 1,   y₁ = y₀ + h f(x₀, y₀) = 1 + 1 · 1/3 = 4/3 ≈ 1.333,
x₂ = x₁ + h = 1 + 1 = 2,   y₂ = y₁ + h f(x₁, y₁) = 4/3 + 1 · (4/3)²/3 = 52/27 ≈ 1.926.

We then draw an approximate graph of the solution by connecting the points (x₀, y₀), (x₁, y₁), (x₂, y₂), . . . . For the first two steps of the method see Figure 1.16 on the following page.
More abstractly, for any i = 0, 1, 2, 3, . . ., we compute

x_{i+1} = x_i + h,   y_{i+1} = y_i + h f(x_i, y_i).

The line segments we get are an approximate graph of the solution. Generally it is not exactly the solution. See Figure 1.17 on the next page for the plot of the real solution and the approximation.
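The whole procedure is only a few lines of code; a minimal sketch in Python (the function name is ours), applied to the running example:

```python
# Euler's method: repeat y <- y + h f(x, y), x <- x + h.
def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: y**2 / 3
print(euler(f, 0.0, 1.0, 1.0, 2))   # ≈ 1.926, the h = 1 value computed above
print(euler(f, 0.0, 1.0, 0.5, 4))   # ≈ 2.209, the h = 0.5 value from Table 1.1
```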
We continue with the equation y' = y²/3, y(0) = 1. Let us try to approximate y(2) using Euler's method. In Figures 1.16 and 1.17 we have graphically approximated y(2) with step size 1. With step size 1, we have y(2) ≈ 1.926. The real answer is 3. We are approximately 1.074 off. Let us halve the step size. Computing y₄ with h = 0.5, we find that y(2) ≈ 2.209, so an error of about 0.791. Table 1.1 gives the values computed for various step sizes.

∗ Named after the Swiss mathematician Leonhard Paul Euler (1707–1783). The correct pronunciation of the name sounds more like “oiler.”
[Figure 1.16: First two steps of Euler's method with h = 1 for the equation y' = y²/3 with initial conditions y(0) = 1.]

[Figure 1.17: Two steps of Euler's method (step size 1) and the exact solution for the equation y' = y²/3 with initial conditions y(0) = 1.]
Exercise 1.7.1: Solve this equation exactly and show that y(2) = 3.

The difference between the actual solution and the approximate solution is called the error. We usually talk about just the size of the error, and we do not care much about its sign. The point is, we usually do not know the real solution, so we only have a vague understanding of the error. If we knew the error exactly . . . what would be the point of doing the approximation?

h           Approximate y(2)   Error      Error/Previous error

1           1.92593            1.07407
0.5         2.20861            0.79139    0.73681
0.25        2.47250            0.52751    0.66656
0.125       2.68034            0.31966    0.60599
0.0625      2.82040            0.17960    0.56184
0.03125     2.90412            0.09588    0.53385
0.015625    2.95035            0.04965    0.51779
0.0078125   2.97472            0.02528    0.50913

Table 1.1: Euler's method approximation of y(2), where y' = y²/3, y(0) = 1.

Notice that, except for the first few times, every time we halved the interval the error approximately halved. This halving of the error is a general feature of Euler's method, as it is a first order method. There exists an improved Euler method, see the exercises, which is a second order method. A second order method reduces the error to approximately one
quarter every time we halve the interval; the meaning of “second” order is the squaring in 1/4 = 1/2 × 1/2 = (1/2)².

To get the error to be within 0.1 of the answer we had to already do 64 steps. To get it to within 0.01 we would have to halve another three or four times, meaning doing 512 to 1024 steps. That is quite a bit to do by hand. The improved Euler method from the exercises should quarter the error every time we halve the interval, so we would have to do approximately half as many “halvings” to get the same error. This reduction can be a big deal. With 10 halvings (starting at h = 1) we have 1024 steps, whereas with 5 halvings we only have to do 32 steps, assuming that the error was comparable to start with. A computer may not care about this difference for a problem this simple, but suppose each step takes a second to compute (the function may be substantially more difficult to compute than y²/3). Then the difference is 32 seconds versus about 17 minutes. We are not being altogether fair; a second order method would probably double the time to do each step. Even so, it is 1 minute versus 17 minutes. Next, suppose that we have to repeat such a calculation for different parameters a thousand times. You get the idea.

Note that in practice we do not know how large the error is! How do we know what is the right step size? Well, essentially we keep halving the interval, and if we are lucky, we can estimate the error from a few of these calculations and the assumption that the error goes down by a factor of one half each time (if we are using standard Euler).

Exercise 1.7.2: In the table above, suppose you do not know the error. Take the approximate values of the function in the last two lines, and assume that the error goes down by a factor of 2. Can you estimate the error in the last line from this? Does it (approximately) agree with the table? Now do it for the first two rows. Does this agree with the table?
Let us talk a little bit more about the example y' = y²/3, y(0) = 1. Suppose that instead of the value y(2) we wish to find y(3). The results of this effort are listed in Table 1.2 for successive halvings of h. What is going on here? Well, you should solve the equation exactly, and you will notice that the solution does not exist at x = 3. In fact, the solution goes to infinity when you approach x = 3.

h           Approximate y(3)

1           3.16232
0.5         4.54329
0.25        6.86079
0.125       10.80321
0.0625      17.59893
0.03125     29.46004
0.015625    50.40121
0.0078125   87.75769

Table 1.2: Attempts to use Euler's method to approximate y(3), where y' = y²/3, y(0) = 1.
Another case where things go bad is if the solution oscillates wildly near some point.
The solution may exist at all points, but even a much better numerical method than
Euler would need an insanely small step size to approximate the solution with reasonable
precision. And computers might not be able to easily handle such a small step size.
In real applications we would not use a simple method such as Euler’s. The simplest
method that would probably be used in a real application is the standard Runge–Kutta
method (see exercises). That is a fourth order method, meaning that if we halve the interval,
the error generally goes down by a factor of 16 (it is fourth order as 1/16 = 1/2 × 1/2 × 1/2 × 1/2).
Choosing the right method to use and the right step size can be very tricky. There are
several competing factors to consider.
• Computational time: Each step takes computer time. Even if the function ๐‘“ is simple
to compute, we do it many times over. Large step size means faster computation, but
perhaps not the right precision.
• Roundoff errors: Computers only compute with a certain number of significant
digits. Errors introduced by rounding numbers off during our computations become
noticeable when the step size becomes too small relative to the quantities we are
working with. So reducing step size may in fact make errors worse. There is a certain
optimum step size such that the precision increases as we approach it, but then starts
getting worse as we make our step size smaller still. Trouble is: this optimum may be
hard to find.
• Stability: Certain equations may be numerically unstable. What may happen is that
the numbers never seem to stabilize no matter how many times we halve the interval.
We may need a ridiculously small interval size, which may not be practical due to
roundoff errors or computational time considerations. Such problems are sometimes
called stiff . In the worst case, the numerical computations might be giving us bogus
numbers that look like a correct answer. Just because the numbers seem to have
stabilized after successive halving, does not mean that we must have the right answer.
We have seen just the beginnings of the challenges that appear in real applications.
Numerical approximation of solutions to differential equations is an active research area
for engineers and mathematicians. For example, the general purpose method used for the
ODE solver in Matlab and Octave (as of this writing) is a method that appeared in the
literature only in the 1980s.
1.7.1 Exercises

Exercise 1.7.3: Consider dx/dt = (2t − x)², x(0) = 2. Use Euler's method with step size h = 0.5 to approximate x(1).

Exercise 1.7.4: Consider dx/dt = t − x, x(0) = 1.

a) Use Euler's method with step sizes h = 1, 1/2, 1/4, 1/8 to approximate x(1).

b) Solve the equation exactly.

c) Describe what happens to the errors for each h you used. That is, find the factor by which the error changed each time you halved the interval.

Exercise 1.7.5: Approximate the value of e by looking at the initial value problem y' = y with y(0) = 1 and approximating y(1) using Euler's method with a step size of 0.2.

Exercise 1.7.6: Example of numerical instability: Take y' = −5y, y(0) = 1. We know that the solution should decay to zero as x grows. Using Euler's method, start with h = 1 and compute y₁, y₂, y₃, y₄ to try to approximate y(4). What happened? Now halve the interval. Keep halving the interval and approximating y(4) until the numbers you are getting start to stabilize (that is, until they start going towards zero). Note: You might want to use a calculator.
The simplest method used in practice is the Runge–Kutta method. Consider dy/dx = f(x, y), y(x₀) = y₀, and a step size h. Everything is the same as in Euler's method, except the computation of y_{i+1} and x_{i+1}:

k₁ = f(x_i, y_i),
k₂ = f(x_i + h/2, y_i + k₁ (h/2)),
k₃ = f(x_i + h/2, y_i + k₂ (h/2)),
k₄ = f(x_i + h, y_i + k₃ h),

x_{i+1} = x_i + h,   y_{i+1} = y_i + ((k₁ + 2k₂ + 2k₃ + k₄)/6) h.
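For comparison with the Euler sketch earlier, here is one Runge–Kutta step written out in Python (a sketch; the function name is ours). Applied to y' = y, y(0) = 1, a single step of size h = 1 already approximates e quite well:

```python
# One step of the standard (fourth order) Runge-Kutta method.
def rk4_step(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x + h / 2, y + k1 * h / 2)
    k3 = f(x + h / 2, y + k2 * h / 2)
    k4 = f(x + h, y + k3 * h)
    return x + h, y + (k1 + 2 * k2 + 2 * k3 + k4) * h / 6

x, y = rk4_step(lambda x, y: y, 0.0, 1.0, 1.0)
print(y)   # 2.70833..., versus e ≈ 2.71828
```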
Exercise 1.7.7: Consider dy/dx = yx², y(0) = 1.

a) Use Runge–Kutta (see above) with step sizes h = 1 and h = 1/2 to approximate y(1).

b) Use Euler's method with h = 1 and h = 1/2.

c) Solve exactly, find the exact value of y(1), and compare.

Exercise 1.7.101: Let x' = sin(xt), and x(0) = 1. Approximate x(1) using Euler's method with step sizes 1, 0.5, 0.25. Use a calculator and compute up to 4 decimal digits.

Exercise 1.7.102: Let x' = 2t, and x(0) = 0.

a) Approximate x(4) using Euler's method with step sizes 4, 2, and 1.

b) Solve exactly, and compute the errors.

c) Compute the factor by which the errors changed.

Exercise 1.7.103: Let x' = x e^{xt+1}, and x(0) = 0.

a) Approximate x(4) using Euler's method with step sizes 4, 2, and 1.

b) Guess an exact solution based on part a) and compute the errors.
There is a simple way to improve Euler's method to make it a second order method by doing just one extra step. Consider dy/dx = f(x, y), y(x₀) = y₀, and a step size h. What we do is pretend we compute the next step as in Euler: that is, we start with (x_i, y_i), we compute a slope k₁ = f(x_i, y_i), and then look at the point (x_i + h, y_i + k₁h). Instead of letting our new point be (x_i + h, y_i + k₁h), we compute the slope at that point, call it k₂, and then take the average of k₁ and k₂, hoping that the average is going to be closer to the actual slope on the interval from x_i to x_i + h. And we are correct: if we halve the step, the error should go down by a factor of 2² = 4. To summarize, the setup is the same as for regular Euler, except the computation of y_{i+1} and x_{i+1}:

k₁ = f(x_i, y_i),
k₂ = f(x_i + h, y_i + k₁h),
x_{i+1} = x_i + h,   y_{i+1} = y_i + ((k₁ + k₂)/2) h.

Exercise 1.7.104: Consider dy/dx = x + y, y(0) = 1.

a) Use the improved Euler's method (see above) with step sizes h = 1/4 and h = 1/8 to approximate y(1).

b) Use Euler's method with h = 1/4 and h = 1/8.

c) Solve exactly, find the exact value of y(1).

d) Compute the errors, and the factors by which the errors changed.
1.8 Exact equations
Note: 1–2 lectures, can safely be skipped, §1.6 in [EP], §2.6 in [BD]
Another type of equation that comes up quite often in physics and engineering is an exact equation. Suppose F(x, y) is a function of two variables, which we call the potential function. The naming should suggest potential energy, or electric potential. Exact equations and potential functions appear when there is a conservation law at play, such as conservation of energy. Let us make up a simple example. Let

F(x, y) = x² + y².

We are interested in the lines of constant energy, that is, lines where the energy is conserved; we want curves where F(x, y) = C, for some constant C. In our example, the curves x² + y² = C are circles. See Figure 1.18.

We take the total derivative of F:

dF = (∂F/∂x) dx + (∂F/∂y) dy.

For convenience, we will make use of the notation F_x = ∂F/∂x and F_y = ∂F/∂y. In our example,

dF = 2x dx + 2y dy.

[Figure 1.18: Solutions to F(x, y) = x² + y² = C for various C.]
We apply the total derivative to ๐น(๐‘ฅ, ๐‘ฆ) = ๐ถ,
to find the differential equation ๐‘‘๐น = 0. The differential equation we obtain in such a way
has the form
๐‘‘๐‘ฆ
๐‘€ ๐‘‘๐‘ฅ + ๐‘ ๐‘‘๐‘ฆ = 0,
or
๐‘€+๐‘
= 0.
๐‘‘๐‘ฅ
An equation of this form is called exact if it was obtained as ๐‘‘๐น = 0 for some potential
function ๐น. In our simple example, we obtain the equation
2๐‘ฅ ๐‘‘๐‘ฅ + 2๐‘ฆ ๐‘‘๐‘ฆ = 0,
or
2๐‘ฅ + 2๐‘ฆ
๐‘‘๐‘ฆ
= 0.
๐‘‘๐‘ฅ
Since we obtained this equation by differentiating ๐‘ฅ² + ๐‘ฆ² = ๐ถ, the equation is exact. We often wish to solve for ๐‘ฆ in terms of ๐‘ฅ. In our example,
๐‘ฆ = ±√(๐ถ² − ๐‘ฅ²).
An interpretation of the setup is that at each point ๐‘ฃ⃗ = (๐‘€, ๐‘) is a vector in the plane, that is, a direction and a magnitude. As ๐‘€ and ๐‘ are functions of (๐‘ฅ, ๐‘ฆ), we have a vector field. The particular vector field ๐‘ฃ⃗ that comes from an exact equation is a so-called conservative vector field, that is, a vector field that comes with a potential function ๐น(๐‘ฅ, ๐‘ฆ), such that
๐‘ฃ⃗ = (๐œ•๐น/๐œ•๐‘ฅ, ๐œ•๐น/๐œ•๐‘ฆ).
Let ๐›พ be a path in the plane starting at (๐‘ฅ1 , ๐‘ฆ1 ) and ending at (๐‘ฅ 2 , ๐‘ฆ2 ). If we think of ๐‘ฃ® as
force, then the work required to move along ๐›พ is
∫
๐›พ
๐‘ฃ® (®๐‘Ÿ ) · ๐‘‘®๐‘Ÿ =
∫
๐›พ
๐‘€ ๐‘‘๐‘ฅ + ๐‘ ๐‘‘๐‘ฆ = ๐น(๐‘ฅ2 , ๐‘ฆ2 ) − ๐น(๐‘ฅ1 , ๐‘ฆ1 ).
That is, the work done only depends on endpoints, that is where we start and where we
end. For example, suppose ๐น is gravitational potential. The derivative of ๐น given by ๐‘ฃ® is
the gravitational force. What we are saying is that the work required to move a heavy box
from the ground floor to the roof, only depends on the change in potential energy. That
is, the work done is the same no matter what path we took; if we took the stairs or the
elevator. Although if we took the elevator, the elevator is doing the work for us. The curves
๐น(๐‘ฅ, ๐‘ฆ) = ๐ถ are those where no work need be done, such as the heavy box sliding along
without accelerating or braking on a perfectly flat roof, on a cart with incredibly well-oiled
wheels.
An exact equation is a conservative vector field, and the implicit solution of this equation
is the potential function.
1.8.1 Solving exact equations
Now you, the reader, should ask: Where did we solve a differential equation? Well, in
applications we generally know ๐‘€ and ๐‘, but we do not know ๐น. That is, we may have
just started with 2๐‘ฅ + 2๐‘ฆ ๐‘‘๐‘ฆ/๐‘‘๐‘ฅ = 0, or perhaps even
๐‘ฅ + ๐‘ฆ ๐‘‘๐‘ฆ/๐‘‘๐‘ฅ = 0.
It is up to us to find some potential ๐น that works. Many different ๐น will work; adding a constant to ๐น does not change the equation. Once we have a potential function ๐น, the equation ๐น(๐‘ฅ, ๐‘ฆ(๐‘ฅ)) = ๐ถ gives an implicit solution of the ODE.
Example 1.8.1: Let us find the general solution to 2๐‘ฅ + 2๐‘ฆ ๐‘‘๐‘ฆ/๐‘‘๐‘ฅ = 0. Forget we knew what ๐น was.
If we know that this is an exact equation, we start looking for a potential function ๐น.
We have ๐‘€ = 2๐‘ฅ and ๐‘ = 2๐‘ฆ. If ๐น exists, it must be such that ๐น๐‘ฅ (๐‘ฅ, ๐‘ฆ) = 2๐‘ฅ. Integrate in
the ๐‘ฅ variable to find
๐น(๐‘ฅ, ๐‘ฆ) = ๐‘ฅ 2 + ๐ด(๐‘ฆ),
(1.5)
for some function ๐ด(๐‘ฆ). The function ๐ด is the “constant of integration”, though it is only
constant as far as ๐‘ฅ is concerned, and may still depend on ๐‘ฆ. Now differentiate (1.5) in ๐‘ฆ
and set it equal to ๐‘, which is what ๐น ๐‘ฆ is supposed to be:
2๐‘ฆ = ๐น ๐‘ฆ (๐‘ฅ, ๐‘ฆ) = ๐ด0(๐‘ฆ).
Integrating, we find ๐ด(๐‘ฆ) = ๐‘ฆ 2 . We could add a constant of integration if we wanted to,
but there is no need. We found ๐น(๐‘ฅ, ๐‘ฆ) = ๐‘ฅ 2 + ๐‘ฆ 2 . Next for a constant ๐ถ, we solve
๐น ๐‘ฅ, ๐‘ฆ(๐‘ฅ) = ๐ถ.
√
for ๐‘ฆ in terms of ๐‘ฅ. In this case, we obtain ๐‘ฆ = ± ๐ถ 2 − ๐‘ฅ 2 as we did before.
Exercise 1.8.1: Why did we not need to add a constant of integration when integrating ๐ด′(๐‘ฆ) = 2๐‘ฆ?
Add a constant of integration, say 3, and see what ๐น you get. What is the difference from what we
got above, and why does it not matter?
The procedure, once we know that the equation is exact, is:
(i) Integrate ๐น๐‘ฅ = ๐‘€ in ๐‘ฅ resulting in ๐น(๐‘ฅ, ๐‘ฆ) = something + ๐ด(๐‘ฆ).
(ii) Differentiate this ๐น in ๐‘ฆ, and set that equal to ๐‘, so that we may find ๐ด(๐‘ฆ) by
integration.
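If you like, SymPy can carry out this two-step procedure; here is a minimal sketch of ours for Example 1.8.1 (the variable names, and the ๐‘€๐‘ฆ = ๐‘๐‘ฅ check that is justified a bit further below, are our own additions):

    import sympy as sp

    x, y = sp.symbols('x y')
    M, N = 2*x, 2*y

    # The equation is exact precisely when M_y = N_x (see the Poincare lemma below).
    assert sp.diff(M, y) == sp.diff(N, x)

    F = sp.integrate(M, x)                  # step (i): x**2, up to the unknown A(y)
    A = sp.integrate(N - sp.diff(F, y), y)  # step (ii): A'(y) = N - F_y, then integrate
    print(F + A)                            # x**2 + y**2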
The procedure can also be done by first integrating in ๐‘ฆ and then differentiating in ๐‘ฅ. Pretty
easy huh? Let’s try this again.
Example 1.8.2: Consider now 2๐‘ฅ + ๐‘ฆ + ๐‘ฅ๐‘ฆ ๐‘‘๐‘ฆ/๐‘‘๐‘ฅ = 0.
OK, so ๐‘€ = 2๐‘ฅ + ๐‘ฆ and ๐‘ = ๐‘ฅ๐‘ฆ. We try to proceed as before. Suppose ๐น exists. Then
๐น๐‘ฅ (๐‘ฅ, ๐‘ฆ) = 2๐‘ฅ + ๐‘ฆ. We integrate:
๐น(๐‘ฅ, ๐‘ฆ) = ๐‘ฅ 2 + ๐‘ฅ๐‘ฆ + ๐ด(๐‘ฆ)
for some function ๐ด(๐‘ฆ). Differentiate in ๐‘ฆ and set equal to ๐‘:
๐‘ = ๐‘ฅ๐‘ฆ = ๐น ๐‘ฆ (๐‘ฅ, ๐‘ฆ) = ๐‘ฅ + ๐ด0(๐‘ฆ).
But there is no way to satisfy this requirement! The function ๐‘ฅ๐‘ฆ cannot be written as ๐‘ฅ
plus a function of ๐‘ฆ. The equation is not exact; no potential function ๐น exists.
Is there an easier way to check for the existence of ๐น, other than failing in trying to find
it? Turns out there is. Suppose ๐‘€ = ๐น๐‘ฅ and ๐‘ = ๐น ๐‘ฆ . Then as long as the second derivatives
are continuous,
๐œ•๐‘€
๐œ•2 ๐น
๐œ•2 ๐น
๐œ•๐‘
=
=
=
.
๐œ•๐‘ฆ
๐œ•๐‘ฆ๐œ•๐‘ฅ ๐œ•๐‘ฅ๐œ•๐‘ฆ
๐œ•๐‘ฅ
Let us state it as a theorem. Usually this is called the Poincaré Lemma∗ .
Theorem 1.8.1 (Poincaré). If ๐‘€ and ๐‘ are continuously differentiable functions of (๐‘ฅ, ๐‘ฆ), and ๐œ•๐‘€/๐œ•๐‘ฆ = ๐œ•๐‘/๐œ•๐‘ฅ, then near any point there is a function ๐น(๐‘ฅ, ๐‘ฆ) such that ๐‘€ = ๐œ•๐น/๐œ•๐‘ฅ and ๐‘ = ๐œ•๐น/๐œ•๐‘ฆ.
∗ Named for the French polymath Jules Henri Poincaré (1854–1912).
The theorem doesn’t give us a global ๐น defined everywhere. In general, we can only
find the potential locally, near some initial point. By this time, we have come to expect this
from differential equations.
Let us return to Example 1.8.2 where ๐‘€ = 2๐‘ฅ + ๐‘ฆ and ๐‘ = ๐‘ฅ๐‘ฆ. Notice ๐‘€ ๐‘ฆ = 1 and
๐‘๐‘ฅ = ๐‘ฆ, which are clearly not equal. The equation is not exact.
Example 1.8.3: Solve
๐‘‘๐‘ฆ/๐‘‘๐‘ฅ = (−2๐‘ฅ − ๐‘ฆ)/(๐‘ฅ − 1),  ๐‘ฆ(0) = 1.
We write the equation as
(2๐‘ฅ + ๐‘ฆ) + (๐‘ฅ − 1) ๐‘‘๐‘ฆ/๐‘‘๐‘ฅ = 0,
so ๐‘€ = 2๐‘ฅ + ๐‘ฆ and ๐‘ = ๐‘ฅ − 1. Then
๐‘€ ๐‘ฆ = 1 = ๐‘๐‘ฅ .
The equation is exact. Integrating ๐‘€ in ๐‘ฅ, we find
๐น(๐‘ฅ, ๐‘ฆ) = ๐‘ฅ 2 + ๐‘ฅ๐‘ฆ + ๐ด(๐‘ฆ).
Differentiating in ๐‘ฆ and setting to ๐‘, we find
๐‘ฅ − 1 = ๐‘ฅ + ๐ด0(๐‘ฆ).
So ๐ด0(๐‘ฆ) = −1, and ๐ด(๐‘ฆ) = −๐‘ฆ will work. Take ๐น(๐‘ฅ, ๐‘ฆ) = ๐‘ฅ 2 + ๐‘ฅ๐‘ฆ − ๐‘ฆ. We wish to solve
๐‘ฅ 2 + ๐‘ฅ ๐‘ฆ − ๐‘ฆ = ๐ถ. First let us find ๐ถ. As ๐‘ฆ(0) = 1 then ๐น(0, 1) = ๐ถ. Therefore 02 + 0 × 1 − 1 = ๐ถ,
so ๐ถ = −1. Now we solve ๐‘ฅ 2 + ๐‘ฅ๐‘ฆ − ๐‘ฆ = −1 for ๐‘ฆ to get
๐‘ฆ = (−๐‘ฅ² − 1)/(๐‘ฅ − 1).
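As a quick sanity check of ours (not part of the text), SymPy confirms that this function satisfies both the equation and the initial condition:

    import sympy as sp

    x = sp.symbols('x')
    y = (-x**2 - 1) / (x - 1)

    residual = sp.diff(y, x) - (-2*x - y) / (x - 1)  # should vanish identically
    print(sp.simplify(residual))                     # 0
    print(y.subs(x, 0))                              # 1, matching y(0) = 1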
Example 1.8.4: Solve
−(๐‘ฆ/(๐‘ฅ² + ๐‘ฆ²)) ๐‘‘๐‘ฅ + (๐‘ฅ/(๐‘ฅ² + ๐‘ฆ²)) ๐‘‘๐‘ฆ = 0,  ๐‘ฆ(1) = 2.
We leave to the reader to check that ๐‘€ ๐‘ฆ = ๐‘๐‘ฅ .
This vector field (๐‘€, ๐‘) is not conservative if considered as a vector field of the entire
plane minus the origin. The problem is that if the curve ๐›พ is a circle around the origin, say
starting at (1, 0) and ending at (1, 0) going counterclockwise, then if ๐น existed we would
expect
0 = ๐น(1, 0) − ๐น(1, 0) =
∫
๐›พ
๐น๐‘ฅ ๐‘‘๐‘ฅ + ๐น ๐‘ฆ ๐‘‘๐‘ฆ =
∫
๐›พ
−๐‘ฆ
๐‘ฅ
๐‘‘๐‘ฅ + 2
๐‘‘๐‘ฆ = 2๐œ‹.
2
+๐‘ฆ
๐‘ฅ + ๐‘ฆ2
๐‘ฅ2
That is nonsense! We leave the computation of the path integral to the interested reader, or
you can consult your multivariable calculus textbook. So there is no potential function ๐น
defined everywhere outside the origin (0, 0).
If we think back to the theorem, it does not guarantee such a function anyway. It only
guarantees a potential function locally, that is only in some region near the initial point. As
๐‘ฆ(1) = 2 we start at the point (1, 2). Considering ๐‘ฅ > 0 and integrating ๐‘€ in ๐‘ฅ or ๐‘ in ๐‘ฆ,
we find
๐น(๐‘ฅ, ๐‘ฆ) = arctan(๐‘ฆ/๐‘ฅ).
The implicit solution is arctan(๐‘ฆ/๐‘ฅ) = ๐ถ. Solving, ๐‘ฆ = tan(๐ถ)๐‘ฅ. That is, the solution is a straight line. Solving ๐‘ฆ(1) = 2 gives us that tan(๐ถ) = 2, and so ๐‘ฆ = 2๐‘ฅ is the desired solution. See Figure 1.19, and note that the solution only exists for ๐‘ฅ > 0.
Figure 1.19: Solution to −(๐‘ฆ/(๐‘ฅ² + ๐‘ฆ²)) ๐‘‘๐‘ฅ + (๐‘ฅ/(๐‘ฅ² + ๐‘ฆ²)) ๐‘‘๐‘ฆ = 0, ๐‘ฆ(1) = 2, with initial point marked.
Example 1.8.5: Solve
๐‘ฅ² + ๐‘ฆ² + 2๐‘ฆ(๐‘ฅ + 1) ๐‘‘๐‘ฆ/๐‘‘๐‘ฅ = 0.
The reader should check that this equation is exact. Let ๐‘€ = ๐‘ฅ² + ๐‘ฆ² and ๐‘ = 2๐‘ฆ(๐‘ฅ + 1). We follow the procedure for exact equations:
๐น(๐‘ฅ, ๐‘ฆ) =
1 3
๐‘ฅ + ๐‘ฅ๐‘ฆ 2 + ๐ด(๐‘ฆ),
3
and
2๐‘ฆ(๐‘ฅ + 1) = 2๐‘ฅ๐‘ฆ + ๐ด0(๐‘ฆ).
Therefore ๐ด0(๐‘ฆ) = 2๐‘ฆ or ๐ด(๐‘ฆ) = ๐‘ฆ 2 and ๐น(๐‘ฅ, ๐‘ฆ) = 31 ๐‘ฅ 3 + ๐‘ฅ๐‘ฆ 2 + ๐‘ฆ 2 . We try to solve ๐น(๐‘ฅ, ๐‘ฆ) = ๐ถ.
We easily solve for ๐‘ฆ 2 and then just take the square root:
๐ถ − (1/3)๐‘ฅ 3
๐‘ฆ2 =
,
๐‘ฅ+1
๐‘‘๐‘ฆ
r
so
๐‘ฆ=±
๐ถ − (1/3)๐‘ฅ 3
.
๐‘ฅ+1
When ๐‘ฅ = −1, the term in front of ๐‘‘๐‘ฅ vanishes. You can also see that our solution is not
valid in that case. However, one could in that case try to solve for ๐‘ฅ in terms of ๐‘ฆ starting
from the implicit solution 13 ๐‘ฅ 3 + ๐‘ฅ๐‘ฆ 2 + ๐‘ฆ 2 = ๐ถ. The solution is somewhat messy and we
leave it as implicit.
1.8.2 Integrating factors
Sometimes an equation ๐‘€ ๐‘‘๐‘ฅ + ๐‘ ๐‘‘๐‘ฆ = 0 is not exact, but it can be made exact by multiplying
with a function ๐‘ข(๐‘ฅ, ๐‘ฆ). That is, perhaps for some nonzero function ๐‘ข(๐‘ฅ, ๐‘ฆ),
๐‘ข(๐‘ฅ, ๐‘ฆ)๐‘€(๐‘ฅ, ๐‘ฆ) ๐‘‘๐‘ฅ + ๐‘ข(๐‘ฅ, ๐‘ฆ)๐‘(๐‘ฅ, ๐‘ฆ) ๐‘‘๐‘ฆ = 0
is exact. Any solution to this new equation is also a solution to ๐‘€ ๐‘‘๐‘ฅ + ๐‘ ๐‘‘๐‘ฆ = 0.
In fact, a linear equation
๐‘‘๐‘ฆ/๐‘‘๐‘ฅ + ๐‘(๐‘ฅ)๐‘ฆ = ๐‘“ (๐‘ฅ),  or  (๐‘(๐‘ฅ)๐‘ฆ − ๐‘“ (๐‘ฅ)) ๐‘‘๐‘ฅ + ๐‘‘๐‘ฆ = 0,
is always such an equation. Let ๐‘Ÿ(๐‘ฅ) = ๐‘’^{∫๐‘(๐‘ฅ) ๐‘‘๐‘ฅ} be the integrating factor for a linear equation. Multiply the equation by ๐‘Ÿ(๐‘ฅ) and write it in the form of ๐‘€ + ๐‘ ๐‘‘๐‘ฆ/๐‘‘๐‘ฅ = 0:
๐‘Ÿ(๐‘ฅ)๐‘(๐‘ฅ)๐‘ฆ − ๐‘Ÿ(๐‘ฅ) ๐‘“ (๐‘ฅ) + ๐‘Ÿ(๐‘ฅ) ๐‘‘๐‘ฆ/๐‘‘๐‘ฅ = 0.
Then ๐‘€ = ๐‘Ÿ(๐‘ฅ)๐‘(๐‘ฅ)๐‘ฆ − ๐‘Ÿ(๐‘ฅ) ๐‘“ (๐‘ฅ), so ๐‘€ ๐‘ฆ = ๐‘Ÿ(๐‘ฅ)๐‘(๐‘ฅ), while ๐‘ = ๐‘Ÿ(๐‘ฅ), so ๐‘๐‘ฅ = ๐‘Ÿ 0(๐‘ฅ) = ๐‘Ÿ(๐‘ฅ)๐‘(๐‘ฅ).
In other words, we have an exact equation. Integrating factors for linear functions are just
a special case of integrating factors for exact equations.
But how do we find the integrating factor ๐‘ข? Well, given an equation
๐‘€ ๐‘‘๐‘ฅ + ๐‘ ๐‘‘๐‘ฆ = 0,
๐‘ข should be a function such that
(๐œ•/๐œ•๐‘ฆ)[๐‘ข๐‘€] = ๐‘ข๐‘ฆ ๐‘€ + ๐‘ข๐‘€๐‘ฆ = (๐œ•/๐œ•๐‘ฅ)[๐‘ข๐‘] = ๐‘ข๐‘ฅ ๐‘ + ๐‘ข๐‘๐‘ฅ .
Therefore,
(๐‘€ ๐‘ฆ − ๐‘๐‘ฅ )๐‘ข = ๐‘ข๐‘ฅ ๐‘ − ๐‘ข ๐‘ฆ ๐‘€.
At first it may seem we replaced one differential equation by another. True, but all hope is
not lost.
A strategy that often works is to look for a ๐‘ข that is a function of ๐‘ฅ alone, or a function of ๐‘ฆ alone. If ๐‘ข is a function of ๐‘ฅ alone, that is ๐‘ข(๐‘ฅ), then we write ๐‘ข′(๐‘ฅ) instead of ๐‘ข๐‘ฅ , and ๐‘ข๐‘ฆ is just zero. Then
๐‘€ ๐‘ฆ − ๐‘๐‘ฅ
๐‘ข = ๐‘ข0.
๐‘
๐‘€ −๐‘
In particular, ๐‘ฆ๐‘ ๐‘ฅ ought to be a function of ๐‘ฅ alone (not depend on ๐‘ฆ). If so, then we
have a linear equation
๐‘€ ๐‘ฆ − ๐‘๐‘ฅ
๐‘ข0 −
๐‘ข = 0.
๐‘
69
1.8. EXACT EQUATIONS
๐‘€ ๐‘ฆ −๐‘๐‘ฅ
,
๐‘
๐‘ƒ(๐‘ฅ) ๐‘‘๐‘ฅ
Letting ๐‘ƒ(๐‘ฅ)
=
∫
we solve using the standard integrating factor method, to find
๐‘ข(๐‘ฅ) = ๐ถ๐‘’
. The constant in the solution
is not relevant, we need any nonzero
∫
๐‘ƒ(๐‘ฅ) ๐‘‘๐‘ฅ
solution, so we take ๐ถ = 1. Then ๐‘ข(๐‘ฅ) = ๐‘’
is the integrating factor.
Similarly we could try a function of the form ๐‘ข(๐‘ฆ). Then
((๐‘€๐‘ฆ − ๐‘๐‘ฅ)/๐‘€) ๐‘ข = −๐‘ข′.
In particular, (๐‘€๐‘ฆ − ๐‘๐‘ฅ)/๐‘€ ought to be a function of ๐‘ฆ alone. If so, then we have a linear equation
๐‘ข′ + ((๐‘€๐‘ฆ − ๐‘๐‘ฅ)/๐‘€) ๐‘ข = 0.
Letting ๐‘„(๐‘ฆ) = (๐‘€๐‘ฆ − ๐‘๐‘ฅ)/๐‘€, we find ๐‘ข(๐‘ฆ) = ๐ถ๐‘’^{−∫๐‘„(๐‘ฆ) ๐‘‘๐‘ฆ}. We take ๐ถ = 1. So ๐‘ข(๐‘ฆ) = ๐‘’^{−∫๐‘„(๐‘ฆ) ๐‘‘๐‘ฆ} is the integrating factor.
Example 1.8.6: Solve
(๐‘ฅ² + ๐‘ฆ²)/(๐‘ฅ + 1) + 2๐‘ฆ ๐‘‘๐‘ฆ/๐‘‘๐‘ฅ = 0.
Let ๐‘€ = (๐‘ฅ² + ๐‘ฆ²)/(๐‘ฅ + 1) and ๐‘ = 2๐‘ฆ. Compute
๐‘€๐‘ฆ − ๐‘๐‘ฅ = 2๐‘ฆ/(๐‘ฅ + 1) − 0 = 2๐‘ฆ/(๐‘ฅ + 1).
As this is not zero, the equation is not exact. We notice
๐‘ƒ(๐‘ฅ) = (๐‘€๐‘ฆ − ๐‘๐‘ฅ)/๐‘ = (2๐‘ฆ/(๐‘ฅ + 1)) (1/(2๐‘ฆ)) = 1/(๐‘ฅ + 1)
is a function of ๐‘ฅ alone. We compute the integrating factor
๐‘’^{∫๐‘ƒ(๐‘ฅ) ๐‘‘๐‘ฅ} = ๐‘’^{ln(๐‘ฅ+1)} = ๐‘ฅ + 1.
We multiply our given equation by (๐‘ฅ + 1) to obtain
๐‘ฅ² + ๐‘ฆ² + 2๐‘ฆ(๐‘ฅ + 1) ๐‘‘๐‘ฆ/๐‘‘๐‘ฅ = 0,
which is an exact equation that we solved in Example 1.8.5. The solution was
๐‘ฆ = ±√((๐ถ − (1/3)๐‘ฅ³)/(๐‘ฅ + 1)).
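A small SymPy sketch of ours reproduces this integrating-factor computation:

    import sympy as sp

    x, y = sp.symbols('x y', positive=True)
    M = (x**2 + y**2) / (x + 1)
    N = 2*y

    P = sp.simplify((sp.diff(M, y) - sp.diff(N, x)) / N)   # 1/(x + 1), x alone
    u = sp.simplify(sp.exp(sp.integrate(P, x)))            # the integrating factor
    print(u)                                               # x + 1
    print(sp.simplify(sp.diff(u*M, y) - sp.diff(u*N, x)))  # 0, so now exact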
Example 1.8.7: Solve
๐‘ฆ² + (๐‘ฅ๐‘ฆ + 1) ๐‘‘๐‘ฆ/๐‘‘๐‘ฅ = 0.
First compute
๐‘€ ๐‘ฆ − ๐‘๐‘ฅ = 2๐‘ฆ − ๐‘ฆ = ๐‘ฆ.
As this is not zero, the equation is not exact. We observe
๐‘„(๐‘ฆ) = (๐‘€๐‘ฆ − ๐‘๐‘ฅ)/๐‘€ = ๐‘ฆ/๐‘ฆ² = 1/๐‘ฆ
is a function of ๐‘ฆ alone. We compute the integrating factor
๐‘’−
∫
๐‘„(๐‘ฆ) ๐‘‘๐‘ฆ
= ๐‘’ − ln ๐‘ฆ =
1
.
๐‘ฆ
Therefore we look at the exact equation
๐‘ฆ + ((๐‘ฅ๐‘ฆ + 1)/๐‘ฆ) ๐‘‘๐‘ฆ/๐‘‘๐‘ฅ = 0.
The reader should double check that this equation is exact. We follow the procedure for
exact equations
๐น(๐‘ฅ, ๐‘ฆ) = ๐‘ฅ๐‘ฆ + ๐ด(๐‘ฆ),
and
(๐‘ฅ๐‘ฆ + 1)/๐‘ฆ = ๐‘ฅ + 1/๐‘ฆ = ๐‘ฅ + ๐ด′(๐‘ฆ).   (1.6)
Consequently ๐ด′(๐‘ฆ) = 1/๐‘ฆ or ๐ด(๐‘ฆ) = ln ๐‘ฆ. Thus ๐น(๐‘ฅ, ๐‘ฆ) = ๐‘ฅ๐‘ฆ + ln ๐‘ฆ. It is not possible to solve
๐น(๐‘ฅ, ๐‘ฆ) = ๐ถ for ๐‘ฆ in terms of elementary functions, so let us be content with the implicit
solution:
๐‘ฅ๐‘ฆ + ln ๐‘ฆ = ๐ถ.
We are looking for the general solution and we divided by ๐‘ฆ above. We should check what
happens when ๐‘ฆ = 0, as the equation itself makes perfect sense in that case. We plug in
๐‘ฆ = 0 to find the equation is satisfied. So ๐‘ฆ = 0 is also a solution.
1.8.3 Exercises
Exercise 1.8.2: Solve the following exact equations, implicit general solutions will suffice:
a) (2๐‘ฅ ๐‘ฆ + ๐‘ฅ 2 ) ๐‘‘๐‘ฅ + (๐‘ฅ 2 + ๐‘ฆ 2 + 1) ๐‘‘๐‘ฆ = 0
๐‘‘๐‘ฆ
c) ๐‘’ ๐‘ฅ + ๐‘ฆ 3 + 3๐‘ฅ๐‘ฆ 2 ๐‘‘๐‘ฅ = 0
๐‘‘๐‘ฆ
b) ๐‘ฅ 5 + ๐‘ฆ 5 ๐‘‘๐‘ฅ = 0
d) (๐‘ฅ + ๐‘ฆ) cos(๐‘ฅ) + sin(๐‘ฅ) + sin(๐‘ฅ)๐‘ฆ 0 = 0
Exercise 1.8.3: Find the integrating factor for the following equations making them into exact equations:
a) ๐‘’^{๐‘ฅ๐‘ฆ} ๐‘‘๐‘ฅ + (๐‘ฅ๐‘’^{๐‘ฅ๐‘ฆ}/๐‘ฆ) ๐‘‘๐‘ฆ = 0
b) ((๐‘’^๐‘ฅ + ๐‘ฆ³)/๐‘ฆ²) ๐‘‘๐‘ฅ + 3๐‘ฅ ๐‘‘๐‘ฆ = 0
c) 4(๐‘ฆ² + ๐‘ฅ) ๐‘‘๐‘ฅ + ((2๐‘ฅ + 2๐‘ฆ²)/๐‘ฆ) ๐‘‘๐‘ฆ = 0
d) 2 sin(๐‘ฆ) ๐‘‘๐‘ฅ + ๐‘ฅ cos(๐‘ฆ) ๐‘‘๐‘ฆ = 0
Exercise 1.8.4: Suppose you have an equation of the form ๐‘“ (๐‘ฅ) + ๐‘”(๐‘ฆ) ๐‘‘๐‘ฆ/๐‘‘๐‘ฅ = 0.
a) Show it is exact.
b) Find the form of the potential function in terms of ๐‘“ and ๐‘”.
Exercise 1.8.5: Suppose that we have the equation ๐‘“ (๐‘ฅ) ๐‘‘๐‘ฅ − ๐‘‘๐‘ฆ = 0.
a) Is this equation exact?
b) Find the general solution using a definite integral.
Exercise 1.8.6: Find the potential function ๐น(๐‘ฅ, ๐‘ฆ) of the exact equation ((1 + ๐‘ฅ๐‘ฆ)/๐‘ฅ) ๐‘‘๐‘ฅ + (1/๐‘ฆ + ๐‘ฅ) ๐‘‘๐‘ฆ = 0 in two different ways.
a) Integrate ๐‘€ in terms of ๐‘ฅ and then differentiate in ๐‘ฆ and set to ๐‘.
b) Integrate ๐‘ in terms of ๐‘ฆ and then differentiate in ๐‘ฅ and set to ๐‘€.
Exercise 1.8.7: A function ๐‘ข(๐‘ฅ, ๐‘ฆ) is said to be a harmonic function if ๐‘ข๐‘ฅ๐‘ฅ + ๐‘ข ๐‘ฆ ๐‘ฆ = 0.
a) Show if ๐‘ข is harmonic, −๐‘ข ๐‘ฆ ๐‘‘๐‘ฅ + ๐‘ข๐‘ฅ ๐‘‘๐‘ฆ = 0 is an exact equation. So there exists (at least
locally) the so-called harmonic conjugate function ๐‘ฃ(๐‘ฅ, ๐‘ฆ) such that ๐‘ฃ ๐‘ฅ = −๐‘ข ๐‘ฆ and ๐‘ฃ ๐‘ฆ = ๐‘ข๐‘ฅ .
Verify that the following ๐‘ข are harmonic and find the corresponding harmonic conjugates ๐‘ฃ:
b) ๐‘ข = 2๐‘ฅ๐‘ฆ
c) ๐‘ข = ๐‘’^๐‘ฅ cos ๐‘ฆ
d) ๐‘ข = ๐‘ฅ³ − 3๐‘ฅ๐‘ฆ²
Exercise 1.8.101: Solve the following exact equations, implicit general solutions will suffice:
a) cos(๐‘ฅ) + ๐‘ฆ๐‘’ ๐‘ฅ ๐‘ฆ + ๐‘ฅ๐‘’ ๐‘ฅ ๐‘ฆ ๐‘ฆ 0 = 0
๐‘‘๐‘ฆ
c) ๐‘’ ๐‘ฅ + ๐‘’ ๐‘ฆ ๐‘‘๐‘ฅ = 0
b) (2๐‘ฅ + ๐‘ฆ) ๐‘‘๐‘ฅ + (๐‘ฅ − 4๐‘ฆ) ๐‘‘๐‘ฆ = 0
d) (3๐‘ฅ 2 + 3๐‘ฆ) ๐‘‘๐‘ฅ + (3๐‘ฆ 2 + 3๐‘ฅ) ๐‘‘๐‘ฆ = 0
Exercise 1.8.102: Find the integrating factor for the following equations making them into exact equations:
a) (1/๐‘ฆ) ๐‘‘๐‘ฅ + 3๐‘ฆ ๐‘‘๐‘ฆ = 0
b) ๐‘‘๐‘ฅ − ๐‘’^{−๐‘ฅ−๐‘ฆ} ๐‘‘๐‘ฆ = 0
c) (cos(๐‘ฅ)/๐‘ฆ² + 1/๐‘ฆ) ๐‘‘๐‘ฅ + (๐‘ฅ/๐‘ฆ²) ๐‘‘๐‘ฆ = 0
d) (2๐‘ฆ + ๐‘ฆ²/๐‘ฅ) ๐‘‘๐‘ฅ + (2๐‘ฆ + ๐‘ฅ) ๐‘‘๐‘ฆ = 0
Exercise 1.8.103:
a) Show that every separable equation ๐‘ฆ′ = ๐‘“ (๐‘ฅ)๐‘”(๐‘ฆ) can be written as an exact equation, and verify that it is indeed exact.
b) Using this, rewrite ๐‘ฆ′ = ๐‘ฅ๐‘ฆ as an exact equation, solve it, and verify that the solution is the same as it was in Example 1.3.1.
1.9 First order linear PDE
Note: 1 lecture, can safely be skipped
We only considered ODE so far, so let us solve a linear first order PDE. Consider the
equation
๐‘Ž(๐‘ฅ, ๐‘ก) ๐‘ข๐‘ฅ + ๐‘(๐‘ฅ, ๐‘ก) ๐‘ข๐‘ก + ๐‘(๐‘ฅ, ๐‘ก) ๐‘ข = ๐‘”(๐‘ฅ, ๐‘ก),
๐‘ข(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ),
−∞ < ๐‘ฅ < ∞,
๐‘ก > 0,
where ๐‘ข(๐‘ฅ, ๐‘ก) is a function of ๐‘ฅ and ๐‘ก. The initial condition ๐‘ข(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ) is now a function
of ๐‘ฅ rather than just a number. In these problems, it is useful to think of ๐‘ฅ as position and ๐‘ก
as time. The equation describes the evolution of a function of ๐‘ฅ as time goes on. Below,
the coefficients ๐‘Ž, ๐‘, ๐‘, and the function ๐‘” are mostly going to be constant or zero. The
method we describe works with nonconstant coefficients, although the computations may
get difficult quickly.
This method we use is the method of characteristics. The idea is that we find lines along
which the equation is an ODE that we solve. We will see this technique again for second
order PDE when we encounter the wave equation in § 4.8.
Example 1.9.1: Consider the equation
๐‘ข๐‘ก + ๐›ผ๐‘ข๐‘ฅ = 0,
๐‘ข(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ).
This particular equation, ๐‘ข๐‘ก + ๐›ผ๐‘ข๐‘ฅ = 0, is called the transport equation.
The data will propagate along curves called characteristics. The idea is to change to the
so-called characteristic coordinates. If we change to these coordinates, the equation simplifies.
The change of variables for this equation is
๐œ‰ = ๐‘ฅ − ๐›ผ๐‘ก,
๐‘  = ๐‘ก.
Let’s see what the equation becomes. Remember the chain rule in several variables.
๐‘ข๐‘ก = ๐‘ข๐œ‰ ๐œ‰๐‘ก + ๐‘ข๐‘  ๐‘  ๐‘ก = −๐›ผ๐‘ข๐œ‰ + ๐‘ข๐‘  ,
๐‘ข ๐‘ฅ = ๐‘ข๐œ‰ ๐œ‰ ๐‘ฅ + ๐‘ข ๐‘  ๐‘  ๐‘ฅ = ๐‘ข๐œ‰ .
The equation in the coordinates ๐œ‰ and ๐‘  becomes
(−๐›ผ๐‘ข๐œ‰ + ๐‘ข๐‘  ) +๐›ผ (๐‘ข๐œ‰ ) = 0,
|
{z
๐‘ข๐‘ก
}
|{z}
๐‘ข๐‘ฅ
or in other words
๐‘ข๐‘  = 0.
That is trivial to solve. Treating ๐œ‰ as simply a parameter, we have obtained the ODE ๐‘‘๐‘ข/๐‘‘๐‘  = 0.
The solution is a function that does not depend on ๐‘  (but it does depend on ๐œ‰). That is,
there is some function ๐ด such that
๐‘ข = ๐ด(๐œ‰) = ๐ด(๐‘ฅ − ๐›ผ๐‘ก).
The initial condition says that:
๐‘“ (๐‘ฅ) = ๐‘ข(๐‘ฅ, 0) = ๐ด(๐‘ฅ − ๐›ผ0) = ๐ด(๐‘ฅ),
so ๐ด = ๐‘“ . In other words,
๐‘ข(๐‘ฅ, ๐‘ก) = ๐‘“ (๐‘ฅ − ๐›ผ๐‘ก).
Everything is simply moving right at speed ๐›ผ as ๐‘ก increases. The curve given by the
equation
๐œ‰ = constant
is called the characteristic. See Figure 1.20. In this case, the solution does not change along
the characteristic.
In the (๐‘ฅ, ๐‘ก) coordinates, the characteristic
curves satisfy ๐‘ก = ๐›ผ1 (๐‘ฅ − ๐œ‰), and are in fact lines.
๐‘ก ๐œ‰=0 ๐œ‰=1 ๐œ‰=2
1
The slope of characteristic lines is ๐›ผ , and for each
different ๐œ‰ we get a different characteristic line.
We see why ๐‘ข๐‘ก + ๐›ผ๐‘ข๐‘ฅ = 0 is called the transport
equation: everything travels at some constant
speed. Sometimes this is called convection. An
example application is material being moved by
๐‘ฅ
a river where the material does not diffuse and
Figure 1.20: Characteristic curves.
is simply carried along. In this setup, ๐‘ฅ is the
position along the river, ๐‘ก is the time, and ๐‘ข(๐‘ฅ, ๐‘ก)
the concentration the material at position ๐‘ฅ and time ๐‘ก. See Figure 1.21 for an example.
Figure 1.21: Example of “transport” in ๐‘ข๐‘ก + ๐‘ข๐‘ฅ = 0 (that is, ๐›ผ = 1) where the initial condition ๐‘“ (๐‘ฅ) is a peak at the origin. On the left is a graph of the initial condition ๐‘ข(๐‘ฅ, 0). On the right is a graph of the function ๐‘ข(๐‘ฅ, 1), that is at time ๐‘ก = 1. Notice it is the same graph shifted one unit to the right.
We use a similar idea in the more general case:
๐‘Ž๐‘ข๐‘ฅ + ๐‘๐‘ข๐‘ก + ๐‘๐‘ข = ๐‘”,
๐‘ข(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ).
We change coordinates to the characteristic coordinates. Let us call these coordinates (๐œ‰, ๐‘ ).
These are coordinates where ๐‘Ž๐‘ข๐‘ฅ + ๐‘๐‘ข๐‘ก becomes differentiation in the ๐‘  variable.
Along the characteristic curves (where ๐œ‰ is constant), we get a new ODE in the ๐‘  variable. In the transport equation, we got the simple ๐‘‘๐‘ข/๐‘‘๐‘  = 0. In general, we get the linear equation
๐‘‘๐‘ข/๐‘‘๐‘  + ๐‘๐‘ข = ๐‘”.   (1.7)
We think of everything as a function of ๐œ‰ and ๐‘ , although we are thinking of ๐œ‰ as a parameter
rather than an independent variable. So the equation is an ODE. It is a linear ODE that we
can solve using the integrating factor.
To find the characteristics, think of a curve given parametrically (๐‘ฅ(๐‘ ), ๐‘ก(๐‘ )). We try to have the curve satisfy
๐‘‘๐‘ฅ/๐‘‘๐‘  = ๐‘Ž,  ๐‘‘๐‘ก/๐‘‘๐‘  = ๐‘.
Why? Because when we think of ๐‘ฅ and ๐‘ก as functions of ๐‘  we find, using the chain rule,
๐‘‘๐‘ข/๐‘‘๐‘  + ๐‘๐‘ข = ๐‘ข๐‘ฅ (๐‘‘๐‘ฅ/๐‘‘๐‘ ) + ๐‘ข๐‘ก (๐‘‘๐‘ก/๐‘‘๐‘ ) + ๐‘๐‘ข = ๐‘Ž๐‘ข๐‘ฅ + ๐‘๐‘ข๐‘ก + ๐‘๐‘ข = ๐‘”.
So we get the ODE (1.7), which then describes the value of the solution ๐‘ข of the PDE along
this characteristic curve. It is also convenient to make sure that ๐‘  = 0 corresponds to ๐‘ก = 0,
that is ๐‘ก(0) = 0. It will be convenient also for ๐‘ฅ(0) = ๐œ‰. See Figure 1.22.
Figure 1.22: General characteristic curve.
Example 1.9.2: Consider
๐‘ข๐‘ฅ + ๐‘ข๐‘ก + ๐‘ข = ๐‘ฅ,  ๐‘ข(๐‘ฅ, 0) = ๐‘’^{−๐‘ฅ²}.
We find the characteristics, that is, the curves given by
๐‘‘๐‘ฅ/๐‘‘๐‘  = 1,  ๐‘‘๐‘ก/๐‘‘๐‘  = 1.
So
๐‘ฅ = ๐‘  + ๐‘1 ,
๐‘ก = ๐‘  + ๐‘2 ,
for some ๐‘1 and ๐‘ 2 . At ๐‘  = 0 we want ๐‘ก = 0, and ๐‘ฅ should be ๐œ‰. So we let ๐‘ 1 = ๐œ‰ and ๐‘2 = 0:
๐‘ฅ = ๐‘  + ๐œ‰,
The ODE is
๐‘‘๐‘ข
๐‘‘๐‘ 
๐‘ก = ๐‘ .
+ ๐‘ข = ๐‘ฅ, and ๐‘ฅ = ๐‘  + ๐œ‰. So, the ODE to solve along the characteristic is
๐‘‘๐‘ข
+ ๐‘ข = ๐‘  + ๐œ‰.
๐‘‘๐‘ 
The general solution of this equation, treating ๐œ‰ as a parameter, is ๐‘ข = ๐ถ๐‘’^{−๐‘ } + ๐‘  + ๐œ‰ − 1, for some constant ๐ถ. At ๐‘  = 0, our initial condition is that ๐‘ข is ๐‘’^{−๐œ‰²}, since at ๐‘  = 0 we have ๐‘ฅ = ๐œ‰. Given this initial condition, we find ๐ถ = ๐‘’^{−๐œ‰²} − ๐œ‰ + 1. So,
๐‘ข = (๐‘’^{−๐œ‰²} − ๐œ‰ + 1) ๐‘’^{−๐‘ } + ๐‘  + ๐œ‰ − 1 = ๐‘’^{−๐œ‰²−๐‘ } + (1 − ๐œ‰)๐‘’^{−๐‘ } + ๐‘  + ๐œ‰ − 1.
Substitute ๐œ‰ = ๐‘ฅ − ๐‘ก and ๐‘  = ๐‘ก to find ๐‘ข in terms of ๐‘ฅ and ๐‘ก:
๐‘ข = ๐‘’^{−(๐‘ฅ−๐‘ก)²−๐‘ก} + (1 − ๐‘ฅ + ๐‘ก)๐‘’^{−๐‘ก} + ๐‘ฅ − 1.
See Figure 1.23 on the next page for a plot of ๐‘ข(๐‘ฅ, ๐‘ก) as a function of two variables.
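One can also let SymPy verify the answer (a check of ours, not part of the text):

    import sympy as sp

    x, t = sp.symbols('x t')
    u = sp.exp(-(x - t)**2 - t) + (1 - x + t) * sp.exp(-t) + x - 1

    print(sp.simplify(sp.diff(u, x) + sp.diff(u, t) + u - x))  # 0
    print(sp.simplify(u.subs(t, 0)))                           # exp(-x**2)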
When the coefficients are not constants, the characteristic curves are not going to be
straight lines anymore.
Example 1.9.3: Consider the following variable coefficient equation:
๐‘ฅ๐‘ข๐‘ฅ + ๐‘ข๐‘ก + 2๐‘ข = 0,
๐‘ข(๐‘ฅ, 0) = cos(๐‘ฅ).
We find the characteristics, that is, the curves given by
๐‘‘๐‘ฅ/๐‘‘๐‘  = ๐‘ฅ,  ๐‘‘๐‘ก/๐‘‘๐‘  = 1.
So
๐‘ฅ = ๐‘1๐‘’^๐‘ ,  ๐‘ก = ๐‘  + ๐‘2.
At ๐‘  = 0, we wish to get the line ๐‘ก = 0, and ๐‘ฅ should be ๐œ‰. So
๐‘ฅ = ๐œ‰๐‘’ ๐‘  ,
๐‘ก = ๐‘ .
OK, the ODE we need to solve is
๐‘‘๐‘ข/๐‘‘๐‘  + 2๐‘ข = 0.
Figure 1.23: Plot of the solution ๐‘ข(๐‘ฅ, ๐‘ก) to ๐‘ข๐‘ฅ + ๐‘ข๐‘ก + ๐‘ข = ๐‘ฅ, ๐‘ข(๐‘ฅ, 0) = ๐‘’^{−๐‘ฅ²}.
This is for a fixed ๐œ‰. At ๐‘  = 0, we should get that ๐‘ข is cos(๐œ‰), so that is our initial condition.
Consequently,
๐‘ข = ๐‘’ −2๐‘  cos(๐œ‰) = ๐‘’ −2๐‘ก cos(๐‘ฅ๐‘’ −๐‘ก ).
We make a few closing remarks. One thing to keep in mind is that we would get into
trouble if the coefficient in front of ๐‘ข๐‘ก , that is the ๐‘, is ever zero. Let us consider a quick
example of what can go wrong:
๐‘ข๐‘ฅ + ๐‘ข = 0,
๐‘ข(๐‘ฅ, 0) = sin(๐‘ฅ).
This problem has no solution. If we had a solution, it would imply that ๐‘ข๐‘ฅ (๐‘ฅ, 0) = cos(๐‘ฅ),
but ๐‘ข๐‘ฅ (๐‘ฅ, 0) + ๐‘ข(๐‘ฅ, 0) = cos(๐‘ฅ) + sin(๐‘ฅ) ≠ 0. The problem is that the characteristic curve is
now the line ๐‘ก = 0, and the solution is already provided on that line!
As long as ๐‘ is nonzero, it is convenient to ensure that ๐‘ is positive by multiplying by
−1 if necessary, so that positive ๐‘  means positive ๐‘ก.
Another remark is that if ๐‘Ž or ๐‘ in the equation are variable, the computations can
quickly get out of hand, as the expressions for the characteristic coordinates become messy
and then solving the ODE becomes even messier. In the examples above, ๐‘ was always 1,
meaning we got ๐‘  = ๐‘ก in the characteristic coordinates. If ๐‘ is not constant, your expression
for ๐‘  will be more complicated.
Finding the characteristic coordinates is really a system of ODE in general if ๐‘Ž depends
on ๐‘ก or if ๐‘ depends on ๐‘ฅ. In that case, we would need techniques of systems of ODE to
77
1.9. FIRST ORDER LINEAR PDE
solve, see chapter 3 or chapter 8. In general, if ๐‘Ž and ๐‘ are not linear functions or constants,
finding closed form expressions for the characteristic coordinates may be impossible.
Finally, the method of characteristics applies to nonlinear first order PDE as well. In the
nonlinear case, the characteristics depend not only on the differential equation, but also on
the initial data. This leads to not only more difficult computations, but also the formation
of singularities where the solution breaks down at a certain point in time. An example
application where first order nonlinear PDE come up is traffic flow theory, and you have
probably experienced the formation of singularities: traffic jams. But we digress.
1.9.1 Exercises
Exercise 1.9.1: Solve
a) ๐‘ข๐‘ก + 9๐‘ข๐‘ฅ = 0, ๐‘ข(๐‘ฅ, 0) = sin(๐‘ฅ),
b) ๐‘ข๐‘ก − 8๐‘ข๐‘ฅ = 0, ๐‘ข(๐‘ฅ, 0) = sin(๐‘ฅ),
c) ๐‘ข๐‘ก + ๐œ‹๐‘ข๐‘ฅ = 0, ๐‘ข(๐‘ฅ, 0) = sin(๐‘ฅ),
d) ๐‘ข๐‘ก + ๐œ‹๐‘ข๐‘ฅ + ๐‘ข = 0, ๐‘ข(๐‘ฅ, 0) = sin(๐‘ฅ).
Exercise 1.9.2: Solve ๐‘ข๐‘ก + 3๐‘ข๐‘ฅ = 1, ๐‘ข(๐‘ฅ, 0) = ๐‘ฅ 2 .
Exercise 1.9.3: Solve ๐‘ข๐‘ก + 3๐‘ข๐‘ฅ = ๐‘ฅ, ๐‘ข(๐‘ฅ, 0) = ๐‘’ ๐‘ฅ .
Exercise 1.9.4: Solve ๐‘ข๐‘ฅ + ๐‘ข๐‘ก + ๐‘ฅ๐‘ข = 0, ๐‘ข(๐‘ฅ, 0) = cos(๐‘ฅ).
Exercise 1.9.5:
a) Find the characteristic coordinates for the following equations:
1) ๐‘ข๐‘ฅ + ๐‘ข๐‘ก + ๐‘ข = 1, ๐‘ข(๐‘ฅ, 0) = cos(๐‘ฅ),
2) 2๐‘ข๐‘ฅ + 2๐‘ข๐‘ก + 2๐‘ข = 2, ๐‘ข(๐‘ฅ, 0) = cos(๐‘ฅ).
b) Solve the two equations using the coordinates.
c) Explain why you got the same solution, although the characteristic coordinates you found
were different.
Exercise 1.9.6: Solve (1 + ๐‘ฅ 2 )๐‘ข๐‘ก + ๐‘ฅ 2 ๐‘ข๐‘ฅ + ๐‘’ ๐‘ฅ ๐‘ข = 0, ๐‘ข(๐‘ฅ, 0) = 0. Hint: Think a little out of the box.
Exercise 1.9.101: Solve
a) ๐‘ข๐‘ก − 5๐‘ข๐‘ฅ = 0, ๐‘ข(๐‘ฅ, 0) =
1
,
1+๐‘ฅ 2
b) ๐‘ข๐‘ก + 2๐‘ข๐‘ฅ = 0, ๐‘ข(๐‘ฅ, 0) = cos(๐‘ฅ).
Exercise 1.9.102: Solve ๐‘ข๐‘ฅ + ๐‘ข๐‘ก + ๐‘ก๐‘ข = 0, ๐‘ข(๐‘ฅ, 0) = cos(๐‘ฅ).
Exercise 1.9.103: Solve ๐‘ข๐‘ฅ + ๐‘ข๐‘ก = 5, ๐‘ข(๐‘ฅ, 0) = ๐‘ฅ.
Chapter 2
Higher order linear ODEs
2.1 Second order linear ODEs
Note: 1 lecture, reduction of order optional, first part of §3.1 in [EP], parts of §3.1 and §3.2 in [BD]
Let us consider the general second order linear differential equation
๐ด(๐‘ฅ)๐‘ฆ 00 + ๐ต(๐‘ฅ)๐‘ฆ 0 + ๐ถ(๐‘ฅ)๐‘ฆ = ๐น(๐‘ฅ).
We usually divide through by ๐ด(๐‘ฅ) to get
๐‘ฆ 00 + ๐‘(๐‘ฅ)๐‘ฆ 0 + ๐‘ž(๐‘ฅ)๐‘ฆ = ๐‘“ (๐‘ฅ),
(2.1)
where ๐‘(๐‘ฅ) = ๐ต(๐‘ฅ)/๐ด(๐‘ฅ), ๐‘ž(๐‘ฅ) = ๐ถ(๐‘ฅ)/๐ด(๐‘ฅ), and ๐‘“ (๐‘ฅ) = ๐น(๐‘ฅ)/๐ด(๐‘ฅ). The word linear means that the
equation contains no powers nor functions of ๐‘ฆ, ๐‘ฆ 0, and ๐‘ฆ 00.
In the special case when ๐‘“ (๐‘ฅ) = 0, we have a so-called homogeneous equation
๐‘ฆ 00 + ๐‘(๐‘ฅ)๐‘ฆ 0 + ๐‘ž(๐‘ฅ)๐‘ฆ = 0.
(2.2)
We have already seen some second order linear homogeneous equations.
๐‘ฆ 00 + ๐‘˜ 2 ๐‘ฆ = 0
Two solutions are:
๐‘ฆ1 = cos(๐‘˜๐‘ฅ),
๐‘ฆ 00 − ๐‘˜ 2 ๐‘ฆ = 0
Two solutions are:
๐‘ฆ1 = ๐‘’ ๐‘˜๐‘ฅ ,
๐‘ฆ2 = sin(๐‘˜๐‘ฅ).
๐‘ฆ2 = ๐‘’ −๐‘˜๐‘ฅ .
If we know two solutions of a linear homogeneous equation, we know many more of
them.
Theorem 2.1.1 (Superposition). Suppose ๐‘ฆ1 and ๐‘ฆ2 are two solutions of the homogeneous equation
(2.2). Then
๐‘ฆ(๐‘ฅ) = ๐ถ1 ๐‘ฆ1 (๐‘ฅ) + ๐ถ2 ๐‘ฆ2 (๐‘ฅ),
also solves (2.2) for arbitrary constants ๐ถ1 and ๐ถ2 .
That is, we can add solutions together and multiply them by constants to obtain new
and different solutions. We call the expression ๐ถ1 ๐‘ฆ1 + ๐ถ2 ๐‘ฆ2 a linear combination of ๐‘ฆ1 and
๐‘ฆ2 . Let us prove this theorem; the proof is very enlightening and illustrates how linear
equations work.
Proof: Let ๐‘ฆ = ๐ถ1 ๐‘ฆ1 + ๐ถ2 ๐‘ฆ2 . Then
๐‘ฆ 00 + ๐‘๐‘ฆ 0 + ๐‘ž๐‘ฆ = (๐ถ1 ๐‘ฆ1 + ๐ถ2 ๐‘ฆ2 )00 + ๐‘(๐ถ1 ๐‘ฆ1 + ๐ถ2 ๐‘ฆ2 )0 + ๐‘ž(๐ถ1 ๐‘ฆ1 + ๐ถ2 ๐‘ฆ2 )
= ๐ถ1 ๐‘ฆ100 + ๐ถ2 ๐‘ฆ200 + ๐ถ1 ๐‘๐‘ฆ10 + ๐ถ2 ๐‘๐‘ฆ20 + ๐ถ1 ๐‘ž๐‘ฆ1 + ๐ถ2 ๐‘ž๐‘ฆ2
= ๐ถ1 (๐‘ฆ100 + ๐‘๐‘ฆ10 + ๐‘ž๐‘ฆ1 ) + ๐ถ2 (๐‘ฆ200 + ๐‘๐‘ฆ20 + ๐‘ž๐‘ฆ2 )
= ๐ถ1 · 0 + ๐ถ2 · 0 = 0.
The proof becomes even simpler to state if we use the operator notation. An operator is
an object that eats functions and spits out functions (kind of like what a function is, but a
function eats numbers and spits out numbers). Define the operator ๐ฟ by
๐ฟ๐‘ฆ = ๐‘ฆ 00 + ๐‘๐‘ฆ 0 + ๐‘ž๐‘ฆ.
The differential equation now becomes ๐ฟ๐‘ฆ = 0. The operator (and the equation) ๐ฟ being
linear means that ๐ฟ(๐ถ1 ๐‘ฆ1 + ๐ถ2 ๐‘ฆ2 ) = ๐ถ1 ๐ฟ๐‘ฆ1 + ๐ถ2 ๐ฟ๐‘ฆ2 . The proof above becomes
๐ฟ๐‘ฆ = ๐ฟ(๐ถ1 ๐‘ฆ1 + ๐ถ2 ๐‘ฆ2 ) = ๐ถ1 ๐ฟ๐‘ฆ1 + ๐ถ2 ๐ฟ๐‘ฆ2 = ๐ถ1 · 0 + ๐ถ2 · 0 = 0.
Two different solutions to the second equation ๐‘ฆ′′ − ๐‘˜²๐‘ฆ = 0 are ๐‘ฆ1 = cosh(๐‘˜๐‘ฅ) and ๐‘ฆ2 = sinh(๐‘˜๐‘ฅ). Let us remind ourselves of the definitions: cosh ๐‘ฅ = (๐‘’^๐‘ฅ + ๐‘’^{−๐‘ฅ})/2 and sinh ๐‘ฅ = (๐‘’^๐‘ฅ − ๐‘’^{−๐‘ฅ})/2.
2 .
Therefore, these are solutions by superposition as they are linear combinations of the two
exponential solutions.
The functions sinh and cosh are sometimes more convenient to use than the exponential.
Let us review some of their properties:
cosh 0 = 1,  sinh 0 = 0,
๐‘‘/๐‘‘๐‘ฅ [cosh ๐‘ฅ] = sinh ๐‘ฅ,  ๐‘‘/๐‘‘๐‘ฅ [sinh ๐‘ฅ] = cosh ๐‘ฅ,
cosh² ๐‘ฅ − sinh² ๐‘ฅ = 1.
Exercise 2.1.1: Derive these properties using the definitions of sinh and cosh in terms of exponentials.
Linear equations have nice and simple answers to the existence and uniqueness question.
Theorem 2.1.2 (Existence and uniqueness). Suppose ๐‘, ๐‘ž, ๐‘“ are continuous functions on some
interval ๐ผ, ๐‘Ž is a number in ๐ผ, and ๐‘Ž, ๐‘0 , ๐‘1 are constants. The equation
๐‘ฆ 00 + ๐‘(๐‘ฅ)๐‘ฆ 0 + ๐‘ž(๐‘ฅ)๐‘ฆ = ๐‘“ (๐‘ฅ),
has exactly one solution ๐‘ฆ(๐‘ฅ) defined on the same interval ๐ผ satisfying the initial conditions
๐‘ฆ(๐‘Ž) = ๐‘ 0 ,
๐‘ฆ 0(๐‘Ž) = ๐‘ 1 .
For example, the equation ๐‘ฆ′′ + ๐‘˜²๐‘ฆ = 0 with ๐‘ฆ(0) = ๐‘0 and ๐‘ฆ′(0) = ๐‘1 has the solution
๐‘ฆ(๐‘ฅ) = ๐‘0 cos(๐‘˜๐‘ฅ) + (๐‘1/๐‘˜) sin(๐‘˜๐‘ฅ).
The equation ๐‘ฆ′′ − ๐‘˜²๐‘ฆ = 0 with ๐‘ฆ(0) = ๐‘0 and ๐‘ฆ′(0) = ๐‘1 has the solution
๐‘ฆ(๐‘ฅ) = ๐‘0 cosh(๐‘˜๐‘ฅ) + (๐‘1/๐‘˜) sinh(๐‘˜๐‘ฅ).
Using cosh and sinh in this solution allows us to solve for the initial conditions in a cleaner way than if we had used the exponentials.
The initial conditions for a second order ODE consist of two equations. Common sense
tells us that if we have two arbitrary constants and two equations, then we should be able
to solve for the constants and find a solution to the differential equation satisfying the
initial conditions.
Question: Suppose we find two different solutions ๐‘ฆ1 and ๐‘ฆ2 to the homogeneous
equation (2.2). Can every solution be written (using superposition) in the form ๐‘ฆ =
๐ถ 1 ๐‘ฆ1 + ๐ถ 2 ๐‘ฆ2 ?
The answer is affirmative! Provided that ๐‘ฆ1 and ๐‘ฆ2 are different enough in the following sense. We say ๐‘ฆ1 and ๐‘ฆ2 are linearly independent if one is not a constant multiple of the other.
Theorem 2.1.3. Let ๐‘, ๐‘ž be continuous functions. Let ๐‘ฆ1 and ๐‘ฆ2 be two linearly independent
solutions to the homogeneous equation (2.2). Then every other solution is of the form
๐‘ฆ = ๐ถ 1 ๐‘ฆ1 + ๐ถ 2 ๐‘ฆ2 .
That is, ๐‘ฆ = ๐ถ1 ๐‘ฆ1 + ๐ถ2 ๐‘ฆ2 is the general solution.
For example, we found the solutions ๐‘ฆ1 = sin ๐‘ฅ and ๐‘ฆ2 = cos ๐‘ฅ for the equation
๐‘ฆ 00 + ๐‘ฆ = 0. It is not hard to see that sine and cosine are not constant multiples of each other.
If sin ๐‘ฅ = ๐ด cos ๐‘ฅ for some constant ๐ด, we let ๐‘ฅ = 0 and this would imply ๐ด = 0. But then
sin ๐‘ฅ = 0 for all ๐‘ฅ, which is preposterous. So ๐‘ฆ1 and ๐‘ฆ2 are linearly independent. Hence,
๐‘ฆ = ๐ถ1 cos ๐‘ฅ + ๐ถ2 sin ๐‘ฅ
is the general solution to ๐‘ฆ 00 + ๐‘ฆ = 0.
For two functions, checking linear independence is rather simple. Let us see another example. Consider ๐‘ฆ′′ − 2๐‘ฅ^{−2}๐‘ฆ = 0. Then ๐‘ฆ1 = ๐‘ฅ² and ๐‘ฆ2 = 1/๐‘ฅ are solutions. To see that they are linearly independent, suppose one is a multiple of the other, ๐‘ฆ1 = ๐ด๐‘ฆ2; we just have to find out that ๐ด cannot be a constant. In this case, ๐ด = ๐‘ฆ1/๐‘ฆ2 = ๐‘ฅ³, which is most decidedly not a constant. So ๐‘ฆ = ๐ถ1๐‘ฅ² + ๐ถ2(1/๐‘ฅ) is the general solution.
If you have one solution to a second order linear homogeneous equation, then you
can find another one. This is the reduction of order method. The idea is that if we somehow
found ๐‘ฆ1 as a solution of ๐‘ฆ 00 + ๐‘(๐‘ฅ)๐‘ฆ 0 + ๐‘ž(๐‘ฅ)๐‘ฆ = 0 we try a second solution of the form
๐‘ฆ2 (๐‘ฅ) = ๐‘ฆ1 (๐‘ฅ)๐‘ฃ(๐‘ฅ). We just need to find ๐‘ฃ. We plug ๐‘ฆ2 into the equation:
0 = ๐‘ฆ200 + ๐‘(๐‘ฅ)๐‘ฆ20 + ๐‘ž(๐‘ฅ)๐‘ฆ2 = ๐‘ฆ100๐‘ฃ + 2๐‘ฆ10 ๐‘ฃ 0 + ๐‘ฆ1 ๐‘ฃ 00 + ๐‘(๐‘ฅ)(๐‘ฆ10 ๐‘ฃ + ๐‘ฆ1 ๐‘ฃ 0) + ๐‘ž(๐‘ง)๐‘ฆ1 ๐‘ฃ
00
= ๐‘ฆ1 ๐‘ฃ +
(2๐‘ฆ10
:0
0
+ ๐‘(๐‘ฅ)๐‘ฆ1 )๐‘ฃ +
0 ๐‘ฆ100 +
๐‘(๐‘ฅ)๐‘ฆ
+ ๐‘ž(๐‘ฅ)๐‘ฆ1
1
๐‘ฃ.
In other words, ๐‘ฆ1 ๐‘ฃ 00 + (2๐‘ฆ10 + ๐‘(๐‘ฅ)๐‘ฆ1 )๐‘ฃ 0 = 0. Using ๐‘ค = ๐‘ฃ 0 we have the first order linear
equation ๐‘ฆ1 ๐‘ค 0 + (2๐‘ฆ10 + ๐‘(๐‘ฅ)๐‘ฆ1 )๐‘ค = 0. After solving this equation for ๐‘ค (integrating factor),
we find ๐‘ฃ by antidifferentiating ๐‘ค. We then form ๐‘ฆ2 by computing ๐‘ฆ1 ๐‘ฃ. For example,
suppose we somehow know ๐‘ฆ1 = ๐‘ฅ is a solution to ๐‘ฆ 00 + ๐‘ฅ −1 ๐‘ฆ 0 − ๐‘ฅ −2 ๐‘ฆ = 0. The equation
for ๐‘ค is then ๐‘ฅ๐‘ค 0 + 3๐‘ค = 0. We find a solution, ๐‘ค = ๐ถ๐‘ฅ −3 , and we find an antiderivative
−๐ถ
−๐ถ
1
๐‘ฃ = 2๐‘ฅ
2 . Hence ๐‘ฆ2 = ๐‘ฆ1 ๐‘ฃ = 2๐‘ฅ . Any ๐ถ works and so ๐ถ = −2 makes ๐‘ฆ2 = /๐‘ฅ . Thus, the
1
general solution is ๐‘ฆ = ๐ถ1 ๐‘ฅ + ๐ถ2 /๐‘ฅ .
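As an independent check (ours), SymPy verifies both solutions of this last example:

    import sympy as sp

    x = sp.symbols('x', positive=True)
    L = lambda y: sp.diff(y, x, 2) + sp.diff(y, x) / x - y / x**2

    print(sp.simplify(L(x)))      # 0: y1 = x solves the equation
    print(sp.simplify(L(1 / x)))  # 0: so does y2 = 1/x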
Since we have a formula for the solution to the first order linear equation, we can write a formula for ๐‘ฆ2:
๐‘ฆ2(๐‘ฅ) = ๐‘ฆ1(๐‘ฅ) ∫ ( ๐‘’^{−∫๐‘(๐‘ฅ) ๐‘‘๐‘ฅ} / (๐‘ฆ1(๐‘ฅ))² ) ๐‘‘๐‘ฅ.
However, it is much easier to remember that we just need to try ๐‘ฆ2 (๐‘ฅ) = ๐‘ฆ1 (๐‘ฅ)๐‘ฃ(๐‘ฅ) and find
๐‘ฃ(๐‘ฅ) as we did above. Also, the technique works for higher order equations too: you get to
reduce the order for each solution you find. So it is better to remember how to do it rather
than a specific formula.
We will study the solution of nonhomogeneous equations in § 2.5. We will first focus
on finding general solutions to homogeneous equations.
2.1.1 Exercises
Exercise 2.1.2: Show that ๐‘ฆ = ๐‘’ ๐‘ฅ and ๐‘ฆ = ๐‘’ 2๐‘ฅ are linearly independent.
Exercise 2.1.3: Take ๐‘ฆ 00 + 5๐‘ฆ = 10๐‘ฅ + 5. Find (guess!) a solution.
Exercise 2.1.4: Prove the superposition principle for nonhomogeneous equations. Suppose that ๐‘ฆ1
is a solution to ๐ฟ๐‘ฆ1 = ๐‘“ (๐‘ฅ) and ๐‘ฆ2 is a solution to ๐ฟ๐‘ฆ2 = ๐‘”(๐‘ฅ) (same linear operator ๐ฟ). Show that
๐‘ฆ = ๐‘ฆ1 + ๐‘ฆ2 solves ๐ฟ๐‘ฆ = ๐‘“ (๐‘ฅ) + ๐‘”(๐‘ฅ).
Exercise 2.1.5: For the equation ๐‘ฅ 2 ๐‘ฆ 00 − ๐‘ฅ๐‘ฆ 0 = 0, find two solutions, show that they are linearly
independent and find the general solution. Hint: Try ๐‘ฆ = ๐‘ฅ ๐‘Ÿ .
Equations of the form ๐‘Ž๐‘ฅ 2 ๐‘ฆ 00 + ๐‘๐‘ฅ๐‘ฆ 0 + ๐‘๐‘ฆ = 0 are called Euler’s equations or Cauchy–Euler
equations. They are solved by trying ๐‘ฆ = ๐‘ฅ ๐‘Ÿ and solving for ๐‘Ÿ (assume that ๐‘ฅ ≥ 0 for
simplicity).
Exercise 2.1.6: Suppose that (๐‘ − ๐‘Ž)2 − 4๐‘Ž๐‘ > 0.
a) Find a formula for the general solution of ๐‘Ž๐‘ฅ 2 ๐‘ฆ 00 + ๐‘๐‘ฅ๐‘ฆ 0 + ๐‘๐‘ฆ = 0. Hint: Try ๐‘ฆ = ๐‘ฅ ๐‘Ÿ and
find a formula for ๐‘Ÿ.
b) What happens when (๐‘ − ๐‘Ž)2 − 4๐‘Ž๐‘ = 0 or (๐‘ − ๐‘Ž)2 − 4๐‘Ž๐‘ < 0?
We will revisit the case when (๐‘ − ๐‘Ž)2 − 4๐‘Ž๐‘ < 0 later.
Exercise 2.1.7: Same equation as in Exercise 2.1.6. Suppose (๐‘ − ๐‘Ž)2 − 4๐‘Ž๐‘ = 0. Find a formula
for the general solution of ๐‘Ž๐‘ฅ 2 ๐‘ฆ 00 + ๐‘๐‘ฅ๐‘ฆ 0 + ๐‘๐‘ฆ = 0. Hint: Try ๐‘ฆ = ๐‘ฅ ๐‘Ÿ ln ๐‘ฅ for the second solution.
Exercise 2.1.8 (reduction of order): Suppose ๐‘ฆ1 is a solution to ๐‘ฆ 00 + ๐‘(๐‘ฅ)๐‘ฆ 0 + ๐‘ž(๐‘ฅ)๐‘ฆ = 0. By
directly plugging into the equation, show that
๐‘ฆ2 (๐‘ฅ) = ๐‘ฆ1 (๐‘ฅ)
∫
๐‘’−
∫
๐‘(๐‘ฅ) ๐‘‘๐‘ฅ
๐‘ฆ1 (๐‘ฅ)
2 ๐‘‘๐‘ฅ
is also a solution.
Exercise 2.1.9 (Chebyshev’s equation of order 1): Take (1 − ๐‘ฅ 2 )๐‘ฆ 00 − ๐‘ฅ๐‘ฆ 0 + ๐‘ฆ = 0.
a) Show that ๐‘ฆ = ๐‘ฅ is a solution.
b) Use reduction of order to find a second linearly independent solution.
c) Write down the general solution.
Exercise 2.1.10 (Hermite’s equation of order 2): Take ๐‘ฆ 00 − 2๐‘ฅ๐‘ฆ 0 + 4๐‘ฆ = 0.
a) Show that ๐‘ฆ = 1 − 2๐‘ฅ 2 is a solution.
b) Use reduction of order to find a second linearly independent solution. (It’s OK to leave a
definite integral in the formula.)
c) Write down the general solution.
Exercise 2.1.101: Are sin(๐‘ฅ) and ๐‘’ ๐‘ฅ linearly independent? Justify.
Exercise 2.1.102: Are ๐‘’ ๐‘ฅ and ๐‘’ ๐‘ฅ+2 linearly independent? Justify.
Exercise 2.1.103: Guess a solution to ๐‘ฆ 00 + ๐‘ฆ 0 + ๐‘ฆ = 5.
Exercise 2.1.104: Find the general solution to ๐‘ฅ๐‘ฆ 00 + ๐‘ฆ 0 = 0. Hint: It is a first order ODE in ๐‘ฆ 0.
Exercise 2.1.105: Write down an equation (guess) for which we have the solutions ๐‘’ ๐‘ฅ and ๐‘’ 2๐‘ฅ .
Hint: Try an equation of the form ๐‘ฆ 00 + ๐ด๐‘ฆ 0 + ๐ต๐‘ฆ = 0 for constants ๐ด and ๐ต, plug in both ๐‘’ ๐‘ฅ and
๐‘’ 2๐‘ฅ and solve for ๐ด and ๐ต.
2.2 Constant coefficient second order linear ODEs
Note: more than 1 lecture, second part of §3.1 in [EP], §3.1 in [BD]
2.2.1 Solving constant coefficient equations
Consider the problem
๐‘ฆ 00 − 6๐‘ฆ 0 + 8๐‘ฆ = 0,
๐‘ฆ(0) = −2,
๐‘ฆ 0(0) = 6.
This is a second order linear homogeneous equation with constant coefficients. Constant
coefficients means that the functions in front of ๐‘ฆ 00, ๐‘ฆ 0, and ๐‘ฆ are constants, they do not
depend on ๐‘ฅ.
To guess a solution, think of a function that stays essentially the same when we
differentiate it, so that we can take the function and its derivatives, add some multiples of
these together, and end up with zero. Yes, we are talking about the exponential.
Let us try∗ a solution of the form ๐‘ฆ = ๐‘’ ๐‘Ÿ๐‘ฅ . Then ๐‘ฆ 0 = ๐‘Ÿ๐‘’ ๐‘Ÿ๐‘ฅ and ๐‘ฆ 00 = ๐‘Ÿ 2 ๐‘’ ๐‘Ÿ๐‘ฅ . Plug in to get
๐‘ฆ 00 − 6๐‘ฆ 0 + 8๐‘ฆ = 0,
๐‘Ÿ 2 ๐‘’ ๐‘Ÿ๐‘ฅ −6 ๐‘Ÿ๐‘’ ๐‘Ÿ๐‘ฅ +8 ๐‘’ ๐‘Ÿ๐‘ฅ = 0,
|{z}
๐‘ฆ 00
|{z}
๐‘ฆ0
|{z}
๐‘ฆ
๐‘Ÿ 2 − 6๐‘Ÿ + 8 = 0
(divide through by ๐‘’ ๐‘Ÿ๐‘ฅ ),
(๐‘Ÿ − 2)(๐‘Ÿ − 4) = 0.
Hence, if ๐‘Ÿ = 2 or ๐‘Ÿ = 4, then ๐‘’ ๐‘Ÿ๐‘ฅ is a solution. So let ๐‘ฆ1 = ๐‘’ 2๐‘ฅ and ๐‘ฆ2 = ๐‘’ 4๐‘ฅ .
Exercise 2.2.1: Check that ๐‘ฆ1 and ๐‘ฆ2 are solutions.
The functions ๐‘’ 2๐‘ฅ and ๐‘’ 4๐‘ฅ are linearly independent. If they were not linearly independent,
we could write ๐‘’ 4๐‘ฅ = ๐ถ๐‘’ 2๐‘ฅ for some constant ๐ถ, implying that ๐‘’ 2๐‘ฅ = ๐ถ for all ๐‘ฅ, which is
clearly not possible. Hence, we can write the general solution as
๐‘ฆ = ๐ถ1 ๐‘’ 2๐‘ฅ + ๐ถ2 ๐‘’ 4๐‘ฅ .
We need to solve for ๐ถ1 and ๐ถ2 . To apply the initial conditions, we first find ๐‘ฆ 0 =
2๐ถ1 ๐‘’ 2๐‘ฅ + 4๐ถ2 ๐‘’ 4๐‘ฅ . We plug ๐‘ฅ = 0 into ๐‘ฆ and ๐‘ฆ 0 and solve.
−2 = ๐‘ฆ(0) = ๐ถ1 + ๐ถ2 ,
6 = ๐‘ฆ 0(0) = 2๐ถ1 + 4๐ถ2 .
∗ Making an educated guess with some parameters to solve for is such a central technique in differential equations that people sometimes use a fancy name for such a guess: ansatz, German for “initial placement of a tool at a work piece.” Yes, the Germans have a word for that.
Either apply some matrix algebra, or just solve these by high school math. For example,
divide the second equation by 2 to obtain 3 = ๐ถ1 + 2๐ถ2 , and subtract the two equations to
get 5 = ๐ถ2 . Then ๐ถ1 = −7 as −2 = ๐ถ1 + 5. Hence, the solution we are looking for is
๐‘ฆ = −7๐‘’ 2๐‘ฅ + 5๐‘’ 4๐‘ฅ .
Let us generalize this example into a method. Suppose that we have an equation
๐‘Ž๐‘ฆ 00 + ๐‘๐‘ฆ 0 + ๐‘๐‘ฆ = 0,
(2.3)
where ๐‘Ž, ๐‘, ๐‘ are constants. Try the solution ๐‘ฆ = ๐‘’ ๐‘Ÿ๐‘ฅ to obtain
๐‘Ž๐‘Ÿ 2 ๐‘’ ๐‘Ÿ๐‘ฅ + ๐‘๐‘Ÿ๐‘’ ๐‘Ÿ๐‘ฅ + ๐‘๐‘’ ๐‘Ÿ๐‘ฅ = 0.
Divide by ๐‘’ ๐‘Ÿ๐‘ฅ to obtain the so-called characteristic equation of the ODE:
๐‘Ž๐‘Ÿ 2 + ๐‘๐‘Ÿ + ๐‘ = 0.
Solve for the ๐‘Ÿ by using the quadratic formula.
√
−๐‘ ± ๐‘ 2 − 4๐‘Ž๐‘
๐‘Ÿ1 , ๐‘Ÿ2 =
.
2๐‘Ž
So ๐‘’ ๐‘Ÿ1 ๐‘ฅ and ๐‘’ ๐‘Ÿ2 ๐‘ฅ are solutions. There is still a difficulty if ๐‘Ÿ1 = ๐‘Ÿ2 , but it is not hard to
overcome.
Theorem 2.2.1. Suppose that ๐‘Ÿ1 and ๐‘Ÿ2 are the roots of the characteristic equation.
(i) If ๐‘Ÿ1 and ๐‘Ÿ2 are distinct and real (when ๐‘ 2 − 4๐‘Ž๐‘ > 0), then (2.3) has the general solution
๐‘ฆ = ๐ถ 1 ๐‘’ ๐‘Ÿ1 ๐‘ฅ + ๐ถ 2 ๐‘’ ๐‘Ÿ2 ๐‘ฅ .
(ii) If ๐‘Ÿ1 = ๐‘Ÿ2 (happens when ๐‘ 2 − 4๐‘Ž๐‘ = 0), then (2.3) has the general solution
๐‘ฆ = (๐ถ1 + ๐ถ2 ๐‘ฅ) ๐‘’ ๐‘Ÿ1 ๐‘ฅ .
Example 2.2.1: Solve
๐‘ฆ 00 − ๐‘˜ 2 ๐‘ฆ = 0.
The characteristic equation is ๐‘Ÿ 2 − ๐‘˜ 2 = 0 or (๐‘Ÿ − ๐‘˜)(๐‘Ÿ + ๐‘˜) = 0. Consequently, ๐‘’ −๐‘˜๐‘ฅ and ๐‘’ ๐‘˜๐‘ฅ
are the two linearly independent solutions, and the general solution is
๐‘ฆ = ๐ถ1 ๐‘’ ๐‘˜๐‘ฅ + ๐ถ2 ๐‘’ −๐‘˜๐‘ฅ .
Since cosh ๐‘  =
๐‘’ ๐‘  +๐‘’ −๐‘ 
2
and sinh ๐‘  =
๐‘’ ๐‘  −๐‘’ −๐‘ 
2 ,
we can also write the general solution as
๐‘ฆ = ๐ท1 cosh(๐‘˜๐‘ฅ) + ๐ท2 sinh(๐‘˜๐‘ฅ).
Example 2.2.2: Find the general solution of
๐‘ฆ 00 − 8๐‘ฆ 0 + 16๐‘ฆ = 0.
The characteristic equation is ๐‘Ÿ 2 − 8๐‘Ÿ + 16 = (๐‘Ÿ − 4)2 = 0. The equation has a double
root ๐‘Ÿ1 = ๐‘Ÿ2 = 4. The general solution is, therefore,
๐‘ฆ = (๐ถ1 + ๐ถ2 ๐‘ฅ) ๐‘’ 4๐‘ฅ = ๐ถ1 ๐‘’ 4๐‘ฅ + ๐ถ2 ๐‘ฅ๐‘’ 4๐‘ฅ .
Exercise 2.2.2: Check that ๐‘’ 4๐‘ฅ and ๐‘ฅ๐‘’ 4๐‘ฅ are linearly independent.
That ๐‘’ 4๐‘ฅ solves the equation is clear. If ๐‘ฅ๐‘’ 4๐‘ฅ solves the equation, then we know we are
done. Let us compute ๐‘ฆ 0 = ๐‘’ 4๐‘ฅ + 4๐‘ฅ๐‘’ 4๐‘ฅ and ๐‘ฆ 00 = 8๐‘’ 4๐‘ฅ + 16๐‘ฅ๐‘’ 4๐‘ฅ . Plug in
๐‘ฆ 00 − 8๐‘ฆ 0 + 16๐‘ฆ = 8๐‘’ 4๐‘ฅ + 16๐‘ฅ๐‘’ 4๐‘ฅ − 8(๐‘’ 4๐‘ฅ + 4๐‘ฅ๐‘’ 4๐‘ฅ ) + 16๐‘ฅ๐‘’ 4๐‘ฅ = 0.
In some sense, a doubled root rarely happens. If coefficients are picked randomly, a
doubled root is unlikely. There are, however, some natural phenomena (such as resonance
as we will see) where a doubled root does happen, so we cannot just dismiss this case.
Let us give a short argument for why the solution ๐‘ฅ๐‘’^{๐‘Ÿ๐‘ฅ} works when the root is doubled. This case is really a limiting case of when the two roots are distinct and very close. Note that (๐‘’^{๐‘Ÿ2๐‘ฅ} − ๐‘’^{๐‘Ÿ1๐‘ฅ})/(๐‘Ÿ2 − ๐‘Ÿ1) is a solution when the roots are distinct. When we take the limit as ๐‘Ÿ1 goes to ๐‘Ÿ2, we are really taking the derivative of ๐‘’^{๐‘Ÿ๐‘ฅ} using ๐‘Ÿ as the variable. Therefore, the limit is ๐‘ฅ๐‘’^{๐‘Ÿ๐‘ฅ}, and hence this is a solution in the doubled root case.
2.2.2 Complex numbers and Euler’s formula
A polynomial may have complex roots. The equation ๐‘Ÿ 2 + 1 = 0 has no real roots, but it
does have two complex roots. Here we review some properties of complex numbers.
Complex numbers may seem a strange concept, especially because of the terminology.
There is nothing imaginary or really complicated about complex numbers. A complex
number is simply a pair of real numbers, (๐‘Ž, ๐‘). Think of a complex number as a point in the
plane. We add complex numbers in the straightforward way: (๐‘Ž, ๐‘) + (๐‘, ๐‘‘) = (๐‘Ž + ๐‘, ๐‘ + ๐‘‘).
We define multiplication by
(๐‘Ž, ๐‘) × (๐‘, ๐‘‘) = (๐‘Ž๐‘ − ๐‘๐‘‘, ๐‘Ž๐‘‘ + ๐‘๐‘).
It turns out that with this multiplication rule, all the standard properties of arithmetic hold.
Further, and most importantly (0, 1) × (0, 1) = (−1, 0).
Generally we write (๐‘Ž, ๐‘) as ๐‘Ž + ๐‘–๐‘, and we treat ๐‘– as if it were an unknown. When ๐‘ is
zero, then (๐‘Ž, 0) is just the number ๐‘Ž. We do arithmetic with complex numbers just as we
would with polynomials. The property we just mentioned becomes ๐‘– 2 = −1. So whenever
we see ๐‘– 2 , we replace it by −1. For example,
(2 + 3๐‘–)(4๐‘–) − 5๐‘– = (2 × 4)๐‘– + (3 × 4)๐‘– 2 − 5๐‘– = 8๐‘– + 12(−1) − 5๐‘– = −12 + 3๐‘–.
The numbers ๐‘– and −๐‘– are the two roots of ๐‘Ÿ 2 + 1 = 0. Engineers often use the letter ๐‘—
instead of ๐‘– for the square root of −1. We use the mathematicians’ convention and use ๐‘–.
Exercise 2.2.3: Make sure you understand (that you can justify) the following identities:
a) ๐‘–² = −1, ๐‘–³ = −๐‘–, ๐‘–⁴ = 1,
b) 1/๐‘– = −๐‘–,
c) (3 − 7๐‘–)(−2 − 9๐‘–) = · · · = −69 − 13๐‘–,
d) (3 − 2๐‘–)(3 + 2๐‘–) = 3² − (2๐‘–)² = 3² + 2² = 13,
e) 1/(3 − 2๐‘–) = (1/(3 − 2๐‘–))((3 + 2๐‘–)/(3 + 2๐‘–)) = (3 + 2๐‘–)/13 = 3/13 + (2/13)๐‘–.
We also define the exponential ๐‘’ ๐‘Ž+๐‘–๐‘ of a complex number. We do this by writing down
the Taylor series and plugging in the complex number. Because most properties of the
exponential can be proved by looking at the Taylor series, these properties still hold for the
complex exponential. For example the very important property: ๐‘’ ๐‘ฅ+๐‘ฆ = ๐‘’ ๐‘ฅ ๐‘’ ๐‘ฆ . This means
that ๐‘’ ๐‘Ž+๐‘–๐‘ = ๐‘’ ๐‘Ž ๐‘’ ๐‘–๐‘ . Hence if we can compute ๐‘’ ๐‘–๐‘ , we can compute ๐‘’ ๐‘Ž+๐‘–๐‘ . For ๐‘’ ๐‘–๐‘ we use the
so-called Euler’s formula.
Theorem 2.2.2 (Euler’s formula).
๐‘’ ๐‘–๐œƒ = cos ๐œƒ + ๐‘– sin ๐œƒ
and
๐‘’ −๐‘–๐œƒ = cos ๐œƒ − ๐‘– sin ๐œƒ.
In other words, ๐‘’ ๐‘Ž+๐‘–๐‘ = ๐‘’ ๐‘Ž cos(๐‘) + ๐‘– sin(๐‘) = ๐‘’ ๐‘Ž cos(๐‘) + ๐‘–๐‘’ ๐‘Ž sin(๐‘).
Exercise 2.2.4: Using Euler’s formula, check the identities:
cos ๐œƒ = (๐‘’^{๐‘–๐œƒ} + ๐‘’^{−๐‘–๐œƒ})/2  and  sin ๐œƒ = (๐‘’^{๐‘–๐œƒ} − ๐‘’^{−๐‘–๐œƒ})/(2๐‘–).
Exercise 2.2.5: Double angle identities: Start with ๐‘’^{๐‘–(2๐œƒ)} = (๐‘’^{๐‘–๐œƒ})². Use Euler on each side and deduce:
cos(2๐œƒ) = cos² ๐œƒ − sin² ๐œƒ  and  sin(2๐œƒ) = 2 sin ๐œƒ cos ๐œƒ.
For a complex number ๐‘Ž + ๐‘–๐‘ we call ๐‘Ž the real part and ๐‘ the imaginary part of the
number. Often the following notation is used,
Re(๐‘Ž + ๐‘–๐‘) = ๐‘Ž
2.2.3
and
Im(๐‘Ž + ๐‘–๐‘) = ๐‘.
Complex roots
Suppose the equation ๐‘Ž๐‘ฆ 00 + ๐‘๐‘ฆ 0 + ๐‘๐‘ฆ = 0 has the characteristic equation
๐‘Ž๐‘Ÿ 2 + ๐‘๐‘Ÿ + ๐‘ = 0
√
that has complex roots. By the quadratic formula, the roots are
complex if ๐‘ 2 − 4๐‘Ž๐‘ < 0. In this case the roots are
√
−๐‘
4๐‘Ž๐‘ − ๐‘ 2
๐‘Ÿ1 , ๐‘Ÿ2 =
±๐‘–
.
2๐‘Ž
2๐‘Ž
−๐‘± ๐‘ 2 −4๐‘Ž๐‘
.
2๐‘Ž
These roots are
As you can see, we always get a pair of roots of the form ๐›ผ ± ๐‘–๐›ฝ. In this case we can still
write the solution as
๐‘ฆ = ๐ถ1 ๐‘’ (๐›ผ+๐‘–๐›ฝ)๐‘ฅ + ๐ถ2 ๐‘’ (๐›ผ−๐‘–๐›ฝ)๐‘ฅ .
However, the exponential is now complex-valued. We need to allow ๐ถ1 and ๐ถ2 to be
complex numbers to obtain a real-valued solution (which is what we are after). While
there is nothing particularly wrong with this approach, it can make calculations harder
and it is generally preferred to find two real-valued solutions.
Here we can use Euler’s formula. Let
๐‘ฆ1 = ๐‘’ (๐›ผ+๐‘–๐›ฝ)๐‘ฅ
and
๐‘ฆ2 = ๐‘’ (๐›ผ−๐‘–๐›ฝ)๐‘ฅ .
Then
๐‘ฆ1 = ๐‘’ ๐›ผ๐‘ฅ cos(๐›ฝ๐‘ฅ) + ๐‘–๐‘’ ๐›ผ๐‘ฅ sin(๐›ฝ๐‘ฅ),
๐‘ฆ2 = ๐‘’ ๐›ผ๐‘ฅ cos(๐›ฝ๐‘ฅ) − ๐‘–๐‘’ ๐›ผ๐‘ฅ sin(๐›ฝ๐‘ฅ).
Linear combinations of solutions are also solutions. Hence,
๐‘ฆ3 = (๐‘ฆ1 + ๐‘ฆ2)/2 = ๐‘’^{๐›ผ๐‘ฅ} cos(๐›ฝ๐‘ฅ),
๐‘ฆ4 = (๐‘ฆ1 − ๐‘ฆ2)/(2๐‘–) = ๐‘’^{๐›ผ๐‘ฅ} sin(๐›ฝ๐‘ฅ),
are also solutions. Furthermore, they are real-valued. It is not hard to see that they are
linearly independent (not multiples of each other). Therefore, we have the following
theorem.
Theorem 2.2.3. Take the equation
๐‘Ž๐‘ฆ 00 + ๐‘๐‘ฆ 0 + ๐‘๐‘ฆ = 0.
If the characteristic equation has the roots ๐›ผ ± ๐‘–๐›ฝ (when ๐‘ 2 − 4๐‘Ž๐‘ < 0), then the general solution is
๐‘ฆ = ๐ถ1 ๐‘’ ๐›ผ๐‘ฅ cos(๐›ฝ๐‘ฅ) + ๐ถ2 ๐‘’ ๐›ผ๐‘ฅ sin(๐›ฝ๐‘ฅ).
Example 2.2.3: Find the general solution of ๐‘ฆ 00 + ๐‘˜ 2 ๐‘ฆ = 0, for a constant ๐‘˜ > 0.
The characteristic equation is ๐‘Ÿ 2 + ๐‘˜ 2 = 0. Therefore, the roots are ๐‘Ÿ = ±๐‘–๐‘˜, and by the
theorem, we have the general solution
๐‘ฆ = ๐ถ1 cos(๐‘˜๐‘ฅ) + ๐ถ2 sin(๐‘˜๐‘ฅ).
Example 2.2.4: Find the solution of ๐‘ฆ 00 − 6๐‘ฆ 0 + 13๐‘ฆ = 0, ๐‘ฆ(0) = 0, ๐‘ฆ 0(0) = 10.
The characteristic equation is ๐‘Ÿ 2 − 6๐‘Ÿ + 13 = 0. By completing the square we get
(๐‘Ÿ − 3)2 + 22 = 0 and hence the roots are ๐‘Ÿ = 3 ± 2๐‘–. By the theorem we have the general
solution
๐‘ฆ = ๐ถ1 ๐‘’ 3๐‘ฅ cos(2๐‘ฅ) + ๐ถ2 ๐‘’ 3๐‘ฅ sin(2๐‘ฅ).
To find the solution satisfying the initial conditions, we first plug in zero to get
0 = ๐‘ฆ(0) = ๐ถ1 ๐‘’ 0 cos 0 + ๐ถ2 ๐‘’ 0 sin 0 = ๐ถ1 .
Hence, ๐ถ1 = 0 and ๐‘ฆ = ๐ถ2 ๐‘’ 3๐‘ฅ sin(2๐‘ฅ). We differentiate,
๐‘ฆ 0 = 3๐ถ2 ๐‘’ 3๐‘ฅ sin(2๐‘ฅ) + 2๐ถ2 ๐‘’ 3๐‘ฅ cos(2๐‘ฅ).
We again plug in the initial condition and obtain 10 = ๐‘ฆ 0(0) = 2๐ถ2 , or ๐ถ2 = 5. The solution
we are seeking is
๐‘ฆ = 5๐‘’ 3๐‘ฅ sin(2๐‘ฅ).
2.2.4 Exercises
Exercise 2.2.6: Find the general solution of 2๐‘ฆ 00 + 2๐‘ฆ 0 − 4๐‘ฆ = 0.
Exercise 2.2.7: Find the general solution of ๐‘ฆ 00 + 9๐‘ฆ 0 − 10๐‘ฆ = 0.
Exercise 2.2.8: Solve ๐‘ฆ 00 − 8๐‘ฆ 0 + 16๐‘ฆ = 0 for ๐‘ฆ(0) = 2, ๐‘ฆ 0(0) = 0.
Exercise 2.2.9: Solve ๐‘ฆ 00 + 9๐‘ฆ 0 = 0 for ๐‘ฆ(0) = 1, ๐‘ฆ 0(0) = 1.
Exercise 2.2.10: Find the general solution of 2๐‘ฆ 00 + 50๐‘ฆ = 0.
Exercise 2.2.11: Find the general solution of ๐‘ฆ 00 + 6๐‘ฆ 0 + 13๐‘ฆ = 0.
Exercise 2.2.12: Find the general solution of ๐‘ฆ 00 = 0 using the methods of this section.
Exercise 2.2.13: The method of this section applies to equations of other orders than two. We will
see higher orders later. Try to solve the first order equation 2๐‘ฆ 0 + 3๐‘ฆ = 0 using the methods of this
section.
Exercise 2.2.14: Let us revisit the Cauchy–Euler equations of Exercise 2.1.6 on page 82. Suppose
now that (๐‘ − ๐‘Ž)2 − 4๐‘Ž๐‘ < 0. Find a formula for the general solution of ๐‘Ž๐‘ฅ 2 ๐‘ฆ 00 + ๐‘๐‘ฅ๐‘ฆ 0 + ๐‘๐‘ฆ = 0.
Hint: Note that ๐‘ฅ ๐‘Ÿ = ๐‘’ ๐‘Ÿ ln ๐‘ฅ .
Exercise 2.2.15: Find the solution to ๐‘ฆ 00 − (2๐›ผ)๐‘ฆ 0 + ๐›ผ2 ๐‘ฆ = 0, ๐‘ฆ(0) = ๐‘Ž, ๐‘ฆ 0(0) = ๐‘, where ๐›ผ, ๐‘Ž, and
๐‘ are real numbers.
Exercise 2.2.16: Construct an equation such that ๐‘ฆ = ๐ถ1 ๐‘’ −2๐‘ฅ cos(3๐‘ฅ) + ๐ถ2 ๐‘’ −2๐‘ฅ sin(3๐‘ฅ) is the
general solution.
Exercise 2.2.101: Find the general solution to ๐‘ฆ 00 + 4๐‘ฆ 0 + 2๐‘ฆ = 0.
Exercise 2.2.102: Find the general solution to ๐‘ฆ 00 − 6๐‘ฆ 0 + 9๐‘ฆ = 0.
Exercise 2.2.103: Find the solution to 2๐‘ฆ 00 + ๐‘ฆ 0 + ๐‘ฆ = 0, ๐‘ฆ(0) = 1, ๐‘ฆ 0(0) = −2.
Exercise 2.2.104: Find the solution to 2๐‘ฆ 00 + ๐‘ฆ 0 − 3๐‘ฆ = 0, ๐‘ฆ(0) = ๐‘Ž, ๐‘ฆ 0(0) = ๐‘.
Exercise 2.2.105: Find the solution to ๐‘ง 00(๐‘ก) = −2๐‘ง 0(๐‘ก) − 2๐‘ง(๐‘ก), ๐‘ง(0) = 2, ๐‘ง 0(0) = −2.
Exercise 2.2.106: Find the solution to ๐‘ฆ 00 − (๐›ผ + ๐›ฝ)๐‘ฆ 0 + ๐›ผ๐›ฝ๐‘ฆ = 0, ๐‘ฆ(0) = ๐‘Ž, ๐‘ฆ 0(0) = ๐‘, where ๐›ผ, ๐›ฝ,
๐‘Ž, and ๐‘ are real numbers, and ๐›ผ ≠ ๐›ฝ.
Exercise 2.2.107: Construct an equation such that ๐‘ฆ = ๐ถ1 ๐‘’ 3๐‘ฅ + ๐ถ2 ๐‘’ −2๐‘ฅ is the general solution.
2.3 Higher order linear ODEs
Note: somewhat more than 1 lecture, §3.2 and §3.3 in [EP], §4.1 and §4.2 in [BD]
We briefly study higher order equations. Equations appearing in applications tend to
be second order. Higher order equations do appear from time to time, but generally the
world around us is “second order.”
The basic results about linear ODEs of higher order are essentially the same as for second
order equations, with 2 replaced by ๐‘›. The important concept of linear independence is
somewhat more complicated when more than two functions are involved. For higher order
constant coefficient ODEs, the methods developed are also somewhat harder to apply, but
we will not dwell on these complications. It is also possible to use the methods for systems
of linear equations from chapter 3 to solve higher order constant coefficient equations.
Let us start with a general homogeneous linear equation
๐‘ฆ (๐‘›) + ๐‘ ๐‘›−1 (๐‘ฅ)๐‘ฆ (๐‘›−1) + · · · + ๐‘1 (๐‘ฅ)๐‘ฆ 0 + ๐‘ 0 (๐‘ฅ)๐‘ฆ = 0.
(2.4)
Theorem 2.3.1 (Superposition). Suppose ๐‘ฆ1 , ๐‘ฆ2 , . . . , ๐‘ฆ๐‘› are solutions of the homogeneous
equation (2.4). Then
๐‘ฆ(๐‘ฅ) = ๐ถ1 ๐‘ฆ1 (๐‘ฅ) + ๐ถ2 ๐‘ฆ2 (๐‘ฅ) + · · · + ๐ถ ๐‘› ๐‘ฆ๐‘› (๐‘ฅ)
also solves (2.4) for arbitrary constants ๐ถ1 , ๐ถ2 , . . . , ๐ถ ๐‘› .
In other words, a linear combination of solutions to (2.4) is also a solution to (2.4). We
also have the existence and uniqueness theorem for nonhomogeneous linear equations.
Theorem 2.3.2 (Existence and uniqueness). Suppose ๐‘0 through ๐‘ ๐‘›−1 , and ๐‘“ are continuous
functions on some interval ๐ผ, ๐‘Ž is a number in ๐ผ, and ๐‘0 , ๐‘1 , . . . , ๐‘ ๐‘›−1 are constants. The equation
๐‘ฆ (๐‘›) + ๐‘ ๐‘›−1 (๐‘ฅ)๐‘ฆ (๐‘›−1) + · · · + ๐‘ 1 (๐‘ฅ)๐‘ฆ 0 + ๐‘0 (๐‘ฅ)๐‘ฆ = ๐‘“ (๐‘ฅ)
has exactly one solution ๐‘ฆ(๐‘ฅ) defined on the same interval ๐ผ satisfying the initial conditions
๐‘ฆ(๐‘Ž) = ๐‘ 0 ,
2.3.1
๐‘ฆ 0(๐‘Ž) = ๐‘1 ,
...,
๐‘ฆ (๐‘›−1) (๐‘Ž) = ๐‘ ๐‘›−1 .
Linear independence
When we had two functions ๐‘ฆ1 and ๐‘ฆ2 we said they were linearly independent if one was
not the multiple of the other. Same idea holds for ๐‘› functions. In this case it is easier to
state as follows. The functions ๐‘ฆ1 , ๐‘ฆ2 , . . . , ๐‘ฆ๐‘› are linearly independent if the equation
๐‘ 1 ๐‘ฆ1 + ๐‘ 2 ๐‘ฆ2 + · · · + ๐‘ ๐‘› ๐‘ฆ ๐‘› = 0
has only the trivial solution ๐‘ 1 = ๐‘2 = · · · = ๐‘ ๐‘› = 0, where the equation must hold for all ๐‘ฅ.
If we can solve the equation with some constants where, for example, ๐‘1 ≠ 0, then we can solve for ๐‘ฆ1 as a linear combination of the others. If the functions are not linearly independent, they are linearly dependent.
Example 2.3.1: Show that ๐‘’ ๐‘ฅ , ๐‘’ 2๐‘ฅ , ๐‘’ 3๐‘ฅ are linearly independent.
Let us give several ways to show this fact. Many textbooks (including [EP] and [F])
introduce Wronskians, but it is difficult to see why they work and they are not really
necessary here.
Let us write down
c_1 e^x + c_2 e^(2x) + c_3 e^(3x) = 0.
We use rules of exponentials and write z = e^x. Hence z^2 = e^(2x) and z^3 = e^(3x). Then we have
c_1 z + c_2 z^2 + c_3 z^3 = 0.
The left-hand side is a third degree polynomial in z. It is either identically zero, or it has at most 3 zeros. Therefore, it is identically zero, c_1 = c_2 = c_3 = 0, and the functions are linearly independent.
Let us try another way. As before we write
c_1 e^x + c_2 e^(2x) + c_3 e^(3x) = 0.
This equation has to hold for all x. We divide through by e^(3x) to get
c_1 e^(−2x) + c_2 e^(−x) + c_3 = 0.
As the equation is true for all x, let x → ∞. After taking the limit we see that c_3 = 0. Hence our equation becomes
c_1 e^x + c_2 e^(2x) = 0.
Rinse, repeat!
How about yet another way. We again write
c_1 e^x + c_2 e^(2x) + c_3 e^(3x) = 0.
We can evaluate the equation and its derivatives at different values of x to obtain equations for c_1, c_2, and c_3. Let us first divide by e^x for simplicity:
c_1 + c_2 e^x + c_3 e^(2x) = 0.
We set x = 0 to get the equation c_1 + c_2 + c_3 = 0. Now differentiate both sides:
c_2 e^x + 2c_3 e^(2x) = 0.
We set x = 0 to get c_2 + 2c_3 = 0. We divide by e^x again and differentiate to get 2c_3 e^x = 0. It is clear that c_3 is zero. Then c_2 must be zero as c_2 = −2c_3, and c_1 must be zero because c_1 + c_2 + c_3 = 0.
There is no one best way to do it. All of these methods are perfectly valid. The important
thing is to understand why the functions are linearly independent.
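A quick numerical sanity check, in the spirit of the third method, is to evaluate the functions at a few points and test whether the resulting linear system forces c_1 = c_2 = c_3 = 0. Here is a minimal Python sketch (not part of the original text; the sample points 0, 1, 2 are an arbitrary choice):

import numpy as np

# Evaluate e^x, e^(2x), e^(3x) at three sample points.
xs = np.array([0.0, 1.0, 2.0])
M = np.column_stack([np.exp(xs), np.exp(2*xs), np.exp(3*xs)])

# If M is nonsingular, the only solution of M c = 0 is c = 0, so no
# nontrivial combination vanishes even at just these three points.
# (A singular matrix would be inconclusive, not a proof of dependence.)
print(np.linalg.det(M))   # nonzero, consistent with independence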
Example 2.3.2: On the other hand, the functions e^x, e^(−x), and cosh x are linearly dependent. Simply apply the definition of the hyperbolic cosine:
cosh x = (e^x + e^(−x))/2,   or   2 cosh x − e^x − e^(−x) = 0.
2.3.2 Constant coefficient higher order ODEs
When we have a higher order constant coefficient homogeneous linear equation, the song and dance is exactly the same as it was for second order. We just need to find more solutions. If the equation is nth order, we need to find n linearly independent solutions. It is best seen by example.
Example 2.3.3: Find the general solution to
y''' − 3y'' − y' + 3y = 0.    (2.5)
Try: y = e^(rx). We plug in and get
r^3 e^(rx) − 3r^2 e^(rx) − r e^(rx) + 3e^(rx) = 0,
where the four terms come from y''', y'', y', and y respectively.
We divide through by e^(rx). Then
r^3 − 3r^2 − r + 3 = 0.
The trick now is to find the roots. There is a formula for the roots of degree 3 and 4 polynomials but it is very complicated. There is no formula for higher degree polynomials. That does not mean that the roots do not exist. There are always n roots for an nth degree polynomial. They may be repeated and they may be complex. Computers are pretty good at finding roots approximately for reasonable size polynomials.
A good place to start is to plot the polynomial and check where it is zero. We can also simply try plugging in. We just start plugging in numbers r = −2, −1, 0, 1, 2, ... and see if we get a hit (we can also try complex numbers). Even if we do not get a hit, we may get an indication of where the root is. For example, we plug r = −2 into our polynomial and get −15; we plug in r = 0 and get 3. That means there is a root between r = −2 and r = 0, because the sign changed. If we find one root, say r_1, then we know (r − r_1) is a factor of our polynomial. Polynomial long division can then be used.
A good strategy is to begin with r = 0, 1, or −1. These are easy to compute. Our polynomial has two such roots, r_1 = −1 and r_2 = 1. There should be 3 roots and the last root is reasonably easy to find. The constant term in a monic* polynomial such as this is the product of the negations of all the roots because r^3 − 3r^2 − r + 3 = (r − r_1)(r − r_2)(r − r_3). So
3 = (−r_1)(−r_2)(−r_3) = (1)(−1)(−r_3) = r_3.
You should check that r_3 = 3 really is a root. Hence e^(−x), e^x, and e^(3x) are solutions to (2.5). They are linearly independent, as can easily be checked, and there are 3 of them, which happens to be exactly the number we need. So the general solution is
y = C_1 e^(−x) + C_2 e^x + C_3 e^(3x).
* The word monic means that the coefficient of the top degree r^d, in our case r^3, is 1.
Suppose we were given some initial conditions y(0) = 1, y'(0) = 2, and y''(0) = 3. Then
1 = y(0) = C_1 + C_2 + C_3,
2 = y'(0) = −C_1 + C_2 + 3C_3,
3 = y''(0) = C_1 + C_2 + 9C_3.
It is possible to find the solution by high school algebra, but it would be a pain. The sensible way to solve a system of equations such as this is to use matrix algebra, see § 3.2 or appendix A. For now we note that the solution is C_1 = −1/4, C_2 = 1, and C_3 = 1/4. The specific solution to the ODE is
y = (−1/4) e^(−x) + e^x + (1/4) e^(3x).
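Both steps lend themselves to a computer. The following is a minimal Python sketch (an illustration, not part of the text): numpy's root finder recovers r = 3, −1, 1, and a matrix solve produces C_1, C_2, C_3.

import numpy as np

# Roots of the characteristic polynomial r^3 - 3r^2 - r + 3.
print(np.roots([1, -3, -1, 3]))   # approximately [ 3., -1.,  1.]

# Solve for C1, C2, C3 from y(0) = 1, y'(0) = 2, y''(0) = 3.
M = np.array([[ 1.0, 1.0, 1.0],   # y(0)   =  C1 + C2 + C3
              [-1.0, 1.0, 3.0],   # y'(0)  = -C1 + C2 + 3*C3
              [ 1.0, 1.0, 9.0]])  # y''(0) =  C1 + C2 + 9*C3
b = np.array([1.0, 2.0, 3.0])
print(np.linalg.solve(M, b))      # [-0.25, 1.0, 0.25]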
Next, suppose that we have real roots, but they are repeated. Let us say we have a root r repeated k times. In the spirit of the second order solution, and for the same reasons, we have the solutions
e^(rx), xe^(rx), x^2 e^(rx), ..., x^(k−1) e^(rx).
We take a linear combination of these solutions to find the general solution.
Example 2.3.4: Solve
y^(4) − 3y''' + 3y'' − y' = 0.
We note that the characteristic equation is
r^4 − 3r^3 + 3r^2 − r = 0.
By inspection we note that r^4 − 3r^3 + 3r^2 − r = r(r − 1)^3. Hence the roots given with multiplicity are r = 0, 1, 1, 1. Thus the general solution is
y = (C_1 + C_2 x + C_3 x^2) e^x + C_4,
where the first three terms come from the root r = 1 and the constant C_4 comes from the root r = 0.
The case of complex roots is similar to second order equations. Complex roots always come in pairs r = α ± iβ. Suppose we have two such complex roots, each repeated k times. The corresponding solution is
(C_0 + C_1 x + ··· + C_(k−1) x^(k−1)) e^(αx) cos(βx) + (D_0 + D_1 x + ··· + D_(k−1) x^(k−1)) e^(αx) sin(βx),
where C_0, ..., C_(k−1), D_0, ..., D_(k−1) are arbitrary constants.
Example 2.3.5: Solve
y^(4) − 4y''' + 8y'' − 8y' + 4y = 0.
The characteristic equation is
r^4 − 4r^3 + 8r^2 − 8r + 4 = 0,
(r^2 − 2r + 2)^2 = 0,
((r − 1)^2 + 1)^2 = 0.
Hence the roots are 1 ± i, both with multiplicity 2. Hence the general solution to the ODE is
y = (C_1 + C_2 x) e^x cos x + (C_3 + C_4 x) e^x sin x.
The way we solved the characteristic equation above is really by guessing or by inspection.
It is not so easy in general. We could also have asked a computer or an advanced calculator
for the roots.
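For instance, a minimal Python sketch (an illustration, not from the text) recovers the repeated complex pair:

import numpy as np

# Roots of r^4 - 4r^3 + 8r^2 - 8r + 4; repeated roots are found only
# approximately, so expect tiny perturbations around 1 + 1j and 1 - 1j.
print(np.roots([1, -4, 8, -8, 4]))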
2.3.3 Exercises
Exercise 2.3.1: Find the general solution for y''' − y'' + y' − y = 0.
Exercise 2.3.2: Find the general solution for y^(4) − 5y''' + 6y'' = 0.
Exercise 2.3.3: Find the general solution for y''' + 2y'' + 2y' = 0.
Exercise 2.3.4: Suppose the characteristic equation for an ODE is (r − 1)^2 (r − 2)^2 = 0.
a) Find such a differential equation.
b) Find its general solution.
Exercise 2.3.5: Suppose that a fourth order equation has a solution y = 2e^(4x) x cos x.
a) Find such an equation.
b) Find the initial conditions that the given solution satisfies.
Exercise 2.3.6: Find the general solution for the equation of Exercise 2.3.5.
Exercise 2.3.7: Let f(x) = e^x − cos x, g(x) = e^x + cos x, and h(x) = cos x. Are f(x), g(x), and h(x) linearly independent? If so, show it, if not, find a linear combination that works.
Exercise 2.3.8: Let f(x) = 0, g(x) = cos x, and h(x) = sin x. Are f(x), g(x), and h(x) linearly independent? If so, show it, if not, find a linear combination that works.
Exercise 2.3.9: Are x, x^2, and x^4 linearly independent? If so, show it, if not, find a linear combination that works.
Exercise 2.3.10: Are e^x, xe^x, and x^2 e^x linearly independent? If so, show it, if not, find a linear combination that works.
Exercise 2.3.11: Find an equation such that y = xe^(−2x) sin(3x) is a solution.
Exercise 2.3.101: Find the general solution of y^(5) − y^(4) = 0.
Exercise 2.3.102: Suppose that the characteristic equation of a third order differential equation has roots ±2i and 3.
a) What is the characteristic equation?
b) Find the corresponding differential equation.
c) Find the general solution.
Exercise 2.3.103: Solve 1001y''' + 3.2y'' + πy' − √4 y = 0, y(0) = 0, y'(0) = 0, y''(0) = 0.
Exercise 2.3.104: Are e^x, e^(x+1), e^(2x), sin(x) linearly independent? If so, show it, if not find a linear combination that works.
Exercise 2.3.105: Are sin(x), x, x sin(x) linearly independent? If so, show it, if not find a linear combination that works.
Exercise 2.3.106: Find an equation such that y = cos(x), y = sin(x), y = e^x are solutions.
2.4 Mechanical vibrations
Note: 2 lectures, §3.4 in [EP], §3.7 in [BD]
Let us look at some applications of linear second order constant coefficient equations.
2.4.1 Some examples
Our first example is a mass on a spring. Suppose we have a mass m > 0 (in kilograms) connected by a spring with spring constant k > 0 (in newtons per meter) to a fixed wall. There may be some external force F(t) (in newtons) acting on the mass. Finally, there is some friction measured by c ≥ 0 (in newton-seconds per meter) as the mass slides along the floor (or perhaps a damper is connected).
Let ๐‘ฅ be the displacement of the mass (๐‘ฅ = 0 is the rest position), with ๐‘ฅ growing to
the right (away from the wall). The force exerted by the spring is proportional to the
compression of the spring by Hooke’s law. Therefore, it is ๐‘˜๐‘ฅ in the negative direction.
Similarly the amount of force exerted by friction is proportional to the velocity of the mass.
By Newton’s second law we know that force equals mass times acceleration and hence
๐‘š๐‘ฅ 00 = ๐น(๐‘ก) − ๐‘๐‘ฅ 0 − ๐‘˜๐‘ฅ or
๐‘š๐‘ฅ 00 + ๐‘๐‘ฅ 0 + ๐‘˜๐‘ฅ = ๐น(๐‘ก).
This is a linear second order constant coefficient ODE. We say the motion is
(i) forced, if F ≢ 0 (if F is not identically zero),
(ii) unforced or free, if F ≡ 0 (if F is identically zero),
(iii) damped, if c > 0, and
(iv) undamped, if c = 0.
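To see these regimes concretely, the equation can be integrated numerically. Here is a minimal sketch using scipy; the parameter values are made up for illustration and are not from the text.

import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 1.0, 0.5, 4.0           # illustrative values
F = lambda t: 0.0                 # free motion; try np.cos(t) for forcing

# Rewrite m x'' + c x' + k x = F(t) as a first order system in (x, x').
def rhs(t, y):
    x, v = y
    return [v, (F(t) - c*v - k*x) / m]

sol = solve_ivp(rhs, (0, 20), [1.0, 0.0], t_eval=np.linspace(0, 20, 201))
print(sol.y[0][:5])               # displacement x(t) at the first few times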
This system appears in lots of applications even if it does not at first seem like it. Many
real-world scenarios can be simplified to a mass on a spring. For example, a bungee
jump setup is essentially a mass and spring system (you are the mass). It would be good
if someone did the math before you jump off the bridge, right? Let us give two other
examples.
Here is an example for electrical engineers. Consider the pictured RLC circuit. There is a resistor with a resistance of R ohms, an inductor with an inductance of L henries, and a capacitor with a capacitance of C farads. There is also an electric source (such as a battery) giving a voltage of E(t) volts at time t (measured in seconds). Let Q(t) be the charge in coulombs on the capacitor and I(t) be the current in the circuit.
The relation between the two is Q' = I. By elementary principles we find LI' + RI + Q/C = E. We differentiate to get
LI''(t) + RI'(t) + (1/C) I(t) = E'(t).
This is a nonhomogeneous second order constant coefficient linear equation. As L, R, and C are all positive, this system behaves just like the mass and spring system. Position of the mass is replaced by current. Mass is replaced by inductance, damping is replaced by resistance, and the spring constant is replaced by one over the capacitance. The change in voltage becomes the forcing function; for constant voltage this is an unforced motion.
Our next example behaves like a mass and spring system only approximately. Suppose a mass m hangs on a pendulum of length L. We seek an equation for the angle θ(t) (in radians). Let g be the force of gravity. Elementary physics mandates that the equation is
θ'' + (g/L) sin θ = 0.
Let us derive this equation using Newton's second law: force equals mass times acceleration. The acceleration is Lθ'' and mass is m. So mLθ'' has to be equal to the tangential component of the force given by the gravity, which is mg sin θ in the opposite direction. So mLθ'' = −mg sin θ. The m curiously cancels from the equation.
Now we make our approximation. For small θ we have that approximately sin θ ≈ θ. This can be seen by looking at the graph. In Figure 2.1 we can see that for approximately −0.5 < θ < 0.5 (in radians) the graphs of sin θ and θ are almost the same.
Therefore, when the swings are small, θ is small and we can model the behavior by the simpler linear equation
θ'' + (g/L) θ = 0.
The errors from this approximation build up. So after a long time, the state of the real-world system might be substantially different from our solution. Also we will see that in a mass-spring system the period is independent of the amplitude. This is not true for a pendulum. Nevertheless, for reasonably short periods of time and small swings (that is, only small angles θ), the approximation is reasonably good.

Figure 2.1: The graphs of sin θ and θ (in radians).
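The quality of the approximation is easy to quantify numerically; a small sketch (not in the original text):

import numpy as np

theta = np.linspace(-0.5, 0.5, 1001)
err = np.abs(np.sin(theta) - theta)
# Maximum error on [-0.5, 0.5] is about 0.0206, attained at the endpoints,
# roughly a 4 percent relative error at theta = 0.5.
print(err.max())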
In real-world problems it is often necessary to make these types of simplifications.
We must understand both the mathematics and the physics of the situation to see if the
simplification is valid in the context of the questions we are trying to answer.
2.4.2 Free undamped motion
In this section we only consider free or unforced motion, as we do not know yet how to solve nonhomogeneous equations. Let us start with undamped motion where c = 0. The equation is
mx'' + kx = 0.
We divide by m and let ω_0 = √(k/m) to rewrite the equation as
x'' + ω_0^2 x = 0.
The general solution to this equation is
x(t) = A cos(ω_0 t) + B sin(ω_0 t).
By a trigonometric identity
A cos(ω_0 t) + B sin(ω_0 t) = C cos(ω_0 t − γ),
for two different constants C and γ. It is not hard to compute that C = √(A^2 + B^2) and tan γ = B/A. Therefore, we let C and γ be our arbitrary constants and write x(t) = C cos(ω_0 t − γ).
Exercise 2.4.1: Justify the identity A cos(ω_0 t) + B sin(ω_0 t) = C cos(ω_0 t − γ) and verify the equations for C and γ. Hint: Start with cos(α − β) = cos(α) cos(β) + sin(α) sin(β) and multiply by C. Then what should α and β be?
While it is generally easier to use the first form with A and B to solve for the initial conditions, the second form is much more natural. The constants C and γ have a nice physical interpretation. Write the solution as
x(t) = C cos(ω_0 t − γ).
This is a pure-frequency oscillation (a sine wave). The amplitude is C, ω_0 is the (angular) frequency, and γ is the so-called phase shift. The phase shift just shifts the graph left or right. We call ω_0 the natural (angular) frequency. This entire setup is called simple harmonic motion.
Let us pause to explain the word angular before the word frequency. The units of ω_0 are radians per unit time, not cycles per unit time as is the usual measure of frequency. Because one cycle is 2π radians, the usual frequency is given by ω_0/(2π). It is simply a matter of where we put the constant 2π, and that is a matter of taste.
The period of the motion is one over the frequency (in cycles per unit time) and hence 2π/ω_0. That is the amount of time it takes to complete one full cycle.
Example 2.4.1: Suppose that m = 2 kg and k = 8 N/m. The whole mass and spring setup is sitting on a truck that was traveling at 1 m/s. The truck crashes and hence stops. The mass was held in place 0.5 meters forward from the rest position. During the crash the mass gets loose. That is, the mass is now moving forward at 1 m/s, while the other end of the spring is held in place. The mass therefore starts oscillating. What is the frequency of the resulting oscillation? What is the amplitude? The units are the mks units (meters-kilograms-seconds).
The setup means that the mass was at half a meter in the positive direction during the crash and relative to the wall the spring is mounted to, the mass was moving forward (in the positive direction) at 1 m/s. This gives us the initial conditions.
So the equation with initial conditions is
2x'' + 8x = 0,   x(0) = 0.5,   x'(0) = 1.
We directly compute ω_0 = √(k/m) = √4 = 2. Hence the angular frequency is 2. The usual frequency in Hertz (cycles per second) is 2/(2π) = 1/π ≈ 0.318.
The general solution is
x(t) = A cos(2t) + B sin(2t).
Letting x(0) = 0.5 means A = 0.5. Then x'(t) = −2(0.5) sin(2t) + 2B cos(2t). Letting x'(0) = 1 we get B = 0.5. Therefore, the amplitude is C = √(A^2 + B^2) = √(0.25 + 0.25) = √0.5 ≈ 0.707. The solution is
x(t) = 0.5 cos(2t) + 0.5 sin(2t).
A plot of ๐‘ฅ(๐‘ก) is shown in Figure 2.2.
In general, for free undamped motion, a
solution of the form
๐‘ฅ(๐‘ก) = ๐ด cos(๐œ”0 ๐‘ก) + ๐ต sin(๐œ”0 ๐‘ก),
1.0
0.0
2.5
5.0
7.5
10.0
1.0
0.5
0.5
corresponds to the initial conditions ๐‘ฅ(0) =
๐ด and ๐‘ฅ 0(0) = ๐œ”0 ๐ต. Therefore, it is easy to
figure out ๐ด and ๐ต from the initial conditions. The amplitude and the phase shift
can then be computed from ๐ด and ๐ต. In
the example, we have already found the amplitude ๐ถ. Let us compute the phase shift.
We know that tan ๐›พ = ๐ต/๐ด = 1. We take the
Figure 2.2: Simple undamped oscillation.
arctangent of 1 and get ๐œ‹/4 or approximately
0.785. We still need to check if this ๐›พ is in
the correct quadrant (and add ๐œ‹ to ๐›พ if it is not). Since both ๐ด and ๐ต are positive, then ๐›พ
should be in the first quadrant, ๐œ‹/4 radians is in the first quadrant, so ๐›พ = ๐œ‹/4.
Note: Many calculators and computer software have not only the atan function for arctangent, but also what is sometimes called atan2. This function takes two arguments, B and A, and returns a γ in the correct quadrant for you.
2.4.3 Free damped motion
Let us now focus on damped motion. Let us rewrite the equation
mx'' + cx' + kx = 0
as
x'' + 2px' + ω_0^2 x = 0,
where
ω_0 = √(k/m),   p = c/(2m).
The characteristic equation is
r^2 + 2pr + ω_0^2 = 0.
Using the quadratic formula we get that the roots are
r = −p ± √(p^2 − ω_0^2).
The form of the solution depends on whether we get complex or real roots. We get real roots if and only if the following number is nonnegative:
p^2 − ω_0^2 = (c/(2m))^2 − k/m = (c^2 − 4km)/(4m^2).
The sign of p^2 − ω_0^2 is the same as the sign of c^2 − 4km. Thus we get real roots if and only if c^2 − 4km is nonnegative, or in other words if c^2 ≥ 4km.
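In code the classification is a one-line comparison of c^2 against 4km; a small sketch (the values are illustrative):

import math

def classify(m, c, k):
    d = c*c - 4*k*m
    if d > 0:
        return "overdamped"
    if d == 0:
        return "critically damped"
    return "underdamped"

print(classify(m=1.0, c=0.5, k=4.0))   # underdamped, since 0.25 < 16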
Overdamping

When c^2 − 4km > 0, the system is overdamped. In this case, there are two distinct real roots r_1 and r_2. Both roots are negative: As √(p^2 − ω_0^2) is always less than p, then −p ± √(p^2 − ω_0^2) is negative in either case.
The solution is
x(t) = C_1 e^(r_1 t) + C_2 e^(r_2 t).
Since r_1, r_2 are negative, x(t) → 0 as t → ∞. Thus the mass will tend towards the rest position as time goes to infinity. For a few sample plots for different initial conditions, see Figure 2.3.

Figure 2.3: Overdamped motion for several different initial conditions.

No oscillation happens. In fact, the graph crosses the x-axis at most once. To see why, we try to solve 0 = C_1 e^(r_1 t) + C_2 e^(r_2 t). Therefore, C_1 e^(r_1 t) = −C_2 e^(r_2 t) and using laws of exponents we obtain
−C_1/C_2 = e^((r_2 − r_1) t).
This equation has at most one solution t ≥ 0. For some initial conditions the graph never crosses the x-axis, as is evident from the sample graphs.
Example 2.4.2: Suppose the mass is released from rest. That is x(0) = x_0 and x'(0) = 0. Then
x(t) = (x_0/(r_1 − r_2)) (r_1 e^(r_2 t) − r_2 e^(r_1 t)).
It is not hard to see that this satisfies the initial conditions.
Critical damping

When c^2 − 4km = 0, the system is critically damped. In this case, there is one root of multiplicity 2 and this root is −p. Our solution is
x(t) = C_1 e^(−pt) + C_2 t e^(−pt).
The behavior of a critically damped system is very similar to an overdamped system. After all a critically damped system is in some sense a limit of overdamped systems. Since these equations are really only an approximation to the real world, in reality we are never critically damped, it is a place we can only reach in theory. We are always a little bit underdamped or a little bit overdamped. It is better not to dwell on critical damping.
Underdamping

When c^2 − 4km < 0, the system is underdamped. In this case, the roots are complex:
r = −p ± √(p^2 − ω_0^2) = −p ± √(−1) √(ω_0^2 − p^2) = −p ± iω_1,
where ω_1 = √(ω_0^2 − p^2). Our solution is
x(t) = e^(−pt) (A cos(ω_1 t) + B sin(ω_1 t)),
or
x(t) = C e^(−pt) cos(ω_1 t − γ).

Figure 2.4: Underdamped motion with the envelope curves shown.

An example plot is given in Figure 2.4. Note that we still have that x(t) → 0 as t → ∞. The figure also shows the envelope curves C e^(−pt) and −C e^(−pt). The solution is the oscillating line between the two envelope curves. The envelope curves give the maximum amplitude of the oscillation at any given point in time. For example, if you are bungee jumping, you are really interested in computing the envelope curve so as not to hit the concrete with your head.
The phase shift ๐›พ shifts the oscillation left or right, but within the envelope curves (the
envelope curves do not change if ๐›พ changes).
Notice that the angular pseudo-frequency∗ becomes smaller when the damping ๐‘ (and
hence ๐‘) becomes larger. This makes sense. When we change the damping just a little bit,
we do not expect the behavior of the solution to change dramatically. If we keep making ๐‘
larger, then at some point the solution should start looking like the solution for critical
damping or overdamping, where no oscillation happens. So if ๐‘ 2 approaches 4๐‘˜๐‘š, we
want ๐œ”1 to approach 0.
On the other hand, when ๐‘ gets smaller, ๐œ”1 approaches ๐œ”0 (๐œ”1 is always smaller than
๐œ”0 ), and the solution looks more and more like the steady periodic motion of the undamped
case. The envelope curves become flatter and flatter as ๐‘ (and hence ๐‘) goes to 0.
2.4.4 Exercises
Exercise 2.4.2: Consider a mass and spring system with a mass m = 2, spring constant k = 3, and damping constant c = 1.
a) Set up and find the general solution of the system.
b) Is the system underdamped, overdamped or critically damped?
c) If the system is not critically damped, find a c that makes the system critically damped.
Exercise 2.4.3: Do Exercise 2.4.2 for m = 3, k = 12, and c = 12.
Exercise 2.4.4: Using the mks units (meters-kilograms-seconds), suppose you have a spring with spring constant 4 N/m. You want to use it to weigh items. Assume no friction. You place the mass on the spring and put it in motion.
a) You count and find that the frequency is 0.8 Hz (cycles per second). What is the mass?
b) Find a formula for the mass m given the frequency ω in Hz.
Exercise 2.4.5: Suppose we add possible friction to Exercise 2.4.4. Further, suppose you do not know the spring constant, but you have two reference weights 1 kg and 2 kg to calibrate your setup. You put each in motion on your spring and measure the frequency. For the 1 kg weight you measured 1.1 Hz, for the 2 kg weight you measured 0.8 Hz.
a) Find k (spring constant) and c (damping constant).
b) Find a formula for the mass in terms of the frequency in Hz. Note that there may be more than one possible mass for a given frequency.
c) For an unknown object you measured 0.2 Hz, what is the mass of the object? Suppose that you know that the mass of the unknown object is more than a kilogram.
* We do not call ω_1 a frequency since the solution is not really a periodic function.
Exercise 2.4.6: Suppose you wish to measure the friction a mass of 0.1 kg experiences as it slides along a floor (you wish to find c). You have a spring with spring constant k = 5 N/m. You take the spring, you attach it to the mass and fix it to a wall. Then you pull on the spring and let the mass go. You find that the mass oscillates with frequency 1 Hz. What is the friction?
Exercise 2.4.101: A mass of 2 kilograms is on a spring with spring constant k newtons per meter with no damping. Suppose the system is at rest and at time t = 0 the mass is kicked and starts traveling at 2 meters per second. How large does k have to be so that the mass does not go further than 3 meters from the rest position?
Exercise 2.4.102: Suppose we have an RLC circuit with a resistor of 100 milliohms (0.1 ohms), inductor of inductance of 50 millihenries (0.05 henries), and a capacitor of 5 farads, with constant voltage.
a) Set up the ODE equation for the current I.
b) Find the general solution.
c) Solve for I(0) = 10 and I'(0) = 0.
Exercise 2.4.103: A 5000 kg railcar hits a bumper (a spring) at 1 m/s, and the spring compresses by 0.1 m. Assume no damping.
a) Find k.
b) How far does the spring compress when a 10000 kg railcar hits the spring at the same speed?
c) If the spring would break if it compresses further than 0.3 m, what is the maximum mass of a railcar that can hit it at 1 m/s?
d) What is the maximum mass of a railcar that can hit the spring without breaking at 2 m/s?
Exercise 2.4.104: A mass of m kg is on a spring with k = 3 N/m and c = 2 Ns/m. Find the mass m_0 for which there is critical damping. If m < m_0, does the system oscillate or not, that is, is it underdamped or overdamped?
2.5 Nonhomogeneous equations
Note: 2 lectures, §3.5 in [EP], §3.5 and §3.6 in [BD]
2.5.1 Solving nonhomogeneous equations
We have solved linear constant coefficient homogeneous equations. What about nonhomogeneous linear ODEs? For example, the equations for forced mechanical vibrations. That is, suppose we have an equation such as
y'' + 5y' + 6y = 2x + 1.    (2.6)
We will write Ly = 2x + 1 when the exact form of the operator is not important. We solve (2.6) in the following manner. First, we find the general solution y_c to the associated homogeneous equation
y'' + 5y' + 6y = 0.    (2.7)
We call ๐‘ฆ ๐‘ the complementary solution. Next, we find a single particular solution ๐‘ฆ ๐‘ to (2.6) in
some way. Then
๐‘ฆ = ๐‘ฆ๐‘ + ๐‘ฆ๐‘
is the general solution to (2.6). We have ๐ฟ๐‘ฆ ๐‘ = 0 and ๐ฟ๐‘ฆ ๐‘ = 2๐‘ฅ + 1. As ๐ฟ is a linear operator
we verify that ๐‘ฆ is a solution, ๐ฟ๐‘ฆ = ๐ฟ(๐‘ฆ ๐‘ + ๐‘ฆ ๐‘ ) = ๐ฟ๐‘ฆ ๐‘ + ๐ฟ๐‘ฆ ๐‘ = 0 + (2๐‘ฅ + 1). Let us see why
we obtain the general solution.
Let ๐‘ฆ ๐‘ and ๐‘ฆ˜ ๐‘ be two different particular solutions to (2.6). Write the difference as
๐‘ค = ๐‘ฆ ๐‘ − ๐‘ฆ˜ ๐‘ . Then plug ๐‘ค into the left-hand side of the equation to get
๐‘ค 00 + 5๐‘ค 0 + 6๐‘ค = (๐‘ฆ ๐‘00 + 5๐‘ฆ ๐‘0 + 6๐‘ฆ ๐‘ ) − ( ๐‘ฆ˜ ๐‘00 + 5 ๐‘ฆ˜ ๐‘0 + 6 ๐‘ฆ˜ ๐‘ ) = (2๐‘ฅ + 1) − (2๐‘ฅ + 1) = 0.
Using the operator notation the calculation becomes simpler. As ๐ฟ is a linear operator we
write
๐ฟ๐‘ค = ๐ฟ(๐‘ฆ ๐‘ − ๐‘ฆ˜ ๐‘ ) = ๐ฟ๐‘ฆ ๐‘ − ๐ฟ ๐‘ฆ˜ ๐‘ = (2๐‘ฅ + 1) − (2๐‘ฅ + 1) = 0.
So ๐‘ค = ๐‘ฆ ๐‘ − ๐‘ฆ˜ ๐‘ is a solution to (2.7), that is ๐ฟ๐‘ค = 0. Any two solutions of (2.6) differ by a
solution to the homogeneous equation (2.7). The solution ๐‘ฆ = ๐‘ฆ ๐‘ + ๐‘ฆ ๐‘ includes all solutions
to (2.6), since ๐‘ฆ ๐‘ is the general solution to the associated homogeneous equation.
Theorem 2.5.1. Let Ly = f(x) be a linear ODE (not necessarily constant coefficient). Let y_c be the complementary solution (the general solution to the associated homogeneous equation Ly = 0) and let y_p be any particular solution to Ly = f(x). Then the general solution to Ly = f(x) is
y = y_c + y_p.
The moral of the story is that we can find the particular solution in any old way. If we
find a different particular solution (by a different method, or simply by guessing), then we
still get the same general solution. The formula may look different, and the constants we
have to choose to satisfy the initial conditions may be different, but it is the same solution.
2.5.2 Undetermined coefficients
The trick is to somehow, in a smart way, guess one particular solution to (2.6). Note that 2x + 1 is a polynomial, and the left-hand side of the equation will be a polynomial if we let y be a polynomial of the same degree. Let us try
y_p = Ax + B.
We plug y_p into the left hand side to obtain
y_p'' + 5y_p' + 6y_p = (Ax + B)'' + 5(Ax + B)' + 6(Ax + B) = 0 + 5A + 6Ax + 6B = 6Ax + (5A + 6B).
So 6Ax + (5A + 6B) = 2x + 1. Therefore, A = 1/3 and B = −1/9. That means y_p = (1/3)x − 1/9 = (3x − 1)/9. Solving the complementary problem (exercise!) we get
y_c = C_1 e^(−2x) + C_2 e^(−3x).
Hence the general solution to (2.6) is
y = C_1 e^(−2x) + C_2 e^(−3x) + (3x − 1)/9.
Now suppose we are further given some initial conditions. For example, y(0) = 0 and y'(0) = 1/3. First find y' = −2C_1 e^(−2x) − 3C_2 e^(−3x) + 1/3. Then
0 = y(0) = C_1 + C_2 − 1/9,
1/3 = y'(0) = −2C_1 − 3C_2 + 1/3.
We solve to get C_1 = 1/3 and C_2 = −2/9. The particular solution we want is
y(x) = (1/3) e^(−2x) − (2/9) e^(−3x) + (3x − 1)/9 = (3e^(−2x) − 2e^(−3x) + 3x − 1)/9.
Exercise 2.5.1: Check that y really solves the equation (2.6) and the given initial conditions.
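Such a check is also easy to automate with a computer algebra system; a sketch using sympy (an illustration, not part of the exercise):

import sympy as sp

x = sp.symbols('x')
y = (3*sp.exp(-2*x) - 2*sp.exp(-3*x) + 3*x - 1) / 9

# Plug y into y'' + 5y' + 6y and compare with the right-hand side 2x + 1.
residual = sp.diff(y, x, 2) + 5*sp.diff(y, x) + 6*y - (2*x + 1)
print(sp.simplify(residual))                    # 0
print(y.subs(x, 0), sp.diff(y, x).subs(x, 0))   # 0 and 1/3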
Note: A common mistake is to solve for constants using the initial conditions with y_c and only add the particular solution y_p after that. That will not work. You need to first compute y = y_c + y_p and only then solve for the constants using the initial conditions.
A right-hand side consisting of exponentials, sines, and cosines can be handled similarly. For example,
y'' + 2y' + 2y = cos(2x).
Let us find some y_p. We start by guessing the solution includes some multiple of cos(2x). We may have to also add a multiple of sin(2x) to our guess since derivatives of cosine are sines. We try
y_p = A cos(2x) + B sin(2x).
We plug ๐‘ฆ ๐‘ into the equation and we get
−4๐ด cos(2๐‘ฅ) − 4๐ต sin(2๐‘ฅ) +2 −2๐ด sin(2๐‘ฅ) + 2๐ต cos(2๐‘ฅ)
|
{z
๐‘ฆ ๐‘00
}
|
{z
}
๐‘ฆ ๐‘0
+ 2 ๐ด cos(2๐‘ฅ) + 2๐ต sin(2๐‘ฅ) = cos(2๐‘ฅ),
|
{z
๐‘ฆ๐‘
}
or
(−4๐ด + 4๐ต + 2๐ด) cos(2๐‘ฅ) + (−4๐ต − 4๐ด + 2๐ต) sin(2๐‘ฅ) = cos(2๐‘ฅ).
The left-hand side must equal to right-hand side. Namely, −4๐ด + 4๐ต + 2๐ด = 1 and
−4๐ต − 4๐ด + 2๐ต = 0. So −2๐ด + 4๐ต = 1 and 2๐ด + ๐ต = 0 and hence ๐ด = −1/10 and ๐ต = 1/5. So
๐‘ฆ ๐‘ = ๐ด cos(2๐‘ฅ) + ๐ต sin(2๐‘ฅ) =
− cos(2๐‘ฅ) + 2 sin(2๐‘ฅ)
.
10
Similarly, if the right-hand side contains exponentials we try exponentials. If
Ly = e^(3x),
we try y = Ae^(3x) as our guess and try to solve for A.
When the right-hand side is a multiple of sines, cosines, exponentials, and polynomials, we can use the product rule for differentiation to come up with a guess. We need to guess a form for y_p such that Ly_p is of the same form, and has all the terms needed to match the right-hand side. For example,
Ly = (1 + 3x^2) e^(−x) cos(πx).
For this equation, we guess
y_p = (A + Bx + Cx^2) e^(−x) cos(πx) + (D + Ex + Fx^2) e^(−x) sin(πx).
We plug in and then hopefully get equations that we can solve for A, B, C, D, E, and F. As you can see this can make for a very long and tedious calculation very quickly. C'est la vie!
There is one hiccup in all this. It could be that our guess actually solves the associated homogeneous equation. That is, suppose we have
y'' − 9y = e^(3x).
We would love to guess y = Ae^(3x), but if we plug this into the left-hand side of the equation we get
y'' − 9y = 9Ae^(3x) − 9Ae^(3x) = 0 ≠ e^(3x).
There is no way we can choose A to make the left-hand side be e^(3x). The trick in this case is to multiply our guess by x to get rid of duplication with the complementary solution. That is, first we compute y_c (solution to Ly = 0)
y_c = C_1 e^(−3x) + C_2 e^(3x),
and we note that the e^(3x) term is a duplicate with our desired guess. We modify our guess to y = Axe^(3x) so that there is no duplication anymore. Let us try: y' = Ae^(3x) + 3Axe^(3x) and y'' = 6Ae^(3x) + 9Axe^(3x), so
y'' − 9y = 6Ae^(3x) + 9Axe^(3x) − 9Axe^(3x) = 6Ae^(3x).
Thus 6Ae^(3x) is supposed to equal e^(3x). Hence, 6A = 1 and so A = 1/6. We can now write the general solution as
y = y_c + y_p = C_1 e^(−3x) + C_2 e^(3x) + (1/6) xe^(3x).
It is possible that multiplying by x does not get rid of all duplication. For example,
y'' − 6y' + 9y = e^(3x).
The complementary solution is y_c = C_1 e^(3x) + C_2 xe^(3x). Guessing y = Axe^(3x) would not get us anywhere. In this case we want to guess y_p = Ax^2 e^(3x). Basically, we want to multiply our guess by x until all duplication is gone. But no more! Multiplying too many times will not work.
Finally, what if the right-hand side has several terms, such as
Ly = e^(2x) + cos x.
In this case we find u that solves Lu = e^(2x) and v that solves Lv = cos x (that is, do each term separately). Then note that if y = u + v, then Ly = e^(2x) + cos x. This is because L is linear; we have Ly = L(u + v) = Lu + Lv = e^(2x) + cos x.
2.5.3 Variation of parameters
The method of undetermined coefficients works for many basic problems that crop up. But it does not work all the time. It only works when the right-hand side of the equation Ly = f(x) has finitely many linearly independent derivatives, so that we can write a guess that consists of them all. Some equations are a bit tougher. Consider
y'' + y = tan x.
Each new derivative of tan x looks completely different and cannot be written as a linear combination of the previous derivatives. If we start differentiating tan x, we get:
sec^2 x,   2 sec^2 x tan x,   4 sec^2 x tan^2 x + 2 sec^4 x,   8 sec^2 x tan^3 x + 16 sec^4 x tan x,   16 sec^2 x tan^4 x + 88 sec^4 x tan^2 x + 16 sec^6 x,   ...
This equation calls for a different method. We present the method of variation of parameters, which handles any equation of the form Ly = f(x), provided we can solve certain integrals. For simplicity, we restrict ourselves to second order constant coefficient equations, but the method works for higher order equations just as well (the computations become more tedious). The method also works for equations with nonconstant coefficients, provided we can solve the associated homogeneous equation.
Perhaps it is best to explain this method by example. Let us try to solve the equation
Ly = y'' + y = tan x.
First we find the complementary solution (solution to Ly_c = 0). We get y_c = C_1 y_1 + C_2 y_2, where y_1 = cos x and y_2 = sin x. To find a particular solution to the nonhomogeneous equation we try
y_p = y = u_1 y_1 + u_2 y_2,
where u_1 and u_2 are functions and not constants. We are trying to satisfy Ly = tan x. That gives us one condition on the functions u_1 and u_2. Compute (note the product rule!)
y' = (u_1' y_1 + u_2' y_2) + (u_1 y_1' + u_2 y_2').
We can still impose one more condition at our discretion to simplify computations (we have two unknown functions, so we should be allowed two conditions). We require that (u_1' y_1 + u_2' y_2) = 0. This makes computing the second derivative easier:
y' = u_1 y_1' + u_2 y_2',
y'' = (u_1' y_1' + u_2' y_2') + (u_1 y_1'' + u_2 y_2'').
Since ๐‘ฆ1 and ๐‘ฆ2 are solutions to ๐‘ฆ 00 + ๐‘ฆ = 0, we find ๐‘ฆ100 = −๐‘ฆ1 and ๐‘ฆ200 = −๐‘ฆ2 . (If the equation
was a more general ๐‘ฆ 00 + ๐‘(๐‘ฅ)๐‘ฆ 0 + ๐‘ž(๐‘ฅ)๐‘ฆ = 0, we would have ๐‘ฆ ๐‘–00 = −๐‘(๐‘ฅ)๐‘ฆ ๐‘–0 − ๐‘ž(๐‘ฅ)๐‘ฆ ๐‘– .) So
๐‘ฆ 00 = (๐‘ข10 ๐‘ฆ10 + ๐‘ข20 ๐‘ฆ20 ) − (๐‘ข1 ๐‘ฆ1 + ๐‘ข2 ๐‘ฆ2 ).
We have (๐‘ข1 ๐‘ฆ1 + ๐‘ข2 ๐‘ฆ2 ) = ๐‘ฆ and so
๐‘ฆ 00 = (๐‘ข10 ๐‘ฆ10 + ๐‘ข20 ๐‘ฆ20 ) − ๐‘ฆ,
and hence
๐‘ฆ 00 + ๐‘ฆ = ๐ฟ๐‘ฆ = ๐‘ข10 ๐‘ฆ10 + ๐‘ข20 ๐‘ฆ20 .
For ๐‘ฆ to satisfy ๐ฟ๐‘ฆ = ๐‘“ (๐‘ฅ) we must have ๐‘“ (๐‘ฅ) = ๐‘ข10 ๐‘ฆ10 + ๐‘ข20 ๐‘ฆ20 .
What we need to solve are the two equations (conditions) we imposed on ๐‘ข1 and ๐‘ข2 :
๐‘ข10 ๐‘ฆ1 + ๐‘ข20 ๐‘ฆ2 = 0,
๐‘ข10 ๐‘ฆ10 + ๐‘ข20 ๐‘ฆ20 = ๐‘“ (๐‘ฅ).
We solve for u_1' and u_2' in terms of f(x), y_1 and y_2. We always get these formulas for any Ly = f(x), where Ly = y'' + p(x)y' + q(x)y. There is a general formula for the solution we could just plug into, but instead of memorizing that, it is better, and easier, to just repeat what we do below. In our case the two equations are
u_1' cos(x) + u_2' sin(x) = 0,
−u_1' sin(x) + u_2' cos(x) = tan(x).
Hence
u_1' cos(x) sin(x) + u_2' sin^2(x) = 0,
−u_1' sin(x) cos(x) + u_2' cos^2(x) = tan(x) cos(x) = sin(x).
And thus
u_2' (sin^2(x) + cos^2(x)) = sin(x),
u_2' = sin(x),
u_1' = −sin^2(x)/cos(x) = −tan(x) sin(x).
We integrate u_1' and u_2' to get u_1 and u_2:
u_1 = ∫ u_1' dx = ∫ −tan(x) sin(x) dx = (1/2) ln((sin(x) − 1)/(sin(x) + 1)) + sin(x),
u_2 = ∫ u_2' dx = ∫ sin(x) dx = −cos(x).
So our particular solution is
y_p = u_1 y_1 + u_2 y_2 = (1/2) cos(x) ln((sin(x) − 1)/(sin(x) + 1)) + cos(x) sin(x) − cos(x) sin(x)
= (1/2) cos(x) ln((sin(x) − 1)/(sin(x) + 1)).
The general solution to y'' + y = tan x is, therefore,
y = C_1 cos(x) + C_2 sin(x) + (1/2) cos(x) ln((sin(x) − 1)/(sin(x) + 1)).
2.5.4 Exercises
Exercise 2.5.2: Find a particular solution of y'' − y' − 6y = e^(2x).
Exercise 2.5.3: Find a particular solution of y'' − 4y' + 4y = e^(2x).
Exercise 2.5.4: Solve the initial value problem y'' + 9y = cos(3x) + sin(3x) for y(0) = 2, y'(0) = 1.
Exercise 2.5.5: Set up the form of the particular solution but do not solve for the coefficients for y^(4) − 2y''' + y'' = e^x.
Exercise 2.5.6: Set up the form of the particular solution but do not solve for the coefficients for y^(4) − 2y''' + y'' = e^x + x + sin x.
Exercise 2.5.7:
a) Using variation of parameters find a particular solution of y'' − 2y' + y = e^x.
b) Find a particular solution using undetermined coefficients.
c) Are the two solutions you found the same? See also Exercise 2.5.10.
Exercise 2.5.8: Find a particular solution of y'' − 2y' + y = sin(x^2). It is OK to leave the answer as a definite integral.
Exercise 2.5.9: For an arbitrary constant c find a particular solution to y'' − y = e^(cx). Hint: Make sure to handle every possible real c.
Exercise 2.5.10:
a) Using variation of parameters find a particular solution of y'' − y = e^x.
b) Find a particular solution using undetermined coefficients.
c) Are the two solutions you found the same? What is going on?
Exercise 2.5.11: Find a polynomial P(x), so that y = 2x^2 + 3x + 4 solves y'' + 5y' + y = P(x).
Exercise 2.5.101: Find a particular solution to y'' − y' + y = 2 sin(3x).
Exercise 2.5.102:
a) Find a particular solution to y'' + 2y = e^x + x^3.
b) Find the general solution.
Exercise 2.5.103: Solve y'' + 2y' + y = x^2, y(0) = 1, y'(0) = 2.
Exercise 2.5.104: Use variation of parameters to find a particular solution of y'' − y = 1/(e^x + e^(−x)).
Exercise 2.5.105: For an arbitrary constant c find the general solution to y'' − 2y = sin(x + c).
Exercise 2.5.106: Undetermined coefficients can sometimes be used to guess a particular solution to other equations than constant coefficients. Find a polynomial y(x) that solves y' + xy = x^3 + 2x^2 + 5x + 2.
Note: Not every right hand side will allow a polynomial solution, for example, y' + xy = 1 does not, but a technique based on undetermined coefficients does work, see chapter 7.
2.6 Forced oscillations and resonance
Note: 2 lectures, §3.6 in [EP], §3.8 in [BD]
๐‘˜
Let us return back to the example of a mass on a spring.
We examine the case of forced oscillations, which we did not
yet handle. That is, we consider the equation
๐น(๐‘ก)
๐‘š
damping ๐‘
๐‘š๐‘ฅ 00 + ๐‘๐‘ฅ 0 + ๐‘˜๐‘ฅ = ๐น(๐‘ก),
for some nonzero ๐น(๐‘ก). The setup is again: ๐‘š is mass, ๐‘ is friction, ๐‘˜ is the spring constant,
and ๐น(๐‘ก) is an external force acting on the mass.
We are interested in periodic forcing, such as noncentered rotating parts, or perhaps loud
sounds, or other sources of periodic force. Once we learn about Fourier series in chapter 4,
we will see that we cover all periodic functions by simply considering ๐น(๐‘ก) = ๐น0 cos(๐œ”๐‘ก) (or
sine instead of cosine, the calculations are essentially the same).
2.6.1 Undamped forced motion and resonance
First let us consider undamped (c = 0) motion. We have the equation
mx'' + kx = F_0 cos(ωt).
This equation has the complementary solution (solution to the associated homogeneous equation)
x_c = C_1 cos(ω_0 t) + C_2 sin(ω_0 t),
where ω_0 = √(k/m) is the natural frequency (angular). It is the frequency at which the system "wants to oscillate" without external interference.
Suppose that ω_0 ≠ ω. We try the solution x_p = A cos(ωt) and solve for A. We do not need a sine in our trial solution as after plugging in we only have cosines. If you include a sine, it is fine; you will find that its coefficient is zero (I could not find a second rhyme).
We solve using the method of undetermined coefficients. We find that
x_p = (F_0 / (m(ω_0^2 − ω^2))) cos(ωt).
We leave it as an exercise to do the algebra required.
The general solution is
x = C_1 cos(ω_0 t) + C_2 sin(ω_0 t) + (F_0 / (m(ω_0^2 − ω^2))) cos(ωt).
Written another way
x = C cos(ω_0 t − γ) + (F_0 / (m(ω_0^2 − ω^2))) cos(ωt).
The solution is a superposition of two cosine waves at different frequencies.
Example 2.6.1: Take
0.5x'' + 8x = 10 cos(πt),   x(0) = 0,   x'(0) = 0.
Let us compute. First we read off the parameters: ω = π, ω_0 = √(8/0.5) = 4, F_0 = 10, m = 0.5. The general solution is
x = C_1 cos(4t) + C_2 sin(4t) + (20/(16 − π^2)) cos(πt).
Solve for C_1 and C_2 using the initial conditions: C_1 = −20/(16 − π^2) and C_2 = 0. Hence
x = (20/(16 − π^2)) (cos(πt) − cos(4t)).

Figure 2.5: Graph of (20/(16 − π^2)) (cos(πt) − cos(4t)).

Do notice the "beating" behavior in Figure 2.5. To see it, use the trigonometric identity
2 sin((A − B)/2) sin((A + B)/2) = cos B − cos A
to get
x = (20/(16 − π^2)) · 2 sin((4 − π)t/2) sin((4 + π)t/2).
The function x is a high frequency wave modulated by a low frequency wave.
Now suppose ω_0 = ω. Obviously, we cannot try the solution A cos(ωt) and then use the method of undetermined coefficients. We notice that cos(ωt) solves the associated homogeneous equation. Therefore, we try x_p = At cos(ωt) + Bt sin(ωt). This time we need the sine term, since the second derivative of t cos(ωt) contains sines. We write the equation
x'' + ω^2 x = (F_0/m) cos(ωt).
Plugging x_p into the left-hand side we get
2Bω cos(ωt) − 2Aω sin(ωt) = (F_0/m) cos(ωt).
Hence A = 0 and B = F_0/(2mω). Our particular solution is (F_0/(2mω)) t sin(ωt) and our general solution is
x = C_1 cos(ωt) + C_2 sin(ωt) + (F_0/(2mω)) t sin(ωt).
The important term is the last one (the particular solution we found). This term grows without bound as t → ∞. In fact it oscillates between F_0 t/(2mω) and −F_0 t/(2mω). The first two terms only oscillate between ±√(C_1^2 + C_2^2), which becomes smaller and smaller in proportion to the oscillations of the last term as t gets larger. In Figure 2.6 we see the graph with C_1 = C_2 = 0, F_0 = 2, m = 1, ω = π.
By forcing the system in just the right frequency we produce very wild oscillations. This kind of behavior is called resonance or perhaps pure resonance. Sometimes resonance is desired. For example, remember when as a kid you could start swinging by just moving back and forth on the swing seat in the "correct frequency"? You were trying to achieve resonance. The force of each one of your moves was small, but after a while it produced large swings.

Figure 2.6: Graph of (1/π) t sin(πt).

On the other hand resonance can be destructive. In an earthquake some buildings collapse while others may be relatively undamaged. This is due to different buildings having different resonance frequencies. So figuring out the resonance frequency can be very important.
A common (but wrong) example of destructive force of resonance is the Tacoma Narrows bridge failure. It turns out there was a different phenomenon at play*.
2.6.2 Damped forced motion and practical resonance
In real life things are not as simple as they were above. There is, of course, some damping. Our equation becomes
mx'' + cx' + kx = F_0 cos(ωt),    (2.8)
for some c > 0. We solved the homogeneous problem before. We let
p = c/(2m),   ω_0 = √(k/m).
We replace equation (2.8) with
x'' + 2px' + ω_0^2 x = (F_0/m) cos(ωt).
The roots of the characteristic equation of the associated homogeneous problem are r_1, r_2 = −p ± √(p^2 − ω_0^2). The form of the general solution of the associated homogeneous equation depends on the sign of p^2 − ω_0^2, or equivalently on the sign of c^2 − 4km, as before:
x_c = C_1 e^(r_1 t) + C_2 e^(r_2 t)   if c^2 > 4km,
x_c = C_1 e^(−pt) + C_2 t e^(−pt)   if c^2 = 4km,
x_c = e^(−pt) (C_1 cos(ω_1 t) + C_2 sin(ω_1 t))   if c^2 < 4km,
where ω_1 = √(ω_0^2 − p^2). In any case, we see that x_c(t) → 0 as t → ∞.

* K. Billah and R. Scanlan, Resonance, Tacoma Narrows Bridge Failure, and Undergraduate Physics Textbooks, American Journal of Physics, 59(2), 1991, 118–124, http://www.ketchum.org/billah/Billah-Scanlan.pdf
Let us find a particular solution. There can be no conflicts when trying to solve for the undetermined coefficients by trying x_p = A cos(ωt) + B sin(ωt). Let us plug in and solve for A and B. We get (the tedious details are left to the reader)
((ω_0^2 − ω^2)B − 2ωpA) sin(ωt) + ((ω_0^2 − ω^2)A + 2ωpB) cos(ωt) = (F_0/m) cos(ωt).
We solve for A and B:
A = (ω_0^2 − ω^2)F_0 / (m(2ωp)^2 + m(ω_0^2 − ω^2)^2),
B = 2ωpF_0 / (m(2ωp)^2 + m(ω_0^2 − ω^2)^2).
We also compute C = √(A^2 + B^2) to be
C = F_0 / (m √((2ωp)^2 + (ω_0^2 − ω^2)^2)).
Thus our particular solution is
x_p = ((ω_0^2 − ω^2)F_0 / (m(2ωp)^2 + m(ω_0^2 − ω^2)^2)) cos(ωt) + (2ωpF_0 / (m(2ωp)^2 + m(ω_0^2 − ω^2)^2)) sin(ωt).
Or in the alternative notation we have amplitude C and phase shift γ where (if ω ≠ ω_0)
tan γ = B/A = 2ωp / (ω_0^2 − ω^2).
Hence,
x_p = (F_0 / (m √((2ωp)^2 + (ω_0^2 − ω^2)^2))) cos(ωt − γ).
If ω = ω_0, then A = 0, B = C = F_0/(2mωp), and γ = π/2.
For reasons we will explain in a moment, we call x_c the transient solution and denote it by x_tr. We call the x_p from above the steady periodic solution and denote it by x_sp. The general solution is
x = x_c + x_p = x_tr + x_sp.
The transient solution x_c = x_tr goes to zero as t → ∞, as all the terms involve an exponential with a negative exponent. So for large t, the effect of x_tr is negligible and we see essentially only x_sp. Hence the name transient. Notice that x_sp involves no arbitrary constants, and the initial conditions only affect x_tr. Thus, the effect of the initial conditions is negligible after some period of time. We might as well focus on the steady periodic solution and ignore the transient solution. See Figure 2.7 for a graph given several different initial conditions.
The speed at which x_tr goes to zero depends on p (and hence c). The bigger p is (the bigger c is), the "faster" x_tr becomes negligible. So the smaller the damping, the longer the "transient region." This is consistent with the observation that when c = 0, the initial conditions affect the behavior for all time (i.e. an infinite "transient region").

Figure 2.7: Solutions with different initial conditions for parameters k = 1, m = 1, F_0 = 1, c = 0.7, and ω = 1.1.

Let us describe what we mean by resonance when damping is present. Since there were no conflicts when solving with undetermined coefficients, there is no term that goes to infinity. We look instead at the maximum value of the amplitude of the steady periodic solution. Let C be the amplitude of x_sp. If we plot C as a function of ω (with all other parameters fixed), we can find its maximum. We call the ω that achieves this maximum the practical resonance frequency. We call the maximal amplitude C(ω) the practical resonance amplitude. Thus when damping is present we talk of practical resonance rather than pure resonance. A sample plot for three different values of c is given in Figure 2.8. As you can see the practical resonance amplitude grows as damping gets smaller, and practical resonance can disappear altogether when damping is large.
Figure 2.8: Graph of C(ω) showing practical resonance with parameters k = 1, m = 1, F_0 = 1. The top line is with c = 0.4, the middle line with c = 0.8, and the bottom line with c = 1.6.
To find the maximum we need to find the derivative C'(ω). Computation shows
C'(ω) = −2ω(2p^2 + ω^2 − ω_0^2)F_0 / (m((2ωp)^2 + (ω_0^2 − ω^2)^2)^(3/2)).
This is zero either when ω = 0 or when 2p^2 + ω^2 − ω_0^2 = 0. In other words, C'(ω) = 0 when
ω = √(ω_0^2 − 2p^2)   or   ω = 0.
If ω_0^2 − 2p^2 is positive, then √(ω_0^2 − 2p^2) is the practical resonance frequency (that is the point where C(ω) is maximal). This follows by the first derivative test, for example, as then C'(ω) > 0 for small ω. If on the other hand ω_0^2 − 2p^2 is not positive, then C(ω) achieves its maximum at ω = 0, and there is no practical resonance since we assume ω > 0 in our system. In this case the amplitude gets larger as the forcing frequency gets smaller.
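In code, the practical resonance frequency and amplitude follow directly from the formulas above; a sketch (the sample values match the top curve of Figure 2.8):

import math

def practical_resonance(m, c, k, F0):
    p = c / (2*m)
    omega0 = math.sqrt(k / m)
    if omega0**2 - 2*p**2 <= 0:
        return None   # no practical resonance; C is maximal at omega = 0
    omega = math.sqrt(omega0**2 - 2*p**2)
    C = F0 / (m * math.sqrt((2*omega*p)**2 + (omega0**2 - omega**2)**2))
    return omega, C

print(practical_resonance(1.0, 0.4, 1.0, 1.0))   # about (0.959, 2.55)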
If practical resonance occurs, the frequency is smaller than ω_0. As the damping c (and hence p) becomes smaller, the practical resonance frequency goes to ω_0. So when damping is very small, ω_0 is a good estimate of the practical resonance frequency. This behavior agrees with the observation that when c = 0, then ω_0 is the resonance frequency.
Another interesting observation to make is that when ω → ∞, then C → 0. This means that if the forcing frequency gets too high it does not manage to get the mass moving in the mass-spring system. This is quite reasonable intuitively. If we wiggle back and forth really fast while sitting on a swing, we will not get it moving at all, no matter how forceful. Fast vibrations just cancel each other out before the mass has any chance of responding by moving one way or the other.
The behavior is more complicated if the forcing function is not an exact cosine wave,
but for example a square wave. A general periodic function will be the sum (superposition)
of many cosine waves of different frequencies. The reader is encouraged to come back to
this section once we have learned about the Fourier series.
2.6.3 Exercises
Exercise 2.6.1: Derive a formula for x_sp if the equation is mx'' + cx' + kx = F_0 sin(ωt). Assume c > 0.
Exercise 2.6.2: Derive a formula for x_sp if the equation is mx'' + cx' + kx = F_0 cos(ωt) + F_1 cos(3ωt). Assume c > 0.
Exercise 2.6.3: Take mx'' + cx' + kx = F_0 cos(ωt). Fix m > 0, k > 0, and F_0 > 0. Consider the function C(ω). For what values of c (solve in terms of m, k, and F_0) will there be no practical resonance (that is, for what values of c is there no maximum of C(ω) for ω > 0)?
Exercise 2.6.4: Take mx'' + cx' + kx = F_0 cos(ωt). Fix c > 0, k > 0, and F_0 > 0. Consider the function C(ω). For what values of m (solve in terms of c, k, and F_0) will there be no practical resonance (that is, for what values of m is there no maximum of C(ω) for ω > 0)?
Exercise 2.6.5: A water tower in an earthquake acts as a mass-spring system. Assume that the
container on top is full and the water does not move around. The container then acts as the mass
and the support acts as the spring, where the induced vibrations are horizontal. The container with
water has a mass of $m = 10{,}000$ kg. It takes a force of 1000 newtons to displace the container 1 meter. For simplicity assume no friction. When the earthquake hits, the water tower is at rest (it is not moving). The earthquake induces an external force $F(t) = mA\omega^2 \cos(\omega t)$.
a) What is the natural frequency of the water tower?
b) If ๐œ” is not the natural frequency, find a formula for the maximal amplitude of the resulting
oscillations of the water container (the maximal deviation from the rest position). The motion
will be a high frequency wave modulated by a low frequency wave, so simply find the constant
in front of the sines.
c) Suppose $A = 1$ and an earthquake with frequency 0.5 cycles per second comes. What is the amplitude of the oscillations? Suppose that if the water tower moves more than 1.5 meters from the rest position, the tower collapses. Will the tower collapse?
Exercise 2.6.101: A mass of 4 kg is on a spring with $k = 4$ N/m and damping constant $c = 1$ Ns/m. Suppose that $F_0 = 2$ N. Using the forcing function $F_0 \cos(\omega t)$, find the $\omega$ that causes practical resonance and find the amplitude.
Exercise 2.6.102: Derive a formula for $x_{sp}$ for $mx'' + cx' + kx = F_0 \cos(\omega t) + A$, where $A$ is some constant. Assume $c > 0$.
Exercise 2.6.103: Suppose there is no damping in a mass and spring system with ๐‘š = 5, ๐‘˜ = 20,
and ๐น0 = 5. Suppose ๐œ” is chosen to be precisely the resonance frequency.
a) Find ๐œ”.
b) Find the amplitude of the oscillations at time ๐‘ก = 100, given the system is at rest at ๐‘ก = 0.
Chapter 3
Systems of ODEs
3.1 Introduction to systems of ODEs
Note: 1 to 1.5 lectures, §4.1 in [EP], §7.1 in [BD]
3.1.1 Systems
Often we do not have just one dependent variable and one equation. And as we will see,
we may end up with systems of several equations and several dependent variables even if
we start with a single equation.
If we have several dependent variables, say $y_1, y_2, \ldots, y_n$, then we can have a differential equation involving all of them and their derivatives with respect to one independent variable $x$. For example, $y_1'' = f(y_1', y_2', y_1, y_2, x)$. Usually, when we have two dependent variables we have two equations such as
\[ y_1'' = f_1(y_1', y_2', y_1, y_2, x), \qquad y_2'' = f_2(y_1', y_2', y_1, y_2, x), \]
for some functions ๐‘“1 and ๐‘“2 . We call the above a system of differential equations. More
precisely, the above is a second order system of ODEs as second order derivatives appear.
The system
๐‘ฅ 10 = ๐‘”1 (๐‘ฅ1 , ๐‘ฅ2 , ๐‘ฅ3 , ๐‘ก),
๐‘ฅ20 = ๐‘”2 (๐‘ฅ1 , ๐‘ฅ2 , ๐‘ฅ3 , ๐‘ก),
๐‘ฅ30 = ๐‘”3 (๐‘ฅ1 , ๐‘ฅ2 , ๐‘ฅ3 , ๐‘ก),
is a first order system, where ๐‘ฅ1 , ๐‘ฅ2 , ๐‘ฅ3 are the dependent variables, and ๐‘ก is the independent
variable.
The terminology for systems is essentially the same as for single equations. For the
system above, a solution is a set of three functions $x_1(t)$, $x_2(t)$, $x_3(t)$, such that
\[ x_1'(t) = g_1\bigl(x_1(t), x_2(t), x_3(t), t\bigr), \quad x_2'(t) = g_2\bigl(x_1(t), x_2(t), x_3(t), t\bigr), \quad x_3'(t) = g_3\bigl(x_1(t), x_2(t), x_3(t), t\bigr). \]
We usually also have an initial condition. Just like for single equations we specify $x_1$, $x_2$, and $x_3$ for some fixed $t$. For example, $x_1(0) = a_1$, $x_2(0) = a_2$, $x_3(0) = a_3$ for some constants $a_1$, $a_2$, and $a_3$. For the second order system we would also specify the first derivatives at a point. And if we find a solution with constants in it, where by solving for the constants we find a solution for any initial condition, we call this solution the general solution. It is best to look at a simple example.
Example 3.1.1: Sometimes a system is easy to solve by solving for one variable and then
for the second variable. Take the first order system
๐‘ฆ10 = ๐‘ฆ1 ,
๐‘ฆ20 = ๐‘ฆ1 − ๐‘ฆ2 ,
with ๐‘ฆ1 , ๐‘ฆ2 as the dependent variables and ๐‘ฅ as the independent variable. And consider
initial conditions ๐‘ฆ1 (0) = 1, ๐‘ฆ2 (0) = 2.
We note that $y_1 = C_1 e^x$ is the general solution of the first equation. We then plug this $y_1$ into the second equation and get the equation $y_2' = C_1 e^x - y_2$, which is a linear first order equation that is easily solved for $y_2$. By the method of integrating factor we get
\[ e^x y_2 = \frac{C_1}{2} e^{2x} + C_2, \]
or $y_2 = \frac{C_1}{2} e^x + C_2 e^{-x}$. The general solution to the system is, therefore,
\[ y_1 = C_1 e^x, \qquad y_2 = \frac{C_1}{2} e^x + C_2 e^{-x}. \]
We solve for $C_1$ and $C_2$ given the initial conditions. We substitute $x = 0$ and find that $C_1 = 1$ and $C_2 = 3/2$. Thus the solution is $y_1 = e^x$ and $y_2 = (1/2)e^x + (3/2)e^{-x}$.
Generally, we will not be so lucky to be able to solve for each variable separately as in
the example above, and we will have to solve for all variables at once. While we won’t
generally be able to solve for one variable and then the next, we will try to salvage as much
as possible from this technique. It will turn out that in a certain sense we will still (try to)
solve a bunch of single equations and put their solutions together. Let’s not worry right
now about how to solve systems yet.
We will mostly consider the linear systems. The example above is a so-called linear first order system. It is linear as none of the dependent variables or their derivatives appear in nonlinear functions or with powers higher than one ($x$, $y$, $x'$ and $y'$, constants, and functions of $t$ can appear, but not $xy$ or $(y')^2$ or $x^3$). Another, more complicated, example of a linear system is
๐‘ฆ100 = ๐‘’ ๐‘ก ๐‘ฆ10 + ๐‘ก 2 ๐‘ฆ1 + 5๐‘ฆ2 + sin(๐‘ก),
๐‘ฆ200 = ๐‘ก ๐‘ฆ10 − ๐‘ฆ20 + 2๐‘ฆ1 + cos(๐‘ก).
3.1.2 Applications
Let us consider some simple applications of systems and how to set up the equations.
Example 3.1.2: First, we consider salt and brine tanks, but this time water flows from one
to the other and back. We again consider that the tanks are evenly mixed.
Figure 3.1: A closed system of two brine tanks.
Suppose we have two tanks, each containing volume ๐‘‰ liters of salt brine. The amount
of salt in the first tank is ๐‘ฅ1 grams, and the amount of salt in the second tank is ๐‘ฅ2 grams.
The liquid is perfectly mixed and flows at the rate ๐‘Ÿ liters per second out of each tank into
the other. See Figure 3.1.
The rate of change of $x_1$, that is $x_1'$, is the rate of salt coming in minus the rate going out. The rate coming in is the density of the salt in tank 2, that is $\frac{x_2}{V}$, times the rate $r$. The rate coming out is the density of the salt in tank 1, that is $\frac{x_1}{V}$, times the rate $r$. In other words it is
\[ x_1' = \frac{x_2}{V}r - \frac{x_1}{V}r = \frac{r}{V}x_2 - \frac{r}{V}x_1 = \frac{r}{V}(x_2 - x_1). \]
Similarly we find the rate $x_2'$, where the roles of $x_1$ and $x_2$ are reversed. All in all, the system of ODEs for this problem is
\[ x_1' = \frac{r}{V}(x_2 - x_1), \qquad x_2' = \frac{r}{V}(x_1 - x_2). \]
In this system we cannot solve for ๐‘ฅ 1 or ๐‘ฅ2 separately. We must solve for both ๐‘ฅ1 and ๐‘ฅ2 at
once, which is intuitively clear since the amount of salt in one tank affects the amount in
the other. We can’t know ๐‘ฅ1 before we know ๐‘ฅ2 , and vice versa.
We don’t yet know how to find all the solutions, but intuitively we can at least find
some solutions. Suppose we know that initially the tanks have the same amount of salt.
That is, we have an initial condition such as $x_1(0) = x_2(0) = C$. Then clearly the amount of salt coming in and out of each tank is the same, so the amounts are not changing. In
other words, ๐‘ฅ 1 = ๐ถ and ๐‘ฅ 2 = ๐ถ (the constant functions) is a solution: ๐‘ฅ10 = ๐‘ฅ20 = 0, and
๐‘ฅ2 − ๐‘ฅ1 = ๐‘ฅ1 − ๐‘ฅ2 = 0, so the equations are satisfied.
Let us think about the setup a little bit more without solving it. Suppose the initial
conditions are ๐‘ฅ 1 (0) = ๐ด and ๐‘ฅ2 (0) = ๐ต, for two different constants ๐ด and ๐ต. Since no salt is
coming in or out of this closed system, the total amount of salt is constant. That is, ๐‘ฅ1 + ๐‘ฅ2
is constant, and so it equals ๐ด + ๐ต. Intuitively if ๐ด is bigger than ๐ต, then more salt will flow
out of tank one than into it. Eventually, after a long time we would then expect the amount
of salt in each tank to equalize. In other words, the solutions of both $x_1$ and $x_2$ should tend towards $\frac{A+B}{2}$. Once you know how to solve systems you will find out that this really is so.
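We can already watch this behavior numerically. The following is a minimal sketch in Python (not part of the text; it uses SciPy's solve_ivp, and the values of $r$, $V$, $A$, $B$ are made up for illustration):

import numpy as np
from scipy.integrate import solve_ivp

# A sketch, not from the text: simulate the closed two-tank system and
# watch both amounts of salt tend to (A + B)/2.
r, V = 2.0, 10.0
A, B = 8.0, 2.0

def rhs(t, x):
    x1, x2 = x
    return [r / V * (x2 - x1), r / V * (x1 - x2)]

sol = solve_ivp(rhs, (0, 50), [A, B])
print(sol.y[:, -1])   # both entries are close to (A + B)/2 = 5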
Example 3.1.3: Let us look at a second order example. We return to the mass and spring
setup, but this time we consider two masses.
Consider one spring with constant $k$ and two masses $m_1$ and $m_2$. Think of the masses as carts that ride along a straight track with no friction. Let $x_1$ be the displacement of the first cart and $x_2$ be the displacement of the second cart. That is, we put the two carts somewhere with no tension on the spring, and we mark the position of the first and second cart and call those the zero positions.
Then ๐‘ฅ1 measures how far the first cart is from its zero position, and ๐‘ฅ2 measures how far
the second cart is from its zero position. The force exerted by the spring on the first cart
is ๐‘˜(๐‘ฅ2 − ๐‘ฅ 1 ), since ๐‘ฅ2 − ๐‘ฅ1 is how far the string is stretched (or compressed) from the rest
position. The force exerted on the second cart is the opposite, thus the same thing with a
negative sign. Newton’s second law states that force equals mass times acceleration. So the
system of equations is
๐‘š1 ๐‘ฅ100 = ๐‘˜(๐‘ฅ2 − ๐‘ฅ1 ),
๐‘š2 ๐‘ฅ200 = −๐‘˜(๐‘ฅ2 − ๐‘ฅ1 ).
Again, we cannot solve for the ๐‘ฅ1 or ๐‘ฅ2 variable separately. That we must solve for both
๐‘ฅ1 and ๐‘ฅ2 at once is intuitively clear, since where the first cart goes depends on exactly
where the second cart goes and vice versa.
3.1.3 Changing to first order
Before we talk about how to handle systems, let us note that in some sense we need only
consider first order systems. Let us take an ๐‘› th order differential equation
๐‘ฆ (๐‘›) = ๐น(๐‘ฆ (๐‘›−1) , . . . , ๐‘ฆ 0 , ๐‘ฆ, ๐‘ฅ).
We define new variables ๐‘ข1 , ๐‘ข2 , . . . , ๐‘ข๐‘› and write the system
๐‘ข10 = ๐‘ข2 ,
๐‘ข20 = ๐‘ข3 ,
..
.
0
๐‘ข๐‘›−1
= ๐‘ข๐‘› ,
๐‘ข๐‘›0 = ๐น(๐‘ข๐‘› , ๐‘ข๐‘›−1 , . . . , ๐‘ข2 , ๐‘ข1 , ๐‘ฅ).
We solve this system for ๐‘ข1 , ๐‘ข2 , . . . , ๐‘ข๐‘› . Once we have solved for the ๐‘ข, we can discard ๐‘ข2
through ๐‘ข๐‘› and let ๐‘ฆ = ๐‘ข1 . This ๐‘ฆ solves the original equation.
Example 3.1.4: Take $x''' = 2x'' + 8x' + x + t$. Letting $u_1 = x$, $u_2 = x'$, $u_3 = x''$, we find the system
\[ u_1' = u_2, \qquad u_2' = u_3, \qquad u_3' = 2u_3 + 8u_2 + u_1 + t. \]
A similar process can be followed for a system of higher order differential equations.
For example, a system of ๐‘˜ differential equations in ๐‘˜ unknowns, all of order ๐‘›, can be
transformed into a first order system of ๐‘› × ๐‘˜ equations and ๐‘› × ๐‘˜ unknowns.
Example 3.1.5: Consider the system from the carts example,
\[ m_1 x_1'' = k(x_2 - x_1), \qquad m_2 x_2'' = -k(x_2 - x_1). \]
Let $u_1 = x_1$, $u_2 = x_1'$, $u_3 = x_2$, $u_4 = x_2'$. The second order system becomes the first order system
\[ u_1' = u_2, \qquad m_1 u_2' = k(u_3 - u_1), \qquad u_3' = u_4, \qquad m_2 u_4' = -k(u_3 - u_1). \]
Example 3.1.6: The idea works in reverse as well. Consider the system
\[ x' = 2y - x, \qquad y' = x, \]
where the independent variable is $t$. We wish to solve for the initial conditions $x(0) = 1$, $y(0) = 0$.
If we differentiate the second equation, we get $y'' = x'$. We know what $x'$ is in terms of $x$ and $y$, and we know that $x = y'$. So,
\[ y'' = x' = 2y - x = 2y - y'. \]
We now have the equation $y'' + y' - 2y = 0$. We know how to solve this equation and we find that $y = C_1 e^{-2t} + C_2 e^t$. Once we have $y$, we use the equation $y' = x$ to get $x$:
\[ x = y' = -2C_1 e^{-2t} + C_2 e^t. \]
We solve for the initial conditions $1 = x(0) = -2C_1 + C_2$ and $0 = y(0) = C_1 + C_2$. Hence, $C_1 = -C_2$ and $1 = 3C_2$. So $C_1 = -1/3$ and $C_2 = 1/3$. Our solution is
\[ x = \frac{2e^{-2t} + e^t}{3}, \qquad y = \frac{-e^{-2t} + e^t}{3}. \]
Exercise 3.1.1: Plug in and check that this really is the solution.
It is useful to go back and forth between systems and higher order equations for other reasons. For example, software for solving ODEs numerically (by approximation) generally works with first order systems. To use it, you take whatever ODE you want to solve and convert it to a first order system. It is not very hard to adapt computer code for the Euler or Runge–Kutta method for first order equations to handle first order systems. We simply treat the dependent variable not as a number but as a vector. In many mathematical computer languages there is almost no distinction in syntax.
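To illustrate, here is a minimal sketch of Euler's method in Python with NumPy (not part of the text). The only change from the scalar version is that the dependent variable is a vector:

import numpy as np

# A sketch, not from the text: Euler's method for a system x' = f(t, x).
def euler(f, t0, x0, t_end, n):
    t, x = t0, np.asarray(x0, dtype=float)
    h = (t_end - t0) / n
    for _ in range(n):
        x = x + h * f(t, x)   # the same update formula as in the scalar case
        t = t + h
    return x

# The system x' = 2y - x, y' = x from Example 3.1.6:
f = lambda t, v: np.array([2*v[1] - v[0], v[0]])
print(euler(f, 0.0, [1.0, 0.0], 2.0, 100_000))
# compare with the exact values x(2) = 2.475..., y(2) = 2.456...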
3.1.4 Autonomous systems and vector fields
A system where the equations do not depend on the independent variable is called an autonomous system. For example the system $x' = 2y - x$, $y' = x$ is autonomous, as $t$ is the independent variable but does not appear in the equations.
For autonomous systems we can draw the so-called direction field or vector field, a plot similar to a slope field, but instead of giving a slope at each point, we give a direction (and a magnitude). The previous example, $x' = 2y - x$, $y' = x$, says that at the point $(x, y)$ the direction in which we should travel to satisfy the equations should be the direction of the vector $(2y - x, x)$ with the speed equal to the magnitude of this vector. So we draw the vector $(2y - x, x)$ at the point $(x, y)$, and we do this for many points on the $xy$-plane. For example, at the point $(1, 2)$ we draw the vector $\bigl(2(2) - 1, 1\bigr) = (3, 1)$, a vector pointing to the right and a little bit up, while at the point $(2, 1)$ we draw the vector $\bigl(2(1) - 2, 2\bigr) = (0, 2)$, a vector that points straight up. When drawing the vectors, we scale down their size to fit many of them on the same direction field. If we drew the arrows at the actual size, the diagram would be a jumbled mess once you drew more than a couple of arrows. So we scale them all so that not even the longest one interferes with the others. We are mostly interested in their direction and relative size. See Figure 3.2.
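A direction field like Figure 3.2 takes only a few lines to produce. Here is a minimal sketch in Python with Matplotlib (not part of the text):

import numpy as np
import matplotlib.pyplot as plt

# A sketch, not from the text: the direction field for x' = 2y - x, y' = x.
x, y = np.meshgrid(np.linspace(-1, 3, 20), np.linspace(-1, 3, 20))
plt.quiver(x, y, 2*y - x, x)   # quiver scales arrows so they do not overlap
plt.xlabel('x')
plt.ylabel('y')
plt.show()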
We can draw a path of the solution in the plane. Suppose the solution is given by $x = f(t)$, $y = g(t)$. We pick an interval of $t$ (say $0 \le t \le 2$ for our example) and plot all the points $\bigl(f(t), g(t)\bigr)$ for $t$ in the selected range. The resulting picture is called the phase
portrait (or phase plane portrait). The particular curve obtained is called the trajectory or
solution curve. See an example plot in Figure 3.3. In the figure the solution
starts at (1, 0) and travels along the vector field for a distance of 2 units of ๐‘ก. We solved this
system precisely, so we compute ๐‘ฅ(2) and ๐‘ฆ(2) to find ๐‘ฅ(2) ≈ 2.475 and ๐‘ฆ(2) ≈ 2.457. This
point corresponds to the top right end of the plotted solution curve in the figure.
Notice the similarity to the diagrams we drew for autonomous systems in one dimension.
But note how much more complicated things become when we allow just one extra
dimension.
We can draw phase portraits and trajectories in the ๐‘ฅ๐‘ฆ-plane even if the system is not
autonomous. In this case, however, we cannot draw the direction field, since the field
changes as ๐‘ก changes. For each ๐‘ก we would get a different direction field.
Figure 3.2: The direction field for $x' = 2y - x$, $y' = x$.

Figure 3.3: The direction field for $x' = 2y - x$, $y' = x$ with the trajectory of the solution starting at $(1, 0)$ for $0 \le t \le 2$.

3.1.5 Picard's theorem
Perhaps before going further, let us mention that Picard's theorem on existence and uniqueness still holds for systems of ODEs. Let us restate this theorem in the setting of systems. A general first order system is of the form
\[ x_1' = F_1(x_1, x_2, \ldots, x_n, t), \quad x_2' = F_2(x_1, x_2, \ldots, x_n, t), \quad \ldots, \quad x_n' = F_n(x_1, x_2, \ldots, x_n, t). \tag{3.1} \]
Theorem 3.1.1 (Picard's theorem on existence and uniqueness for systems). If for every $j = 1, 2, \ldots, n$ and every $k = 1, 2, \ldots, n$ each $F_j$ is continuous and the derivative $\frac{\partial F_j}{\partial x_k}$ exists and is continuous near some $(x_1^0, x_2^0, \ldots, x_n^0, t^0)$, then a solution to (3.1) subject to the initial condition $x_1(t^0) = x_1^0$, $x_2(t^0) = x_2^0$, \ldots, $x_n(t^0) = x_n^0$ exists (at least for $t$ in some small interval) and is unique.
That is, a unique solution exists for any initial condition given that the system is
reasonable (๐น ๐‘— and its partial derivatives in the ๐‘ฅ variables are continuous). As for single
equations we may not have a solution for all time ๐‘ก, but at least for some short period of
time.
As we can change any $n$th order ODE into a first order system, this theorem also provides the existence and uniqueness of solutions for higher order equations, which we have until now not stated explicitly.
3.1.6 Exercises
Exercise 3.1.2: Find the general solution of $x_1' = x_2 - x_1 + t$, $x_2' = x_2$.
Exercise 3.1.3: Find the general solution of $x_1' = 3x_1 - x_2 + e^t$, $x_2' = x_1$.
Exercise 3.1.4: Write $ay'' + by' + cy = f(x)$ as a first order system of ODEs.
Exercise 3.1.5: Write $x'' + y^2 y' - x^3 = \sin(t)$, $y'' + (x' + y')^2 - x = 0$ as a first order system of ODEs.
Exercise 3.1.6: Suppose two masses on carts on frictionless surface are at displacements ๐‘ฅ1 and ๐‘ฅ2
as in Example 3.1.3 on page 122. Suppose that a rocket applies force ๐น in the positive direction on
cart ๐‘ฅ1 . Set up the system of equations.
Exercise 3.1.7: Suppose the tanks are as in Example 3.1.2 on page 121, starting both at volume ๐‘‰,
but now the rate of flow from tank 1 to tank 2 is ๐‘Ÿ1 , and rate of flow from tank 2 to tank one is ๐‘Ÿ2 .
Notice that the volumes are now not constant. Set up the system of equations.
Exercise 3.1.101: Find the general solution to $y_1' = 3y_1$, $y_2' = y_1 + y_2$, $y_3' = y_1 + y_3$.
Exercise 3.1.102: Solve $y' = 2x$, $x' = x + y$, $x(0) = 1$, $y(0) = 3$.
Exercise 3.1.103: Write $x''' = x + t$ as a first order system.
Exercise 3.1.104: Write $y_1'' + y_1 + y_2 = t$, $y_2'' + y_1 - y_2 = t^2$ as a first order system.
Exercise 3.1.105: Suppose two masses on carts on frictionless surface are at displacements ๐‘ฅ1 and
$x_2$ as in Example 3.1.3 on page 122. Suppose the initial displacement is $x_1(0) = x_2(0) = 0$, and the initial velocity is $x_1'(0) = x_2'(0) = a$ for some number $a$. Use your intuition to solve the system, and explain
your reasoning.
Exercise 3.1.106: Suppose the tanks are as in Example 3.1.2 on page 121 except that clean water
flows in at the rate ๐‘  liters per second into tank 1, and brine flows out of tank 2 and into the sewer
also at the rate of ๐‘  liters per second.
a) Draw the picture.
b) Set up the system of equations.
c) Intuitively, what happens as $t$ goes to infinity? Explain.
3.2 Matrices and linear systems
Note: 1.5 lectures, first part of §5.1 in [EP], §7.2 and §7.3 in [BD], see also appendix A
3.2.1 Matrices and vectors
Before we start talking about linear systems of ODEs, we need to talk about matrices, so let
us review these briefly. A matrix is an ๐‘š × ๐‘› array of numbers (๐‘š rows and ๐‘› columns).
For example, we denote a 3 × 5 matrix as follows
๏ฃฎ ๐‘Ž11 ๐‘Ž12 ๐‘Ž13 ๐‘Ž14 ๐‘Ž15 ๏ฃน
๏ฃฏ
๏ฃบ
๐ด = ๏ฃฏ๏ฃฏ ๐‘Ž 21 ๐‘Ž 22 ๐‘Ž 23 ๐‘Ž 24 ๐‘Ž 25 ๏ฃบ๏ฃบ .
๏ฃฏ ๐‘Ž31 ๐‘Ž32 ๐‘Ž33 ๐‘Ž34 ๐‘Ž35 ๏ฃบ
๏ฃฐ
๏ฃป
The numbers ๐‘Ž ๐‘–๐‘— are called elements or entries.
By a vector we usually mean a column vector, that is an ๐‘š × 1 matrix. If we mean a row
vector, we will explicitly say so (a row vector is a $1 \times n$ matrix). We usually denote matrices by upper case letters and vectors by lower case letters with an arrow, such as $\vec{x}$ or $\vec{b}$. By $\vec{0}$ we mean the vector of all zeros.
We define some operations on matrices. We want 1 × 1 matrices to really act like
numbers, so our operations have to be compatible with this viewpoint.
First, we can multiply a matrix by a scalar (a number). We simply multiply each entry
in the matrix by the scalar. For example,
\[ 2 \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} = \begin{bmatrix} 2 & 4 & 6 \\ 8 & 10 & 12 \end{bmatrix}. \]
Matrix addition is also easy. We add matrices element by element. For example,
\[ \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} + \begin{bmatrix} 1 & 1 & -1 \\ 0 & 2 & 4 \end{bmatrix} = \begin{bmatrix} 2 & 3 & 2 \\ 4 & 7 & 10 \end{bmatrix}. \]
If the sizes do not match, then addition is not defined.
If we denote by 0 the matrix with all zero entries, by ๐‘, ๐‘‘ scalars, and by ๐ด, ๐ต, ๐ถ matrices,
we have the following familiar rules:
๐ด + 0 = ๐ด = 0 + ๐ด,
๐ด + ๐ต = ๐ต + ๐ด,
(๐ด + ๐ต) + ๐ถ = ๐ด + (๐ต + ๐ถ),
๐‘(๐ด + ๐ต) = ๐‘๐ด + ๐‘๐ต,
(๐‘ + ๐‘‘)๐ด = ๐‘๐ด + ๐‘‘๐ด.
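These operations are built into most numerical software. As a minimal sketch (Python with NumPy, not part of the text):

import numpy as np

# A sketch, not from the text: the matrix operations above in NumPy.
A = np.array([[1, 2, 3], [4, 5, 6]])
B = np.array([[1, 1, -1], [0, 2, 4]])
print(2 * A)    # scalar multiplication, entry by entry
print(A + B)    # matrix addition, entry by entry
print(A.T)      # the transpose, discussed next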
Another useful operation for matrices is the so-called transpose. This operation just
swaps rows and columns of a matrix. The transpose of ๐ด is denoted by ๐ด๐‘‡ . Example:
\[ \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}^T = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}. \]

3.2.2 Matrix multiplication
Let us now define matrix multiplication. First we define the so-called dot product (or inner
product) of two vectors. Usually this will be a row vector multiplied with a column vector
of the same size. For the dot product we multiply each pair of entries from the first and the
second vector and we sum these products. The result is a single number. For example,
๐‘Ž1 ๐‘Ž2
๏ฃฎ ๏ฃน
๏ฃฏ๐‘ 1 ๏ฃบ
๐‘Ž 3 · ๏ฃฏ๏ฃฏ๐‘ 2 ๏ฃบ๏ฃบ = ๐‘Ž 1 ๐‘ 1 + ๐‘Ž 2 ๐‘ 2 + ๐‘Ž 3 ๐‘ 3 .
๏ฃฏ๐‘ 3 ๏ฃบ
๏ฃฐ ๏ฃป
Similarly for larger (or smaller) vectors.
Armed with the dot product we define the product of matrices. First let us denote by
row๐‘– (๐ด) the ๐‘– th row of ๐ด and by column ๐‘— (๐ด) the ๐‘— th column of ๐ด. For an ๐‘š × ๐‘› matrix ๐ด
and an ๐‘› × ๐‘ matrix ๐ต, we can define the product ๐ด๐ต. We let ๐ด๐ต be an ๐‘š × ๐‘ matrix whose
๐‘–๐‘— th entry is the dot product
row๐‘– (๐ด) · column ๐‘— (๐ต).
Do note how the sizes match up: ๐‘š × ๐‘› multiplied by ๐‘› × ๐‘ is ๐‘š × ๐‘. Example:
\[ \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \begin{bmatrix} 1 & 0 & -1 \\ 1 & 1 & 1 \\ 1 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 1\cdot 1 + 2\cdot 1 + 3\cdot 1 & 1\cdot 0 + 2\cdot 1 + 3\cdot 0 & 1\cdot(-1) + 2\cdot 1 + 3\cdot 0 \\ 4\cdot 1 + 5\cdot 1 + 6\cdot 1 & 4\cdot 0 + 5\cdot 1 + 6\cdot 0 & 4\cdot(-1) + 5\cdot 1 + 6\cdot 0 \end{bmatrix} = \begin{bmatrix} 6 & 2 & 1 \\ 15 & 5 & 1 \end{bmatrix}. \]
For multiplication we want an analogue of a 1. This analogue is the so-called identity
matrix. The identity matrix is a square matrix with 1s on the diagonal and zeros everywhere
else. It is usually denoted by ๐ผ. For each size we have a different identity matrix and so
sometimes we may denote the size as a subscript. For example, the ๐ผ3 would be the 3 × 3
identity matrix
\[ I = I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \]
We have the following rules for matrix multiplication. Suppose that ๐ด, ๐ต, ๐ถ are matrices
of the correct sizes so that the following make sense. Let ๐›ผ denote a scalar (number).
๐ด(๐ต๐ถ) = (๐ด๐ต)๐ถ,
๐ด(๐ต + ๐ถ) = ๐ด๐ต + ๐ด๐ถ,
(๐ต + ๐ถ)๐ด = ๐ต๐ด + ๐ถ๐ด,
๐›ผ(๐ด๐ต) = (๐›ผ๐ด)๐ต = ๐ด(๐›ผ๐ต),
๐ผ๐ด = ๐ด = ๐ด๐ผ.
A few warnings are in order.
(i) $AB \ne BA$ in general (it may be true by fluke sometimes). That is, matrices do not commute. For example, take $A = \left[ \begin{smallmatrix} 1 & 1 \\ 1 & 1 \end{smallmatrix} \right]$ and $B = \left[ \begin{smallmatrix} 1 & 0 \\ 0 & 2 \end{smallmatrix} \right]$.
(ii) $AB = AC$ does not necessarily imply $B = C$, even if $A$ is not 0.
(iii) $AB = 0$ does not necessarily mean that $A = 0$ or $B = 0$. Try, for example, $A = B = \left[ \begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix} \right]$.
For the last two items to hold we would need to “divide” by a matrix. This is where the
matrix inverse comes in. Suppose that ๐ด and ๐ต are ๐‘› × ๐‘› matrices such that
๐ด๐ต = ๐ผ = ๐ต๐ด.
Then we call ๐ต the inverse of ๐ด and we denote ๐ต by ๐ด−1 . If the inverse of ๐ด exists, then we
call ๐ด invertible. If ๐ด is not invertible, we sometimes say ๐ด is singular.
If ๐ด is invertible, then ๐ด๐ต = ๐ด๐ถ does imply that ๐ต = ๐ถ (in particular, the inverse of
๐ด is unique). We just multiply both sides by ๐ด−1 (on the left) to get ๐ด−1 ๐ด๐ต = ๐ด−1 ๐ด๐ถ or
−1
๐ผ๐ต = ๐ผ๐ถ or ๐ต = ๐ถ. It is also not hard to see that (๐ด−1 ) = ๐ด.
3.2.3 The determinant
For square matrices we define a useful quantity called the determinant. We define the determinant of a $1 \times 1$ matrix as the value of its only entry. For a $2 \times 2$ matrix we define
\[ \det \begin{bmatrix} a & b \\ c & d \end{bmatrix} \overset{\text{def}}{=} ad - bc. \]
Before trying to define the determinant for larger matrices, let us note the meaning of
the determinant. Consider an ๐‘› × ๐‘› matrix as a mapping of the ๐‘›-dimensional euclidean
space โ„ ๐‘› to itself, where ๐‘ฅ® gets sent to ๐ด ๐‘ฅ®. In particular, a 2 × 2 matrix ๐ด is a mapping
of the plane to itself. The determinant of ๐ด is the factor by which the area of objects
changes. If we take the unit square (square of side 1) in the plane, then ๐ด takes the square
to a parallelogram of area |det(๐ด)|. The sign of det(๐ด) denotes changing of orientation
(negative if the axes get flipped). For example, let
\[ A = \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}. \]
Then det(๐ด) = 1 + 1 = 2. Let us see where the (unit) square with vertices (0, 0), (1, 0), (0, 1),
and (1, 1) gets sent. Clearly (0, 0) gets sent to (0, 0).
1 1
−1 1
1
1
=
,
0
−1
1 1
−1 1
0
1
=
,
1
1
1 1
−1 1
1
2
=
.
1
0
The image of the square is another√square with vertices (0, 0), (1, −1), (1, 1), and (2, 0). The
image square has a side of length 2 and is therefore of area 2.
If you think back to high school geometry, you may have seen a formula for computing the area of a parallelogram with vertices $(0,0)$, $(a,c)$, $(b,d)$, and $(a+b, c+d)$. And it is precisely
\[ \left| \det \begin{bmatrix} a & b \\ c & d \end{bmatrix} \right| . \]
The vertical lines above mean absolute value. The matrix $\left[ \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right]$ carries the unit square to the given parallelogram.
Let us look at the determinant for larger matrices. We define $A_{ij}$ as the matrix $A$ with the $i$th row and the $j$th column deleted. To compute the determinant of a matrix, pick one row, say the $i$th row, and compute
\[ \det(A) = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} \det(A_{ij}). \]
For the first row we get
\[ \det(A) = a_{11}\det(A_{11}) - a_{12}\det(A_{12}) + a_{13}\det(A_{13}) - \cdots \begin{cases} +a_{1n}\det(A_{1n}) & \text{if } n \text{ is odd,} \\ -a_{1n}\det(A_{1n}) & \text{if } n \text{ is even.} \end{cases} \]
We alternately add and subtract the determinants of the submatrices $A_{ij}$ multiplied by $a_{ij}$ for a fixed $i$ and all $j$. For a $3 \times 3$ matrix, picking the first row, we get $\det(A) = a_{11}\det(A_{11}) - a_{12}\det(A_{12}) + a_{13}\det(A_{13})$. For example,
\[ \det\left( \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} \right) = 1 \cdot \det \begin{bmatrix} 5 & 6 \\ 8 & 9 \end{bmatrix} - 2 \cdot \det \begin{bmatrix} 4 & 6 \\ 7 & 9 \end{bmatrix} + 3 \cdot \det \begin{bmatrix} 4 & 5 \\ 7 & 8 \end{bmatrix} = 1(5 \cdot 9 - 6 \cdot 8) - 2(4 \cdot 9 - 6 \cdot 7) + 3(4 \cdot 8 - 5 \cdot 7) = 0. \]
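The cofactor expansion translates directly into a short recursive program. Here is a minimal sketch (Python with NumPy, not part of the text) that expands along the first row and checks the result against a library routine:

import numpy as np

# A sketch, not from the text: cofactor expansion along the first row.
def det(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    # delete row 0 and column j to get the submatrix A_{1j}
    return sum((-1)**j * A[0, j] * det(np.delete(np.delete(A, 0, 0), j, 1))
               for j in range(n))

M = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(det(M), np.linalg.det(M))   # both are 0 (up to rounding)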
The numbers (−1)๐‘–+๐‘— det(๐ด ๐‘–๐‘— ) are called cofactors of the matrix and this way of computing
the determinant is called the cofactor expansion. No matter which row you pick, you always
get the same number. It is also possible to compute the determinant by expanding along
columns (picking a column instead of a row above). It is true that det(๐ด) = det(๐ด๐‘‡ ).
A common notation for the determinant is a pair of vertical lines:
\[ \begin{vmatrix} a & b \\ c & d \end{vmatrix} = \det \begin{bmatrix} a & b \\ c & d \end{bmatrix}. \]
I personally find this notation confusing as vertical lines usually mean a positive quantity,
while determinants can be negative. Also think about how to write the absolute value of a
determinant. I will not use this notation in this book.
Think of the determinant as telling you the scaling of a mapping. If $B$ doubles the sizes of geometric objects and $A$ triples them, then $AB$ (which applies $B$ to an object and then $A$) should make sizes go up by a factor of 6. This is true in general:
det(๐ด๐ต) = det(๐ด) det(๐ต).
This property is one of the most useful, and it is employed often to actually compute
determinants. A particularly interesting consequence is to note what it means for existence
of inverses. Take ๐ด and ๐ต to be inverses of each other, that is ๐ด๐ต = ๐ผ. Then
det(๐ด) det(๐ต) = det(๐ด๐ต) = det(๐ผ) = 1.
Neither det(๐ด) nor det(๐ต) can be zero. Let us state this as a theorem as it will be very
important in the context of this course.
Theorem 3.2.1. An ๐‘› × ๐‘› matrix ๐ด is invertible if and only if det(๐ด) ≠ 0.
In fact, $\det(A^{-1})\det(A) = 1$ says that $\det(A^{-1}) = \frac{1}{\det(A)}$. So we even know what the determinant of $A^{-1}$ is before we know how to compute $A^{-1}$.
There is a simple formula for the inverse of a $2 \times 2$ matrix:
\[ \begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}. \]
Notice the determinant of the matrix $\left[ \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right]$ in the denominator of the fraction. The formula only works if the determinant is nonzero; otherwise we are dividing by zero.
3.2.4 Solving linear systems
One application of matrices we will need is to solve systems of linear equations. This is
best shown by example. Suppose that we have the following system of linear equations
2๐‘ฅ1 + 2๐‘ฅ2 + 2๐‘ฅ3 = 2,
๐‘ฅ1 + ๐‘ฅ2 + 3๐‘ฅ3 = 5,
๐‘ฅ1 + 4๐‘ฅ2 + ๐‘ฅ3 = 10.
Without changing the solution, we could swap equations in this system, we could
multiply any of the equations by a nonzero number, and we could add a multiple of one
equation to another equation. It turns out these operations always suffice to find a solution.
It is easier to write the system as a matrix equation. The system above can be written as
\[ \begin{bmatrix} 2 & 2 & 2 \\ 1 & 1 & 3 \\ 1 & 4 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 5 \\ 10 \end{bmatrix}. \]
To solve the system we put the coefficient matrix (the matrix on the left-hand side of the equation) together with the vector on the right-hand side and get the so-called augmented matrix
\[ \left[ \begin{array}{ccc|c} 2 & 2 & 2 & 2 \\ 1 & 1 & 3 & 5 \\ 1 & 4 & 1 & 10 \end{array} \right]. \]
We apply the following three elementary operations.
(i) Swap two rows.
(ii) Multiply a row by a nonzero number.
(iii) Add a multiple of one row to another row.
We keep doing these operations until we get into a state where it is easy to read off the
answer, or until we get into a contradiction indicating no solution, for example if we come
up with an equation such as 0 = 1.
Let us work through the example. First multiply the first row by 1/2 to obtain
\[ \left[ \begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 1 & 1 & 3 & 5 \\ 1 & 4 & 1 & 10 \end{array} \right]. \]
Now subtract the first row from the second and third rows:
\[ \left[ \begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 0 & 2 & 4 \\ 0 & 3 & 0 & 9 \end{array} \right]. \]
Multiply the last row by 1/3 and the second row by 1/2:
\[ \left[ \begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 2 \\ 0 & 1 & 0 & 3 \end{array} \right]. \]
Swap rows 2 and 3:
\[ \left[ \begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 2 \end{array} \right]. \]
Subtract the last row from the first, then subtract the second row from the first:
\[ \left[ \begin{array}{ccc|c} 1 & 0 & 0 & -4 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 2 \end{array} \right]. \]
If we think about what equations this augmented matrix represents, we see that ๐‘ฅ1 = −4,
๐‘ฅ2 = 3, and ๐‘ฅ3 = 2. We try this solution in the original system and, voilà, it works!
Exercise 3.2.1: Check that the solution above really solves the given equations.
We write this equation in matrix notation as
\[ A\vec{x} = \vec{b}, \]
where $A$ is the matrix $\left[ \begin{smallmatrix} 2 & 2 & 2 \\ 1 & 1 & 3 \\ 1 & 4 & 1 \end{smallmatrix} \right]$ and $\vec{b}$ is the vector $\left[ \begin{smallmatrix} 2 \\ 5 \\ 10 \end{smallmatrix} \right]$. The solution can also be computed via the inverse,
\[ \vec{x} = A^{-1}A\vec{x} = A^{-1}\vec{b}. \]
It is possible that the solution is not unique, or that no solution exists. It is easy to tell if
a solution does not exist. If during the row reduction you come up with a row where all the
entries except the last one are zero (the last entry in a row corresponds to the right-hand
side of the equation), then the system is inconsistent and has no solution. For example, for
a system of 3 equations and 3 unknowns, if you find a row such as [ 0 0 0 | 1 ] in the
augmented matrix, you know the system is inconsistent. That row corresponds to 0 = 1.
You generally try to use row operations until the following conditions are satisfied. The
first (from the left) nonzero entry in each row is called the leading entry.
(i) The leading entry in any row is strictly to the right of the leading entry of the row
above.
(ii) Any zero rows are below all the nonzero rows.
(iii) All leading entries are 1.
(iv) All the entries above and below a leading entry are zero.
Such a matrix is said to be in reduced row echelon form. The variables corresponding to
columns with no leading entries are said to be free variables. Free variables mean that we can
pick those variables to be anything we want and then solve for the rest of the unknowns.
Example 3.2.1: The following augmented matrix is in reduced row echelon form.
\[ \left[ \begin{array}{ccc|c} 1 & 2 & 0 & 3 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{array} \right] \]
Suppose the variables are $x_1$, $x_2$, and $x_3$. Then $x_2$ is the free variable, $x_1 = 3 - 2x_2$, and $x_3 = 1$.
On the other hand if during the row reduction process you come up with the matrix
\[ \left[ \begin{array}{ccc|c} 1 & 2 & 13 & 3 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 3 \end{array} \right], \]
there is no need to go further. The last row corresponds to the equation $0x_1 + 0x_2 + 0x_3 = 3$, which is preposterous. Hence, no solution exists.
3.2.5 Computing the inverse
If the matrix ๐ด is square and there exists a unique solution ๐‘ฅ® to ๐ด ๐‘ฅ® = ๐‘® for any ๐‘® (there are
no free variables), then ๐ด is invertible. Multiplying both sides by ๐ด−1 , you can see that
® So it is useful to compute the inverse if you want to solve the equation for many
๐‘ฅ® = ๐ด−1 ๐‘.
®
different right-hand sides ๐‘.
We have a formula for the 2 × 2 inverse, but it is also not hard to compute inverses of
larger matrices. While we will not have too much occasion to compute inverses for larger
matrices than 2 × 2 by hand, let us touch on how to do it. Finding the inverse of ๐ด is
actually just solving a bunch of linear equations. If we can solve ๐ด ๐‘ฅ®๐‘˜ = ๐‘’®๐‘˜ where ๐‘’®๐‘˜ is the
vector with all zeros except a 1 at the ๐‘˜ th position, then the inverse is the matrix with the
columns ๐‘ฅ®๐‘˜ for ๐‘˜ = 1, 2, . . . , ๐‘› (exercise: why?). Therefore, to find the inverse we write a
larger ๐‘› × 2๐‘› augmented matrix [ ๐ด | ๐ผ ], where ๐ผ is the identity matrix. We then perform
row reduction. The reduced row echelon form of [ ๐ด | ๐ผ ] will be of the form [ ๐ผ | ๐ด−1 ] if
and only if ๐ด is invertible. We then just read off the inverse ๐ด−1 .
3.2.6 Exercises
Exercise 3.2.2: Solve $\left[ \begin{smallmatrix} 1 & 2 \\ 3 & 4 \end{smallmatrix} \right] \vec{x} = \left[ \begin{smallmatrix} 5 \\ 6 \end{smallmatrix} \right]$ by using matrix inverse.
Exercise 3.2.3: Compute determinant of $\left[ \begin{smallmatrix} 9 & -2 & -6 \\ -8 & 3 & 6 \\ 10 & -2 & -6 \end{smallmatrix} \right]$.
Exercise 3.2.4: Compute determinant of $\left[ \begin{smallmatrix} 1 & 2 & 3 & 1 \\ 4 & 0 & 5 & 0 \\ 6 & 0 & 7 & 0 \\ 8 & 0 & 10 & 1 \end{smallmatrix} \right]$. Hint: Expand along the proper row or column to make the calculations simpler.
Exercise 3.2.5: Compute inverse of $\left[ \begin{smallmatrix} 1 & 2 & 3 \\ 1 & 1 & 1 \\ 0 & 1 & 0 \end{smallmatrix} \right]$.
Exercise 3.2.6: For which $h$ is $\left[ \begin{smallmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & h \end{smallmatrix} \right]$ not invertible? Is there only one such $h$? Are there several? Infinitely many?
Exercise 3.2.7: For which $h$ is $\left[ \begin{smallmatrix} h & 1 & 1 \\ 0 & h & 0 \\ 1 & 1 & h \end{smallmatrix} \right]$ not invertible? Find all such $h$.
Exercise 3.2.8: Solve $\left[ \begin{smallmatrix} 9 & -2 & -6 \\ -8 & 3 & 6 \\ 10 & -2 & -6 \end{smallmatrix} \right] \vec{x} = \left[ \begin{smallmatrix} 1 \\ 2 \\ 3 \end{smallmatrix} \right]$.
Exercise 3.2.9: Solve $\left[ \begin{smallmatrix} 5 & 3 & 7 \\ 8 & 4 & 4 \\ 6 & 3 & 3 \end{smallmatrix} \right] \vec{x} = \left[ \begin{smallmatrix} 2 \\ 0 \\ 0 \end{smallmatrix} \right]$.
Exercise 3.2.10: Solve $\left[ \begin{smallmatrix} 3 & 2 & 3 & 0 \\ 3 & 3 & 3 & 3 \\ 0 & 2 & 4 & 2 \\ 2 & 3 & 4 & 3 \end{smallmatrix} \right] \vec{x} = \left[ \begin{smallmatrix} 2 \\ 0 \\ 4 \\ 1 \end{smallmatrix} \right]$.
Exercise 3.2.11: Find 3 nonzero $2 \times 2$ matrices $A$, $B$, and $C$ such that $AB = AC$ but $B \ne C$.
Exercise 3.2.101: Compute determinant of $\left[ \begin{smallmatrix} 1 & 1 & 1 \\ 2 & 3 & -5 \\ 1 & -1 & 0 \end{smallmatrix} \right]$.
Exercise 3.2.102: Find $t$ such that $\left[ \begin{smallmatrix} 1 & t \\ -1 & 2 \end{smallmatrix} \right]$ is not invertible.
Exercise 3.2.103: Solve $\left[ \begin{smallmatrix} 1 & 1 \\ 1 & -1 \end{smallmatrix} \right] \vec{x} = \left[ \begin{smallmatrix} 10 \\ 20 \end{smallmatrix} \right]$.
Exercise 3.2.104: Suppose $a$, $b$, $c$ are nonzero numbers. Let $M = \left[ \begin{smallmatrix} a & 0 \\ 0 & b \end{smallmatrix} \right]$, $N = \left[ \begin{smallmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{smallmatrix} \right]$.
a) Compute $M^{-1}$.
b) Compute $N^{-1}$.
3.3 Linear systems of ODEs
Note: less than 1 lecture, second part of §5.1 in [EP], §7.4 in [BD]
First let us talk about matrix- or vector-valued functions. Such a function is just a matrix or vector whose entries depend on some variable. If $t$ is the independent variable, we write a vector-valued function $\vec{x}(t)$ as
\[ \vec{x}(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{bmatrix}. \]
Similarly a matrix-valued function $A(t)$ is
\[ A(t) = \begin{bmatrix} a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\ a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t) \end{bmatrix}. \]
The derivative $A'(t)$ or $\frac{dA}{dt}$ is just the matrix-valued function whose $ij$th entry is $a_{ij}'(t)$.
Rules of differentiation of matrix-valued functions are similar to rules for normal functions. Let $A(t)$ and $B(t)$ be matrix-valued functions, let $c$ be a scalar, and let $C$ be a constant matrix. Then
\[ \bigl(A(t) + B(t)\bigr)' = A'(t) + B'(t), \qquad \bigl(A(t)B(t)\bigr)' = A'(t)B(t) + A(t)B'(t), \]
\[ \bigl(cA(t)\bigr)' = cA'(t), \qquad \bigl(CA(t)\bigr)' = CA'(t), \qquad \bigl(A(t)\,C\bigr)' = A'(t)\,C. \]
Note the order of the multiplication in the last two expressions.
A first order linear system of ODEs is a system that can be written as the vector equation
\[ \vec{x}'(t) = P(t)\vec{x}(t) + \vec{f}(t), \]
where $P(t)$ is a matrix-valued function, and $\vec{x}(t)$ and $\vec{f}(t)$ are vector-valued functions. We will often suppress the dependence on $t$ and only write $\vec{x}' = P\vec{x} + \vec{f}$. A solution of the system is a vector-valued function $\vec{x}$ satisfying the vector equation.
For example, the equations
\[ x_1' = 2t x_1 + e^t x_2 + t^2, \qquad x_2' = \frac{x_1}{t} - x_2 + e^t, \]
can be written as
\[ \vec{x}' = \begin{bmatrix} 2t & e^t \\ 1/t & -1 \end{bmatrix} \vec{x} + \begin{bmatrix} t^2 \\ e^t \end{bmatrix}. \]
We will mostly concentrate on equations that are not just linear, but are in fact constant
coefficient equations. That is, the matrix ๐‘ƒ will be constant; it will not depend on ๐‘ก.
When ๐‘“® = 0® (the zero vector), then we say the system is homogeneous. For homogeneous
linear systems we have the principle of superposition, just like for single homogeneous
equations.
Theorem 3.3.1 (Superposition). Let $\vec{x}' = P\vec{x}$ be a linear homogeneous system of ODEs. Suppose that $\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_n$ are $n$ solutions of the equation and $c_1, c_2, \ldots, c_n$ are any constants. Then
\[ \vec{x} = c_1\vec{x}_1 + c_2\vec{x}_2 + \cdots + c_n\vec{x}_n \tag{3.2} \]
is also a solution. Furthermore, if this is a system of $n$ equations ($P$ is $n \times n$), and $\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_n$ are linearly independent, then every solution $\vec{x}$ can be written as (3.2).
Linear independence for vector-valued functions is the same idea as for normal functions.
The vector-valued functions ๐‘ฅ®1 , ๐‘ฅ®2 , . . . , ๐‘ฅ®๐‘› are linearly independent when
๐‘1 ๐‘ฅ®1 + ๐‘2 ๐‘ฅ®2 + · · · + ๐‘ ๐‘› ๐‘ฅ®๐‘› = 0®
has only the solution ๐‘1 = ๐‘ 2 = · · · = ๐‘ ๐‘› = 0, where the equation must hold for all ๐‘ก.
Example 3.3.1: The functions $\vec{x}_1 = \left[ \begin{smallmatrix} t^2 \\ t \end{smallmatrix} \right]$, $\vec{x}_2 = \left[ \begin{smallmatrix} 0 \\ 1+t \end{smallmatrix} \right]$, $\vec{x}_3 = \left[ \begin{smallmatrix} -t^2 \\ 1 \end{smallmatrix} \right]$ are linearly dependent because $\vec{x}_1 + \vec{x}_3 = \vec{x}_2$, and this holds for all $t$. So $c_1 = 1$, $c_2 = -1$, and $c_3 = 1$ above will work.
On the other hand if we change the example just slightly to $\vec{x}_1 = \left[ \begin{smallmatrix} t^2 \\ t \end{smallmatrix} \right]$, $\vec{x}_2 = \left[ \begin{smallmatrix} 0 \\ t \end{smallmatrix} \right]$, $\vec{x}_3 = \left[ \begin{smallmatrix} -t^2 \\ 1 \end{smallmatrix} \right]$, then the functions are linearly independent. First write $c_1\vec{x}_1 + c_2\vec{x}_2 + c_3\vec{x}_3 = \vec{0}$ and note that it has to hold for all $t$. We get that
\[ c_1\vec{x}_1 + c_2\vec{x}_2 + c_3\vec{x}_3 = \begin{bmatrix} c_1 t^2 - c_3 t^2 \\ c_1 t + c_2 t + c_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}. \]
In other words $c_1 t^2 - c_3 t^2 = 0$ and $c_1 t + c_2 t + c_3 = 0$. If we set $t = 0$, then the second equation becomes $c_3 = 0$. But then the first equation becomes $c_1 t^2 = 0$ for all $t$, and so $c_1 = 0$. Thus the second equation is just $c_2 t = 0$, which means $c_2 = 0$. So $c_1 = c_2 = c_3 = 0$ is the only solution and $\vec{x}_1$, $\vec{x}_2$, and $\vec{x}_3$ are linearly independent.
The linear combination $c_1\vec{x}_1 + c_2\vec{x}_2 + \cdots + c_n\vec{x}_n$ could always be written as
\[ X(t)\,\vec{c}, \]
where $X(t)$ is the matrix with columns $\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_n$, and $\vec{c}$ is the column vector with entries $c_1, c_2, \ldots, c_n$. Assuming that $\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_n$ are linearly independent, the matrix-valued function $X(t)$ is called a fundamental matrix, or a fundamental matrix solution.
To solve nonhomogeneous first order linear systems, we use the same technique as we
applied to solve single linear nonhomogeneous equations.
Theorem 3.3.2. Let $\vec{x}' = P\vec{x} + \vec{f}$ be a linear system of ODEs. Suppose $\vec{x}_p$ is one particular solution. Then every solution can be written as
\[ \vec{x} = \vec{x}_c + \vec{x}_p, \]
where $\vec{x}_c$ is a solution to the associated homogeneous equation ($\vec{x}' = P\vec{x}$).
The procedure for systems is the same as for single equations. We find a particular
solution to the nonhomogeneous equation, then we find the general solution to the
associated homogeneous equation, and finally we add the two together.
Alright, suppose you have found the general solution of $\vec{x}' = P\vec{x} + \vec{f}$. Next suppose you are given an initial condition of the form
\[ \vec{x}(t_0) = \vec{b} \]
for some fixed $t_0$ and a constant vector $\vec{b}$. Let $X(t)$ be a fundamental matrix solution of the associated homogeneous equation (i.e. columns of $X(t)$ are solutions). The general solution can be written as
\[ \vec{x}(t) = X(t)\,\vec{c} + \vec{x}_p(t). \]
We are seeking a vector $\vec{c}$ such that
\[ \vec{b} = \vec{x}(t_0) = X(t_0)\,\vec{c} + \vec{x}_p(t_0). \]
In other words, we are solving for $\vec{c}$ the nonhomogeneous system of linear equations
\[ X(t_0)\,\vec{c} = \vec{b} - \vec{x}_p(t_0). \]
Example 3.3.2: In § 3.1 we solved the system
\[ x_1' = x_1, \qquad x_2' = x_1 - x_2, \]
with initial conditions $x_1(0) = 1$, $x_2(0) = 2$. Let us consider this problem in the language of this section.
The system is homogeneous, so $\vec{f}(t) = \vec{0}$. We write the system and the initial conditions as
\[ \vec{x}' = \begin{bmatrix} 1 & 0 \\ 1 & -1 \end{bmatrix} \vec{x}, \qquad \vec{x}(0) = \begin{bmatrix} 1 \\ 2 \end{bmatrix}. \]
We found the general solution is $x_1 = c_1 e^t$ and $x_2 = \frac{c_1}{2} e^t + c_2 e^{-t}$. Letting $c_1 = 1$ and $c_2 = 0$, we obtain the solution $\left[ \begin{smallmatrix} e^t \\ (1/2)e^t \end{smallmatrix} \right]$. Letting $c_1 = 0$ and $c_2 = 1$, we obtain $\left[ \begin{smallmatrix} 0 \\ e^{-t} \end{smallmatrix} \right]$. These two solutions are linearly independent, as can be seen by setting $t = 0$ and noting that the resulting constant vectors are linearly independent. In matrix notation, a fundamental matrix solution is, therefore,
\[ X(t) = \begin{bmatrix} e^t & 0 \\ \frac{1}{2}e^t & e^{-t} \end{bmatrix}. \]
To solve the initial value problem we solve for $\vec{c}$ in the equation
\[ X(0)\,\vec{c} = \vec{b}, \]
or in other words,
\[ \begin{bmatrix} 1 & 0 \\ \frac{1}{2} & 1 \end{bmatrix} \vec{c} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}. \]
A single elementary row operation shows $\vec{c} = \left[ \begin{smallmatrix} 1 \\ 3/2 \end{smallmatrix} \right]$. Our solution is
\[ \vec{x}(t) = X(t)\,\vec{c} = \begin{bmatrix} e^t & 0 \\ \frac{1}{2}e^t & e^{-t} \end{bmatrix} \begin{bmatrix} 1 \\ \frac{3}{2} \end{bmatrix} = \begin{bmatrix} e^t \\ \frac{1}{2}e^t + \frac{3}{2}e^{-t} \end{bmatrix}. \]
This new solution agrees with our previous solution from § 3.1.
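The last computation is a plain linear solve, so it is a one-liner numerically. A minimal sketch (Python with NumPy, not part of the text):

import numpy as np

# A sketch, not from the text: solve X(0) c = b and evaluate x(t) = X(t) c.
X = lambda t: np.array([[np.exp(t), 0.0],
                        [0.5 * np.exp(t), np.exp(-t)]])
b = np.array([1.0, 2.0])
c = np.linalg.solve(X(0.0), b)   # expect [1, 1.5]
print(c)
print(X(1.0) @ c)                # x(1) = [e, e/2 + 1.5/e]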
3.3.1 Exercises
Exercise 3.3.1: Write the system $x_1' = 2x_1 - 3tx_2 + \sin t$, $x_2' = e^t x_1 + 3x_2 + \cos t$ in the form $\vec{x}' = P(t)\vec{x} + \vec{f}(t)$.
Exercise 3.3.2:
a) Verify that the system $\vec{x}' = \left[ \begin{smallmatrix} 1 & 3 \\ 3 & 1 \end{smallmatrix} \right] \vec{x}$ has the two solutions $\left[ \begin{smallmatrix} 1 \\ 1 \end{smallmatrix} \right] e^{4t}$ and $\left[ \begin{smallmatrix} 1 \\ -1 \end{smallmatrix} \right] e^{-2t}$.
b) Write down the general solution.
c) Write down the general solution in the form $x_1 = {?}$, $x_2 = {?}$ (i.e. write down a formula for each element of the solution).
Exercise 3.3.3: Verify that $\left[ \begin{smallmatrix} 1 \\ 1 \end{smallmatrix} \right] e^t$ and $\left[ \begin{smallmatrix} 1 \\ -1 \end{smallmatrix} \right] e^t$ are linearly independent. Hint: Just plug in $t = 0$.
Exercise 3.3.4: Verify that $\left[ \begin{smallmatrix} 1 \\ 1 \\ 0 \end{smallmatrix} \right] e^t$ and $\left[ \begin{smallmatrix} 1 \\ -1 \\ 1 \end{smallmatrix} \right] e^t$ and $\left[ \begin{smallmatrix} 1 \\ -1 \\ 1 \end{smallmatrix} \right] e^{2t}$ are linearly independent. Hint: You must be a bit more tricky than in the previous exercise.
Exercise 3.3.5: Verify that $\left[ \begin{smallmatrix} t \\ t^2 \end{smallmatrix} \right]$ and $\left[ \begin{smallmatrix} t^3 \\ t^4 \end{smallmatrix} \right]$ are linearly independent.
Exercise 3.3.6: Take the system $x_1' + x_2' = x_1$, $x_1' - x_2' = x_2$.
a) Write it in the form $A\vec{x}' = B\vec{x}$ for matrices $A$ and $B$.
b) Compute $A^{-1}$ and use that to write the system in the form $\vec{x}' = P\vec{x}$.
Exercise 3.3.101: Are $\left[ \begin{smallmatrix} e^{2t} \\ e^t \end{smallmatrix} \right]$ and $\left[ \begin{smallmatrix} e^t \\ e^{2t} \end{smallmatrix} \right]$ linearly independent? Justify.
Exercise 3.3.102: Are $\left[ \begin{smallmatrix} \cosh(t) \\ 1 \end{smallmatrix} \right]$, $\left[ \begin{smallmatrix} e^t \\ 1 \end{smallmatrix} \right]$, and $\left[ \begin{smallmatrix} e^{-t} \\ 1 \end{smallmatrix} \right]$ linearly independent? Justify.
Exercise 3.3.103: Write $x' = 3x - y + e^t$, $y' = tx$ in matrix notation.
Exercise 3.3.104:
a) Write $x_1' = 2tx_2$, $x_2' = 2tx_2$ in matrix notation.
b) Solve and write the solution in matrix notation.
3.4 Eigenvalue method
Note: 2 lectures, §5.2 in [EP], part of §7.3, §7.5, and §7.6 in [BD]
In this section we will learn how to solve linear homogeneous constant coefficient systems of ODEs by the eigenvalue method. Suppose we have such a system
\[ \vec{x}' = P\vec{x}, \]
where $P$ is a constant square matrix. We wish to adapt the method for the single constant coefficient equation by trying the function $e^{\lambda t}$. However, $\vec{x}$ is a vector. So we try $\vec{x} = \vec{v}\, e^{\lambda t}$, where $\vec{v}$ is an arbitrary constant vector. We plug this $\vec{x}$ into the equation to get
\[ \underbrace{\lambda \vec{v}\, e^{\lambda t}}_{\vec{x}'} = \underbrace{P\vec{v}\, e^{\lambda t}}_{P\vec{x}}. \]
We divide by $e^{\lambda t}$ and notice that we are looking for a scalar $\lambda$ and a vector $\vec{v}$ that satisfy the equation
\[ \lambda \vec{v} = P\vec{v}. \]
To solve this equation we need a little bit more linear algebra, which we now review.
3.4.1 Eigenvalues and eigenvectors of a matrix
Let ๐ด be a constant square matrix. Suppose there is a scalar ๐œ† and a nonzero vector ๐‘ฃ® such
that
๐ด®
๐‘ฃ = ๐œ†®
๐‘ฃ.
We call ๐œ† an eigenvalue of ๐ด and we call ๐‘ฃ® a corresponding eigenvector.
Example 3.4.1: The matrix $\left[ \begin{smallmatrix} 2 & 1 \\ 0 & 1 \end{smallmatrix} \right]$ has an eigenvalue $\lambda = 2$ with a corresponding eigenvector $\left[ \begin{smallmatrix} 1 \\ 0 \end{smallmatrix} \right]$ as
\[ \begin{bmatrix} 2 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \end{bmatrix} = 2 \begin{bmatrix} 1 \\ 0 \end{bmatrix}. \]
Let us see how to compute eigenvalues for any matrix. Rewrite the equation for an eigenvalue as
\[ (A - \lambda I)\vec{v} = \vec{0}. \]
This equation has a nonzero solution $\vec{v}$ only if $A - \lambda I$ is not invertible. Were it invertible, we could write $(A - \lambda I)^{-1}(A - \lambda I)\vec{v} = (A - \lambda I)^{-1}\vec{0}$, which implies $\vec{v} = \vec{0}$. Therefore, $A$ has the eigenvalue $\lambda$ if and only if $\lambda$ solves the equation
\[ \det(A - \lambda I) = 0. \]
Consequently, we can find an eigenvalue of $A$ without finding a corresponding eigenvector at the same time. An eigenvector will have to be found later, once $\lambda$ is known.
Example 3.4.2: Find all eigenvalues of $\left[ \begin{smallmatrix} 2 & 1 & 1 \\ 1 & 2 & 0 \\ 0 & 0 & 2 \end{smallmatrix} \right]$.
We write
\[ \det\left( \begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} - \lambda \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \right) = \det\left( \begin{bmatrix} 2-\lambda & 1 & 1 \\ 1 & 2-\lambda & 0 \\ 0 & 0 & 2-\lambda \end{bmatrix} \right) = (2-\lambda)\bigl((2-\lambda)^2 - 1\bigr) = -(\lambda - 1)(\lambda - 2)(\lambda - 3). \]
So the eigenvalues are $\lambda = 1$, $\lambda = 2$, and $\lambda = 3$.
For an ๐‘› × ๐‘› matrix, the polynomial we get by computing det(๐ด − ๐œ†๐ผ) is of degree ๐‘›, and
hence in general, we have ๐‘› eigenvalues. Some may be repeated, some may be complex.
To find an eigenvector corresponding to an eigenvalue $\lambda$, we write
\[ (A - \lambda I)\vec{v} = \vec{0}, \]
and solve for a nontrivial (nonzero) vector $\vec{v}$. If $\lambda$ is an eigenvalue, there will be at least one free variable, and so for each distinct eigenvalue $\lambda$ we can always find an eigenvector.
Example 3.4.3: Find an eigenvector of $\left[ \begin{smallmatrix} 2 & 1 & 1 \\ 1 & 2 & 0 \\ 0 & 0 & 2 \end{smallmatrix} \right]$ corresponding to the eigenvalue $\lambda = 3$.
We write
\[ (A - \lambda I)\vec{v} = \left( \begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} - 3 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \right) \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} -1 & 1 & 1 \\ 1 & -1 & 0 \\ 0 & 0 & -1 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \vec{0}. \]
It is easy to solve this system of linear equations. We write down the augmented matrix
\[ \left[ \begin{array}{ccc|c} -1 & 1 & 1 & 0 \\ 1 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \end{array} \right], \]
and perform row operations (exercise: which ones?) until we get
\[ \left[ \begin{array}{ccc|c} 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right]. \]
The entries of $\vec{v}$ have to satisfy the equations $v_1 - v_2 = 0$, $v_3 = 0$, and $v_2$ is a free variable. We can pick $v_2$ to be arbitrary (but nonzero), let $v_1 = v_2$, and of course $v_3 = 0$. For example, if we pick $v_2 = 1$, then $\vec{v} = \left[ \begin{smallmatrix} 1 \\ 1 \\ 0 \end{smallmatrix} \right]$. Let us verify that $\vec{v}$ really is an eigenvector corresponding to $\lambda = 3$:
\[ \begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 3 \\ 3 \\ 0 \end{bmatrix} = 3 \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}. \]
Yay! It worked.
Exercise 3.4.1 (easy): Are eigenvectors unique? Can you find a different eigenvector for $\lambda = 3$ in the example above? How are the two eigenvectors related?
Exercise 3.4.2: When the matrix is $2 \times 2$ you do not need to do row operations when computing an eigenvector; you can read it off from $A - \lambda I$ (if you have computed the eigenvalues correctly). Can you see why? Explain. Try it for the matrix $\left[ \begin{smallmatrix} 2 & 1 \\ 1 & 2 \end{smallmatrix} \right]$.
3.4.2 The eigenvalue method with distinct real eigenvalues
OK. We have the system of equations
\[ \vec{x}' = P\vec{x}. \]
We find the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ of the matrix $P$, and corresponding eigenvectors $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$. Now we notice that the functions $\vec{v}_1 e^{\lambda_1 t}, \vec{v}_2 e^{\lambda_2 t}, \ldots, \vec{v}_n e^{\lambda_n t}$ are solutions of the system of equations, and hence $\vec{x} = c_1\vec{v}_1 e^{\lambda_1 t} + c_2\vec{v}_2 e^{\lambda_2 t} + \cdots + c_n\vec{v}_n e^{\lambda_n t}$ is a solution.
Theorem 3.4.1. Take $\vec{x}' = P\vec{x}$. If $P$ is an $n \times n$ constant matrix that has $n$ distinct real eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, then there exist $n$ linearly independent corresponding eigenvectors $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$, and the general solution to $\vec{x}' = P\vec{x}$ can be written as
\[ \vec{x} = c_1\vec{v}_1 e^{\lambda_1 t} + c_2\vec{v}_2 e^{\lambda_2 t} + \cdots + c_n\vec{v}_n e^{\lambda_n t}. \]
The corresponding fundamental matrix solution is
\[ X(t) = \begin{bmatrix} \vec{v}_1 e^{\lambda_1 t} & \vec{v}_2 e^{\lambda_2 t} & \cdots & \vec{v}_n e^{\lambda_n t} \end{bmatrix}. \]
That is, $X(t)$ is the matrix whose $j$th column is $\vec{v}_j e^{\lambda_j t}$.
Example 3.4.4: Consider the system
\[ \vec{x}' = \begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} \vec{x}. \]
Find the general solution.
Earlier, we found the eigenvalues are 1, 2, 3. We found the eigenvector $\left[ \begin{smallmatrix} 1 \\ 1 \\ 0 \end{smallmatrix} \right]$ for the eigenvalue 3. Similarly we find the eigenvector $\left[ \begin{smallmatrix} 1 \\ -1 \\ 0 \end{smallmatrix} \right]$ for the eigenvalue 1, and $\left[ \begin{smallmatrix} 0 \\ 1 \\ -1 \end{smallmatrix} \right]$ for the eigenvalue 2 (exercise: check). Hence our general solution is
\[ \vec{x} = c_1 \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix} e^t + c_2 \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix} e^{2t} + c_3 \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} e^{3t} = \begin{bmatrix} c_1 e^t + c_3 e^{3t} \\ -c_1 e^t + c_2 e^{2t} + c_3 e^{3t} \\ -c_2 e^{2t} \end{bmatrix}. \]
In terms of a fundamental matrix solution,
\[ \vec{x} = X(t)\,\vec{c} = \begin{bmatrix} e^t & 0 & e^{3t} \\ -e^t & e^{2t} & e^{3t} \\ 0 & -e^{2t} & 0 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}. \]
Exercise 3.4.3: Check that this ๐‘ฅ® really solves the system.
Note: If we write a single homogeneous linear constant coefficient ๐‘› th order equation
as a first order system (as we did in § 3.1), then the eigenvalue equation
det(๐‘ƒ − ๐œ†๐ผ) = 0
is essentially the same as the characteristic equation we got in § 2.2 and § 2.3.
3.4.3 Complex eigenvalues
A matrix may very well have complex eigenvalues even if all the entries are real. Take, for example,
\[ \vec{x}' = \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix} \vec{x}. \]
Let us compute the eigenvalues of the matrix $P = \left[ \begin{smallmatrix} 1 & 1 \\ -1 & 1 \end{smallmatrix} \right]$:
\[ \det(P - \lambda I) = \det\left( \begin{bmatrix} 1-\lambda & 1 \\ -1 & 1-\lambda \end{bmatrix} \right) = (1-\lambda)^2 + 1 = \lambda^2 - 2\lambda + 2 = 0. \]
Thus ๐œ† = 1 ± ๐‘–. Corresponding eigenvectors are also complex. Start with ๐œ† = 1 − ๐‘–.
®
๐‘ƒ − (1 − ๐‘–)๐ผ ๐‘ฃ® = 0,
๐‘– 1
®
๐‘ฃ® = 0.
−1 ๐‘–
The equations ๐‘–๐‘ฃ1 + ๐‘ฃ 2 = 0 and −๐‘ฃ 1 + ๐‘–๐‘ฃ2 = 0 are multiples of each other. So we only need
to consider one of them. After picking ๐‘ฃ2 = 1, for example, we have an eigenvector ๐‘ฃ® = 1๐‘– .
In similar fashion we find that −๐‘–
1 is an eigenvector corresponding to the eigenvalue 1 + ๐‘–.
We could write the solution as
\[ \vec{x} = c_1 \begin{bmatrix} i \\ 1 \end{bmatrix} e^{(1-i)t} + c_2 \begin{bmatrix} -i \\ 1 \end{bmatrix} e^{(1+i)t} = \begin{bmatrix} c_1 i e^{(1-i)t} - c_2 i e^{(1+i)t} \\ c_1 e^{(1-i)t} + c_2 e^{(1+i)t} \end{bmatrix}. \]
We would then need to look for complex values $c_1$ and $c_2$ to solve any initial conditions. It
We would then need to look for complex values ๐‘1 and ๐‘2 to solve any initial conditions. It
is perhaps not completely clear that we get a real solution. After solving for ๐‘ 1 and ๐‘ 2 , we
could use Euler’s formula and do the whole song and dance we did before, but we will not.
We will apply the formula in a smarter way first to find independent real solutions.
We claim that we did not have to look for a second eigenvector (nor for the second
eigenvalue). All complex eigenvalues come in pairs (because the matrix ๐‘ƒ is real).
First a small detour. The real part of a complex number $z$ can be computed as $\frac{z+\bar{z}}{2}$, where the bar above $z$ means $\overline{a+ib} = a - ib$. This operation is called the complex conjugate. If $a$ is a real number, then $\bar{a} = a$. Similarly we bar whole vectors or matrices by taking the complex conjugate of every entry. Suppose a matrix $P$ is real. Then $\overline{P} = P$, and so $\overline{P\vec{x}} = \overline{P}\,\overline{\vec{x}} = P\,\overline{\vec{x}}$. Also the complex conjugate of 0 is still 0; therefore,
\[ \vec{0} = \overline{\vec{0}} = \overline{(P - \lambda I)\vec{v}} = (P - \bar{\lambda} I)\overline{\vec{v}}. \]
In other words, if ๐œ† = ๐‘Ž + ๐‘–๐‘ is an eigenvalue, then so is ๐œ†¯ = ๐‘Ž − ๐‘–๐‘. And if ๐‘ฃ® is an eigenvector
corresponding to the eigenvalue ๐œ†, then ๐‘ฃ® is an eigenvector corresponding to the eigenvalue
¯
๐œ†.
Suppose ๐‘Ž + ๐‘–๐‘ is a complex eigenvalue of ๐‘ƒ, and ๐‘ฃ® is a corresponding eigenvector. Then
๐‘ฅ®1 = ๐‘ฃ® ๐‘’ (๐‘Ž+๐‘–๐‘)๐‘ก
is a solution (complex-valued) of ๐‘ฅ®0 = ๐‘ƒ ๐‘ฅ®. Euler’s formula shows that ๐‘’ ๐‘Ž+๐‘–๐‘ = ๐‘’ ๐‘Ž−๐‘–๐‘ , and so
๐‘ฅ®2 = ๐‘ฅ®1 = ๐‘ฃ® ๐‘’ (๐‘Ž−๐‘–๐‘)๐‘ก
is also a solution. As ๐‘ฅ®1 and ๐‘ฅ®2 are solutions, the function
๐‘ฅ®3 = Re ๐‘ฅ®1 = Re ๐‘ฃ® ๐‘’ (๐‘Ž+๐‘–๐‘)๐‘ก =
๐‘ฅ®1 + ๐‘ฅ®1 ๐‘ฅ®1 + ๐‘ฅ®2 1
1
=
= ๐‘ฅ®1 + ๐‘ฅ®2
2
2
2
2
is also a solution. And $\vec{x}_3$ is real-valued! Similarly, as $\operatorname{Im} z = \frac{z - \bar{z}}{2i}$ is the imaginary part, we find that
$$\vec{x}_4 = \operatorname{Im} \vec{x}_1 = \frac{\vec{x}_1 - \overline{\vec{x}_1}}{2i} = \frac{\vec{x}_1 - \vec{x}_2}{2i}$$
is also a real-valued solution. It turns out that ๐‘ฅ®3 and ๐‘ฅ®4 are linearly independent. We will
use Euler’s formula to separate out the real and imaginary part.
Returning to our problem,
$$\vec{x}_1 = \begin{bmatrix} i \\ 1 \end{bmatrix} e^{(1-i)t} = \begin{bmatrix} i \\ 1 \end{bmatrix} \bigl( e^t \cos t - i e^t \sin t \bigr) = \begin{bmatrix} i e^t \cos t + e^t \sin t \\ e^t \cos t - i e^t \sin t \end{bmatrix} = \begin{bmatrix} e^t \sin t \\ e^t \cos t \end{bmatrix} + i \begin{bmatrix} e^t \cos t \\ -e^t \sin t \end{bmatrix}.$$
Then
$$\operatorname{Re} \vec{x}_1 = \begin{bmatrix} e^t \sin t \\ e^t \cos t \end{bmatrix} \qquad \text{and} \qquad \operatorname{Im} \vec{x}_1 = \begin{bmatrix} e^t \cos t \\ -e^t \sin t \end{bmatrix}$$
are the two real-valued linearly independent solutions we seek.
Exercise 3.4.4: Check that these really are solutions.
The general solution is
$$\vec{x} = c_1 \begin{bmatrix} e^t \sin t \\ e^t \cos t \end{bmatrix} + c_2 \begin{bmatrix} e^t \cos t \\ -e^t \sin t \end{bmatrix} = \begin{bmatrix} c_1 e^t \sin t + c_2 e^t \cos t \\ c_1 e^t \cos t - c_2 e^t \sin t \end{bmatrix}.$$
This solution is real-valued for real ๐‘ 1 and ๐‘2 . At this point, we would solve for any initial
conditions we may have to find ๐‘1 and ๐‘ 2 .
Let us summarize the discussion as a theorem.
Theorem 3.4.2. Let $P$ be a real-valued constant matrix. If $P$ has a complex eigenvalue $a + ib$ and a corresponding eigenvector $\vec{v}$, then $P$ also has a complex eigenvalue $a - ib$ with a corresponding eigenvector $\overline{\vec{v}}$. Furthermore, $\vec{x}' = P\vec{x}$ has two linearly independent real-valued solutions
$$\vec{x}_1 = \operatorname{Re}\bigl(\vec{v}\, e^{(a+ib)t}\bigr) \qquad \text{and} \qquad \vec{x}_2 = \operatorname{Im}\bigl(\vec{v}\, e^{(a+ib)t}\bigr).$$
For each pair of complex eigenvalues ๐‘Ž + ๐‘–๐‘ and ๐‘Ž − ๐‘–๐‘, we get two real-valued linearly
independent solutions. We then go on to the next eigenvalue, which is either a real
eigenvalue or another complex eigenvalue pair. If we have ๐‘› distinct eigenvalues (real
or complex), then we end up with ๐‘› linearly independent solutions. If we had only two
equations (๐‘› = 2) as in the example above, then once we found two solutions we are
finished, and our general solution is
๐‘ฅ® = ๐‘ 1 ๐‘ฅ®1 + ๐‘2 ๐‘ฅ®2 = ๐‘1 Re ๐‘ฃ® ๐‘’ (๐‘Ž+๐‘–๐‘)๐‘ก + ๐‘ 2 Im ๐‘ฃ® ๐‘’ (๐‘Ž+๐‘–๐‘)๐‘ก .
We can now find a real-valued general solution to any homogeneous system where the
matrix has distinct eigenvalues. When we have repeated eigenvalues, matters get a bit
more complicated and we will look at that situation in § 3.7.
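None of the code in these notes comes from the original text; the following is an optional numerical sketch (assuming Python with numpy is available) of the recipe above: compute the complex conjugate eigenvalue pair of the example matrix and peel off real and imaginary parts to get real-valued solutions.

import numpy as np

P = np.array([[1.0, 1.0],
              [-1.0, 1.0]])

# The eigenvalues form the conjugate pair 1 +/- i, as computed above.
lam, V = np.linalg.eig(P)
lam0, v0 = lam[0], V[:, 0]

def x3(t):
    # Real part of v e^{lambda t}: one real-valued solution.
    return np.real(v0 * np.exp(lam0 * t))

def x4(t):
    # Imaginary part of v e^{lambda t}: the other real-valued solution.
    return np.imag(v0 * np.exp(lam0 * t))

# Sanity check that x' = P x, comparing a centered finite difference
# against P x at a sample time.
t, h = 0.7, 1e-6
for x in (x3, x4):
    deriv = (x(t + h) - x(t - h)) / (2 * h)
    print(np.allclose(deriv, P @ x(t), atol=1e-5))  # True, True

Any eigenvector numpy returns works here; it will differ from the hand-picked one above by a complex multiple, which only reshuffles the two real solutions.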
3.4.4 Exercises
Exercise 3.4.5 (easy): Let $A$ be a $3 \times 3$ matrix with an eigenvalue of 3 and a corresponding eigenvector $\vec{v} = \left[\begin{smallmatrix} 1 \\ -1 \\ 3 \end{smallmatrix}\right]$. Find $A\vec{v}$.
Exercise 3.4.6:
a) Find the general solution of $x_1' = 2x_1$, $x_2' = 3x_2$ using the eigenvalue method (first write the system in the form $\vec{x}' = A\vec{x}$).
b) Solve the system by solving each equation separately and verify you get the same general solution.
Exercise 3.4.7: Find the general solution of $x_1' = 3x_1 + x_2$, $x_2' = 2x_1 + 4x_2$ using the eigenvalue method.
Exercise 3.4.8: Find the general solution of $x_1' = x_1 - 2x_2$, $x_2' = 2x_1 + x_2$ using the eigenvalue method. Do not use complex exponentials in your solution.
Exercise 3.4.9:
a) Compute eigenvalues and eigenvectors of $A = \left[\begin{smallmatrix} 9 & -2 & -6 \\ -8 & 3 & 6 \\ 10 & -2 & -6 \end{smallmatrix}\right]$.
b) Find the general solution of $\vec{x}' = A\vec{x}$.
Exercise 3.4.10: Compute eigenvalues and eigenvectors of $\left[\begin{smallmatrix} -2 & -1 & -1 \\ 3 & 2 & 1 \\ -3 & -1 & 0 \end{smallmatrix}\right]$.
Exercise 3.4.11: Let $a$, $b$, $c$, $d$, $e$, $f$ be numbers. Find the eigenvalues of $\left[\begin{smallmatrix} a & b & c \\ 0 & d & e \\ 0 & 0 & f \end{smallmatrix}\right]$.
Exercise 3.4.101:
a) Compute eigenvalues and eigenvectors of $A = \left[\begin{smallmatrix} 1 & 0 & 3 \\ -1 & 0 & 1 \\ 2 & 0 & 2 \end{smallmatrix}\right]$.
b) Solve the system $\vec{x}' = A\vec{x}$.
Exercise 3.4.102:
a) Compute eigenvalues and eigenvectors of $A = \left[\begin{smallmatrix} 1 & 1 \\ -1 & 0 \end{smallmatrix}\right]$.
b) Solve the system $\vec{x}' = A\vec{x}$.
Exercise 3.4.103: Solve $x_1' = x_2$, $x_2' = x_1$ using the eigenvalue method.
Exercise 3.4.104: Solve $x_1' = x_2$, $x_2' = -x_1$ using the eigenvalue method.
3.5 Two-dimensional systems and their vector fields
Note: 1 lecture, part of §6.2 in [EP], parts of §7.5 and §7.6 in [BD]
Let us take a moment to talk about constant coefficient linear homogeneous systems in the plane. Much intuition can be obtained by studying this simple case. We use coordinates $(x, y)$ for the plane as usual, and suppose $P = \left[\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right]$ is a $2 \times 2$ matrix. Consider the system
$$\begin{bmatrix} x \\ y \end{bmatrix}' = P \begin{bmatrix} x \\ y \end{bmatrix} \qquad \text{or} \qquad \begin{bmatrix} x \\ y \end{bmatrix}' = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}. \tag{3.3}$$
The system is autonomous (compare this section to § 1.6) and so we can draw a vector field
(see the end of § 3.1). We will be able to visually tell what the vector field looks like and how
the solutions behave, once we find the eigenvalues and eigenvectors of the matrix ๐‘ƒ. For
this section, we assume that ๐‘ƒ has two eigenvalues and two corresponding eigenvectors.
Case 1. Suppose that the eigenvalues of $P$ are real and positive. We find two corresponding eigenvectors and plot them in the plane. For example, take the matrix $\left[\begin{smallmatrix} 1 & 1 \\ 0 & 2 \end{smallmatrix}\right]$. The eigenvalues are 1 and 2 and corresponding eigenvectors are $\left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]$ and $\left[\begin{smallmatrix} 1 \\ 1 \end{smallmatrix}\right]$. See Figure 3.4.
Let (๐‘ฅ, ๐‘ฆ) be a point on the line deter๐‘ฃ® for an eigenvalue
mined by an
๐‘ฅeigenvector
๐œ†. That is, ๐‘ฆ = ๐›ผ®
๐‘ฃ for some scalar ๐›ผ. Then
$$\begin{bmatrix} x \\ y \end{bmatrix}' = P \begin{bmatrix} x \\ y \end{bmatrix} = P(\alpha \vec{v}) = \alpha (P \vec{v}) = \alpha \lambda \vec{v}.$$
The derivative is a multiple of $\vec{v}$ and hence points along the line determined by $\vec{v}$. As $\lambda > 0$, the derivative points in the direction of $\vec{v}$ when $\alpha$ is positive and in the opposite direction when $\alpha$ is negative. We draw the lines determined by the eigenvectors, and we draw arrows on the lines to indicate the directions. See Figure 3.5 on the next page.

Figure 3.4: Eigenvectors of $P$.
We fill in the rest of the arrows for the vector field and we also draw a few solutions. See
Figure 3.6 on the following page. The picture looks like a source with arrows coming out
from the origin. Hence we call this type of picture a source or sometimes an unstable node.
Case 2. Suppose both eigenvalues are negative. For example, take the negation of the matrix in case 1, $\left[\begin{smallmatrix} -1 & -1 \\ 0 & -2 \end{smallmatrix}\right]$. The eigenvalues are $-1$ and $-2$ and corresponding eigenvectors are the same, $\left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]$ and $\left[\begin{smallmatrix} 1 \\ 1 \end{smallmatrix}\right]$. The calculation and the picture are almost the same. The only difference is that the eigenvalues are negative and hence all arrows are reversed. We get the picture in Figure 3.7 on the next page. We call this kind of picture a sink or a stable node.

Case 3. Suppose one eigenvalue is positive and one is negative. For example the matrix $\left[\begin{smallmatrix} 1 & 1 \\ 0 & -2 \end{smallmatrix}\right]$. The eigenvalues are $1$ and $-2$ and corresponding eigenvectors are $\left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]$ and $\left[\begin{smallmatrix} 1 \\ -3 \end{smallmatrix}\right]$.
Figure 3.5: Eigenvectors of $P$ with directions.

Figure 3.6: Example source vector field with eigenvectors and solutions.

Figure 3.7: Example sink vector field with eigenvectors and solutions.

Figure 3.8: Example saddle vector field with eigenvectors and solutions.
We reverse the arrows on one line (corresponding to the negative eigenvalue) and we
obtain the picture in Figure 3.8. We call this picture a saddle point.
For the next three cases we will assume the eigenvalues are complex. In this case the
eigenvectors are also complex and we cannot just plot them in the plane.
Case 4. Suppose the eigenvalues are purely imaginary, that is, $\pm ib$. For example, let $P = \left[\begin{smallmatrix} 0 & 1 \\ -4 & 0 \end{smallmatrix}\right]$. The eigenvalues are $\pm 2i$ and corresponding eigenvectors are $\left[\begin{smallmatrix} 1 \\ 2i \end{smallmatrix}\right]$ and $\left[\begin{smallmatrix} 1 \\ -2i \end{smallmatrix}\right]$. Consider the eigenvalue $2i$ and its eigenvector $\left[\begin{smallmatrix} 1 \\ 2i \end{smallmatrix}\right]$. The real and imaginary parts of $\vec{v}\, e^{2it}$ are
$$\operatorname{Re} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{2it} = \begin{bmatrix} \cos(2t) \\ -2\sin(2t) \end{bmatrix}, \qquad \operatorname{Im} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{2it} = \begin{bmatrix} \sin(2t) \\ 2\cos(2t) \end{bmatrix}.$$
We can take any linear combination of them to get other solutions, which one we take
depends on the initial conditions. Now note that the real part is a parametric equation for
an ellipse. Same with the imaginary part and in fact any linear combination of the two.
This is what happens in general when the eigenvalues are purely imaginary. So when the
eigenvalues are purely imaginary, we get ellipses for the solutions. This type of picture is
sometimes called a center. See Figure 3.9.
Figure 3.9: Example center vector field.

Figure 3.10: Example spiral source vector field.
Case 5. Now suppose the complex eigenvalues have a positive real part. That is, suppose the eigenvalues are $a \pm ib$ for some $a > 0$. For example, let $P = \left[\begin{smallmatrix} 1 & 1 \\ -4 & 1 \end{smallmatrix}\right]$. The eigenvalues turn out to be $1 \pm 2i$ and eigenvectors are $\left[\begin{smallmatrix} 1 \\ 2i \end{smallmatrix}\right]$ and $\left[\begin{smallmatrix} 1 \\ -2i \end{smallmatrix}\right]$. We take $1 + 2i$ and its eigenvector $\left[\begin{smallmatrix} 1 \\ 2i \end{smallmatrix}\right]$ and find the real and imaginary parts of $\vec{v}\, e^{(1+2i)t}$ are
$$\operatorname{Re} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{(1+2i)t} = e^t \begin{bmatrix} \cos(2t) \\ -2\sin(2t) \end{bmatrix}, \qquad \operatorname{Im} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{(1+2i)t} = e^t \begin{bmatrix} \sin(2t) \\ 2\cos(2t) \end{bmatrix}.$$
Note the ๐‘’ ๐‘ก in front of the solutions. The solutions grow in magnitude while spinning
around the origin. Hence we get a spiral source. See Figure 3.10.
Case 6. Finally suppose the complex eigenvalues have a negative real part. That is, suppose the eigenvalues are $-a \pm ib$ for some $a > 0$. For example, let $P = \left[\begin{smallmatrix} -1 & -1 \\ 4 & -1 \end{smallmatrix}\right]$. The eigenvalues turn out to be $-1 \pm 2i$ and eigenvectors are $\left[\begin{smallmatrix} 1 \\ -2i \end{smallmatrix}\right]$ and $\left[\begin{smallmatrix} 1 \\ 2i \end{smallmatrix}\right]$. We take $-1 - 2i$ and its eigenvector $\left[\begin{smallmatrix} 1 \\ 2i \end{smallmatrix}\right]$ and find the real and imaginary parts of $\vec{v}\, e^{(-1-2i)t}$ are
$$\operatorname{Re} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{(-1-2i)t} = e^{-t} \begin{bmatrix} \cos(2t) \\ 2\sin(2t) \end{bmatrix}, \qquad \operatorname{Im} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{(-1-2i)t} = e^{-t} \begin{bmatrix} -\sin(2t) \\ 2\cos(2t) \end{bmatrix}.$$
Note the ๐‘’ −๐‘ก in front of the solutions. The solutions shrink in magnitude while spinning
around the origin. Hence we get a spiral sink. See Figure 3.11 on the next page.
Figure 3.11: Example spiral sink vector field.
We summarize the behavior of linear homogeneous two-dimensional systems given by
a nonsingular matrix in Table 3.1. Systems where one of the eigenvalues is zero (the matrix
is singular) come up in practice from time to time, see Example 3.1.2 on page 121, and the
pictures are somewhat different (simpler in a way). See the exercises.
Eigenvalues                          Behavior
real and both positive               source / unstable node
real and both negative               sink / stable node
real and opposite signs              saddle
purely imaginary                     center point / ellipses
complex with positive real part      spiral source
complex with negative real part      spiral sink

Table 3.1: Summary of behavior of linear homogeneous two-dimensional systems.
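The classification in Table 3.1 is mechanical enough to express in code. The following is my own sketch (not from the original notes, assuming numpy); the function name `classify` is made up for illustration, and the matrices fed to it are the examples from the cases above.

import numpy as np

def classify(P, tol=1e-12):
    # Rough classification of x' = P x for a nonsingular 2x2 matrix P,
    # following Table 3.1.
    lam = np.linalg.eigvals(P)
    if np.all(np.abs(lam.imag) < tol):          # real eigenvalues
        a, b = sorted(lam.real)
        if a > 0:
            return "source / unstable node"
        if b < 0:
            return "sink / stable node"
        return "saddle"
    re = lam.real[0]                            # complex conjugate pair
    if abs(re) < tol:
        return "center point / ellipses"
    return "spiral source" if re > 0 else "spiral sink"

print(classify(np.array([[1, 1], [0, 2]])))     # source / unstable node
print(classify(np.array([[0, 1], [-4, 0]])))    # center point / ellipses
print(classify(np.array([[-1, -1], [4, -1]])))  # spiral sink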
3.5.1 Exercises
Exercise 3.5.1: Take the equation $mx'' + cx' + kx = 0$, with $m > 0$, $c \geq 0$, $k > 0$ for the mass-spring system.
a) Convert this to a system of first order equations.
b) Classify which behavior you get for which values of $m$, $c$, $k$.
c) Explain from physical intuition why you do not get all the different kinds of behavior here.
151
3.5. TWO-DIMENSIONAL SYSTEMS AND THEIR VECTOR FIELDS
Exercise 3.5.2: What happens in the case when $P = \left[\begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix}\right]$? In this case the eigenvalue is repeated and there is only one independent eigenvector. What picture does this look like?
Exercise 3.5.3: What happens in the case when $P = \left[\begin{smallmatrix} 1 & 1 \\ 1 & 1 \end{smallmatrix}\right]$? Does this look like any of the pictures we have drawn?
Exercise 3.5.4: Which behaviors are possible if $P$ is diagonal, that is $P = \left[\begin{smallmatrix} a & 0 \\ 0 & b \end{smallmatrix}\right]$? You can assume that $a$ and $b$ are not zero.
Exercise 3.5.5: Take the system from Example 3.1.2 on page 121, $x_1' = \frac{r}{V}(x_2 - x_1)$, $x_2' = \frac{r}{V}(x_1 - x_2)$. As we said, one of the eigenvalues is zero. What is the other eigenvalue, what does the picture look like, and what happens as $t$ goes to infinity?
Exercise 3.5.101: Describe the behavior of the following systems without solving:
a) $x' = x + y$, $y' = x - y$.
b) $x_1' = x_1 + x_2$, $x_2' = 2x_2$.
c) $x_1' = -2x_2$, $x_2' = 2x_1$.
d) $x' = x + 3y$, $y' = -2x - 4y$.
e) $x' = x - 4y$, $y' = -4x + y$.
Exercise 3.5.102: Suppose that $\vec{x}' = A\vec{x}$ where $A$ is a 2 by 2 matrix with eigenvalues $2 \pm i$. Describe the behavior.
Exercise 3.5.103: Take $\left[\begin{smallmatrix} x \\ y \end{smallmatrix}\right]' = \left[\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\right] \left[\begin{smallmatrix} x \\ y \end{smallmatrix}\right]$. Draw the vector field and describe the behavior. Is it one of the behaviors that we have seen before?
3.6 Second order systems and applications
Note: more than 2 lectures, §5.4 in [EP], not in [BD]
3.6.1 Undamped mass-spring systems
While we did say that we will usually only look at first order systems, it is sometimes more
convenient to study the system in the way it arises naturally. For example, suppose we
have 3 masses connected by springs between two walls. We could pick any higher number,
and the math would be essentially the same, but for simplicity we pick 3 right now. Let
us also assume no friction, that is, the system is undamped. The masses are ๐‘š1 , ๐‘š2 , and
๐‘š3 and the spring constants are ๐‘˜ 1 , ๐‘˜ 2 , ๐‘˜ 3 , and ๐‘˜ 4 . Let ๐‘ฅ1 be the displacement from rest
position of the first mass, and ๐‘ฅ2 and ๐‘ฅ3 the displacement of the second and third mass.
We make, as usual, positive values go right (as ๐‘ฅ 1 grows, the first mass is moving right).
See Figure 3.12.
๐‘˜1
๐‘˜2
๐‘š1
๐‘š2
๐‘˜3
๐‘š3
๐‘˜4
Figure 3.12: System of masses and springs.
This simple system turns up in unexpected places. For example, our world really
consists of many small particles of matter interacting together. When we try the system
above with many more masses, we obtain a good approximation to how an elastic material
behaves. By somehow taking a limit of the number of masses going to infinity, we obtain
the continuous one-dimensional wave equation (that we study in § 4.7). But we digress.
Let us set up the equations for the three mass system. By Hooke’s law, the force acting
on the mass equals the spring compression times the spring constant. By Newton’s second
law, force is mass times acceleration. So if we sum the forces acting on each mass, put the
right sign in front of each term, depending on the direction in which it is acting, and set
this equal to mass times the acceleration, we end up with the desired system of equations.
๐‘š1 ๐‘ฅ 100 = −๐‘˜1 ๐‘ฅ1 + ๐‘˜2 (๐‘ฅ2 − ๐‘ฅ1 )
= −(๐‘˜ 1 + ๐‘˜ 2 )๐‘ฅ1 + ๐‘˜ 2 ๐‘ฅ2 ,
๐‘š2 ๐‘ฅ 200
๐‘š3 ๐‘ฅ 300
= −๐‘˜2 (๐‘ฅ2 − ๐‘ฅ1 ) + ๐‘˜ 3 (๐‘ฅ3 − ๐‘ฅ2 )
= ๐‘˜ 2 ๐‘ฅ 1 − (๐‘˜ 2 + ๐‘˜ 3 )๐‘ฅ2 + ๐‘˜ 3 ๐‘ฅ3 ,
= −๐‘˜ 3 (๐‘ฅ3 − ๐‘ฅ2 ) − ๐‘˜ 4 ๐‘ฅ 3
= ๐‘˜ 3 ๐‘ฅ 2 − (๐‘˜ 3 + ๐‘˜ 4 )๐‘ฅ3 .
We define the matrices
$$M = \begin{bmatrix} m_1 & 0 & 0 \\ 0 & m_2 & 0 \\ 0 & 0 & m_3 \end{bmatrix} \qquad \text{and} \qquad K = \begin{bmatrix} -(k_1 + k_2) & k_2 & 0 \\ k_2 & -(k_2 + k_3) & k_3 \\ 0 & k_3 & -(k_3 + k_4) \end{bmatrix}.$$
We write the equation simply as
$$M \vec{x}'' = K \vec{x}.$$
At this point we could introduce 3 new variables and write out a system of 6 first order
equations. We claim this simple setup is easier to handle as a second order system. We call
๐‘ฅ® the displacement vector, ๐‘€ the mass matrix, and ๐พ the stiffness matrix.
Exercise 3.6.1: Repeat this setup for 4 masses (find the matrices ๐‘€ and ๐พ). Do it for 5 masses.
Can you find a prescription to do it for ๐‘› masses?
As with a single equation we want to “divide by ๐‘€.” This means computing the inverse
of ๐‘€. The masses are all nonzero and ๐‘€ is a diagonal matrix, so computing the inverse is
easy:
๏ฃฎ ๐‘š1 0 0 ๏ฃน
๏ฃฏ 1
๏ฃบ
๐‘€ −1 = ๏ฃฏ๏ฃฏ 0 ๐‘š12 0 ๏ฃบ๏ฃบ .
๏ฃฏ0 0 1๏ฃบ
๏ฃฐ
๐‘š3 ๏ฃป
This fact follows readily by how we multiply diagonal matrices. As an exercise, you should
verify that ๐‘€๐‘€ −1 = ๐‘€ −1 ๐‘€ = ๐ผ.
Let ๐ด = ๐‘€ −1 ๐พ. We look at the system ๐‘ฅ®00 = ๐‘€ −1 ๐พ ๐‘ฅ®, or
๐‘ฅ®00 = ๐ด ๐‘ฅ®.
Many real world systems can be modeled by this equation. For simplicity, we will only talk
about the given masses-and-springs problem. We try a solution of the form
๐‘ฅ® = ๐‘ฃ® ๐‘’ ๐›ผ๐‘ก .
We compute that for this guess, $\vec{x}'' = \alpha^2 \vec{v}\, e^{\alpha t}$. We plug our guess into the equation and get
$$\alpha^2 \vec{v}\, e^{\alpha t} = A \vec{v}\, e^{\alpha t}.$$
We divide by $e^{\alpha t}$ to arrive at $\alpha^2 \vec{v} = A \vec{v}$. Hence if $\alpha^2$ is an eigenvalue of $A$ and $\vec{v}$ is a
corresponding eigenvector, we have found a solution.
In our example, and in other common applications, ๐ด has only real negative eigenvalues
(and possibly a zero eigenvalue). So we study only this case. When an eigenvalue ๐œ† is
negative, it means that ๐›ผ2 = ๐œ† is negative. Hence there is some real number ๐œ” such that
−๐œ”2 = ๐œ†. Then ๐›ผ = ±๐‘–๐œ”. The solution we guessed was
๐‘ฅ® = ๐‘ฃ® cos(๐œ”๐‘ก) + ๐‘– sin(๐œ”๐‘ก) .
By taking the real and imaginary parts (note that ๐‘ฃ® is real), we find that ๐‘ฃ® cos(๐œ”๐‘ก) and
๐‘ฃ® sin(๐œ”๐‘ก) are linearly independent solutions.
If an eigenvalue is zero, it turns out that both ๐‘ฃ® and ๐‘ฃ® ๐‘ก are solutions, where ๐‘ฃ® is an
eigenvector corresponding to the eigenvalue 0.
Exercise 3.6.2: Show that if ๐ด has a zero eigenvalue and ๐‘ฃ® is a corresponding eigenvector, then
๐‘ฅ® = ๐‘ฃ® (๐‘Ž + ๐‘๐‘ก) is a solution of ๐‘ฅ®00 = ๐ด ๐‘ฅ® for arbitrary constants ๐‘Ž and ๐‘.
Theorem 3.6.1. Let $A$ be a real $n \times n$ matrix with $n$ distinct real negative (or zero) eigenvalues we denote by $-\omega_1^2 > -\omega_2^2 > \cdots > -\omega_n^2$, and corresponding eigenvectors by $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$. If $A$ is invertible (that is, if $\omega_1 > 0$), then
$$\vec{x}(t) = \sum_{i=1}^{n} \vec{v}_i \bigl( a_i \cos(\omega_i t) + b_i \sin(\omega_i t) \bigr)$$
is the general solution of
$$\vec{x}'' = A \vec{x},$$
for some arbitrary constants $a_i$ and $b_i$. If $A$ has a zero eigenvalue, that is $\omega_1 = 0$, and all other eigenvalues are distinct and negative, then the general solution can be written as
$$\vec{x}(t) = \vec{v}_1 (a_1 + b_1 t) + \sum_{i=2}^{n} \vec{v}_i \bigl( a_i \cos(\omega_i t) + b_i \sin(\omega_i t) \bigr).$$
We use this solution and the setup from the introduction of this section even when
some of the masses and springs are missing. For example, when there are only 2 masses
and only 2 springs, simply take only the equations for the two masses and set all the spring
constants for the springs that are missing to zero.
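As a supplement (none of this code is from the original notes, and it assumes numpy), one can assemble $M$ and $K$ for the three-mass setup numerically and read off the natural angular frequencies as $\omega_i = \sqrt{-\lambda_i}$ from the eigenvalues of $A = M^{-1}K$. The mass and spring values below are arbitrary sample numbers, not values from the text.

import numpy as np

# Illustrative values: three unit masses and four unit springs.
m1, m2, m3 = 1.0, 1.0, 1.0
k1, k2, k3, k4 = 1.0, 1.0, 1.0, 1.0

M = np.diag([m1, m2, m3])
K = np.array([[-(k1 + k2), k2, 0.0],
              [k2, -(k2 + k3), k3],
              [0.0, k3, -(k3 + k4)]])

A = np.linalg.inv(M) @ K

# For this setup the eigenvalues of A are real and negative (up to
# roundoff), so the natural frequencies are omega_i = sqrt(-lambda_i).
lam = np.linalg.eigvals(A).real
omegas = np.sqrt(-lam)
print(np.sort(omegas))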
3.6.2 Examples
Example 3.6.1: Consider the setup in Figure 3.13, with ๐‘š1 = 2 kg, ๐‘š2 = 1 kg, ๐‘˜ 1 = 4 N/m,
and ๐‘˜2 = 2 N/m.
๐‘˜1
๐‘˜2
๐‘š1
๐‘š2
Figure 3.13: System of masses and springs.
The equations we write down are
$$\begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix} \vec{x}'' = \begin{bmatrix} -(4+2) & 2 \\ 2 & -2 \end{bmatrix} \vec{x},$$
or
$$\vec{x}'' = \begin{bmatrix} -3 & 1 \\ 2 & -2 \end{bmatrix} \vec{x}.$$
We find the eigenvalues of $A$ to be $\lambda = -1, -4$ (exercise). We find corresponding eigenvectors to be $\left[\begin{smallmatrix} 1 \\ 2 \end{smallmatrix}\right]$ and $\left[\begin{smallmatrix} 1 \\ -1 \end{smallmatrix}\right]$ respectively (exercise).
We check the theorem and note that $\omega_1 = 1$ and $\omega_2 = 2$. Hence the general solution is
$$\vec{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \bigl( a_1 \cos(t) + b_1 \sin(t) \bigr) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} \bigl( a_2 \cos(2t) + b_2 \sin(2t) \bigr).$$
The two terms in the solution represent the two so-called natural or normal modes of
oscillation. And the two (angular) frequencies are the natural frequencies. The first natural
frequency is 1, and second natural frequency is 2. The two modes are plotted in Figure 3.14.
Figure 3.14: The two modes of the mass-spring system. In the left plot the masses are moving in unison, and in the right plot the masses are moving in opposite directions.
Let us write the solution as
$$\vec{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} c_1 \cos(t - \alpha_1) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} c_2 \cos(2t - \alpha_2).$$
The first term,
$$\begin{bmatrix} 1 \\ 2 \end{bmatrix} c_1 \cos(t - \alpha_1) = \begin{bmatrix} c_1 \cos(t - \alpha_1) \\ 2 c_1 \cos(t - \alpha_1) \end{bmatrix},$$
corresponds to the mode where the masses move synchronously in the same direction. The second term,
$$\begin{bmatrix} 1 \\ -1 \end{bmatrix} c_2 \cos(2t - \alpha_2) = \begin{bmatrix} c_2 \cos(2t - \alpha_2) \\ -c_2 \cos(2t - \alpha_2) \end{bmatrix},$$
corresponds to the mode where the masses move synchronously but in opposite directions.
The general solution is a combination of the two modes. That is, the initial conditions
determine the amplitude and phase shift of each mode. As an example, suppose we have
initial conditions
$$\vec{x}(0) = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \qquad \vec{x}'(0) = \begin{bmatrix} 0 \\ 6 \end{bmatrix}.$$
We use the $a_j$, $b_j$ constants to solve for initial conditions. First
$$\begin{bmatrix} 1 \\ -1 \end{bmatrix} = \vec{x}(0) = \begin{bmatrix} 1 \\ 2 \end{bmatrix} a_1 + \begin{bmatrix} 1 \\ -1 \end{bmatrix} a_2 = \begin{bmatrix} a_1 + a_2 \\ 2 a_1 - a_2 \end{bmatrix}.$$
We solve (exercise) to find $a_1 = 0$, $a_2 = 1$. To find the $b_1$ and $b_2$, we differentiate first:
$$\vec{x}' = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \bigl( -a_1 \sin(t) + b_1 \cos(t) \bigr) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} \bigl( -2 a_2 \sin(2t) + 2 b_2 \cos(2t) \bigr).$$
Now we solve:
$$\begin{bmatrix} 0 \\ 6 \end{bmatrix} = \vec{x}'(0) = \begin{bmatrix} 1 \\ 2 \end{bmatrix} b_1 + \begin{bmatrix} 1 \\ -1 \end{bmatrix} 2 b_2 = \begin{bmatrix} b_1 + 2 b_2 \\ 2 b_1 - 2 b_2 \end{bmatrix}.$$
Again solve (exercise) to find $b_1 = 2$, $b_2 = -1$. So our solution is
$$\vec{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} 2\sin(t) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} \bigl( \cos(2t) - \sin(2t) \bigr) = \begin{bmatrix} 2\sin(t) + \cos(2t) - \sin(2t) \\ 4\sin(t) - \cos(2t) + \sin(2t) \end{bmatrix}.$$
The graphs of the two displacements, $x_1$ and $x_2$, of the two carts are in Figure 3.15.
Figure 3.15: Superposition of the two modes given the initial conditions.
Example 3.6.2: We have two toy rail cars. Car 1 of mass 2 kg is traveling at 3 m/s towards
the second rail car of mass 1 kg. There is a bumper on the second rail car that engages at
the moment the cars hit (it connects the two cars) and does not let go. The bumper acts
like a spring of spring constant ๐‘˜ = 2 N/m. The second car is 10 meters from a wall. See
Figure 3.16 on the facing page.
We want to ask several questions. At what time after the cars link does impact with the
wall happen? What is the speed of car 2 when it hits the wall?
OK, let us first set the system up. Let ๐‘ก = 0 be the time when the two cars link up. Let ๐‘ฅ 1
be the displacement of the first car from the position at ๐‘ก = 0, and let ๐‘ฅ2 be the displacement
Figure 3.16: The crash of two rail cars.
of the second car from its original location. Then the time when ๐‘ฅ2 (๐‘ก) = 10 is exactly the
time when impact with wall occurs. For this ๐‘ก, ๐‘ฅ20 (๐‘ก) is the speed at impact. This system
acts just like the system of the previous example but without ๐‘˜1 . Hence the equation is
$$\begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix} \vec{x}'' = \begin{bmatrix} -2 & 2 \\ 2 & -2 \end{bmatrix} \vec{x}, \qquad \text{or} \qquad \vec{x}'' = \begin{bmatrix} -1 & 1 \\ 2 & -2 \end{bmatrix} \vec{x}.$$
We compute the eigenvalues of $A$. It is not hard to see that the eigenvalues are $0$ and $-3$ (exercise). Furthermore, eigenvectors are $\left[\begin{smallmatrix} 1 \\ 1 \end{smallmatrix}\right]$ and $\left[\begin{smallmatrix} 1 \\ -2 \end{smallmatrix}\right]$ respectively (exercise). Then $\omega_1 = 0$, $\omega_2 = \sqrt{3}$, and by the second part of the theorem the general solution is
$$\vec{x} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} (a_1 + b_1 t) + \begin{bmatrix} 1 \\ -2 \end{bmatrix} \bigl( a_2 \cos(\sqrt{3}\, t) + b_2 \sin(\sqrt{3}\, t) \bigr) = \begin{bmatrix} a_1 + b_1 t + a_2 \cos(\sqrt{3}\, t) + b_2 \sin(\sqrt{3}\, t) \\ a_1 + b_1 t - 2 a_2 \cos(\sqrt{3}\, t) - 2 b_2 \sin(\sqrt{3}\, t) \end{bmatrix}.$$
We now apply the initial conditions. First the cars start at position 0, so $x_1(0) = 0$ and $x_2(0) = 0$. The first car is traveling at 3 m/s, so $x_1'(0) = 3$, and the second car starts at rest, so $x_2'(0) = 0$. The first condition says
$$\vec{0} = \vec{x}(0) = \begin{bmatrix} a_1 + a_2 \\ a_1 - 2 a_2 \end{bmatrix}.$$
It is not hard to see that $a_1 = a_2 = 0$. We set $a_1 = 0$ and $a_2 = 0$ in $\vec{x}(t)$ and differentiate to get
$$\vec{x}'(t) = \begin{bmatrix} b_1 + \sqrt{3}\, b_2 \cos(\sqrt{3}\, t) \\ b_1 - 2\sqrt{3}\, b_2 \cos(\sqrt{3}\, t) \end{bmatrix}.$$
So
$$\begin{bmatrix} 3 \\ 0 \end{bmatrix} = \vec{x}'(0) = \begin{bmatrix} b_1 + \sqrt{3}\, b_2 \\ b_1 - 2\sqrt{3}\, b_2 \end{bmatrix}.$$
Solving these two equations we find $b_1 = 2$ and $b_2 = \frac{1}{\sqrt{3}}$. Hence the position of our cars is (until the impact with the wall)
$$\vec{x} = \begin{bmatrix} 2t + \frac{1}{\sqrt{3}} \sin(\sqrt{3}\, t) \\ 2t - \frac{2}{\sqrt{3}} \sin(\sqrt{3}\, t) \end{bmatrix}.$$
Note how the presence of the zero eigenvalue resulted in a term containing ๐‘ก. This means
that the cars will be traveling in the positive direction as time grows, which is what we
expect.
What we are really interested in is the second expression, the one for $x_2$. We have $x_2(t) = 2t - \frac{2}{\sqrt{3}} \sin(\sqrt{3}\, t)$. See Figure 3.17 for the plot of $x_2$ versus time.

Just from the graph we can see that the time of impact will be a little more than 5 seconds from time zero. For this we have to solve the equation $10 = x_2(t) = 2t - \frac{2}{\sqrt{3}} \sin(\sqrt{3}\, t)$. Using a computer (or even a graphing calculator) we find that $t_{\text{impact}} \approx 5.22$ seconds.
The speed of the second car is $x_2' = 2 - 2\cos(\sqrt{3}\, t)$. At the time of impact (5.22 seconds from $t = 0$) we get $x_2'(t_{\text{impact}}) \approx 3.85$. The maximum speed is the maximum of $2 - 2\cos(\sqrt{3}\, t)$, which is 4. We are traveling at almost the maximum speed when we hit the wall.
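The "using a computer" step can be made concrete. This is my sketch (not from the notes, assuming scipy): root-finding on $x_2(t) - 10$ over a bracket where the sign changes recovers the impact time and speed quoted above.

import numpy as np
from scipy.optimize import brentq

def x2(t):
    return 2*t - (2/np.sqrt(3)) * np.sin(np.sqrt(3)*t)

def speed(t):
    return 2 - 2*np.cos(np.sqrt(3)*t)

# Solve x2(t) = 10 on [4, 6], where x2 - 10 changes sign.
t_impact = brentq(lambda t: x2(t) - 10, 4.0, 6.0)
print(t_impact)         # about 5.22
print(speed(t_impact))  # about 3.85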
Suppose that Bob is a tiny person sitting on car 2. Bob has a Martini in his hand and would like not to spill it. Let us suppose Bob would not spill his Martini when the first car links up with car 2, but if car 2 hits the wall at any speed greater than zero, Bob will spill his drink. Suppose Bob can move car 2 a few meters towards or away from the wall (he cannot go all the way to the wall, nor can he get out of the way of the first car). Is there a "safe" distance for him to be at? A distance such that the impact with the wall is at zero speed?

Figure 3.17: Position of the second car in time (ignoring the wall).
The answer is yes. On Figure 3.17, note the "plateau" between $t = 3$ and $t = 4$. There is a point where the speed is zero. To find it we solve $x_2'(t) = 0$. This is when $\cos(\sqrt{3}\, t) = 1$, or in other words when $t = \frac{2\pi}{\sqrt{3}}, \frac{4\pi}{\sqrt{3}}, \ldots$ and so on. We plug in the first value to obtain $x_2\left(\frac{2\pi}{\sqrt{3}}\right) = \frac{4\pi}{\sqrt{3}} \approx 7.26$. So a "safe" distance is about 7 and a quarter meters from the wall.
Alternatively Bob could move away from the wall towards the incoming car 2, where another safe distance is $x_2\left(\frac{4\pi}{\sqrt{3}}\right) = \frac{8\pi}{\sqrt{3}} \approx 14.51$ and so on. We can use all the different $t$ such that $x_2'(t) = 0$. Of course $t = 0$ is also a solution, corresponding to $x_2 = 0$, but that means standing right at the wall.
3.6.3 Forced oscillations
Finally we move to forced oscillations. Suppose that now our system is
$$\vec{x}'' = A\vec{x} + \vec{F} \cos(\omega t). \tag{3.4}$$
That is, we are adding periodic forcing to the system in the direction of the vector $\vec{F}$.
As before, this system just requires us to find one particular solution $\vec{x}_p$, add it to the general solution of the associated homogeneous system $\vec{x}_c$, and we will have the general solution to (3.4). Let us suppose that $\omega$ is not one of the natural frequencies of $\vec{x}'' = A\vec{x}$; then we can guess
$$\vec{x}_p = \vec{c} \cos(\omega t),$$
where $\vec{c}$ is an unknown constant vector. Note that we do not need to use sine since there are only second derivatives. We solve for $\vec{c}$ to find $\vec{x}_p$. This is really just the method of undetermined coefficients for systems. Let us differentiate $\vec{x}_p$ twice to get
$$\vec{x}_p'' = -\omega^2 \vec{c} \cos(\omega t).$$
Plug ๐‘ฅ®๐‘ and ๐‘ฅ®00๐‘ into equation (3.4):
๐‘ฅ®00๐‘
z
๐ด ๐‘ฅ®๐‘
}|
{
z }| {
−๐œ” 2 ๐‘® cos(๐œ”๐‘ก) = ๐ด®๐‘ cos(๐œ”๐‘ก) +๐น® cos(๐œ”๐‘ก).
We cancel out the cosine and rearrange the equation to obtain
$$(A + \omega^2 I)\vec{c} = -\vec{F}.$$
So
$$\vec{c} = (A + \omega^2 I)^{-1}(-\vec{F}).$$
Of course this is possible only if $(A + \omega^2 I) = A - (-\omega^2) I$ is invertible. That matrix is invertible if and only if $-\omega^2$ is not an eigenvalue of $A$. That is true if and only if $\omega$ is not a natural frequency of the system.
We simplified things a little bit. If we wish the forcing term to be in the units of force, say Newtons, then we must write
$$M \vec{x}'' = K \vec{x} + \vec{G} \cos(\omega t).$$
If we then write things in terms of $A = M^{-1} K$, we have
$$\vec{x}'' = M^{-1} K \vec{x} + M^{-1} \vec{G} \cos(\omega t) \qquad \text{or} \qquad \vec{x}'' = A \vec{x} + \vec{F} \cos(\omega t),$$
where $\vec{F} = M^{-1} \vec{G}$.
Example 3.6.3: Let us take the example in Figure 3.13 on page 154 with the same parameters
as before: ๐‘š1 = 2, ๐‘š2 = 1, ๐‘˜ 1 = 4, and ๐‘˜ 2 = 2. Now suppose that there is a force 2 cos(3๐‘ก)
acting on the second cart.
The equation is
$$\begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix} \vec{x}'' = \begin{bmatrix} -(4+2) & 2 \\ 2 & -2 \end{bmatrix} \vec{x} + \begin{bmatrix} 0 \\ 2 \end{bmatrix} \cos(3t),$$
or
$$\vec{x}'' = \begin{bmatrix} -3 & 1 \\ 2 & -2 \end{bmatrix} \vec{x} + \begin{bmatrix} 0 \\ 2 \end{bmatrix} \cos(3t).$$
We solved the associated homogeneous equation before and found the complementary solution to be
$$\vec{x}_c = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \bigl( a_1 \cos(t) + b_1 \sin(t) \bigr) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} \bigl( a_2 \cos(2t) + b_2 \sin(2t) \bigr).$$
The natural frequencies are 1 and 2. As 3 is not a natural frequency, we try $\vec{c} \cos(3t)$. We invert $(A + 3^2 I)$:
$$\left( \begin{bmatrix} -3 & 1 \\ 2 & -2 \end{bmatrix} + 3^2 I \right)^{-1} = \begin{bmatrix} 6 & 1 \\ 2 & 7 \end{bmatrix}^{-1} = \begin{bmatrix} \frac{7}{40} & \frac{-1}{40} \\ \frac{-1}{20} & \frac{3}{20} \end{bmatrix}.$$
Hence,
$$\vec{c} = (A + \omega^2 I)^{-1}(-\vec{F}) = \begin{bmatrix} \frac{7}{40} & \frac{-1}{40} \\ \frac{-1}{20} & \frac{3}{20} \end{bmatrix} \begin{bmatrix} 0 \\ -2 \end{bmatrix} = \begin{bmatrix} \frac{1}{20} \\ \frac{-3}{10} \end{bmatrix}.$$
Combining with the general solution of the associated homogeneous problem, we get that the general solution to $\vec{x}'' = A\vec{x} + \vec{F} \cos(\omega t)$ is
$$\vec{x} = \vec{x}_c + \vec{x}_p = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \bigl( a_1 \cos(t) + b_1 \sin(t) \bigr) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} \bigl( a_2 \cos(2t) + b_2 \sin(2t) \bigr) + \begin{bmatrix} \frac{1}{20} \\ \frac{-3}{10} \end{bmatrix} \cos(3t).$$
We then solve for the constants ๐‘Ž1 , ๐‘Ž2 , ๐‘ 1 , and ๐‘ 2 using any initial conditions we are given.
Note that given force $\vec{f}$, we write the equation as $M \vec{x}'' = K \vec{x} + \vec{f}$ to get the units right. Then we write $\vec{x}'' = M^{-1} K \vec{x} + M^{-1} \vec{f}$. The term $\vec{g} = M^{-1} \vec{f}$ in $\vec{x}'' = A \vec{x} + \vec{g}$ is in units of force per unit mass.
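As a quick check of the arithmetic in the example above (my addition, not from the notes; assumes numpy), one can solve $(A + \omega^2 I)\vec{c} = -\vec{F}$ directly rather than forming the inverse:

import numpy as np

A = np.array([[-3.0, 1.0],
              [2.0, -2.0]])
F = np.array([0.0, 2.0])
omega = 3.0

# Solve (A + omega^2 I) c = -F instead of inverting the matrix explicitly.
c = np.linalg.solve(A + omega**2 * np.eye(2), -F)
print(c)  # [ 0.05 -0.3 ], i.e. (1/20, -3/10) as computed by hand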
If ๐œ” is a natural frequency of the system, resonance may occur, because we will have to
try a particular solution of the form
๐‘ฅ®๐‘ = ๐‘® ๐‘ก sin(๐œ”๐‘ก) + ๐‘‘® cos(๐œ”๐‘ก).
That is assuming that the eigenvalues of the coefficient matrix are distinct. Next, note that
the amplitude of this solution grows without bound as ๐‘ก grows.
3.6.4 Exercises
Exercise 3.6.3: Find a particular solution to
$$\vec{x}'' = \begin{bmatrix} -3 & 1 \\ 2 & -2 \end{bmatrix} \vec{x} + \begin{bmatrix} 0 \\ 2 \end{bmatrix} \cos(2t).$$
Exercise 3.6.4 (challenging): Let us take the example in Figure 3.13 on page 154 with the same
parameters as before: ๐‘š1 = 2, ๐‘˜ 1 = 4, and ๐‘˜ 2 = 2, except for ๐‘š2 , which is unknown. Suppose
that there is a force cos(5๐‘ก) acting on the first mass. Find an ๐‘š2 such that there exists a particular
solution where the first mass does not move.
Note: This idea is called dynamic damping. In practice there will be a small amount of
damping and so any transient solution will disappear and after long enough time, the first mass will
always come to a stop.
Exercise 3.6.5: Let us take Example 3.6.2 on page 156, but suppose that at the time of impact, car 2 is moving to the left at the speed of 3 m/s.
a) Find the behavior of the system after linkup.
b) Will the second car hit the wall, or will it be moving away from the wall as time goes on?
c) At what speed would the first car have to be traveling for the system to essentially stay in
place after linkup?
Exercise 3.6.6: Let us take the example in Figure 3.13 on page 154 with parameters ๐‘š1 = ๐‘š2 = 1,
๐‘˜1 = ๐‘˜ 2 = 1. Does there exist a set of initial conditions for which the first cart moves but the second
cart does not? If so, find those conditions. If not, argue why not.
Exercise 3.6.101: Find the general solution to
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix} \vec{x}'' = \begin{bmatrix} -3 & 0 & 0 \\ 2 & -4 & 0 \\ 0 & 6 & -3 \end{bmatrix} \vec{x} + \begin{bmatrix} \cos(2t) \\ 0 \\ 0 \end{bmatrix}.$$
Exercise 3.6.102: Suppose there are three carts of equal mass ๐‘š and connected by two springs of
constant ๐‘˜ (and no connections to walls). Set up the system and find its general solution.
Exercise 3.6.103: Suppose a cart of mass 2 kg is attached by a spring of constant ๐‘˜ = 1 to a cart of
mass 3 kg, which is attached to the wall by a spring also of constant ๐‘˜ = 1. Suppose that the initial
position of the first cart is 1 meter in the positive direction from the rest position, and the second
mass starts at the rest position. The masses are not moving and are let go. Find the position of the
second mass as a function of time.
3.7 Multiple eigenvalues
Note: 1 or 1.5 lectures, §5.5 in [EP], §7.8 in [BD]
It may happen that a matrix ๐ด has some “repeated” eigenvalues. That is, the characteristic equation det(๐ด − ๐œ†๐ผ) = 0 may have repeated roots. This is actually unlikely to happen
for a random matrix. If we take a small perturbation of ๐ด (we change the entries of ๐ด
slightly), we get a matrix with distinct eigenvalues. As any system we want to solve in
practice is an approximation to reality anyway, it is not absolutely indispensable to know
how to solve these corner cases. On the other hand, these cases do come up in applications
from time to time. Furthermore, if we have distinct but very close eigenvalues, the behavior
is similar to that of repeated eigenvalues, and so understanding that case will give us
insight into what is going on.
3.7.1 Geometric multiplicity
Take the diagonal matrix
$$A = \begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix}.$$
$A$ has an eigenvalue 3 of multiplicity 2. We call the multiplicity of the eigenvalue in the characteristic equation the algebraic multiplicity. In this case, there also exist 2 linearly independent eigenvectors, $\left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]$ and $\left[\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right]$, corresponding to the eigenvalue 3. This means that the so-called geometric multiplicity of this eigenvalue is also 2.
In all the theorems where we required a matrix to have $n$ distinct eigenvalues, we only really needed to have $n$ linearly independent eigenvectors. For example, $\vec{x}' = A\vec{x}$ has the general solution
$$\vec{x} = c_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix} e^{3t} + c_2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} e^{3t}.$$
Let us restate the theorem about real eigenvalues. In the following theorem we will repeat
eigenvalues according to (algebraic) multiplicity. So for the matrix ๐ด above, we would say
that it has eigenvalues 3 and 3.
Theorem 3.7.1. Suppose the ๐‘› × ๐‘› matrix ๐‘ƒ has ๐‘› real eigenvalues (not necessarily distinct), ๐œ†1 ,
๐œ†2 , . . . , ๐œ†๐‘› , and there are ๐‘› linearly independent corresponding eigenvectors ๐‘ฃ®1 , ๐‘ฃ®2 , . . . , ๐‘ฃ® ๐‘› . Then
the general solution to ๐‘ฅ®0 = ๐‘ƒ ๐‘ฅ® can be written as
๐‘ฅ® = ๐‘ 1 ๐‘ฃ®1 ๐‘’ ๐œ†1 ๐‘ก + ๐‘2 ๐‘ฃ®2 ๐‘’ ๐œ†2 ๐‘ก + · · · + ๐‘ ๐‘› ๐‘ฃ® ๐‘› ๐‘’ ๐œ†๐‘› ๐‘ก .
The geometric multiplicity of an eigenvalue of algebraic multiplicity ๐‘› is equal to the
number of corresponding linearly independent eigenvectors. The geometric multiplicity is
always less than or equal to the algebraic multiplicity. The theorem handles the case when
these two multiplicities are equal for all eigenvalues. If for an eigenvalue the geometric
multiplicity is equal to the algebraic multiplicity, then we say the eigenvalue is complete.
In other words, the hypothesis of the theorem could be stated as saying that if all the
eigenvalues of ๐‘ƒ are complete, then there are ๐‘› linearly independent eigenvectors and thus
we have the given general solution.
If the geometric multiplicity of an eigenvalue is 2 or greater, then the set of linearly independent eigenvectors is not unique up to multiples as it was before. For example, for the diagonal matrix $A = \left[\begin{smallmatrix} 3 & 0 \\ 0 & 3 \end{smallmatrix}\right]$ we could also pick eigenvectors $\left[\begin{smallmatrix} 1 \\ 1 \end{smallmatrix}\right]$ and $\left[\begin{smallmatrix} 1 \\ -1 \end{smallmatrix}\right]$, or in fact any pair of two linearly independent vectors. The number of linearly independent eigenvectors corresponding to $\lambda$ is the number of free variables we obtain when solving $A\vec{v} = \lambda\vec{v}$. We pick specific values for those free variables to obtain eigenvectors. If you pick different values, you may get different eigenvectors.
3.7.2 Defective eigenvalues
If an ๐‘› × ๐‘› matrix has less than ๐‘› linearly independent eigenvectors, it is said to be deficient.
Then there is at least one eigenvalue with an algebraic multiplicity that is higher than its
geometric multiplicity. We call this eigenvalue defective and the difference between the two
multiplicities we call the defect.
Example 3.7.1: The matrix $\left[\begin{smallmatrix} 3 & 1 \\ 0 & 3 \end{smallmatrix}\right]$ has an eigenvalue 3 of algebraic multiplicity 2. Let us try to compute eigenvectors.
$$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \vec{0}.$$
We must have that $v_2 = 0$. Hence any eigenvector is of the form $\left[\begin{smallmatrix} v_1 \\ 0 \end{smallmatrix}\right]$. Any two such vectors are linearly dependent, and hence the geometric multiplicity of the eigenvalue is 1.
Therefore, the defect is 1, and we can no longer apply the eigenvalue method directly to a
system of ODEs with such a coefficient matrix.
Roughly, the key observation is that if ๐œ† is an eigenvalue of ๐ด of algebraic multiplicity ๐‘š,
then we can find certain ๐‘š linearly independent vectors solving (๐ด − ๐œ†๐ผ) ๐‘˜ ๐‘ฃ® = 0® for various
powers ๐‘˜. We will call these generalized eigenvectors.
Let us continue with the example $A = \left[\begin{smallmatrix} 3 & 1 \\ 0 & 3 \end{smallmatrix}\right]$ and the equation $\vec{x}' = A\vec{x}$. We found an eigenvalue $\lambda = 3$ of (algebraic) multiplicity 2 and defect 1. We found one eigenvector $\vec{v} = \left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]$. We have one solution
$$\vec{x}_1 = \vec{v}\, e^{3t} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} e^{3t}.$$
We are now stuck, we get no other solutions from standard eigenvectors. But we need two
linearly independent solutions to find the general solution of the equation.
Let us try (in the spirit of repeated roots of the characteristic equation for a single equation) another solution of the form
$$\vec{x}_2 = (\vec{v}_2 + \vec{v}_1 t)\, e^{3t}.$$
We differentiate to get
$$\vec{x}_2' = \vec{v}_1 e^{3t} + 3(\vec{v}_2 + \vec{v}_1 t)\, e^{3t} = (3\vec{v}_2 + \vec{v}_1)\, e^{3t} + 3\vec{v}_1 t e^{3t}.$$
As we are assuming that $\vec{x}_2$ is a solution, $\vec{x}_2'$ must equal $A\vec{x}_2$. So let's compute $A\vec{x}_2$:
$$A\vec{x}_2 = A(\vec{v}_2 + \vec{v}_1 t)\, e^{3t} = A\vec{v}_2 e^{3t} + A\vec{v}_1 t e^{3t}.$$
By looking at the coefficients of $e^{3t}$ and $te^{3t}$ we see $3\vec{v}_2 + \vec{v}_1 = A\vec{v}_2$ and $3\vec{v}_1 = A\vec{v}_1$. This means that
$$(A - 3I)\vec{v}_2 = \vec{v}_1, \qquad \text{and} \qquad (A - 3I)\vec{v}_1 = \vec{0}.$$
Therefore, ๐‘ฅ®2 is a solution if these two equations are satisfied. The second equation
is
1
satisfied if ๐‘ฃ®1 is an eigenvector, and we found the eigenvector above, so let ๐‘ฃ®1 = 0 . So, if
we can find a ๐‘ฃ®2 that solves (๐ด − 3๐ผ)®
๐‘ฃ2 = ๐‘ฃ®1 , then we are done. This is just a bunch of linear
equations to solve and we are by now very good at that. Let us solve (๐ด − 3๐ผ)®
๐‘ฃ 2 = ๐‘ฃ®1 . Write
0 1
0 0
๐‘Ž
1
=
.
๐‘
0
By inspection we see that letting
๐‘Ž = 0 (๐‘Ž could be anything in fact) and ๐‘ = 1 does the job.
0
Hence we can take ๐‘ฃ®2 = 1 . Our general solution to ๐‘ฅ®0 = ๐ด ๐‘ฅ® is
๐‘ฅ® = ๐‘ 1
1 3๐‘ก
๐‘’ + ๐‘2
0
0
1
๐‘ ๐‘’ 3๐‘ก + ๐‘ 2 ๐‘ก๐‘’ 3๐‘ก
+
๐‘ก ๐‘’ 3๐‘ก = 1
.
1
0
๐‘2 ๐‘’ 3๐‘ก
Let us check that we really do have the solution. First $x_1' = c_1 3e^{3t} + c_2 e^{3t} + 3c_2 t e^{3t} = 3x_1 + x_2$. Good. Now $x_2' = 3c_2 e^{3t} = 3x_2$. Good.
In the example, if we plug $(A - 3I)\vec{v}_2 = \vec{v}_1$ into $(A - 3I)\vec{v}_1 = \vec{0}$ we find
$$(A - 3I)(A - 3I)\vec{v}_2 = \vec{0}, \qquad \text{or} \qquad (A - 3I)^2 \vec{v}_2 = \vec{0}.$$
® then (๐ด − 3๐ผ)๐‘ค
® ≠ 0,
® is an eigenvector, a multiple of ๐‘ฃ®1 . In this
Furthermore, if (๐ด − 3๐ผ)๐‘ค
2
® solves (๐ด − 3๐ผ)2 ๐‘ค
® = 0®
2 × 2 case (๐ด − 3๐ผ) is just the zero matrix (exercise). So any vector ๐‘ค
® Then we could use ๐‘ค
® such that (๐ด − 3๐ผ)๐‘ค
® ≠ 0.
® for ๐‘ฃ®2 , and (๐ด − 3๐ผ)๐‘ค
®
and we just need a ๐‘ค
for ๐‘ฃ®1 .
Note that the system ๐‘ฅ®0 = ๐ด ๐‘ฅ® has a simpler solution since ๐ด is a so-called upper triangular
matrix, that is every entry below the diagonal is zero. In particular, the equation for ๐‘ฅ2
does not depend on ๐‘ฅ1 . Mind you, not every defective matrix is triangular.
Exercise 3.7.1: Solve $\vec{x}' = \left[\begin{smallmatrix} 3 & 1 \\ 0 & 3 \end{smallmatrix}\right] \vec{x}$ by first solving for $x_2$ and then for $x_1$ independently. Check that you got the same solution as we did above.
Let us describe the general algorithm. Suppose that $\lambda$ is an eigenvalue of multiplicity 2, defect 1. First find an eigenvector $\vec{v}_1$ of $\lambda$. That is, $\vec{v}_1$ solves $(A - \lambda I)\vec{v}_1 = \vec{0}$. Then, find a vector $\vec{v}_2$ such that
$$(A - \lambda I)\vec{v}_2 = \vec{v}_1.$$
This gives us two linearly independent solutions
$$\vec{x}_1 = \vec{v}_1 e^{\lambda t}, \qquad \vec{x}_2 = (\vec{v}_2 + \vec{v}_1 t)\, e^{\lambda t}.$$
Example 3.7.2: Consider the system
$$\vec{x}' = \begin{bmatrix} 2 & -5 & 0 \\ 0 & 2 & 0 \\ -1 & 4 & 1 \end{bmatrix} \vec{x}.$$
Compute the eigenvalues,
$$0 = \det(A - \lambda I) = \det \begin{bmatrix} 2-\lambda & -5 & 0 \\ 0 & 2-\lambda & 0 \\ -1 & 4 & 1-\lambda \end{bmatrix} = (2-\lambda)^2 (1-\lambda).$$
The eigenvalues are 1 and 2, where 2 has multiplicity 2. We leave it to the reader to find that $\left[\begin{smallmatrix} 0 \\ 0 \\ 1 \end{smallmatrix}\right]$ is an eigenvector for the eigenvalue $\lambda = 1$.
Let's focus on $\lambda = 2$. We compute eigenvectors:
$$\vec{0} = (A - 2I)\vec{v} = \begin{bmatrix} 0 & -5 & 0 \\ 0 & 0 & 0 \\ -1 & 4 & -1 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}.$$
The first equation says that $v_2 = 0$, so the last equation is $-v_1 - v_3 = 0$. Let $v_3$ be the free variable to find that $v_1 = -v_3$. Perhaps let $v_3 = -1$ to find an eigenvector $\left[\begin{smallmatrix} 1 \\ 0 \\ -1 \end{smallmatrix}\right]$. The problem is that setting $v_3$ to anything else just gets multiples of this vector, and so we have a defect of 1. Let $\vec{v}_1$ be the eigenvector and let's look for a generalized eigenvector $\vec{v}_2$:
(๐ด − 2๐ผ)®
๐‘ฃ2 = ๐‘ฃ®1 ,
or
๏ฃฎ 0 −5 0 ๏ฃน ๏ฃฎ ๐‘Ž ๏ฃน ๏ฃฎ 1 ๏ฃน
๏ฃฏ
๏ฃบ๏ฃฏ ๏ฃบ ๏ฃฏ ๏ฃบ
๏ฃฏ 0 0 0 ๏ฃบ ๏ฃฏ๐‘ ๏ฃบ = ๏ฃฏ 0 ๏ฃบ ,
๏ฃฏ
๏ฃบ๏ฃฏ ๏ฃบ ๏ฃฏ ๏ฃบ
๏ฃฏ−1 4 −1๏ฃบ ๏ฃฏ ๐‘ ๏ฃบ ๏ฃฏ−1๏ฃบ
๏ฃฐ
๏ฃป๏ฃฐ ๏ฃป ๏ฃฐ ๏ฃป
where we used ๐‘Ž, ๐‘, ๐‘ as components of ๐‘ฃ®2 for simplicity. The first equation says −5๐‘ = 1
so ๐‘ = −1/5. The second equation says nothing. The last equation is −๐‘Ž + 4๐‘ − ๐‘ = −1, or
166
CHAPTER 3. SYSTEMS OF ODES
๐‘Ž + 4/5 + ๐‘ = 1, or ๐‘Ž + ๐‘ = 1/5. We let ๐‘ be the free variable and we choose ๐‘ = 0. We find
$$\vec{v}_2 = \begin{bmatrix} \frac{1}{5} \\ -\frac{1}{5} \\ 0 \end{bmatrix}.$$
The general solution is therefore,
$$\vec{x} = c_1 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} e^{t} + c_2 \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} e^{2t} + c_3 \left( \begin{bmatrix} \frac{1}{5} \\ -\frac{1}{5} \\ 0 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} t \right) e^{2t}.$$
This machinery can also be generalized to higher multiplicities and higher defects. We
will not go over this method in detail, but let us just sketch the ideas. Suppose that ๐ด has
an eigenvalue ๐œ† of multiplicity ๐‘š. We find vectors such that
$$(A - \lambda I)^k \vec{v} = \vec{0}, \qquad \text{but} \qquad (A - \lambda I)^{k-1} \vec{v} \neq \vec{0}.$$
Such vectors are called generalized eigenvectors (then $\vec{v}_1 = (A - \lambda I)^{k-1} \vec{v}$ is an eigenvector). For the eigenvector $\vec{v}_1$ there is a chain of generalized eigenvectors $\vec{v}_2$ through $\vec{v}_k$ such that:
$$(A - \lambda I)\vec{v}_1 = \vec{0}, \quad (A - \lambda I)\vec{v}_2 = \vec{v}_1, \quad \ldots, \quad (A - \lambda I)\vec{v}_k = \vec{v}_{k-1}.$$
Really once you find the $\vec{v}_k$ such that $(A - \lambda I)^k \vec{v}_k = \vec{0}$ but $(A - \lambda I)^{k-1} \vec{v}_k \neq \vec{0}$, you find the entire chain since you can compute the rest, $\vec{v}_{k-1} = (A - \lambda I)\vec{v}_k$, $\vec{v}_{k-2} = (A - \lambda I)\vec{v}_{k-1}$, etc. We form the linearly independent solutions
$$\vec{x}_1 = \vec{v}_1 e^{\lambda t}, \quad \vec{x}_2 = (\vec{v}_2 + \vec{v}_1 t)\, e^{\lambda t}, \quad \ldots, \quad \vec{x}_k = \left( \vec{v}_k + \vec{v}_{k-1} t + \vec{v}_{k-2} \frac{t^2}{2} + \cdots + \vec{v}_2 \frac{t^{k-2}}{(k-2)!} + \vec{v}_1 \frac{t^{k-1}}{(k-1)!} \right) e^{\lambda t}.$$
Recall that ๐‘˜! = 1 · 2 · 3 · · · (๐‘˜ − 1) · ๐‘˜ is the factorial. If you have an eigenvalue of geometric
multiplicity โ„“ , you will have to find โ„“ such chains (some of them might be short: just the
single eigenvector equation). We go until we form ๐‘š linearly independent solutions where
๐‘š is the algebraic multiplicity. We don’t quite know which specific eigenvectors go with
which chain, so start by finding ๐‘ฃ® ๐‘˜ first for the longest possible chain and go from there.
For example, if $\lambda$ is an eigenvalue of $A$ of algebraic multiplicity 3 and defect 2, then solve
$$(A - \lambda I)\vec{v}_1 = \vec{0}, \qquad (A - \lambda I)\vec{v}_2 = \vec{v}_1, \qquad (A - \lambda I)\vec{v}_3 = \vec{v}_2.$$
® but (๐ด − ๐œ†๐ผ)2 ๐‘ฃ®3 ≠ 0.
® Then you are done as
That is, find ๐‘ฃ®3 such that (๐ด − ๐œ†๐ผ)3 ๐‘ฃ®3 = 0,
๐‘ฃ®2 = (๐ด − ๐œ†๐ผ)®
๐‘ฃ3 and ๐‘ฃ®1 = (๐ด − ๐œ†๐ผ)®
๐‘ฃ 2 . The 3 linearly independent solutions are
๐œ†๐‘ก
๐œ†๐‘ก
๐‘ฅ®1 = ๐‘ฃ®1 ๐‘’ ,
๐‘ฅ®2 = (®
๐‘ฃ2 + ๐‘ฃ®1 ๐‘ก) ๐‘’ ,
๐‘ก 2 ๐œ†๐‘ก
๐‘ฅ®3 = ๐‘ฃ®3 + ๐‘ฃ®2 ๐‘ก + ๐‘ฃ®1
๐‘’ .
2
If on the other hand $A$ has an eigenvalue $\lambda$ of algebraic multiplicity 3 and defect 1, then solve
$$(A - \lambda I)\vec{v}_1 = \vec{0}, \qquad (A - \lambda I)\vec{v}_2 = \vec{0}, \qquad (A - \lambda I)\vec{v}_3 = \vec{v}_2.$$
Here ๐‘ฃ®1 and ๐‘ฃ®2 are actual honest eigenvectors, and ๐‘ฃ®3 is a generalized eigenvector. So
® but (๐ด − ๐œ†๐ผ)®
®
there are two chains. To solve, first find a ๐‘ฃ®3 such that (๐ด − ๐œ†๐ผ)2 ๐‘ฃ®3 = 0,
๐‘ฃ3 ≠ 0.
Then ๐‘ฃ®2 = (๐ด − ๐œ†๐ผ)®
๐‘ฃ 3 is going to be an eigenvector. Then solve for an eigenvector ๐‘ฃ®1 that is
linearly independent from ๐‘ฃ®2 . You get 3 linearly independent solutions
๐‘ฅ®1 = ๐‘ฃ®1 ๐‘’ ๐œ†๐‘ก ,
3.7.3
๐‘ฅ®2 = ๐‘ฃ®2 ๐‘’ ๐œ†๐‘ก ,
๐‘ฅ®3 = (®
๐‘ฃ3 + ๐‘ฃ®2 ๐‘ก) ๐‘’ ๐œ†๐‘ก .
Exercises
Exercise 3.7.2: Let $A = \left[\begin{smallmatrix} 5 & -3 \\ 3 & -1 \end{smallmatrix}\right]$. Find the general solution of $\vec{x}' = A\vec{x}$.
Exercise 3.7.3: Let $A = \left[\begin{smallmatrix} 5 & -4 & 4 \\ 0 & 3 & 0 \\ -2 & 4 & -1 \end{smallmatrix}\right]$.
a) What are the eigenvalues?
b) What is/are the defect(s) of the eigenvalue(s)?
c) Find the general solution of $\vec{x}' = A\vec{x}$.
Exercise 3.7.4: Let $A = \left[\begin{smallmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{smallmatrix}\right]$.
a) What are the eigenvalues?
b) What is/are the defect(s) of the eigenvalue(s)?
c) Find the general solution of $\vec{x}' = A\vec{x}$ in two different ways and verify you get the same answer.
Exercise 3.7.5: Let $A = \left[\begin{smallmatrix} 0 & 1 & 2 \\ -1 & -2 & -2 \\ -4 & 4 & 7 \end{smallmatrix}\right]$.
a) What are the eigenvalues?
b) What is/are the defect(s) of the eigenvalue(s)?
c) Find the general solution of $\vec{x}' = A\vec{x}$.
Exercise 3.7.6: Let $A = \left[\begin{smallmatrix} 0 & 4 & -2 \\ -1 & -4 & 1 \\ 0 & 0 & -2 \end{smallmatrix}\right]$.
a) What are the eigenvalues?
b) What is/are the defect(s) of the eigenvalue(s)?
c) Find the general solution of $\vec{x}' = A\vec{x}$.
Exercise 3.7.7: Let $A = \left[\begin{smallmatrix} 2 & 1 & -1 \\ -1 & 0 & 2 \\ -1 & -2 & 4 \end{smallmatrix}\right]$.
a) What are the eigenvalues?
b) What is/are the defect(s) of the eigenvalue(s)?
c) Find the general solution of $\vec{x}' = A\vec{x}$.
Exercise 3.7.8: Suppose that ๐ด is a 2 × 2 matrix with a repeated eigenvalue ๐œ†. Suppose that there
are two linearly independent eigenvectors. Show that ๐ด = ๐œ†๐ผ.
Exercise 3.7.101: Let $A = \left[\begin{smallmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{smallmatrix}\right]$.
a) What are the eigenvalues?
b) What is/are the defect(s) of the eigenvalue(s)?
c) Find the general solution of $\vec{x}' = A\vec{x}$.
Exercise 3.7.102: Let $A = \left[\begin{smallmatrix} 1 & 3 & 3 \\ 1 & 1 & 0 \\ -1 & 1 & 2 \end{smallmatrix}\right]$.
a) What are the eigenvalues?
b) What is/are the defect(s) of the eigenvalue(s)?
c) Find the general solution of $\vec{x}' = A\vec{x}$.
Exercise 3.7.103: Let $A = \left[\begin{smallmatrix} 2 & 0 & 0 \\ -1 & -1 & 9 \\ 0 & -1 & 5 \end{smallmatrix}\right]$.
a) What are the eigenvalues?
b) What is/are the defect(s) of the eigenvalue(s)?
c) Find the general solution of $\vec{x}' = A\vec{x}$.
Exercise 3.7.104: Let $A = \left[\begin{smallmatrix} a & a \\ b & c \end{smallmatrix}\right]$, where $a$, $b$, and $c$ are unknowns. Suppose that 5 is a doubled eigenvalue of defect 1, and suppose that $\left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]$ is a corresponding eigenvector. Find $A$ and show that there is only one such matrix $A$.
3.8 Matrix exponentials
Note: 2 lectures, §5.6 in [EP], §7.7 in [BD]
3.8.1 Definition
There is another way of finding a fundamental matrix solution of a system. Consider the
constant coefficient equation
๐‘ฅ®0 = ๐‘ƒ ๐‘ฅ®.
If this were just one equation (when $P$ is a number or a $1 \times 1$ matrix), then the solution
would be
๐‘ฅ® = ๐‘’ ๐‘ƒ๐‘ก .
That doesn’t make sense if ๐‘ƒ is a larger matrix, but essentially the same computation that
led to the above works for matrices when we define ๐‘’ ๐‘ƒ๐‘ก properly. First let us write down
the Taylor series for ๐‘’ ๐‘Ž๐‘ก for some number ๐‘Ž:
Õ (๐‘Ž๐‘ก) ๐‘˜
(๐‘Ž๐‘ก)2 (๐‘Ž๐‘ก)3 (๐‘Ž๐‘ก)4
= 1 + ๐‘Ž๐‘ก +
+
+
+··· =
.
2
6
24
๐‘˜!
∞
๐‘’
๐‘Ž๐‘ก
๐‘˜=0
Recall ๐‘˜! = 1 · 2 · 3 · · · ๐‘˜ is the factorial, and 0! = 1. We differentiate this series term by term
(๐‘Ž๐‘ก)2 (๐‘Ž๐‘ก)3
๐‘‘ ๐‘Ž๐‘ก ๐‘Ž3๐‘ก2 ๐‘Ž4๐‘ก3
2
+
+ · · · = ๐‘Ž 1 + ๐‘Ž๐‘ก +
+
+ · · · = ๐‘Ž๐‘’ ๐‘Ž๐‘ก .
๐‘’ =0+๐‘Ž+๐‘Ž ๐‘ก+
๐‘‘๐‘ก
2
6
2
6
Maybe we can try the same trick with matrices. For an $n \times n$ matrix $A$ we define the matrix exponential as
$$e^A \overset{\text{def}}{=} I + A + \frac{1}{2} A^2 + \frac{1}{6} A^3 + \cdots + \frac{1}{k!} A^k + \cdots$$
Let us not worry about convergence. The series really does always converge. We usually
write ๐‘ƒ๐‘ก as ๐‘ก๐‘ƒ by convention when ๐‘ƒ is a matrix. With this small change and by the exact
same calculation as above we have that
๐‘‘ ๐‘ก๐‘ƒ ๐‘’
= ๐‘ƒ๐‘’ ๐‘ก๐‘ƒ .
๐‘‘๐‘ก
Now ๐‘ƒ and hence ๐‘’ ๐‘ก๐‘ƒ is an ๐‘› × ๐‘› matrix. What we are looking for is a vector. In the 1 × 1
case we would at this point multiply by an arbitrary constant to get the general solution. In
the matrix case we multiply by a column vector ๐‘®.
Theorem 3.8.1. Let ๐‘ƒ be an ๐‘› × ๐‘› matrix. Then the general solution to ๐‘ฅ®0 = ๐‘ƒ ๐‘ฅ® is
๐‘ฅ® = ๐‘’ ๐‘ก๐‘ƒ ๐‘®,
where ๐‘® is an arbitrary constant vector. In fact, ๐‘ฅ®(0) = ๐‘®.
Let us check:
$$\frac{d}{dt} \vec{x} = \frac{d}{dt} \left( e^{tP} \vec{c}\, \right) = P e^{tP} \vec{c} = P \vec{x}.$$
Hence $e^{tP}$ is a fundamental matrix solution of the homogeneous system. So if we can
compute the matrix exponential, we have another method of solving constant coefficient
homogeneous systems. It also makes it easy to solve for initial conditions. To solve $\vec{x}' = A\vec{x}$, $\vec{x}(0) = \vec{b}$, we take the solution
$$\vec{x} = e^{tA}\, \vec{b}.$$
This equation follows because $e^{0A} = I$, so $\vec{x}(0) = e^{0A}\, \vec{b} = \vec{b}$.
We mention a drawback of matrix exponentials. In general ๐‘’ ๐ด+๐ต ≠ ๐‘’ ๐ด ๐‘’ ๐ต . The trouble is
that matrices do not commute, that is, in general ๐ด๐ต ≠ ๐ต๐ด. If you try to prove ๐‘’ ๐ด+๐ต ≠ ๐‘’ ๐ด ๐‘’ ๐ต
using the Taylor series, you will see why the lack of commutativity becomes a problem.
However, it is still true that if ๐ด๐ต = ๐ต๐ด, that is, if ๐ด and ๐ต commute, then ๐‘’ ๐ด+๐ต = ๐‘’ ๐ด ๐‘’ ๐ต . We
will find this fact useful. Let us restate this as a theorem to make a point.
Theorem 3.8.2. If ๐ด๐ต = ๐ต๐ด, then ๐‘’ ๐ด+๐ต = ๐‘’ ๐ด ๐‘’ ๐ต . Otherwise, ๐‘’ ๐ด+๐ต ≠ ๐‘’ ๐ด ๐‘’ ๐ต in general.
3.8.2 Simple cases
In some instances it may work to just plug into the series definition. Suppose the matrix is diagonal. For example, $D = \left[\begin{smallmatrix} a & 0 \\ 0 & b \end{smallmatrix}\right]$. Then
$$D^k = \begin{bmatrix} a^k & 0 \\ 0 & b^k \end{bmatrix},$$
and
$$e^D = I + D + \frac{1}{2} D^2 + \frac{1}{6} D^3 + \cdots = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} + \frac{1}{2} \begin{bmatrix} a^2 & 0 \\ 0 & b^2 \end{bmatrix} + \frac{1}{6} \begin{bmatrix} a^3 & 0 \\ 0 & b^3 \end{bmatrix} + \cdots = \begin{bmatrix} e^a & 0 \\ 0 & e^b \end{bmatrix}.$$
So by this rationale
$$e^I = \begin{bmatrix} e & 0 \\ 0 & e \end{bmatrix} \qquad \text{and} \qquad e^{aI} = \begin{bmatrix} e^a & 0 \\ 0 & e^a \end{bmatrix}.$$
This makes the exponentials of certain other matrices easy to compute. For example, the matrix $A = \left[\begin{smallmatrix} 5 & 4 \\ -1 & 1 \end{smallmatrix}\right]$ can be written as $3I + B$ where $B = \left[\begin{smallmatrix} 2 & 4 \\ -1 & -2 \end{smallmatrix}\right]$. Notice that $B^2 = \left[\begin{smallmatrix} 0 & 0 \\ 0 & 0 \end{smallmatrix}\right]$. So $B^k = 0$ for all $k \geq 2$. Therefore, $e^B = I + B$. Suppose we actually want to compute $e^{tA}$. The matrices $3tI$ and $tB$ commute (exercise: check this) and $e^{tB} = I + tB$, since $(tB)^2 = t^2 B^2 = 0$. We write
$$e^{tA} = e^{3tI + tB} = e^{3tI} e^{tB} = \begin{bmatrix} e^{3t} & 0 \\ 0 & e^{3t} \end{bmatrix} (I + tB) = \begin{bmatrix} e^{3t} & 0 \\ 0 & e^{3t} \end{bmatrix} \begin{bmatrix} 1+2t & 4t \\ -t & 1-2t \end{bmatrix} = \begin{bmatrix} (1+2t)\, e^{3t} & 4t e^{3t} \\ -t e^{3t} & (1-2t)\, e^{3t} \end{bmatrix}.$$
We found a fundamental matrix solution for the system ๐‘ฅ®0 = ๐ด ๐‘ฅ®. Note that this matrix has
a repeated eigenvalue with a defect; there is only one eigenvector for the eigenvalue 3. So
we found a perhaps easier way to handle this case. In fact, if a matrix ๐ด is 2 × 2 and has an
eigenvalue ๐œ† of multiplicity 2, then either ๐ด = ๐œ†๐ผ, or ๐ด = ๐œ†๐ผ + ๐ต where ๐ต2 = 0. This is a
good exercise.
Exercise 3.8.1: Suppose that $A$ is $2 \times 2$ and $\lambda$ is the only eigenvalue. Show that $(A - \lambda I)^2 = 0$, and therefore that we can write $A = \lambda I + B$, where $B^2 = 0$ (and possibly $B = 0$). Hint: First write down what it means for the eigenvalue to be of multiplicity 2. You will get an equation for the entries. Now compute the square of $B$.
Matrices ๐ต such that ๐ต ๐‘˜ = 0 for some ๐‘˜ are called nilpotent. Computation of the matrix
exponential for nilpotent matrices is easy by just writing down the first ๐‘˜ terms of the
Taylor series.
3.8.3 General matrices
In general, the exponential is not as easy to compute as above. We usually cannot write a
matrix as a sum of commuting matrices where the exponential is simple for each one. But
fear not, it is still not too difficult provided we can find enough eigenvectors. First we need
the following interesting result about matrix exponentials. For two square matrices ๐ด and
๐ต, with ๐ต invertible, we have
$$e^{BAB^{-1}} = B e^A B^{-1}.$$
This can be seen by writing down the Taylor series. First
$$(BAB^{-1})^2 = BAB^{-1}BAB^{-1} = BAIAB^{-1} = BA^2 B^{-1}.$$
And by the same reasoning $(BAB^{-1})^k = BA^k B^{-1}$. Now write the Taylor series for $e^{BAB^{-1}}$:
$$\begin{aligned} e^{BAB^{-1}} &= I + BAB^{-1} + \frac{1}{2}(BAB^{-1})^2 + \frac{1}{6}(BAB^{-1})^3 + \cdots \\ &= BB^{-1} + BAB^{-1} + \frac{1}{2} BA^2 B^{-1} + \frac{1}{6} BA^3 B^{-1} + \cdots \\ &= B \left( I + A + \frac{1}{2} A^2 + \frac{1}{6} A^3 + \cdots \right) B^{-1} \\ &= B e^A B^{-1}. \end{aligned}$$
Given a square matrix ๐ด, we can usually write ๐ด = ๐ธ๐ท๐ธ−1 , where ๐ท is diagonal and
๐ธ invertible. This procedure is called diagonalization. If we can do that, the computation
of the exponential becomes easy as ๐‘’ ๐ท is just taking the exponential of the entries on the
diagonal. Adding ๐‘ก into the mix, we can then compute the exponential
๐‘’ ๐‘ก๐ด = ๐ธ๐‘’ ๐‘ก๐ท ๐ธ−1 .
To diagonalize ๐ด we need ๐‘› linearly independent eigenvectors of ๐ด. Otherwise, this
method of computing the exponential does not work and we need to be trickier, but
we will not get into such details. Let ๐ธ be the matrix with the eigenvectors as columns.
Let ๐œ†1 , ๐œ†2 , . . . , ๐œ†๐‘› be the eigenvalues and let ๐‘ฃ®1 , ๐‘ฃ®2 , . . . , ๐‘ฃ® ๐‘› be the eigenvectors, then
๐ธ = [ ๐‘ฃ®1 ๐‘ฃ®2 · · · ๐‘ฃ® ๐‘› ]. Make a diagonal matrix ๐ท with the eigenvalues on the diagonal:
๏ฃฎ๐œ†1 0
๏ฃฏ
๏ฃฏ 0 ๐œ†2
๐ท = ๏ฃฏ๏ฃฏ ..
..
.
๏ฃฏ.
๏ฃฏ0 0
๏ฃฐ
···
···
...
0
0
..
.
···
๐‘ฃ® ๐‘› ]
๏ฃน
๏ฃบ
๏ฃบ
๏ฃบ.
๏ฃบ
๏ฃบ
· · · ๐œ†๐‘› ๏ฃบ๏ฃป
We compute
๐ด๐ธ = ๐ด[ ๐‘ฃ®1
๐‘ฃ®2
= [ ๐ด®
๐‘ฃ1
๐ด®
๐‘ฃ2
= [ ๐œ†1 ๐‘ฃ®1
๐œ†2 ๐‘ฃ®2
= [ ๐‘ฃ®1
๐‘ฃ®2
···
···
···
๐ด®
๐‘ฃ๐‘› ]
๐œ†๐‘› ๐‘ฃ® ๐‘› ]
๐‘ฃ® ๐‘› ]๐ท
= ๐ธ๐ท.
The columns of ๐ธ are linearly independent as these are linearly independent eigenvectors
of ๐ด. Hence ๐ธ is invertible. Since ๐ด๐ธ = ๐ธ๐ท, we multiply on the right by ๐ธ−1 and we get
๐ด = ๐ธ๐ท๐ธ−1 .
This means that $e^A = E e^D E^{-1}$. Multiplying the matrix by $t$ we obtain
$$e^{tA} = E e^{tD} E^{-1} = E \begin{bmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{bmatrix} E^{-1}. \tag{3.5}$$
The formula (3.5), therefore, gives the formula for computing a fundamental matrix solution $e^{tA}$ for the system $\vec{x}' = A\vec{x}$, in the case where we have $n$ linearly independent eigenvectors. This computation still works when the eigenvalues and eigenvectors are complex, though then you have to compute with complex numbers. It is clear from the definition that if $A$ is real, then $e^{tA}$ is real. So you will only need complex numbers in the computation and not for the result. You may need to apply Euler's formula to simplify the result. If simplified properly, the final matrix will not have any complex numbers in it.
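The recipe above is also easy to carry out numerically. Here is a minimal sketch in Python with NumPy (an illustration added here, not part of the original text); it assumes $A$ is diagonalizable, that is, that numpy.linalg.eig returns $n$ independent eigenvectors:

    import numpy as np

    def expm_via_diagonalization(A, t):
        """Compute e^{tA} = E e^{tD} E^{-1}, assuming A is diagonalizable."""
        lam, E = np.linalg.eig(A)          # eigenvalues and eigenvector matrix
        exp_tD = np.diag(np.exp(lam * t))  # exponentiate the diagonal entries
        # For real A the imaginary parts cancel, so take the real part.
        return (E @ exp_tD @ np.linalg.inv(E)).real

    A = np.array([[1.0, 2.0],
                  [2.0, 1.0]])
    print(expm_via_diagonalization(A, 0.1))  # matches e^{0.1A} quoted below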
Example 3.8.1: Compute a fundamental matrix solution using the matrix exponential for the system
\[ \begin{bmatrix} x \\ y \end{bmatrix}' = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} . \]
Then compute the particular solution for the initial conditions $x(0) = 4$ and $y(0) = 2$.
Let ๐ด be the coefficient matrix 12 21 . We first compute (exercise) that the eigenvalues
1 are 3 and −1 and corresponding eigenvectors are 11 and −1
. Hence the diagonalization
of ๐ด is
1 2
1 1
=
2 1
1 −1
| {z }
3 0
0 −1
1 1
1 −1
−1
.
| {z } | {z } | {z }
๐ด
๐ธ
๐ท
๐ธ −1
We write
\begin{align*}
e^{tA} = E e^{tD} E^{-1}
&= \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\begin{bmatrix} e^{3t} & 0 \\ 0 & e^{-t} \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}^{-1} \\
&= \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\begin{bmatrix} e^{3t} & 0 \\ 0 & e^{-t} \end{bmatrix}\frac{-1}{2}\begin{bmatrix} -1 & -1 \\ -1 & 1 \end{bmatrix} \\
&= \frac{-1}{2}\begin{bmatrix} e^{3t} & e^{-t} \\ e^{3t} & -e^{-t} \end{bmatrix}\begin{bmatrix} -1 & -1 \\ -1 & 1 \end{bmatrix} \\
&= \frac{-1}{2}\begin{bmatrix} -e^{3t} - e^{-t} & -e^{3t} + e^{-t} \\ -e^{3t} + e^{-t} & -e^{3t} - e^{-t} \end{bmatrix}
= \begin{bmatrix} \frac{e^{3t}+e^{-t}}{2} & \frac{e^{3t}-e^{-t}}{2} \\[4pt] \frac{e^{3t}-e^{-t}}{2} & \frac{e^{3t}+e^{-t}}{2} \end{bmatrix} .
\end{align*}
The initial conditions are $x(0) = 4$ and $y(0) = 2$. Hence, by the property that $e^{0A} = I$ we find that the particular solution we are looking for is $e^{tA}\vec{b}$ where $\vec{b}$ is $\begin{bmatrix} 4 \\ 2 \end{bmatrix}$. Then the particular solution we are looking for is
\[ \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \frac{e^{3t}+e^{-t}}{2} & \frac{e^{3t}-e^{-t}}{2} \\[4pt] \frac{e^{3t}-e^{-t}}{2} & \frac{e^{3t}+e^{-t}}{2} \end{bmatrix}\begin{bmatrix} 4 \\ 2 \end{bmatrix} = \begin{bmatrix} 2e^{3t} + 2e^{-t} + e^{3t} - e^{-t} \\ 2e^{3t} - 2e^{-t} + e^{3t} + e^{-t} \end{bmatrix} = \begin{bmatrix} 3e^{3t} + e^{-t} \\ 3e^{3t} - e^{-t} \end{bmatrix} . \]

3.8.4 Fundamental matrix solutions
We note that if you can compute a fundamental matrix solution in a different way, you can use it to find the matrix exponential $e^{tA}$. A fundamental matrix solution of a system of ODEs is not unique. The exponential is the fundamental matrix solution with the property that for $t = 0$ we get the identity matrix. So we must find the right fundamental matrix solution. Let $X$ be any fundamental matrix solution to $\vec{x}' = A\vec{x}$. Then we claim
\[ e^{tA} = X(t)\,[X(0)]^{-1} . \]
Clearly, if we plug $t = 0$ into $X(t)\,[X(0)]^{-1}$ we get the identity. We can multiply a fundamental matrix solution on the right by any constant invertible matrix and we still get a fundamental matrix solution. All we are doing is changing the arbitrary constants in the general solution $\vec{x}(t) = X(t)\,\vec{c}$.
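As a quick numerical sanity check of this identity (an illustration added here, not from the text), take any invertible constant matrix $C$, form the fundamental matrix solution $X(t) = e^{tA}C$, and recover the exponential; scipy.linalg.expm serves as the reference:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 2.0],
                  [2.0, 1.0]])
    C = np.array([[2.0, 1.0],   # any invertible constant matrix
                  [0.0, 1.0]])
    t = 0.7

    X = lambda s: expm(s * A) @ C             # a fundamental matrix solution

    recovered = X(t) @ np.linalg.inv(X(0.0))  # should equal e^{tA}
    print(np.allclose(recovered, expm(t * A)))  # True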
3.8.5 Approximations
If you think about it, the computation of any fundamental matrix solution $X$ using the eigenvalue method is just as difficult as the computation of $e^{tA}$. So perhaps we did not gain much by this new tool. However, the Taylor series expansion actually gives us a way to approximate solutions, which the eigenvalue method did not.

The simplest thing we can do is to just compute the series up to a certain number of terms. There are better ways to approximate the exponential∗. In many cases, however, a few terms of the Taylor series give a reasonable approximation for the exponential and may suffice for the application.
For example, let us compute the first 4 terms of the series for the matrix $A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$:
\begin{align*}
e^{tA} \approx I + tA + \frac{t^2}{2}A^2 + \frac{t^3}{6}A^3
&= I + t\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} + t^2\begin{bmatrix} \frac{5}{2} & 2 \\ 2 & \frac{5}{2} \end{bmatrix} + t^3\begin{bmatrix} \frac{13}{6} & \frac{7}{3} \\ \frac{7}{3} & \frac{13}{6} \end{bmatrix} \\
&= \begin{bmatrix} 1 + t + \frac{5}{2}t^2 + \frac{13}{6}t^3 & 2t + 2t^2 + \frac{7}{3}t^3 \\[2pt] 2t + 2t^2 + \frac{7}{3}t^3 & 1 + t + \frac{5}{2}t^2 + \frac{13}{6}t^3 \end{bmatrix} .
\end{align*}
Just like the scalar version of the Taylor series approximation, the approximation will be better for small $t$ and worse for larger $t$. For larger $t$, we will generally have to compute more terms. Let us see how we stack up against the real solution with $t = 0.1$. The approximate solution is approximately (rounded to 8 decimal places)
\[ e^{0.1A} \approx I + 0.1A + \frac{0.1^2}{2}A^2 + \frac{0.1^3}{6}A^3 = \begin{bmatrix} 1.12716667 & 0.22233333 \\ 0.22233333 & 1.12716667 \end{bmatrix} . \]
And plugging $t = 0.1$ into the real solution (rounded to 8 decimal places) we get
\[ e^{0.1A} = \begin{bmatrix} 1.12734811 & 0.22251069 \\ 0.22251069 & 1.12734811 \end{bmatrix} . \]
Not bad at all! Although if we take the same approximation for $t = 1$ we get
\[ I + A + \frac{1}{2}A^2 + \frac{1}{6}A^3 = \begin{bmatrix} 6.66666667 & 6.33333333 \\ 6.33333333 & 6.66666667 \end{bmatrix} , \]
while the real value is (again rounded to 8 decimal places)
\[ e^A = \begin{bmatrix} 10.22670818 & 9.85882874 \\ 9.85882874 & 10.22670818 \end{bmatrix} . \]
So the approximation is not very good once we get up to $t = 1$. To get a good approximation at $t = 1$ (say up to 2 decimal places) we would need to go up to the 11th power (exercise).
∗ C. Moler and C.F. Van Loan, Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later, SIAM Review 45 (1), 2003, 3–49
3.8.6 Exercises
Exercise 3.8.2: Using the matrix exponential, find a fundamental matrix solution for the system $x' = 3x + y$, $y' = x + 3y$.

Exercise 3.8.3: Find $e^{tA}$ for the matrix $A = \begin{bmatrix} 2 & 3 \\ 0 & 2 \end{bmatrix}$.
Exercise 3.8.4: Find a fundamental matrix solution for the system $x_1' = 7x_1 + 4x_2 + 12x_3$, $x_2' = x_1 + 2x_2 + x_3$, $x_3' = -3x_1 - 2x_2 - 5x_3$. Then find the solution that satisfies $\vec{x}(0) = \begin{bmatrix} 0 \\ 1 \\ -2 \end{bmatrix}$.

Exercise 3.8.5: Compute the matrix exponential $e^A$ for $A = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}$.
Exercise 3.8.6 (challenging): Suppose $AB = BA$. Show that under this assumption, $e^{A+B} = e^A e^B$.
Exercise 3.8.7: Use Exercise 3.8.6 to show that $(e^A)^{-1} = e^{-A}$. In particular, $e^A$ is invertible even if $A$ is not.
Exercise 3.8.8: Let $A$ be a $2 \times 2$ matrix with eigenvalues $-1$, $1$, and corresponding eigenvectors $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$, $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$.

a) Find matrix $A$ with these properties.

b) Find a fundamental matrix solution to $\vec{x}' = A\vec{x}$.

c) Solve the system with initial conditions $\vec{x}(0) = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$.
Exercise 3.8.9: Suppose that $A$ is an $n \times n$ matrix with a repeated eigenvalue $\lambda$ of multiplicity $n$ with $n$ linearly independent eigenvectors. Show that the matrix is diagonal, in fact, $A = \lambda I$. Hint: Use diagonalization and the fact that the identity matrix commutes with every other matrix.
Exercise 3.8.10: Let $A = \begin{bmatrix} -1 & -1 \\ 1 & -3 \end{bmatrix}$.

a) Find $e^{tA}$.

b) Solve $\vec{x}' = A\vec{x}$, $\vec{x}(0) = \begin{bmatrix} 1 \\ -2 \end{bmatrix}$.

Exercise 3.8.11: Let $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$. Approximate $e^{tA}$ by expanding the power series up to the third order.
Exercise 3.8.12: For any positive integer $n$, find a formula (or a recipe) for $A^n$ for the following matrices:

a) $\begin{bmatrix} 3 & 0 \\ 0 & 9 \end{bmatrix}$   b) $\begin{bmatrix} 5 & 2 \\ 4 & 7 \end{bmatrix}$   c) $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$   d) $\begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix}$

Exercise 3.8.101: Compute $e^{tA}$ where $A = \begin{bmatrix} 1 & -2 \\ -2 & 1 \end{bmatrix}$.

Exercise 3.8.102: Compute $e^{tA}$ where $A = \begin{bmatrix} 1 & -3 & 2 \\ -2 & 1 & 2 \\ -1 & -3 & 4 \end{bmatrix}$.
Exercise 3.8.103:

a) Compute $e^{tA}$ where $A = \begin{bmatrix} 3 & -1 \\ 1 & 1 \end{bmatrix}$.

b) Solve $\vec{x}' = A\vec{x}$ for $\vec{x}(0) = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$.
Exercise 3.8.104: Compute the first 3 terms (up to the second degree) of the Taylor expansion of $e^{tA}$ where $A = \begin{bmatrix} 2 & 3 \\ 2 & 2 \end{bmatrix}$. Write it as a single matrix. Then use it to approximate $e^{0.1A}$.
Exercise 3.8.105: For any positive integer $n$, find a formula (or a recipe) for $A^n$ for the following matrices:

a) $\begin{bmatrix} 7 & 4 \\ -5 & -2 \end{bmatrix}$   b) $\begin{bmatrix} -3 & 4 \\ -6 & -7 \end{bmatrix}$   c) $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$
3.9 Nonhomogeneous systems
Note: 3 lectures (may have to skip a little), somewhat different from §5.7 in [EP], §7.9 in [BD]
3.9.1 First order constant coefficient

Integrating factor
Let us first focus on the nonhomogeneous first order equation
\[ \vec{x}'(t) = A\vec{x}(t) + \vec{f}(t) , \]
where $A$ is a constant matrix. The first method we look at is the integrating factor method. For simplicity we rewrite the equation as
\[ \vec{x}'(t) + P\vec{x}(t) = \vec{f}(t) , \]
where $P = -A$. We multiply both sides of the equation by $e^{tP}$ (being mindful that we are dealing with matrices that may not commute) to obtain
\[ e^{tP}\vec{x}'(t) + e^{tP}P\vec{x}(t) = e^{tP}\vec{f}(t) . \]
We notice that $Pe^{tP} = e^{tP}P$. This fact follows by writing down the series definition of $e^{tP}$:
\begin{align*}
Pe^{tP} &= P\left(I + tP + \frac{1}{2}(tP)^2 + \cdots\right) = P + tP^2 + \frac{1}{2}t^2P^3 + \cdots \\
&= \left(I + tP + \frac{1}{2}(tP)^2 + \cdots\right)P = e^{tP}P .
\end{align*}
So $\frac{d}{dt}\left(e^{tP}\right) = Pe^{tP} = e^{tP}P$. The product rule says
\[ \frac{d}{dt}\left[e^{tP}\vec{x}(t)\right] = e^{tP}\vec{x}'(t) + e^{tP}P\vec{x}(t) , \]
and so
\[ \frac{d}{dt}\left[e^{tP}\vec{x}(t)\right] = e^{tP}\vec{f}(t) . \]
We can now integrate. That is, we integrate each component of the vector separately:
\[ e^{tP}\vec{x}(t) = \int e^{tP}\vec{f}(t)\,dt + \vec{c} . \]
Recall from Exercise 3.8.7 that $(e^{tP})^{-1} = e^{-tP}$. Therefore, we obtain
\[ \vec{x}(t) = e^{-tP}\int e^{tP}\vec{f}(t)\,dt + e^{-tP}\vec{c} . \]
Perhaps it is better understood as a definite integral. In this case it will be easy to also solve for the initial conditions. Consider the equation with initial conditions
\[ \vec{x}'(t) + P\vec{x}(t) = \vec{f}(t) , \qquad \vec{x}(0) = \vec{b} . \]
The solution can then be written as
\[ \vec{x}(t) = e^{-tP}\int_0^t e^{sP}\vec{f}(s)\,ds + e^{-tP}\vec{b} . \tag{3.6} \]
Again, the integration means that each component of the vector $e^{sP}\vec{f}(s)$ is integrated separately. It is not hard to see that (3.6) really does satisfy the initial condition $\vec{x}(0) = \vec{b}$:
\[ \vec{x}(0) = e^{-0P}\int_0^0 e^{sP}\vec{f}(s)\,ds + e^{-0P}\vec{b} = I\vec{b} = \vec{b} . \]
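Formula (3.6) also lends itself to direct numerical evaluation. The sketch below (an illustration added here, not part of the original text) approximates the integral with the trapezoid rule and uses scipy's expm for $e^{sP}$; the data is that of Example 3.9.1 below:

    import numpy as np
    from scipy.linalg import expm

    def solve_const_coeff(P, f, b, t, steps=2000):
        """Approximate x(t) = e^{-tP} ( integral_0^t e^{sP} f(s) ds + b )."""
        s = np.linspace(0.0, t, steps)
        ds = s[1] - s[0]
        vals = np.array([expm(si * P) @ f(si) for si in s])
        integral = (vals[:-1] + vals[1:]).sum(axis=0) * ds / 2  # trapezoid rule
        return expm(-t * P) @ (integral + b)

    P = np.array([[5.0, -3.0],      # the system of Example 3.9.1
                  [3.0, -1.0]])
    f = lambda s: np.array([np.exp(s), 0.0])
    b = np.array([1.0, 0.0])

    t = 0.5
    print(solve_const_coeff(P, f, b, t))
    # compare with the closed form derived in the example:
    print((1 - 2*t)*np.exp(-2*t),
          -np.exp(t)/3 + (1/3 - 2*t)*np.exp(-2*t))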
Example 3.9.1: Suppose that we have the system
\[ x_1' + 5x_1 - 3x_2 = e^t , \qquad x_2' + 3x_1 - x_2 = 0 , \]
with initial conditions $x_1(0) = 1$, $x_2(0) = 0$.
Let us write the system as
\[ \vec{x}' + \begin{bmatrix} 5 & -3 \\ 3 & -1 \end{bmatrix}\vec{x} = \begin{bmatrix} e^t \\ 0 \end{bmatrix} , \qquad \vec{x}(0) = \begin{bmatrix} 1 \\ 0 \end{bmatrix} . \]
The matrix ๐‘ƒ = 53 −3
−1 has a doubled eigenvalue 2 with defect 1, and we leave it as an
exercise to double check we computed ๐‘’ ๐‘ก๐‘ƒ correctly. Once we have ๐‘’ ๐‘ก๐‘ƒ , we find ๐‘’ −๐‘ก๐‘ƒ , simply
by negating ๐‘ก.
๐‘’
๐‘ก๐‘ƒ
(1 + 3๐‘ก) ๐‘’ 2๐‘ก
−3๐‘ก๐‘’ 2๐‘ก
=
,
3๐‘ก๐‘’ 2๐‘ก
(1 − 3๐‘ก) ๐‘’ 2๐‘ก
๐‘’
−๐‘ก๐‘ƒ
(1 − 3๐‘ก) ๐‘’ −2๐‘ก
3๐‘ก๐‘’ −2๐‘ก
=
.
−3๐‘ก๐‘’ −2๐‘ก
(1 + 3๐‘ก) ๐‘’ −2๐‘ก
Instead of computing the whole formula at once, let us do it in stages. First
\begin{align*}
\int_0^t e^{sP}\vec{f}(s)\,ds &= \int_0^t \begin{bmatrix} (1+3s)e^{2s} & -3se^{2s} \\ 3se^{2s} & (1-3s)e^{2s} \end{bmatrix}\begin{bmatrix} e^s \\ 0 \end{bmatrix}ds \\
&= \int_0^t \begin{bmatrix} (1+3s)e^{3s} \\ 3se^{3s} \end{bmatrix}ds
= \begin{bmatrix} \int_0^t (1+3s)e^{3s}\,ds \\ \int_0^t 3se^{3s}\,ds \end{bmatrix}
= \begin{bmatrix} te^{3t} \\[2pt] \frac{(3t-1)e^{3t}+1}{3} \end{bmatrix}
\end{align*}
(used integration by parts).
Then
\begin{align*}
\vec{x}(t) &= e^{-tP}\int_0^t e^{sP}\vec{f}(s)\,ds + e^{-tP}\vec{b} \\
&= \begin{bmatrix} (1-3t)e^{-2t} & 3te^{-2t} \\ -3te^{-2t} & (1+3t)e^{-2t} \end{bmatrix}\begin{bmatrix} te^{3t} \\[2pt] \frac{(3t-1)e^{3t}+1}{3} \end{bmatrix} + \begin{bmatrix} (1-3t)e^{-2t} & 3te^{-2t} \\ -3te^{-2t} & (1+3t)e^{-2t} \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} \\
&= \begin{bmatrix} te^{-2t} \\[2pt] -\frac{e^t}{3} + \left(\frac{1}{3}+t\right)e^{-2t} \end{bmatrix} + \begin{bmatrix} (1-3t)e^{-2t} \\ -3te^{-2t} \end{bmatrix}
= \begin{bmatrix} (1-2t)e^{-2t} \\[2pt] -\frac{e^t}{3} + \left(\frac{1}{3}-2t\right)e^{-2t} \end{bmatrix} .
\end{align*}
Phew!

Let us check that this really works.
\[ x_1' + 5x_1 - 3x_2 = (4te^{-2t} - 4e^{-2t}) + 5(1-2t)e^{-2t} + e^t - (1-6t)e^{-2t} = e^t . \]
Similarly (exercise) $x_2' + 3x_1 - x_2 = 0$. The initial conditions are also satisfied (exercise).
For systems, the integrating factor method only works if $P$ does not depend on $t$, that is, $P$ is constant. The problem is that in general
\[ \frac{d}{dt}\left[e^{\int P(t)\,dt}\right] \neq P(t)\,e^{\int P(t)\,dt} , \]
because matrix multiplication is not commutative.
Eigenvector decomposition
For the next method, note that eigenvectors of a matrix give the directions in which the matrix acts like a scalar. If we solve the system along these directions, the computations are simpler as we treat the matrix as a scalar. We then put those solutions together to get the general solution for the system.

Take the equation
\[ \vec{x}'(t) = A\vec{x}(t) + \vec{f}(t) . \tag{3.7} \]
Assume ๐ด has ๐‘› linearly independent eigenvectors ๐‘ฃ®1 , ๐‘ฃ®2 , . . . , ๐‘ฃ® ๐‘› . Write
๐‘ฅ®(๐‘ก) = ๐‘ฃ®1 ๐œ‰1 (๐‘ก) + ๐‘ฃ®2 ๐œ‰2 (๐‘ก) + · · · + ๐‘ฃ® ๐‘› ๐œ‰๐‘› (๐‘ก).
(3.8)
That is, we wish to write our solution as a linear combination of eigenvectors of ๐ด. If we
solve for the scalar functions ๐œ‰1 through ๐œ‰๐‘› , we have our solution ๐‘ฅ®. Let us decompose ๐‘“® in
terms of the eigenvectors as well. We wish to write
๐‘“®(๐‘ก) = ๐‘ฃ®1 ๐‘”1 (๐‘ก) + ๐‘ฃ®2 ๐‘”2 (๐‘ก) + · · · + ๐‘ฃ® ๐‘› ๐‘”๐‘› (๐‘ก).
(3.9)
That is, we wish to find $g_1$ through $g_n$ that satisfy (3.9). Since all the eigenvectors are independent, the matrix $E = [\, \vec{v}_1 \;\; \vec{v}_2 \;\; \cdots \;\; \vec{v}_n \,]$ is invertible. Write the equation (3.9) as $\vec{f} = E\vec{g}$, where the components of $\vec{g}$ are the functions $g_1$ through $g_n$. Then $\vec{g} = E^{-1}\vec{f}$. Hence it is always possible to find $\vec{g}$ when there are $n$ linearly independent eigenvectors.
We plug (3.8) into (3.7), and note that $A\vec{v}_k = \lambda_k\vec{v}_k$:
\begin{align*}
\overbrace{\vec{v}_1\xi_1' + \vec{v}_2\xi_2' + \cdots + \vec{v}_n\xi_n'}^{\vec{x}'}
&= \overbrace{A\left(\vec{v}_1\xi_1 + \vec{v}_2\xi_2 + \cdots + \vec{v}_n\xi_n\right)}^{A\vec{x}} + \overbrace{\vec{v}_1 g_1 + \vec{v}_2 g_2 + \cdots + \vec{v}_n g_n}^{\vec{f}} \\
&= A\vec{v}_1\xi_1 + A\vec{v}_2\xi_2 + \cdots + A\vec{v}_n\xi_n + \vec{v}_1 g_1 + \vec{v}_2 g_2 + \cdots + \vec{v}_n g_n \\
&= \vec{v}_1\lambda_1\xi_1 + \vec{v}_2\lambda_2\xi_2 + \cdots + \vec{v}_n\lambda_n\xi_n + \vec{v}_1 g_1 + \vec{v}_2 g_2 + \cdots + \vec{v}_n g_n \\
&= \vec{v}_1(\lambda_1\xi_1 + g_1) + \vec{v}_2(\lambda_2\xi_2 + g_2) + \cdots + \vec{v}_n(\lambda_n\xi_n + g_n) .
\end{align*}
If we identify the coefficients of the vectors $\vec{v}_1$ through $\vec{v}_n$, we get the equations
\begin{align*}
\xi_1' &= \lambda_1\xi_1 + g_1 , \\
\xi_2' &= \lambda_2\xi_2 + g_2 , \\
&\;\;\vdots \\
\xi_n' &= \lambda_n\xi_n + g_n .
\end{align*}
Each one of these equations is independent of the others. They are all linear first order equations and can easily be solved by the standard integrating factor method for single equations. That is, for the $k$th equation we write
\[ \xi_k'(t) - \lambda_k\xi_k(t) = g_k(t) . \]
We use the integrating factor $e^{-\lambda_k t}$ to find that
\[ \frac{d}{dt}\left[\xi_k(t)\,e^{-\lambda_k t}\right] = e^{-\lambda_k t}g_k(t) . \]
We integrate and solve for $\xi_k$ to get
\[ \xi_k(t) = e^{\lambda_k t}\int e^{-\lambda_k t}g_k(t)\,dt + C_k e^{\lambda_k t} . \]
If we are looking for just any particular solution, we can set $C_k$ to be zero. If we leave these constants in, we get the general solution. Write $\vec{x}(t) = \vec{v}_1\xi_1(t) + \vec{v}_2\xi_2(t) + \cdots + \vec{v}_n\xi_n(t)$, and we are done.
As always, it is perhaps better to write these integrals as definite integrals. Suppose that we have an initial condition $\vec{x}(0) = \vec{b}$. Take $\vec{a} = E^{-1}\vec{b}$ to find $\vec{b} = \vec{v}_1 a_1 + \vec{v}_2 a_2 + \cdots + \vec{v}_n a_n$, just like before. Then if we write
\[ \xi_k(t) = e^{\lambda_k t}\int_0^t e^{-\lambda_k s}g_k(s)\,ds + a_k e^{\lambda_k t} , \]
we get the particular solution $\vec{x}(t) = \vec{v}_1\xi_1(t) + \vec{v}_2\xi_2(t) + \cdots + \vec{v}_n\xi_n(t)$ satisfying $\vec{x}(0) = \vec{b}$, because $\xi_k(0) = a_k$.

Let us remark that the technique we just outlined is the eigenvalue method applied to nonhomogeneous systems. If a system is homogeneous, that is, if $\vec{f} = \vec{0}$, then the equations we get are $\xi_k' = \lambda_k\xi_k$, and so $\xi_k = C_k e^{\lambda_k t}$ are the solutions and that's precisely what we got in § 3.4.
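The whole recipe fits in a few lines of code. Here is a numerical sketch (an illustration added here, not part of the original text), assuming $A$ has $n$ independent eigenvectors; it uses the data of Example 3.9.2 below:

    import numpy as np

    def eig_decomp_solve(A, f, b, t, steps=2000):
        """Solve x' = A x + f(t), x(0) = b by eigenvector decomposition."""
        lam, E = np.linalg.eig(A)
        a = np.linalg.solve(E, b)               # a = E^{-1} b
        g = lambda s: np.linalg.solve(E, f(s))  # g(s) = E^{-1} f(s)

        s = np.linspace(0.0, t, steps)
        ds = s[1] - s[0]
        # xi_k(t) = e^{lam_k t} integral_0^t e^{-lam_k s} g_k(s) ds + a_k e^{lam_k t}
        vals = np.array([np.exp(-lam * si) * g(si) for si in s])
        integral = (vals[:-1] + vals[1:]).sum(axis=0) * ds / 2  # trapezoid rule
        xi = np.exp(lam * t) * (integral + a)
        return (E @ xi).real                    # real system, real answer

    A = np.array([[1.0, 3.0], [3.0, 1.0]])
    f = lambda s: np.array([2*np.exp(s), 2*s])
    b = np.array([3/16, -5/16])
    print(eig_decomp_solve(A, f, b, 0.5))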
Example 3.9.2: Let $A = \begin{bmatrix} 1 & 3 \\ 3 & 1 \end{bmatrix}$. Solve $\vec{x}' = A\vec{x} + \vec{f}$ where $\vec{f}(t) = \begin{bmatrix} 2e^t \\ 2t \end{bmatrix}$ for $\vec{x}(0) = \begin{bmatrix} 3/16 \\ -5/16 \end{bmatrix}$.
The eigenvalues of $A$ are $-2$ and $4$ and corresponding eigenvectors are $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ respectively. This calculation is left as an exercise. We write down the matrix $E$ of the eigenvectors and compute its inverse (using the inverse formula for $2 \times 2$ matrices):
\[ E = \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix} , \qquad E^{-1} = \frac{1}{2}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} . \]
We are looking for a solution of the form $\vec{x} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}\xi_1 + \begin{bmatrix} 1 \\ 1 \end{bmatrix}\xi_2$. We first need to write $\vec{f}$ in terms of the eigenvectors. That is we wish to write $\vec{f} = \begin{bmatrix} 2e^t \\ 2t \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}g_1 + \begin{bmatrix} 1 \\ 1 \end{bmatrix}g_2$. Thus
\[ \begin{bmatrix} g_1 \\ g_2 \end{bmatrix} = E^{-1}\begin{bmatrix} 2e^t \\ 2t \end{bmatrix} = \frac{1}{2}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 2e^t \\ 2t \end{bmatrix} = \begin{bmatrix} e^t - t \\ e^t + t \end{bmatrix} . \]
So $g_1 = e^t - t$ and $g_2 = e^t + t$.
We further need to write $\vec{x}(0)$ in terms of the eigenvectors. That is, we wish to write $\vec{x}(0) = \begin{bmatrix} 3/16 \\ -5/16 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}a_1 + \begin{bmatrix} 1 \\ 1 \end{bmatrix}a_2$. Hence
\[ \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = E^{-1}\begin{bmatrix} 3/16 \\ -5/16 \end{bmatrix} = \begin{bmatrix} 1/4 \\ -1/16 \end{bmatrix} . \]
So ๐‘Ž 1 = 1/4 and ๐‘Ž 2 = −1/16. We plug our ๐‘ฅ® into the equation and get
๐‘ฅ®0
z
}| {
๐‘“®
๐ด ๐‘ฅ®
z
}|
{ z
}| {
1
1 0
1
1
1
1
๐œ‰10 +
๐œ‰2 = ๐ด
๐œ‰1 + ๐ด
๐œ‰2 +
๐‘”1 +
๐‘”
−1
1
−1
1
−1
1 2
1
1
1
1 ๐‘ก
=
(−2๐œ‰1 ) +
4๐œ‰2 +
(๐‘’ ๐‘ก − ๐‘ก) +
(๐‘’ + ๐‘ก).
−1
1
−1
1
We get the two equations
\begin{align*}
\xi_1' &= -2\xi_1 + e^t - t , & \text{where } \xi_1(0) &= a_1 = \frac{1}{4} , \\
\xi_2' &= 4\xi_2 + e^t + t , & \text{where } \xi_2(0) &= a_2 = \frac{-1}{16} .
\end{align*}
We solve with integrating factor. Computation of the integral is left as an exercise to the student. You will need integration by parts.
\[ \xi_1 = e^{-2t}\int e^{2t}(e^t - t)\,dt + C_1 e^{-2t} = \frac{e^t}{3} - \frac{t}{2} + \frac{1}{4} + C_1 e^{-2t} . \]
$C_1$ is the constant of integration. As $\xi_1(0) = 1/4$, then $1/4 = 1/3 + 1/4 + C_1$ and hence $C_1 = -1/3$. Similarly
\[ \xi_2 = e^{4t}\int e^{-4t}(e^t + t)\,dt + C_2 e^{4t} = -\frac{e^t}{3} - \frac{t}{4} - \frac{1}{16} + C_2 e^{4t} . \]
As ๐œ‰2 (0) = −1/16 we have −1/16 = −1/3 − 1/16 + ๐ถ2 and hence ๐ถ2 = 1/3. The solution is
1
๐‘ฅ®(๐‘ก) =
−1
๐‘’ ๐‘ก − ๐‘’ −2๐‘ก 1 − 2๐‘ก
1
+
+
1
3
4
|
That is, ๐‘ฅ1 =
๐‘’ 4๐‘ก −๐‘’ −2๐‘ก
3
{z
๐œ‰1
+
3−12๐‘ก
16
๐‘’ 4๐‘ก − ๐‘’ ๐‘ก 4๐‘ก + 1
−
=
3
16
}
|
and ๐‘ฅ2 =
๐‘’ −2๐‘ก +๐‘’ 4๐‘ก −2๐‘’ ๐‘ก
3
{z
๐œ‰2
+
"
๐‘’ 4๐‘ก −๐‘’ −2๐‘ก
+ 3−12๐‘ก
3
16
−2๐‘ก
4๐‘ก
๐‘’ +๐‘’ −2๐‘’ ๐‘ก
4๐‘ก−5
+
3
16
#
.
}
4๐‘ก−5
16 .
Exercise 3.9.1: Check that $x_1$ and $x_2$ solve the problem. Check both that they satisfy the differential equation and that they satisfy the initial conditions.
Undetermined coefficients
The method of undetermined coefficients also works for systems. The only difference is that we use unknown vectors rather than just numbers. The same caveats apply to undetermined coefficients for systems as for single equations. This method does not always work. Furthermore, if the right-hand side is complicated, we have to solve for lots of variables. Each element of an unknown vector is an unknown number. In a system of 3 equations with say 4 unknown vectors (this would not be uncommon), we already have 12 unknown numbers to solve for. The method can turn into a lot of tedious work if done by hand. As the method is essentially the same as for single equations, let us just do an example.
Example 3.9.3: Let $A = \begin{bmatrix} -1 & 0 \\ -2 & 1 \end{bmatrix}$. Find a particular solution of $\vec{x}' = A\vec{x} + \vec{f}$ where $\vec{f}(t) = \begin{bmatrix} e^t \\ t \end{bmatrix}$.

Note that we can solve this system in an easier way (can you see how?), but for the purposes of the example, let us use the eigenvalue method plus undetermined coefficients. The eigenvalues of $A$ are $-1$ and $1$ and corresponding eigenvectors are $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$
respectively. Hence our complementary solution is
\[ \vec{x}_c = \alpha_1\begin{bmatrix} 1 \\ 1 \end{bmatrix}e^{-t} + \alpha_2\begin{bmatrix} 0 \\ 1 \end{bmatrix}e^t , \]
for some arbitrary constants $\alpha_1$ and $\alpha_2$.

We would want to guess a particular solution of
\[ \vec{x} = \vec{a}\,e^t + \vec{b}\,t + \vec{c} . \]
However, something of the form $\vec{a}\,e^t$ appears in the complementary solution. Because we do not yet know if the vector $\vec{a}$ is a multiple of $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$, we do not know if a conflict arises. It is possible that there is no conflict, but to be safe we should also try $\vec{b}\,te^t$. Here we find the crux of the difference between a single equation and systems. We try both terms $\vec{a}\,e^t$ and $\vec{b}\,te^t$ in the solution, not just the term $\vec{b}\,te^t$. Therefore, we try
\[ \vec{x} = \vec{a}\,e^t + \vec{b}\,te^t + \vec{c}\,t + \vec{d} . \]
Thus we have 8 unknowns. We write $\vec{a} = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}$, $\vec{b} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$, $\vec{c} = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}$, and $\vec{d} = \begin{bmatrix} d_1 \\ d_2 \end{bmatrix}$. We plug $\vec{x}$ into the equation. First let us compute $\vec{x}'$:
\[ \vec{x}' = \left(\vec{a} + \vec{b}\right)e^t + \vec{b}\,te^t + \vec{c} = \begin{bmatrix} a_1 + b_1 \\ a_2 + b_2 \end{bmatrix}e^t + \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}te^t + \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} . \]
Now ๐‘ฅ®0 must equal ๐ด ๐‘ฅ® + ๐‘“®, which is
® ๐‘ก + ๐ด®๐‘ ๐‘ก + ๐ด ๐‘‘® + ๐‘“®
๐ด ๐‘ฅ® + ๐‘“® = ๐ด®๐‘Ž ๐‘’ ๐‘ก + ๐ด๐‘๐‘ก๐‘’
−๐‘Ž1
−๐‘ 1
−๐‘ 1
−๐‘‘1
1 ๐‘ก
0
๐‘’ +
๐‘ก
=
๐‘’๐‘ก +
๐‘ก๐‘’ ๐‘ก +
๐‘ก+
+
−2๐‘Ž 1 + ๐‘Ž 2
−2๐‘ 1 + ๐‘ 2
−2๐‘1 + ๐‘ 2
−2๐‘‘1 + ๐‘‘2
0
1
−๐‘Ž 1 + 1
−๐‘ 1
−๐‘1
−๐‘‘1
=
๐‘’๐‘ก +
๐‘ก๐‘’ ๐‘ก +
๐‘ก+
.
−2๐‘Ž1 + ๐‘Ž 2
−2๐‘ 1 + ๐‘ 2
−2๐‘1 + ๐‘ 2 + 1
−2๐‘‘1 + ๐‘‘2
We identify the coefficients of $e^t$, $te^t$, $t$ and any constant vectors in $\vec{x}'$ and in $A\vec{x} + \vec{f}$ to find the equations:
\begin{align*}
a_1 + b_1 &= -a_1 + 1 , & b_1 &= -b_1 , & 0 &= -c_1 , & c_1 &= -d_1 , \\
a_2 + b_2 &= -2a_1 + a_2 , & b_2 &= -2b_1 + b_2 , & 0 &= -2c_1 + c_2 + 1 , & c_2 &= -2d_1 + d_2 .
\end{align*}
We could write the $8 \times 9$ augmented matrix and start row reduction, but it is easier to just solve the equations in an ad hoc manner. Immediately we see that $b_1 = 0$, $c_1 = 0$, $d_1 = 0$. Plugging these back in, we get that $c_2 = -1$ and $d_2 = -1$. The remaining equations that tell us something are
\[ a_1 = -a_1 + 1 , \qquad a_2 + b_2 = -2a_1 + a_2 . \]
So $a_1 = 1/2$ and $b_2 = -1$. Finally, $a_2$ can be arbitrary and still satisfy the equations. We are looking for just a single solution so presumably the simplest one is when $a_2 = 0$. Therefore,
\[ \vec{x} = \vec{a}\,e^t + \vec{b}\,te^t + \vec{c}\,t + \vec{d} = \begin{bmatrix} 1/2 \\ 0 \end{bmatrix}e^t + \begin{bmatrix} 0 \\ -1 \end{bmatrix}te^t + \begin{bmatrix} 0 \\ -1 \end{bmatrix}t + \begin{bmatrix} 0 \\ -1 \end{bmatrix} = \begin{bmatrix} \frac{1}{2}e^t \\ -te^t - t - 1 \end{bmatrix} . \]
That is, $x_1 = \frac{1}{2}e^t$, $x_2 = -te^t - t - 1$. We would add this to the complementary solution to get the general solution of the problem. Notice that both $\vec{a}\,e^t$ and $\vec{b}\,te^t$ were really needed.
Exercise 3.9.2: Check that $x_1$ and $x_2$ solve the problem. Try setting $a_2 = 1$ and check we get a solution as well. What is the difference between the two solutions we obtained (one with $a_2 = 0$ and one with $a_2 = 1$)?
As you can see, other than the handling of conflicts, undetermined coefficients works
exactly the same as it did for single equations. However, the computations can get out of
hand pretty quickly for systems. The equation we considered was pretty simple.
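Because the bookkeeping is error-prone, a symbolic check is reassuring. Here is a short SymPy sketch (an illustration added here, not part of the original text) verifying the particular solution of Example 3.9.3:

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[-1, 0], [-2, 1]])
    f = sp.Matrix([sp.exp(t), t])

    # the particular solution found above
    x = sp.Matrix([sp.exp(t)/2, -t*sp.exp(t) - t - 1])

    residual = sp.simplify(x.diff(t) - (A*x + f))
    print(residual)   # Matrix([[0], [0]]) -- x solves the system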
3.9.2 First order variable coefficient

Variation of parameters
Just as for a single equation, there is the method of variation of parameters. For constant coefficient systems, it is essentially the same thing as the integrating factor method we discussed earlier. However, this method works for any linear system, even if it is not constant coefficient, provided we somehow solve the associated homogeneous problem.

Suppose we have the equation
\[ \vec{x}' = A(t)\,\vec{x} + \vec{f}(t) . \tag{3.10} \]
Further, suppose we solved the associated homogeneous equation $\vec{x}' = A(t)\,\vec{x}$ and found a fundamental matrix solution $X(t)$. The general solution to the associated homogeneous equation is $X(t)\vec{c}$ for a constant vector $\vec{c}$. Just like for variation of parameters for a single equation we try the solution to the nonhomogeneous equation of the form
\[ \vec{x}_p = X(t)\,\vec{u}(t) , \]
where ๐‘ข® (๐‘ก) is a vector-valued function instead of a constant. We substitute ๐‘ฅ®๐‘ into (3.10) to
obtain
๐‘‹ 0(๐‘ก) ๐‘ข® (๐‘ก) + ๐‘‹(๐‘ก) ๐‘ข® 0(๐‘ก) = ๐ด(๐‘ก) ๐‘‹(๐‘ก) ๐‘ข® (๐‘ก) + ๐‘“®(๐‘ก).
|
{z
}
๐‘ฅ®0๐‘ (๐‘ก)
|
{z
๐ด(๐‘ก) ๐‘ฅ®๐‘ (๐‘ก)
}
But ๐‘‹(๐‘ก) is a fundamental matrix solution to the homogeneous problem. So ๐‘‹ 0(๐‘ก) = ๐ด(๐‘ก)๐‘‹(๐‘ก),
and
0 0 ๐‘‹
(๐‘ก) ๐‘ข® (๐‘ก) + ๐‘‹(๐‘ก) ๐‘ข® 0(๐‘ก) = ๐‘‹
(๐‘ก) ๐‘ข® (๐‘ก) + ๐‘“®(๐‘ก).
Hence ๐‘‹(๐‘ก) ๐‘ข® 0(๐‘ก) = ๐‘“®(๐‘ก). If we compute [๐‘‹(๐‘ก)]−1 , then ๐‘ข® 0(๐‘ก) = [๐‘‹(๐‘ก)]−1 ๐‘“®(๐‘ก). We integrate to
obtain ๐‘ข® and we have the particular solution ๐‘ฅ®๐‘ = ๐‘‹(๐‘ก) ๐‘ข® (๐‘ก). Let us write this as a formula
๐‘ฅ®๐‘ = ๐‘‹(๐‘ก)
∫
[๐‘‹(๐‘ก)]−1 ๐‘“®(๐‘ก) ๐‘‘๐‘ก.
If ๐ด is constant and ๐‘‹(๐‘ก) = ๐‘’ ๐‘ก๐ด , then [๐‘‹(๐‘ก)]−1 = ๐‘’ −๐‘ก๐ด . We get a solution ๐‘ฅ®๐‘ =
∫
๐‘’ ๐‘ก๐ด ๐‘’ −๐‘ก๐ด ๐‘“®(๐‘ก) ๐‘‘๐‘ก, which is precisely what we got using the integrating factor method.
Example 3.9.4: Find a particular solution to
\[ \vec{x}' = \frac{1}{t^2+1}\begin{bmatrix} t & -1 \\ 1 & t \end{bmatrix}\vec{x} + \begin{bmatrix} t \\ 1 \end{bmatrix}(t^2+1) . \tag{3.11} \]
Here ๐ด = ๐‘ก 21+1 1๐‘ก −1
is most definitely not constant. Perhaps by a lucky guess, we find
๐‘ก
1 −๐‘ก that ๐‘‹ = ๐‘ก 1 solves ๐‘‹ 0(๐‘ก) = ๐ด(๐‘ก)๐‘‹(๐‘ก). Once we know the complementary solution we
can easily find a solution to (3.11). First we find
−1
[๐‘‹(๐‘ก)]
1
1 ๐‘ก
.
= 2
๐‘ก + 1 −๐‘ก 1
Next we know a particular solution to (3.11) is
\begin{align*}
\vec{x}_p &= X(t)\int [X(t)]^{-1}\vec{f}(t)\,dt \\
&= \begin{bmatrix} 1 & -t \\ t & 1 \end{bmatrix}\int \frac{1}{t^2+1}\begin{bmatrix} 1 & t \\ -t & 1 \end{bmatrix}\begin{bmatrix} t \\ 1 \end{bmatrix}(t^2+1)\,dt \\
&= \begin{bmatrix} 1 & -t \\ t & 1 \end{bmatrix}\int \begin{bmatrix} 2t \\ -t^2 + 1 \end{bmatrix}dt
= \begin{bmatrix} 1 & -t \\ t & 1 \end{bmatrix}\begin{bmatrix} t^2 \\ -\frac{1}{3}t^3 + t \end{bmatrix}
= \begin{bmatrix} \frac{1}{3}t^4 \\[2pt] \frac{2}{3}t^3 + t \end{bmatrix} .
\end{align*}
Adding the complementary solution we find the general solution to (3.11):
\[ \vec{x} = \begin{bmatrix} 1 & -t \\ t & 1 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} + \begin{bmatrix} \frac{1}{3}t^4 \\[2pt] \frac{2}{3}t^3 + t \end{bmatrix} = \begin{bmatrix} c_1 - c_2 t + \frac{1}{3}t^4 \\[2pt] c_2 + (c_1 + 1)t + \frac{2}{3}t^3 \end{bmatrix} . \]

Exercise 3.9.3: Check that $x_1 = \frac{1}{3}t^4$ and $x_2 = \frac{2}{3}t^3 + t$ really solve (3.11).
In the variation of parameters, just like in the integrating factor method, we can obtain the general solution by adding in constants of integration. That is, we will add $X(t)\vec{c}$ for a vector of arbitrary constants. But that is precisely the complementary solution.
3.9.3 Second order constant coefficients

Undetermined coefficients
We have already seen a simple example of the method of undetermined coefficients for
second order systems in § 3.6. This method is essentially the same as undetermined
coefficients for first order systems. There are some simplifications that we can make, as we
did in § 3.6. Let the equation be
\[ \vec{x}'' = A\vec{x} + \vec{F}(t) , \]
where $A$ is a constant matrix. If $\vec{F}(t)$ is of the form $\vec{F}_0\cos(\omega t)$, then as two derivatives of cosine is again cosine we can try a solution of the form
\[ \vec{x}_p = \vec{c}\cos(\omega t) , \]
and we do not need to introduce sines.
If the ๐น® is a sum of cosines, note that we still have the superposition principle. If
®
๐น(๐‘ก)
= ๐น®0 cos(๐œ”0 ๐‘ก) + ๐น®1 cos(๐œ”1 ๐‘ก), then we would try ๐‘Ž® cos(๐œ”0 ๐‘ก) for the problem ๐‘ฅ®00 =
๐ด ๐‘ฅ® + ๐น®0 cos(๐œ”0 ๐‘ก), and we would try ๐‘® cos(๐œ”1 ๐‘ก) for the problem ๐‘ฅ®00 = ๐ด ๐‘ฅ® + ๐น®1 cos(๐œ”1 ๐‘ก). Then
we sum the solutions.
However, if there is duplication with the complementary solution, or the equation is of the form $\vec{x}'' = A\vec{x}' + B\vec{x} + \vec{F}(t)$, then we need to do the same thing as we do for first order systems.
You will never go wrong with putting in more terms than needed into your guess. You
will find that the extra coefficients will turn out to be zero. But it is useful to save some
time and effort.
Eigenvector decomposition
If we have the system
\[ \vec{x}'' = A\vec{x} + \vec{f}(t) , \]
we can do eigenvector decomposition, just like for first order systems.

Let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the eigenvalues and $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ be eigenvectors. Again form the matrix $E = [\, \vec{v}_1 \;\; \vec{v}_2 \;\; \cdots \;\; \vec{v}_n \,]$. Write
\[ \vec{x}(t) = \vec{v}_1\xi_1(t) + \vec{v}_2\xi_2(t) + \cdots + \vec{v}_n\xi_n(t) . \]
Decompose $\vec{f}$ in terms of the eigenvectors:
\[ \vec{f}(t) = \vec{v}_1 g_1(t) + \vec{v}_2 g_2(t) + \cdots + \vec{v}_n g_n(t) , \]
where, again, $\vec{g} = E^{-1}\vec{f}$.
We plug in, and as before we obtain
\begin{align*}
\overbrace{\vec{v}_1\xi_1'' + \vec{v}_2\xi_2'' + \cdots + \vec{v}_n\xi_n''}^{\vec{x}''}
&= \overbrace{A\left(\vec{v}_1\xi_1 + \vec{v}_2\xi_2 + \cdots + \vec{v}_n\xi_n\right)}^{A\vec{x}} + \overbrace{\vec{v}_1 g_1 + \vec{v}_2 g_2 + \cdots + \vec{v}_n g_n}^{\vec{f}} \\
&= A\vec{v}_1\xi_1 + A\vec{v}_2\xi_2 + \cdots + A\vec{v}_n\xi_n + \vec{v}_1 g_1 + \vec{v}_2 g_2 + \cdots + \vec{v}_n g_n \\
&= \vec{v}_1\lambda_1\xi_1 + \vec{v}_2\lambda_2\xi_2 + \cdots + \vec{v}_n\lambda_n\xi_n + \vec{v}_1 g_1 + \vec{v}_2 g_2 + \cdots + \vec{v}_n g_n \\
&= \vec{v}_1(\lambda_1\xi_1 + g_1) + \vec{v}_2(\lambda_2\xi_2 + g_2) + \cdots + \vec{v}_n(\lambda_n\xi_n + g_n) .
\end{align*}
We identify the coefficients of the eigenvectors to get the equations
\begin{align*}
\xi_1'' &= \lambda_1\xi_1 + g_1 , \\
\xi_2'' &= \lambda_2\xi_2 + g_2 , \\
&\;\;\vdots \\
\xi_n'' &= \lambda_n\xi_n + g_n .
\end{align*}
Each one of these equations is independent of the others. We solve each equation using the methods of chapter 2. We write $\vec{x}(t) = \vec{v}_1\xi_1(t) + \vec{v}_2\xi_2(t) + \cdots + \vec{v}_n\xi_n(t)$, and we are done; we have a particular solution. We find the general solutions for $\xi_1$ through $\xi_n$, and again $\vec{x}(t) = \vec{v}_1\xi_1(t) + \vec{v}_2\xi_2(t) + \cdots + \vec{v}_n\xi_n(t)$ is the general solution (and not just a particular solution).
Example 3.9.5: Let us do the example from § 3.6 using this method. The equation is
\[ \vec{x}'' = \begin{bmatrix} -3 & 1 \\ 2 & -2 \end{bmatrix}\vec{x} + \begin{bmatrix} 0 \\ 2 \end{bmatrix}\cos(3t) . \]
The eigenvalues are $-1$ and $-4$, with eigenvectors $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$. Therefore, $E = \begin{bmatrix} 1 & 1 \\ 2 & -1 \end{bmatrix}$ and $E^{-1} = \frac{1}{3}\begin{bmatrix} 1 & 1 \\ 2 & -1 \end{bmatrix}$. Therefore,
\[ \begin{bmatrix} g_1 \\ g_2 \end{bmatrix} = E^{-1}\vec{f}(t) = \frac{1}{3}\begin{bmatrix} 1 & 1 \\ 2 & -1 \end{bmatrix}\begin{bmatrix} 0 \\ 2\cos(3t) \end{bmatrix} = \begin{bmatrix} \frac{2}{3}\cos(3t) \\[2pt] -\frac{2}{3}\cos(3t) \end{bmatrix} . \]
So after the whole song and dance of plugging in, the equations we get are
\[ \xi_1'' = -\xi_1 + \frac{2}{3}\cos(3t) , \qquad \xi_2'' = -4\xi_2 - \frac{2}{3}\cos(3t) . \]
For each equation we use the method of undetermined coefficients. We try $C_1\cos(3t)$ for the first equation and $C_2\cos(3t)$ for the second equation. We plug in to get
\begin{align*}
-9C_1\cos(3t) &= -C_1\cos(3t) + \frac{2}{3}\cos(3t) , \\
-9C_2\cos(3t) &= -4C_2\cos(3t) - \frac{2}{3}\cos(3t) .
\end{align*}
We solve each of these equations separately. We get $-9C_1 = -C_1 + 2/3$ and $-9C_2 = -4C_2 - 2/3$. And hence $C_1 = -\frac{1}{12}$ and $C_2 = \frac{2}{15}$. So our particular solution is
\[ \vec{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}\left(\frac{-1}{12}\right)\cos(3t) + \begin{bmatrix} 1 \\ -1 \end{bmatrix}\left(\frac{2}{15}\right)\cos(3t) = \begin{bmatrix} 1/20 \\ -3/10 \end{bmatrix}\cos(3t) . \]
This solution matches what we got previously in § 3.6.
3.9.4 Exercises
Exercise 3.9.4: Find a particular solution to $x' = x + 2y + 2t$, $y' = 3x + 2y - 4$,

a) using integrating factor method,   b) using eigenvector decomposition,   c) using undetermined coefficients.
Exercise 3.9.5: Find the general solution to $x' = 4x + y - 1$, $y' = x + 4y - e^t$,

a) using integrating factor method,   b) using eigenvector decomposition,   c) using undetermined coefficients.
Exercise 3.9.6: Find the general solution to $x_1'' = -6x_1 + 3x_2 + \cos(t)$, $x_2'' = 2x_1 - 7x_2 + 3\cos(t)$,

a) using eigenvector decomposition,   b) using undetermined coefficients.
Exercise 3.9.7: Find the general solution to $x_1'' = -6x_1 + 3x_2 + \cos(2t)$, $x_2'' = 2x_1 - 7x_2 + 3\cos(2t)$,

a) using eigenvector decomposition,   b) using undetermined coefficients.

Exercise 3.9.8: Take the equation
\[ \vec{x}' = \begin{bmatrix} \frac{1}{t} & -1 \\ 1 & \frac{1}{t} \end{bmatrix}\vec{x} + \begin{bmatrix} t^2 \\ -t \end{bmatrix} . \]

a) Check that $\vec{x}_c = c_1\begin{bmatrix} t\sin t \\ -t\cos t \end{bmatrix} + c_2\begin{bmatrix} t\cos t \\ t\sin t \end{bmatrix}$ is the complementary solution.

b) Use variation of parameters to find a particular solution.
Exercise 3.9.101: Find a particular solution to $x' = 5x + 4y + t$, $y' = x + 8y - t$,

a) using integrating factor method,   b) using eigenvector decomposition,   c) using undetermined coefficients.

Exercise 3.9.102: Find a particular solution to $x' = y + e^t$, $y' = x + e^t$,

a) using integrating factor method,   b) using eigenvector decomposition,   c) using undetermined coefficients.
Exercise 3.9.103: Solve $x_1' = x_2 + t$, $x_2' = x_1 + t$ with initial conditions $x_1(0) = 1$, $x_2(0) = 2$, using eigenvector decomposition.

Exercise 3.9.104: Solve $x_1'' = -3x_1 + x_2 + t$, $x_2'' = 9x_1 + 5x_2 + \cos(t)$ with initial conditions $x_1(0) = 0$, $x_2(0) = 0$, $x_1'(0) = 0$, $x_2'(0) = 0$, using eigenvector decomposition.
Chapter 4
Fourier series and PDEs
4.1 Boundary value problems

Note: 2 lectures, similar to §3.8 in [EP], §10.1 and §11.1 in [BD]

4.1.1 Boundary value problems
Before we tackle the Fourier series, we study the so-called boundary value problems (or endpoint problems). Consider
\[ x'' + \lambda x = 0 , \quad x(a) = 0 , \quad x(b) = 0 , \]
for some constant $\lambda$, where $x(t)$ is defined for $t$ in the interval $[a, b]$. Previously we specified the value of the solution and its derivative at a single point. Now we specify the value of the solution at two different points. As $x = 0$ is a solution, existence of solutions is not a problem. Uniqueness of solutions is another issue. The general solution to $x'' + \lambda x = 0$ has two arbitrary constants†. It is, therefore, natural (but wrong) to believe that requiring two conditions guarantees a unique solution.
Example 4.1.1: Take $\lambda = 1$, $a = 0$, $b = \pi$. That is,
\[ x'' + x = 0 , \quad x(0) = 0 , \quad x(\pi) = 0 . \]
Then $x = \sin t$ is another solution (besides $x = 0$) satisfying both boundary conditions. There are more. Write down the general solution of the differential equation, which is $x = A\cos t + B\sin t$. The condition $x(0) = 0$ forces $A = 0$. Letting $x(\pi) = 0$ does not give us any more information as $x = B\sin t$ already satisfies both boundary conditions. Hence, there are infinitely many solutions of the form $x = B\sin t$, where $B$ is an arbitrary constant.
Example 4.1.2: On the other hand, consider $\lambda = 2$. That is,
\[ x'' + 2x = 0 , \quad x(0) = 0 , \quad x(\pi) = 0 . \]
† See subsection 0.2.4 on page 13 or Example 2.2.1 on page 85 and Example 2.2.3 on page 88.
Then the general solution is $x = A\cos(\sqrt{2}\,t) + B\sin(\sqrt{2}\,t)$. Letting $x(0) = 0$ still forces $A = 0$. We apply the second condition to find $0 = x(\pi) = B\sin(\sqrt{2}\,\pi)$. As $\sin(\sqrt{2}\,\pi) \neq 0$ we obtain $B = 0$. Therefore $x = 0$ is the unique solution to this problem.
What is going on? We will be interested in finding which constants $\lambda$ allow a nonzero solution, and we will be interested in finding those solutions. This problem is an analogue of finding eigenvalues and eigenvectors of matrices.
4.1.2 Eigenvalue problems
For basic Fourier series theory we will need the following three eigenvalue problems. We will consider more general equations and boundary conditions, but we will postpone this until chapter 5.
\[ x'' + \lambda x = 0 , \quad x(a) = 0 , \quad x(b) = 0 , \tag{4.1} \]
\[ x'' + \lambda x = 0 , \quad x'(a) = 0 , \quad x'(b) = 0 , \tag{4.2} \]
and
\[ x'' + \lambda x = 0 , \quad x(a) = x(b) , \quad x'(a) = x'(b) . \tag{4.3} \]
A number ๐œ† is called an eigenvalue of (4.1) (resp. (4.2) or (4.3)) if and only if there exists a
nonzero (not identically zero) solution to (4.1) (resp. (4.2) or (4.3)) given that specific ๐œ†. A
nonzero solution is called a corresponding eigenfunction.
Note the similarity to eigenvalues and eigenvectors of matrices. The similarity is not just coincidental. If we think of the equations as differential operators, then we are doing the same exact thing. Think of a function $x(t)$ as a vector with infinitely many components (one for each $t$). Let $L = -\frac{d^2}{dt^2}$ be the linear operator. Then the eigenvalue/eigenfunction pair should be $\lambda$ and nonzero $x$ such that $Lx = \lambda x$. In other words, we are looking for nonzero functions $x$ satisfying certain endpoint conditions that solve $(L - \lambda)x = 0$. A lot of the formalism from linear algebra still applies here, though we will not pursue this line of reasoning too far.
Example 4.1.3: Let us find the eigenvalues and eigenfunctions of
\[ x'' + \lambda x = 0 , \quad x(0) = 0 , \quad x(\pi) = 0 . \]
We have to handle the cases $\lambda > 0$, $\lambda = 0$, $\lambda < 0$ separately. First suppose that $\lambda > 0$. Then the general solution to $x'' + \lambda x = 0$ is
\[ x = A\cos(\sqrt{\lambda}\,t) + B\sin(\sqrt{\lambda}\,t) . \]
The condition $x(0) = 0$ implies immediately $A = 0$. Next
\[ 0 = x(\pi) = B\sin(\sqrt{\lambda}\,\pi) . \]
If $B$ is zero, then $x$ is not a nonzero solution. So to get a nonzero solution we must have that $\sin(\sqrt{\lambda}\,\pi) = 0$. Hence, $\sqrt{\lambda}\,\pi$ must be an integer multiple of $\pi$. In other words,
$\sqrt{\lambda} = k$ for a positive integer $k$. Hence the positive eigenvalues are $k^2$ for all integers $k \geq 1$. Corresponding eigenfunctions can be taken as $x = \sin(kt)$. Just like for eigenvectors, constant multiples of an eigenfunction are also eigenfunctions, so we only need to pick one.

Now suppose that $\lambda = 0$. In this case the equation is $x'' = 0$, and its general solution is $x = At + B$. The condition $x(0) = 0$ implies that $B = 0$, and $x(\pi) = 0$ implies that $A = 0$. This means that $\lambda = 0$ is not an eigenvalue.
Finally, suppose that $\lambda < 0$. In this case we have the general solution∗
\[ x = A\cosh(\sqrt{-\lambda}\,t) + B\sinh(\sqrt{-\lambda}\,t) . \]
Letting $x(0) = 0$ implies that $A = 0$ (recall $\cosh 0 = 1$ and $\sinh 0 = 0$). So our solution must be $x = B\sinh(\sqrt{-\lambda}\,t)$ and satisfy $x(\pi) = 0$. This is only possible if $B$ is zero. Why? Because $\sinh\xi$ is only zero when $\xi = 0$. You should plot sinh to see this fact. We can also see this from the definition of sinh. We get $0 = \sinh\xi = \frac{e^\xi - e^{-\xi}}{2}$. Hence $e^\xi = e^{-\xi}$, which implies $\xi = -\xi$ and that is only true if $\xi = 0$. So there are no negative eigenvalues.
In summary, the eigenvalues and corresponding eigenfunctions are
\[ \lambda_k = k^2 \quad \text{with an eigenfunction} \quad x_k = \sin(kt) \qquad \text{for all integers } k \geq 1 . \]
Example 4.1.4: Let us compute the eigenvalues and eigenfunctions of
\[ x'' + \lambda x = 0 , \quad x'(0) = 0 , \quad x'(\pi) = 0 . \]
Again we have to handle the cases $\lambda > 0$, $\lambda = 0$, $\lambda < 0$ separately. First suppose that $\lambda > 0$. The general solution to $x'' + \lambda x = 0$ is $x = A\cos(\sqrt{\lambda}\,t) + B\sin(\sqrt{\lambda}\,t)$. So
\[ x' = -A\sqrt{\lambda}\sin(\sqrt{\lambda}\,t) + B\sqrt{\lambda}\cos(\sqrt{\lambda}\,t) . \]
The condition ๐‘ฅ 0(0) = 0 implies immediately ๐ต = 0. Next
√
√
0 = ๐‘ฅ 0(๐œ‹) = −๐ด ๐œ† sin( ๐œ† ๐œ‹).
√
√
Again ๐ด cannot be zero if ๐œ† is to be an eigenvalue, and sin( ๐œ† ๐œ‹) is only zero if ๐œ† = ๐‘˜ for
a positive integer ๐‘˜. Hence the positive eigenvalues are again ๐‘˜ 2 for all integers ๐‘˜ ≥ 1. And
the corresponding eigenfunctions can be taken as ๐‘ฅ = cos(๐‘˜๐‘ก).
Now suppose that ๐œ† = 0. In this case the equation is ๐‘ฅ 00 = 0 and the general solution is
๐‘ฅ = ๐ด๐‘ก + ๐ต so ๐‘ฅ 0 = ๐ด. The condition ๐‘ฅ 0(0) = 0 implies that ๐ด = 0. The condition ๐‘ฅ 0(๐œ‹) = 0
also implies ๐ด = 0. Hence ๐ต could be anything (let us take it to be 1). So ๐œ† = 0 is an
eigenvalue and ๐‘ฅ = 1 is a corresponding eigenfunction.
√
√
Finally, let ๐œ† < 0. In this case the general solution is ๐‘ฅ = ๐ด cosh( −๐œ† ๐‘ก) + ๐ต sinh( −๐œ† ๐‘ก)
and
√
√
√
√
๐‘ฅ 0 = ๐ด −๐œ† sinh( −๐œ† ๐‘ก) + ๐ต −๐œ† cosh( −๐œ† ๐‘ก).
∗ Recall that $\cosh s = \frac{1}{2}(e^s + e^{-s})$ and $\sinh s = \frac{1}{2}(e^s - e^{-s})$. As an exercise try the computation with the general solution written as $x = Ae^{\sqrt{-\lambda}\,t} + Be^{-\sqrt{-\lambda}\,t}$ (for different $A$ and $B$ of course).
We have already seen (with roles of $A$ and $B$ switched) that for this expression to be zero at $t = 0$ and $t = \pi$, we must have $A = B = 0$. Hence there are no negative eigenvalues.

In summary, the eigenvalues and corresponding eigenfunctions are
\[ \lambda_k = k^2 \quad \text{with an eigenfunction} \quad x_k = \cos(kt) \qquad \text{for all integers } k \geq 1 , \]
and there is another eigenvalue
\[ \lambda_0 = 0 \quad \text{with an eigenfunction} \quad x_0 = 1 . \]
The following problem is the one that leads to the general Fourier series.
Example 4.1.5: Let us compute the eigenvalues and eigenfunctions of
\[ x'' + \lambda x = 0 , \quad x(-\pi) = x(\pi) , \quad x'(-\pi) = x'(\pi) . \]
We have not specified the values or the derivatives at the endpoints, but rather that they are the same at the beginning and at the end of the interval.

Let us skip $\lambda < 0$. The computations are the same as before, and again we find that there are no negative eigenvalues.

For $\lambda = 0$, the general solution is $x = At + B$. The condition $x(-\pi) = x(\pi)$ implies that $A = 0$ ($A\pi + B = -A\pi + B$ implies $A = 0$). The second condition $x'(-\pi) = x'(\pi)$ says nothing about $B$ and hence $\lambda = 0$ is an eigenvalue with a corresponding eigenfunction $x = 1$.
For $\lambda > 0$ we get that $x = A\cos(\sqrt{\lambda}\,t) + B\sin(\sqrt{\lambda}\,t)$. Now
\[ \underbrace{A\cos(-\sqrt{\lambda}\,\pi) + B\sin(-\sqrt{\lambda}\,\pi)}_{x(-\pi)} = \underbrace{A\cos(\sqrt{\lambda}\,\pi) + B\sin(\sqrt{\lambda}\,\pi)}_{x(\pi)} . \]
We remember that $\cos(-\theta) = \cos(\theta)$ and $\sin(-\theta) = -\sin(\theta)$. Therefore,
\[ A\cos(\sqrt{\lambda}\,\pi) - B\sin(\sqrt{\lambda}\,\pi) = A\cos(\sqrt{\lambda}\,\pi) + B\sin(\sqrt{\lambda}\,\pi) . \]
Hence either $B = 0$ or $\sin(\sqrt{\lambda}\,\pi) = 0$. Similarly (exercise) if we differentiate $x$ and plug in the second condition we find that $A = 0$ or $\sin(\sqrt{\lambda}\,\pi) = 0$. Therefore, unless we want $A$ and $B$ to both be zero (which we do not) we must have $\sin(\sqrt{\lambda}\,\pi) = 0$. Hence, $\sqrt{\lambda}$ is an integer and the eigenvalues are yet again $\lambda = k^2$ for an integer $k \geq 1$. In this case, however, $x = A\cos(kt) + B\sin(kt)$ is an eigenfunction for any $A$ and any $B$. So we have two linearly independent eigenfunctions $\sin(kt)$ and $\cos(kt)$. Remember that for a matrix we can also have two eigenvectors corresponding to a single eigenvalue if the eigenvalue is repeated.

In summary, the eigenvalues and corresponding eigenfunctions are
\[ \lambda_k = k^2 \quad \text{with eigenfunctions} \quad \cos(kt) \text{ and } \sin(kt) \qquad \text{for all integers } k \geq 1 , \]
and
\[ \lambda_0 = 0 \quad \text{with an eigenfunction} \quad x_0 = 1 . \]
4.1.3 Orthogonality of eigenfunctions
Something that will be very useful in the next section is the orthogonality property of the eigenfunctions. This is an analogue of the following fact about eigenvectors of a matrix. A matrix is called symmetric if $A = A^T$ (it is equal to its transpose). Eigenvectors for two distinct eigenvalues of a symmetric matrix are orthogonal. The differential operators we are dealing with act much like a symmetric matrix. We, therefore, get the following theorem.
Theorem 4.1.1. Suppose that $x_1(t)$ and $x_2(t)$ are two eigenfunctions of the problem (4.1), (4.2) or (4.3) for two different eigenvalues $\lambda_1$ and $\lambda_2$. Then they are orthogonal in the sense that
\[ \int_a^b x_1(t)\,x_2(t)\,dt = 0 . \]
The terminology comes from the fact that the integral is a type of inner product. We will expand on this in the next section. The theorem has a very short, elegant, and illuminating proof so let us give it here. First, we have the following two equations:
\[ x_1'' + \lambda_1 x_1 = 0 \qquad \text{and} \qquad x_2'' + \lambda_2 x_2 = 0 . \]
Multiply the first by $x_2$ and the second by $x_1$ and subtract to get
\[ (\lambda_1 - \lambda_2)\,x_1 x_2 = x_2'' x_1 - x_2 x_1'' . \]
Now integrate both sides of the equation:
\begin{align*}
(\lambda_1 - \lambda_2)\int_a^b x_1 x_2\,dt &= \int_a^b \left(x_2'' x_1 - x_2 x_1''\right)dt \\
&= \int_a^b \frac{d}{dt}\left(x_2' x_1 - x_2 x_1'\right)dt \\
&= \Bigl[\,x_2' x_1 - x_2 x_1'\,\Bigr]_{t=a}^b = 0 .
\end{align*}
The last equality holds because of the boundary conditions. For example, if we consider (4.1) we have $x_1(a) = x_1(b) = x_2(a) = x_2(b) = 0$ and so $x_2' x_1 - x_2 x_1'$ is zero at both $a$ and $b$. As $\lambda_1 \neq \lambda_2$, the theorem follows.
Exercise 4.1.1 (easy): Finish the proof of the theorem (check the last equality in the proof) for the
cases (4.2) and (4.3).
The function $\sin(nt)$ is an eigenfunction for the problem $x'' + \lambda x = 0$, $x(0) = 0$, $x(\pi) = 0$. Hence for positive integers $n$ and $m$ we have the integrals
\[ \int_0^\pi \sin(mt)\sin(nt)\,dt = 0 , \qquad \text{when } m \neq n . \]
Similarly,
\[ \int_0^\pi \cos(mt)\cos(nt)\,dt = 0 , \quad \text{when } m \neq n , \qquad \text{and} \qquad \int_0^\pi \cos(nt)\,dt = 0 . \]
And finally we also get
\[ \int_{-\pi}^\pi \sin(mt)\sin(nt)\,dt = 0 , \quad \text{when } m \neq n , \qquad \text{and} \qquad \int_{-\pi}^\pi \sin(nt)\,dt = 0 , \]
\[ \int_{-\pi}^\pi \cos(mt)\cos(nt)\,dt = 0 , \quad \text{when } m \neq n , \qquad \text{and} \qquad \int_{-\pi}^\pi \cos(nt)\,dt = 0 , \]
and
\[ \int_{-\pi}^\pi \cos(mt)\sin(nt)\,dt = 0 \qquad \text{(even if } m = n\text{)} . \]
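All of these integrals are easy to confirm symbolically. A short SymPy sketch (an illustration added here, not part of the original text):

    import sympy as sp

    t = sp.symbols('t')
    m, n = 2, 5   # any distinct positive integers

    print(sp.integrate(sp.sin(m*t)*sp.sin(n*t), (t, -sp.pi, sp.pi)))  # 0
    print(sp.integrate(sp.cos(m*t)*sp.cos(n*t), (t, -sp.pi, sp.pi)))  # 0
    print(sp.integrate(sp.cos(m*t)*sp.sin(n*t), (t, -sp.pi, sp.pi)))  # 0
    print(sp.integrate(sp.sin(n*t)**2, (t, -sp.pi, sp.pi)))           # pi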
4.1.4 Fredholm alternative
We now touch on a very useful theorem in the theory of differential equations. The theorem holds in a more general setting than we are going to state it, but for our purposes the following statement is sufficient. We will give a slightly more general version in chapter 5.

Theorem 4.1.2 (Fredholm alternative∗). Exactly one of the following statements holds. Either
\[ x'' + \lambda x = 0 , \quad x(a) = 0 , \quad x(b) = 0 \tag{4.4} \]
has a nonzero solution, or
\[ x'' + \lambda x = f(t) , \quad x(a) = 0 , \quad x(b) = 0 \tag{4.5} \]
has a unique solution for every function $f$ continuous on $[a, b]$.
The theorem is also true for the other types of boundary conditions we considered. The theorem means that if $\lambda$ is not an eigenvalue, the nonhomogeneous equation (4.5) has a unique solution for every right-hand side. On the other hand if $\lambda$ is an eigenvalue, then (4.5) need not have a solution for every $f$, and furthermore, even if it happens to have a solution, the solution is not unique.

We also want to reinforce the idea here that linear differential operators have much in common with matrices. So it is no surprise that there is a finite-dimensional version of the Fredholm alternative for matrices as well. Let $A$ be an $n \times n$ matrix. The Fredholm alternative then states that either $(A - \lambda I)\vec{x} = \vec{0}$ has a nontrivial solution, or $(A - \lambda I)\vec{x} = \vec{b}$ has a unique solution for every $\vec{b}$.
A lot of intuition from linear algebra can be applied to linear differential operators, but
one must be careful of course. For example, one difference we have already seen is that in
general a differential operator will have infinitely many eigenvalues, while a matrix has
only finitely many.
∗ Named after the Swedish mathematician Erik Ivar Fredholm (1866–1927).
4.1.5 Application
Let us consider a physical application of an endpoint problem. Suppose we have a tightly stretched quickly spinning elastic string or rope of uniform linear density $\rho$, for example in kg/m. Let us put this problem into the $xy$-plane and both $x$ and $y$ are in meters. The $x$-axis represents the position on the string. The string rotates at angular velocity $\omega$, in radians/s. Imagine that the whole $xy$-plane rotates at angular velocity $\omega$. This way, the string stays in this $xy$-plane and $y$ measures its deflection from the equilibrium position, $y = 0$, on the $x$-axis. Hence the graph of $y$ gives the shape of the string. We consider an ideal string with no volume, just a mathematical curve. We suppose the tension on the string is a constant $T$ in Newtons. Assuming that the deflection is small, we can use Newton's second law (let us skip the derivation) to get the equation
\[ Ty'' + \rho\omega^2 y = 0 . \]
To check the units notice that the units of $y''$ are m/m², as the derivative is in terms of $x$.

Let $L$ be the length of the string (in meters) and the string is fixed at the beginning and end points. Hence, $y(0) = 0$ and $y(L) = 0$. See Figure 4.1.
Figure 4.1: Whirling string.
๐œŒ๐œ” 2
We rewrite the equation as ๐‘ฆ 00 + ๐‘‡ ๐‘ฆ = 0. The setup is similar to Example 4.1.3
on page 190, except for the interval length being ๐ฟ instead of ๐œ‹. We are looking for
๐œŒ๐œ”2
eigenvalues of ๐‘ฆ 00 + ๐œ†๐‘ฆ = 0, ๐‘ฆ(0) = 0, ๐‘ฆ(๐ฟ) = 0 where ๐œ† = ๐‘‡ . As before there are
no nonpositive
eigenvalues.
With ๐œ† > 0, the general solution to the equation is ๐‘ฆ =
√
√
๐ด cos( ๐œ† ๐‘ฅ) + ๐ต sin( ๐œ† ๐‘ฅ). The condition
๐‘ฆ(0) = 0 implies
that ๐ด = 0 as before. The
√
√
condition ๐‘ฆ(๐ฟ) = 0 implies that sin( ๐œ† ๐ฟ) = 0 and hence ๐œ† ๐ฟ = ๐‘˜๐œ‹ for some integer ๐‘˜ > 0,
so
๐œŒ๐œ”2
๐‘˜ 2 ๐œ‹2
=๐œ†= 2 .
๐‘‡
๐ฟ
What does this say about the shape of the string? It says that for all parameters ๐œŒ, ๐œ”, ๐‘‡
not satisfying the equation above, the string is in the equilibrium position, ๐‘ฆ = 0. When
๐œŒ๐œ” 2
๐‘‡
= ๐‘˜ ๐ฟ๐œ‹2 , then the string will “pop out” some distance ๐ต. We cannot compute ๐ต with the
information we have.
2 2
Let us assume that $\rho$ and $T$ are fixed and we are changing $\omega$. For most values of $\omega$ the string is in the equilibrium state. When the angular velocity $\omega$ hits a value $\omega = \frac{k\pi}{L}\sqrt{\frac{T}{\rho}}$, then the string pops out and has the shape of a sin wave crossing the $x$-axis $k - 1$ times between the end points. For example, at $k = 1$, the string does not cross the $x$-axis and the shape looks like in Figure 4.1 on the preceding page. On the other hand, when $k = 3$ the string crosses the $x$-axis 2 times, see Figure 4.2. When $\omega$ changes again, the string returns to the equilibrium position. The higher the angular velocity, the more times it crosses the $x$-axis when it is popped out.
Figure 4.2: Whirling string at the third eigenvalue ($k = 3$).
For another example, if you have a spinning jump rope (then $k = 1$ as it is completely "popped out") and you pull on the ends to increase the tension, then the velocity also increases for the rope to stay "popped out".
4.1.6 Exercises

Hint for the following exercises: Note that when $\lambda > 0$, then $\cos\bigl(\sqrt{\lambda}\,(t-a)\bigr)$ and $\sin\bigl(\sqrt{\lambda}\,(t-a)\bigr)$ are also solutions of the homogeneous equation.
Exercise 4.1.2: Compute all eigenvalues and eigenfunctions of $x'' + \lambda x = 0$, $x(a) = 0$, $x(b) = 0$ (assume $a < b$).

Exercise 4.1.3: Compute all eigenvalues and eigenfunctions of $x'' + \lambda x = 0$, $x'(a) = 0$, $x'(b) = 0$ (assume $a < b$).

Exercise 4.1.4: Compute all eigenvalues and eigenfunctions of $x'' + \lambda x = 0$, $x'(a) = 0$, $x(b) = 0$ (assume $a < b$).

Exercise 4.1.5: Compute all eigenvalues and eigenfunctions of $x'' + \lambda x = 0$, $x(a) = x(b)$, $x'(a) = x'(b)$ (assume $a < b$).

Exercise 4.1.6: We skipped the case of $\lambda < 0$ for the boundary value problem $x'' + \lambda x = 0$, $x(-\pi) = x(\pi)$, $x'(-\pi) = x'(\pi)$. Finish the calculation and show that there are no negative eigenvalues.
Exercise 4.1.101: Consider a spinning string of length 2 and linear density 0.1 and tension 3. Find the smallest angular velocity when the string pops out.
Exercise 4.1.102: Suppose $x'' + \lambda x = 0$ and $x(0) = 1$, $x(1) = 1$. Find all $\lambda$ for which there is more than one solution. Also find the corresponding solutions (only for the eigenvalues).

Exercise 4.1.103: Suppose $x'' + x = 0$ and $x(0) = 0$, $x'(\pi) = 1$. Find all the solution(s) if any exist.

Exercise 4.1.104: Consider $x' + \lambda x = 0$ and $x(0) = 0$, $x(1) = 0$. Why does it not have any eigenvalues? Why does any first order equation with two endpoint conditions such as above have no eigenvalues?

Exercise 4.1.105 (challenging): Suppose $x''' + \lambda x = 0$ and $x(0) = 0$, $x'(0) = 0$, $x(1) = 0$. Suppose that $\lambda > 0$. Find an equation that all such eigenvalues must satisfy. Hint: Note that $-\sqrt[3]{\lambda}$ is a root of $r^3 + \lambda = 0$.
4.2 The trigonometric series
Note: 2 lectures, §9.1 in [EP], §10.2 in [BD]
4.2.1 Periodic functions and motivation
As motivation for studying Fourier series, suppose we have the problem
\[ x'' + \omega_0^2 x = f(t) , \tag{4.6} \]
for some periodic function $f(t)$. We already solved
\[ x'' + \omega_0^2 x = F_0\cos(\omega t) . \tag{4.7} \]
One way to solve (4.6) is to decompose $f(t)$ as a sum of cosines (and sines) and then solve many problems of the form (4.7). We then use the principle of superposition, to sum up all the solutions we got to get a solution to (4.6).
Before we proceed, let us talk a little bit more in detail about periodic functions. A function is said to be periodic with period $P$ if $f(t) = f(t + P)$ for all $t$. For brevity we say $f(t)$ is $P$-periodic. Note that a $P$-periodic function is also $2P$-periodic, $3P$-periodic and so on. For example, $\cos(t)$ and $\sin(t)$ are $2\pi$-periodic. So are $\cos(kt)$ and $\sin(kt)$ for all integers $k$. The constant functions are an extreme example. They are periodic for any period (exercise).

Normally we start with a function $f(t)$ defined on some interval $[-L, L]$, and we want to extend $f(t)$ periodically to make it a $2L$-periodic function. We do this extension by defining a new function $F(t)$ such that for $t$ in $[-L, L]$, $F(t) = f(t)$. For $t$ in $[L, 3L]$, we define $F(t) = f(t - 2L)$, for $t$ in $[-3L, -L]$, $F(t) = f(t + 2L)$, and so on. To make that work we needed $f(-L) = f(L)$. We could have also started with $f$ defined only on the half-open interval $(-L, L]$ and then define $f(-L) = f(L)$.
Example 4.2.1: Define $f(t) = 1 - t^2$ on $[-1, 1]$. Now extend $f(t)$ periodically to a 2-periodic function. See Figure 4.3 on the facing page.

You should be careful to distinguish between $f(t)$ and its extension. A common mistake is to assume that a formula for $f(t)$ holds for its extension. It can be confusing when the formula for $f(t)$ is periodic, but with perhaps a different period.

Exercise 4.2.1: Define $f(t) = \cos t$ on $[-\pi/2, \pi/2]$. Take the $\pi$-periodic extension and sketch its graph. How does it compare to the graph of $\cos t$?
4.2.2 Inner product and eigenvector decomposition
Suppose we have a symmetric matrix, that is $A^T = A$. As we remarked before, eigenvectors of $A$ are then orthogonal. Here the word orthogonal means that if $\vec{v}$ and $\vec{w}$ are two eigenvectors of $A$ for distinct eigenvalues, then $\langle\vec{v}, \vec{w}\rangle = 0$. In this case the inner product $\langle\vec{v}, \vec{w}\rangle$ is the dot product, which can be computed as $\vec{v}^T\vec{w}$.

Figure 4.3: Periodic extension of the function $1 - t^2$.

To decompose a vector $\vec{v}$ in terms of mutually orthogonal vectors $\vec{w}_1$ and $\vec{w}_2$ we write
\[ \vec{v} = a_1\vec{w}_1 + a_2\vec{w}_2 . \]
Let us find the formula for $a_1$ and $a_2$. First let us compute
\[ \langle\vec{v}, \vec{w}_1\rangle = \langle a_1\vec{w}_1 + a_2\vec{w}_2 , \vec{w}_1\rangle = a_1\langle\vec{w}_1, \vec{w}_1\rangle + a_2\underbrace{\langle\vec{w}_2, \vec{w}_1\rangle}_{=0} = a_1\langle\vec{w}_1, \vec{w}_1\rangle . \]
Therefore,
\[ a_1 = \frac{\langle\vec{v}, \vec{w}_1\rangle}{\langle\vec{w}_1, \vec{w}_1\rangle} . \]
Similarly
\[ a_2 = \frac{\langle\vec{v}, \vec{w}_2\rangle}{\langle\vec{w}_2, \vec{w}_2\rangle} . \]
You probably remember this formula from vector calculus.
Example 4.2.2: Write $\vec{v} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$ as a linear combination of $\vec{w}_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$ and $\vec{w}_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$.

First note that $\vec{w}_1$ and $\vec{w}_2$ are orthogonal as $\langle \vec{w}_1, \vec{w}_2 \rangle = 1(1) + (-1)1 = 0$. Then
\[ a_1 = \frac{\langle \vec{v}, \vec{w}_1 \rangle}{\langle \vec{w}_1, \vec{w}_1 \rangle} = \frac{2(1) + 3(-1)}{1(1) + (-1)(-1)} = \frac{-1}{2} , \]
\[ a_2 = \frac{\langle \vec{v}, \vec{w}_2 \rangle}{\langle \vec{w}_2, \vec{w}_2 \rangle} = \frac{2 + 3}{1 + 1} = \frac{5}{2} . \]
Hence
\[ \begin{bmatrix} 2 \\ 3 \end{bmatrix} = \frac{-1}{2} \begin{bmatrix} 1 \\ -1 \end{bmatrix} + \frac{5}{2} \begin{bmatrix} 1 \\ 1 \end{bmatrix} . \]
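A quick numerical sanity check of the projection formulas (a sketch of ours using NumPy) reproduces the coefficients of the example:

```python
import numpy as np

v, w1, w2 = np.array([2.0, 3.0]), np.array([1.0, -1.0]), np.array([1.0, 1.0])

# Projection coefficients a_k = <v, w_k> / <w_k, w_k>.
a1 = np.dot(v, w1) / np.dot(w1, w1)   # -1/2
a2 = np.dot(v, w2) / np.dot(w2, w2)   #  5/2
print(a1, a2, np.allclose(a1 * w1 + a2 * w2, v))   # -0.5 2.5 True
```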
4.2.3 The trigonometric series
Instead of decomposing a vector in terms of eigenvectors of a matrix, we decompose a function in terms of eigenfunctions of a certain eigenvalue problem. The eigenvalue problem we use for the Fourier series is
\[ x'' + \lambda x = 0, \quad x(-\pi) = x(\pi), \quad x'(-\pi) = x'(\pi) . \]
We computed that eigenfunctions are $1$, $\cos(kt)$, $\sin(kt)$. That is, we want to find a representation of a $2\pi$-periodic function $f(t)$ as
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(nt) + b_n \sin(nt) . \]
This series is called the Fourier series* or the trigonometric series for $f(t)$. We write the coefficient of the eigenfunction $1$ as $\frac{a_0}{2}$ for convenience. We could also think of $1 = \cos(0t)$, so that we only need to look at $\cos(kt)$ and $\sin(kt)$.
As for matrices we want to find a projection of $f(t)$ onto the subspaces given by the eigenfunctions. So we want to define an inner product of functions. For example, to find $a_n$ we want to compute $\langle f(t), \cos(nt) \rangle$. We define the inner product as
\[ \langle f(t), g(t) \rangle \overset{\text{def}}{=} \int_{-\pi}^{\pi} f(t) \, g(t) \, dt . \]
With this definition of the inner product, we saw in the previous section that the eigenfunctions $\cos(kt)$ (including the constant eigenfunction), and $\sin(kt)$ are orthogonal in the sense that
\[ \langle \cos(mt), \cos(nt) \rangle = 0 \quad \text{for } m \neq n, \]
\[ \langle \sin(mt), \sin(nt) \rangle = 0 \quad \text{for } m \neq n, \]
\[ \langle \sin(mt), \cos(nt) \rangle = 0 \quad \text{for all } m \text{ and } n. \]
For $n = 1, 2, 3, \ldots$ we have
\[ \langle \cos(nt), \cos(nt) \rangle = \int_{-\pi}^{\pi} \cos(nt) \cos(nt) \, dt = \pi, \]
\[ \langle \sin(nt), \sin(nt) \rangle = \int_{-\pi}^{\pi} \sin(nt) \sin(nt) \, dt = \pi, \]
by elementary calculus. For the constant we get
\[ \langle 1, 1 \rangle = \int_{-\pi}^{\pi} 1 \cdot 1 \, dt = 2\pi . \]
*Named after the French mathematician Jean Baptiste Joseph Fourier (1768–1830).
The coefficients are given by
\[ a_n = \frac{\langle f(t), \cos(nt) \rangle}{\langle \cos(nt), \cos(nt) \rangle} = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \cos(nt) \, dt, \]
\[ b_n = \frac{\langle f(t), \sin(nt) \rangle}{\langle \sin(nt), \sin(nt) \rangle} = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \sin(nt) \, dt . \]
Compare these expressions with the finite-dimensional example. For $a_0$ we get a similar formula
\[ a_0 = 2 \, \frac{\langle f(t), 1 \rangle}{\langle 1, 1 \rangle} = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \, dt . \]
Let us check the formulas using the orthogonality properties. Suppose for a moment that
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(nt) + b_n \sin(nt) . \]
Then for $m \geq 1$ we have
\[ \langle f(t), \cos(mt) \rangle = \left\langle \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(nt) + b_n \sin(nt) , \cos(mt) \right\rangle \]
\[ = \frac{a_0}{2} \langle 1, \cos(mt) \rangle + \sum_{n=1}^{\infty} a_n \langle \cos(nt), \cos(mt) \rangle + b_n \langle \sin(nt), \cos(mt) \rangle \]
\[ = a_m \langle \cos(mt), \cos(mt) \rangle . \]
And hence $a_m = \dfrac{\langle f(t), \cos(mt) \rangle}{\langle \cos(mt), \cos(mt) \rangle}$.
Exercise 4.2.2: Carry out the calculation for $a_0$ and $b_m$.
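The coefficient formulas are easy to check numerically before doing any integration by hand. Here is a sketch (ours; the trapezoid-rule sample count is an arbitrary choice) that approximates $a_n$ and $b_n$ for a $2\pi$-periodic function; running it on the sawtooth of Example 4.2.3 below matches the coefficients computed there:

```python
import numpy as np

def fourier_coeffs(f, N, samples=10001):
    """Approximate a_n and b_n for n = 0, ..., N by the trapezoid rule."""
    t = np.linspace(-np.pi, np.pi, samples)
    a = [np.trapz(f(t) * np.cos(n * t), t) / np.pi for n in range(N + 1)]
    b = [np.trapz(f(t) * np.sin(n * t), t) / np.pi for n in range(N + 1)]
    return np.array(a), np.array(b)

# Sawtooth f(t) = t: expect a_n = 0 and b_n = 2*(-1)**(n+1)/n (Example 4.2.3).
a, b = fourier_coeffs(lambda t: t, N=5)
print(np.round(b[1:], 4))   # roughly [ 2. -1. 0.6667 -0.5 0.4 ]
```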
Example 4.2.3: Take the function
\[ f(t) = t \]
for $t$ in $(-\pi, \pi]$. Extend $f(t)$ periodically and write it as a Fourier series. This function is called the sawtooth.
The plot of the extended periodic function is given in Figure 4.4 on the next page. Let us compute the coefficients. We start with $a_0$,
\[ a_0 = \frac{1}{\pi} \int_{-\pi}^{\pi} t \, dt = 0 . \]
We will often use the result from calculus that says that the integral of an odd function over a symmetric interval is zero. Recall that an odd function is a function $\varphi(t)$ such that $\varphi(-t) = -\varphi(t)$. For example the functions $t$, $\sin t$, or (importantly for us) $t \cos(nt)$ are all odd functions. Thus
\[ a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} t \cos(nt) \, dt = 0 . \]
Figure 4.4: The graph of the sawtooth function.
Let us move to $b_n$. Another useful fact from calculus is that the integral of an even function over a symmetric interval is twice the integral of the same function over half the interval. Recall an even function is a function $\varphi(t)$ such that $\varphi(-t) = \varphi(t)$. For example $t \sin(nt)$ is even.
\[ b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} t \sin(nt) \, dt = \frac{2}{\pi} \int_{0}^{\pi} t \sin(nt) \, dt = \frac{2}{\pi} \left( \left[ \frac{-t \cos(nt)}{n} \right]_{t=0}^{\pi} + \frac{1}{n} \int_{0}^{\pi} \cos(nt) \, dt \right) = \frac{2}{\pi} \left( \frac{-\pi \cos(n\pi)}{n} + 0 \right) = \frac{-2 \cos(n\pi)}{n} = \frac{2 \, (-1)^{n+1}}{n} . \]
We have used the fact that
\[ \cos(n\pi) = (-1)^n = \begin{cases} 1 & \text{if } n \text{ even}, \\ -1 & \text{if } n \text{ odd}. \end{cases} \]
The series, therefore, is
\[ \sum_{n=1}^{\infty} \frac{2 \, (-1)^{n+1}}{n} \sin(nt) . \]
Let us write out the first 3 harmonics of the series for $f(t)$:
\[ 2 \sin(t) - \sin(2t) + \frac{2}{3} \sin(3t) + \cdots \]
The plot of these first three terms of the series, along with a plot of the first 20 terms, is given in Figure 4.5.
Figure 4.5: First 3 (left graph) and 20 (right graph) harmonics of the sawtooth function.
Example 4.2.4: Take the function
\[ f(t) = \begin{cases} 0 & \text{if } -\pi < t \leq 0, \\ \pi & \text{if } 0 < t \leq \pi. \end{cases} \]
Extend $f(t)$ periodically and write it as a Fourier series. This function or its variants appear often in applications and the function is called the square wave.
Figure 4.6: The graph of the square wave function.
The plot of the extended periodic function is given in Figure 4.6. Now we compute the coefficients. We start with $a_0$,
\[ a_0 = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \, dt = \frac{1}{\pi} \int_{0}^{\pi} \pi \, dt = \pi . \]
Next,
\[ a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \cos(nt) \, dt = \frac{1}{\pi} \int_{0}^{\pi} \pi \cos(nt) \, dt = 0 . \]
And finally,
\[ b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \sin(nt) \, dt = \frac{1}{\pi} \int_{0}^{\pi} \pi \sin(nt) \, dt = \left[ \frac{-\cos(nt)}{n} \right]_{t=0}^{\pi} = \frac{1 - \cos(\pi n)}{n} = \frac{1 - (-1)^n}{n} = \begin{cases} \frac{2}{n} & \text{if } n \text{ is odd}, \\ 0 & \text{if } n \text{ is even}. \end{cases} \]
The Fourier series is
\[ \frac{\pi}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{2}{n} \sin(nt) = \frac{\pi}{2} + \sum_{k=1}^{\infty} \frac{2}{2k-1} \sin\bigl( (2k-1) t \bigr) . \]
Let us write out the first 3 harmonics of the series for $f(t)$:
\[ \frac{\pi}{2} + 2 \sin(t) + \frac{2}{3} \sin(3t) + \cdots \]
The plot of these first three and also of the first 20 terms of the series is given in Figure 4.7
on the facing page.
We have so far skirted the issue of convergence. For example, if $f(t)$ is the square wave function, the equation
\[ f(t) = \frac{\pi}{2} + \sum_{k=1}^{\infty} \frac{2}{2k-1} \sin\bigl( (2k-1) t \bigr) \]
is only an equality for such $t$ where $f(t)$ is continuous. We do not get an equality for $t = -\pi, 0, \pi$ and all the other discontinuities of $f(t)$. It is not hard to see that when $t$ is an integer multiple of $\pi$ (which gives all the discontinuities), then
\[ \frac{\pi}{2} + \sum_{k=1}^{\infty} \frac{2}{2k-1} \sin\bigl( (2k-1) t \bigr) = \frac{\pi}{2} . \]
Figure 4.7: First 3 (left graph) and 20 (right graph) harmonics of the square wave function.
We redefine $f(t)$ on $[-\pi, \pi]$ as
\[ f(t) = \begin{cases} 0 & \text{if } -\pi < t < 0, \\ \pi & \text{if } 0 < t < \pi, \\ \pi/2 & \text{if } t = -\pi, \ t = 0, \text{ or } t = \pi, \end{cases} \]
and extend periodically. The series equals this new extended $f(t)$ everywhere, including the discontinuities. We will generally not worry about changing the function values at several (finitely many) points.
We will say more about convergence in the next section. Let us, however, briefly mention an effect of the discontinuity. Zoom in near the discontinuity in the square wave. Further, plot the first 100 harmonics, see Figure 4.8 on the next page. While the series is a very good approximation away from the discontinuities, the error (the overshoot) near the discontinuity at $t = \pi$ does not seem to be getting any smaller as we take more and more harmonics. This behavior is known as the Gibbs phenomenon. The region where the error is large does, however, get smaller the more terms in the series we take.
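One can watch the overshoot refuse to shrink numerically. A small sketch (ours; the grid and the term counts are arbitrary choices) measures the maximum of the square wave partial sums just to the right of the jump at $t = 0$:

```python
import numpy as np

def square_partial(t, K):
    """Partial sum pi/2 + sum_{k=1}^{K} (2/(2k-1)) sin((2k-1) t) of the square wave."""
    s = np.pi / 2 * np.ones_like(t)
    for k in range(1, K + 1):
        s += 2.0 / (2 * k - 1) * np.sin((2 * k - 1) * t)
    return s

t = np.linspace(1e-4, 0.5, 5000)   # just to the right of the jump at t = 0
for K in (10, 100, 1000):
    # The maximum stays near 3.42, about 9 percent above the jump value pi.
    print(K, square_partial(t, K).max())
```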
We can think of a periodic function as a “signal” being a superposition of many signals
of pure frequency. For example, we could think of the square wave as a tone of certain base
frequency. This base frequency is called the fundamental frequency. The square wave will
be a superposition of many different pure tones of frequencies that are multiples of the
fundamental frequency. In music, the higher frequencies are called the overtones. All the
frequencies that appear are called the spectrum of the signal. On the other hand a simple
sine wave is only the pure tone (no overtones). The simplest way to make sound using a
computer is the square wave, and the sound is very different from a pure tone. If you ever
played video games from the 1980s or so, then you heard what square waves sound like.
Figure 4.8: Gibbs phenomenon in action.
4.2.4 Exercises
Exercise 4.2.3: Suppose $f(t)$ is defined on $[-\pi, \pi]$ as $\sin(5t) + \cos(3t)$. Extend periodically and compute the Fourier series of $f(t)$.

Exercise 4.2.4: Suppose $f(t)$ is defined on $[-\pi, \pi]$ as $|t|$. Extend periodically and compute the Fourier series of $f(t)$.

Exercise 4.2.5: Suppose $f(t)$ is defined on $[-\pi, \pi]$ as $|t|^3$. Extend periodically and compute the Fourier series of $f(t)$.

Exercise 4.2.6: Suppose $f(t)$ is defined on $(-\pi, \pi]$ as
\[ f(t) = \begin{cases} -1 & \text{if } -\pi < t \leq 0, \\ 1 & \text{if } 0 < t \leq \pi. \end{cases} \]
Extend periodically and compute the Fourier series of $f(t)$.

Exercise 4.2.7: Suppose $f(t)$ is defined on $(-\pi, \pi]$ as $t^3$. Extend periodically and compute the Fourier series of $f(t)$.

Exercise 4.2.8: Suppose $f(t)$ is defined on $[-\pi, \pi]$ as $t^2$. Extend periodically and compute the Fourier series of $f(t)$.
There is another form of the Fourier series using complex exponentials $e^{int}$ for $n = \ldots, -2, -1, 0, 1, 2, \ldots$ instead of $\cos(nt)$ and $\sin(nt)$ for positive $n$. This form may be easier to work with sometimes. It is certainly more compact to write, and there is only one formula for the coefficients. On the downside, the coefficients are complex numbers.
Exercise 4.2.9: Let
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(nt) + b_n \sin(nt) . \]
Use Euler's formula $e^{i\theta} = \cos(\theta) + i \sin(\theta)$ to show that there exist complex numbers $c_m$ such that
\[ f(t) = \sum_{m=-\infty}^{\infty} c_m e^{imt} . \]
Note that the sum now ranges over all the integers including negative ones. Do not worry about convergence in this calculation. Hint: It may be better to start from the complex exponential form and write the series as
\[ c_0 + \sum_{m=1}^{\infty} \bigl( c_m e^{imt} + c_{-m} e^{-imt} \bigr) . \]
Exercise 4.2.101: Suppose $f(t)$ is defined on $[-\pi, \pi]$ as $f(t) = \sin(t)$. Extend periodically and compute the Fourier series.

Exercise 4.2.102: Suppose $f(t)$ is defined on $(-\pi, \pi]$ as $f(t) = \sin(\pi t)$. Extend periodically and compute the Fourier series.

Exercise 4.2.103: Suppose $f(t)$ is defined on $(-\pi, \pi]$ as $f(t) = \sin^2(t)$. Extend periodically and compute the Fourier series.

Exercise 4.2.104: Suppose $f(t)$ is defined on $(-\pi, \pi]$ as $f(t) = t^4$. Extend periodically and compute the Fourier series.
4.3 More on the Fourier series
Note: 2 lectures, §9.2–§9.3 in [EP], §10.3 in [BD]
4.3.1 $2L$-periodic functions
We have computed the Fourier series for a $2\pi$-periodic function, but what about functions of different periods? Well, fear not, the computation is a simple case of change of variables. We just rescale the independent axis. Suppose we have a $2L$-periodic function $f(t)$. Then $L$ is called the half period. Let $s = \frac{\pi}{L} t$. Then the function
\[ g(s) = f\left( \frac{L}{\pi} s \right) \]
is $2\pi$-periodic. We must also rescale all our sines and cosines. In the series we use $\frac{\pi}{L} t$ as the variable. That is, we want to write
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left( \frac{n\pi}{L} t \right) + b_n \sin\left( \frac{n\pi}{L} t \right) . \]
If we change variables to $s$, we see that
\[ g(s) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(ns) + b_n \sin(ns) . \]
We compute $a_n$ and $b_n$ as before. After we write down the integrals, we change variables from $s$ back to $t$, noting also that $ds = \frac{\pi}{L} \, dt$.
\[ a_0 = \frac{1}{\pi} \int_{-\pi}^{\pi} g(s) \, ds = \frac{1}{L} \int_{-L}^{L} f(t) \, dt, \]
\[ a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} g(s) \cos(ns) \, ds = \frac{1}{L} \int_{-L}^{L} f(t) \cos\left( \frac{n\pi}{L} t \right) dt, \]
\[ b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} g(s) \sin(ns) \, ds = \frac{1}{L} \int_{-L}^{L} f(t) \sin\left( \frac{n\pi}{L} t \right) dt . \]
The two most common half periods that show up in examples are $\pi$ and 1 because of the simplicity of the formulas. We should stress that we have done no new mathematics, we have only changed variables. If you understand the Fourier series for $2\pi$-periodic functions, you understand it for $2L$-periodic functions. You can think of it as just using different units for time. All that we are doing is moving some constants around, but all the mathematics is the same.
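In code the change of variables is just as mechanical. A sketch (ours) of the $2L$-periodic coefficient formulas, checked below against Example 4.3.1:

```python
import numpy as np

def fourier_coeffs_2L(f, L, N, samples=20001):
    """Approximate a_n, b_n of a 2L-periodic f from its values on [-L, L]."""
    t = np.linspace(-L, L, samples)
    a = [np.trapz(f(t) * np.cos(n * np.pi * t / L), t) / L for n in range(N + 1)]
    b = [np.trapz(f(t) * np.sin(n * np.pi * t / L), t) / L for n in range(N + 1)]
    return np.array(a), np.array(b)

# f(t) = |t| on [-1, 1]: expect a_0 = 1 and a_n = -4/(n^2 pi^2) for odd n.
a, b = fourier_coeffs_2L(np.abs, L=1, N=3)
print(np.round(a, 4))   # roughly [ 1. -0.4053 0. -0.045 ]
```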
Example 4.3.1: Let
\[ f(t) = |t| \quad \text{for } -1 < t \leq 1, \]
extended periodically. The plot of the periodic extension is given in Figure 4.9. Compute the Fourier series of $f(t)$.
Figure 4.9: Periodic extension of the function $f(t)$.
We want to write $f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(n\pi t) + b_n \sin(n\pi t)$. For $n \geq 1$ we note that $|t| \cos(n\pi t)$ is even and hence
\[ a_n = \int_{-1}^{1} f(t) \cos(n\pi t) \, dt = 2 \int_{0}^{1} t \cos(n\pi t) \, dt \]
\[ = 2 \left[ \frac{t}{n\pi} \sin(n\pi t) \right]_{t=0}^{1} - 2 \int_{0}^{1} \frac{1}{n\pi} \sin(n\pi t) \, dt = 0 + \frac{1}{n^2 \pi^2} \Bigl[ 2 \cos(n\pi t) \Bigr]_{t=0}^{1} = \frac{2 \bigl( (-1)^n - 1 \bigr)}{n^2 \pi^2} = \begin{cases} 0 & \text{if } n \text{ is even}, \\ \frac{-4}{n^2 \pi^2} & \text{if } n \text{ is odd}. \end{cases} \]
Next we find $a_0$:
\[ a_0 = \int_{-1}^{1} |t| \, dt = 1 . \]
You should be able to find this integral by thinking about the integral as the area under the graph without doing any computation at all. Finally we can find $b_n$. Here, we notice that $|t| \sin(n\pi t)$ is odd and, therefore,
\[ b_n = \int_{-1}^{1} f(t) \sin(n\pi t) \, dt = 0 . \]
Hence, the series is
\[ \frac{1}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{-4}{n^2 \pi^2} \cos(n\pi t) . \]
Let us explicitly write down the first few terms of the series up to the 3rd harmonic:
\[ \frac{1}{2} - \frac{4}{\pi^2} \cos(\pi t) - \frac{4}{9\pi^2} \cos(3\pi t) - \cdots \]
The plot of these few terms and also a plot up to the 20th harmonic is given in Figure 4.10.
You should notice how close the graph is to the real function. You should also notice that
there is no “Gibbs phenomenon” present as there are no discontinuities.
Figure 4.10: Fourier series of $f(t)$ up to the 3rd harmonic (left graph) and up to the 20th harmonic (right graph).
4.3.2 Convergence
We will need the one sided limits of functions. We will use the following notation
\[ f(c-) = \lim_{t \uparrow c} f(t), \quad \text{and} \quad f(c+) = \lim_{t \downarrow c} f(t) . \]
If you are unfamiliar with this notation, $\lim_{t \uparrow c} f(t)$ means we are taking a limit of $f(t)$ as $t$ approaches $c$ from below (i.e. $t < c$) and $\lim_{t \downarrow c} f(t)$ means we are taking a limit of $f(t)$ as $t$ approaches $c$ from above (i.e. $t > c$). For example, for the square wave function
\[ f(t) = \begin{cases} 0 & \text{if } -\pi < t \leq 0, \\ \pi & \text{if } 0 < t \leq \pi, \end{cases} \tag{4.8} \]
we have $f(0-) = 0$ and $f(0+) = \pi$.
Let $f(t)$ be a function defined on an interval $[a, b]$. Suppose that we find finitely many points $a = t_0, t_1, t_2, \ldots, t_k = b$ in the interval, such that $f(t)$ is continuous on the intervals $(t_0, t_1), (t_1, t_2), \ldots, (t_{k-1}, t_k)$. Also suppose that all the one sided limits exist, that is, all of $f(t_0+), f(t_1-), f(t_1+), f(t_2-), f(t_2+), \ldots, f(t_k-)$ exist and are finite. Then we say $f(t)$ is piecewise continuous.

If moreover, $f(t)$ is differentiable at all but finitely many points, and $f'(t)$ is piecewise continuous, then $f(t)$ is said to be piecewise smooth.
Example 4.3.2: The square wave function (4.8) is piecewise smooth on $[-\pi, \pi]$ or any other interval. In such a case we simply say that the function is piecewise smooth.

Example 4.3.3: The function $f(t) = |t|$ is piecewise smooth.

Example 4.3.4: The function $f(t) = \frac{1}{t}$ is not piecewise smooth on $[-1, 1]$ (or any other interval containing zero). In fact, it is not even piecewise continuous.

Example 4.3.5: The function $f(t) = \sqrt[3]{t}$ is not piecewise smooth on $[-1, 1]$ (or any other interval containing zero). $f(t)$ is continuous, but the derivative of $f(t)$ is unbounded near zero and hence not piecewise continuous.
Piecewise smooth functions have an easy answer on the convergence of the Fourier
series.
Theorem 4.3.1. Suppose $f(t)$ is a $2L$-periodic piecewise smooth function. Let
\[ \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left( \frac{n\pi}{L} t \right) + b_n \sin\left( \frac{n\pi}{L} t \right) \]
be the Fourier series for $f(t)$. Then the series converges for all $t$. If $f(t)$ is continuous at $t$, then
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left( \frac{n\pi}{L} t \right) + b_n \sin\left( \frac{n\pi}{L} t \right) . \]
Otherwise,
\[ \frac{f(t-) + f(t+)}{2} = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left( \frac{n\pi}{L} t \right) + b_n \sin\left( \frac{n\pi}{L} t \right) . \]

If we happen to have that $f(t) = \frac{f(t-) + f(t+)}{2}$ at all the discontinuities, the Fourier series converges to $f(t)$ everywhere. We can always just redefine $f(t)$ by changing the value at each discontinuity appropriately. Then we can write an equals sign between $f(t)$ and the series without any worry. We mentioned this fact briefly at the end of the last section.
The theorem does not say how fast the series converges. Think back to the discussion
of the Gibbs phenomenon in the last section. The closer you get to the discontinuity, the
more terms you need to take to get an accurate approximation to the function.
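The convergence statement is easy to observe numerically. A sketch (ours) with the sawtooth series of § 4.2: at a point of continuity the partial sums creep toward $f(t) = t$, while at the jump $t = \pi$ every sine term vanishes and the sum is exactly the average $\frac{\pi + (-\pi)}{2} = 0$:

```python
import numpy as np

def sawtooth_partial(t, N):
    """Partial sum of the sawtooth series sum_{n=1}^{N} 2*(-1)**(n+1)/n * sin(n t)."""
    return sum(2 * (-1) ** (n + 1) / n * np.sin(n * t) for n in range(1, N + 1))

for N in (10, 100, 1000):
    # First value approaches f(1) = 1; second stays at 0, the jump midpoint.
    print(N, sawtooth_partial(1.0, N), sawtooth_partial(np.pi, N))
```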
4.3.3 Differentiation and integration of Fourier series
Not only does the Fourier series converge nicely, but it is easy to differentiate and integrate the series. We can do this just by differentiating or integrating term by term.
Theorem 4.3.2. Suppose
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left( \frac{n\pi}{L} t \right) + b_n \sin\left( \frac{n\pi}{L} t \right) \]
is a piecewise smooth continuous function and the derivative $f'(t)$ is piecewise smooth. Then the derivative can be obtained by differentiating term by term,
\[ f'(t) = \sum_{n=1}^{\infty} \frac{-a_n n \pi}{L} \sin\left( \frac{n\pi}{L} t \right) + \frac{b_n n \pi}{L} \cos\left( \frac{n\pi}{L} t \right) . \]
It is important that the function is continuous. It can have corners, but no jumps.
Otherwise, the differentiated series will fail to converge. For an exercise, take the series
obtained for the square wave and try to differentiate the series. Similarly, we can also
integrate a Fourier series.
Theorem 4.3.3. Suppose
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left( \frac{n\pi}{L} t \right) + b_n \sin\left( \frac{n\pi}{L} t \right) \]
is a piecewise smooth function. Then the antiderivative is obtained by antidifferentiating term by term and so
\[ F(t) = \frac{a_0 t}{2} + C + \sum_{n=1}^{\infty} \frac{a_n L}{n\pi} \sin\left( \frac{n\pi}{L} t \right) + \frac{-b_n L}{n\pi} \cos\left( \frac{n\pi}{L} t \right) , \]
where $F'(t) = f(t)$ and $C$ is an arbitrary constant.

Note that the series for $F(t)$ is no longer a Fourier series as it contains the $\frac{a_0 t}{2}$ term. The antiderivative of a periodic function need no longer be periodic and so we should not expect a Fourier series.
4.3.4 Rates of convergence and smoothness
Let us do an example of a periodic function with one derivative everywhere.
Example 4.3.6: Take the function
\[ f(t) = \begin{cases} (t+1)\, t & \text{if } -1 < t \leq 0, \\ (1-t)\, t & \text{if } 0 < t \leq 1, \end{cases} \]
and extend to a 2-periodic function. The plot is given in Figure 4.11 on the facing page. This function has one derivative everywhere, but it does not have a second derivative whenever $t$ is an integer.
Figure 4.11: Smooth 2-periodic function.
Exercise 4.3.1: Compute $f''(0+)$ and $f''(0-)$.
Let us compute the Fourier series coefficients. The actual computation involves several integrations by parts and is left to the student.
\[ a_0 = \int_{-1}^{1} f(t) \, dt = \int_{-1}^{0} (t+1)\, t \, dt + \int_{0}^{1} (1-t)\, t \, dt = 0, \]
\[ a_n = \int_{-1}^{1} f(t) \cos(n\pi t) \, dt = \int_{-1}^{0} (t+1)\, t \cos(n\pi t) \, dt + \int_{0}^{1} (1-t)\, t \cos(n\pi t) \, dt = 0, \]
\[ b_n = \int_{-1}^{1} f(t) \sin(n\pi t) \, dt = \int_{-1}^{0} (t+1)\, t \sin(n\pi t) \, dt + \int_{0}^{1} (1-t)\, t \sin(n\pi t) \, dt \]
\[ = \frac{4 \bigl( 1 - (-1)^n \bigr)}{\pi^3 n^3} = \begin{cases} \frac{8}{\pi^3 n^3} & \text{if } n \text{ is odd}, \\ 0 & \text{if } n \text{ is even}. \end{cases} \]
That is, the series is
\[ \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{8}{\pi^3 n^3} \sin(n\pi t) . \]
This series converges very fast. If you plot up to the third harmonic, that is the function
\[ \frac{8}{\pi^3} \sin(\pi t) + \frac{8}{27 \pi^3} \sin(3\pi t) , \]
it is almost indistinguishable from the plot of $f(t)$ in Figure 4.11. In fact, the coefficient $\frac{8}{27\pi^3}$ is already just $0.0096$ (approximately). The reason for this behavior is the $n^3$ term in the denominator. The coefficients $b_n$ in this case go to zero as fast as $\frac{1}{n^3}$ goes to zero.
For functions constructed piecewise from polynomials as above, it is generally true that if you have one derivative, the Fourier coefficients will go to zero approximately like $\frac{1}{n^3}$. If you have only a continuous function, then the Fourier coefficients will go to zero as $\frac{1}{n^2}$. If you have discontinuities, then the Fourier coefficients will go to zero approximately as $\frac{1}{n}$. For more general functions the story is somewhat more complicated but the same idea holds: the more derivatives you have, the faster the coefficients go to zero. Similar reasoning works in reverse. If the coefficients go to zero like $\frac{1}{n^2}$, you always obtain a continuous function. If they go to zero like $\frac{1}{n^3}$, you obtain an everywhere differentiable function.

To justify this behavior, take for example the function defined by the Fourier series
\[ f(t) = \sum_{n=1}^{\infty} \frac{1}{n^3} \sin(nt) . \]
When we differentiate term by term we notice
\[ f'(t) = \sum_{n=1}^{\infty} \frac{1}{n^2} \cos(nt) . \]
Therefore, the coefficients now go down like $\frac{1}{n^2}$, which means that we have a continuous function. The derivative of $f'(t)$ is defined at most points, but there are points where $f'(t)$ is not differentiable. It has corners, but no jumps. If we differentiate again (where we can), we find that the function $f''(t)$ now fails to be continuous (has jumps):
\[ f''(t) = \sum_{n=1}^{\infty} \frac{-1}{n} \sin(nt) . \]
This function is similar to the sawtooth. If we tried to differentiate the series again, we would obtain
\[ \sum_{n=1}^{\infty} - \cos(nt), \]
which does not converge!
Exercise 4.3.2: Use a computer to plot the series we obtained for $f(t)$, $f'(t)$ and $f''(t)$. That is, plot say the first 5 harmonics of the functions. At what points does $f''(t)$ have the discontinuities?
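A possible starting point for this exercise is the following sketch (ours; matplotlib is assumed to be available, and the grid is an arbitrary choice):

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-2 * np.pi, 2 * np.pi, 2000)
N = 5                                                       # first 5 harmonics

f   = sum(np.sin(n * t) / n**3 for n in range(1, N + 1))
fp  = sum(np.cos(n * t) / n**2 for n in range(1, N + 1))    # term-by-term derivative
fpp = sum(-np.sin(n * t) / n for n in range(1, N + 1))      # differentiated once more

for y, label in ((f, "f"), (fp, "f'"), (fpp, "f''")):
    plt.plot(t, y, label=label)
plt.legend()
plt.show()
```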
4.3.5 Exercises
Exercise 4.3.3: Let
\[ f(t) = \begin{cases} 0 & \text{if } -1 < t \leq 0, \\ t & \text{if } 0 < t \leq 1, \end{cases} \]
extended periodically.

a) Compute the Fourier series for $f(t)$.

b) Write out the series explicitly up to the 3rd harmonic.
Exercise 4.3.4: Let
\[ f(t) = \begin{cases} -t & \text{if } -1 < t \leq 0, \\ t^2 & \text{if } 0 < t \leq 1, \end{cases} \]
extended periodically.

a) Compute the Fourier series for $f(t)$.

b) Write out the series explicitly up to the 3rd harmonic.

Exercise 4.3.5: Let
\[ f(t) = \begin{cases} \frac{-t}{10} & \text{if } -10 < t \leq 0, \\ \frac{t}{10} & \text{if } 0 < t \leq 10, \end{cases} \]
extended periodically (period is 20).

a) Compute the Fourier series for $f(t)$.

b) Write out the series explicitly up to the 3rd harmonic.
Exercise 4.3.6: Let $f(t) = \sum_{n=1}^{\infty} \frac{1}{n^3} \cos(nt)$. Is $f(t)$ continuous and differentiable everywhere? Find the derivative (if it exists everywhere) or justify why $f(t)$ is not differentiable everywhere.

Exercise 4.3.7: Let $f(t) = \sum_{n=1}^{\infty} \frac{(-1)^n}{n} \sin(nt)$. Is $f(t)$ differentiable everywhere? Find the derivative (if it exists everywhere) or justify why $f(t)$ is not differentiable everywhere.
Exercise 4.3.8: Let
\[ f(t) = \begin{cases} 0 & \text{if } -2 < t \leq 0, \\ t & \text{if } 0 < t \leq 1, \\ -t + 2 & \text{if } 1 < t \leq 2, \end{cases} \]
extended periodically.

a) Compute the Fourier series for $f(t)$.

b) Write out the series explicitly up to the 3rd harmonic.

Exercise 4.3.9: Let
\[ f(t) = e^t \quad \text{for } -1 < t \leq 1 \]
extended periodically.

a) Compute the Fourier series for $f(t)$.

b) Write out the series explicitly up to the 3rd harmonic.

c) What does the series converge to at $t = 1$?
Exercise 4.3.10: Let
\[ f(t) = t^2 \quad \text{for } -1 < t \leq 1 \]
extended periodically.

a) Compute the Fourier series for $f(t)$.

b) By plugging in $t = 0$, evaluate
\[ \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^2} = 1 - \frac{1}{4} + \frac{1}{9} - \cdots . \]

c) Now evaluate
\[ \sum_{n=1}^{\infty} \frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \cdots . \]
Exercise 4.3.11: Let
\[ f(t) = \begin{cases} 0 & \text{if } -3 < t \leq 0, \\ t & \text{if } 0 < t \leq 3, \end{cases} \]
extended periodically. Suppose $F(t)$ is the function given by the Fourier series of $f$. Without computing the Fourier series evaluate

a) $F(2)$  b) $F(-2)$  c) $F(4)$  d) $F(-4)$  e) $F(3)$  f) $F(-9)$
Exercise 4.3.101: Let
\[ f(t) = t^2 \quad \text{for } -2 < t \leq 2 \]
extended periodically.

a) Compute the Fourier series for $f(t)$.

b) Write out the series explicitly up to the 3rd harmonic.

Exercise 4.3.102: Let
\[ f(t) = t \quad \text{for } -\lambda < t \leq \lambda \text{ (for some } \lambda > 0\text{)} \]
extended periodically.

a) Compute the Fourier series for $f(t)$.

b) Write out the series explicitly up to the 3rd harmonic.

Exercise 4.3.103: Let
\[ f(t) = \frac{1}{2} + \sum_{n=1}^{\infty} \frac{1}{n (n^2 + 1)} \sin(n\pi t) . \]
Compute $f'(t)$.
Exercise 4.3.104: Let
\[ f(t) = \frac{1}{2} + \sum_{n=1}^{\infty} \frac{1}{n^3} \cos(nt) . \]

a) Find the antiderivative.  b) Is the antiderivative periodic?

Exercise 4.3.105: Let
\[ f(t) = t/2 \quad \text{for } -\pi < t < \pi \]
extended periodically.

a) Compute the Fourier series for $f(t)$.

b) Plug in $t = \pi/2$ to find a series representation for $\pi/4$.

c) Using the first 4 terms of the result from part b) approximate $\pi/4$.

Exercise 4.3.106: Let
\[ f(t) = \begin{cases} 0 & \text{if } -2 < t \leq 0, \\ 2 & \text{if } 0 < t \leq 2, \end{cases} \]
extended periodically. Suppose $F(t)$ is the function given by the Fourier series of $f$. Without computing the Fourier series evaluate

a) $F(0)$  b) $F(-1)$  c) $F(1)$  d) $F(-2)$  e) $F(4)$  f) $F(-8)$
4.4 Sine and cosine series
Note: 2 lectures, §9.3 in [EP], §10.4 in [BD]
4.4.1 Odd and even periodic functions
You may have noticed by now that an odd function has no cosine terms in the Fourier series and an even function has no sine terms in the Fourier series. This observation is not a coincidence. Let us look at even and odd periodic functions in more detail.

Recall that a function $f(t)$ is odd if $f(-t) = -f(t)$. A function $f(t)$ is even if $f(-t) = f(t)$. For example, $\cos(nt)$ is even and $\sin(nt)$ is odd. Similarly the function $t^k$ is even if $k$ is even and odd when $k$ is odd.
Exercise 4.4.1: Take two functions $f(t)$ and $g(t)$ and define their product $h(t) = f(t) g(t)$.

a) Suppose both $f(t)$ and $g(t)$ are odd. Is $h(t)$ odd or even?

b) Suppose one is even and one is odd. Is $h(t)$ odd or even?

c) Suppose both are even. Is $h(t)$ odd or even?
If $f(t)$ and $g(t)$ are both odd, then $f(t) + g(t)$ is odd. Similarly for even functions. On the other hand, if $f(t)$ is odd and $g(t)$ even, then we cannot say anything about the sum $f(t) + g(t)$. In fact, the Fourier series of any function is a sum of an odd (the sine terms) and an even (the cosine terms) function.

In this section we consider odd and even periodic functions. We have previously defined the $2L$-periodic extension of a function defined on the interval $[-L, L]$. Sometimes we are only interested in the function on the range $[0, L]$ and it would be convenient to have an odd (resp. even) function. If the function is odd (resp. even), all the cosine (resp. sine) terms disappear. What we will do is take the odd (resp. even) extension of the function to $[-L, L]$ and then extend periodically to a $2L$-periodic function.
Take a function $f(t)$ defined on $[0, L]$. On $(-L, L]$ define the functions
\[ F_{\text{odd}}(t) \overset{\text{def}}{=} \begin{cases} f(t) & \text{if } 0 \leq t \leq L, \\ -f(-t) & \text{if } -L < t < 0, \end{cases} \qquad F_{\text{even}}(t) \overset{\text{def}}{=} \begin{cases} f(t) & \text{if } 0 \leq t \leq L, \\ f(-t) & \text{if } -L < t < 0. \end{cases} \]
Extend $F_{\text{odd}}(t)$ and $F_{\text{even}}(t)$ to be $2L$-periodic. Then $F_{\text{odd}}(t)$ is called the odd periodic extension of $f(t)$, and $F_{\text{even}}(t)$ is called the even periodic extension of $f(t)$. For the odd extension we generally assume that $f(0) = f(L) = 0$.
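In code the two extensions differ only by a sign on the negative half. A sketch (ours; the helper names are hypothetical):

```python
import numpy as np

def periodic_ext(f, L, sign):
    """2L-periodic extension of f from [0, L]; sign = -1 gives odd, +1 gives even."""
    def F(t):
        t = ((np.asarray(t) + L) % (2 * L)) - L    # reduce into [-L, L)
        return np.where(t >= 0, f(t), sign * f(-t))
    return F

f = lambda t: t * (1 - t)                          # Example 4.4.1 below, on [0, 1]
F_odd, F_even = periodic_ext(f, 1, -1), periodic_ext(f, 1, +1)
print(F_odd(-0.25), F_even(-0.25))                 # -0.1875 and 0.1875
```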
Exercise 4.4.2: Check that $F_{\text{odd}}(t)$ is odd and $F_{\text{even}}(t)$ is even. For $F_{\text{odd}}$, assume $f(0) = f(L) = 0$.

Example 4.4.1: Take the function $f(t) = t(1 - t)$ defined on $[0, 1]$. Figure 4.12 on the facing page shows the plots of the odd and even periodic extensions of $f(t)$.
Figure 4.12: Odd and even 2-periodic extension of $f(t) = t(1-t)$, $0 \leq t \leq 1$.
4.4.2 Sine and cosine series
Let $f(t)$ be an odd $2L$-periodic function. We write the Fourier series for $f(t)$. First, we compute the coefficients $a_n$ (including $n = 0$) and get
\[ a_n = \frac{1}{L} \int_{-L}^{L} f(t) \cos\left( \frac{n\pi}{L} t \right) dt = 0 . \]
That is, there are no cosine terms in the Fourier series of an odd function. The integral is zero because $f(t) \cos\left( \frac{n\pi}{L} t \right)$ is an odd function (product of an odd and an even function is odd) and the integral of an odd function over a symmetric interval is always zero. The integral of an even function over a symmetric interval $[-L, L]$ is twice the integral of the function over the interval $[0, L]$. The function $f(t) \sin\left( \frac{n\pi}{L} t \right)$ is the product of two odd functions and hence is even.
\[ b_n = \frac{1}{L} \int_{-L}^{L} f(t) \sin\left( \frac{n\pi}{L} t \right) dt = \frac{2}{L} \int_{0}^{L} f(t) \sin\left( \frac{n\pi}{L} t \right) dt . \]
We now write the Fourier series of $f(t)$ as
\[ \sum_{n=1}^{\infty} b_n \sin\left( \frac{n\pi}{L} t \right) . \]
Similarly, if $f(t)$ is an even $2L$-periodic function, then for the same exact reasons as above, we find that $b_n = 0$ and
\[ a_n = \frac{2}{L} \int_{0}^{L} f(t) \cos\left( \frac{n\pi}{L} t \right) dt . \]
The formula still works for $n = 0$, in which case it becomes
\[ a_0 = \frac{2}{L} \int_{0}^{L} f(t) \, dt . \]
The Fourier series is then
\[ \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left( \frac{n\pi}{L} t \right) . \]
An interesting consequence is that the coefficients of the Fourier series of an odd (or even) function can be computed by just integrating over the half interval $[0, L]$. Therefore, we can compute the Fourier series of the odd (or even) extension of a function by computing certain integrals over the interval where the original function is defined.
Theorem 4.4.1. Let $f(t)$ be a piecewise smooth function defined on $[0, L]$. Then the odd periodic extension of $f(t)$ has the Fourier series
\[ F_{\text{odd}}(t) = \sum_{n=1}^{\infty} b_n \sin\left( \frac{n\pi}{L} t \right) , \quad \text{where} \quad b_n = \frac{2}{L} \int_{0}^{L} f(t) \sin\left( \frac{n\pi}{L} t \right) dt . \]
The even periodic extension of $f(t)$ has the Fourier series
\[ F_{\text{even}}(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left( \frac{n\pi}{L} t \right) , \quad \text{where} \quad a_n = \frac{2}{L} \int_{0}^{L} f(t) \cos\left( \frac{n\pi}{L} t \right) dt . \]

We call the series $\sum_{n=1}^{\infty} b_n \sin\left( \frac{n\pi}{L} t \right)$ the sine series of $f(t)$ and we call the series $\frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left( \frac{n\pi}{L} t \right)$ the cosine series of $f(t)$. We often do not actually care what happens outside of $[0, L]$. In this case, we pick whichever series fits our problem better.
It is not necessary to start with the full Fourier series to obtain the sine and cosine series. The sine series is really the eigenfunction expansion of $f(t)$ using eigenfunctions of the eigenvalue problem $x'' + \lambda x = 0$, $x(0) = 0$, $x(L) = 0$. The cosine series is the eigenfunction expansion of $f(t)$ using eigenfunctions of the eigenvalue problem $x'' + \lambda x = 0$, $x'(0) = 0$, $x'(L) = 0$. We could have, therefore, gotten the same formulas by defining the inner product
\[ \langle f(t), g(t) \rangle = \int_{0}^{L} f(t) g(t) \, dt, \]
and following the procedure of § 4.2. This point of view is useful, as we commonly use a specific series that arose because our underlying question led to a certain eigenvalue problem. If the eigenvalue problem is not one of the three we covered so far, you can still
do an eigenfunction expansion, generalizing the results of this chapter. We will deal with
such a generalization in chapter 5.
Example 4.4.2: Find the Fourier series of the even periodic extension of the function $f(t) = t^2$ for $0 \leq t \leq \pi$.

We want to write
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(nt), \]
where
\[ a_0 = \frac{2}{\pi} \int_{0}^{\pi} t^2 \, dt = \frac{2\pi^2}{3} , \]
and
\[ a_n = \frac{2}{\pi} \int_{0}^{\pi} t^2 \cos(nt) \, dt = \frac{2}{\pi} \left[ t^2 \frac{\sin(nt)}{n} \right]_{0}^{\pi} - \frac{4}{n\pi} \int_{0}^{\pi} t \sin(nt) \, dt = \frac{4}{n^2 \pi} \Bigl[ t \cos(nt) \Bigr]_{0}^{\pi} - \frac{4}{n^2 \pi} \int_{0}^{\pi} \cos(nt) \, dt = \frac{4 \, (-1)^n}{n^2} . \]
Note that we have “detected” the continuity of the extension since the coefficients decay as $\frac{1}{n^2}$. That is, the even periodic extension of $t^2$ has no jump discontinuities. It does have corners, since the derivative, which is an odd function and a sine series, has jumps; it has a Fourier series whose coefficients decay only as $\frac{1}{n}$.

Explicitly, the first few terms of the series are
\[ \frac{\pi^2}{3} - 4 \cos(t) + \cos(2t) - \frac{4}{9} \cos(3t) + \cdots \]
Exercise 4.4.3:

a) Compute the derivative of the even periodic extension of $f(t)$ above and verify it has jump discontinuities. Use the actual definition of $f(t)$, not its cosine series!

b) Why is it that the derivative of the even periodic extension of $f(t)$ is the odd periodic extension of $f'(t)$?
4.4.3 Application
Fourier series ties in to the boundary value problems we studied earlier. Let us see this connection in an application.

Consider the boundary value problem for $0 < t < L$,
\[ x''(t) + \lambda \, x(t) = f(t), \]
for the Dirichlet boundary conditions $x(0) = 0$, $x(L) = 0$. The Fredholm alternative (Theorem 4.1.2 on page 194) says that as long as $\lambda$ is not an eigenvalue of the underlying homogeneous problem, there exists a unique solution. Eigenfunctions of this
eigenvalue problem are the functions $\sin\left( \frac{n\pi}{L} t \right)$. Therefore, to find the solution, we first find the Fourier sine series for $f(t)$. We write $x$ also as a sine series, but with unknown coefficients. We substitute the series for $x$ into the equation and solve for the unknown coefficients. If we have the Neumann boundary conditions $x'(0) = 0$, $x'(L) = 0$, we do the same procedure using the cosine series.

Let us see how this method works on examples.
Example 4.4.3: Take the boundary value problem for $0 < t < 1$,
\[ x''(t) + 2 x(t) = f(t), \]
where $f(t) = t$ on $0 < t < 1$, and satisfying the Dirichlet boundary conditions $x(0) = 0$, $x(1) = 0$. We write $f(t)$ as a sine series
\[ f(t) = \sum_{n=1}^{\infty} c_n \sin(n\pi t) . \]
Compute
\[ c_n = 2 \int_{0}^{1} t \sin(n\pi t) \, dt = \frac{2 \, (-1)^{n+1}}{n\pi} . \]
We write $x(t)$ as
\[ x(t) = \sum_{n=1}^{\infty} b_n \sin(n\pi t) . \]
We plug in to obtain
\[ x''(t) + 2 x(t) = \underbrace{\sum_{n=1}^{\infty} -b_n n^2 \pi^2 \sin(n\pi t)}_{x''} + 2 \underbrace{\sum_{n=1}^{\infty} b_n \sin(n\pi t)}_{x} = \sum_{n=1}^{\infty} b_n (2 - n^2 \pi^2) \sin(n\pi t) = f(t) = \sum_{n=1}^{\infty} \frac{2 \, (-1)^{n+1}}{n\pi} \sin(n\pi t) . \]
Therefore,
\[ b_n (2 - n^2 \pi^2) = \frac{2 \, (-1)^{n+1}}{n\pi} \quad \text{or} \quad b_n = \frac{2 \, (-1)^{n+1}}{n\pi (2 - n^2 \pi^2)} . \]
That $2 - n^2 \pi^2$ is not zero for any $n$, and that we can solve for $b_n$, is precisely because 2 is not an eigenvalue of the problem. We have thus obtained a Fourier series for the solution
\[ x(t) = \sum_{n=1}^{\infty} \frac{2 \, (-1)^{n+1}}{n\pi (2 - n^2 \pi^2)} \sin(n\pi t) . \]
See Figure 4.13 for a graph of the solution. Notice that because the eigenfunctions satisfy the boundary conditions, and $x$ is written in terms of the eigenfunctions, then $x$ satisfies the boundary conditions.
Figure 4.13: Plot of the solution of $x'' + 2x = t$, $x(0) = 0$, $x(1) = 0$.
Example 4.4.4: Similarly we handle the Neumann conditions. Take the boundary value problem for $0 < t < 1$,
\[ x''(t) + 2 x(t) = f(t), \]
where again $f(t) = t$ on $0 < t < 1$, but now satisfying the Neumann boundary conditions $x'(0) = 0$, $x'(1) = 0$. We write $f(t)$ as a cosine series
\[ f(t) = \frac{c_0}{2} + \sum_{n=1}^{\infty} c_n \cos(n\pi t), \]
where
\[ c_0 = 2 \int_{0}^{1} t \, dt = 1, \]
and
\[ c_n = 2 \int_{0}^{1} t \cos(n\pi t) \, dt = \frac{2 \bigl( (-1)^n - 1 \bigr)}{\pi^2 n^2} = \begin{cases} \frac{-4}{\pi^2 n^2} & \text{if } n \text{ odd}, \\ 0 & \text{if } n \text{ even}. \end{cases} \]
We write $x(t)$ as a cosine series
\[ x(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(n\pi t) . \]
We plug in to obtain
\[ x''(t) + 2 x(t) = \sum_{n=1}^{\infty} \bigl[ -a_n n^2 \pi^2 \cos(n\pi t) \bigr] + a_0 + 2 \sum_{n=1}^{\infty} \bigl[ a_n \cos(n\pi t) \bigr] = a_0 + \sum_{n=1}^{\infty} a_n (2 - n^2 \pi^2) \cos(n\pi t) \]
\[ = f(t) = \frac{1}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{-4}{\pi^2 n^2} \cos(n\pi t) . \]
Therefore, $a_0 = \frac{1}{2}$, $a_n = 0$ for $n$ even ($n \geq 2$) and for $n$ odd we have
\[ a_n (2 - n^2 \pi^2) = \frac{-4}{\pi^2 n^2} , \quad \text{or} \quad a_n = \frac{-4}{n^2 \pi^2 (2 - n^2 \pi^2)} . \]
The Fourier series for the solution $x(t)$ is
\[ x(t) = \frac{1}{4} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{-4}{n^2 \pi^2 (2 - n^2 \pi^2)} \cos(n\pi t) . \]
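The whole procedure of Examples 4.4.3 and 4.4.4 fits in a few lines. A sketch (ours) for the Dirichlet case: take the sine coefficients of $f(t) = t$, divide each by $2 - n^2\pi^2$, and sum; the boundary conditions then hold term by term:

```python
import numpy as np

def bvp_dirichlet(t, N):
    """Partial sum of the solution of x'' + 2x = t, x(0) = x(1) = 0 (Example 4.4.3)."""
    x = np.zeros_like(t)
    for n in range(1, N + 1):
        c_n = 2 * (-1) ** (n + 1) / (n * np.pi)    # sine coefficients of f(t) = t
        x += c_n / (2 - n**2 * np.pi**2) * np.sin(n * np.pi * t)
    return x

t = np.linspace(0, 1, 101)
x = bvp_dirichlet(t, N=200)
print(x[0], x[-1], x.min())   # 0 at both ends, with a dip near -0.08 as in Figure 4.13
```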
4.4.4 Exercises
Exercise 4.4.4: Take $f(t) = (t-1)^2$ defined on $0 \leq t \leq 1$.

a) Sketch the plot of the even periodic extension of $f$.

b) Sketch the plot of the odd periodic extension of $f$.

Exercise 4.4.5: Find the Fourier series of both the odd and even periodic extension of the function $f(t) = (t-1)^2$ for $0 \leq t \leq 1$. Can you tell which extension is continuous from the Fourier series coefficients?

Exercise 4.4.6: Find the Fourier series of both the odd and even periodic extension of the function $f(t) = t$ for $0 \leq t \leq \pi$.

Exercise 4.4.7: Find the Fourier series of the even periodic extension of the function $f(t) = \sin t$ for $0 \leq t \leq \pi$.
Exercise 4.4.8: Consider
\[ x''(t) + 4 x(t) = f(t), \]
where $f(t) = 1$ on $0 < t < 1$.

a) Solve for the Dirichlet conditions $x(0) = 0$, $x(1) = 0$.

b) Solve for the Neumann conditions $x'(0) = 0$, $x'(1) = 0$.

Exercise 4.4.9: Consider
\[ x''(t) + 9 x(t) = f(t), \]
for $f(t) = \sin(2\pi t)$ on $0 < t < 1$.

a) Solve for the Dirichlet conditions $x(0) = 0$, $x(1) = 0$.

b) Solve for the Neumann conditions $x'(0) = 0$, $x'(1) = 0$.

Exercise 4.4.10: Consider
\[ x''(t) + 3 x(t) = f(t), \quad x(0) = 0, \quad x(1) = 0, \]
where $f(t) = \sum_{n=1}^{\infty} b_n \sin(n\pi t)$. Write the solution $x(t)$ as a Fourier series, where the coefficients are given in terms of $b_n$.

Exercise 4.4.11: Let $f(t) = t^2 (2 - t)$ for $0 \leq t \leq 2$. Let $F(t)$ be the odd periodic extension. Compute $F(1)$, $F(2)$, $F(3)$, $F(-1)$, $F(9/2)$, $F(101)$, $F(103)$. Note: Do not compute using the sine series.
Exercise 4.4.101: Let $f(t) = t/3$ on $0 \leq t < 3$.

a) Find the Fourier series of the even periodic extension.

b) Find the Fourier series of the odd periodic extension.

Exercise 4.4.102: Let $f(t) = \cos(2t)$ on $0 \leq t < \pi$.

a) Find the Fourier series of the even periodic extension.

b) Find the Fourier series of the odd periodic extension.

Exercise 4.4.103: Let $f(t)$ be defined on $0 \leq t < 1$. Now take the average of the two extensions $g(t) = \frac{F_{\text{odd}}(t) + F_{\text{even}}(t)}{2}$.

a) What is $g(t)$ if $0 \leq t < 1$ (Justify!)  b) What is $g(t)$ if $-1 < t < 0$ (Justify!)

Exercise 4.4.104: Let $f(t) = \sum_{n=1}^{\infty} \frac{1}{n^2} \sin(nt)$. Solve $x'' - x = f(t)$ for the Dirichlet conditions $x(0) = 0$ and $x(\pi) = 0$.

Exercise 4.4.105 (challenging): Let $f(t) = t + \sum_{n=1}^{\infty} \frac{1}{2^n} \sin(nt)$. Solve $x'' + \pi x = f(t)$ for the Dirichlet conditions $x(0) = 0$ and $x(\pi) = 1$. Hint: Note that $\frac{t}{\pi}$ satisfies the given Dirichlet conditions.
4.5 Applications of Fourier series
Note: 2 lectures, §9.4 in [EP], not in [BD]
4.5.1 Periodically forced oscillation
Let us return to the forced oscillations. Consider a mass-spring system as before, where we have a mass $m$ on a spring with spring constant $k$, with damping $c$, and a force $F(t)$ applied to the mass. Suppose the forcing function $F(t)$ is $2L$-periodic for some $L > 0$. We saw this problem in chapter 2 with $F(t) = F_0 \cos(\omega t)$. The equation that governs this particular setup is
\[ m x''(t) + c x'(t) + k x(t) = F(t) . \tag{4.9} \]
The general solution of (4.9) consists of the complementary solution $x_c$, which solves the associated homogeneous equation $m x'' + c x' + k x = 0$, and a particular solution of (4.9) we call $x_p$. For $c > 0$, the complementary solution $x_c$ will decay as time goes by. Therefore, we are mostly interested in a particular solution $x_p$ that does not decay and is periodic with the same period as $F(t)$. We call this particular solution the steady periodic solution and we write it as $x_{sp}$ as before. What is new in this section is that we consider an arbitrary forcing function $F(t)$ instead of a simple cosine.
For simplicity, suppose $c = 0$. The problem with $c > 0$ is very similar. The equation
\[ m x'' + k x = 0 \]
has the general solution
\[ x(t) = A \cos(\omega_0 t) + B \sin(\omega_0 t), \]
where $\omega_0 = \sqrt{\frac{k}{m}}$. Any solution to $m x''(t) + k x(t) = F(t)$ is of the form $A \cos(\omega_0 t) + B \sin(\omega_0 t) + x_{sp}$. The steady periodic solution $x_{sp}$ has the same period as $F(t)$.
In the spirit of the last section and the idea of undetermined coefficients we first write
\[ F(t) = \frac{c_0}{2} + \sum_{n=1}^{\infty} c_n \cos\left( \frac{n\pi}{L} t \right) + d_n \sin\left( \frac{n\pi}{L} t \right) . \]
Then we write a proposed steady periodic solution $x$ as
\[ x(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left( \frac{n\pi}{L} t \right) + b_n \sin\left( \frac{n\pi}{L} t \right) , \]
where $a_n$ and $b_n$ are unknowns. We plug $x$ into the differential equation and solve for $a_n$ and $b_n$ in terms of $c_n$ and $d_n$. This process is perhaps best understood by example.
Example 4.5.1: Suppose that $k = 2$, and $m = 1$. The units are again the mks units (meters-kilograms-seconds). There is a jetpack strapped to the mass, which fires with a force of 1 newton for 1 second and then is off for 1 second, and so on. We want to find the steady periodic solution.

The equation is, therefore,
\[ x'' + 2x = F(t), \]
where $F(t)$ is the step function
\[ F(t) = \begin{cases} 0 & \text{if } -1 < t < 0, \\ 1 & \text{if } 0 < t < 1, \end{cases} \]
extended periodically. We write
\[ F(t) = \frac{c_0}{2} + \sum_{n=1}^{\infty} c_n \cos(n\pi t) + d_n \sin(n\pi t) . \]
We compute
\[ c_n = \int_{-1}^{1} F(t) \cos(n\pi t) \, dt = \int_{0}^{1} \cos(n\pi t) \, dt = 0 \quad \text{for } n \geq 1, \]
\[ c_0 = \int_{-1}^{1} F(t) \, dt = \int_{0}^{1} dt = 1, \]
\[ d_n = \int_{-1}^{1} F(t) \sin(n\pi t) \, dt = \int_{0}^{1} \sin(n\pi t) \, dt = \left[ \frac{-\cos(n\pi t)}{n\pi} \right]_{t=0}^{1} = \frac{1 - (-1)^n}{\pi n} = \begin{cases} \frac{2}{\pi n} & \text{if } n \text{ odd}, \\ 0 & \text{if } n \text{ even}. \end{cases} \]
So
\[ F(t) = \frac{1}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{2}{\pi n} \sin(n\pi t) . \]
We want to try
\[ x(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(n\pi t) + b_n \sin(n\pi t) . \]
Once we plug $x$ into the differential equation $x'' + 2x = F(t)$, it is clear that $a_n = 0$ for $n \geq 1$ as there are no corresponding terms in the series for $F(t)$. Similarly $b_n = 0$ for $n$ even.
Hence we try
\[ x(t) = \frac{a_0}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} b_n \sin(n\pi t) . \]
We plug into the differential equation and obtain
\[ x'' + 2x = \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \bigl[ -b_n n^2 \pi^2 \sin(n\pi t) \bigr] + a_0 + 2 \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \bigl[ b_n \sin(n\pi t) \bigr] = a_0 + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} b_n (2 - n^2 \pi^2) \sin(n\pi t) \]
\[ = F(t) = \frac{1}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{2}{\pi n} \sin(n\pi t) . \]
So $a_0 = \frac{1}{2}$, $b_n = 0$ for even $n$, and for odd $n$ we get
\[ b_n = \frac{2}{\pi n (2 - n^2 \pi^2)} . \]
The steady periodic solution has the Fourier series
\[ x_{sp}(t) = \frac{1}{4} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{2}{\pi n (2 - n^2 \pi^2)} \sin(n\pi t) . \]
We know this is the steady periodic solution as it contains no terms of the complementary solution and it is periodic with the same period as $F(t)$ itself. See Figure 4.14 on the next page for the plot of this solution.
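To reproduce Figure 4.14 numerically, a sketch (ours; the number of terms is arbitrary since the coefficients decay like $1/n^3$):

```python
import numpy as np

def x_sp(t, N):
    """Partial sum of x_sp = 1/4 + sum over odd n of 2/(pi n (2 - n^2 pi^2)) sin(n pi t)."""
    s = 0.25 * np.ones_like(t)
    for n in range(1, N + 1, 2):   # odd n only
        s += 2.0 / (np.pi * n * (2 - n**2 * np.pi**2)) * np.sin(n * np.pi * t)
    return s

t = np.linspace(0, 10, 1000)
x = x_sp(t, N=51)
print(x.min(), x.max())   # oscillates around 1/4 with period 2, as in Figure 4.14
```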
4.5.2 Resonance
Just as when the forcing function was a simple cosine, we may encounter resonance. Assume $c = 0$ and let us discuss only pure resonance. Let $F(t)$ be $2L$-periodic and consider
\[ m x''(t) + k x(t) = F(t) . \]
When we expand $F(t)$ and find that some of its terms coincide with the complementary solution to $m x'' + k x = 0$, we cannot use those terms in the guess. Just like before, they disappear when we plug them into the left-hand side and we get a contradictory equation (such as $0 = 1$). That is, suppose
\[ x_c = A \cos(\omega_0 t) + B \sin(\omega_0 t), \]
Figure 4.14: Plot of the steady periodic solution $x_{sp}$ of Example 4.5.1.
where $\omega_0 = \frac{N\pi}{L}$ for some positive integer $N$. We have to modify our guess and try
\[ x(t) = \frac{a_0}{2} + t \left( a_N \cos\left( \frac{N\pi}{L} t \right) + b_N \sin\left( \frac{N\pi}{L} t \right) \right) + \sum_{\substack{n=1 \\ n \neq N}}^{\infty} a_n \cos\left( \frac{n\pi}{L} t \right) + b_n \sin\left( \frac{n\pi}{L} t \right) . \]
In other words, we multiply the offending term by $t$. From then on, we proceed as before.
Of course, the solution is not a Fourier series (it is not even periodic) since it contains these terms multiplied by $t$. Further, the terms $t \left( a_N \cos\left( \frac{N\pi}{L} t \right) + b_N \sin\left( \frac{N\pi}{L} t \right) \right)$ eventually dominate and lead to wild oscillations. As before, this behavior is called pure resonance or just resonance.
Note that there now may be infinitely many resonance frequencies to hit. That is, as we change the frequency of $F$ (we change $L$), different terms from the Fourier series of $F$ may interfere with the complementary solution and cause resonance. However, we should note that since everything is an approximation and in particular $c$ is never actually zero but something very close to zero, only the first few resonance frequencies matter in real life.
Example 4.5.2: We want to solve the equation
\[ 2 x'' + 18 \pi^2 x = F(t), \tag{4.10} \]
where
\[ F(t) = \begin{cases} -1 & \text{if } -1 < t < 0, \\ 1 & \text{if } 0 < t < 1, \end{cases} \]
extended periodically. We note that
\[ F(t) = \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{4}{\pi n} \sin(n\pi t) . \]
Exercise 4.5.1: Compute the Fourier series of $F$ to verify the equation above.

As $\sqrt{\frac{k}{m}} = \sqrt{\frac{18\pi^2}{2}} = 3\pi$, the solution to (4.10) is
\[ x(t) = c_1 \cos(3\pi t) + c_2 \sin(3\pi t) + x_p(t) \]
for some particular solution $x_p$.
If we just try an $x_p$ given as a Fourier series with $\sin(n\pi t)$ as usual, the complementary equation, $2 x'' + 18 \pi^2 x = 0$, eats our 3rd harmonic. That is, the term with $\sin(3\pi t)$ is already in our complementary solution. Therefore, we pull that term out and multiply it by $t$. We also add a cosine term to get everything right. That is, we try
\[ x_p(t) = a_3 t \cos(3\pi t) + b_3 t \sin(3\pi t) + \sum_{\substack{n=1 \\ n \text{ odd} \\ n \neq 3}}^{\infty} b_n \sin(n\pi t) . \]
Let us compute the second derivative.
\[ x_p''(t) = -6 a_3 \pi \sin(3\pi t) - 9 \pi^2 a_3 t \cos(3\pi t) + 6 b_3 \pi \cos(3\pi t) - 9 \pi^2 b_3 t \sin(3\pi t) + \sum_{\substack{n=1 \\ n \text{ odd} \\ n \neq 3}}^{\infty} (-n^2 \pi^2 b_n) \sin(n\pi t) . \]
We now plug into the left-hand side of the differential equation.
\[ 2 x_p'' + 18 \pi^2 x_p = -12 a_3 \pi \sin(3\pi t) - 18 \pi^2 a_3 t \cos(3\pi t) + 12 b_3 \pi \cos(3\pi t) - 18 \pi^2 b_3 t \sin(3\pi t) \]
\[ + 18 \pi^2 a_3 t \cos(3\pi t) + 18 \pi^2 b_3 t \sin(3\pi t) + \sum_{\substack{n=1 \\ n \text{ odd} \\ n \neq 3}}^{\infty} (-2 n^2 \pi^2 b_n + 18 \pi^2 b_n) \sin(n\pi t) . \]
We simplify,
\[ 2 x_p'' + 18 \pi^2 x_p = -12 a_3 \pi \sin(3\pi t) + 12 b_3 \pi \cos(3\pi t) + \sum_{\substack{n=1 \\ n \text{ odd} \\ n \neq 3}}^{\infty} (-2 n^2 \pi^2 b_n + 18 \pi^2 b_n) \sin(n\pi t) . \]
This series has to equal the series for $F(t)$. We equate the coefficients and solve for $a_3$ and $b_n$.
\[ a_3 = \frac{4/(3\pi)}{-12\pi} = \frac{-1}{9\pi^2} , \qquad b_3 = 0, \]
\[ b_n = \frac{4}{n\pi (18\pi^2 - 2 n^2 \pi^2)} = \frac{2}{\pi^3 n (9 - n^2)} \quad \text{for } n \text{ odd and } n \neq 3. \]
That is,
\[ x_p(t) = \frac{-1}{9\pi^2} \, t \cos(3\pi t) + \sum_{\substack{n=1 \\ n \text{ odd} \\ n \neq 3}}^{\infty} \frac{2}{\pi^3 n (9 - n^2)} \sin(n\pi t) . \]
When $c > 0$, you do not have to worry about pure resonance. That is, there are never any conflicts and you do not need to multiply any terms by $t$. There is a corresponding concept of practical resonance and it is very similar to the ideas we already explored in chapter 2. Basically what happens in practical resonance is that one of the coefficients in the series for $x_{sp}$ can get very big. Let us not go into details here.
4.5.3 Exercises
Exercise 4.5.2: Let $F(t) = \frac{1}{2} + \sum_{n=1}^{\infty} \frac{1}{n^2} \cos(n\pi t)$. Find the steady periodic solution to $x'' + 2x = F(t)$. Express your solution as a Fourier series.

Exercise 4.5.3: Let $F(t) = \sum_{n=1}^{\infty} \frac{1}{n^3} \sin(n\pi t)$. Find the steady periodic solution to $x'' + x' + x = F(t)$. Express your solution as a Fourier series.

Exercise 4.5.4: Let $F(t) = \sum_{n=1}^{\infty} \frac{1}{n^2} \cos(n\pi t)$. Find the steady periodic solution to $x'' + 4x = F(t)$. Express your solution as a Fourier series.

Exercise 4.5.5: Let $F(t) = t$ for $-1 < t < 1$ and extended periodically. Find the steady periodic solution to $x'' + x = F(t)$. Express your solution as a series.

Exercise 4.5.6: Let $F(t) = t$ for $-1 < t < 1$ and extended periodically. Find the steady periodic solution to $x'' + \pi^2 x = F(t)$. Express your solution as a series.

Exercise 4.5.101: Let $F(t) = \sin(2\pi t) + 0.1 \cos(10\pi t)$. Find the steady periodic solution to $x'' + \sqrt{2}\, x = F(t)$. Express your solution as a Fourier series.

Exercise 4.5.102: Let $F(t) = \sum_{n=1}^{\infty} e^{-n} \cos(2nt)$. Find the steady periodic solution to $x'' + 3x = F(t)$. Express your solution as a Fourier series.

Exercise 4.5.103: Let $F(t) = |t|$ for $-1 \leq t \leq 1$ extended periodically. Find the steady periodic solution to $x'' + \sqrt{3}\, x = F(t)$. Express your solution as a series.

Exercise 4.5.104: Let $F(t) = |t|$ for $-1 \leq t \leq 1$ extended periodically. Find the steady periodic solution to $x'' + \pi^2 x = F(t)$. Express your solution as a series.
4.6 PDEs, separation of variables, and the heat equation
Note: 2 lectures, §9.5 in [EP], §10.5 in [BD]
Let us recall that a partial differential equation or PDE is an equation containing the partial
derivatives with respect to several independent variables. Solving PDEs will be our main
application of Fourier series.
A PDE is said to be linear if the dependent variable and its derivatives appear at most to
the first power and in no functions. We will only talk about linear PDEs. Together with a
PDE, we usually specify some boundary conditions, where the value of the solution or its
derivatives is given along the boundary of a region, and/or some initial conditions where
the value of the solution or its derivatives is given for some initial time. Sometimes such
conditions are mixed together and we will refer to them simply as side conditions.
We will study three specific partial differential equations, each one representing a
general class of equations. First, we will study the heat equation, which is an example of a
parabolic PDE. Next, we will study the wave equation, which is an example of a hyperbolic
PDE. Finally, we will study the Laplace equation, which is an example of an elliptic PDE.
Each of our examples will illustrate behavior that is typical for the whole class.
4.6.1 Heat on an insulated wire
Let us start with the heat equation. Consider a wire (or a thin metal rod) of length $L$ that is insulated except at the endpoints. Let $x$ denote the position along the wire and let $t$ denote time. See Figure 4.15.

Figure 4.15: Insulated wire.
Let $u(x, t)$ denote the temperature at point $x$ at time $t$. The equation governing this setup is the so-called one-dimensional heat equation:
\[ \frac{\partial u}{\partial t} = k \frac{\partial^2 u}{\partial x^2} , \]
where $k > 0$ is a constant (the thermal conductivity of the material). That is, the change in heat at a specific point is proportional to the second derivative of the heat along the wire.
This makes sense; if at a fixed $t$ the graph of the heat distribution has a maximum (the graph is concave down), then heat flows away from the maximum. And vice versa.

We generally use a more convenient notation for partial derivatives. We write $u_t$ instead of $\frac{\partial u}{\partial t}$, and we write $u_{xx}$ instead of $\frac{\partial^2 u}{\partial x^2}$. With this notation the heat equation becomes
\[ u_t = k u_{xx} . \]
For the heat equation, we must also have some boundary conditions. We assume that
the ends of the wire are either exposed and touching some body of constant heat, or the
ends are insulated. If the ends of the wire are kept at temperature 0, then the conditions are
๐‘ข(0, ๐‘ก) = 0
and
๐‘ข(๐ฟ, ๐‘ก) = 0.
If, on the other hand, the ends are also insulated, the conditions are
๐‘ข๐‘ฅ (0, ๐‘ก) = 0
and
๐‘ข๐‘ฅ (๐ฟ, ๐‘ก) = 0.
Let us see why that is so. If ๐‘ข๐‘ฅ is positive at some point ๐‘ฅ0 , then at a particular time, ๐‘ข is
smaller to the left of ๐‘ฅ0 , and higher to the right of ๐‘ฅ0 . Heat is flowing from high heat to low
heat, that is to the left. On the other hand if ๐‘ข๐‘ฅ is negative then heat is again flowing from
high heat to low heat, that is to the right. So when ๐‘ข๐‘ฅ is zero, that is a point through which
heat is not flowing. In other words, ๐‘ข๐‘ฅ (0, ๐‘ก) = 0 means no heat is flowing in or out of the
wire at the point ๐‘ฅ = 0.
We have two conditions along the ๐‘ฅ-axis as there are two derivatives in the ๐‘ฅ direction.
These side conditions are said to be homogeneous (i.e., ๐‘ข or a derivative of ๐‘ข is set to zero).
We also need an initial condition—the temperature distribution at time ๐‘ก = 0. That is,
๐‘ข(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ),
for some known function ๐‘“ (๐‘ฅ). This initial condition is not a homogeneous side condition.
4.6.2
Separation of variables
The heat equation is linear as ๐‘ข and its derivatives do not appear to any powers or in any
functions. Thus the principle of superposition still applies for the heat equation (without
side conditions): If ๐‘ข1 and ๐‘ข2 are solutions and ๐‘1 , ๐‘2 are constants, then ๐‘ข = ๐‘1 ๐‘ข1 + ๐‘2 ๐‘ข2 is
also a solution.
Exercise 4.6.1: Verify the principle of superposition for the heat equation.
Superposition preserves some of the side conditions. In particular, if ๐‘ข1 and ๐‘ข2 are
solutions that satisfy ๐‘ข(0, ๐‘ก) = 0 and ๐‘ข(๐ฟ, ๐‘ก) = 0, and ๐‘1 , ๐‘2 are constants, then ๐‘ข = ๐‘1 ๐‘ข1 + ๐‘2 ๐‘ข2
is still a solution that satisfies ๐‘ข(0, ๐‘ก) = 0 and ๐‘ข(๐ฟ, ๐‘ก) = 0. Similarly for the side conditions
๐‘ข๐‘ฅ (0, ๐‘ก) = 0 and ๐‘ข๐‘ฅ (๐ฟ, ๐‘ก) = 0. In general, superposition preserves all homogeneous side
conditions.
The method of separation of variables is to try to find solutions that are products of
functions of one variable. For the heat equation, we try to find solutions of the form
๐‘ข(๐‘ฅ, ๐‘ก) = ๐‘‹(๐‘ฅ)๐‘‡(๐‘ก).
That the desired solution we are looking for is of this form is too much to hope for. What is
perfectly reasonable to ask, however, is to find enough “building-block” solutions of the
form ๐‘ข(๐‘ฅ, ๐‘ก) = ๐‘‹(๐‘ฅ)๐‘‡(๐‘ก) using this procedure so that the desired solution to the PDE is
somehow constructed from these building blocks by the use of superposition.
Let us try to solve the heat equation

๐‘ข๐‘ก = ๐‘˜๐‘ข๐‘ฅ๐‘ฅ  with  ๐‘ข(0, ๐‘ก) = 0,  ๐‘ข(๐ฟ, ๐‘ก) = 0,  and  ๐‘ข(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ).
We guess ๐‘ข(๐‘ฅ, ๐‘ก) = ๐‘‹(๐‘ฅ)๐‘‡(๐‘ก). We will try to make this guess satisfy the differential equation,
๐‘ข๐‘ก = ๐‘˜๐‘ข๐‘ฅ๐‘ฅ , and the homogeneous side conditions, ๐‘ข(0, ๐‘ก) = 0 and ๐‘ข(๐ฟ, ๐‘ก) = 0. Then, as
superposition preserves the differential equation and the homogeneous side conditions,
we will try to build up a solution from these building blocks to solve the nonhomogeneous
initial condition ๐‘ข(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ).
First we plug ๐‘ข(๐‘ฅ, ๐‘ก) = ๐‘‹(๐‘ฅ)๐‘‡(๐‘ก) into the heat equation to obtain
๐‘‹(๐‘ฅ)๐‘‡ 0(๐‘ก) = ๐‘˜๐‘‹ 00(๐‘ฅ)๐‘‡(๐‘ก).
We rewrite as
๐‘‡ 0(๐‘ก)
๐‘‹ 00(๐‘ฅ)
=
.
๐‘˜๐‘‡(๐‘ก)
๐‘‹(๐‘ฅ)
This equation must hold for all ๐‘ฅ and all ๐‘ก. But the left-hand side does not depend on ๐‘ฅ
and the right-hand side does not depend on ๐‘ก. Hence, each side must be a constant. Let us
call this constant −๐œ† (the minus sign is for convenience later). We obtain the two equations
๐‘‡′(๐‘ก)/(๐‘˜๐‘‡(๐‘ก)) = −๐œ† = ๐‘‹″(๐‘ฅ)/๐‘‹(๐‘ฅ).
In other words,
๐‘‹ 00(๐‘ฅ) + ๐œ†๐‘‹(๐‘ฅ) = 0,
๐‘‡ 0(๐‘ก) + ๐œ†๐‘˜๐‘‡(๐‘ก) = 0.
The boundary condition ๐‘ข(0, ๐‘ก) = 0 implies ๐‘‹(0)๐‘‡(๐‘ก) = 0. We are looking for a nontrivial
solution and so we can assume that ๐‘‡(๐‘ก) is not identically zero. Hence ๐‘‹(0) = 0. Similarly,
๐‘ข(๐ฟ, ๐‘ก) = 0 implies ๐‘‹(๐ฟ) = 0. We are looking for nontrivial solutions ๐‘‹ of the eigenvalue
problem ๐‘‹ 00 + ๐œ†๐‘‹ = 0, ๐‘‹(0) = 0, ๐‘‹(๐ฟ) = 0. We have previously found that the only
2 2
eigenvalues are ๐œ†๐‘› = ๐‘›๐ฟ๐œ‹2 , for integers ๐‘› ≥ 1, where eigenfunctions are sin ๐‘›๐œ‹
๐ฟ ๐‘ฅ . Hence,
let us pick the solutions
๐‘›๐œ‹ ๐‘‹๐‘› (๐‘ฅ) = sin
๐‘ฅ .
๐ฟ
The corresponding ๐‘‡๐‘› must satisfy the equation
๐‘‡๐‘›0 (๐‘ก) +
๐‘› 2 ๐œ‹2
๐‘˜๐‘‡๐‘› (๐‘ก) = 0.
๐ฟ2
This is one of our fundamental equations, and the solution is just an exponential:
๐‘‡๐‘› (๐‘ก) = ๐‘’
−๐‘› 2 ๐œ‹2
๐ฟ2
๐‘˜๐‘ก
.
It will be useful to note that ๐‘‡๐‘› (0) = 1. Our building-block solutions are

๐‘ข๐‘› (๐‘ฅ, ๐‘ก) = ๐‘‹๐‘› (๐‘ฅ)๐‘‡๐‘› (๐‘ก) = sin(๐‘›๐œ‹๐‘ฅ/๐ฟ) ๐‘’^(−๐‘›²๐œ‹²๐‘˜๐‘ก/๐ฟ²).

We note that ๐‘ข๐‘› (๐‘ฅ, 0) = sin(๐‘›๐œ‹๐‘ฅ/๐ฟ). Let us write ๐‘“ (๐‘ฅ) as the sine series

๐‘“ (๐‘ฅ) = Σ_{๐‘›=1}^∞ ๐‘๐‘› sin(๐‘›๐œ‹๐‘ฅ/๐ฟ).
That is, we find the Fourier series of the odd periodic extension of ๐‘“ (๐‘ฅ). We used the
sine series as it corresponds to the eigenvalue problem for ๐‘‹(๐‘ฅ) above. Finally, we use
superposition to write the solution as
๐‘ข(๐‘ฅ, ๐‘ก) =
∞
Õ
๐‘ ๐‘› ๐‘ข๐‘› (๐‘ฅ, ๐‘ก) =
๐‘›=1
∞
Õ
๐‘ ๐‘› sin
๐‘›=1
๐‘›๐œ‹ ๐ฟ
๐‘ฅ ๐‘’
−๐‘› 2 ๐œ‹2
๐ฟ2
๐‘˜๐‘ก
.
Why does this solution work? First note that it is a solution to the heat equation by
superposition. It satisfies ๐‘ข(0, ๐‘ก) = 0 and ๐‘ข(๐ฟ, ๐‘ก) = 0, because ๐‘ฅ = 0 or ๐‘ฅ = ๐ฟ makes all the
sines vanish. Finally, plugging in ๐‘ก = 0, we notice that ๐‘‡๐‘› (0) = 1 and so
๐‘ข(๐‘ฅ, 0) =
∞
Õ
๐‘ ๐‘› ๐‘ข๐‘› (๐‘ฅ, 0) =
๐‘›=1
∞
Õ
๐‘›=1
๐‘ ๐‘› sin
๐‘›๐œ‹ ๐ฟ
๐‘ฅ = ๐‘“ (๐‘ฅ).
Example 4.6.1: Consider an insulated wire of length 1 whose ends are embedded in ice
(temperature 0). Let ๐‘˜ = 0.003 and let the initial heat distribution be ๐‘ข(๐‘ฅ, 0) = 50 ๐‘ฅ (1 − ๐‘ฅ).
See Figure 4.16 on the next page. Suppose we want to find the temperature function ๐‘ข(๐‘ฅ, ๐‘ก).
Let us also suppose we want to find when (at what ๐‘ก) does the maximum temperature in
the wire drop to one half of the initial maximum of 12.5.
We are solving the following PDE problem:
๐‘ข๐‘ก = 0.003 ๐‘ข๐‘ฅ๐‘ฅ ,
๐‘ข(0, ๐‘ก) = ๐‘ข(1, ๐‘ก) = 0,
๐‘ข(๐‘ฅ, 0) = 50 ๐‘ฅ (1 − ๐‘ฅ)
for 0 < ๐‘ฅ < 1.
Figure 4.16: Initial distribution of temperature in the wire.
We write ๐‘“ (๐‘ฅ) = 50 ๐‘ฅ (1 − ๐‘ฅ) for 0 < ๐‘ฅ < 1 as a sine series. That is, ๐‘“ (๐‘ฅ) =
where
๐‘๐‘› = 2
∫
1
๐‘›
50 ๐‘ฅ (1 − ๐‘ฅ) sin(๐‘›๐œ‹๐‘ฅ) ๐‘‘๐‘ฅ =
0
(
0
200 (−1)
200
−
=
400
๐œ‹3 ๐‘› 3
๐œ‹3 ๐‘› 3
๐œ‹3 ๐‘› 3
Í∞
๐‘›=1 ๐‘ ๐‘›
sin(๐‘›๐œ‹๐‘ฅ),
if ๐‘› even,
if ๐‘› odd.
The solution ๐‘ข(๐‘ฅ, ๐‘ก), plotted in Figure 4.17 on the facing page for 0 ≤ ๐‘ก ≤ 100, is given
by the series:
∞
Õ
2 2
400
๐‘ข(๐‘ฅ, ๐‘ก) =
sin(๐‘›๐œ‹๐‘ฅ) ๐‘’ −๐‘› ๐œ‹ 0.003 ๐‘ก .
3
3
๐œ‹ ๐‘›
๐‘›=1
๐‘› odd
Finally, let us answer the question about the maximum temperature. It is relatively easy
to see that the maximum temperature at any fixed time is always at ๐‘ฅ = 0.5, in the middle
of the wire. The plot of ๐‘ข(๐‘ฅ, ๐‘ก) confirms this intuition. If we plug in ๐‘ฅ = 0.5, we get

๐‘ข(0.5, ๐‘ก) = Σ_{๐‘›=1, ๐‘› odd}^∞ (400/(๐œ‹³๐‘›³)) sin(๐‘›๐œ‹ · 0.5) ๐‘’^(−๐‘›²๐œ‹²·0.003๐‘ก).
For ๐‘› = 3 and higher (remember ๐‘› is only odd), the terms of the series are insignificant
compared to the first term. The first term in the series is already a very good approximation
of the function. Hence

๐‘ข(0.5, ๐‘ก) ≈ (400/๐œ‹³) ๐‘’^(−๐œ‹²·0.003๐‘ก).
The approximation gets better and better as ๐‘ก gets larger as the other terms decay much
faster. Let us plot the function ๐‘ข(0.5, ๐‘ก), the temperature at the midpoint of the wire at time
๐‘ก, in Figure 4.18 on the next page. The figure also plots the approximation by the first term.
After ๐‘ก = 5 or so it would be hard to tell the difference between the first term of the
series for ๐‘ข(๐‘ฅ, ๐‘ก) and the real solution ๐‘ข(๐‘ฅ, ๐‘ก). This behavior is a general feature of solving
4.6. PDES, SEPARATION OF VARIABLES, AND THE HEAT EQUATION
0.00
0
237
t
20
40
0.25
x
60
0.50
80
u(x,t)
100
0.75
1.00
12.5
12.5
10.0
10.0
7.5
7.5
5.0
5.0
2.5
2.5
0.0
0.0
11.700
10.400
9.100
7.800
6.500
5.200
3.900
2.600
1.300
0.000
0.25
0
20
0.50
40
x
0.75
60
80
t
100
1.00
Figure 4.17: Plot of the temperature of the wire at position ๐‘ฅ at time ๐‘ก.
0
25
50
75
100
12.5
12.5
10.0
10.0
7.5
7.5
5.0
5.0
2.5
2.5
0
25
50
75
100
Figure 4.18: Temperature at the midpoint of the wire (the bottom curve), and the approximation of this
temperature by using only the first term in the series (top curve).
the heat equation. If you are interested in behavior for large enough ๐‘ก, only the first one or
two terms may be necessary.
Let us get back to the question of when is the maximum temperature one half of the
initial maximum temperature. That is, when is the temperature at the midpoint 12.5/2 = 6.25.
We notice on the graph that if we use the approximation by the first term we will be close
enough. We solve

6.25 = (400/๐œ‹³) ๐‘’^(−๐œ‹²·0.003๐‘ก).

That is,

๐‘ก = ln(6.25๐œ‹³/400) / (−๐œ‹²·0.003) ≈ 24.5.

So the maximum temperature drops to half at about ๐‘ก = 24.5.
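The same answer drops out of a short computation; a sketch (NumPy assumed; names ours) that solves the one-term approximation exactly and then refines the answer against the truncated series:

import numpy as np

def u_mid(t, terms=25):
    # Truncated series for u(0.5, t) from above (odd n only).
    n = np.arange(1, 2 * terms, 2)
    return np.sum(400 / (np.pi ** 3 * n ** 3) * np.sin(n * np.pi * 0.5)
                  * np.exp(-n ** 2 * np.pi ** 2 * 0.003 * t))

# One-term approximation solved exactly:
t1 = np.log(6.25 * np.pi ** 3 / 400) / (-np.pi ** 2 * 0.003)
print(t1)  # approximately 24.5

# Refine with bisection on the truncated series (u_mid is decreasing in t):
lo, hi = 0.0, 100.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if u_mid(mid) > 6.25 else (lo, mid)
print((lo + hi) / 2)  # nearly identical, since the higher terms are tiny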
We mention an interesting behavior of the solution to the heat equation. The heat
equation “smoothes” out the function ๐‘“ (๐‘ฅ) as ๐‘ก grows. For a fixed ๐‘ก, the solution is a Fourier
series with coefficients ๐‘๐‘› ๐‘’^(−๐‘›²๐œ‹²๐‘˜๐‘ก/๐ฟ²). If ๐‘ก > 0, then these coefficients go to zero faster than
any 1/๐‘›^๐‘ for any power ๐‘. In other words, the Fourier series has infinitely many derivatives
everywhere. Thus even if the function ๐‘“ (๐‘ฅ) has jumps and corners, then for a fixed ๐‘ก > 0,
the solution ๐‘ข(๐‘ฅ, ๐‘ก) as a function of ๐‘ฅ is as smooth as we want it to be.
Example 4.6.2: When the initial condition is already a sine series, then there is no need to
compute anything, you just need to plug in. Consider

๐‘ข๐‘ก = 0.3 ๐‘ข๐‘ฅ๐‘ฅ ,  ๐‘ข(0, ๐‘ก) = ๐‘ข(1, ๐‘ก) = 0,  ๐‘ข(๐‘ฅ, 0) = 0.1 sin(๐œ‹๐‘ฅ) + sin(2๐œ‹๐‘ฅ).

The solution is then

๐‘ข(๐‘ฅ, ๐‘ก) = 0.1 sin(๐œ‹๐‘ฅ) ๐‘’^(−0.3๐œ‹²๐‘ก) + sin(2๐œ‹๐‘ฅ) ๐‘’^(−1.2๐œ‹²๐‘ก).

4.6.3 Insulated ends
Now suppose the ends of the wire are insulated. In this case, we are solving the equation

๐‘ข๐‘ก = ๐‘˜๐‘ข๐‘ฅ๐‘ฅ  with  ๐‘ข๐‘ฅ (0, ๐‘ก) = 0,  ๐‘ข๐‘ฅ (๐ฟ, ๐‘ก) = 0,  and  ๐‘ข(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ).
Yet again we try a solution of the form ๐‘ข(๐‘ฅ, ๐‘ก) = ๐‘‹(๐‘ฅ)๐‘‡(๐‘ก). By the same procedure as before
we plug into the heat equation and arrive at the following two equations
๐‘‹ 00(๐‘ฅ) + ๐œ†๐‘‹(๐‘ฅ) = 0,
๐‘‡ 0(๐‘ก) + ๐œ†๐‘˜๐‘‡(๐‘ก) = 0.
At this point the story changes slightly. The boundary condition ๐‘ข๐‘ฅ (0, ๐‘ก) = 0 implies
๐‘‹′(0)๐‘‡(๐‘ก) = 0. Hence ๐‘‹′(0) = 0. Similarly, ๐‘ข๐‘ฅ (๐ฟ, ๐‘ก) = 0 implies ๐‘‹′(๐ฟ) = 0. We are looking
for nontrivial solutions ๐‘‹ of the eigenvalue problem ๐‘‹″ + ๐œ†๐‘‹ = 0, ๐‘‹′(0) = 0, ๐‘‹′(๐ฟ) = 0. We
have previously found that the only eigenvalues are ๐œ†๐‘› = ๐‘›²๐œ‹²/๐ฟ², for integers ๐‘› ≥ 0, where
eigenfunctions are cos(๐‘›๐œ‹๐‘ฅ/๐ฟ) (we include the constant eigenfunction). Hence, let us pick
the solutions

๐‘‹๐‘› (๐‘ฅ) = cos(๐‘›๐œ‹๐‘ฅ/๐ฟ)  and  ๐‘‹0 (๐‘ฅ) = 1.
The corresponding ๐‘‡๐‘› must satisfy the equation
๐‘‡๐‘›0 (๐‘ก) +
๐‘› 2 ๐œ‹2
๐‘˜๐‘‡๐‘› (๐‘ก) = 0.
๐ฟ2
For ๐‘› ≥ 1, as before,
๐‘‡๐‘› (๐‘ก) = ๐‘’
−๐‘› 2 ๐œ‹2
๐ฟ2
๐‘˜๐‘ก
.
For ๐‘› = 0, we have ๐‘‡00(๐‘ก) = 0 and hence ๐‘‡0 (๐‘ก) = 1. Our building-block solutions are
๐‘ข๐‘› (๐‘ฅ, ๐‘ก) = ๐‘‹๐‘› (๐‘ฅ)๐‘‡๐‘› (๐‘ก) = cos
๐‘›๐œ‹ ๐ฟ
๐‘ฅ ๐‘’
−๐‘› 2 ๐œ‹2
๐ฟ2
๐‘˜๐‘ก
,
and
๐‘ข0 (๐‘ฅ, ๐‘ก) = 1.
We note that ๐‘ข๐‘› (๐‘ฅ, 0) = cos
๐‘›๐œ‹
๐ฟ ๐‘ฅ
. Let us write ๐‘“ using the cosine series
∞
๐‘›๐œ‹ ๐‘Ž0 Õ
๐‘Ž ๐‘› cos
+
๐‘ฅ .
๐‘“ (๐‘ฅ) =
2
๐ฟ
๐‘›=1
That is, we find the Fourier series of the even periodic extension of ๐‘“ (๐‘ฅ).
We use superposition to write the solution as

๐‘ข(๐‘ฅ, ๐‘ก) = ๐‘Ž0/2 + Σ_{๐‘›=1}^∞ ๐‘Ž๐‘› ๐‘ข๐‘› (๐‘ฅ, ๐‘ก) = ๐‘Ž0/2 + Σ_{๐‘›=1}^∞ ๐‘Ž๐‘› cos(๐‘›๐œ‹๐‘ฅ/๐ฟ) ๐‘’^(−๐‘›²๐œ‹²๐‘˜๐‘ก/๐ฟ²).
Example 4.6.3: Let us try the same equation as before, but for insulated ends. We are
solving the following PDE problem
๐‘ข๐‘ก = 0.003 ๐‘ข๐‘ฅ๐‘ฅ ,
๐‘ข๐‘ฅ (0, ๐‘ก) = ๐‘ข๐‘ฅ (1, ๐‘ก) = 0,
๐‘ข(๐‘ฅ, 0) = 50 ๐‘ฅ (1 − ๐‘ฅ)
for 0 < ๐‘ฅ < 1.
For this problem, we must find the cosine series of ๐‘ข(๐‘ฅ, 0). For 0 < ๐‘ฅ < 1 we have

50 ๐‘ฅ (1 − ๐‘ฅ) = 25/3 + Σ_{๐‘›=2, ๐‘› even}^∞ (−200/(๐œ‹²๐‘›²)) cos(๐‘›๐œ‹๐‘ฅ).
Figure 4.19: Plot of the temperature of the insulated wire at position ๐‘ฅ at time ๐‘ก.
The calculation is left to the reader. Hence, the solution to the PDE problem, plotted in
Figure 4.19, is given by the series

๐‘ข(๐‘ฅ, ๐‘ก) = 25/3 + Σ_{๐‘›=2, ๐‘› even}^∞ (−200/(๐œ‹²๐‘›²)) cos(๐‘›๐œ‹๐‘ฅ) ๐‘’^(−๐‘›²๐œ‹²·0.003๐‘ก).
Note in the graph that as time goes on, the temperature evens out across the wire.
Eventually, all the terms except the constant die out, and you will be left with a uniform
temperature of 25/3 ≈ 8.33 along the entire length of the wire.
Let us expand on the last point. The constant term in the series is

๐‘Ž0/2 = (1/๐ฟ) ∫_0^๐ฟ ๐‘“ (๐‘ฅ) ๐‘‘๐‘ฅ.

In other words, ๐‘Ž0/2 is the average value of ๐‘“ (๐‘ฅ), that is, the average of the initial temperature.
As the wire is insulated everywhere, no heat can get out, no heat can get in. So the
temperature tries to distribute evenly over time, and the average temperature must always
be the same, in particular it is always ๐‘Ž0/2. As time goes to infinity, the temperature goes to
the constant ๐‘Ž0/2 everywhere.
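A small sketch (NumPy assumed; names ours) illustrates this averaging numerically for the example above:

import numpy as np

L, k = 1.0, 0.003

def u_insulated(x, t, terms=50):
    # 25/3 plus the even-n cosine terms of the example's solution.
    u = 25.0 / 3.0
    for n in range(2, 2 * terms, 2):
        u += (-200 / (np.pi ** 2 * n ** 2)) * np.cos(n * np.pi * x / L) \
             * np.exp(-(n * np.pi / L) ** 2 * k * t)
    return u

for t in (0, 10, 30, 100):
    # Approaches 25/3 ≈ 8.33 as t grows; at t = 0 it is near the initial value 0 at x = 0.
    print(t, u_insulated(0.0, t))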
4.6.4 Exercises
Exercise 4.6.2: Consider a wire of length 2, with ๐‘˜ = 0.001 and an initial temperature distribution
๐‘ข(๐‘ฅ, 0) = 50๐‘ฅ. Both ends are embedded in ice (temperature 0). Find the solution as a series.
Exercise 4.6.3: Find a series solution of
๐‘ข๐‘ก = ๐‘ข๐‘ฅ๐‘ฅ ,
๐‘ข(0, ๐‘ก) = ๐‘ข(1, ๐‘ก) = 0,
๐‘ข(๐‘ฅ, 0) = 100
for 0 < ๐‘ฅ < 1.
Exercise 4.6.4: Find a series solution of
๐‘ข๐‘ก = ๐‘ข๐‘ฅ๐‘ฅ ,
๐‘ข๐‘ฅ (0, ๐‘ก) = ๐‘ข๐‘ฅ (๐œ‹, ๐‘ก) = 0,
๐‘ข(๐‘ฅ, 0) = 3 cos(๐‘ฅ) + cos(3๐‘ฅ)
for 0 < ๐‘ฅ < ๐œ‹.
Exercise 4.6.5: Find a series solution of

๐‘ข๐‘ก = (1/3) ๐‘ข๐‘ฅ๐‘ฅ ,  ๐‘ข๐‘ฅ (0, ๐‘ก) = ๐‘ข๐‘ฅ (๐œ‹, ๐‘ก) = 0,  ๐‘ข(๐‘ฅ, 0) = 10๐‘ฅ/๐œ‹  for 0 < ๐‘ฅ < ๐œ‹.
Exercise 4.6.6: Find a series solution of
๐‘ข๐‘ก = ๐‘ข๐‘ฅ๐‘ฅ ,
๐‘ข(0, ๐‘ก) = 0,
๐‘ข(1, ๐‘ก) = 100,
๐‘ข(๐‘ฅ, 0) = sin(๐œ‹๐‘ฅ)
for 0 < ๐‘ฅ < 1.
Hint: Use the fact that ๐‘ข(๐‘ฅ, ๐‘ก) = 100๐‘ฅ is a solution satisfying ๐‘ข๐‘ก = ๐‘ข๐‘ฅ๐‘ฅ , ๐‘ข(0, ๐‘ก) = 0, ๐‘ข(1, ๐‘ก) = 100.
Then use superposition.
Exercise 4.6.7: Find the steady state temperature solution as a function of ๐‘ฅ alone, by letting
๐‘ก → ∞ in the solution from exercises 4.6.5 and 4.6.6. Verify that it satisfies the equation ๐‘ข๐‘ฅ๐‘ฅ = 0.
Exercise 4.6.8: Use separation of variables to find a nontrivial solution to ๐‘ข๐‘ฅ๐‘ฅ + ๐‘ข๐‘ฆ๐‘ฆ = 0, where
๐‘ข(๐‘ฅ, 0) = 0 and ๐‘ข(0, ๐‘ฆ) = 0. Hint: Try ๐‘ข(๐‘ฅ, ๐‘ฆ) = ๐‘‹(๐‘ฅ)๐‘Œ(๐‘ฆ).
Exercise 4.6.9 (challenging): Suppose that one end of the wire is insulated (say at ๐‘ฅ = 0) and the
other end is kept at zero temperature. That is, find a series solution of
๐‘ข๐‘ก = ๐‘˜๐‘ข๐‘ฅ๐‘ฅ ,
๐‘ข๐‘ฅ (0, ๐‘ก) = ๐‘ข(๐ฟ, ๐‘ก) = 0,
๐‘ข(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ)
for 0 < ๐‘ฅ < ๐ฟ.
Express any coefficients in the series by integrals of ๐‘“ (๐‘ฅ).
Exercise 4.6.10 (challenging): Suppose that the wire is circular and insulated, so there are no
ends. You can think of this as simply connecting the two ends and making sure the solution matches
up at the ends. That is, find a series solution of
๐‘ข๐‘ก = ๐‘˜๐‘ข๐‘ฅ๐‘ฅ ,
๐‘ข(0, ๐‘ก) = ๐‘ข(๐ฟ, ๐‘ก),
๐‘ข(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ)
๐‘ข๐‘ฅ (0, ๐‘ก) = ๐‘ข๐‘ฅ (๐ฟ, ๐‘ก),
for 0 < ๐‘ฅ < ๐ฟ.
Express any coefficients in the series by integrals of ๐‘“ (๐‘ฅ).
Exercise 4.6.11: Consider a wire insulated on both ends, ๐ฟ = 1, ๐‘˜ = 1, and ๐‘ข(๐‘ฅ, 0) = cos²(๐œ‹๐‘ฅ).
a) Find the solution ๐‘ข(๐‘ฅ, ๐‘ก). Hint: a trig identity.
b) Find the average temperature.
c) Initially the temperature variation is 1 (maximum minus the minimum). Find the time when
the variation is 1/2.
Exercise 4.6.101: Find a series solution of
๐‘ข๐‘ก = 3๐‘ข๐‘ฅ๐‘ฅ ,
๐‘ข(0, ๐‘ก) = ๐‘ข(๐œ‹, ๐‘ก) = 0,
๐‘ข(๐‘ฅ, 0) = 5 sin(๐‘ฅ) + 2 sin(5๐‘ฅ)
for 0 < ๐‘ฅ < ๐œ‹.
Exercise 4.6.102: Find a series solution of
๐‘ข๐‘ก = 0.1๐‘ข๐‘ฅ๐‘ฅ ,
๐‘ข๐‘ฅ (0, ๐‘ก) = ๐‘ข๐‘ฅ (๐œ‹, ๐‘ก) = 0,
๐‘ข(๐‘ฅ, 0) = 1 + 2 cos(๐‘ฅ)
for 0 < ๐‘ฅ < ๐œ‹.
Exercise 4.6.103: Use separation of variables to find a nontrivial solution to ๐‘ข๐‘ฅ๐‘ก = ๐‘ข๐‘ฅ๐‘ฅ .
Exercise 4.6.104: Use separation of variables to find a nontrivial solution to ๐‘ข๐‘ฅ + ๐‘ข๐‘ก = ๐‘ข. Hint:
Try ๐‘ข(๐‘ฅ, ๐‘ก) = ๐‘‹(๐‘ฅ) + ๐‘‡(๐‘ก).
Exercise 4.6.105: Suppose that the temperature on the wire is fixed at 0 at the ends, ๐ฟ = 1, ๐‘˜ = 1,
and ๐‘ข(๐‘ฅ, 0) = 100 sin(2๐œ‹๐‘ฅ).
a) What is the temperature at ๐‘ฅ = 1/2 at any time.
b) What is the maximum and the minimum temperature on the wire at ๐‘ก = 0.
c) At what time is the maximum temperature on the wire exactly one half of the initial maximum
at ๐‘ก = 0.
4.7 One-dimensional wave equation
Note: 1 lecture, §9.6 in [EP], §10.7 in [BD]
Imagine we have a tensioned guitar string of length ๐ฟ. Let us only consider vibrations
in one direction. Let ๐‘ฅ denote the position along the string, let ๐‘ก denote time, and let ๐‘ฆ
denote the displacement of the string from the rest position. See Figure 4.20.
Figure 4.20: Vibrating string of length ๐ฟ, ๐‘ฅ is position, ๐‘ฆ is displacement.
The equation that governs this setup is the so-called one-dimensional wave equation:
๐‘ฆ๐‘ก๐‘ก = ๐‘Ž 2 ๐‘ฆ ๐‘ฅ๐‘ฅ ,
for some constant ๐‘Ž > 0. The intuition is similar to the heat equation, replacing velocity
with acceleration: the acceleration at a specific point is proportional to the second derivative
of the shape of the string. In other words, when the string is concave down then ๐‘ฆ๐‘ฅ๐‘ฅ is
negative and the string wants to accelerate downwards, so ๐‘ฆ๐‘ก๐‘ก should be negative. And
vice versa. The wave equation is an example of a hyperbolic PDE.
Assume that the ends of the string are fixed in place as on the guitar:
๐‘ฆ(0, ๐‘ก) = 0
and
๐‘ฆ(๐ฟ, ๐‘ก) = 0.
Note that we have two conditions along the ๐‘ฅ-axis as there are two derivatives in the ๐‘ฅ
direction.
There are also two derivatives along the ๐‘ก direction and hence we need two further
conditions here. We need to know the initial position and the initial velocity of the string.
That is, for some known functions ๐‘“ (๐‘ฅ) and ๐‘”(๐‘ฅ), we impose
๐‘ฆ(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ)
and
๐‘ฆ๐‘ก (๐‘ฅ, 0) = ๐‘”(๐‘ฅ).
The equation is linear, so superposition works just as it did for the heat equation. And
again we will use separation of variables to find enough building-block solutions to get
the overall solution. There is one change however. It will be easier to solve two separate
problems and add their solutions.
The two problems we will solve are

๐‘ค๐‘ก๐‘ก = ๐‘Ž²๐‘ค๐‘ฅ๐‘ฅ ,  ๐‘ค(0, ๐‘ก) = ๐‘ค(๐ฟ, ๐‘ก) = 0,
๐‘ค(๐‘ฅ, 0) = 0  for 0 < ๐‘ฅ < ๐ฟ,  ๐‘ค๐‘ก (๐‘ฅ, 0) = ๐‘”(๐‘ฅ)  for 0 < ๐‘ฅ < ๐ฟ,  (4.11)

and

๐‘ง๐‘ก๐‘ก = ๐‘Ž²๐‘ง๐‘ฅ๐‘ฅ ,  ๐‘ง(0, ๐‘ก) = ๐‘ง(๐ฟ, ๐‘ก) = 0,
๐‘ง(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ)  for 0 < ๐‘ฅ < ๐ฟ,  ๐‘ง๐‘ก (๐‘ฅ, 0) = 0  for 0 < ๐‘ฅ < ๐ฟ.  (4.12)
The principle of superposition implies that ๐‘ฆ = ๐‘ค + ๐‘ง solves the wave equation and
furthermore ๐‘ฆ(๐‘ฅ, 0) = ๐‘ค(๐‘ฅ, 0) + ๐‘ง(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ) and ๐‘ฆ๐‘ก (๐‘ฅ, 0) = ๐‘ค ๐‘ก (๐‘ฅ, 0) + ๐‘ง ๐‘ก (๐‘ฅ, 0) = ๐‘”(๐‘ฅ).
Hence, ๐‘ฆ is a solution to
๐‘ฆ๐‘ก๐‘ก = ๐‘Ž 2 ๐‘ฆ ๐‘ฅ๐‘ฅ ,
๐‘ฆ(0, ๐‘ก) = ๐‘ฆ(๐ฟ, ๐‘ก) = 0,
๐‘ฆ(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ)
๐‘ฆ๐‘ก (๐‘ฅ, 0) = ๐‘”(๐‘ฅ)
for 0 < ๐‘ฅ < ๐ฟ,
for 0 < ๐‘ฅ < ๐ฟ.
(4.13)
The reason for all this complexity is that superposition only works for homogeneous
conditions such as ๐‘ฆ(0, ๐‘ก) = ๐‘ฆ(๐ฟ, ๐‘ก) = 0, ๐‘ฆ(๐‘ฅ, 0) = 0, or ๐‘ฆ๐‘ก (๐‘ฅ, 0) = 0. Therefore, we can use
separation of variables to find many building-block solutions solving all the homogeneous
conditions. We can then use them to construct a solution satisfying the remaining
nonhomogeneous condition.
Let us start with (4.11). We try a solution of the form ๐‘ค(๐‘ฅ, ๐‘ก) = ๐‘‹(๐‘ฅ)๐‘‡(๐‘ก) again. We plug
into the wave equation to obtain

๐‘‹(๐‘ฅ)๐‘‡″(๐‘ก) = ๐‘Ž²๐‘‹″(๐‘ฅ)๐‘‡(๐‘ก).

Rewriting we get

๐‘‡″(๐‘ก)/(๐‘Ž²๐‘‡(๐‘ก)) = ๐‘‹″(๐‘ฅ)/๐‘‹(๐‘ฅ).

Again, the left-hand side depends only on ๐‘ก and the right-hand side depends only on ๐‘ฅ. So
both sides equal a constant, which we denote by −๐œ†:

๐‘‡″(๐‘ก)/(๐‘Ž²๐‘‡(๐‘ก)) = −๐œ† = ๐‘‹″(๐‘ฅ)/๐‘‹(๐‘ฅ).

We solve to get two ordinary differential equations

๐‘‹″(๐‘ฅ) + ๐œ†๐‘‹(๐‘ฅ) = 0,  ๐‘‡″(๐‘ก) + ๐œ†๐‘Ž²๐‘‡(๐‘ก) = 0.
The conditions 0 = ๐‘ค(0, ๐‘ก) = ๐‘‹(0)๐‘‡(๐‘ก) implies ๐‘‹(0) = 0 and ๐‘ค(๐ฟ, ๐‘ก) = 0 implies that ๐‘‹(๐ฟ) = 0.
2 2
Therefore, the only nontrivial solutions for the first equation are when ๐œ† = ๐œ†๐‘› = ๐‘›๐ฟ๐œ‹2 and
they are
๐‘›๐œ‹ ๐‘‹๐‘› (๐‘ฅ) = sin
๐‘ฅ .
๐ฟ
The general solution for ๐‘‡ for this particular ๐œ†๐‘› is
๐‘‡๐‘› (๐‘ก) = ๐ด cos
๐‘›๐œ‹๐‘Ž ๐‘›๐œ‹๐‘Ž ๐ฟ
๐ฟ
๐‘ก + ๐ต sin
๐‘ก .
We also have the condition that ๐‘ค(๐‘ฅ, 0) = 0 or ๐‘‹(๐‘ฅ)๐‘‡(0) = 0. This implies that ๐‘‡(0) = 0,
๐ฟ
which in turn forces ๐ด = 0. It is convenient to pick ๐ต = ๐‘›๐œ‹๐‘Ž
(you will see why in a moment)
and hence
๐‘›๐œ‹๐‘Ž ๐ฟ
๐‘‡๐‘› (๐‘ก) =
sin
๐‘ก .
๐‘›๐œ‹๐‘Ž
๐ฟ
Our building-block solutions are

๐‘ค๐‘› (๐‘ฅ, ๐‘ก) = (๐ฟ/(๐‘›๐œ‹๐‘Ž)) sin(๐‘›๐œ‹๐‘ฅ/๐ฟ) sin(๐‘›๐œ‹๐‘Ž๐‘ก/๐ฟ).

We differentiate in ๐‘ก:

∂๐‘ค๐‘›/∂๐‘ก (๐‘ฅ, ๐‘ก) = sin(๐‘›๐œ‹๐‘ฅ/๐ฟ) cos(๐‘›๐œ‹๐‘Ž๐‘ก/๐ฟ).

Hence,

∂๐‘ค๐‘›/∂๐‘ก (๐‘ฅ, 0) = sin(๐‘›๐œ‹๐‘ฅ/๐ฟ).

We expand ๐‘”(๐‘ฅ) in terms of these sines as

๐‘”(๐‘ฅ) = Σ_{๐‘›=1}^∞ ๐‘๐‘› sin(๐‘›๐œ‹๐‘ฅ/๐ฟ).

Using superposition we write the solution to (4.11) as a series

๐‘ค(๐‘ฅ, ๐‘ก) = Σ_{๐‘›=1}^∞ ๐‘๐‘› ๐‘ค๐‘› (๐‘ฅ, ๐‘ก) = Σ_{๐‘›=1}^∞ ๐‘๐‘› (๐ฟ/(๐‘›๐œ‹๐‘Ž)) sin(๐‘›๐œ‹๐‘ฅ/๐ฟ) sin(๐‘›๐œ‹๐‘Ž๐‘ก/๐ฟ).
Exercise 4.7.1: Check that ๐‘ค(๐‘ฅ, 0) = 0 and ๐‘ค ๐‘ก (๐‘ฅ, 0) = ๐‘”(๐‘ฅ).
We solve (4.12) similarly. We again try ๐‘ง(๐‘ฅ, ๐‘ก) = ๐‘‹(๐‘ฅ)๐‘‡(๐‘ก). The procedure works exactly
the same at first. We obtain

๐‘‹″(๐‘ฅ) + ๐œ†๐‘‹(๐‘ฅ) = 0,  ๐‘‡″(๐‘ก) + ๐œ†๐‘Ž²๐‘‡(๐‘ก) = 0,

and the conditions ๐‘‹(0) = 0, ๐‘‹(๐ฟ) = 0. So again ๐œ† = ๐œ†๐‘› = ๐‘›²๐œ‹²/๐ฟ² and

๐‘‹๐‘› (๐‘ฅ) = sin(๐‘›๐œ‹๐‘ฅ/๐ฟ).
This time the condition on ๐‘‡ is ๐‘‡′(0) = 0. Thus we get that ๐ต = 0 and we take

๐‘‡๐‘› (๐‘ก) = cos(๐‘›๐œ‹๐‘Ž๐‘ก/๐ฟ).

Our building-block solution is

๐‘ง๐‘› (๐‘ฅ, ๐‘ก) = sin(๐‘›๐œ‹๐‘ฅ/๐ฟ) cos(๐‘›๐œ‹๐‘Ž๐‘ก/๐ฟ).

As ๐‘ง๐‘› (๐‘ฅ, 0) = sin(๐‘›๐œ‹๐‘ฅ/๐ฟ), we expand ๐‘“ (๐‘ฅ) in terms of these sines as

๐‘“ (๐‘ฅ) = Σ_{๐‘›=1}^∞ ๐‘๐‘› sin(๐‘›๐œ‹๐‘ฅ/๐ฟ).

And we write down the solution to (4.12) as a series

๐‘ง(๐‘ฅ, ๐‘ก) = Σ_{๐‘›=1}^∞ ๐‘๐‘› ๐‘ง๐‘› (๐‘ฅ, ๐‘ก) = Σ_{๐‘›=1}^∞ ๐‘๐‘› sin(๐‘›๐œ‹๐‘ฅ/๐ฟ) cos(๐‘›๐œ‹๐‘Ž๐‘ก/๐ฟ).
Exercise 4.7.2: Fill in the details in the derivation of the solution of (4.12). Check that the solution
satisfies all the side conditions.
Putting these two solutions together, let us state the result as a theorem.
Theorem 4.7.1. Take the equation

๐‘ฆ๐‘ก๐‘ก = ๐‘Ž²๐‘ฆ๐‘ฅ๐‘ฅ ,  ๐‘ฆ(0, ๐‘ก) = ๐‘ฆ(๐ฟ, ๐‘ก) = 0,
๐‘ฆ(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ)  for 0 < ๐‘ฅ < ๐ฟ,  ๐‘ฆ๐‘ก (๐‘ฅ, 0) = ๐‘”(๐‘ฅ)  for 0 < ๐‘ฅ < ๐ฟ,  (4.14)

where

๐‘“ (๐‘ฅ) = Σ_{๐‘›=1}^∞ ๐‘๐‘› sin(๐‘›๐œ‹๐‘ฅ/๐ฟ)  and  ๐‘”(๐‘ฅ) = Σ_{๐‘›=1}^∞ ๐‘๐‘› sin(๐‘›๐œ‹๐‘ฅ/๐ฟ).

Then the solution ๐‘ฆ(๐‘ฅ, ๐‘ก) can be written as a sum of the solutions of (4.11) and (4.12):

๐‘ฆ(๐‘ฅ, ๐‘ก) = Σ_{๐‘›=1}^∞ [ ๐‘๐‘› (๐ฟ/(๐‘›๐œ‹๐‘Ž)) sin(๐‘›๐œ‹๐‘ฅ/๐ฟ) sin(๐‘›๐œ‹๐‘Ž๐‘ก/๐ฟ) + ๐‘๐‘› sin(๐‘›๐œ‹๐‘ฅ/๐ฟ) cos(๐‘›๐œ‹๐‘Ž๐‘ก/๐ฟ) ]
       = Σ_{๐‘›=1}^∞ sin(๐‘›๐œ‹๐‘ฅ/๐ฟ) [ ๐‘๐‘› (๐ฟ/(๐‘›๐œ‹๐‘Ž)) sin(๐‘›๐œ‹๐‘Ž๐‘ก/๐ฟ) + ๐‘๐‘› cos(๐‘›๐œ‹๐‘Ž๐‘ก/๐ฟ) ].
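As a numerical companion to the theorem, here is a minimal sketch in Python (NumPy assumed; the function name, the argument order, and truncating by the length of the coefficient lists are our own choices):

import numpy as np

def wave_series(x, t, c, b, a, L):
    # Truncated version of the series in Theorem 4.7.1:
    # y(x,t) = sum_n sin(n pi x/L) [ b_n L/(n pi a) sin(n pi a t/L) + c_n cos(n pi a t/L) ],
    # where c holds the sine coefficients of f and b those of g.
    y = 0.0
    for n in range(1, len(c) + 1):
        w = n * np.pi / L
        y += np.sin(w * x) * (b[n - 1] * np.sin(w * a * t) / (w * a)
                              + c[n - 1] * np.cos(w * a * t))
    return y

# Example: f(x) = sin(pi x), g(x) = 0, L = a = 1 gives a pure standing wave.
print(wave_series(0.5, 0.0, c=[1.0], b=[0.0], a=1.0, L=1.0))  # prints 1.0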
Example 4.7.1: Consider a string of length 2 plucked in the middle; it has the initial shape
given in Figure 4.21. That is,

๐‘“ (๐‘ฅ) = { 0.1 ๐‘ฅ  if 0 ≤ ๐‘ฅ ≤ 1,   0.1 (2 − ๐‘ฅ)  if 1 < ๐‘ฅ ≤ 2. }
Figure 4.21: Initial shape of a plucked string from Example 4.7.1.
Let the string start at rest (๐‘”(๐‘ฅ) = 0), and let ๐‘Ž = 1 for simplicity. In other words, we
wish to solve the problem:

๐‘ฆ๐‘ก๐‘ก = ๐‘ฆ๐‘ฅ๐‘ฅ ,  ๐‘ฆ(0, ๐‘ก) = ๐‘ฆ(2, ๐‘ก) = 0,  ๐‘ฆ(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ)  and  ๐‘ฆ๐‘ก (๐‘ฅ, 0) = 0.
We leave it to the reader to compute the sine series of ๐‘“ (๐‘ฅ). The series will be

๐‘“ (๐‘ฅ) = Σ_{๐‘›=1}^∞ (0.8/(๐‘›²๐œ‹²)) sin(๐‘›๐œ‹/2) sin(๐‘›๐œ‹๐‘ฅ/2).

Note that sin(๐‘›๐œ‹/2) is the sequence 1, 0, −1, 0, 1, 0, −1, . . . for ๐‘› = 1, 2, 3, 4, . . .. Therefore,

๐‘“ (๐‘ฅ) = (0.8/๐œ‹²) sin(๐œ‹๐‘ฅ/2) − (0.8/(9๐œ‹²)) sin(3๐œ‹๐‘ฅ/2) + (0.8/(25๐œ‹²)) sin(5๐œ‹๐‘ฅ/2) − · · ·
The solution ๐‘ฆ(๐‘ฅ, ๐‘ก) is given by
๐‘ฆ(๐‘ฅ, ๐‘ก) =
∞
Õ
0.8
๐‘›=1
๐‘› 2 ๐œ‹2
sin
๐‘›๐œ‹ 2
∞
Õ
0.8(−1)๐‘š+1
sin
๐‘›๐œ‹ ๐‘›๐œ‹ 2
2
๐‘ฅ cos
๐‘ก
(2๐‘š − 1)๐œ‹
(2๐‘š − 1)๐œ‹
=
sin
๐‘ฅ cos
๐‘ก
2 2
2
2
(2๐‘š
−
1)
๐œ‹
๐‘š=1
๐œ‹ ๐œ‹ 0.8
0.8
3๐œ‹
3๐œ‹
= 2 sin
๐‘ฅ cos
๐‘ก − 2 sin
๐‘ฅ cos
๐‘ก
2
2
2
2
๐œ‹
9๐œ‹
0.8
5๐œ‹
5๐œ‹
sin
+
๐‘ฅ cos
๐‘ก −···
2
2
25๐œ‹2
See Figure 4.22 on the next page for a plot for 0 < ๐‘ก < 3. Notice that unlike the heat
equation, the solution does not become “smoother,” the “sharp edges” remain. We will see
the reason for this behavior in the next section where we derive the solution to the wave
equation in a different way.
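One can also check numerically that the series reproduces the initial pluck and keeps its corners; a small sketch (NumPy assumed; names ours):

import numpy as np

def y_plucked(x, t, terms=200):
    # Partial sum of the plucked-string series from Example 4.7.1.
    n = np.arange(1, terms + 1)
    coeff = 0.8 / (n ** 2 * np.pi ** 2) * np.sin(n * np.pi / 2)
    return np.sum(coeff * np.sin(n * np.pi * x / 2) * np.cos(n * np.pi * t / 2))

print(y_plucked(1.0, 0.0))  # approximately 0.1, the height of the pluck at the midpoint

Evaluating at later times shows the corner traveling, not smoothing out, in line with the plots.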
Figure 4.22: Shape of the plucked string for 0 < ๐‘ก < 3.
Make sure you understand what the plot, such as the one in the figure, is telling you.
For each fixed ๐‘ก, you can think of the function ๐‘ฆ(๐‘ฅ, ๐‘ก) as just a function of ๐‘ฅ. This function
gives you the shape of the string at time ๐‘ก. See Figure 4.23 for plots of ๐‘ฆ as a function of ๐‘ฅ
at several different values of ๐‘ก. On this plot you can see the sharp edges remaining much
better.
One thing to take away from all this is how a guitar sounds. Notice that the (angular)
frequencies that come up in the solution are ๐‘›๐œ‹๐‘Ž/๐ฟ. That is, there is a certain base fundamental
frequency ๐œ‹๐‘Ž/๐ฟ, and then we also get all the multiples of this frequency, which in music are
called the overtones. Which overtones appear and with what amplitude is what musicians
call the timbre of the note. Mathematicians usually call this the spectrum. Because all the
frequencies are multiples of one frequency (the fundamental) we get a nice pleasing sound.

The fundamental frequency ๐œ‹๐‘Ž/๐ฟ increases as we decrease the length ๐ฟ. That is, if we place a
finger on the fingerboard and then pluck a string we get a higher note. The constant ๐‘Ž is
given by

๐‘Ž = √(๐‘‡/๐œŒ),
where ๐‘‡ is tension and ๐œŒ is the linear density of the string. Tightening the string (turning
249
4.7. ONE-DIMENSIONAL WAVE EQUATION
0.0
0.5
1.0
1.5
2.0
0.0
0.5
1.0
1.5
2.0
0.10
0.10
0.10
0.10
0.05
0.05
0.05
0.05
0.00
0.00
0.00
0.00
-0.05
-0.05
-0.05
-0.05
-0.10
-0.10
-0.10
0.0
0.5
1.0
1.5
2.0
0.0
0.5
1.0
1.5
2.0
-0.10
0.0
0.5
1.0
1.5
2.0
0.0
0.5
1.0
1.5
2.0
0.10
0.10
0.10
0.10
0.05
0.05
0.05
0.05
0.00
0.00
0.00
0.00
-0.05
-0.05
-0.05
-0.05
-0.10
-0.10
-0.10
0.0
0.5
1.0
1.5
2.0
-0.10
0.0
0.5
1.0
1.5
2.0
Figure 4.23: Plucked string for ๐‘ก = 0, ๐‘ก = 0.4, ๐‘ก = 0.8, and ๐‘ก = 1.2.
the tuning peg on a guitar) increases ๐‘Ž and hence produces a higher fundamental frequency
(a higher note). On the other hand using a heavier string reduces ๐‘Ž and produces a lower
fundamental frequency (a lower note). A bass guitar has longer thicker strings, while a
ukulele has short strings made of lighter material.
Something rather interesting is the almost symmetry between space and time. In its
simplest form we see this symmetry in the solutions

sin(๐‘›๐œ‹๐‘ฅ/๐ฟ) sin(๐‘›๐œ‹๐‘Ž๐‘ก/๐ฟ).

Except for the ๐‘Ž, time and space are just the same.
In general, the solution for a fixed ๐‘ฅ is a Fourier series in ๐‘ก, for a fixed ๐‘ก it is a Fourier
series in ๐‘ฅ, and the coefficients are related. If the shape ๐‘“ (๐‘ฅ) or the initial velocity have
lots of corners, then the sound wave will have lots of corners. That is because the Fourier
coefficients of the initial shape decay to zero (as ๐‘› → ∞) at the same rate as the Fourier
coefficients of the wave in time (for some fixed ๐‘ฅ). So if you use a sharp object to pick the
string, you get a sharper sound with lots of high frequency components, while if you use
your thumb, you get a softer sound without so many high overtones. Similarly if you pluck
close to the bridge, you are getting a pluck that looks more like the sawtooth, and you get
an even sharper sound.
In fact, if you look at the formula for the solution, you see that for any fixed ๐‘ฅ we get an
almost arbitrary Fourier series in ๐‘ก, everything except the constant term. You can essentially
obtain any sound you want by plucking the string in just the right way. Of course we are
considering an ideal string of no stiffness and no air resistance. Those variables clearly
impact the sound as well.
4.7.1 Exercises
Exercise 4.7.3: Solve

๐‘ฆ๐‘ก๐‘ก = 9๐‘ฆ๐‘ฅ๐‘ฅ ,  ๐‘ฆ(0, ๐‘ก) = ๐‘ฆ(1, ๐‘ก) = 0,
๐‘ฆ(๐‘ฅ, 0) = sin(3๐œ‹๐‘ฅ) + (1/4) sin(6๐œ‹๐‘ฅ)  for 0 < ๐‘ฅ < 1,  ๐‘ฆ๐‘ก (๐‘ฅ, 0) = 0  for 0 < ๐‘ฅ < 1.

Exercise 4.7.4: Solve

๐‘ฆ๐‘ก๐‘ก = 4๐‘ฆ๐‘ฅ๐‘ฅ ,  ๐‘ฆ(0, ๐‘ก) = ๐‘ฆ(1, ๐‘ก) = 0,
๐‘ฆ(๐‘ฅ, 0) = sin(3๐œ‹๐‘ฅ) + (1/4) sin(6๐œ‹๐‘ฅ)  for 0 < ๐‘ฅ < 1,  ๐‘ฆ๐‘ก (๐‘ฅ, 0) = sin(9๐œ‹๐‘ฅ)  for 0 < ๐‘ฅ < 1.
Exercise 4.7.5: Derive the solution for a general plucked string of length ๐ฟ and any constant ๐‘Ž (in
the equation ๐‘ฆ๐‘ก๐‘ก = ๐‘Ž 2 ๐‘ฆ ๐‘ฅ๐‘ฅ ), where we raise the string some distance ๐‘ at the midpoint and let go.
Exercise 4.7.6: Imagine that a stringed musical instrument falls on the floor. Suppose that the
length of the string is 1 and ๐‘Ž = 1. When the musical instrument hits the ground the string was in
rest position and hence ๐‘ฆ(๐‘ฅ, 0) = 0. However, the string was moving at some velocity at impact
(๐‘ก = 0), say ๐‘ฆ๐‘ก (๐‘ฅ, 0) = −1. Find the solution ๐‘ฆ(๐‘ฅ, ๐‘ก) for the shape of the string at time ๐‘ก.
Exercise 4.7.7 (challenging): Suppose that you have a vibrating string and that there is air
resistance proportional to the velocity. That is, you have
๐‘ฆ๐‘ก๐‘ก = ๐‘Ž 2 ๐‘ฆ ๐‘ฅ๐‘ฅ − ๐‘˜ ๐‘ฆ๐‘ก ,
๐‘ฆ(0, ๐‘ก) = ๐‘ฆ(1, ๐‘ก) = 0,
๐‘ฆ(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ)
๐‘ฆ๐‘ก (๐‘ฅ, 0) = 0
for 0 < ๐‘ฅ < 1,
for 0 < ๐‘ฅ < 1.
Suppose that 0 < ๐‘˜ < 2๐œ‹๐‘Ž. Derive a series solution to the problem. Any coefficients in the series
should be expressed as integrals of ๐‘“ (๐‘ฅ).
Exercise 4.7.8: Suppose you touch the guitar string exactly in the middle to ensure another condition
๐‘ข(๐ฟ/2, ๐‘ก) = 0 for all time. Which multiples of the fundamental frequency ๐œ‹๐‘Ž/๐ฟ show up in the solution?
Exercise 4.7.101: Solve
๐‘ฆ๐‘ก๐‘ก = ๐‘ฆ ๐‘ฅ๐‘ฅ ,
๐‘ฆ(0, ๐‘ก) = ๐‘ฆ(๐œ‹, ๐‘ก) = 0,
๐‘ฆ(๐‘ฅ, 0) = sin(๐‘ฅ)
๐‘ฆ๐‘ก (๐‘ฅ, 0) = sin(๐‘ฅ)
for 0 < ๐‘ฅ < ๐œ‹,
for 0 < ๐‘ฅ < ๐œ‹.
Exercise 4.7.102: Solve
๐‘ฆ๐‘ก๐‘ก = 25๐‘ฆ ๐‘ฅ๐‘ฅ ,
๐‘ฆ(0, ๐‘ก) = ๐‘ฆ(2, ๐‘ก) = 0,
๐‘ฆ(๐‘ฅ, 0) = 0
๐‘ฆ๐‘ก (๐‘ฅ, 0) = sin(๐œ‹๐‘ก) + 0.1 sin(2๐œ‹๐‘ก)
for 0 < ๐‘ฅ < 2,
for 0 < ๐‘ฅ < 2.
Exercise 4.7.103: Solve
๐‘ฆ๐‘ก๐‘ก = 2๐‘ฆ ๐‘ฅ๐‘ฅ ,
๐‘ฆ(0, ๐‘ก) = ๐‘ฆ(๐œ‹, ๐‘ก) = 0,
๐‘ฆ(๐‘ฅ, 0) = ๐‘ฅ
๐‘ฆ๐‘ก (๐‘ฅ, 0) = 0
for 0 < ๐‘ฅ < ๐œ‹,
for 0 < ๐‘ฅ < ๐œ‹.
Exercise 4.7.104: Let’s see what happens when ๐‘Ž = 0. Find a solution to ๐‘ฆ๐‘ก๐‘ก = 0, ๐‘ฆ(0, ๐‘ก) =
๐‘ฆ(๐œ‹, ๐‘ก) = 0, ๐‘ฆ(๐‘ฅ, 0) = sin(2๐‘ฅ), ๐‘ฆ๐‘ก (๐‘ฅ, 0) = sin(๐‘ฅ).
4.8 D’Alembert solution of the wave equation
Note: 1 lecture, different from §9.6 in [EP], part of §10.7 in [BD]
We have solved the wave equation by using Fourier series. But it is often more convenient
to use the so-called d’Alembert solution to the wave equation, named after the French
mathematician Jean le Rond d’Alembert (1717–1783). While this solution can be
derived using Fourier series as well, it is really an awkward use of those concepts. It is
easier and more instructive to derive this solution by making a correct change of variables
to get an equation that can be solved by simple integration.
Suppose we wish to solve the wave equation

๐‘ฆ๐‘ก๐‘ก = ๐‘Ž²๐‘ฆ๐‘ฅ๐‘ฅ  (4.15)

subject to the side conditions

๐‘ฆ(0, ๐‘ก) = ๐‘ฆ(๐ฟ, ๐‘ก) = 0 for all ๐‘ก,  ๐‘ฆ(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ) for 0 < ๐‘ฅ < ๐ฟ,  ๐‘ฆ๐‘ก (๐‘ฅ, 0) = ๐‘”(๐‘ฅ) for 0 < ๐‘ฅ < ๐ฟ.  (4.16)

4.8.1 Change of variables
We will transform the equation into a simpler form where it can be solved by simple
integration. We change variables to ๐œ‰ = ๐‘ฅ − ๐‘Ž๐‘ก, ๐œ‚ = ๐‘ฅ + ๐‘Ž๐‘ก. The chain rule says:

∂/∂๐‘ฅ = (∂๐œ‰/∂๐‘ฅ) ∂/∂๐œ‰ + (∂๐œ‚/∂๐‘ฅ) ∂/∂๐œ‚ = ∂/∂๐œ‰ + ∂/∂๐œ‚,
∂/∂๐‘ก = (∂๐œ‰/∂๐‘ก) ∂/∂๐œ‰ + (∂๐œ‚/∂๐‘ก) ∂/∂๐œ‚ = −๐‘Ž ∂/∂๐œ‰ + ๐‘Ž ∂/∂๐œ‚.

We compute

๐‘ฆ๐‘ฅ๐‘ฅ = ∂²๐‘ฆ/∂๐‘ฅ² = (∂/∂๐œ‰ + ∂/∂๐œ‚)(∂๐‘ฆ/∂๐œ‰ + ∂๐‘ฆ/∂๐œ‚) = ∂²๐‘ฆ/∂๐œ‰² + 2 ∂²๐‘ฆ/∂๐œ‰∂๐œ‚ + ∂²๐‘ฆ/∂๐œ‚²,
๐‘ฆ๐‘ก๐‘ก = ∂²๐‘ฆ/∂๐‘ก² = (−๐‘Ž ∂/∂๐œ‰ + ๐‘Ž ∂/∂๐œ‚)(−๐‘Ž ∂๐‘ฆ/∂๐œ‰ + ๐‘Ž ∂๐‘ฆ/∂๐œ‚) = ๐‘Ž² ∂²๐‘ฆ/∂๐œ‰² − 2๐‘Ž² ∂²๐‘ฆ/∂๐œ‰∂๐œ‚ + ๐‘Ž² ∂²๐‘ฆ/∂๐œ‚².

In the computations above, we used the fact from calculus that ∂²๐‘ฆ/∂๐œ‰∂๐œ‚ = ∂²๐‘ฆ/∂๐œ‚∂๐œ‰. We
plug what we got into the wave equation,

0 = ๐‘Ž²๐‘ฆ๐‘ฅ๐‘ฅ − ๐‘ฆ๐‘ก๐‘ก = 4๐‘Ž² ∂²๐‘ฆ/∂๐œ‰∂๐œ‚ = 4๐‘Ž² ๐‘ฆ๐œ‰๐œ‚ .
Therefore, the wave equation (4.15) transforms into ๐‘ฆ๐œ‰๐œ‚ = 0. It is easy to find the general
solution to this equation by integrating twice. Keeping ๐œ‰ constant, we integrate with respect
to ๐œ‚ first (there is nothing special about ๐œ‚; you can integrate with ๐œ‰ first, if you wish) and
notice that the constant of integration depends on ๐œ‰; for each ๐œ‰ we might get a different
constant of integration. We get ๐‘ฆ๐œ‰ = ๐ถ(๐œ‰). Next, we integrate with respect to ๐œ‰ and notice
that the constant of integration depends on ๐œ‚. Thus, ๐‘ฆ = ∫ ๐ถ(๐œ‰) ๐‘‘๐œ‰ + ๐ต(๐œ‚). The solution
must, therefore, be of the following form for some functions ๐ด(๐œ‰) and ๐ต(๐œ‚):
๐‘ฆ = ๐ด(๐œ‰) + ๐ต(๐œ‚) = ๐ด(๐‘ฅ − ๐‘Ž๐‘ก) + ๐ต(๐‘ฅ + ๐‘Ž๐‘ก).
The solution is a superposition of two functions (waves) traveling at speed ๐‘Ž in opposite
directions. The coordinates ๐œ‰ and ๐œ‚ are called the characteristic coordinates, and a similar
technique can be applied to more complicated hyperbolic PDE. And in fact, in § 1.9 it is
used to solve first order linear PDE. Basically, to solve the wave equation (or more general
hyperbolic equations) we find certain characteristic curves along which the equation is
really just an ODE, or a pair of ODEs. In this case these are the curves where ๐œ‰ and ๐œ‚ are
constant.
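That any such superposition solves the wave equation can be verified symbolically; a quick sketch using SymPy (assumed available):

import sympy as sp

x, t, a = sp.symbols('x t a')
A = sp.Function('A')
B = sp.Function('B')
y = A(x - a * t) + B(x + a * t)
# y_tt - a^2 y_xx should simplify to zero for arbitrary A and B:
print(sp.simplify(sp.diff(y, t, 2) - a ** 2 * sp.diff(y, x, 2)))  # prints 0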
4.8.2 D’Alembert’s formula
We know what any solution must look like, but we need to solve for the given side
conditions. We will just give the formula and see that it works. First let ๐น(๐‘ฅ) denote the odd
periodic extension of ๐‘“ (๐‘ฅ), and let ๐บ(๐‘ฅ) denote the odd periodic extension of ๐‘”(๐‘ฅ). Define
๐ด(๐‘ฅ) = (1/2) ๐น(๐‘ฅ) − (1/(2๐‘Ž)) ∫_0^๐‘ฅ ๐บ(๐‘ ) ๐‘‘๐‘ ,  ๐ต(๐‘ฅ) = (1/2) ๐น(๐‘ฅ) + (1/(2๐‘Ž)) ∫_0^๐‘ฅ ๐บ(๐‘ ) ๐‘‘๐‘ .
We claim this ๐ด(๐‘ฅ) and ๐ต(๐‘ฅ) give the solution. Explicitly, the solution is ๐‘ฆ(๐‘ฅ, ๐‘ก) = ๐ด(๐‘ฅ −
๐‘Ž๐‘ก) + ๐ต(๐‘ฅ + ๐‘Ž๐‘ก) or in other words:
1
1
๐‘ฆ(๐‘ฅ, ๐‘ก) = ๐น(๐‘ฅ − ๐‘Ž๐‘ก) −
2
2๐‘Ž
∫
0
๐‘ฅ−๐‘Ž๐‘ก
1
1
๐บ(๐‘ ) ๐‘‘๐‘  + ๐น(๐‘ฅ + ๐‘Ž๐‘ก) +
2
2๐‘Ž
1
๐น(๐‘ฅ − ๐‘Ž๐‘ก) + ๐น(๐‘ฅ + ๐‘Ž๐‘ก)
+
=
2
2๐‘Ž
∫
๐‘ฅ+๐‘Ž๐‘ก
∫
๐‘ฅ+๐‘Ž๐‘ก
๐บ(๐‘ ) ๐‘‘๐‘ 
0
(4.17)
๐บ(๐‘ ) ๐‘‘๐‘ .
๐‘ฅ−๐‘Ž๐‘ก
Let us check that the d’Alembert formula really works:

๐‘ฆ(๐‘ฅ, 0) = (๐น(๐‘ฅ) + ๐น(๐‘ฅ))/2 + (1/(2๐‘Ž)) ∫_๐‘ฅ^๐‘ฅ ๐บ(๐‘ ) ๐‘‘๐‘  = ๐น(๐‘ฅ).
So far so good. Assume for simplicity ๐น is differentiable. And we use the first form of (4.17)
as it is easier to differentiate. By the fundamental theorem of calculus we have

๐‘ฆ๐‘ก (๐‘ฅ, ๐‘ก) = (−๐‘Ž/2) ๐น′(๐‘ฅ − ๐‘Ž๐‘ก) + (1/2) ๐บ(๐‘ฅ − ๐‘Ž๐‘ก) + (๐‘Ž/2) ๐น′(๐‘ฅ + ๐‘Ž๐‘ก) + (1/2) ๐บ(๐‘ฅ + ๐‘Ž๐‘ก).

So

๐‘ฆ๐‘ก (๐‘ฅ, 0) = (−๐‘Ž/2) ๐น′(๐‘ฅ) + (1/2) ๐บ(๐‘ฅ) + (๐‘Ž/2) ๐น′(๐‘ฅ) + (1/2) ๐บ(๐‘ฅ) = ๐บ(๐‘ฅ).
Yay! We’re smoking now. OK, now the boundary conditions. Note that ๐น(๐‘ฅ) and ๐บ(๐‘ฅ) are
odd. So

๐‘ฆ(0, ๐‘ก) = (๐น(−๐‘Ž๐‘ก) + ๐น(๐‘Ž๐‘ก))/2 + (1/(2๐‘Ž)) ∫_(−๐‘Ž๐‘ก)^(๐‘Ž๐‘ก) ๐บ(๐‘ ) ๐‘‘๐‘ 
       = (−๐น(๐‘Ž๐‘ก) + ๐น(๐‘Ž๐‘ก))/2 + (1/(2๐‘Ž)) ∫_(−๐‘Ž๐‘ก)^(๐‘Ž๐‘ก) ๐บ(๐‘ ) ๐‘‘๐‘  = 0 + 0 = 0.
Now ๐น(๐‘ฅ) is odd and 2๐ฟ-periodic, so
๐น(๐ฟ − ๐‘Ž๐‘ก) + ๐น(๐ฟ + ๐‘Ž๐‘ก) = ๐น(−๐ฟ − ๐‘Ž๐‘ก) + ๐น(๐ฟ + ๐‘Ž๐‘ก) = −๐น(๐ฟ + ๐‘Ž๐‘ก) + ๐น(๐ฟ + ๐‘Ž๐‘ก) = 0.
Next, ๐บ(๐‘ ) is odd and 2๐ฟ-periodic, so we change variables ๐‘ฃ = ๐‘  − ๐ฟ. We then notice that
๐บ(๐‘ฃ + ๐ฟ) = ๐บ(๐‘ฃ − ๐ฟ) = −๐บ(−๐‘ฃ + ๐ฟ), so ๐บ(๐‘ฃ + ๐ฟ) is odd as a function of ๐‘ฃ:
∫
๐ฟ+๐‘Ž๐‘ก
๐บ(๐‘ ) ๐‘‘๐‘  =
๐ฟ−๐‘Ž๐‘ก
∫
๐‘Ž๐‘ก
๐บ(๐‘ฃ + ๐ฟ) ๐‘‘๐‘ฃ = 0.
−๐‘Ž๐‘ก
Hence

๐‘ฆ(๐ฟ, ๐‘ก) = (๐น(๐ฟ − ๐‘Ž๐‘ก) + ๐น(๐ฟ + ๐‘Ž๐‘ก))/2 + (1/(2๐‘Ž)) ∫_(๐ฟ−๐‘Ž๐‘ก)^(๐ฟ+๐‘Ž๐‘ก) ๐บ(๐‘ ) ๐‘‘๐‘  = 0 + 0 = 0.

And voilà, it works.
Example 4.8.1: D’Alembert says that the solution is a superposition of two functions
(waves) moving in the opposite direction at “speed” ๐‘Ž. To get an idea of how it works, let
us work out an example. Consider the simpler setup
๐‘ฆ๐‘ก๐‘ก = ๐‘ฆ ๐‘ฅ๐‘ฅ ,
๐‘ฆ(0, ๐‘ก) = ๐‘ฆ(1, ๐‘ก) = 0,
๐‘ฆ(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ),
๐‘ฆ๐‘ก (๐‘ฅ, 0) = 0.
Here ๐‘“ (๐‘ฅ) is an impulse of height 1 centered at ๐‘ฅ = 0.5:
๐‘“ (๐‘ฅ) = { 0  if 0 ≤ ๐‘ฅ < 0.45,
         20 (๐‘ฅ − 0.45)  if 0.45 ≤ ๐‘ฅ < 0.5,
         20 (0.55 − ๐‘ฅ)  if 0.5 ≤ ๐‘ฅ < 0.55,
         0  if 0.55 ≤ ๐‘ฅ ≤ 1. }
The graph of this impulse is the top left plot in Figure 4.24 on the next page.
Let ๐น(๐‘ฅ) be the odd periodic extension of ๐‘“ (๐‘ฅ). Then (4.17) says that the solution is
๐‘ฆ(๐‘ฅ, ๐‘ก) =
๐น(๐‘ฅ − ๐‘ก) + ๐น(๐‘ฅ + ๐‘ก)
.
2
It is not hard to compute specific values of ๐‘ฆ(๐‘ฅ, ๐‘ก). For example, to compute ๐‘ฆ(0.1, 0.6)
we notice ๐‘ฅ − ๐‘ก = −0.5 and ๐‘ฅ + ๐‘ก = 0.7. Now ๐น(−0.5) = −๐‘“ (0.5) = −20 (0.55 − 0.5) = −1
and ๐น(0.7) = ๐‘“ (0.7) = 0. Hence ๐‘ฆ(0.1, 0.6) = (−1 + 0)/2 = −0.5. As you can see the d’Alembert
solution is much easier to actually compute and to plot than the Fourier series solution.
See Figure 4.24 for plots of the solution ๐‘ฆ for several different ๐‘ก.
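This computation is easy to automate; a minimal sketch (NumPy assumed; the helper names and the reduction to [−๐ฟ, ๐ฟ) used to build the odd periodic extension are our own choices):

import numpy as np

L = 1.0

def f(x):
    # The impulse from the example.
    return np.where(x < 0.45, 0.0,
           np.where(x < 0.5, 20 * (x - 0.45),
           np.where(x < 0.55, 20 * (0.55 - x), 0.0)))

def F(x):
    # Odd periodic extension of f: reduce to [-L, L), then use oddness.
    x = (x + L) % (2 * L) - L
    return np.where(x >= 0, f(x), -f(-x))

def y(x, t):
    return (F(x - t) + F(x + t)) / 2  # a = 1 here

print(y(0.1, 0.6))  # -0.5, as computed above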
Figure 4.24: Plot of the d’Alembert solution for ๐‘ก = 0, ๐‘ก = 0.2, ๐‘ก = 0.4, and ๐‘ก = 0.6.
4.8.3 Another way to solve for the side conditions
It is perhaps easier and more useful to memorize the procedure rather than the formula
itself. The important thing to remember is that a solution to the wave equation is a
superposition of two waves traveling in opposite directions. That is,
๐‘ฆ(๐‘ฅ, ๐‘ก) = ๐ด(๐‘ฅ − ๐‘Ž๐‘ก) + ๐ต(๐‘ฅ + ๐‘Ž๐‘ก).
If you think about it, the exact formulas for ๐ด and ๐ต are not hard to guess once you realize
what kind of side conditions ๐‘ฆ(๐‘ฅ, ๐‘ก) is supposed to satisfy. Let us find the formula again,
but slightly differently. Best approach is to do it in stages. When ๐‘”(๐‘ฅ) = 0 (and hence
๐บ(๐‘ฅ) = 0) the solution is
๐น(๐‘ฅ − ๐‘Ž๐‘ก) + ๐น(๐‘ฅ + ๐‘Ž๐‘ก)
.
2
On the other hand, when ๐‘“ (๐‘ฅ) = 0 (and hence ๐น(๐‘ฅ) = 0), we let
๐ป(๐‘ฅ) =
∫
0
๐‘ฅ
๐บ(๐‘ ) ๐‘‘๐‘ .
The solution in this case is

(1/(2๐‘Ž)) ∫_(๐‘ฅ−๐‘Ž๐‘ก)^(๐‘ฅ+๐‘Ž๐‘ก) ๐บ(๐‘ ) ๐‘‘๐‘  = (−๐ป(๐‘ฅ − ๐‘Ž๐‘ก) + ๐ป(๐‘ฅ + ๐‘Ž๐‘ก))/(2๐‘Ž).

By superposition we get a solution for the general side conditions (4.16) (when neither ๐‘“ (๐‘ฅ)
nor ๐‘”(๐‘ฅ) are identically zero):

๐‘ฆ(๐‘ฅ, ๐‘ก) = (๐น(๐‘ฅ − ๐‘Ž๐‘ก) + ๐น(๐‘ฅ + ๐‘Ž๐‘ก))/2 + (−๐ป(๐‘ฅ − ๐‘Ž๐‘ก) + ๐ป(๐‘ฅ + ๐‘Ž๐‘ก))/(2๐‘Ž).  (4.18)
Do note the minus sign before the ๐ป, and the ๐‘Ž in the second denominator.
Exercise 4.8.1: Check that the new formula (4.18) satisfies the side conditions (4.16).
Warning: Make sure you use the odd periodic extensions ๐น(๐‘ฅ) and ๐บ(๐‘ฅ), when you
have formulas for ๐‘“ (๐‘ฅ) and ๐‘”(๐‘ฅ). The thing is, those formulas in general hold only for
0 < ๐‘ฅ < ๐ฟ, and are not usually equal to ๐น(๐‘ฅ) and ๐บ(๐‘ฅ) for other ๐‘ฅ.
4.8.4 Some remarks
Let us remark that the formula ๐‘ฆ(๐‘ฅ, ๐‘ก) = ๐ด(๐‘ฅ − ๐‘Ž๐‘ก) + ๐ต(๐‘ฅ + ๐‘Ž๐‘ก) is the reason why the solution
of the wave equation doesn’t get “nicer” as time goes on, that is, why in the examples
where the initial conditions had corners, the solution also has corners at every time ๐‘ก.
The corners bring us to another interesting remark. Nobody ever notices at first that
our example solutions are not even differentiable (they have corners): In Example 4.8.1
above, the solution is not differentiable whenever ๐‘ฅ = ๐‘ก + 0.5 or ๐‘ฅ = −๐‘ก + 0.5 for example.
Really to be able to compute ๐‘ฆ๐‘ฅ๐‘ฅ or ๐‘ฆ๐‘ก๐‘ก , you need not one, but two derivatives. Fear not, we
could think of a shape that is very nearly ๐น(๐‘ฅ) but does have two derivatives by rounding
the corners a little bit, and then the solution would be very nearly (๐น(๐‘ฅ − ๐‘ก) + ๐น(๐‘ฅ + ๐‘ก))/2 and
nobody would notice the switch.
One final remark is what the d’Alembert solution tells us about what part of the initial
conditions influence the solution at a certain point. We can figure this out by “traveling
backwards along the characteristics.” Let us suppose that the string is very long (perhaps
infinite) for simplicity. Since the solution at time ๐‘ก is

๐‘ฆ(๐‘ฅ, ๐‘ก) = (๐น(๐‘ฅ − ๐‘Ž๐‘ก) + ๐น(๐‘ฅ + ๐‘Ž๐‘ก))/2 + (1/(2๐‘Ž)) ∫_(๐‘ฅ−๐‘Ž๐‘ก)^(๐‘ฅ+๐‘Ž๐‘ก) ๐บ(๐‘ ) ๐‘‘๐‘ ,
we notice that we have only used the initial conditions in the interval [๐‘ฅ − ๐‘Ž๐‘ก, ๐‘ฅ + ๐‘Ž๐‘ก]. These
two endpoints are called the wavefronts, as that is where the front of the wave is, given an
initial (๐‘ก = 0) disturbance at ๐‘ฅ. So if ๐‘Ž = 1, an observer sitting at ๐‘ฅ = 0 at time ๐‘ก = 1 has only seen
the initial conditions for ๐‘ฅ in the range [−1, 1] and is blissfully unaware of anything else.
This is why for example we do not know that a supernova has occurred in the universe
until we see its light, millions of years from the time when it did in fact happen.
4.8.5 Exercises
Exercise 4.8.2: Using the d’Alembert solution solve ๐‘ฆ๐‘ก๐‘ก = 4๐‘ฆ ๐‘ฅ๐‘ฅ , 0 < ๐‘ฅ < ๐œ‹, ๐‘ก > 0, ๐‘ฆ(0, ๐‘ก) =
๐‘ฆ(๐œ‹, ๐‘ก) = 0, ๐‘ฆ(๐‘ฅ, 0) = sin ๐‘ฅ, and ๐‘ฆ๐‘ก (๐‘ฅ, 0) = sin ๐‘ฅ. Hint: Note that sin ๐‘ฅ is the odd periodic
extension of ๐‘ฆ(๐‘ฅ, 0) and ๐‘ฆ๐‘ก (๐‘ฅ, 0).
Exercise 4.8.3: Using the d’Alembert solution solve ๐‘ฆ๐‘ก๐‘ก = 2๐‘ฆ๐‘ฅ๐‘ฅ , 0 < ๐‘ฅ < 1, ๐‘ก > 0, ๐‘ฆ(0, ๐‘ก) =
๐‘ฆ(1, ๐‘ก) = 0, ๐‘ฆ(๐‘ฅ, 0) = sin⁵(๐œ‹๐‘ฅ), and ๐‘ฆ๐‘ก (๐‘ฅ, 0) = sin³(๐œ‹๐‘ฅ).
Exercise 4.8.4: Take ๐‘ฆ๐‘ก๐‘ก = 4๐‘ฆ ๐‘ฅ๐‘ฅ , 0 < ๐‘ฅ < ๐œ‹, ๐‘ก > 0, ๐‘ฆ(0, ๐‘ก) = ๐‘ฆ(๐œ‹, ๐‘ก) = 0, ๐‘ฆ(๐‘ฅ, 0) = ๐‘ฅ(๐œ‹ − ๐‘ฅ), and
๐‘ฆ๐‘ก (๐‘ฅ, 0) = 0.
a) Solve using the d’Alembert formula. Hint: You can use the sine series for ๐‘ฆ(๐‘ฅ, 0).
b) Find the solution as a function of ๐‘ฅ for a fixed ๐‘ก = 0.5, ๐‘ก = 1, and ๐‘ก = 2. Do not use the sine
series here.
Exercise 4.8.5: Derive the d’Alembert solution for ๐‘ฆ๐‘ก๐‘ก = ๐‘Ž²๐‘ฆ๐‘ฅ๐‘ฅ , 0 < ๐‘ฅ < ๐œ‹, ๐‘ก > 0, ๐‘ฆ(0, ๐‘ก) =
๐‘ฆ(๐œ‹, ๐‘ก) = 0, ๐‘ฆ(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ), and ๐‘ฆ๐‘ก (๐‘ฅ, 0) = 0, using the Fourier series solution of the wave equation,
by applying an appropriate trigonometric identity. Hint: Do it first for a single term of the Fourier
series solution, in particular do it when ๐‘ฆ is sin(๐‘›๐œ‹๐‘ฅ/๐ฟ) sin(๐‘›๐œ‹๐‘Ž๐‘ก/๐ฟ).
Exercise 4.8.6: The d’Alembert solution still works if there are no boundary conditions and the
initial condition is defined on the whole real line. Suppose that ๐‘ฆ๐‘ก๐‘ก = ๐‘ฆ ๐‘ฅ๐‘ฅ (for all ๐‘ฅ on the real line
and ๐‘ก ≥ 0), ๐‘ฆ(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ), and ๐‘ฆ๐‘ก (๐‘ฅ, 0) = 0, where
๐‘“ (๐‘ฅ) = { 0  if ๐‘ฅ < −1,
         ๐‘ฅ + 1  if −1 ≤ ๐‘ฅ < 0,
         −๐‘ฅ + 1  if 0 ≤ ๐‘ฅ < 1,
         0  if 1 < ๐‘ฅ. }
Solve using the d’Alembert solution. That is, write down a piecewise definition for the solution.
Then sketch the solution for ๐‘ก = 0, ๐‘ก = 1/2, ๐‘ก = 1, and ๐‘ก = 2.
Exercise 4.8.101: Using the d’Alembert solution solve ๐‘ฆ๐‘ก๐‘ก = 9๐‘ฆ ๐‘ฅ๐‘ฅ , 0 < ๐‘ฅ < 1, ๐‘ก > 0, ๐‘ฆ(0, ๐‘ก) =
๐‘ฆ(1, ๐‘ก) = 0, ๐‘ฆ(๐‘ฅ, 0) = sin(2๐œ‹๐‘ฅ), and ๐‘ฆ๐‘ก (๐‘ฅ, 0) = sin(3๐œ‹๐‘ฅ).
Exercise 4.8.102: Take ๐‘ฆ๐‘ก๐‘ก = 4๐‘ฆ๐‘ฅ๐‘ฅ , 0 < ๐‘ฅ < 1, ๐‘ก > 0, ๐‘ฆ(0, ๐‘ก) = ๐‘ฆ(1, ๐‘ก) = 0, ๐‘ฆ(๐‘ฅ, 0) = ๐‘ฅ − ๐‘ฅ², and
๐‘ฆ๐‘ก (๐‘ฅ, 0) = 0. Using the d’Alembert solution find the solution at
a) ๐‘ก = 0.1,
b) ๐‘ก = 1/2,
c) ๐‘ก = 1.
You may have to split your answer up by cases.
Exercise 4.8.103: Take ๐‘ฆ๐‘ก๐‘ก = 100๐‘ฆ ๐‘ฅ๐‘ฅ , 0 < ๐‘ฅ < 4, ๐‘ก > 0, ๐‘ฆ(0, ๐‘ก) = ๐‘ฆ(4, ๐‘ก) = 0, ๐‘ฆ(๐‘ฅ, 0) = ๐น(๐‘ฅ), and
๐‘ฆ๐‘ก (๐‘ฅ, 0) = 0. Suppose that ๐น(0) = 0, ๐น(1) = 2, ๐น(2) = 3, ๐น(3) = 1. Using the d’Alembert solution
find
a) ๐‘ฆ(1, 1),
b) ๐‘ฆ(4, 3),
c) ๐‘ฆ(3, 9).
4.9 Steady state temperature and the Laplacian
Note: 1 lecture, §9.7 in [EP], §10.8 in [BD]
Consider an insulated wire, a plate, or a 3-dimensional object. We apply certain
fixed temperatures on the ends of the wire, the edges of the plate, or on all sides of the
3-dimensional object. We wish to find out what the steady state temperature distribution is.
That is, we wish to know what the temperature will be after a long enough period of time.
We are really looking for a solution to the heat equation that is not dependent on time.
Let us first solve the problem in one space variable. We are looking for a function ๐‘ข that
satisfies
๐‘ข๐‘ก = ๐‘˜๐‘ข๐‘ฅ๐‘ฅ ,
but such that ๐‘ข๐‘ก = 0 for all ๐‘ฅ and ๐‘ก. Hence, we are looking for a function of ๐‘ฅ alone that
satisfies ๐‘ข๐‘ฅ๐‘ฅ = 0. It is easy to solve this equation by integration and we see that ๐‘ข = ๐ด๐‘ฅ + ๐ต
for some constants ๐ด and ๐ต.
Consider an insulated wire where we apply constant temperature ๐‘‡1 at one end (say
where ๐‘ฅ = 0) and ๐‘‡2 on the other end (at ๐‘ฅ = ๐ฟ where ๐ฟ is the length of the wire). Our
steady state solution is

๐‘ข(๐‘ฅ) = ((๐‘‡2 − ๐‘‡1)/๐ฟ) ๐‘ฅ + ๐‘‡1 .
This solution agrees with our common sense intuition with how the heat should be
distributed in the wire. So in one dimension, the steady state solutions are basically just
straight lines.
Things are more complicated in two or more space dimensions. Let us restrict to two
space dimensions for simplicity. The heat equation in two space variables is
๐‘ข๐‘ก = ๐‘˜(๐‘ข๐‘ฅ๐‘ฅ + ๐‘ข ๐‘ฆ ๐‘ฆ ),
(4.19)
or more commonly written as ๐‘ข๐‘ก = ๐‘˜Δ๐‘ข or ๐‘ข๐‘ก = ๐‘˜∇²๐‘ข. Here the Δ and ∇² symbols mean
∂²/∂๐‘ฅ² + ∂²/∂๐‘ฆ². We will use Δ from now on. The reason for using such a notation is that you
can define Δ to be the right thing for any number of space dimensions and then the heat
equation is always ๐‘ข๐‘ก = ๐‘˜Δ๐‘ข. The operator Δ is called the Laplacian.
OK, now that we have notation out of the way, let us see what an equation for the
steady state solution looks like. We are looking for a solution to (4.19) that does not depend
on ๐‘ก, or in other words ๐‘ข๐‘ก = 0. Hence we are looking for a function ๐‘ข(๐‘ฅ, ๐‘ฆ) such that
Δ๐‘ข = ๐‘ข๐‘ฅ๐‘ฅ + ๐‘ข ๐‘ฆ ๐‘ฆ = 0.
This equation is called the Laplace equation (named after the French mathematician
Pierre-Simon, marquis de Laplace, 1749–1827), and is an example of an elliptic equation.
Solutions to the Laplace equation are called harmonic functions and have many nice
properties and applications far beyond the steady state heat problem.
Harmonic functions in two variables are no longer just linear (plane graphs). For
example, you can check that the functions ๐‘ฅ 2 − ๐‘ฆ 2 and ๐‘ฅ๐‘ฆ are harmonic. However, note
that if ๐‘ข๐‘ฅ๐‘ฅ is positive, ๐‘ข is concave up in the ๐‘ฅ direction, then ๐‘ข ๐‘ฆ ๐‘ฆ must be negative and ๐‘ข
must be concave down in the ๐‘ฆ direction. A harmonic function can never have any “hilltop”
or “valley” on the graph. This observation is consistent with our intuitive idea of steady
state heat distribution; the hottest or coldest spot will not be inside.
Commonly the Laplace equation is part of a so-called Dirichlet problem (named after the
German mathematician Johann Peter Gustav Lejeune Dirichlet, 1805–1859). That is, we
have a region in the ๐‘ฅ๐‘ฆ-plane and we specify certain values along the boundaries of the
region. We then try to find a solution ๐‘ข to the Laplace equation defined on this region such
that ๐‘ข agrees with the values we specified on the boundary.
In this section we consider a rectangular region. For simplicity we specify boundary
values to be zero at 3 of the four edges and only specify an arbitrary function at one edge.
As we still have the principle of superposition, we can use this simpler solution to derive
the general solution for arbitrary boundary values by solving 4 different problems, one for
each edge, and adding those solutions together. This setup is left as an exercise.
We wish to solve the following problem. Let โ„Ž and ๐‘ค be the height and width of our
rectangle, with one corner at the origin and lying in the first quadrant.
Δ๐‘ข = 0,
(0, โ„Ž)
๐‘ข=0
๐‘ข=0
Δ๐‘ข = 0
(0, 0)
๐‘ข = ๐‘“ (๐‘ฅ)
(4.20)
๐‘ข(0, ๐‘ฆ) = 0
for 0 < ๐‘ฆ < โ„Ž, (4.21)
๐‘ข(๐‘ฅ, โ„Ž) = 0
for 0 < ๐‘ฅ < ๐‘ค, (4.22)
๐‘ข(๐‘ค, ๐‘ฆ) = 0
for 0 < ๐‘ฆ < โ„Ž, (4.23)
(๐‘ค, โ„Ž)
๐‘ข=0
๐‘ข(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ) for 0 < ๐‘ฅ < ๐‘ค. (4.24)
(๐‘ค, 0)
The method we apply is separation of variables. Again, we will come up with
enough building-block solutions satisfying all the homogeneous boundary conditions (all
conditions except (4.24)). We notice that superposition still works for the equation and all
the homogeneous conditions. Therefore, we can use the Fourier series for ๐‘“ (๐‘ฅ) to solve the
problem as before.
We try ๐‘ข(๐‘ฅ, ๐‘ฆ) = ๐‘‹(๐‘ฅ)๐‘Œ(๐‘ฆ). We plug ๐‘ข into the equation to get
๐‘‹ 00๐‘Œ + ๐‘‹๐‘Œ 00 = 0.
We put the ๐‘‹s on one side and the ๐‘Œs on the other to get
−
∗ Named
๐‘‹ 00 ๐‘Œ 00
=
.
๐‘‹
๐‘Œ
after the German mathematician Johann Peter Gustav Lejeune Dirichlet (1805–1859).
The left-hand side only depends on ๐‘ฅ and the right-hand side only depends on ๐‘ฆ. Therefore,
there is some constant ๐œ† such that ๐œ† = −๐‘‹″/๐‘‹ = ๐‘Œ″/๐‘Œ. And we get two equations

๐‘‹″ + ๐œ†๐‘‹ = 0,  ๐‘Œ″ − ๐œ†๐‘Œ = 0.
Furthermore, the homogeneous boundary conditions imply that ๐‘‹(0) = ๐‘‹(๐‘ค) = 0 and
๐‘Œ(โ„Ž) = 0. Taking the equation for ๐‘‹, we have already seen that we have a nontrivial solution
if and only if ๐œ† = ๐œ†๐‘› = ๐‘›²๐œ‹²/๐‘ค² and the solution is a multiple of

๐‘‹๐‘› (๐‘ฅ) = sin(๐‘›๐œ‹๐‘ฅ/๐‘ค).

For these given ๐œ†๐‘› , the general solution for ๐‘Œ (one for each ๐‘›) is

๐‘Œ๐‘› (๐‘ฆ) = ๐ด๐‘› cosh(๐‘›๐œ‹๐‘ฆ/๐‘ค) + ๐ต๐‘› sinh(๐‘›๐œ‹๐‘ฆ/๐‘ค).  (4.25)

There is only one condition on ๐‘Œ๐‘› and hence we can pick one of ๐ด๐‘› or ๐ต๐‘› to be something
convenient. It will be useful to have ๐‘Œ๐‘› (0) = 1, so let ๐ด๐‘› = 1. Setting ๐‘Œ๐‘› (โ„Ž) = 0 and solving
for ๐ต๐‘› , we get

๐ต๐‘› = −cosh(๐‘›๐œ‹โ„Ž/๐‘ค) / sinh(๐‘›๐œ‹โ„Ž/๐‘ค).

After we plug ๐ด๐‘› and ๐ต๐‘› into (4.25) and simplify by using the identity sinh(๐›ผ − ๐›ฝ) =
sinh(๐›ผ) cosh(๐›ฝ) − cosh(๐›ผ) sinh(๐›ฝ), we find

๐‘Œ๐‘› (๐‘ฆ) = sinh(๐‘›๐œ‹(โ„Ž − ๐‘ฆ)/๐‘ค) / sinh(๐‘›๐œ‹โ„Ž/๐‘ค).
We define ๐‘ข๐‘› (๐‘ฅ, ๐‘ฆ) = ๐‘‹๐‘› (๐‘ฅ)๐‘Œ๐‘› (๐‘ฆ). And note that ๐‘ข๐‘› satisfies (4.20)–(4.23). Observe
๐‘ข๐‘› (๐‘ฅ, 0) = ๐‘‹๐‘› (๐‘ฅ)๐‘Œ๐‘› (0) = sin
Suppose
๐‘“ (๐‘ฅ) =
∞
Õ
๐‘ ๐‘› sin
๐‘›=1
๐‘›๐œ‹ ๐‘›๐œ‹๐‘ฅ ๐‘ค
๐‘ค
๐‘ฅ .
.
Then we get a solution of (4.20)–(4.24) of the following form:

๐‘ข(๐‘ฅ, ๐‘ฆ) = Σ_{๐‘›=1}^∞ ๐‘๐‘› ๐‘ข๐‘› (๐‘ฅ, ๐‘ฆ) = Σ_{๐‘›=1}^∞ ๐‘๐‘› sin(๐‘›๐œ‹๐‘ฅ/๐‘ค) ( sinh(๐‘›๐œ‹(โ„Ž − ๐‘ฆ)/๐‘ค) / sinh(๐‘›๐œ‹โ„Ž/๐‘ค) ).

As ๐‘ข๐‘› satisfies (4.20)–(4.23) and any linear combination (finite or infinite) of ๐‘ข๐‘› also satisfies
(4.20)–(4.23), then ๐‘ข satisfies (4.20)–(4.23). We plug in ๐‘ฆ = 0 to see that ๐‘ข satisfies (4.24) as well.
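Here is a minimal sketch (Python with NumPy assumed; the function name is ours) that evaluates this series for given sine coefficients ๐‘๐‘› of ๐‘“:

import numpy as np

def laplace_rect(x, y, b, w, h):
    # u(x,y) = sum_n b_n sin(n pi x/w) sinh(n pi (h-y)/w) / sinh(n pi h/w).
    # For very large n the sinh values overflow; for a modest truncation the
    # direct ratio below is fine (otherwise rewrite it using exponentials).
    u = 0.0
    for n in range(1, len(b) + 1):
        u += (b[n - 1] * np.sin(n * np.pi * x / w)
              * np.sinh(n * np.pi * (h - y) / w) / np.sinh(n * np.pi * h / w))
    return u

With ๐‘ค = โ„Ž = ๐œ‹ and ๐‘๐‘› = 4/๐‘› for odd ๐‘› (zero for even ๐‘›), this evaluates the example that follows.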
Example 4.9.1: Take ๐‘ค = โ„Ž = ๐œ‹ and let ๐‘“ (๐‘ฅ) = ๐œ‹. Let us compute the sine series for the
function ๐œ‹ (same as the series for the square wave). For 0 < ๐‘ฅ < ๐œ‹, we have
๐‘“ (๐‘ฅ) =
∞
Õ
4
๐‘›=1
๐‘› odd
๐‘›
sin(๐‘›๐‘ฅ).
The solution ๐‘ข(๐‘ฅ, ๐‘ฆ), see Figure 4.25, to the corresponding Dirichlet problem is given as
๐‘ข(๐‘ฅ, ๐‘ฆ) =
∞
Õ
4
๐‘›=1
๐‘› odd
0.0
๐‘›
sin(๐‘›๐‘ฅ)
sinh(๐‘›๐œ‹)
0.0
.
y
0.5
1.0
0.5
x
sinh ๐‘›(๐œ‹ − ๐‘ฆ)
1.5
1.0
2.0
2.5
1.5
3.0
u(x,y)
2.0
2.5
3.0
3.0
2.5
3.0
2.0
2.5
1.5
2.0
1.0
1.5
3.142
2.828
2.514
2.199
1.885
1.571
1.257
0.943
0.628
0.314
0.000
0.5
1.0
0.0
0.5
0.0
0.5
0.0
1.0
0.0
1.5
0.5
1.0
2.0
1.5
2.5
2.0
2.5
y
3.0
x
3.0
Figure 4.25: Steady state temperature of a square plate, three sides held at zero and one side held at ๐œ‹.
This scenario corresponds to the steady state temperature on a square plate of width ๐œ‹
with 3 sides held at 0 degrees and one side held at ๐œ‹ degrees. If we have arbitrary initial
data on all sides, then we solve four problems, each using one piece of nonhomogeneous
data. Then we use the principle of superposition to add up all four solutions to have a
solution to the original problem.
A different way to visualize solutions of the Laplace equation is to take a wire and bend
it so that it corresponds to the graph of the temperature above the boundary of your region.
Cut a rubber sheet in the shape of your region—a square in our case—and stretch it fixing
the edges of the sheet to the wire. The rubber sheet is a good approximation of the graph
of the solution to the Laplace equation with the given boundary data.
4.9.1 Exercises
Exercise 4.9.1: Let ๐‘… be the region described by 0 < ๐‘ฅ < ๐œ‹ and 0 < ๐‘ฆ < ๐œ‹. Solve the problem
Δ๐‘ข = 0,
๐‘ข(๐‘ฅ, 0) = sin ๐‘ฅ,
๐‘ข(๐‘ฅ, ๐œ‹) = 0,
๐‘ข(0, ๐‘ฆ) = 0,
๐‘ข(๐œ‹, ๐‘ฆ) = 0.
Exercise 4.9.2: Let ๐‘… be the region described by 0 < ๐‘ฅ < 1 and 0 < ๐‘ฆ < 1. Solve the problem
๐‘ข๐‘ฅ๐‘ฅ + ๐‘ข ๐‘ฆ ๐‘ฆ = 0,
๐‘ข(๐‘ฅ, 0) = sin(๐œ‹๐‘ฅ) − sin(2๐œ‹๐‘ฅ),
๐‘ข(0, ๐‘ฆ) = 0,
๐‘ข(๐‘ฅ, 1) = 0,
๐‘ข(1, ๐‘ฆ) = 0.
Exercise 4.9.3: Let ๐‘… be the region described by 0 < ๐‘ฅ < 1 and 0 < ๐‘ฆ < 1. Solve the problem
๐‘ข๐‘ฅ๐‘ฅ + ๐‘ข ๐‘ฆ ๐‘ฆ = 0,
๐‘ข(๐‘ฅ, 0) = ๐‘ข(๐‘ฅ, 1) = ๐‘ข(0, ๐‘ฆ) = ๐‘ข(1, ๐‘ฆ) = ๐ถ.
for some constant ๐ถ. Hint: Guess, then check your intuition.
Exercise 4.9.4: Let ๐‘… be the region described by 0 < ๐‘ฅ < ๐œ‹ and 0 < ๐‘ฆ < ๐œ‹. Solve
Δ๐‘ข = 0,
๐‘ข(๐‘ฅ, 0) = 0,
๐‘ข(๐‘ฅ, ๐œ‹) = ๐œ‹,
๐‘ข(0, ๐‘ฆ) = ๐‘ฆ,
๐‘ข(๐œ‹, ๐‘ฆ) = ๐‘ฆ.
Hint: Try a solution of the form ๐‘ข(๐‘ฅ, ๐‘ฆ) = ๐‘‹(๐‘ฅ) + ๐‘Œ(๐‘ฆ) (different separation of variables).
Exercise 4.9.5: Use the solution of Exercise 4.9.4 to solve
Δ๐‘ข = 0,
๐‘ข(๐‘ฅ, 0) = sin ๐‘ฅ,
๐‘ข(๐‘ฅ, ๐œ‹) = ๐œ‹,
๐‘ข(0, ๐‘ฆ) = ๐‘ฆ,
๐‘ข(๐œ‹, ๐‘ฆ) = ๐‘ฆ.
Hint: Use superposition.
Exercise 4.9.6: Let ๐‘… be the region described by 0 < ๐‘ฅ < ๐‘ค and 0 < ๐‘ฆ < โ„Ž. Solve the problem
๐‘ข๐‘ฅ๐‘ฅ + ๐‘ข ๐‘ฆ ๐‘ฆ = 0,
๐‘ข(๐‘ฅ, 0) = 0,
๐‘ข(๐‘ฅ, โ„Ž) = ๐‘“ (๐‘ฅ),
๐‘ข(0, ๐‘ฆ) = 0,
๐‘ข(๐‘ค, ๐‘ฆ) = 0.
The solution should be in series form using the Fourier series coefficients of ๐‘“ (๐‘ฅ).
Exercise 4.9.7: Let ๐‘… be the region described by 0 < ๐‘ฅ < ๐‘ค and 0 < ๐‘ฆ < โ„Ž. Solve the problem
๐‘ข๐‘ฅ๐‘ฅ + ๐‘ข ๐‘ฆ ๐‘ฆ = 0,
๐‘ข(๐‘ฅ, 0) = 0,
๐‘ข(๐‘ฅ, โ„Ž) = 0,
๐‘ข(0, ๐‘ฆ) = ๐‘“ (๐‘ฆ),
๐‘ข(๐‘ค, ๐‘ฆ) = 0.
The solution should be in series form using the Fourier series coefficients of ๐‘“ (๐‘ฆ).
Exercise 4.9.8: Let ๐‘… be the region described by 0 < ๐‘ฅ < ๐‘ค and 0 < ๐‘ฆ < โ„Ž. Solve the problem
๐‘ข๐‘ฅ๐‘ฅ + ๐‘ข ๐‘ฆ ๐‘ฆ = 0,
๐‘ข(๐‘ฅ, 0) = 0,
๐‘ข(๐‘ฅ, โ„Ž) = 0,
๐‘ข(0, ๐‘ฆ) = 0,
๐‘ข(๐‘ค, ๐‘ฆ) = ๐‘“ (๐‘ฆ).
The solution should be in series form using the Fourier series coefficients of ๐‘“ (๐‘ฆ).
Exercise 4.9.9: Let ๐‘… be the region described by 0 < ๐‘ฅ < 1 and 0 < ๐‘ฆ < 1. Solve the problem
๐‘ข๐‘ฅ๐‘ฅ + ๐‘ข ๐‘ฆ ๐‘ฆ = 0,
๐‘ข(๐‘ฅ, 0) = sin(9๐œ‹๐‘ฅ),
๐‘ข(0, ๐‘ฆ) = 0,
๐‘ข(๐‘ฅ, 1) = sin(2๐œ‹๐‘ฅ),
๐‘ข(1, ๐‘ฆ) = 0.
Hint: Use superposition.
Exercise 4.9.10: Let ๐‘… be the region described by 0 < ๐‘ฅ < 1 and 0 < ๐‘ฆ < 1. Solve the problem
๐‘ข๐‘ฅ๐‘ฅ + ๐‘ข ๐‘ฆ ๐‘ฆ = 0,
๐‘ข(๐‘ฅ, 0) = sin(๐œ‹๐‘ฅ),
๐‘ข(๐‘ฅ, 1) = sin(๐œ‹๐‘ฅ),
๐‘ข(0, ๐‘ฆ) = sin(๐œ‹๐‘ฆ),
๐‘ข(1, ๐‘ฆ) = sin(๐œ‹๐‘ฆ).
Hint: Use superposition.
Exercise 4.9.11 (challenging): Using only your intuition find ๐‘ข(1/2, 1/2), for the problem Δ๐‘ข = 0,
where ๐‘ข(0, ๐‘ฆ) = ๐‘ข(1, ๐‘ฆ) = 100 for 0 < ๐‘ฆ < 1, and ๐‘ข(๐‘ฅ, 0) = ๐‘ข(๐‘ฅ, 1) = 0 for 0 < ๐‘ฅ < 1. Explain.
Exercise 4.9.101: Let ๐‘… be the region described by 0 < ๐‘ฅ < 1 and 0 < ๐‘ฆ < 1. Solve the problem
Δ๐‘ข = 0,
๐‘ข(๐‘ฅ, 0) =
∞
Õ
1
๐‘›=1
๐‘›2
sin(๐‘›๐œ‹๐‘ฅ),
๐‘ข(๐‘ฅ, 1) = 0,
๐‘ข(0, ๐‘ฆ) = 0,
๐‘ข(1, ๐‘ฆ) = 0.
Exercise 4.9.102: Let ๐‘… be the region described by 0 < ๐‘ฅ < 1 and 0 < ๐‘ฆ < 2. Solve the problem
Δ๐‘ข = 0,
๐‘ข(๐‘ฅ, 0) = 0.1 sin(๐œ‹๐‘ฅ),
๐‘ข(๐‘ฅ, 2) = 0,
๐‘ข(0, ๐‘ฆ) = 0,
๐‘ข(1, ๐‘ฆ) = 0.
4.10 Dirichlet problem in the circle and the Poisson kernel
Note: 2 lectures, §9.7 in [EP], §10.8 in [BD]
4.10.1 Laplace in polar coordinates
A more natural setting for the Laplace equation Δ๐‘ข = 0 is a circle rather than a rectangle.
On the other hand, what makes the problem somewhat more difficult is that we need polar
coordinates.
Recall that the polar coordinates for the (x, y)-plane are (r, θ):

x = r cos θ,    y = r sin θ,

where r ≥ 0 and −π < θ ≤ π. So the point (x, y) is distance r from the origin at an angle θ from the positive x-axis.
Now that we know our coordinates, let us give the problem we wish to solve. We have
a circular region of radius 1, and we are interested in the Dirichlet problem for the Laplace
equation for this region. Let ๐‘ข(๐‘Ÿ, ๐œƒ) denote the temperature at the point (๐‘Ÿ, ๐œƒ) in polar
coordinates.
We have the problem:

Δu = 0,    for r < 1,
u(1, θ) = g(θ),    for −π < θ ≤ π.    (4.26)
The first issue we face is that we do not know the Laplacian in polar coordinates. Normally we would find u_xx and u_yy in terms of the derivatives in r and θ. We would need to solve for r and θ in terms of x and y. In this case it is more convenient to work in reverse. We compute derivatives in r and θ in terms of derivatives in x and y and then we solve. The computations are easier this way. First,
๐‘ฅ ๐‘Ÿ = cos ๐œƒ, ๐‘ฅ ๐œƒ = −๐‘Ÿ sin ๐œƒ,
๐‘ฆ๐‘Ÿ = sin ๐œƒ,
๐‘ฆ๐œƒ = ๐‘Ÿ cos ๐œƒ.
Next by chain rule we obtain
๐‘ข๐‘Ÿ = ๐‘ข๐‘ฅ ๐‘ฅ ๐‘Ÿ + ๐‘ข ๐‘ฆ ๐‘ฆ๐‘Ÿ = cos(๐œƒ)๐‘ข๐‘ฅ + sin(๐œƒ)๐‘ข ๐‘ฆ ,
๐‘ข๐‘Ÿ๐‘Ÿ = cos(๐œƒ)(๐‘ข๐‘ฅ๐‘ฅ ๐‘ฅ ๐‘Ÿ + ๐‘ข๐‘ฅ ๐‘ฆ ๐‘ฆ๐‘Ÿ ) + sin(๐œƒ)(๐‘ข ๐‘ฆ๐‘ฅ ๐‘ฅ ๐‘Ÿ + ๐‘ข ๐‘ฆ ๐‘ฆ ๐‘ฆ๐‘Ÿ )
= cos2 (๐œƒ)๐‘ข๐‘ฅ๐‘ฅ + 2 cos(๐œƒ) sin(๐œƒ)๐‘ข๐‘ฅ ๐‘ฆ + sin2 (๐œƒ)๐‘ข ๐‘ฆ ๐‘ฆ .
Similarly for the ๐œƒ derivative. Note that we have to use the product rule for the second
derivative.
๐‘ข๐œƒ = ๐‘ข๐‘ฅ ๐‘ฅ ๐œƒ + ๐‘ข ๐‘ฆ ๐‘ฆ๐œƒ = −๐‘Ÿ sin(๐œƒ)๐‘ข๐‘ฅ + ๐‘Ÿ cos(๐œƒ)๐‘ข ๐‘ฆ ,
๐‘ข๐œƒ๐œƒ = −๐‘Ÿ cos(๐œƒ)๐‘ข๐‘ฅ − ๐‘Ÿ sin(๐œƒ)(๐‘ข๐‘ฅ๐‘ฅ ๐‘ฅ ๐œƒ + ๐‘ข๐‘ฅ ๐‘ฆ ๐‘ฆ๐œƒ ) − ๐‘Ÿ sin(๐œƒ)๐‘ข ๐‘ฆ + ๐‘Ÿ cos(๐œƒ)(๐‘ข ๐‘ฆ๐‘ฅ ๐‘ฅ ๐œƒ + ๐‘ข ๐‘ฆ ๐‘ฆ ๐‘ฆ๐œƒ )
= −๐‘Ÿ cos(๐œƒ)๐‘ข๐‘ฅ − ๐‘Ÿ sin(๐œƒ)๐‘ข ๐‘ฆ + ๐‘Ÿ 2 sin2 (๐œƒ)๐‘ข๐‘ฅ๐‘ฅ − ๐‘Ÿ 2 2 sin(๐œƒ) cos(๐œƒ)๐‘ข๐‘ฅ ๐‘ฆ + ๐‘Ÿ 2 cos2 (๐œƒ)๐‘ข ๐‘ฆ ๐‘ฆ .
Let us now try to solve for u_xx + u_yy. We start with (1/r²) u_θθ to get rid of those pesky r². If we add u_rr and use the fact that cos²(θ) + sin²(θ) = 1, we get

(1/r²) u_θθ + u_rr = u_xx + u_yy − (1/r) cos(θ) u_x − (1/r) sin(θ) u_y.
We’re not quite there yet, but all we are lacking is (1/r) u_r. Adding it we obtain the Laplacian in polar coordinates:

Δu = u_xx + u_yy = (1/r²) u_θθ + (1/r) u_r + u_rr.
Notice that the Laplacian in polar coordinates no longer has constant coefficients.
4.10.2 Series solution
Let us separate variables as usual. That is, let us try u(r, θ) = R(r)Θ(θ). Then

0 = Δu = (1/r²) R Θ'' + (1/r) R' Θ + R'' Θ.

Let us put R on one side and Θ on the other and conclude that both sides must be constant:

(1/r²) R Θ'' = −( (1/r) R' + R'' ) Θ,
Θ''/Θ = −( r R' + r² R'' )/R = −λ.

We get two equations:

Θ'' + λΘ = 0,    r² R'' + r R' − λR = 0.
Let us first focus on Θ. We know that u(r, θ) ought to be 2π-periodic in θ, that is, u(r, θ) = u(r, θ + 2π). Therefore, the solution to Θ'' + λΘ = 0 must be 2π-periodic. We have seen such a problem in Example 4.1.5. We conclude that λ = n² for a nonnegative integer n = 0, 1, 2, 3, . . .. The equation becomes Θ'' + n²Θ = 0. When n = 0 the equation is just Θ'' = 0, so we have the general solution Aθ + B. As Θ is periodic, A = 0. For convenience we write this solution as

Θ₀ = a₀/2
for some constant a₀. For positive n, the solution to Θ'' + n²Θ = 0 is

Θ_n = a_n cos(nθ) + b_n sin(nθ),
for some constants ๐‘Ž ๐‘› and ๐‘ ๐‘› .
Next, we consider the equation for R,

r² R'' + r R' − n² R = 0.
This equation appeared in exercises before—we solved it in Exercise 2.1.6 and Exercise 2.1.7. The idea is to try a solution r^s, and if that does not give us two solutions, to also try a solution of the form r^s ln r. Let us name the solution R_n. When n = 0 we obtain

R₀ = Ar⁰ + Br⁰ ln r = A + B ln r,

and if n > 0, we get

R_n = Ar^n + Br^{−n}.
The function ๐‘ข(๐‘Ÿ, ๐œƒ) must be finite at the origin, that is, when ๐‘Ÿ = 0. So ๐ต = 0 in both cases.
Set ๐ด = 1 in both cases as well; the constants in Θ๐‘› will pick up the slack so nothing is lost.
Let

R₀ = 1    and    R_n = r^n.
Hence our building block solutions are

u₀(r, θ) = a₀/2,    u_n(r, θ) = a_n r^n cos(nθ) + b_n r^n sin(nθ).
Putting everything together, our solution is:

u(r, θ) = a₀/2 + Σ_{n=1}^∞ ( a_n r^n cos(nθ) + b_n r^n sin(nθ) ).
We look at the boundary condition in (4.26),

g(θ) = u(1, θ) = a₀/2 + Σ_{n=1}^∞ ( a_n cos(nθ) + b_n sin(nθ) ).
Therefore, to solve (4.26) we expand g(θ), which is a 2π-periodic function, as a Fourier series, and then multiply the nth term by r^n. To find the a_n and the b_n we compute

a_n = (1/π) ∫_{−π}^{π} g(θ) cos(nθ) dθ,    and    b_n = (1/π) ∫_{−π}^{π} g(θ) sin(nθ) dθ.
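As an aside (not from the original text), the entire recipe—compute the Fourier coefficients of g, then damp the nth term by r^n—fits in a few lines of Python with NumPy. The coefficient integrals are approximated by a Riemann sum, which is quite accurate for periodic integrands; nmax and m are arbitrary accuracy knobs.

    import numpy as np

    def disc_solution(g, r, theta, nmax=50, m=4096):
        alpha = np.linspace(-np.pi, np.pi, m, endpoint=False)
        ga, da = g(alpha), 2 * np.pi / m
        u = np.sum(ga) * da / (2 * np.pi)        # the a_0/2 term
        for n in range(1, nmax + 1):
            an = np.sum(ga * np.cos(n * alpha)) * da / np.pi
            bn = np.sum(ga * np.sin(n * alpha)) * da / np.pi
            u += r**n * (an * np.cos(n * theta) + bn * np.sin(n * theta))
        return u

    # For g(theta) = cos(10 theta) the exact solution is r^10 cos(10 theta):
    print(disc_solution(lambda t: np.cos(10 * t), 0.9, 0.3))
    print(0.9**10 * np.cos(3.0))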
Example 4.10.1: Suppose we wish to solve
Δ๐‘ข = 0,
0 ≤ ๐‘Ÿ < 1,
๐‘ข(1, ๐œƒ) = cos(10 ๐œƒ),
−๐œ‹ < ๐œƒ ≤ ๐œ‹,
−๐œ‹ < ๐œƒ ≤ ๐œ‹.
The solution is
๐‘ข(๐‘Ÿ, ๐œƒ) = ๐‘Ÿ 10 cos(10 ๐œƒ).
See the plot in Figure 4.26. The thing to notice in this example is that the effect of a high
frequency is mostly felt at the boundary. In the middle of the disc, the solution is very
close to zero. That is because ๐‘Ÿ 10 is rather small when ๐‘Ÿ is close to 0.
Figure 4.26: The solution of the Dirichlet problem in the disc with cos(10 ๐œƒ) as boundary data.
Example 4.10.2: Let us solve a more difficult problem. Consider a long rod with circular
cross section of radius 1. Suppose we wish to solve the steady state heat problem in
the rod. If the rod is long enough, we simply need to solve the Laplace equation in two
dimensions. Let us put the center of the rod at the origin and we have exactly the region
we are currently studying—a circle of radius 1. For the boundary conditions, suppose in
Cartesian coordinates ๐‘ฅ and ๐‘ฆ, the temperature on the boundary is 0 when ๐‘ฆ < 0, and it is
2๐‘ฆ when ๐‘ฆ > 0.
Let us set the problem up. As ๐‘ฆ = ๐‘Ÿ sin(๐œƒ), then on the circle of radius 1, that is, where
๐‘Ÿ = 1, we have 2๐‘ฆ = 2 sin(๐œƒ). So
Δu = 0,    0 ≤ r < 1,    −π < θ ≤ π,
u(1, θ) = 2 sin(θ)    if 0 ≤ θ ≤ π,
u(1, θ) = 0    if −π < θ < 0.
We must now compute the Fourier series for the boundary condition. By now the reader has plentiful experience in computing Fourier series and so we simply state that

u(1, θ) = 2/π + sin(θ) + Σ_{n=1}^∞ ( −4 / (π(4n² − 1)) ) cos(2nθ).
Exercise 4.10.1: Compute the series for ๐‘ข(1, ๐œƒ) and verify that it really is what we have just claimed.
Hint: Be careful, make sure not to divide by zero.
We now simply write the solution (see Figure 4.27) by multiplying by r^n in the right places:

u(r, θ) = 2/π + r sin(θ) + Σ_{n=1}^∞ ( −4 r^{2n} / (π(4n² − 1)) ) cos(2nθ).
Figure 4.27: The solution of the Dirichlet problem with boundary data 0 for ๐‘ฆ < 0 and 2๐‘ฆ for ๐‘ฆ > 0.
4.10.3 Poisson kernel
There is another way to solve the Dirichlet problem with the help of an integral kernel.
That is, we will find a function ๐‘ƒ(๐‘Ÿ, ๐œƒ, ๐›ผ) called the Poisson kernel∗ such that
u(r, θ) = (1/2π) ∫_{−π}^{π} P(r, θ, α) g(α) dα.
While the integral will generally not be solvable analytically, it can be evaluated numerically.
In fact, unless the boundary data is given as a Fourier series already, it may be much easier
to numerically evaluate this formula as there is only one integral to evaluate.
The formula also has theoretical applications. For instance, as ๐‘ƒ(๐‘Ÿ, ๐œƒ, ๐›ผ) will have
infinitely many derivatives, then via differentiating under the integral we find that the
solution ๐‘ข(๐‘Ÿ, ๐œƒ) has infinitely many derivatives, at least when inside the circle, ๐‘Ÿ < 1. By
“having infinitely many derivatives,” what you should think of is that ๐‘ข(๐‘Ÿ, ๐œƒ) has “no
corners” and all of its partial derivatives of all orders exist and also have “no corners.”
We will compute the formula for ๐‘ƒ(๐‘Ÿ, ๐œƒ, ๐›ผ) from the series solution, and this idea can be
applied anytime you have a convenient series solution where the coefficients are obtained
via integration. Hence you can apply this reasoning to obtain such integral kernels for other
equations, such as the heat equation. The computation is long and tedious, but not overly
difficult. Since the ideas are often applied in similar contexts, it is good to understand how
this computation works.
What we do is start with the series solution and replace the coefficients with the integrals
that compute them. Then we try to write everything as a single integral. We must use a
different dummy variable for the integration and hence we use ๐›ผ instead of ๐œƒ.
u(r, θ) = a₀/2 + Σ_{n=1}^∞ ( a_n r^n cos(nθ) + b_n r^n sin(nθ) )

= (1/2π) ∫_{−π}^{π} g(α) dα + Σ_{n=1}^∞ [ ( (1/π) ∫_{−π}^{π} g(α) cos(nα) dα ) r^n cos(nθ) + ( (1/π) ∫_{−π}^{π} g(α) sin(nα) dα ) r^n sin(nθ) ]

= (1/2π) ∫_{−π}^{π} ( g(α) + 2 Σ_{n=1}^∞ [ g(α) cos(nα) r^n cos(nθ) + g(α) sin(nα) r^n sin(nθ) ] ) dα

= (1/2π) ∫_{−π}^{π} ( 1 + 2 Σ_{n=1}^∞ r^n [ cos(nα) cos(nθ) + sin(nα) sin(nθ) ] ) g(α) dα.

∗Named after the French mathematician Siméon Denis Poisson (1781–1840).
OK, so we have what we wanted, the expression in the parentheses is the Poisson kernel,
๐‘ƒ(๐‘Ÿ, ๐œƒ, ๐›ผ). However, we can do a lot better. It is still given as a series, and we would really
like to have a nice simple expression for it. We must work a little harder. The trick is to
rewrite everything in terms of complex exponentials. Let us work just on the kernel.
๐‘ƒ(๐‘Ÿ, ๐œƒ, ๐›ผ) = 1 + 2
∞
Õ
๐‘Ÿ ๐‘› cos(๐‘›๐›ผ) cos(๐‘›๐œƒ) + sin(๐‘›๐›ผ) sin(๐‘›๐œƒ)
๐‘›=1
=1+2
∞
Õ
๐‘Ÿ ๐‘› cos ๐‘›(๐œƒ − ๐›ผ)
๐‘›=1
=1+
∞
Õ
๐‘Ÿ ๐‘› ๐‘’ ๐‘–๐‘›(๐œƒ−๐›ผ) + ๐‘’ −๐‘–๐‘›(๐œƒ−๐›ผ)
๐‘›=1
=1+
∞
Õ
๐‘Ÿ๐‘’ ๐‘–(๐œƒ−๐›ผ)
๐‘›
+
๐‘›=1
∞
Õ
๐‘›
๐‘Ÿ๐‘’ −๐‘–(๐œƒ−๐›ผ) .
๐‘›=1
In the expression above, we recognize the geometric series. Recall from calculus that if z is a complex number where |z| < 1, then

Σ_{n=1}^∞ z^n = z/(1 − z).

Note that n starts at 1 and that is why we have the z in the numerator. It is the standard geometric series multiplied by z. We can use z = re^{i(θ−α)}, as lo and behold |re^{i(θ−α)}| = r < 1.
Let us continue with the computation.
๐‘ƒ(๐‘Ÿ, ๐œƒ, ๐›ผ) = 1 +
∞
Õ
๐‘Ÿ๐‘’ ๐‘–(๐œƒ−๐›ผ)
๐‘›=1
=1+
=
๐‘Ÿ๐‘’ ๐‘–(๐œƒ−๐›ผ)
๐‘›
+
∞
Õ
๐‘Ÿ๐‘’ −๐‘–(๐œƒ−๐›ผ)
๐‘›
๐‘›=1
๐‘Ÿ๐‘’ −๐‘–(๐œƒ−๐›ผ)
+
1 − ๐‘Ÿ๐‘’ ๐‘–(๐œƒ−๐›ผ)
1 − ๐‘Ÿ๐‘’ −๐‘–(๐œƒ−๐›ผ)
๐‘–(๐œƒ−๐›ผ)
1 − ๐‘Ÿ๐‘’
1 − ๐‘Ÿ๐‘’ −๐‘–(๐œƒ−๐›ผ) + 1 − ๐‘Ÿ๐‘’ −๐‘–(๐œƒ−๐›ผ) ๐‘Ÿ๐‘’ ๐‘–(๐œƒ−๐›ผ) + 1 − ๐‘Ÿ๐‘’ ๐‘–(๐œƒ−๐›ผ) ๐‘Ÿ๐‘’ −๐‘–(๐œƒ−๐›ผ)
1 − ๐‘Ÿ๐‘’ ๐‘–(๐œƒ−๐›ผ) 1 − ๐‘Ÿ๐‘’ −๐‘–(๐œƒ−๐›ผ)
1 − ๐‘Ÿ2
1 − ๐‘Ÿ๐‘’ ๐‘–(๐œƒ−๐›ผ) − ๐‘Ÿ๐‘’ −๐‘–(๐œƒ−๐›ผ) + ๐‘Ÿ 2
1 − ๐‘Ÿ2
=
.
1 − 2๐‘Ÿ cos(๐œƒ − ๐›ผ) + ๐‘Ÿ 2
=
That’s a formula we can live with. The solution to the Dirichlet problem using the Poisson kernel is

u(r, θ) = (1/2π) ∫_{−π}^{π} ( (1 − r²) / (1 − 2r cos(θ − α) + r²) ) g(α) dα.
Sometimes the formula for the Poisson kernel is given together with the constant 1/(2π), in which case we should of course not leave it in front of the integral. Also, often the limits of the integral are given as 0 to 2π; everything inside is 2π-periodic in α, so this does not change the integral.
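To back up the earlier remark about numerical evaluation, here is a short Python sketch (NumPy assumed) that approximates the single integral by a Riemann sum. For the boundary data cos(10θ) of Example 4.10.1 it reproduces r¹⁰ cos(10θ) to several digits.

    import numpy as np

    def poisson(g, r, theta, m=4096):
        # (1/(2 pi)) * integral over [-pi, pi] of P(r, theta, alpha) g(alpha) d alpha
        alpha = np.linspace(-np.pi, np.pi, m, endpoint=False)
        kernel = (1 - r**2) / (1 - 2 * r * np.cos(theta - alpha) + r**2)
        return np.mean(kernel * g(alpha))  # the mean absorbs the 1/(2 pi)

    print(poisson(lambda a: np.cos(10 * a), 0.9, 0.3))  # about 0.9**10 * cos(3.0)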
Let us not leave the Poisson kernel without explaining its geometric meaning. Let s be the distance from (r, θ) to (1, α). You may recall from calculus that this distance s in polar coordinates is given precisely by the square root of 1 − 2r cos(θ − α) + r². That is, the Poisson kernel is really the formula

(1 − r²)/s².
One final note we make about the formula is that it is really a weighted average of the boundary values. First let us look at what happens at the origin, that is, when r = 0:

u(0, 0) = (1/2π) ∫_{−π}^{π} ( (1 − 0²) / (1 − 2(0) cos(θ − α) + 0²) ) g(α) dα = (1/2π) ∫_{−π}^{π} g(α) dα.
So ๐‘ข(0, 0) is precisely the average value of ๐‘”(๐œƒ) and therefore the average value of ๐‘ข on the
boundary. This is a general feature of harmonic functions, the value at some point ๐‘ is
equal to the average of the values on a circle centered at ๐‘.
What the formula says is that the value of the solution at any point in the circle is a
weighted average of the boundary data ๐‘”(๐œƒ). The kernel is bigger when (1, ๐›ผ) is closer to
(๐‘Ÿ, ๐œƒ). Therefore when computing ๐‘ข(๐‘Ÿ, ๐œƒ) we give more weight to the values ๐‘”(๐›ผ) when
(1, α) is closer to (r, θ) and less weight to the values g(α) when (1, α) is far from (r, θ).
4.10.4 Exercises
Exercise 4.10.2: Using series solve Δ๐‘ข = 0, ๐‘ข(1, ๐œƒ) = |๐œƒ|, for −๐œ‹ < ๐œƒ ≤ ๐œ‹.
Exercise 4.10.3: Using series solve Δ๐‘ข = 0, ๐‘ข(1, ๐œƒ) = ๐‘”(๐œƒ) for the following data. Hint: trig
identities.
a) ๐‘”(๐œƒ) = 1/2 + 3 sin(๐œƒ) + cos(3๐œƒ)
b) ๐‘”(๐œƒ) = 3 cos(3๐œƒ) + 3 sin(3๐œƒ) + sin(9๐œƒ)
c) ๐‘”(๐œƒ) = 2 cos(๐œƒ + 1)
d) ๐‘”(๐œƒ) = sin2 (๐œƒ)
Exercise 4.10.4: Using the Poisson kernel, give the solution to Δ๐‘ข = 0, where ๐‘ข(1, ๐œƒ) is zero for ๐œƒ
outside the interval [−๐œ‹/4, ๐œ‹/4] and ๐‘ข(1, ๐œƒ) is 1 for ๐œƒ on the interval [−๐œ‹/4, ๐œ‹/4].
Exercise 4.10.5:
a) Draw a graph for the Poisson kernel as a function of ๐›ผ when ๐‘Ÿ = 1/2 and ๐œƒ = 0.
b) Describe what happens to the graph when you make ๐‘Ÿ bigger (as it approaches 1).
c) Knowing that the solution ๐‘ข(๐‘Ÿ, ๐œƒ) is the weighted average of ๐‘”(๐œƒ) with Poisson kernel as the
weight, explain what your answer to part b) means.
Exercise 4.10.6: Let ๐‘”(๐œƒ) be the function ๐‘ฅ๐‘ฆ = cos ๐œƒ sin ๐œƒ on the boundary. Use the series solution
to find a solution to the Dirichlet problem Δ๐‘ข = 0, ๐‘ข(1, ๐œƒ) = ๐‘”(๐œƒ). Now convert the solution to
Cartesian coordinates ๐‘ฅ and ๐‘ฆ. Is this solution surprising? Hint: use your trig identities.
Exercise 4.10.7: Carry out the computation we needed in the separation of variables and solve
๐‘Ÿ 2 ๐‘…00 + ๐‘Ÿ๐‘…0 − ๐‘› 2 ๐‘… = 0, for ๐‘› = 0, 1, 2, 3, . . ..
Exercise 4.10.8 (challenging): Derive the series solution to the Dirichlet problem if the region is a
circle of radius ๐œŒ rather than 1. That is, solve Δ๐‘ข = 0, ๐‘ข(๐œŒ, ๐œƒ) = ๐‘”(๐œƒ).
Exercise 4.10.9 (challenging):
a) Find the solution for Δ๐‘ข = 0, ๐‘ข(1, ๐œƒ) = ๐‘ฅ 2 ๐‘ฆ 3 + 5๐‘ฅ 2 . Write the answer in Cartesian
coordinates.
b) Now solve Δ๐‘ข = 0, ๐‘ข(1, ๐œƒ) = ๐‘ฅ ๐‘˜ ๐‘ฆโ„“ . Write the solution in Cartesian coordinates.
๐‘›
๐‘— ๐‘˜
c) Suppose you have a polynomial ๐‘ƒ(๐‘ฅ, ๐‘ฆ) = ๐‘š
๐‘—=0 ๐‘˜=0 ๐‘ ๐‘—,๐‘˜ ๐‘ฅ ๐‘ฆ , solve Δ๐‘ข = 0, ๐‘ข(1, ๐œƒ) =
๐‘ƒ(๐‘ฅ, ๐‘ฆ) (that is, write down the formula for the answer). Write the answer in Cartesian
coordinates.
Í
Í
Notice the answer is again a polynomial in ๐‘ฅ and ๐‘ฆ. See also Exercise 4.10.6.
Exercise 4.10.101: Using series solve Δu = 0, u(1, θ) = 1 + Σ_{n=1}^∞ (1/n²) sin(nθ).
Exercise 4.10.102: Using the series solution find the solution to Δ๐‘ข = 0, ๐‘ข(1, ๐œƒ) = 1 − cos(๐œƒ).
Express the solution in Cartesian coordinates (that is, using ๐‘ฅ and ๐‘ฆ).
Exercise 4.10.103:
a) Try and guess a solution to Δ๐‘ข = −1, ๐‘ข(1, ๐œƒ) = 0. Hint: try a solution that only depends on
๐‘Ÿ. Also first, don’t worry about the boundary condition.
b) Now solve Δ๐‘ข = −1, ๐‘ข(1, ๐œƒ) = sin(2๐œƒ) using superposition.
Exercise 4.10.104 (challenging): Derive the Poisson kernel solution if the region is a circle of
radius ๐œŒ rather than 1. That is, solve Δ๐‘ข = 0, ๐‘ข(๐œŒ, ๐œƒ) = ๐‘”(๐œƒ).
Chapter 5
More on eigenvalue problems
5.1 Sturm–Liouville problems
Note: 2 lectures, §10.1 in [EP], §11.2 in [BD]
5.1.1 Boundary value problems
In chapter 4 we encountered several different eigenvalue problems such as:
X''(x) + λX(x) = 0,

with different boundary conditions:

X(0) = 0, X(L) = 0 (Dirichlet), or
X'(0) = 0, X'(L) = 0 (Neumann), or
X'(0) = 0, X(L) = 0 (Mixed), or
X(0) = 0, X'(L) = 0 (Mixed), . . .
For example, these boundary problems came up in the study of the heat equation ๐‘ข๐‘ก = ๐‘˜๐‘ข๐‘ฅ๐‘ฅ
when we were trying to solve the equation by the method of separation of variables in § 4.6.
Dirichlet conditions correspond to applying a zero temperature at the ends, Neumann
means insulating the ends, etc. Other types of endpoint conditions also arise naturally,
such as the Robin boundary conditions
โ„Ž๐‘‹(0) − ๐‘‹ 0(0) = 0,
โ„Ž๐‘‹(๐ฟ) + ๐‘‹ 0(๐ฟ) = 0,
for some constant โ„Ž. These conditions come up when the ends are immersed in some
medium.
In the separation of variables computation we encountered an eigenvalue problem and
found the eigenfunctions ๐‘‹๐‘› (๐‘ฅ). We then found the eigenfunction decomposition of the initial
temperature ๐‘“ (๐‘ฅ) = ๐‘ข(๐‘ฅ, 0),
f(x) = Σ_{n=1}^∞ b_n X_n(x).
Once we had this decomposition and found suitable ๐‘‡๐‘› (๐‘ก) such that ๐‘‡๐‘› (0) = 1 and such
that ๐‘‡๐‘› (๐‘ก)๐‘‹๐‘› (๐‘ฅ) were solutions to the heat equation, we wrote the solution to the original
problem, including the initial condition, as
๐‘ข(๐‘ฅ, ๐‘ก) =
∞
Õ
๐‘ ๐‘› ๐‘‡๐‘› (๐‘ก)๐‘‹๐‘› (๐‘ฅ).
๐‘›=1
To study more general problems with this method, we must study more general
eigenvalue problems. First, we study second order linear equations of the form
d/dx ( p(x) dy/dx ) − q(x) y + λ r(x) y = 0.    (5.1)
Essentially any second order linear equation of the form a(x)y'' + b(x)y' + c(x)y + λd(x)y = 0 can be written as (5.1) after multiplying by a proper factor.
Example 5.1.1 (Bessel): Put the following equation into the form (5.1):

x²y'' + xy' + (λx² − n²)y = 0.

Multiply both sides by 1/x to obtain

(1/x) ( x²y'' + xy' + (λx² − n²)y ) = xy'' + y' + ( λx − n²/x ) y
= d/dx ( x dy/dx ) − (n²/x) y + λxy = 0.
The Bessel equation turns up for example in the solution of the two-dimensional wave
equation. If you want to see how one solves the equation, you can look at subsection 7.3.3.
The so-called Sturm–Liouville problem∗ is to seek nontrivial solutions to
d/dx ( p(x) dy/dx ) − q(x) y + λ r(x) y = 0,    a < x < b,
α₁ y(a) − α₂ y'(a) = 0,    β₁ y(b) + β₂ y'(b) = 0.    (5.2)
In particular, we seek ๐œ†s that allow for nontrivial solutions. The ๐œ†s that admit nontrivial
solutions are called the eigenvalues and the corresponding nontrivial solutions are called
eigenfunctions. The constants ๐›ผ1 and ๐›ผ 2 should not be both zero, same for ๐›ฝ1 and ๐›ฝ 2 .
∗ Named after the French mathematicians Jacques Charles François Sturm (1803–1855) and Joseph Liouville
(1809–1882).
Theorem 5.1.1. Suppose p(x), p'(x), q(x) and r(x) are continuous on [a, b] and suppose p(x) > 0
and ๐‘Ÿ(๐‘ฅ) > 0 for all ๐‘ฅ in [๐‘Ž, ๐‘]. Then the Sturm–Liouville problem (5.2) has an increasing sequence
of eigenvalues
๐œ†1 < ๐œ†2 < ๐œ†3 < · · ·
such that lim_{n→∞} λ_n = +∞
and such that to each ๐œ†๐‘› there is (up to a constant multiple) a single eigenfunction ๐‘ฆ๐‘› (๐‘ฅ).
Moreover, if ๐‘ž(๐‘ฅ) ≥ 0 and ๐›ผ1 , ๐›ผ2 , ๐›ฝ 1 , ๐›ฝ 2 ≥ 0, then ๐œ†๐‘› ≥ 0 for all ๐‘›.
Problems satisfying the hypothesis of the theorem (including the “Moreover”) are called
regular Sturm–Liouville problems, and we will only consider such problems here. That is, a
regular problem is one where p(x), p'(x), q(x) and r(x) are continuous, p(x) > 0, r(x) > 0,
๐‘ž(๐‘ฅ) ≥ 0, and ๐›ผ1 , ๐›ผ2 , ๐›ฝ 1 , ๐›ฝ 2 ≥ 0, where neither ๐›ผ1 and ๐›ผ2 are both zero, nor ๐›ฝ 1 and ๐›ฝ 2 are
both zero. Note: Be careful about the signs. Also be careful about the inequalities for ๐‘Ÿ and
๐‘, they must be strict for all ๐‘ฅ in the interval [๐‘Ž, ๐‘], including the endpoints!
When zero is an eigenvalue, we usually start labeling the eigenvalues at 0 rather than at
1 for convenience. That is we label the eigenvalues ๐œ†0 < ๐œ†1 < ๐œ†2 < · · · .
Example 5.1.2: The problem y'' + λy = 0, 0 < x < L, y(0) = 0, and y(L) = 0 is a regular Sturm–Liouville problem: p(x) = 1, q(x) = 0, r(x) = 1, and we have p(x) = 1 > 0 and r(x) = 1 > 0. We also have a = 0, b = L, α₁ = β₁ = 1, α₂ = β₂ = 0. The eigenvalues are λ_n = n²π²/L² and the eigenfunctions are y_n(x) = sin(nπx/L). All eigenvalues are nonnegative as predicted by the theorem.
Exercise 5.1.1: Find eigenvalues and eigenfunctions for

y'' + λy = 0,    y'(0) = 0,    y'(1) = 0.

Identify the p, q, r, α_j, β_j. Can you use the theorem above to make the search for eigenvalues easier? Hint: Consider the condition −y'(0) = 0.
Example 5.1.3: Find eigenvalues and eigenfunctions of the problem
๐‘ฆ 00 + ๐œ†๐‘ฆ = 0,
0
0 < ๐‘ฅ < 1,
โ„Ž๐‘ฆ(0) − ๐‘ฆ (0) = 0,
๐‘ฆ 0(1) = 0,
โ„Ž > 0.
These equations give a regular Sturm–Liouville problem.
Exercise 5.1.2: Identify ๐‘, ๐‘ž, ๐‘Ÿ, ๐›ผ ๐‘— , ๐›ฝ ๐‘— in the example above.
By Theorem 5.1.1, λ ≥ 0. So the general solution (without boundary conditions) is

y(x) = A cos(√λ x) + B sin(√λ x)    if λ > 0,
y(x) = Ax + B    if λ = 0.
Let us see if ๐œ† = 0 is an eigenvalue: We must satisfy 0 = โ„Ž๐ต − ๐ด and ๐ด = 0, hence ๐ต = 0
(as โ„Ž > 0). Therefore, 0 is not an eigenvalue (no nonzero solution, so no eigenfunction).
Now let us try ๐œ† > 0. We plug in the boundary conditions:
0 = hA − √λ B,
0 = −A √λ sin(√λ) + B √λ cos(√λ).

If A = 0, then B = 0 and vice versa, hence both are nonzero. So B = hA/√λ, and 0 = −A √λ sin(√λ) + (hA/√λ) √λ cos(√λ). As A ≠ 0 we get

0 = −√λ sin(√λ) + h cos(√λ),    or    h/√λ = tan(√λ).

We use a computer to find λ_n. There are tables available, though using a computer or a graphing calculator is far more convenient nowadays. The easiest method is to plot the functions h/x and tan x and see for which x they intersect. There is an infinite number of intersections. Denote the first intersection by √λ₁, the second intersection by √λ₂, etc. For example, when h = 1, we get √λ₁ ≈ 0.86, √λ₂ ≈ 3.43, . . .. That is, λ₁ ≈ 0.74, λ₂ ≈ 11.73, . . .. A plot for h = 1 is given in Figure 5.1. The appropriate eigenfunction (let A = 1 for convenience, then B = h/√λ) is

y_n(x) = cos(√λ_n x) + (h/√λ_n) sin(√λ_n x).

When h = 1 we get (approximately)

y₁(x) ≈ cos(0.86 x) + (1/0.86) sin(0.86 x),    y₂(x) ≈ cos(3.43 x) + (1/3.43) sin(3.43 x),    . . ..
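One possible way to carry out this computation (a sketch assuming SciPy is available, and certainly not the only approach): rewrite h/x = tan x as x sin x − h cos x = 0, which has no poles, and note that the nth root lies in the interval ((n − 1)π, (n − 1)π + π/2).

    import numpy as np
    from scipy.optimize import brentq

    h = 1.0
    f = lambda x: x * np.sin(x) - h * np.cos(x)   # zero exactly when h/x = tan(x)
    roots = [brentq(f, (n - 1) * np.pi, (n - 1) * np.pi + np.pi / 2)
             for n in range(1, 5)]
    print([round(s, 2) for s in roots])        # sqrt(lambda_n): 0.86, 3.43, ...
    print([round(s * s, 2) for s in roots])    # lambda_n: 0.74, 11.73, ...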
5.1.2 Orthogonality
We have seen the notion of orthogonality before. For example, we have shown that sin(๐‘›๐‘ฅ)
are orthogonal for distinct ๐‘› on [0, ๐œ‹]. For general Sturm–Liouville problems we need
a more general setup. Let ๐‘Ÿ(๐‘ฅ) be a weight function (any function, though generally we
assume it is positive) on [๐‘Ž, ๐‘]. Two functions ๐‘“ (๐‘ฅ), ๐‘”(๐‘ฅ) are said to be orthogonal with
respect to the weight function ๐‘Ÿ(๐‘ฅ) when
∫_a^b f(x) g(x) r(x) dx = 0.
Figure 5.1: Plot of 1/x and tan x.
In this setting, we define the inner product as

⟨f, g⟩ := ∫_a^b f(x) g(x) r(x) dx,

and then say f and g are orthogonal whenever ⟨f, g⟩ = 0. The results and concepts are
again analogous to finite-dimensional linear algebra.
The idea of the given inner product is that those ๐‘ฅ where ๐‘Ÿ(๐‘ฅ) is greater have more
weight. Nontrivial (nonconstant) ๐‘Ÿ(๐‘ฅ) arise naturally, for example from a change of variables.
Hence, you could think of a change of variables such that ๐‘‘๐œ‰ = ๐‘Ÿ(๐‘ฅ) ๐‘‘๐‘ฅ.
Eigenfunctions of a regular Sturm–Liouville problem satisfy an orthogonality property,
just like the eigenfunctions in § 4.1. Its proof is very similar to the analogous Theorem 4.1.1
on page 193.
Theorem 5.1.2. Suppose we have a regular Sturm–Liouville problem

d/dx ( p(x) dy/dx ) − q(x) y + λ r(x) y = 0,
α₁ y(a) − α₂ y'(a) = 0,    β₁ y(b) + β₂ y'(b) = 0.

Let y_j and y_k be two distinct eigenfunctions for two distinct eigenvalues λ_j and λ_k. Then

∫_a^b y_j(x) y_k(x) r(x) dx = 0,

that is, y_j and y_k are orthogonal with respect to the weight function r.
5.1.3 Fredholm alternative
The Fredholm alternative theorem we talked about before (Theorem 4.1.2 on page 194) holds
for all regular Sturm–Liouville problems. We state it here for completeness.
Theorem 5.1.3 (Fredholm alternative). Suppose that we have a regular Sturm–Liouville problem. Then either

d/dx ( p(x) dy/dx ) − q(x) y + λ r(x) y = 0,
α₁ y(a) − α₂ y'(a) = 0,    β₁ y(b) + β₂ y'(b) = 0,

has a nonzero solution (λ is an eigenvalue), or

d/dx ( p(x) dy/dx ) − q(x) y + λ r(x) y = f(x),
α₁ y(a) − α₂ y'(a) = 0,    β₁ y(b) + β₂ y'(b) = 0,

has a unique solution for any f(x) continuous on [a, b].
This theorem is used in much the same way as we did before in § 4.4. It is used when
solving more general nonhomogeneous boundary value problems. The theorem does not
help us solve the problem, but it tells us when a unique solution exists, so that we know
when to spend time looking for it. To solve the problem we decompose ๐‘“ (๐‘ฅ) and ๐‘ฆ(๐‘ฅ) in
terms of eigenfunctions of the homogeneous problem, and then solve for the coefficients of
the series for ๐‘ฆ(๐‘ฅ).
5.1.4 Eigenfunction series
What we want to do with the eigenfunctions once we have them is to compute the
eigenfunction decomposition of an arbitrary function ๐‘“ (๐‘ฅ). That is, we wish to write
f(x) = Σ_{n=1}^∞ c_n y_n(x),    (5.3)
where ๐‘ฆ๐‘› (๐‘ฅ) are eigenfunctions. We wish to find out if we can represent any function ๐‘“ (๐‘ฅ)
in this way, and if so, we wish to calculate ๐‘ ๐‘› (and of course we would want to know if the
sum converges). OK, so imagine we could write ๐‘“ (๐‘ฅ) as (5.3). We will assume convergence
and the ability to integrate the series term by term. Because of orthogonality we have
h ๐‘“ , ๐‘ฆ๐‘š i =
∫
๐‘
๐‘“ (๐‘ฅ) ๐‘ฆ๐‘š (๐‘ฅ) ๐‘Ÿ(๐‘ฅ) ๐‘‘๐‘ฅ =
∫
๐‘Ž
=
๐‘
∞
Õ
๐‘Ž
๐‘›=1
∞
Õ
∫
๐‘๐‘›
๐‘ ๐‘› ๐‘ฆ๐‘› (๐‘ฅ) ๐‘ฆ๐‘š (๐‘ฅ) ๐‘Ÿ(๐‘ฅ) ๐‘‘๐‘ฅ
๐‘
๐‘ฆ๐‘› (๐‘ฅ) ๐‘ฆ๐‘š (๐‘ฅ) ๐‘Ÿ(๐‘ฅ) ๐‘‘๐‘ฅ
๐‘Ž
๐‘›=1
= ๐‘๐‘š
!
๐‘
∫
๐‘ฆ๐‘š (๐‘ฅ) ๐‘ฆ๐‘š (๐‘ฅ) ๐‘Ÿ(๐‘ฅ) ๐‘‘๐‘ฅ = ๐‘ ๐‘š h๐‘ฆ๐‘š , ๐‘ฆ๐‘š i.
๐‘Ž
Hence,
∫๐‘
๐‘“ (๐‘ฅ) ๐‘ฆ๐‘š (๐‘ฅ) ๐‘Ÿ(๐‘ฅ) ๐‘‘๐‘ฅ
h ๐‘“ , ๐‘ฆ๐‘š i
๐‘๐‘š =
= ∫๐‘Ž ๐‘
.
2
h๐‘ฆ๐‘š , ๐‘ฆ๐‘š i
๐‘ฆ (๐‘ฅ) ๐‘Ÿ(๐‘ฅ) ๐‘‘๐‘ฅ
๐‘Ž
(5.4)
๐‘š
Note that ๐‘ฆ๐‘š are known up to a constant multiple, so we could have picked a scalar
multiple of an eigenfunction
such that h๐‘ฆ๐‘š , ๐‘ฆ๐‘š i = 1 (if we had an arbitrary eigenfunction
p
๐‘ฆ˜ ๐‘š , divide it by h ๐‘ฆ˜ ๐‘š , ๐‘ฆ˜ ๐‘š i). When h๐‘ฆ๐‘š , ๐‘ฆ๐‘š i = 1 we have the simpler form ๐‘ ๐‘š = h ๐‘“ , ๐‘ฆ๐‘š i.
The following theorem holds more generally, but the statement given is enough for our
purposes.
Theorem 5.1.4. Suppose ๐‘“ is a piecewise smooth continuous function on [๐‘Ž, ๐‘]. If ๐‘ฆ1 , ๐‘ฆ2 , . . . are
eigenfunctions of a regular Sturm–Liouville problem, one for each eigenvalue, then there exist real
constants ๐‘ 1 , ๐‘2 , . . . given by (5.4) such that (5.3) converges and holds for ๐‘Ž < ๐‘ฅ < ๐‘.
Example 5.1.4: Consider

y'' + λy = 0,    0 < x < π/2,    y(0) = 0,    y'(π/2) = 0.

The above is a regular Sturm–Liouville problem, and Theorem 5.1.1 says that if λ is an eigenvalue then λ ≥ 0.

Suppose λ = 0. The general solution is y(x) = Ax + B. We plug in the initial conditions to get 0 = y(0) = B and 0 = y'(π/2) = A. Hence λ = 0 is not an eigenvalue.

So let us consider λ > 0, where the general solution is

y(x) = A cos(√λ x) + B sin(√λ x).

Plugging in the boundary conditions, we get 0 = y(0) = A and 0 = y'(π/2) = √λ B cos(√λ π/2). Since A is zero, B cannot be zero. Hence cos(√λ π/2) = 0. This means that √λ π/2 is an odd integer multiple of π/2, i.e. (2n − 1) π/2 = √λ_n π/2. Solving for λ_n we get

λ_n = (2n − 1)².

We can take B = 1. Our eigenfunctions are

y_n(x) = sin((2n − 1)x).
A little bit of calculus shows

∫_0^{π/2} ( sin((2n − 1)x) )² dx = π/4.
So any piecewise smooth function f(x) on [0, π/2] can be written as

f(x) = Σ_{n=1}^∞ c_n sin((2n − 1)x),
where

c_n = ⟨f, y_n⟩ / ⟨y_n, y_n⟩ = ( ∫_0^{π/2} f(x) sin((2n − 1)x) dx ) / ( ∫_0^{π/2} sin²((2n − 1)x) dx ) = (4/π) ∫_0^{π/2} f(x) sin((2n − 1)x) dx.
Note that the series converges to an odd 2π-periodic extension of f(x). With the regular sine series we would expect a function with period 2(π/2) = π.
Exercise 5.1.3 (challenging): In the example above, the function is defined on 0 < x < π/2, yet the series with respect to the eigenfunctions sin((2n − 1)x) converges to an odd 2π-periodic extension of f(x). Find out how the extension is defined for π/2 < x < π.
Let us compute an example. Consider f(x) = x for 0 < x < π/2. Some calculus later we find

c_n = (4/π) ∫_0^{π/2} f(x) sin((2n − 1)x) dx = 4(−1)^{n+1} / (π(2n − 1)²),

and so for x in [0, π/2],

f(x) = Σ_{n=1}^∞ ( 4(−1)^{n+1} / (π(2n − 1)²) ) sin((2n − 1)x).
This is different from the π-periodic regular sine series, which can be computed to be

f(x) = Σ_{n=1}^∞ ( (−1)^{n+1}/n ) sin(2nx).
Both sums converge and are equal to f(x) for 0 < x < π/2, but the eigenfunctions involved come from different eigenvalue problems.
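A quick numerical sanity check of this computation (a sketch assuming SciPy): the coefficients obtained by integration match the closed form.

    import numpy as np
    from scipy.integrate import quad

    # c_n for f(x) = x against y_n(x) = sin((2n - 1) x) on [0, pi/2]
    for n in range(1, 4):
        num, _ = quad(lambda x: x * np.sin((2 * n - 1) * x), 0, np.pi / 2)
        cn = num / (np.pi / 4)      # since <y_n, y_n> = pi/4
        exact = 4 * (-1) ** (n + 1) / (np.pi * (2 * n - 1) ** 2)
        print(n, cn, exact)         # the two columns agree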
5.1.5 Exercises
Exercise 5.1.4: Find eigenvalues and eigenfunctions of
๐‘ฆ 00 + ๐œ†๐‘ฆ = 0,
๐‘ฆ(0) − ๐‘ฆ 0(0) = 0,
๐‘ฆ(1) = 0.
Exercise 5.1.5: Expand the function ๐‘“ (๐‘ฅ) = ๐‘ฅ on 0 ≤ ๐‘ฅ ≤ 1 using eigenfunctions of the system
๐‘ฆ 00 + ๐œ†๐‘ฆ = 0,
๐‘ฆ 0(0) = 0,
๐‘ฆ(1) = 0.
Exercise 5.1.6: Suppose that you had a Sturm–Liouville problem on the interval [0, 1] and came up
with ๐‘ฆ๐‘› (๐‘ฅ) = sin(๐›พ๐‘›๐‘ฅ), where ๐›พ > 0 is some constant. Decompose ๐‘“ (๐‘ฅ) = ๐‘ฅ, 0 < ๐‘ฅ < 1 in terms
of these eigenfunctions.
Exercise 5.1.7: Find eigenvalues and eigenfunctions of
y⁽⁴⁾ + λy = 0,    y(0) = 0,    y'(0) = 0,    y(1) = 0,    y'(1) = 0.
This problem is not a Sturm–Liouville problem, but the idea is the same.
Exercise 5.1.8 (more challenging): Find eigenvalues and eigenfunctions for
d/dx ( e^x y' ) + λ e^x y = 0,    y(0) = 0,    y(1) = 0.
Hint: First write the system as a constant coefficient system to find general solutions. Do note that
Theorem 5.1.1 on page 275 guarantees ๐œ† ≥ 0.
Exercise 5.1.101: Find eigenvalues and eigenfunctions of
๐‘ฆ 00 + ๐œ†๐‘ฆ = 0,
๐‘ฆ(−1) = 0,
๐‘ฆ(1) = 0.
Exercise 5.1.102: Put the following problems into the standard form for Sturm–Liouville problems,
that is, find ๐‘(๐‘ฅ), ๐‘ž(๐‘ฅ), ๐‘Ÿ(๐‘ฅ), ๐›ผ1 , ๐›ผ2 , ๐›ฝ 1 , and ๐›ฝ 2 , and decide if the problems are regular or not.
a) ๐‘ฅ ๐‘ฆ 00 + ๐œ†๐‘ฆ = 0 for 0 < ๐‘ฅ < 1, ๐‘ฆ(0) = 0, ๐‘ฆ(1) = 0.
b) (1 + ๐‘ฅ 2 )๐‘ฆ 00 + 2๐‘ฅ๐‘ฆ 0 + (๐œ† − ๐‘ฅ 2 )๐‘ฆ = 0 for −1 < ๐‘ฅ < 1, ๐‘ฆ(−1) = 0, ๐‘ฆ(1) + ๐‘ฆ 0(1) = 0.∗
∗ In
an earlier version of this book, a typo rendered the equation as (1 + ๐‘ฅ 2 )๐‘ฆ 00 − 2๐‘ฅ ๐‘ฆ 0 + (๐œ† − ๐‘ฅ 2 )๐‘ฆ = 0 ending
up with something harder than intended. Try this equation for a further challenge.
5.2 Higher order eigenvalue problems
Note: 1 lecture, §10.2 in [EP], exercises in §11.2 in [BD]
The eigenfunction series can arise even from higher order equations. Consider an
elastic beam (say made of steel). We will study the transversal vibrations of the beam. That
is, suppose the beam lies along the ๐‘ฅ-axis and let ๐‘ฆ(๐‘ฅ, ๐‘ก) measure the displacement of the
point ๐‘ฅ on the beam at time ๐‘ก. See Figure 5.2.
๐‘ฆ
๐‘ฆ
๐‘ฅ
Figure 5.2: Transversal vibrations of a beam.
The equation that governs this setup is

a⁴ ∂⁴y/∂x⁴ + ∂²y/∂t² = 0,
for some constant ๐‘Ž > 0, let us not worry about the physics∗ .
Suppose the beam is of length 1 simply supported (hinged) at the ends. The beam is
displaced by some function ๐‘“ (๐‘ฅ) at time ๐‘ก = 0 and then let go (initial velocity is 0). Then ๐‘ฆ
satisfies:
๐‘Ž 4 ๐‘ฆ ๐‘ฅ๐‘ฅ๐‘ฅ๐‘ฅ + ๐‘ฆ๐‘ก๐‘ก = 0
(0 < ๐‘ฅ < 1, ๐‘ก > 0),
๐‘ฆ(0, ๐‘ก) = ๐‘ฆ ๐‘ฅ๐‘ฅ (0, ๐‘ก) = 0,
(5.5)
๐‘ฆ(1, ๐‘ก) = ๐‘ฆ ๐‘ฅ๐‘ฅ (1, ๐‘ก) = 0,
๐‘ฆ(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ),
๐‘ฆ๐‘ก (๐‘ฅ, 0) = 0.
Again we try y(x, t) = X(x)T(t) and plug in to get a⁴ X⁽⁴⁾ T + X T'' = 0, or

X⁽⁴⁾/X = −T''/(a⁴ T) = λ.

The equations are

T'' + λ a⁴ T = 0,    X⁽⁴⁾ − λX = 0.

∗If you are interested, a⁴ = EI/ρ, where E is the elastic modulus, I is the second moment of area of the cross section, and ρ is the linear density.
The boundary conditions y(0, t) = y_xx(0, t) = 0 and y(1, t) = y_xx(1, t) = 0 imply

X(0) = X''(0) = 0,    and    X(1) = X''(1) = 0.

The initial homogeneous condition y_t(x, 0) = 0 implies

T'(0) = 0.
As usual, we leave the nonhomogeneous ๐‘ฆ(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ) for later.
Considering the equation for ๐‘‡, that is, ๐‘‡ 00 + ๐œ†๐‘Ž 4๐‘‡ = 0, and physical intuition leads us
to the fact that if ๐œ† is an eigenvalue then ๐œ† > 0: We expect vibration and not exponential
growth nor decay in the ๐‘ก direction (there is no friction in our model for instance). So there
are no negative eigenvalues. Similarly ๐œ† = 0 is not an eigenvalue.
Exercise 5.2.1: Justify ๐œ† > 0 just from the equation for ๐‘‹ and the boundary conditions.
Let ω = λ^{1/4} (that is, ω⁴ = λ) to avoid writing the fourth root all the time. Notice ω > 0. The general solution to X⁽⁴⁾ − ω⁴X = 0 is

X(x) = A e^{ωx} + B e^{−ωx} + C sin(ωx) + D cos(ωx).

Now 0 = X(0) = A + B + D and 0 = X''(0) = ω²(A + B − D). Solving, D = 0 and B = −A. So

X(x) = A e^{ωx} − A e^{−ωx} + C sin(ωx).

Also 0 = X(1) = A(e^ω − e^{−ω}) + C sin ω, and 0 = X''(1) = Aω²(e^ω − e^{−ω}) − Cω² sin ω. This means that C sin ω = 0 and A(e^ω − e^{−ω}) = 2A sinh ω = 0. If ω > 0, then sinh ω ≠ 0 and so A = 0. Thus C ≠ 0, otherwise λ is not an eigenvalue. Also, ω must be an integer multiple of π. Hence ω = nπ and n ≥ 1 (as ω > 0). We can take C = 1. So the eigenvalues are λ_n = n⁴π⁴ and the corresponding eigenfunctions are sin(nπx).

Now T'' + n⁴π⁴a⁴T = 0. The general solution is T(t) = A sin(n²π²a²t) + B cos(n²π²a²t). But T'(0) = 0 and hence A = 0. We take B = 1 to make T(0) = 1 for convenience. So our solutions are T_n(t) = cos(n²π²a²t).
The eigenfunctions are just the sines, so we decompose the function ๐‘“ (๐‘ฅ) using the sine
series. That is, we find numbers ๐‘ ๐‘› such that for 0 < ๐‘ฅ < 1,
f(x) = Σ_{n=1}^∞ b_n sin(nπx).

Then the solution to (5.5) is

y(x, t) = Σ_{n=1}^∞ b_n X_n(x) T_n(t) = Σ_{n=1}^∞ b_n sin(nπx) cos(n²π²a²t).
The point is that X_n T_n is a solution that satisfies all the homogeneous conditions (all conditions except the initial position). And since T_n(0) = 1,

y(x, 0) = Σ_{n=1}^∞ b_n X_n(x) T_n(0) = Σ_{n=1}^∞ b_n X_n(x) = Σ_{n=1}^∞ b_n sin(nπx) = f(x).
So ๐‘ฆ(๐‘ฅ, ๐‘ก) solves (5.5).
The natural (angular) frequencies of the system are ๐‘› 2 ๐œ‹2 ๐‘Ž 2 . These frequencies are
all integer multiples of the fundamental frequency ๐œ‹2 ๐‘Ž 2 , so we get a nice musical note.
The exact frequencies and their amplitude are what musicians call the timbre of the note
(outside of music it is called the spectrum).
The timbre of a beam is different than for a vibrating string where we get “more” of
the lower frequencies since we get all integer multiples, 1, 2, 3, 4, 5, . . .. For a steel beam
we get only the square multiples 1, 4, 9, 16, 25, . . .. That is why when you hit a steel beam
you hear a very pure sound. The sound of a xylophone or vibraphone is, therefore, very
different from a guitar or piano.
Example 5.2.1: Consider f(x) = x(x − 1)/10. On 0 < x < 1 (you know how to do this by now)

f(x) = Σ_{n=1, n odd}^∞ ( −4 / (5π³n³) ) sin(nπx).

Hence, the solution to (5.5) with the given initial position f(x) is

y(x, t) = Σ_{n=1, n odd}^∞ ( −4 / (5π³n³) ) sin(nπx) cos(n²π²a²t).
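If you want to watch the beam move, the partial sums of this series are cheap to evaluate. A minimal Python sketch (NumPy assumed; 50 odd terms is an arbitrary cutoff):

    import numpy as np

    def y(x, t, a=1.0, terms=50):
        # Partial sum of the series solution from Example 5.2.1.
        total = 0.0
        for n in range(1, 2 * terms, 2):     # odd n only
            total += (-4 / (5 * np.pi**3 * n**3)) * np.sin(n * np.pi * x) \
                     * np.cos(n**2 * np.pi**2 * a**2 * t)
        return total

    print(y(0.5, 0.0))   # close to f(0.5) = 0.5 * (0.5 - 1) / 10 = -0.025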
There are other boundary conditions than just hinged ends. There are three basic
possibilities: hinged, free, or fixed. Let us consider the end at ๐‘ฅ = 0. For the other end, it is
the same idea. If the end is hinged, then
๐‘ข(0, ๐‘ก) = ๐‘ข๐‘ฅ๐‘ฅ (0, ๐‘ก) = 0.
If the end is free, that is, it is just floating in air, then
๐‘ข๐‘ฅ๐‘ฅ (0, ๐‘ก) = ๐‘ข๐‘ฅ๐‘ฅ๐‘ฅ (0, ๐‘ก) = 0.
And finally, if the end is clamped or fixed, for example it is welded to a wall, then
๐‘ข(0, ๐‘ก) = ๐‘ข๐‘ฅ (0, ๐‘ก) = 0.
5.2.1 Exercises
Exercise 5.2.2: Suppose you have a beam of length 5 with free ends. Let ๐‘ฆ be the transverse
deviation of the beam at position ๐‘ฅ on the beam (0 < ๐‘ฅ < 5). You know that the constants are such
that this satisfies the equation ๐‘ฆ๐‘ก๐‘ก + 4๐‘ฆ ๐‘ฅ๐‘ฅ๐‘ฅ๐‘ฅ = 0. Suppose you know that the initial shape of the
beam is the graph of ๐‘ฅ(5 − ๐‘ฅ), and the initial velocity is uniformly equal to 2 (same for each ๐‘ฅ) in the
positive ๐‘ฆ direction. Set up the equation together with the boundary and initial conditions. Just set
up, do not solve.
Exercise 5.2.3: Suppose you have a beam of length 5 with one end free and one end fixed (the
fixed end is at ๐‘ฅ = 5). Let ๐‘ข be the longitudinal deviation of the beam at position ๐‘ฅ on the beam
(0 < ๐‘ฅ < 5). You know that the constants are such that this satisfies the equation ๐‘ข๐‘ก๐‘ก = 4๐‘ข๐‘ฅ๐‘ฅ .
Suppose you know that the initial displacement of the beam is (x − 5)/50, and the initial velocity is −(x − 5)/100 in the positive u direction. Set up the equation together with the boundary and initial conditions.
in the positive ๐‘ข direction. Set up the equation together with the boundary and initial conditions.
Just set up, do not solve.
Exercise 5.2.4: Suppose the beam is ๐ฟ units long, everything else kept the same as in (5.5). What is
the equation and the series solution?
Exercise 5.2.5: Suppose you have
๐‘Ž 4 ๐‘ฆ ๐‘ฅ๐‘ฅ๐‘ฅ๐‘ฅ + ๐‘ฆ๐‘ก๐‘ก = 0
(0 < ๐‘ฅ < 1, ๐‘ก > 0),
๐‘ฆ(0, ๐‘ก) = ๐‘ฆ ๐‘ฅ๐‘ฅ (0, ๐‘ก) = 0,
๐‘ฆ(1, ๐‘ก) = ๐‘ฆ ๐‘ฅ๐‘ฅ (1, ๐‘ก) = 0,
๐‘ฆ(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ),
๐‘ฆ๐‘ก (๐‘ฅ, 0) = ๐‘”(๐‘ฅ).
That is, you have also an initial velocity. Find a series solution. Hint: Use the same idea as we did
for the wave equation.
Exercise 5.2.101: Suppose you have a beam of length 1 with hinged ends. Let ๐‘ฆ be the transverse
deviation of the beam at position ๐‘ฅ on the beam (0 < ๐‘ฅ < 1). You know that the constants are such
that this satisfies the equation ๐‘ฆ๐‘ก๐‘ก + 4๐‘ฆ ๐‘ฅ๐‘ฅ๐‘ฅ๐‘ฅ = 0. Suppose you know that the initial shape of the beam
is the graph of sin(๐œ‹๐‘ฅ), and the initial velocity is 0. Solve for ๐‘ฆ.
Exercise 5.2.102: Suppose you have a beam of length 10 with two fixed ends. Let ๐‘ฆ be the transverse
deviation of the beam at position ๐‘ฅ on the beam (0 < ๐‘ฅ < 10). You know that the constants are
such that this satisfies the equation ๐‘ฆ๐‘ก๐‘ก + 9๐‘ฆ ๐‘ฅ๐‘ฅ๐‘ฅ๐‘ฅ = 0. Suppose you know that the initial shape of the
beam is the graph of sin(๐œ‹๐‘ฅ), and the initial velocity is uniformly equal to ๐‘ฅ(10 − ๐‘ฅ). Set up the
equation together with the boundary and initial conditions. Just set up, do not solve.
5.3 Steady periodic solutions
Note: 1–2 lectures, §10.3 in [EP], not in [BD]
5.3.1 Forced vibrating string
Consider a guitar string of length ๐ฟ. We studied this setup in § 4.7. Let ๐‘ฅ be the position
on the string, ๐‘ก the time, and ๐‘ฆ the displacement of the string. See Figure 5.3.
Figure 5.3: Vibrating string.
The problem is governed by the wave equation
๐‘ฆ๐‘ก๐‘ก = ๐‘Ž 2 ๐‘ฆ ๐‘ฅ๐‘ฅ ,
๐‘ฆ(0, ๐‘ก) = 0,
๐‘ฆ(๐ฟ, ๐‘ก) = 0,
๐‘ฆ(๐‘ฅ, 0) = ๐‘“ (๐‘ฅ), ๐‘ฆ๐‘ก (๐‘ฅ, 0) = ๐‘”(๐‘ฅ).
(5.6)
We found that the solution is of the form
y = Σ_{n=1}^∞ ( A_n cos(nπat/L) + B_n sin(nπat/L) ) sin(nπx/L),
where ๐ด๐‘› and ๐ต๐‘› are determined by the initial conditions. The natural frequencies of the
system are the (angular) frequencies ๐‘›๐œ‹๐‘Ž
๐ฟ for integers ๐‘› ≥ 1.
But these are free vibrations. What if there is an external force acting on the string? Let us assume, say, air vibrations (noise), for example from a second string, or perhaps a jet engine. For simplicity, assume a nice pure sound and assume the force is uniform at every position on the string. Let us say F(t) = F₀ cos(ωt) as force per unit mass. Then our wave equation becomes (remember force is mass times acceleration)
๐‘ฆ๐‘ก๐‘ก = ๐‘Ž 2 ๐‘ฆ ๐‘ฅ๐‘ฅ + ๐น0 cos(๐œ”๐‘ก),
(5.7)
with the same boundary conditions of course.
We want to find the solution here that satisfies the equation above and
๐‘ฆ(0, ๐‘ก) = 0,
๐‘ฆ(๐ฟ, ๐‘ก) = 0,
๐‘ฆ(๐‘ฅ, 0) = 0,
๐‘ฆ๐‘ก (๐‘ฅ, 0) = 0.
(5.8)
That is, the string is initially at rest. First we find a particular solution ๐‘ฆ ๐‘ of (5.7) that
satisfies ๐‘ฆ(0, ๐‘ก) = ๐‘ฆ(๐ฟ, ๐‘ก) = 0. We define the functions ๐‘“ and ๐‘” as
๐‘“ (๐‘ฅ) = −๐‘ฆ ๐‘ (๐‘ฅ, 0),
๐‘”(๐‘ฅ) = −
๐œ•๐‘ฆ ๐‘
๐œ•๐‘ก
(๐‘ฅ, 0).
We then find a solution y_c of (5.6). If we add the two solutions, we find that y = y_c + y_p solves (5.7) with the initial conditions.

Exercise 5.3.1: Check that y = y_c + y_p solves (5.7) and the side conditions (5.8).
So the big issue here is to find the particular solution ๐‘ฆ ๐‘ . We look at the equation and
we make an educated guess
๐‘ฆ ๐‘ (๐‘ฅ, ๐‘ก) = ๐‘‹(๐‘ฅ) cos(๐œ”๐‘ก).
We plug in to get
−๐œ”2 ๐‘‹ cos(๐œ”๐‘ก) = ๐‘Ž 2 ๐‘‹ 00 cos(๐œ”๐‘ก) + ๐น0 cos(๐œ”๐‘ก),
or −๐œ” 2 ๐‘‹ = ๐‘Ž 2 ๐‘‹ 00 + ๐น0 after canceling the cosine. We know how to find a general solution to
this equation (it is a nonhomogeneous constant coefficient equation). The general solution
is
๐œ” ๐œ” ๐น
0
๐‘‹(๐‘ฅ) = ๐ด cos
๐‘ฅ + ๐ต sin
๐‘ฅ − 2.
๐‘Ž
๐‘Ž
๐œ”
The endpoint conditions imply X(0) = X(L) = 0. So

0 = X(0) = A − F₀/ω²,    or    A = F₀/ω²,

and also

0 = X(L) = (F₀/ω²) cos(ωL/a) + B sin(ωL/a) − F₀/ω².

Assuming that sin(ωL/a) is not zero, we can solve for B to get

B = −F₀ ( cos(ωL/a) − 1 ) / ( ω² sin(ωL/a) ).    (5.9)

Therefore,

X(x) = (F₀/ω²) ( cos(ωx/a) − ( (cos(ωL/a) − 1) / sin(ωL/a) ) sin(ωx/a) − 1 ).

The particular solution y_p we are looking for is

y_p(x, t) = (F₀/ω²) ( cos(ωx/a) − ( (cos(ωL/a) − 1) / sin(ωL/a) ) sin(ωx/a) − 1 ) cos(ωt).
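As a quick sanity check, here is a tiny Python sketch (NumPy assumed) that evaluates X(x) for the sample values F₀ = ω = a = L = 1 (safely away from resonance) and confirms the endpoint conditions:

    import numpy as np

    F0, omega, a, L = 1.0, 1.0, 1.0, 1.0

    def X(x):
        B = -(np.cos(omega * L / a) - 1) / np.sin(omega * L / a)
        return (F0 / omega**2) * (np.cos(omega * x / a)
                                  + B * np.sin(omega * x / a) - 1)

    print(X(0.0), X(L))   # both come out 0, up to rounding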
Exercise 5.3.2: Check that ๐‘ฆ ๐‘ works.
Now we get to the point that we skipped. Suppose sin(ωL/a) = 0. What this means is that ω is equal to one of the natural frequencies of the system, i.e. a multiple of πa/L. We notice that if ω is not equal to a multiple of the base frequency, but is very close, then the coefficient B in (5.9) seems to become very large. But let us not jump to conclusions just yet. When ω = nπa/L for n even, then cos(ωL/a) = 1 and hence we really get that B = 0. So resonance occurs only when both cos(ωL/a) = −1 and sin(ωL/a) = 0. That is, when ω = nπa/L for odd n.
We could again solve for the resonance solution if we wanted to, but it is, in the right
sense, the limit of the solutions as ๐œ” gets close to a resonance frequency. In real life, pure
resonance never occurs anyway.
The calculation above explains why a string begins to vibrate if the identical string is
plucked close by. In the absence of friction this vibration would get louder and louder
as time goes on. On the other hand, you are unlikely to get large vibration if the forcing
frequency is not close to a resonance frequency even if you have a jet engine running close
to the string. That is, the amplitude does not keep increasing unless you tune to just the
right frequency.
Similar resonance phenomena occur when you break a wine glass using human voice
(yes this is possible, but not easy∗ ) if you happen to hit just the right frequency. Remember
a glass has much purer sound, i.e. it is more like a vibraphone, so there are far fewer
resonance frequencies to hit.
When the forcing function is more complicated, you decompose it in terms of the
Fourier series and apply the result above. You may also need to solve the problem above if
the forcing function is a sine rather than a cosine, but if you think about it, the solution is
almost the same.
Example 5.3.1: Let us do the computation for specific values. Suppose F₀ = 1, ω = 1, L = 1, and a = 1. Then

y_p(x, t) = ( cos(x) − ( (cos(1) − 1)/sin(1) ) sin(x) − 1 ) cos(t).

Write B = (cos(1) − 1)/sin(1) for simplicity. Then plug in t = 0 to get

f(x) = −y_p(x, 0) = −cos x + B sin x + 1,

and after differentiating in t we see that g(x) = −(∂y_p/∂t)(x, 0) = 0.

∗Mythbusters, episode 31, Discovery Channel, originally aired May 18th, 2005.
Hence to find ๐‘ฆ ๐‘ we need to solve the problem
๐‘ฆ๐‘ก๐‘ก = ๐‘ฆ ๐‘ฅ๐‘ฅ ,
๐‘ฆ(0, ๐‘ก) = 0,
๐‘ฆ(1, ๐‘ก) = 0,
๐‘ฆ(๐‘ฅ, 0) = − cos ๐‘ฅ + ๐ต sin ๐‘ฅ + 1,
๐‘ฆ๐‘ก (๐‘ฅ, 0) = 0.
The formula that we use to define y(x, 0) is not odd, hence it is not a simple matter of plugging the expression for y(x, 0) into the d'Alembert formula directly! You must define F to be the odd, 2-periodic extension of y(x, 0). Then our solution is

y(x, t) = ( F(x + t) + F(x − t) )/2 + ( cos(x) − ( (cos(1) − 1)/sin(1) ) sin(x) − 1 ) cos(t).    (5.10)
It is not hard to compute specific values for an odd periodic extension of a function and
hence (5.10) is a wonderful solution to the problem. For example, it is very easy to have a
computer do it, unlike a series solution. A plot is given in Figure 5.4.
Figure 5.4: Plot of y(x, t) = (F(x + t) + F(x − t))/2 + (cos(x) − ((cos(1) − 1)/sin(1)) sin(x) − 1) cos(t).
5.3.2 Underground temperature oscillations
Let ๐‘ข(๐‘ฅ, ๐‘ก) be the temperature at a certain location at depth ๐‘ฅ underground at time ๐‘ก. See
Figure 5.5.
The temperature u satisfies the heat equation u_t = k u_xx, where k is the diffusivity of the soil. We know the temperature at the surface u(0, t) from weather records. Let us assume for simplicity that

u(0, t) = T₀ + A₀ cos(ωt),

where T₀ is the yearly mean temperature, and t = 0 is midsummer (you can put a negative sign above to make it midwinter if you wish). A₀ gives the typical variation for the year. That is, the hottest temperature is T₀ + A₀ and the coldest is T₀ − A₀. For simplicity, we assume that T₀ = 0. The frequency ω is picked depending on the units of t, such that when t = 1 year, then ωt = 2π. For example, if t is in years, then ω = 2π.

Figure 5.5: Underground temperature.
It seems reasonable that the temperature at depth ๐‘ฅ also oscillates with the same
frequency. This, in fact, is the steady periodic solution, a solution independent of the initial
conditions. So we are looking for a solution of the form
๐‘ข(๐‘ฅ, ๐‘ก) = ๐‘‰(๐‘ฅ) cos(๐œ”๐‘ก) + ๐‘Š(๐‘ฅ) sin(๐œ”๐‘ก)
for the problem
๐‘ข๐‘ก = ๐‘˜๐‘ข๐‘ฅ๐‘ฅ ,
๐‘ข(0, ๐‘ก) = ๐ด0 cos(๐œ”๐‘ก).
(5.11)
We employ the complex exponential here to make calculations simpler. Suppose we
have a complex-valued function
โ„Ž(๐‘ฅ, ๐‘ก) = ๐‘‹(๐‘ฅ) ๐‘’ ๐‘–๐œ”๐‘ก .
We look for an โ„Ž such that Re โ„Ž = ๐‘ข. To find an โ„Ž, whose real part satisfies (5.11), we look
for an โ„Ž such that
โ„Ž ๐‘ก = ๐‘˜ โ„Ž ๐‘ฅ๐‘ฅ ,
โ„Ž(0, ๐‘ก) = ๐ด0 ๐‘’ ๐‘–๐œ”๐‘ก .
(5.12)
Exercise 5.3.3: Suppose โ„Ž satisfies (5.12). Use Euler’s formula for the complex exponential to check
that ๐‘ข = Re โ„Ž satisfies (5.11).
Substitute โ„Ž into (5.12):
\[ i\omega X e^{i\omega t} = k X'' e^{i\omega t} . \]
Hence,
\[ k X'' - i\omega X = 0 , \qquad \text{or} \qquad X'' - \alpha^2 X = 0 , \]
where $\alpha = \pm\sqrt{\frac{i\omega}{k}}$. Note that $\pm\sqrt{i} = \pm\frac{1+i}{\sqrt{2}}$, so you could simplify to $\alpha = \pm(1+i)\sqrt{\frac{\omega}{2k}}$. Hence
the general solution is
\[ X(x) = A e^{-(1+i)\sqrt{\frac{\omega}{2k}}\, x} + B e^{(1+i)\sqrt{\frac{\omega}{2k}}\, x} . \]
We assume that an ๐‘‹(๐‘ฅ) that solves the problem must be bounded as ๐‘ฅ → ∞ since ๐‘ข(๐‘ฅ, ๐‘ก)
should be bounded (we are not worrying about the earth core!). If you use Euler’s formula
to expand the complex exponentials, note that the second term is unbounded (if ๐ต ≠ 0),
while the first term is always bounded. Hence ๐ต = 0.
Exercise 5.3.4: Use Euler's formula to show that $e^{(1+i)\sqrt{\frac{\omega}{2k}}\, x}$ is unbounded as ๐‘ฅ → ∞, while
$e^{-(1+i)\sqrt{\frac{\omega}{2k}}\, x}$ is bounded as ๐‘ฅ → ∞.
Furthermore, ๐‘‹(0) = ๐ด0 since โ„Ž(0, ๐‘ก) = ๐ด0 ๐‘’ ๐‘–๐œ”๐‘ก . Thus ๐ด = ๐ด0 . This means that
\[ h(x,t) = A_0 e^{-(1+i)\sqrt{\frac{\omega}{2k}}\, x} e^{i\omega t} = A_0 e^{-(1+i)\sqrt{\frac{\omega}{2k}}\, x + i\omega t} = A_0 e^{-\sqrt{\frac{\omega}{2k}}\, x} e^{i\left(\omega t - \sqrt{\frac{\omega}{2k}}\, x\right)} . \]
We need to get the real part of โ„Ž, so we apply Euler's formula to get
\[ h(x,t) = A_0 e^{-\sqrt{\frac{\omega}{2k}}\, x} \left( \cos\left(\omega t - \sqrt{\frac{\omega}{2k}}\, x\right) + i \sin\left(\omega t - \sqrt{\frac{\omega}{2k}}\, x\right) \right) . \]
Then finally,
\[ u(x,t) = \operatorname{Re} h(x,t) = A_0 e^{-\sqrt{\frac{\omega}{2k}}\, x} \cos\left(\omega t - \sqrt{\frac{\omega}{2k}}\, x\right) . \]
Yay!
Notice the phase is different at different depths. At depth ๐‘ฅ the phase is delayed by
$x\sqrt{\frac{\omega}{2k}}$. For example in cgs units (centimeters-grams-seconds) we have ๐‘˜ = 0.005 (typical
value for soil), $\omega = \frac{2\pi}{\text{seconds in a year}} = \frac{2\pi}{31{,}557{,}341} \approx 1.99 \times 10^{-7}$. Then if we compute where the
phase shift $x\sqrt{\frac{\omega}{2k}} = \pi$ we find the depth in centimeters where the seasons are reversed.
That is, we get the depth at which summer is the coldest and winter is the warmest. We
get approximately 700 centimeters, which is approximately 23 feet below ground.
Be careful not to jump to conclusions. The temperature swings decay rapidly as you
dig deeper. The amplitude of the temperature swings is $A_0 e^{-\sqrt{\frac{\omega}{2k}}\, x}$. This function decays
very quickly as ๐‘ฅ (the depth) grows. Let us again take typical parameters as above. We
also assume that our surface temperature swing is ±15° Celsius, that is, ๐ด0 = 15. Then the
maximum temperature variation at 700 centimeters is only ±0.66° Celsius.
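Both numbers above take one line each to check. A small Python sketch with the cgs values from the text (the script itself is our own illustration):

```python
import numpy as np

k = 0.005                          # soil diffusivity in cgs units
omega = 2 * np.pi / 31_557_341     # 2*pi over the seconds in a year
A0 = 15                            # surface temperature swing in degrees Celsius

# Depth where the phase shift x*sqrt(omega/(2k)) equals pi: seasons reversed.
x = np.pi / np.sqrt(omega / (2 * k))
print(x)                                              # about 704 cm

# Amplitude of the temperature swings at 700 centimeters.
print(A0 * np.exp(-np.sqrt(omega / (2 * k)) * 700))   # about 0.66 degrees
```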
You need not dig very deep to get an effective “refrigerator,” with nearly constant
temperature. That is why wines are kept in a cellar; you need consistent temperature. The
temperature differential could also be used for energy. A home could be heated or cooled
by taking advantage of the fact above. Even without the earth core you could heat a home
in the winter and cool it in the summer. The earth core makes the temperature higher the
deeper you dig, although you need to dig somewhat deep to feel a difference. We did not
take that into account above.
5.3.3  Exercises
Exercise 5.3.5: Suppose that the forcing function for the vibrating string is ๐น0 sin(๐œ”๐‘ก). Derive the
particular solution ๐‘ฆ ๐‘ .
Exercise 5.3.6: Take the forced vibrating string. Suppose that ๐ฟ = 1, ๐‘Ž = 1. Suppose that the
forcing function is the square wave that is 1 on the interval 0 < ๐‘ฅ < 1 and −1 on the interval
−1 < ๐‘ฅ < 0. Find the particular solution. Hint: You may want to use the result of Exercise 5.3.5.
Exercise 5.3.7: The units are cgs (centimeters-grams-seconds). For ๐‘˜ = 0.005, $\omega = 1.991 \times 10^{-7}$,
๐ด0 = 20. Find the depth at which the temperature variation is half (±10 degrees) of what it is on the
surface.
Exercise 5.3.8: Derive the solution for underground temperature oscillation without assuming that
๐‘‡0 = 0.
Exercise 5.3.101: Take the forced vibrating string. Suppose that ๐ฟ = 1, ๐‘Ž = 1. Suppose that
the forcing function is a sawtooth, that is, $|x| - \frac{1}{2}$ on −1 < ๐‘ฅ < 1, extended periodically. Find the
particular solution.
Exercise 5.3.102: The units are cgs (centimeters-grams-seconds). For ๐‘˜ = 0.01, $\omega = 1.991 \times 10^{-7}$,
๐ด0 = 25. Find the depth at which the summer is again the hottest point.
Chapter 6
The Laplace transform
6.1
The Laplace transform
Note: 1.5–2 lectures, §10.1 in [EP], §6.1 and parts of §6.2 in [BD]
6.1.1
The transform
In this chapter we will discuss the Laplace transform† . The Laplace transform is a very
efficient method to solve certain ODE or PDE problems. The transform takes a differential
equation and turns it into an algebraic equation. If the algebraic equation can be solved,
applying the inverse transform gives us our desired solution. The Laplace transform also has
applications in the analysis of electrical circuits, NMR spectroscopy, signal processing, and
elsewhere. Finally, understanding the Laplace transform will also help with understanding
the related Fourier transform, which, however, requires more understanding of complex
numbers. We will not cover the Fourier transform.
The Laplace transform also gives a lot of insight into the nature of the equations we are
dealing with. It can be seen as converting between the time and the frequency domain. For
example, take the standard equation
๐‘š๐‘ฅ 00(๐‘ก) + ๐‘๐‘ฅ 0(๐‘ก) + ๐‘˜๐‘ฅ(๐‘ก) = ๐‘“ (๐‘ก).
We can think of ๐‘ก as time and ๐‘“ (๐‘ก) as incoming signal. The Laplace transform will convert
the equation from a differential equation in time to an algebraic (no derivatives) equation,
where the new independent variable ๐‘  is the frequency.
We can think of the Laplace transform as a black box. It eats functions and spits out
functions in a new variable. We write โ„’ ๐‘“ (๐‘ก) = ๐น(๐‘ ) for the Laplace transform of ๐‘“ (๐‘ก).
It is common to write lower case letters for functions in the time domain and upper case
letters for functions in the frequency domain. We use the same letter to denote that one
† Just like the Laplace equation and the Laplacian, the Laplace transform is also named after
Pierre-Simon, marquis de Laplace (1749–1827).
function is the Laplace transform of the other. For example ๐น(๐‘ ) is the Laplace transform of
๐‘“ (๐‘ก). Let us define the transform.
\[ \mathcal{L}\{f(t)\} = F(s) \overset{\text{def}}{=} \int_0^\infty e^{-st} f(t)\, dt . \]
We note that we are only considering ๐‘ก ≥ 0 in the transform. Of course, if we think of ๐‘ก
as time there is no problem; we are generally interested in finding out what will happen
in the future (the Laplace transform is one place where it is safe to ignore the past). Let us
compute some simple transforms.
Example 6.1.1: Suppose ๐‘“ (๐‘ก) = 1, then
\[ \mathcal{L}\{1\} = \int_0^\infty e^{-st}\, dt = \left[ \frac{e^{-st}}{-s} \right]_{t=0}^{\infty} = \lim_{h\to\infty} \left[ \frac{e^{-st}}{-s} \right]_{t=0}^{h} = \lim_{h\to\infty} \left( \frac{e^{-sh}}{-s} - \frac{1}{-s} \right) = \frac{1}{s} . \]
The limit (the improper integral) only exists if ๐‘  > 0. So โ„’{1} is only defined for ๐‘  > 0.
Example 6.1.2: Suppose ๐‘“ (๐‘ก) = ๐‘’ −๐‘Ž๐‘ก , then
โ„’ ๐‘’
−๐‘Ž๐‘ก
∫
=
∞
๐‘’
−๐‘ ๐‘ก −๐‘Ž๐‘ก
๐‘’
∞
∫
๐‘‘๐‘ก =
0
๐‘’
−(๐‘ +๐‘Ž)๐‘ก
0
๐‘’ −(๐‘ +๐‘Ž)๐‘ก
๐‘‘๐‘ก =
−(๐‘  + ๐‘Ž)
∞
=
๐‘ก=0
1
.
๐‘ +๐‘Ž
The limit only exists if ๐‘  + ๐‘Ž > 0. So โ„’ ๐‘’ −๐‘Ž๐‘ก is only defined for ๐‘  + ๐‘Ž > 0.
Example 6.1.3: Suppose ๐‘“ (๐‘ก) = ๐‘ก, then using integration by parts
\[ \mathcal{L}\{t\} = \int_0^\infty e^{-st}\, t\, dt = \left[ \frac{-t e^{-st}}{s} \right]_{t=0}^{\infty} + \frac{1}{s} \int_0^\infty e^{-st}\, dt = 0 + \frac{1}{s} \left[ \frac{e^{-st}}{-s} \right]_{t=0}^{\infty} = \frac{1}{s^2} . \]
Again, the limit only exists if ๐‘  > 0.
Example 6.1.4: A common function is the unit step function, which is sometimes called the
Heaviside function∗ . This function is generally given as
\[ u(t) = \begin{cases} 0 & \text{if } t < 0, \\ 1 & \text{if } t \ge 0. \end{cases} \]
∗ The function is named after the English mathematician, engineer, and physicist Oliver Heaviside
(1850–1925). Only by coincidence is the function “heavy” on “one side.”
Let us find the Laplace transform of ๐‘ข(๐‘ก − ๐‘Ž), where ๐‘Ž ≥ 0 is some constant. That is, the
function that is 0 for ๐‘ก < ๐‘Ž and 1 for ๐‘ก ≥ ๐‘Ž.
โ„’ ๐‘ข(๐‘ก − ๐‘Ž) =
∞
∫
๐‘’
−๐‘ ๐‘ก
๐‘ข(๐‘ก − ๐‘Ž) ๐‘‘๐‘ก =
∞
∫
๐‘’
−๐‘ ๐‘ก
๐‘Ž
0
๐‘’ −๐‘ ๐‘ก
๐‘‘๐‘ก =
−๐‘ 
∞
=
๐‘ก=๐‘Ž
๐‘’ −๐‘Ž๐‘ 
,
๐‘ 
where of course ๐‘  > 0 (and ๐‘Ž ≥ 0 as we said before).
By applying similar procedures we can compute the transforms of many elementary
functions. Many basic transforms are listed in Table 6.1.
 f(t)       โ„’{f(t)}             f(t)        โ„’{f(t)}
 C          C/s                 sin(ωt)     ω/(s² + ω²)
 t          1/s²                cos(ωt)     s/(s² + ω²)
 t²         2/s³                sinh(ωt)    ω/(s² − ω²)
 t³         6/s⁴                cosh(ωt)    s/(s² − ω²)
 tⁿ         n!/sⁿ⁺¹             u(t − a)    e⁻ᵃˢ/s
 e⁻ᵃᵗ       1/(s + a)

Table 6.1: Some Laplace transforms (๐ถ, ๐œ”, and ๐‘Ž are constants).
Exercise 6.1.1: Verify Table 6.1.
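A computer algebra system can spot-check the table directly from the defining integral. Here is a sketch using Python's sympy library (our own choice of tool; the book does not prescribe any software):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, omega = sp.symbols('a omega', positive=True)

# Recompute a few rows of Table 6.1 from the definition.
for f in (1, t, t**3, sp.exp(-a*t), sp.sin(omega*t)):
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f, '->', F)   # 1/s, 1/s**2, 6/s**4, 1/(a + s), omega/(omega**2 + s**2)
```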
Since the transform is defined by an integral, we can use the linearity properties of the
integral. For example, suppose ๐ถ is a constant, then
\[ \mathcal{L}\{C f(t)\} = \int_0^\infty e^{-st}\, C f(t)\, dt = C \int_0^\infty e^{-st} f(t)\, dt = C\, \mathcal{L}\{f(t)\} . \]
So we can “pull” a constant out of the transform. Similarly we have linearity. Since
linearity is very important we state it as a theorem.
Theorem 6.1.1 (Linearity of the Laplace transform). Suppose that ๐ด, ๐ต, and ๐ถ are constants,
then
\[ \mathcal{L}\{A f(t) + B g(t)\} = A\, \mathcal{L}\{f(t)\} + B\, \mathcal{L}\{g(t)\} , \]
and in particular,
\[ \mathcal{L}\{C f(t)\} = C\, \mathcal{L}\{f(t)\} . \]
Exercise 6.1.2: Verify the theorem. That is, show that $\mathcal{L}\{A f(t) + B g(t)\} = A\, \mathcal{L}\{f(t)\} + B\, \mathcal{L}\{g(t)\}$.
These rules together with Table 6.1 on the preceding page make it easy to find the
Laplace transform of a whole lot of functions already. But be careful. It is a common
mistake to think that the Laplace transform of a product is the product of the transforms.
In general,
\[ \mathcal{L}\{f(t)\, g(t)\} \neq \mathcal{L}\{f(t)\}\, \mathcal{L}\{g(t)\} . \]
It must also be noted that not all functions have a Laplace transform. For example, the
function $\frac{1}{t}$ does not have a Laplace transform as the integral diverges for all ๐‘ . Similarly,
$\tan t$ or $e^{t^2}$ do not have Laplace transforms.
6.1.2
Existence and uniqueness
When does the Laplace transform exist? A function ๐‘“ (๐‘ก) is of exponential order as ๐‘ก goes to
infinity if
| ๐‘“ (๐‘ก)| ≤ ๐‘€๐‘’ ๐‘๐‘ก ,
for some constants ๐‘€ and ๐‘, for sufficiently large ๐‘ก (say for all ๐‘ก > ๐‘ก0 for some ๐‘ก0 ). The
simplest way to check this condition is to try and compute
\[ \lim_{t\to\infty} \frac{f(t)}{e^{ct}} . \]
If the limit exists and is finite (usually zero), then ๐‘“ (๐‘ก) is of exponential order.
Exercise 6.1.3: Use L’Hopital’s rule from calculus to show that a polynomial is of exponential order.
Hint: Note that a sum of two exponential order functions is also of exponential order. Then show
that ๐‘ก ๐‘› is of exponential order for any ๐‘›.
For an exponential order function we have existence and uniqueness of the Laplace
transform.
Theorem 6.1.2 (Existence).
Let ๐‘“ (๐‘ก) be continuous and of exponential order for a certain constant
๐‘. Then ๐น(๐‘ ) = โ„’ ๐‘“ (๐‘ก) is defined for all ๐‘  > ๐‘.
The existence is not difficult to see. Let ๐‘“ (๐‘ก) be of exponential order, that is | ๐‘“ (๐‘ก)| ≤ ๐‘€๐‘’ ๐‘๐‘ก
for all ๐‘ก > 0 (for simplicity ๐‘ก0 = 0). Let ๐‘  > ๐‘, or in other words (๐‘  − ๐‘) > 0. By the
comparison theorem from calculus, the improper integral defining โ„’ ๐‘“ (๐‘ก) exists if the
following integral exists
\[ \int_0^\infty e^{-st} \bigl( M e^{ct} \bigr)\, dt = M \int_0^\infty e^{-(s-c)t}\, dt = M \left[ \frac{e^{-(s-c)t}}{-(s-c)} \right]_{t=0}^{\infty} = \frac{M}{s-c} . \]
The transform also exists for some other functions that are not of exponential order,
but that will not be relevant to us. Before dealing with uniqueness, let us note that for
exponential order functions we obtain that their Laplace transform decays at infinity:
lim ๐น(๐‘ ) = 0.
๐‘ →∞
Theorem 6.1.3 (Uniqueness). Let ๐‘“ (๐‘ก) and ๐‘”(๐‘ก) be continuous and of exponential order. Suppose
that there exists a constant ๐ถ, such that ๐น(๐‘ ) = ๐บ(๐‘ ) for all ๐‘  > ๐ถ. Then ๐‘“ (๐‘ก) = ๐‘”(๐‘ก) for all ๐‘ก ≥ 0.
Both theorems hold for piecewise continuous functions as well. Recall that piecewise
continuous means that the function is continuous except perhaps at a discrete set of points,
where it has jump discontinuities like the Heaviside function. Uniqueness, however, does
not “see” values at the discontinuities. So we can only conclude that ๐‘“ (๐‘ก) = ๐‘”(๐‘ก) outside of
discontinuities. For example, the unit step function is sometimes defined using ๐‘ข(0) = 1/2.
This new step function, however, has the exact same Laplace transform as the one we
defined earlier where ๐‘ข(0) = 1.
6.1.3
The inverse transform
As we said, the Laplace transform will allow us to convert a differential equation into an
algebraic equation. Once we solve the algebraic equation in the frequency domain we will
want to get back to the time domain, as that is what
we are interested in. Given a function
๐น(๐‘ ), we wish to find a function ๐‘“ (๐‘ก) such that โ„’ ๐‘“ (๐‘ก) = ๐น(๐‘ ). Theorem 6.1.3 says that the
solution ๐‘“ (๐‘ก) is unique.
So we can without fear make the following definition.
Suppose ๐น(๐‘ ) = โ„’ ๐‘“ (๐‘ก) for some function ๐‘“ (๐‘ก). Define the inverse Laplace transform as
def
โ„’ −1 ๐น(๐‘ ) = ๐‘“ (๐‘ก).
There is an integral formula for the inverse, but it is not as simple as the transform itself—it
requires complex numbers and path integrals. For us it will suffice to compute the inverse
using Table 6.1 on page 295.
Example 6.1.5: Take $F(s) = \frac{1}{s+1}$. Find the inverse Laplace transform.
We look at the table to find
\[ \mathcal{L}^{-1}\left\{ \frac{1}{s+1} \right\} = e^{-t} . \]
As the Laplace transform is linear, the inverse Laplace transform is also linear. That is,
\[ \mathcal{L}^{-1}\{A F(s) + B G(s)\} = A\, \mathcal{L}^{-1}\{F(s)\} + B\, \mathcal{L}^{-1}\{G(s)\} . \]
Of course, we also have $\mathcal{L}^{-1}\{A F(s)\} = A\, \mathcal{L}^{-1}\{F(s)\}$. Let us demonstrate how linearity can
be used.
Example 6.1.6: Take $F(s) = \frac{s^2+s+1}{s^3+s}$. Find the inverse Laplace transform.
First we use the method of partial fractions to write ๐น in a form where we can use Table 6.1
on page 295. We factor the denominator as $s(s^2+1)$ and write
\[ \frac{s^2+s+1}{s^3+s} = \frac{A}{s} + \frac{Bs + C}{s^2+1} . \]
Putting the right-hand side over a common denominator and equating the numerators
we get $A(s^2+1) + s(Bs+C) = s^2+s+1$. Expanding and equating coefficients we obtain
๐ด + ๐ต = 1, ๐ถ = 1, ๐ด = 1, and thus ๐ต = 0. In other words,
\[ F(s) = \frac{s^2+s+1}{s^3+s} = \frac{1}{s} + \frac{1}{s^2+1} . \]
By linearity of the inverse Laplace transform we get
\[ \mathcal{L}^{-1}\left\{ \frac{s^2+s+1}{s^3+s} \right\} = \mathcal{L}^{-1}\left\{ \frac{1}{s} \right\} + \mathcal{L}^{-1}\left\{ \frac{1}{s^2+1} \right\} = 1 + \sin t . \]
Another useful property is the so-called shifting property or the first shifting property
โ„’ ๐‘’ −๐‘Ž๐‘ก ๐‘“ (๐‘ก) = ๐น(๐‘  + ๐‘Ž),
where ๐น(๐‘ ) is the Laplace transform of ๐‘“ (๐‘ก).
Exercise 6.1.4: Derive the first shifting property from the definition of the Laplace transform.
The shifting property can be used, for example, when the denominator is a more
complicated quadratic that may come up in the method of partial fractions. We complete
the square and write such quadratics as (๐‘  + ๐‘Ž)2 + ๐‘ and then use the shifting property.
Example 6.1.7: Find $\mathcal{L}^{-1}\left\{ \frac{1}{s^2+4s+8} \right\}$.
First we complete the square to make the denominator $(s+2)^2 + 4$. Next we find
\[ \mathcal{L}^{-1}\left\{ \frac{1}{s^2+4} \right\} = \frac{1}{2} \sin(2t) . \]
Putting it all together with the shifting property, we find
\[ \mathcal{L}^{-1}\left\{ \frac{1}{s^2+4s+8} \right\} = \mathcal{L}^{-1}\left\{ \frac{1}{(s+2)^2+4} \right\} = \frac{1}{2}\, e^{-2t} \sin(2t) . \]
In general, we want to be able to apply the inverse Laplace transform to rational functions,
that is, functions of the form
\[ \frac{F(s)}{G(s)} , \]
where ๐น(๐‘ ) and ๐บ(๐‘ ) are polynomials. Since normally, for the functions that we are
considering, the Laplace transform goes to zero as ๐‘  → ∞, it is not hard to see that the
degree of ๐น(๐‘ ) must be smaller than that of ๐บ(๐‘ ). Such rational functions are called proper
rational functions and we can always apply the method of partial fractions. Of course this
means we need to be able to factor the denominator into linear and quadratic terms, which
involves finding the roots of the denominator.
6.1.4
Exercises
Exercise 6.1.5: Find the Laplace transform of $3 + t^5 + \sin(\pi t)$.
Exercise 6.1.6: Find the Laplace transform of $a + bt + ct^2$ for some constants ๐‘Ž, ๐‘, and ๐‘.
Exercise 6.1.7: Find the Laplace transform of $A\cos(\omega t) + B\sin(\omega t)$.
Exercise 6.1.8: Find the Laplace transform of $\cos^2(\omega t)$.
Exercise 6.1.9: Find the inverse Laplace transform of $\frac{4}{s^2-9}$.
Exercise 6.1.10: Find the inverse Laplace transform of $\frac{2s}{s^2-1}$.
Exercise 6.1.11: Find the inverse Laplace transform of $\frac{1}{(s-1)^2(s+1)}$.
Exercise 6.1.12: Find the Laplace transform of $f(t) = \begin{cases} t & \text{if } t \ge 1, \\ 0 & \text{if } t < 1. \end{cases}$
Exercise 6.1.13: Find the inverse Laplace transform of $\frac{s}{(s^2+s+2)(s+4)}$.
Exercise 6.1.14: Find the Laplace transform of $\sin\bigl(\omega(t-a)\bigr)$.
Exercise 6.1.15: Find the Laplace transform of $t\sin(\omega t)$. Hint: Several integrations by parts.
Exercise 6.1.101: Find the Laplace transform of $4(t+1)^2$.
Exercise 6.1.102: Find the inverse Laplace transform of $\frac{8}{s^3(s+2)}$.
Exercise 6.1.103: Find the Laplace transform of $t e^{-t}$. Hint: Integrate by parts.
Exercise 6.1.104: Find the Laplace transform of $\sin(t)\, e^{-t}$. Hint: Integrate by parts.
6.2
Transforms of derivatives and ODEs
Note: 2 lectures, §7.2–7.3 in [EP], §6.2 and §6.3 in [BD]
6.2.1
Transforms of derivatives
Let us see how the Laplace transform is used for differential equations. First let us try to
find the Laplace transform of a function that is a derivative. Suppose ๐‘”(๐‘ก) is a differentiable
function of exponential order, that is, | ๐‘”(๐‘ก)| ≤ ๐‘€๐‘’ ๐‘๐‘ก for some ๐‘€ and ๐‘. So โ„’ ๐‘”(๐‘ก) exists,
and what is more, lim๐‘ก→∞ ๐‘’ −๐‘ ๐‘ก ๐‘”(๐‘ก) = 0 when ๐‘  > ๐‘. Then
\[ \mathcal{L}\{g'(t)\} = \int_0^\infty e^{-st} g'(t)\, dt = \Bigl[ e^{-st} g(t) \Bigr]_{t=0}^{\infty} - \int_0^\infty (-s)\, e^{-st} g(t)\, dt = -g(0) + s\, \mathcal{L}\{g(t)\} . \]
We repeat this procedure for higher derivatives. The results are listed in Table 6.2. The
procedure also works for piecewise smooth functions, that is functions that are piecewise
continuous with a piecewise continuous derivative.
 f(t)      โ„’{f(t)} = F(s)
 g′(t)     sG(s) − g(0)
 g″(t)     s²G(s) − s g(0) − g′(0)
 g‴(t)     s³G(s) − s² g(0) − s g′(0) − g″(0)

Table 6.2: Laplace transforms of derivatives (๐บ(๐‘ ) = โ„’{๐‘”(๐‘ก)} as usual).
Exercise 6.2.1: Verify Table 6.2.
6.2.2
Solving ODEs with the Laplace transform
Notice that the Laplace transform turns differentiation into multiplication by ๐‘ . Let us see
how to apply this fact to differential equations.
Example 6.2.1: Take the equation
๐‘ฅ 00(๐‘ก) + ๐‘ฅ(๐‘ก) = cos(2๐‘ก),
๐‘ฅ 0(0) = 1.
๐‘ฅ(0) = 0,
We will take the Laplace transform of both sides. By ๐‘‹(๐‘ ) we will, as usual, denote the
Laplace transform of ๐‘ฅ(๐‘ก).
โ„’ ๐‘ฅ 00(๐‘ก) + ๐‘ฅ(๐‘ก) = โ„’ cos(2๐‘ก) ,
๐‘ 
๐‘  2 ๐‘‹(๐‘ ) − ๐‘ ๐‘ฅ(0) − ๐‘ฅ 0(0) + ๐‘‹(๐‘ ) = 2
.
๐‘  +4
6.2. TRANSFORMS OF DERIVATIVES AND ODES
301
We plug in the initial conditions now—this makes the computations more streamlined—to
obtain
๐‘ 
๐‘  2 ๐‘‹(๐‘ ) − 1 + ๐‘‹(๐‘ ) = 2
.
๐‘  +4
We solve for ๐‘‹(๐‘ ),
1
๐‘ 
+ 2
.
๐‘‹(๐‘ ) = 2
2
(๐‘  + 1)(๐‘  + 4) ๐‘  + 1
We use partial fractions (exercise) to write
๐‘‹(๐‘ ) =
1 ๐‘ 
1 ๐‘ 
1
−
+
.
3 ๐‘ 2 + 1 3 ๐‘ 2 + 4 ๐‘ 2 + 1
Now take the inverse Laplace transform to obtain
๐‘ฅ(๐‘ก) =
1
1
cos(๐‘ก) − cos(2๐‘ก) + sin(๐‘ก).
3
3
The procedure for linear constant coefficient equations is as follows. We take an ordinary
differential equation in the time variable ๐‘ก. We apply the Laplace transform to transform
the equation into an algebraic (non differential) equation in the frequency domain. All the
๐‘ฅ(๐‘ก), ๐‘ฅ 0(๐‘ก), ๐‘ฅ 00(๐‘ก), and so on, will be converted to ๐‘‹(๐‘ ), ๐‘ ๐‘‹(๐‘ ) − ๐‘ฅ(0), ๐‘  2 ๐‘‹(๐‘ ) − ๐‘ ๐‘ฅ(0) − ๐‘ฅ 0(0),
and so on. We solve the equation for ๐‘‹(๐‘ ). Then taking the inverse transform, if possible,
we find ๐‘ฅ(๐‘ก).
It should be noted that since not every function has a Laplace transform, not every
equation can be solved in this manner. Also if the equation is not a linear constant coefficient
ODE, then by applying the Laplace transform we may not obtain an algebraic equation.
6.2.3
Using the Heaviside function
Before we move on to more general equations than those we could solve before, we want to
consider the Heaviside function. See Figure 6.1 on the next page for the graph.
\[ u(t) = \begin{cases} 0 & \text{if } t < 0, \\ 1 & \text{if } t \ge 0. \end{cases} \]
This function is useful for putting together functions, or cutting functions off. Most
commonly it is used as ๐‘ข(๐‘ก − ๐‘Ž) for some constant ๐‘Ž. This just shifts the graph to the right
by ๐‘Ž. That is, it is a function that is 0 when ๐‘ก < ๐‘Ž and 1 when ๐‘ก ≥ ๐‘Ž. Suppose for example
that ๐‘“ (๐‘ก) is a “signal” and you started receiving the signal sin ๐‘ก at time ๐‘ก = ๐œ‹. The function
๐‘“ (๐‘ก) should then be defined as
\[ f(t) = \begin{cases} 0 & \text{if } t < \pi, \\ \sin t & \text{if } t \ge \pi. \end{cases} \]
Using the Heaviside function, ๐‘“ (๐‘ก) can be written as
๐‘“ (๐‘ก) = ๐‘ข(๐‘ก − ๐œ‹) sin ๐‘ก.
Figure 6.1: Plot of the Heaviside (unit step) function ๐‘ข(๐‘ก).
Similarly the step function that is 1 on the interval [1, 2) and zero everywhere else can be
written as
๐‘ข(๐‘ก − 1) − ๐‘ข(๐‘ก − 2).
The Heaviside function is useful to define functions defined piecewise. If you want to
define ๐‘“ (๐‘ก) such that ๐‘“ (๐‘ก) = ๐‘ก when ๐‘ก is in [0, 1], ๐‘“ (๐‘ก) = −๐‘ก + 2 when ๐‘ก is in [1, 2], and ๐‘“ (๐‘ก) = 0
otherwise, then you can use the expression
๐‘“ (๐‘ก) = ๐‘ก ๐‘ข(๐‘ก) − ๐‘ข(๐‘ก − 1) + (−๐‘ก + 2) ๐‘ข(๐‘ก − 1) − ๐‘ข(๐‘ก − 2) .
Hence it is useful to know how the Heaviside function interacts with the Laplace
transform. We have already seen that
โ„’ ๐‘ข(๐‘ก − ๐‘Ž) =
๐‘’ −๐‘Ž๐‘ 
.
๐‘ 
This can be generalized into a shifting property or second shifting property.
โ„’ ๐‘“ (๐‘ก − ๐‘Ž) ๐‘ข(๐‘ก − ๐‘Ž) = ๐‘’ −๐‘Ž๐‘  โ„’ ๐‘“ (๐‘ก) .
(6.1)
Example 6.2.2: The forcing function in our setup need not be periodic. Consider the
mass-spring system
๐‘ฅ 00(๐‘ก) + ๐‘ฅ(๐‘ก) = ๐‘“ (๐‘ก),
๐‘ฅ(0) = 0,
๐‘ฅ 0(0) = 0,
where ๐‘“ (๐‘ก) = 1 if 1 ≤ ๐‘ก < 5 and zero otherwise. Imagine a rocket attached to the mass
is fired for 4 seconds starting at ๐‘ก = 1. Or perhaps imagine an RLC circuit, where the
voltage is raised at a constant rate for 4 seconds starting at ๐‘ก = 1, and then held steady
again starting at ๐‘ก = 5.
We can write ๐‘“ (๐‘ก) = ๐‘ข(๐‘ก − 1) − ๐‘ข(๐‘ก − 5). We transform the equation and we plug in the
initial conditions as before to obtain
๐‘  2 ๐‘‹(๐‘ ) + ๐‘‹(๐‘ ) =
๐‘’ −๐‘  ๐‘’ −5๐‘ 
−
.
๐‘ 
๐‘ 
We solve for ๐‘‹(๐‘ ) to obtain
๐‘‹(๐‘ ) =
๐‘’ −5๐‘ 
๐‘’ −๐‘ 
−
.
๐‘ (๐‘  2 + 1) ๐‘ (๐‘  2 + 1)
We leave it as an exercise to the reader to show that
โ„’
In other words โ„’{1 − cos ๐‘ก} =
โ„’
−1
−1
1
= 1 − cos ๐‘ก.
๐‘ (๐‘  2 + 1)
1
.
๐‘ (๐‘  2 +1)
So using (6.1) we find
๐‘’ −๐‘ 
= โ„’ −1 {๐‘’ −๐‘  โ„’{1 − cos ๐‘ก}} = 1 − cos(๐‘ก − 1) ๐‘ข(๐‘ก − 1).
2
๐‘ (๐‘  + 1)
Similarly
โ„’
−1
−5๐‘ 
๐‘’ −5๐‘ 
−1
=
โ„’
๐‘’
โ„’{1
−
cos
๐‘ก}
=
1
−
cos(๐‘ก
−
5)
๐‘ข(๐‘ก − 5).
๐‘ (๐‘  2 + 1)
Hence, the solution is
๐‘ฅ(๐‘ก) = 1 − cos(๐‘ก − 1) ๐‘ข(๐‘ก − 1) − 1 − cos(๐‘ก − 5) ๐‘ข(๐‘ก − 5).
The plot of this solution is given in Figure 6.2 on the following page.
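As a sanity check on the formula, one can also integrate the ODE numerically and compare; a sketch using scipy (our own choice of tool and settings):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    f = 1.0 if 1 <= t < 5 else 0.0          # f(t) = u(t-1) - u(t-5)
    return [y[1], f - y[0]]                 # x'' = f - x

ts = np.linspace(0, 20, 201)
num = solve_ivp(rhs, (0, 20), [0.0, 0.0], t_eval=ts, max_step=0.01).y[0]

u = lambda t: np.where(t >= 0, 1.0, 0.0)
exact = (1 - np.cos(ts - 1)) * u(ts - 1) - (1 - np.cos(ts - 5)) * u(ts - 5)
print(np.max(np.abs(num - exact)))          # small
```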
6.2.4
Transfer functions
The Laplace transform leads to the following useful concept for studying the steady state
behavior of a linear system. Consider an equation of the form
๐ฟ๐‘ฅ = ๐‘“ (๐‘ก),
where ๐ฟ is a linear constant coefficient differential operator. Then ๐‘“ (๐‘ก) is usually thought of
as input of the system and ๐‘ฅ(๐‘ก) is thought of as the output of the system. For example, for
a mass-spring system the input is the forcing function and the output is the behavior of
the mass. We would like to have a convenient way to study the behavior of the system for
different inputs.
Let us suppose that all the initial conditions are zero. Taking the Laplace transform of
the equation, we obtain
๐ด(๐‘ )๐‘‹(๐‘ ) = ๐น(๐‘ ).
Figure 6.2: Plot of ๐‘ฅ(๐‘ก).
Solving for the ratio ๐‘‹(๐‘ )/๐น(๐‘ ) we obtain the so-called transfer function ๐ป(๐‘ ) = 1/๐ด(๐‘ ), that is,
๐ป(๐‘ ) =
๐‘‹(๐‘ )
.
๐น(๐‘ )
In other words, ๐‘‹(๐‘ ) = ๐ป(๐‘ )๐น(๐‘ ). We obtain an algebraic dependence of the output of the
system based on the input. We can now easily study the steady state behavior of the system
given different inputs by simply multiplying by the transfer function.
Example 6.2.3: Given ๐‘ฅ 00 + ๐œ”02 ๐‘ฅ = ๐‘“ (๐‘ก), let us find the transfer function (assuming the initial
conditions are zero).
First, we take the Laplace transform of the equation.
๐‘  2 ๐‘‹(๐‘ ) + ๐œ”02 ๐‘‹(๐‘ ) = ๐น(๐‘ ).
Now we solve for the transfer function ๐‘‹(๐‘ )/๐น(๐‘ ).
๐ป(๐‘ ) =
๐‘‹(๐‘ )
1
=
.
๐น(๐‘ )
๐‘  2 + ๐œ”02
Let us see how to use the transfer function. Suppose we have the constant input ๐‘“ (๐‘ก) = 1.
Hence ๐น(๐‘ ) = 1/๐‘  , and
1
1
๐‘‹(๐‘ ) = ๐ป(๐‘ )๐น(๐‘ ) =
.
2
๐‘ 2 + ๐œ” ๐‘ 
0
Taking the inverse Laplace transform of ๐‘‹(๐‘ ) we obtain
๐‘ฅ(๐‘ก) =
1 − cos(๐œ”0 ๐‘ก)
๐œ”02
.
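The same two-step recipe (multiply by ๐ป(๐‘ ), invert) works for any input whose transform we know. A sympy sketch with ๐œ”0 fixed to a number to keep the inversion simple (our own illustration):

```python
import sympy as sp

s = sp.Symbol('s')
t = sp.Symbol('t', positive=True)

omega0 = 2
H = 1 / (s**2 + omega0**2)      # transfer function of x'' + omega0^2 x = f(t)
F = 1 / s                       # input f(t) = 1

x = sp.inverse_laplace_transform(H * F, s, t)
print(sp.simplify(x))           # 1/4 - cos(2*t)/4, i.e. (1 - cos(omega0*t))/omega0**2
```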
6.2.5
Transforms of integrals
A feature of the Laplace transform is that it can also easily deal with integral equations,
that is, equations in which integrals rather than derivatives of functions appear. The basic
property, which can be proved by applying the definition and doing integration by parts, is
\[ \mathcal{L}\left\{ \int_0^t f(\tau)\, d\tau \right\} = \frac{1}{s}\, F(s) . \]
It is sometimes useful (e.g. for computing the inverse transform) to write this as
\[ \int_0^t f(\tau)\, d\tau = \mathcal{L}^{-1}\left\{ \frac{1}{s}\, F(s) \right\} . \]
Example 6.2.4: To compute $\mathcal{L}^{-1}\left\{ \frac{1}{s(s^2+1)} \right\}$ we could proceed by applying this integration
rule.
\[ \mathcal{L}^{-1}\left\{ \frac{1}{s}\, \frac{1}{s^2+1} \right\} = \int_0^t \mathcal{L}^{-1}\left\{ \frac{1}{s^2+1} \right\} d\tau = \int_0^t \sin\tau\, d\tau = 1 - \cos t . \]
Example 6.2.5: An equation containing an integral of the unknown function is called an
integral equation. Consider
\[ t^2 = \int_0^t e^{\tau}\, x(\tau)\, d\tau , \]
where we wish to solve for ๐‘ฅ(๐‘ก). We apply the Laplace transform and the shifting property
to get
\[ \frac{2}{s^3} = \frac{1}{s}\, \mathcal{L}\{e^t x(t)\} = \frac{1}{s}\, X(s-1) , \]
where ๐‘‹(๐‘ ) = โ„’{๐‘ฅ(๐‘ก)}. Thus
\[ X(s-1) = \frac{2}{s^2} \qquad \text{or} \qquad X(s) = \frac{2}{(s+1)^2} . \]
We use the shifting property again,
\[ x(t) = 2 e^{-t} t . \]
6.2.6
Exercises
Exercise 6.2.2: Using the Heaviside function write down the piecewise function that is 0 for ๐‘ก < 0,
$t^2$ for ๐‘ก in [0, 1], and ๐‘ก for ๐‘ก > 1.
Exercise 6.2.3: Using the Laplace transform solve
\[ m x'' + c x' + k x = 0, \qquad x(0) = a, \quad x'(0) = b, \]
where ๐‘š > 0, ๐‘ > 0, ๐‘˜ > 0, and $c^2 - 4km > 0$ (system is overdamped).
Exercise 6.2.4: Using the Laplace transform solve
\[ m x'' + c x' + k x = 0, \qquad x(0) = a, \quad x'(0) = b, \]
where ๐‘š > 0, ๐‘ > 0, ๐‘˜ > 0, and $c^2 - 4km < 0$ (system is underdamped).
Exercise 6.2.5: Using the Laplace transform solve
\[ m x'' + c x' + k x = 0, \qquad x(0) = a, \quad x'(0) = b, \]
where ๐‘š > 0, ๐‘ > 0, ๐‘˜ > 0, and $c^2 = 4km$ (system is critically damped).
Exercise 6.2.6: Solve $x'' + x = u(t-1)$ for initial conditions ๐‘ฅ(0) = 0 and $x'(0) = 0$.
Exercise 6.2.7: Show the differentiation of the transform property. Suppose โ„’{๐‘“(๐‘ก)} = ๐น(๐‘ ), then
show
\[ \mathcal{L}\{-t f(t)\} = F'(s) . \]
Hint: Differentiate under the integral sign.
Exercise 6.2.8: Solve $x''' + x = t^3\, u(t-1)$ for initial conditions ๐‘ฅ(0) = 1 and $x'(0) = 0$, $x''(0) = 0$.
Exercise 6.2.9: Show the second shifting property: $\mathcal{L}\{f(t-a)\, u(t-a)\} = e^{-as}\, \mathcal{L}\{f(t)\}$.
Exercise 6.2.10: Let us think of the mass-spring system with a rocket from Example 6.2.2. We
noticed that the solution kept oscillating after the rocket stopped running. The amplitude of the
oscillation depends on the time that the rocket was fired (for 4 seconds in the example).
a) Find a formula for the amplitude of the resulting oscillation in terms of the amount of time the
rocket is fired.
b) Is there a nonzero time (if so what is it?) for which the rocket fires and the resulting oscillation
has amplitude 0 (the mass is not moving)?
Exercise 6.2.11: Define
\[ f(t) = \begin{cases} (t-1)^2 & \text{if } 1 \le t < 2, \\ 3-t & \text{if } 2 \le t < 3, \\ 0 & \text{otherwise.} \end{cases} \]
a) Sketch the graph of ๐‘“ (๐‘ก).
b) Write down ๐‘“ (๐‘ก) using the Heaviside function.
c) Solve $x'' + x = f(t)$, ๐‘ฅ(0) = 0, $x'(0) = 0$ using Laplace transform.
Exercise 6.2.12: Find the transfer function for $m x'' + c x' + k x = f(t)$ (assuming the initial
conditions are zero).
Exercise 6.2.101: Using the Heaviside function ๐‘ข(๐‘ก), write down the function
\[ f(t) = \begin{cases} 0 & \text{if } t < 1, \\ t-1 & \text{if } 1 \le t < 2, \\ 1 & \text{if } 2 \le t. \end{cases} \]
Exercise 6.2.102: Solve $x'' - x = (t^2 - 1)\, u(t-1)$ for initial conditions ๐‘ฅ(0) = 1, $x'(0) = 2$ using
the Laplace transform.
Exercise 6.2.103: Find the transfer function for $x' + x = f(t)$ (assuming the initial conditions are
zero).
6.3  Convolution
Note: 1 or 1.5 lectures, §7.2 in [EP], §6.6 in [BD]
6.3.1
The convolution
The Laplace transformation of a product is not the product of the transforms. All hope is
not lost however. We simply have to use a different type of a “product.” Take two functions
๐‘“ (๐‘ก) and ๐‘”(๐‘ก) defined for ๐‘ก ≥ 0, and define the convolution∗ of ๐‘“ (๐‘ก) and ๐‘”(๐‘ก) as
\[ (f * g)(t) \overset{\text{def}}{=} \int_0^t f(\tau)\, g(t-\tau)\, d\tau . \tag{6.2} \]
As you can see, the convolution of two functions of ๐‘ก is another function of ๐‘ก.
Example 6.3.1: Take ๐‘“ (๐‘ก) = ๐‘’ ๐‘ก and ๐‘”(๐‘ก) = ๐‘ก for ๐‘ก ≥ 0. Then
\[ (f * g)(t) = \int_0^t e^{\tau} (t-\tau)\, d\tau = e^t - t - 1 . \]
To solve the integral we did one integration by parts.
Example 6.3.2: Take ๐‘“ (๐‘ก) = sin(๐œ”๐‘ก) and ๐‘”(๐‘ก) = cos(๐œ”๐‘ก) for ๐‘ก ≥ 0. Then
\[ (f * g)(t) = \int_0^t \sin(\omega\tau) \cos\bigl(\omega(t-\tau)\bigr)\, d\tau . \]
Apply the identity
\[ \cos(\theta)\sin(\psi) = \frac{1}{2}\bigl( \sin(\theta+\psi) - \sin(\theta-\psi) \bigr) , \]
to get
\begin{align*}
(f * g)(t) &= \int_0^t \frac{1}{2}\bigl( \sin(\omega t) - \sin(\omega t - 2\omega\tau) \bigr)\, d\tau \\
&= \left[ \frac{1}{2}\tau \sin(\omega t) + \frac{1}{4\omega}\cos(2\omega\tau - \omega t) \right]_{\tau=0}^{t} \\
&= \frac{1}{2}\, t \sin(\omega t) .
\end{align*}
The formula holds only for ๐‘ก ≥ 0. The functions ๐‘“ , ๐‘”, and ๐‘“ ∗ ๐‘” are undefined for ๐‘ก < 0.
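A quick numeric spot check of this formula (a sketch; scipy's quadrature is our choice of tool):

```python
import numpy as np
from scipy.integrate import quad

omega, t = 1.7, 2.3

conv, _ = quad(lambda tau: np.sin(omega * tau) * np.cos(omega * (t - tau)), 0, t)
print(conv, 0.5 * t * np.sin(omega * t))   # the two numbers agree
```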
∗ For those that have seen convolution before, you may have seen it defined as $(f*g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t-\tau)\, d\tau$.
This definition agrees with (6.2) if you define ๐‘“ (๐‘ก) and ๐‘”(๐‘ก) to be zero for ๐‘ก < 0. When discussing the Laplace
transform the definition we gave is sufficient. Convolution does occur in many other applications, however,
where you may have to use the more general definition with infinities.
Convolution has many properties that make it behave like a product. Let ๐‘ be a constant
and ๐‘“ , ๐‘”, and โ„Ž be functions. Then
๐‘“ ∗ ๐‘” = ๐‘”∗ ๐‘“,
(๐‘ ๐‘“ ) ∗ ๐‘” = ๐‘“ ∗ (๐‘ ๐‘”) = ๐‘( ๐‘“ ∗ ๐‘”),
( ๐‘“ ∗ ๐‘”) ∗ โ„Ž = ๐‘“ ∗ (๐‘” ∗ โ„Ž).
The most interesting property for us is the following theorem.
Theorem 6.3.1. Let ๐‘“ (๐‘ก) and ๐‘”(๐‘ก) be of exponential order, then
โ„’ ( ๐‘“ ∗ ๐‘”)(๐‘ก) = โ„’
∫
๐‘ก
๐‘“ (๐œ)๐‘”(๐‘ก − ๐œ) ๐‘‘๐œ = โ„’ ๐‘“ (๐‘ก) โ„’ ๐‘”(๐‘ก) .
0
In other words, the Laplace transform of a convolution is the product of the Laplace
transforms. The simplest way to use this result is in reverse.
Example 6.3.3: Suppose we have the function of ๐‘  defined by
\[ \frac{1}{(s+1)\, s^2} = \frac{1}{s+1}\, \frac{1}{s^2} . \]
We recognize the two entries of Table 6.1. That is,
\[ \mathcal{L}^{-1}\left\{ \frac{1}{s+1} \right\} = e^{-t} \qquad \text{and} \qquad \mathcal{L}^{-1}\left\{ \frac{1}{s^2} \right\} = t . \]
Therefore,
\[ \mathcal{L}^{-1}\left\{ \frac{1}{s+1}\, \frac{1}{s^2} \right\} = \int_0^t \tau\, e^{-(t-\tau)}\, d\tau = e^{-t} + t - 1 . \]
The calculation of the integral involved an integration by parts.
6.3.2
Solving ODEs
The next example demonstrates the full power of the convolution and the Laplace transform.
We can give the solution to the forced oscillation problem for any forcing function as a
definite integral.
Example 6.3.4: Find the solution to
๐‘ฅ 00 + ๐œ”02 ๐‘ฅ = ๐‘“ (๐‘ก),
๐‘ฅ(0) = 0,
๐‘ฅ 0(0) = 0,
for an arbitrary function ๐‘“ (๐‘ก).
We first apply the Laplace transform to the equation. Denote the transform of ๐‘ฅ(๐‘ก) by
๐‘‹(๐‘ ) and the transform of ๐‘“ (๐‘ก) by ๐น(๐‘ ) as usual. We get
๐‘  2 ๐‘‹(๐‘ ) + ๐œ”02 ๐‘‹(๐‘ ) = ๐น(๐‘ ),
or in other words,
\[ X(s) = F(s)\, \frac{1}{s^2 + \omega_0^2} . \]
We know
\[ \mathcal{L}^{-1}\left\{ \frac{1}{s^2+\omega_0^2} \right\} = \frac{\sin(\omega_0 t)}{\omega_0} . \]
Therefore,
\[ x(t) = \int_0^t f(\tau)\, \frac{\sin\bigl(\omega_0 (t-\tau)\bigr)}{\omega_0}\, d\tau , \]
or if we reverse the order,
\[ x(t) = \int_0^t \frac{\sin(\omega_0 \tau)}{\omega_0}\, f(t-\tau)\, d\tau . \]
Notice one more feature of this example. We can now see how the Laplace transform
handles resonance. Suppose that ๐‘“ (๐‘ก) = cos(๐œ”0 ๐‘ก). Then
\[ x(t) = \int_0^t \frac{\sin(\omega_0\tau)}{\omega_0} \cos\bigl(\omega_0(t-\tau)\bigr)\, d\tau = \frac{1}{\omega_0} \int_0^t \sin(\omega_0\tau) \cos\bigl(\omega_0(t-\tau)\bigr)\, d\tau . \]
We have computed the convolution of sine and cosine in Example 6.3.2. Hence
\[ x(t) = \frac{1}{\omega_0} \left( \frac{1}{2}\, t \sin(\omega_0 t) \right) = \frac{1}{2\omega_0}\, t \sin(\omega_0 t) . \]
Note the ๐‘ก in front of the sine. The solution, therefore, grows without bound as ๐‘ก gets large,
meaning we get resonance.
Similarly, we can solve any constant coefficient equation with an arbitrary forcing
function ๐‘“ (๐‘ก) as a definite integral using convolution. A definite integral, rather than a
closed form solution, is usually enough for most practical purposes. It is not hard to
numerically evaluate a definite integral.
6.3.3
Volterra integral equation
A common integral equation is the Volterra integral equation∗
๐‘ฅ(๐‘ก) = ๐‘“ (๐‘ก) +
∫
๐‘ก
๐‘”(๐‘ก − ๐œ)๐‘ฅ(๐œ) ๐‘‘๐œ,
0
where ๐‘“ (๐‘ก) and ๐‘”(๐‘ก) are known functions and ๐‘ฅ(๐‘ก) is an unknown we wish to solve for. To
find ๐‘ฅ(๐‘ก), we apply the Laplace transform to the equation to obtain
๐‘‹(๐‘ ) = ๐น(๐‘ ) + ๐บ(๐‘ )๐‘‹(๐‘ ),
∗ Named for the Italian mathematician Vito Volterra (1860–1940).
where ๐‘‹(๐‘ ), ๐น(๐‘ ), and ๐บ(๐‘ ) are the Laplace transforms of ๐‘ฅ(๐‘ก), ๐‘“ (๐‘ก), and ๐‘”(๐‘ก) respectively.
We find
\[ X(s) = \frac{F(s)}{1 - G(s)} . \]
To find ๐‘ฅ(๐‘ก) we now need to find the inverse Laplace transform of ๐‘‹(๐‘ ).
Example 6.3.5: Solve
๐‘ฅ(๐‘ก) = ๐‘’
−๐‘ก
๐‘ก
∫
sinh(๐‘ก − ๐œ)๐‘ฅ(๐œ) ๐‘‘๐œ.
+
0
We apply Laplace transform to obtain
๐‘‹(๐‘ ) =
or
๐‘‹(๐‘ ) =
1
๐‘ +1
1−
1
๐‘  2 −1
1
1
+ 2
๐‘‹(๐‘ ),
๐‘ +1 ๐‘  −1
=
๐‘ −1
๐‘ 
1
=
−
.
๐‘ 2 − 2 ๐‘ 2 − 2 ๐‘ 2 − 2
It is not hard to apply Table 6.1 on page 295 to find
๐‘ฅ(๐‘ก) = cosh
6.3.4
√
√ 1
2 ๐‘ก − √ sinh 2 ๐‘ก .
2
Exercises
Exercise 6.3.1: Let ๐‘“ (๐‘ก) = ๐‘ก 2 for ๐‘ก ≥ 0, and ๐‘”(๐‘ก) = ๐‘ข(๐‘ก − 1). Compute ๐‘“ ∗ ๐‘”.
Exercise 6.3.2: Let ๐‘“ (๐‘ก) = ๐‘ก for ๐‘ก ≥ 0, and ๐‘”(๐‘ก) = sin ๐‘ก for ๐‘ก ≥ 0. Compute ๐‘“ ∗ ๐‘”.
Exercise 6.3.3: Find the solution to
๐‘š๐‘ฅ 00 + ๐‘๐‘ฅ 0 + ๐‘˜๐‘ฅ = ๐‘“ (๐‘ก),
๐‘ฅ(0) = 0,
๐‘ฅ 0(0) = 0,
for an arbitrary function ๐‘“ (๐‘ก), where ๐‘š > 0, ๐‘ > 0, ๐‘˜ > 0, and ๐‘ 2 − 4๐‘˜๐‘š > 0 (the system is
overdamped). Write the solution as a definite integral.
Exercise 6.3.4: Find the solution to
๐‘š๐‘ฅ 00 + ๐‘๐‘ฅ 0 + ๐‘˜๐‘ฅ = ๐‘“ (๐‘ก),
๐‘ฅ(0) = 0,
๐‘ฅ 0(0) = 0,
for an arbitrary function ๐‘“ (๐‘ก), where ๐‘š > 0, ๐‘ > 0, ๐‘˜ > 0, and ๐‘ 2 − 4๐‘˜๐‘š < 0 (the system is
underdamped). Write the solution as a definite integral.
Exercise 6.3.5: Find the solution to
๐‘š๐‘ฅ 00 + ๐‘๐‘ฅ 0 + ๐‘˜๐‘ฅ = ๐‘“ (๐‘ก),
๐‘ฅ(0) = 0,
๐‘ฅ 0(0) = 0,
for an arbitrary function ๐‘“ (๐‘ก), where ๐‘š > 0, ๐‘ > 0, ๐‘˜ > 0, and ๐‘ 2 = 4๐‘˜๐‘š (the system is critically
damped). Write the solution as a definite integral.
312
CHAPTER 6. THE LAPLACE TRANSFORM
Exercise 6.3.6: Solve
๐‘ฅ(๐‘ก) = ๐‘’
−๐‘ก
๐‘ก
∫
cos(๐‘ก − ๐œ)๐‘ฅ(๐œ) ๐‘‘๐œ.
+
0
Exercise 6.3.7: Solve
๐‘ฅ(๐‘ก) = cos ๐‘ก +
∫
๐‘ก
cos(๐‘ก − ๐œ)๐‘ฅ(๐œ) ๐‘‘๐œ.
0
Exercise 6.3.8: Compute โ„’ −1
n
๐‘ 
2
(๐‘  2 +4)
o
using convolution.
2
Exercise 6.3.9: Write down the solution to ๐‘ฅ 00 − 2๐‘ฅ = ๐‘’ −๐‘ก , ๐‘ฅ(0) = 0, ๐‘ฅ 0(0) = 0 as a definite
2
integral. Hint: Do not try to compute the Laplace transform of ๐‘’ −๐‘ก .
Exercise 6.3.101: Let ๐‘“ (๐‘ก) = cos ๐‘ก for ๐‘ก ≥ 0, and ๐‘”(๐‘ก) = ๐‘’ −๐‘ก . Compute ๐‘“ ∗ ๐‘”.
Exercise 6.3.102: Compute โ„’ −1
5
๐‘  4 +๐‘  2
using convolution.
Exercise 6.3.103: Solve ๐‘ฅ 00 + ๐‘ฅ = sin ๐‘ก, ๐‘ฅ(0) = 0, ๐‘ฅ 0(0) = 0 using convolution.
Exercise 6.3.104: Solve ๐‘ฅ 000 + ๐‘ฅ 0 = ๐‘“ (๐‘ก), ๐‘ฅ(0) = 0, ๐‘ฅ 0(0) = 0, ๐‘ฅ 00(0) = 0 using convolution. Write
the result as a definite integral.
6.4
Dirac delta and impulse response
Note: 1 or 1.5 lecture, §7.6 in [EP], §6.5 in [BD]
6.4.1
Rectangular pulse
Often we study a physical system by putting in a short pulse and then seeing what the
system does. The resulting behavior is often called impulse response, and understanding it
tells us how the system responds to any input. Let us see what we mean by a pulse. The
simplest kind of a pulse is a simple rectangular pulse defined by
\[ \varphi(t) = \begin{cases} 0 & \text{if } t < a, \\ M & \text{if } a \le t < b, \\ 0 & \text{if } b \le t. \end{cases} \]
See Figure 6.3 for a graph.
Notice that
\[ \varphi(t) = M \bigl( u(t-a) - u(t-b) \bigr) , \]
where ๐‘ข(๐‘ก) is the unit step function.
Let us take the Laplace transform of a square pulse,
\[ \mathcal{L}\{\varphi(t)\} = \mathcal{L}\bigl\{ M \bigl( u(t-a) - u(t-b) \bigr) \bigr\} = M\, \frac{e^{-as} - e^{-bs}}{s} . \]

Figure 6.3: Sample square pulse with ๐‘Ž = 0.5, ๐‘ = 1, and ๐‘€ = 2.

For simplicity, let ๐‘Ž = 0. It is also convenient to set ๐‘€ = 1/๐‘ so that
\[ \int_0^\infty \varphi(t)\, dt = 1 . \]
That is, to have the pulse have “unit mass.” For such a pulse,
๐‘ข(๐‘ก) − ๐‘ข(๐‘ก − ๐‘)
1 − ๐‘’ −๐‘๐‘ 
โ„’ ๐œ‘(๐‘ก) = โ„’
=
.
๐‘
๐‘๐‘ 
We want ๐‘ to be very small; we wish to have the pulse be very short and very tall. By
letting ๐‘ go to zero we arrive at the concept of the Dirac delta function.
6.4.2  The delta function
The Dirac delta function∗ is not exactly a function; it is sometimes called a generalized function.
We avoid unnecessary details and simply say that it is an object that does not really make
sense unless we integrate it. The motivation is that we would like a “function” ๐›ฟ(๐‘ก) such
that for any continuous function ๐‘“ (๐‘ก) we have
\[ \int_{-\infty}^{\infty} \delta(t)\, f(t)\, dt = f(0) . \]
The formula should hold if we integrate over any interval that contains 0, not just (−∞, ∞).
So ๐›ฟ(๐‘ก) is a “function” with all its “mass” at the single point ๐‘ก = 0. In other words, for any
interval [๐‘, ๐‘‘]
\[ \int_c^d \delta(t)\, dt = \begin{cases} 1 & \text{if the interval } [c,d] \text{ contains } 0, \text{ i.e. } c \le 0 \le d, \\ 0 & \text{otherwise.} \end{cases} \]
Unfortunately there is no such function in the classical sense. You could informally think
that ๐›ฟ(๐‘ก) is zero for ๐‘ก ≠ 0 and somehow infinite at ๐‘ก = 0.
A good way to think about ๐›ฟ(๐‘ก) is as a limit of short pulses whose integral is 1. For
example, suppose that we have a square pulse ๐œ‘(๐‘ก) as above with ๐‘Ž = 0, ๐‘€ = 1/๐‘, that is
$\varphi(t) = \frac{u(t) - u(t-b)}{b}$. Compute
\[ \int_{-\infty}^{\infty} \varphi(t)\, f(t)\, dt = \int_{-\infty}^{\infty} \frac{u(t) - u(t-b)}{b}\, f(t)\, dt = \frac{1}{b} \int_0^b f(t)\, dt . \]
If ๐‘“ (๐‘ก) is continuous at ๐‘ก = 0, then for very small ๐‘, the function ๐‘“ (๐‘ก) is approximately equal
to ๐‘“ (0) on the interval [0, ๐‘]. We approximate the integral
\[ \frac{1}{b} \int_0^b f(t)\, dt \approx \frac{1}{b} \int_0^b f(0)\, dt = f(0) . \]
Hence,
\[ \lim_{b\to 0} \int_{-\infty}^{\infty} \varphi(t)\, f(t)\, dt = \lim_{b\to 0} \frac{1}{b} \int_0^b f(t)\, dt = f(0) . \]
Let us therefore accept ๐›ฟ(๐‘ก) as an object that is possible to integrate. We often want to
shift ๐›ฟ to another point, for example ๐›ฟ(๐‘ก − ๐‘Ž). In that case,
\[ \int_{-\infty}^{\infty} \delta(t-a)\, f(t)\, dt = f(a) . \]
Note that ๐›ฟ(๐‘Ž − ๐‘ก) is the same object as ๐›ฟ(๐‘ก − ๐‘Ž). In other words, the convolution of ๐›ฟ(๐‘ก) with
๐‘“ (๐‘ก) is again ๐‘“ (๐‘ก),
( ๐‘“ ∗ ๐›ฟ)(๐‘ก) =
∫
๐‘ก
๐›ฟ(๐‘ก − ๐‘ ) ๐‘“ (๐‘ ) ๐‘‘๐‘  = ๐‘“ (๐‘ก).
0
∗ Named after the English physicist and mathematician Paul Adrien Maurice Dirac (1902–1984).
As we can integrate ๐›ฟ(๐‘ก), we compute its Laplace transform:
โ„’ ๐›ฟ(๐‘ก − ๐‘Ž) =
∞
∫
๐‘’ −๐‘ ๐‘ก ๐›ฟ(๐‘ก − ๐‘Ž) ๐‘‘๐‘ก = ๐‘’ −๐‘Ž๐‘  .
0
In particular,
โ„’ ๐›ฟ(๐‘ก) = 1.
Remark 6.4.1: The Laplace transform of ๐›ฟ(๐‘ก − ๐‘Ž) would be the Laplace transform of the
derivative of the Heaviside function ๐‘ข(๐‘ก − ๐‘Ž), if the Heaviside function had a derivative.
First,
\[ \mathcal{L}\{u(t-a)\} = \frac{e^{-as}}{s} . \]
To obtain what the Laplace transform of the derivative would be, we multiply by ๐‘ , to obtain
$e^{-as}$, which is the Laplace transform of ๐›ฟ(๐‘ก − ๐‘Ž). We see the same thing using integration,
\[ \int_0^t \delta(s-a)\, ds = u(t-a) . \]
So in a certain sense
\[ ``\, \frac{d}{dt} \bigl[ u(t-a) \bigr] = \delta(t-a) . \, " \]
This line of reasoning allows us to talk about derivatives of functions with jump discontinuities. We can think of the derivative of the Heaviside function ๐‘ข(๐‘ก − ๐‘Ž) as being somehow
infinite at ๐‘Ž, which is precisely our intuitive understanding of the delta function.
Example 6.4.1: Let us compute $\mathcal{L}^{-1}\left\{ \frac{s+1}{s} \right\}$. So far we only computed the inverse transform
of proper rational functions in the ๐‘  variable. That is, the numerator was of lower degree
than the denominator. Not so with $\frac{s+1}{s}$. We can use the delta function to compute,
\[ \mathcal{L}^{-1}\left\{ \frac{s+1}{s} \right\} = \mathcal{L}^{-1}\left\{ 1 + \frac{1}{s} \right\} = \mathcal{L}^{-1}\{1\} + \mathcal{L}^{-1}\left\{ \frac{1}{s} \right\} = \delta(t) + 1 . \]
The resulting object is a generalized function and only makes sense when put underneath
an integral.
6.4.3
Impulse response
As we said before, in the differential equation ๐ฟ๐‘ฅ = ๐‘“ (๐‘ก), we think of ๐‘“ (๐‘ก) as input, and ๐‘ฅ(๐‘ก)
as the output. We think of the delta function as an impulse, and so to find the response to
an impulse, we use the delta function in place of ๐‘“ (๐‘ก). The solution to
๐ฟ๐‘ฅ = ๐›ฟ(๐‘ก)
is called the impulse response.
Example 6.4.2: Solve (find the impulse response)
๐‘ฅ 00 + ๐œ”02 ๐‘ฅ = ๐›ฟ(๐‘ก),
๐‘ฅ(0) = 0,
๐‘ฅ 0(0) = 0.
(6.3)
We first apply the Laplace transform to the equation. Denote the transform of ๐‘ฅ(๐‘ก) by
๐‘‹(๐‘ ).
1
.
๐‘  2 ๐‘‹(๐‘ ) + ๐œ”02 ๐‘‹(๐‘ ) = 1,
and so
๐‘‹(๐‘ ) =
2
๐‘  + ๐œ”02
The inverse Laplace transform produces
๐‘ฅ(๐‘ก) =
sin(๐œ”0 ๐‘ก)
.
๐œ”0
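The delta function was introduced as a limit of short unit-mass pulses, and that suggests a numeric sanity check: drive the oscillator with a narrow rectangular pulse and compare with sin(๐œ”0 ๐‘ก)/๐œ”0. A sketch (the pulse width and solver settings are our own choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

omega0 = 2.0
b = 0.01                 # pulse width; height 1/b gives the pulse unit mass

def rhs(t, y):
    f = 1.0 / b if 0.0 <= t < b else 0.0   # rectangular pulse approximating delta(t)
    return [y[1], f - omega0**2 * y[0]]

ts = np.linspace(0.0, 10.0, 501)
num = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], t_eval=ts, max_step=b / 4).y[0]
exact = np.sin(omega0 * ts) / omega0

print(np.max(np.abs(num - exact)))   # small, on the order of the pulse width
```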
Let us notice something about the example above. In Example 6.3.4, we found that
when the input is ๐‘“ (๐‘ก), the solution to ๐ฟ๐‘ฅ = ๐‘“ (๐‘ก) is given by
๐‘ฅ(๐‘ก) =
๐‘ก
∫
0
sin ๐œ”0 (๐‘ก − ๐œ)
๐‘“ (๐œ)
๐‘‘๐œ.
๐œ”0
That is, the solution for an arbitrary input is given as convolution with the impulse response. Let
us see why. The key is to notice that for functions ๐‘ฅ(๐‘ก) and ๐‘“ (๐‘ก),
๐‘‘2
(๐‘ฅ ∗ ๐‘“ ) (๐‘ก) = 2
๐‘‘๐‘ก
00
∫
๐‘ก
๐‘“ (๐œ)๐‘ฅ(๐‘ก − ๐œ) ๐‘‘๐œ =
0
∫
๐‘ก
๐‘“ (๐œ)๐‘ฅ 00(๐‘ก − ๐œ) ๐‘‘๐œ = (๐‘ฅ 00 ∗ ๐‘“ )(๐‘ก).
0
We simply differentiate twice under the integral∗; the details are left as an exercise. If we
convolve the entire equation (6.3), the left-hand side becomes
\[ (x'' + \omega_0^2 x) * f = (x'' * f) + \omega_0^2 (x * f) = (x * f)'' + \omega_0^2 (x * f) . \]
The right-hand side becomes
(๐›ฟ ∗ ๐‘“ )(๐‘ก) = ๐‘“ (๐‘ก).
Therefore ๐‘ฆ(๐‘ก) = (๐‘ฅ ∗ ๐‘“ )(๐‘ก) is the solution to
๐‘ฆ 00 + ๐œ”02 ๐‘ฆ = ๐‘“ (๐‘ก).
This procedure works in general for other linear equations ๐ฟ๐‘ฅ = ๐‘“ (๐‘ก). If you determine
the impulse response, you also know how to obtain the output ๐‘ฅ(๐‘ก) for any input ๐‘“ (๐‘ก) by
simply convolving the impulse response and the input ๐‘“ (๐‘ก).
∗ You should really think of the integral going over (−∞, ∞) rather than over [0, ๐‘ก] and simply assume
that ๐‘“ (๐‘ก) and ๐‘ฅ(๐‘ก) are continuous and zero for negative ๐‘ก.
6.4.4
Three-point beam bending
Let us give another quite different example where the delta function turns up: Representing
point loads on a steel beam. Consider a beam of length ๐ฟ, resting on two simple supports
at the ends. Let ๐‘ฅ denote the position on the beam, and let ๐‘ฆ(๐‘ฅ) denote the deflection of the
beam in the vertical direction. The deflection ๐‘ฆ(๐‘ฅ) satisfies the Euler–Bernoulli equation∗ ,
๐ธ๐ผ
๐‘‘4 ๐‘ฆ
= ๐น(๐‘ฅ),
๐‘‘๐‘ฅ 4
where ๐ธ and ๐ผ are constants† and ๐น(๐‘ฅ) is the force applied per unit length at position ๐‘ฅ. The
situation we are interested in is when the force is applied at a single point as in Figure 6.4.
Figure 6.4: Three-point bending.
The equation becomes
\[ EI\, \frac{d^4 y}{dx^4} = -F \delta(x-a) , \]
where ๐‘ฅ = ๐‘Ž is the point where the mass is applied. The constant ๐น is the force applied and
the minus sign indicates that the force is downward, that is, in the negative ๐‘ฆ direction.
The end points of the beam satisfy the conditions,
\[ y(0) = 0, \quad y''(0) = 0, \quad y(L) = 0, \quad y''(L) = 0 . \]
See § 5.2 for further information about endpoint conditions applied to beams.
Example 6.4.3: Suppose that the length of the beam is 2, and ๐ธ๐ผ = 1 for simplicity. Further
suppose that the force ๐น = 1 is applied at ๐‘ฅ = 1. That is, we have the equation
\[ \frac{d^4 y}{dx^4} = -\delta(x-1) , \]
and the endpoint conditions are
\[ y(0) = 0, \quad y''(0) = 0, \quad y(2) = 0, \quad y''(2) = 0 . \]
∗ Named for the Swiss mathematicians Jacob Bernoulli (1654–1705), Daniel Bernoulli (1700–1782), the
nephew of Jacob, and Leonhard Paul Euler (1707–1783).
† ๐ธ is the elastic modulus and ๐ผ is the second moment of area. Let us not worry about the details and
simply think of these as some given constants.
We could integrate, but using the Laplace transform is even easier. We apply the
transform in the ๐‘ฅ variable rather than the ๐‘ก variable. Let us again denote the transform of
๐‘ฆ(๐‘ฅ) as ๐‘Œ(๐‘ ).
๐‘  4๐‘Œ(๐‘ ) − ๐‘  3 ๐‘ฆ(0) − ๐‘  2 ๐‘ฆ 0(0) − ๐‘  ๐‘ฆ 00(0) − ๐‘ฆ 000(0) = −๐‘’ −๐‘  .
We notice that ๐‘ฆ(0) = 0 and ๐‘ฆ 00(0) = 0. Let us call ๐ถ1 = ๐‘ฆ 0(0) and ๐ถ2 = ๐‘ฆ 000(0). We solve for
๐‘Œ(๐‘ ),
−๐‘’ −๐‘  ๐ถ1 ๐ถ2
๐‘Œ(๐‘ ) = 4 + 2 + 4 .
๐‘ 
๐‘ 
๐‘ 
We take the inverse Laplace transform utilizing the second shifting property (6.1) to take
the inverse of the first term.
๐‘ฆ(๐‘ฅ) =
๐ถ2 3
−(๐‘ฅ − 1)3
๐‘ข(๐‘ฅ − 1) + ๐ถ1 ๐‘ฅ +
๐‘ฅ .
6
6
We still need to apply two of the endpoint conditions. As the conditions are at ๐‘ฅ = 2 we
can simply replace ๐‘ข(๐‘ฅ − 1) = 1 when taking the derivatives. Therefore,
\[ 0 = y(2) = \frac{-(2-1)^3}{6} + C_1 (2) + \frac{C_2}{6}\, 2^3 = \frac{-1}{6} + 2 C_1 + \frac{4}{3}\, C_2 , \]
and
\[ 0 = y''(2) = \frac{-3 \cdot 2 \cdot (2-1)}{6} + \frac{C_2}{6} \cdot 3 \cdot 2 \cdot 2 = -1 + 2 C_2 . \]
Hence $C_2 = \frac{1}{2}$, and solving for $C_1$ using the first equation we obtain $C_1 = \frac{-1}{4}$. Our solution
for the beam deflection is
\[ y(x) = \frac{-(x-1)^3}{6}\, u(x-1) - \frac{x}{4} + \frac{x^3}{12} . \]
6.4.5
Exercises
Exercise 6.4.1: Solve (find the impulse response) $x'' + x' + x = \delta(t)$, $x(0) = 0$, $x'(0) = 0$.
Exercise 6.4.2: Solve (find the impulse response) $x'' + 2x' + x = \delta(t)$, $x(0) = 0$, $x'(0) = 0$.
Exercise 6.4.3: A pulse can come later and can be bigger. Solve $x'' + 4x = 4\delta(t-1)$, $x(0) = 0$,
$x'(0) = 0$.
Exercise 6.4.4: Suppose that ๐‘“ (๐‘ก) and ๐‘”(๐‘ก) are differentiable functions and suppose that ๐‘“ (๐‘ก) =
๐‘”(๐‘ก) = 0 for all ๐‘ก ≤ 0. Show that
\[ (f * g)'(t) = (f' * g)(t) = (f * g')(t) . \]
Exercise 6.4.5: Suppose that $Lx = \delta(t)$, $x(0) = 0$, $x'(0) = 0$, has the solution $x = e^{-t}$ for ๐‘ก > 0.
Find the solution to $Lx = t^2$, $x(0) = 0$, $x'(0) = 0$ for ๐‘ก > 0.
Exercise 6.4.6: Compute $\mathcal{L}^{-1}\left\{ \frac{s^2+s+1}{s^2} \right\}$.
Exercise 6.4.7 (challenging): Solve Example 6.4.3 via integrating 4 times in the ๐‘ฅ variable.
Exercise 6.4.8: Suppose we have a beam of length 1 simply supported at the ends and suppose that
force ๐น = 1 is applied at $x = \frac{3}{4}$ in the downward direction. Suppose that ๐ธ๐ผ = 1 for simplicity. Find
the beam deflection ๐‘ฆ(๐‘ฅ).
Exercise 6.4.101: Solve (find the impulse response) $x'' = \delta(t)$, $x(0) = 0$, $x'(0) = 0$.
Exercise 6.4.102: Solve (find the impulse response) $x' + ax = \delta(t)$, $x(0) = 0$, $x'(0) = 0$.
Exercise 6.4.103: Suppose that $Lx = \delta(t)$, $x(0) = 0$, $x'(0) = 0$, has the solution $x(t) = \cos(t)$ for
๐‘ก > 0. Find (in closed form) the solution to $Lx = \sin(t)$, $x(0) = 0$, $x'(0) = 0$ for ๐‘ก > 0.
Exercise 6.4.104: Compute $\mathcal{L}^{-1}\left\{ \frac{s^2}{s^2+1} \right\}$.
Exercise 6.4.105: Compute $\mathcal{L}^{-1}\left\{ \frac{3 s^2 e^{-s} + 2}{s^2} \right\}$.
6.5  Solving PDEs with the Laplace transform
Note: 1–1.5 lecture, can be skipped
The Laplace transform comes from the same family of transforms as does the Fourier
series∗ , which we used in chapter 4 to solve partial differential equations (PDEs). It is
therefore not surprising that we can also solve PDEs with the Laplace transform.
Given a PDE in two independent variables ๐‘ฅ and ๐‘ก, we use the Laplace transform on
one of the variables (taking the transform of everything in sight), and derivatives in that
variable become multiplications by the transformed variable ๐‘ . The PDE becomes an ODE,
which we solve. Afterwards we invert the transform to find a solution to the original
problem. It is best to see the procedure on an example.
Example 6.5.1: Consider the first order PDE
๐‘ฆ๐‘ก = −๐›ผ๐‘ฆ ๐‘ฅ ,
for ๐‘ฅ > 0, ๐‘ก > 0,
with side conditions
๐‘ฆ(0, ๐‘ก) = ๐ถ,
๐‘ฆ(๐‘ฅ, 0) = 0.
This equation is called the convection equation or sometimes the transport equation, and it
already made an appearance in § 1.9, with different conditions. See Figure 6.5 for a diagram
of the setup.
A physical setup of this equation is a river of solid goo, as we do not want anything to
diffuse. The function ๐‘ฆ is the concentration of some toxic substance†. The variable ๐‘ฅ denotes
position, where ๐‘ฅ = 0 is the location of a factory spewing the toxic substance into the river.
The toxic substance flows into the river so that at ๐‘ฅ = 0 the concentration is always ๐ถ. We
wish to see what happens past the factory, that is, at ๐‘ฅ > 0. Let ๐‘ก be the time, and assume
the factory started operations at ๐‘ก = 0, so that at ๐‘ก = 0 the river is just pure goo.

Figure 6.5: Transport equation on a half line.
Consider a function of two variables ๐‘ฆ(๐‘ฅ, ๐‘ก). Let us fix ๐‘ฅ and transform the ๐‘ก variable.
For convenience, we treat the transformed ๐‘  variable as a parameter, since there are no
derivatives in ๐‘ . That is, we write ๐‘Œ(๐‘ฅ) for the transformed function, and treat it as a
function of ๐‘ฅ, leaving ๐‘  as a parameter.
๐‘Œ(๐‘ฅ) = โ„’ ๐‘ฆ(๐‘ฅ, ๐‘ก) =
∫
∞
๐‘ฆ(๐‘ฅ, ๐‘ก)๐‘’ −๐‘ ๐‘ก ๐‘‘๐‘ .
0
∗ There is a corresponding Fourier transform on the real line as well that looks sort of like the Laplace
transform.
† It’s a river of goo already, we’re not hurting the environment much more.
The transform of a derivative with respect to ๐‘ฅ is just differentiating the transformed
function:
โ„’ ๐‘ฆ ๐‘ฅ (๐‘ฅ, ๐‘ก) =
∫
∞
0
๐‘‘
๐‘ฆ ๐‘ฅ (๐‘ฅ, ๐‘ก)๐‘’ −๐‘ ๐‘ก ๐‘‘๐‘  =
๐‘‘๐‘ฅ
∫
∞
๐‘ฆ(๐‘ฅ, ๐‘ก)๐‘’ −๐‘ ๐‘ก ๐‘‘๐‘  = ๐‘Œ 0(๐‘ฅ).
0
To transform the derivative in ๐‘ก (the variable being transformed), we use the rules from
§ 6.2:
โ„’ ๐‘ฆ๐‘ก (๐‘ฅ, ๐‘ก) = ๐‘ ๐‘Œ(๐‘ฅ) − ๐‘ฆ(๐‘ฅ, 0).
In our specific case, ๐‘ฆ(๐‘ฅ, 0) = 0, and so โ„’ ๐‘ฆ๐‘ก (๐‘ฅ, ๐‘ก) = ๐‘ ๐‘Œ(๐‘ฅ). We transform the equation
to find
๐‘ ๐‘Œ(๐‘ฅ) = −๐›ผ๐‘Œ 0(๐‘ฅ).
This ODE needs an initial condition. The initial condition is the other side condition of the
PDE, the one that depends on ๐‘ฅ. Everything is transformed, so we must also transform
this condition
\[ Y(0) = \mathcal{L}\{y(0,t)\} = \mathcal{L}\{C\} = \frac{C}{s} . \]
We solve the ODE problem $sY(x) = -\alpha Y'(x)$, $Y(0) = \frac{C}{s}$, to find
\[ Y(x) = \frac{C}{s}\, e^{-\frac{s}{\alpha} x} . \]
We are not done, we have ๐‘Œ(๐‘ฅ), but we really want ๐‘ฆ(๐‘ฅ, ๐‘ก). We transform the ๐‘  variable
back to ๐‘ก. Let
\[ u(t) = \begin{cases} 0 & \text{if } t < 0, \\ 1 & \text{otherwise} \end{cases} \]
be the Heaviside function. As
\[ \mathcal{L}\{u(t-a)\} = \int_0^\infty u(t-a)\, e^{-st}\, dt = \int_a^\infty e^{-st}\, dt = \frac{e^{-as}}{s} , \]
then
\[ y(x,t) = \mathcal{L}^{-1}\left\{ \frac{C}{s}\, e^{-\frac{s}{\alpha} x} \right\} = C\, u\bigl(t - x/\alpha\bigr) . \]
In other words,
\[ y(x,t) = \begin{cases} 0 & \text{if } t < x/\alpha , \\ C & \text{otherwise.} \end{cases} \]
See Figure 6.6 on the following page for a diagram of this solution. The line of slope 1/๐›ผ
indicates the wavefront of the toxic substance in the picture as it is leaving the factory.
What the equation does is simply move the initial condition to the right at speed ๐›ผ.
Shhh. . . ๐‘ฆ is not differentiable, it is not even continuous (nobody ever seems to notice).
How could we plug something that’s not differentiable into the equation? Well, just think
Figure 6.6: Wavefront of toxic substance is a line of slope 1/๐›ผ.
of a differentiable function very very close to ๐‘ฆ. Or, if you recognize the derivative of the
Heaviside function as the delta function, then all is well too:
๐‘ฆ๐‘ก (๐‘ฅ, ๐‘ก) =
and
๐‘ฆ ๐‘ฅ (๐‘ฅ, ๐‘ก) =
๐œ• ๐ถ๐‘ข ๐‘ก − ๐‘ฅ/๐›ผ = ๐ถ๐‘ข 0 ๐‘ก − ๐‘ฅ/๐›ผ = ๐ถ๐›ฟ ๐‘ก − ๐‘ฅ/๐›ผ
๐œ•๐‘ก
๐œ• ๐ถ
๐ถ
๐ถ๐‘ข ๐‘ก − ๐‘ฅ/๐›ผ = − ๐‘ข 0 ๐‘ก − ๐‘ฅ/๐›ผ = − ๐›ฟ ๐‘ก − ๐‘ฅ/๐›ผ .
๐›ผ
๐›ผ
๐œ•๐‘ฅ
So ๐‘ฆ๐‘ก = −๐›ผ๐‘ฆ ๐‘ฅ .
The Laplace transform is very good with constant coefficient equations. One advantage
of Laplace is that it easily handles nonhomogeneous side conditions. Let us try a more
complicated example.
Example 6.5.2: Consider
๐‘ฆ๐‘ก + ๐‘ฆ ๐‘ฅ + ๐‘ฆ = 0,
for ๐‘ฅ > 0, ๐‘ก > 0,
๐‘ฆ(0, ๐‘ก) = sin(๐‘ก),
๐‘ฆ(๐‘ฅ, 0) = 0.
Again, we transform ๐‘ก, and we write ๐‘Œ(๐‘ฅ) for the transformed function. As ๐‘ฆ(๐‘ฅ, 0) = 0,
we find
\[ sY(x) + Y'(x) + Y(x) = 0 , \qquad Y(0) = \frac{1}{s^2+1} . \]
The solution of the transformed equation is
๐‘Œ(๐‘ฅ) =
๐‘ 2
1
1
๐‘’ −(๐‘ +1)๐‘ฅ = 2
๐‘’ −๐‘ฅ๐‘  ๐‘’ −๐‘ฅ .
+1
๐‘  +1
Using the second shifting property (6.1) and linearity of the transform, we obtain the
solution
๐‘ฆ(๐‘ฅ, ๐‘ก) = ๐‘’ −๐‘ฅ sin(๐‘ก − ๐‘ฅ)๐‘ข(๐‘ก − ๐‘ฅ).
We can also detect when the problem is ill-posed in the sense that it has no solution. Let
us change the equation to
− ๐‘ฆ๐‘ก + ๐‘ฆ ๐‘ฅ = 0,
for ๐‘ฅ > 0, ๐‘ก > 0,
๐‘ฆ(0, ๐‘ก) = sin(๐‘ก),
๐‘ฆ(๐‘ฅ, 0) = 0.
Then the problem has no solution. First, let us see this in the language of § 1.9. The
characteristic curves are t = -x + C. If \xi is the characteristic coordinate, then we find
the equation ๐‘ฆ๐œ = 0 along the curve, meaning a solution is constant along characteristic
curves. But these curves intersect both the ๐‘ฅ-axis and the ๐‘ก-axis. For example, the curve
๐‘ก = −๐‘ฅ + 1 intersects at (1, 0) and (0, 1). The solution is constant along the curve so ๐‘ฆ(1, 0)
should equal ๐‘ฆ(0, 1). But ๐‘ฆ(1, 0) = 0 and ๐‘ฆ(0, 1) = sin(1) ≠ 0. See Figure 6.7.
Figure 6.7: Ill-posed problem. The solution is constant along the characteristic curve t = -x + 1, which forces y(1,0) = y(0,1); yet the side conditions give y(1,0) = 0 and y(0,1) = \sin(1).
Now consider the transform. The transformed problem is
−๐‘ ๐‘Œ(๐‘ฅ) + ๐‘Œ 0(๐‘ฅ) = 0, ๐‘Œ(0) =
1
,
๐‘ 2 + 1
and the solution ought to be
๐‘Œ(๐‘ฅ) =
๐‘ 2
1
๐‘’ ๐‘ ๐‘ฅ .
+1
Importantly, this Laplace transform does not decay to zero at infinity! That is, since ๐‘ฅ > 0
in the region of interest, then
\lim_{s\to\infty} \frac{1}{s^2+1} e^{sx} = \infty \neq 0.
It almost looks as if we could use the shifting property, but notice that the shift is in the
wrong direction.
Of course, we need not restrict ourselves to first order equations, although the computations become more involved for higher order equations.
Example 6.5.3: Let us use Laplace for the following problem:
๐‘ฆ๐‘ก = ๐‘ฆ ๐‘ฅ๐‘ฅ ,
0 < ๐‘ฅ < ∞, ๐‘ก > 0,
๐‘ฆ ๐‘ฅ (0, ๐‘ก) = ๐‘“ (๐‘ก),
๐‘ฆ(๐‘ฅ, 0) = 0.
Really we also impose other conditions on the solution so that for example the Laplace
transform exists. For example, we might impose that ๐‘ฆ is bounded for each fixed time ๐‘ก.
Transform the equation in the ๐‘ก variable to find
๐‘ ๐‘Œ(๐‘ฅ) = ๐‘Œ 00(๐‘ฅ).
The general solution to this ODE is
Y(x) = A e^{\sqrt{s}\, x} + B e^{-\sqrt{s}\, x}.
First ๐ด = 0, since otherwise ๐‘Œ does not decay to zero as ๐‘  → ∞.
Now consider the boundary condition. Transform Y'(0) = F(s) and so -\sqrt{s}\, B = F(s). In other words,
Y(x) = -F(s) \frac{1}{\sqrt{s}} e^{-\sqrt{s}\, x}.
If we look up the inverse transform in a table such as the one in Appendix B (or we
spend the afternoon doing calculus), we find
\mathcal{L}^{-1}\left\{ e^{-\sqrt{s}\, x} \right\} = \frac{x}{\sqrt{4\pi t^3}} e^{\frac{-x^2}{4t}},
or
\mathcal{L}^{-1}\left\{ \frac{1}{\sqrt{s}} e^{-\sqrt{s}\, x} \right\} = \frac{1}{\sqrt{\pi t}} e^{\frac{-x^2}{4t}}.
So
y(x,t) = \mathcal{L}^{-1}\left\{ -F(s) \frac{1}{\sqrt{s}} e^{-\sqrt{s}\, x} \right\} = -\int_0^t f(\tau) \frac{1}{\sqrt{\pi(t-\tau)}} e^{\frac{-x^2}{4(t-\tau)}} \, d\tau.
Laplace can solve problems where separation of variables fails. Laplace does not mind
nonhomogeneity, but it is essentially only useful for constant coefficient equations.
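The convolution above is also easy to evaluate numerically. Here is a small sketch (the sample data f(t) = e^{-t} is our own choice, matching Exercise 6.5.6 below); substituting u = \sqrt{t - \tau} removes the integrable singularity at \tau = t:

    import numpy as np
    from scipy.integrate import quad

    def y(x, t, f):
        # y(x,t) = -int_0^t f(tau) e^{-x^2/(4(t-tau))} / sqrt(pi (t-tau)) dtau,
        # rewritten with the substitution u = sqrt(t - tau), dtau = -2u du.
        integrand = lambda u: f(t - u**2) * np.exp(-x**2 / (4 * u**2))
        val, _ = quad(integrand, 0, np.sqrt(t))
        return -2 / np.sqrt(np.pi) * val

    print(y(1.0, 2.0, lambda tau: np.exp(-tau)))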
6.5.1 Exercises
Exercise 6.5.1: Solve
y_t + y_x = 1, \qquad 0 < x < \infty,\ t > 0,
y(0,t) = 1, \qquad y(x,0) = 0.

Exercise 6.5.2: Solve
y_t + \alpha y_x = 0, \qquad 0 < x < \infty,\ t > 0,
y(0,t) = t, \qquad y(x,0) = 0.

Exercise 6.5.3: Solve
y_t + 2 y_x = x + t, \qquad 0 < x < \infty,\ t > 0,
y(0,t) = 0, \qquad y(x,0) = 0.

Exercise 6.5.4: For an \alpha > 0, solve
y_t + \alpha y_x + y = 0, \qquad 0 < x < \infty,\ t > 0,
y(0,t) = \sin(t), \qquad y(x,0) = 0.

Exercise 6.5.5: Find the corresponding ODE problem for Y(x), after transforming the t variable:
y_{tt} + 3 y_{xx} + y_{xt} + 3 y_x + y = \sin(x) + t, \qquad 0 < x < 1,\ t > 0,
y(0,t) = 1, \qquad y(1,t) = t, \qquad y(x,0) = 1 - x, \qquad y_t(x,0) = 1.
Do not solve the problem.
Exercise 6.5.6: Write down a solution to
y_t = y_{xx}, \qquad 0 < x < \infty,\ t > 0,
y_x(0,t) = e^{-t}, \qquad y(x,0) = 0,
as a definite integral (convolution).

Exercise 6.5.7: Use the Laplace transform in t to solve
y_{tt} = y_{xx}, \qquad -\infty < x < \infty,\ t > 0,
y_t(x,0) = \sin(x), \qquad y(x,0) = 0.
Hint: Note that e^{sx} does not go to zero as s \to \infty for positive x, and e^{-sx} does not go to zero as s \to \infty for negative x.

Exercise 6.5.101: Solve
y_t + y_x = 1, \qquad 0 < x < \infty,\ t > 0,
y(0,t) = 0, \qquad y(x,0) = 0.

Exercise 6.5.102: For a c > 0, solve
y_t + y_x + c y = 0, \qquad 0 < x < \infty,\ t > 0,
y(0,t) = \sin(t), \qquad y(x,0) = 0.

Exercise 6.5.103: Find the corresponding ODE problem for Y(x), after transforming the t variable:
y_{tt} + 3 y_{xx} + y = x + t, \qquad -1 < x < 1,\ t > 0,
y(-1,t) = 0, \qquad y(1,t) = 0, \qquad y(x,0) = 1 - x^2, \qquad y_t(x,0) = 0.
Do not solve the problem.

Exercise 6.5.104: Use the Laplace transform in t to solve
y_{tt} = y_{xx}, \qquad -\infty < x < \infty,\ t > 0,
y_t(x,0) = x^2, \qquad y(x,0) = 0.
Hint: Note that e^{sx} does not go to zero as s \to \infty for positive x, and e^{-sx} does not go to zero as s \to \infty for negative x.
Chapter 7
Power series methods
7.1 Power series
Note: 1 or 1.5 lecture, §8.1 in [EP], §5.1 in [BD]
Many functions can be written in terms of a power series
\sum_{k=0}^\infty a_k (x - x_0)^k .
If we assume that a solution of a differential equation is written as a power series, then
perhaps we can use a method reminiscent of undetermined coefficients. That is, we will
try to solve for the numbers ๐‘Ž ๐‘˜ . Before we can carry out this process, let us review some
results and concepts about power series.
7.1.1 Definition
As we said, a power series is an expression such as
\sum_{k=0}^\infty a_k (x - x_0)^k = a_0 + a_1 (x - x_0) + a_2 (x - x_0)^2 + a_3 (x - x_0)^3 + \cdots ,    (7.1)
where ๐‘Ž0 , ๐‘Ž1 , ๐‘Ž2 , . . . , ๐‘Ž ๐‘˜ , . . . and ๐‘ฅ 0 are constants. Let
S_n(x) = \sum_{k=0}^n a_k (x - x_0)^k = a_0 + a_1 (x - x_0) + a_2 (x - x_0)^2 + a_3 (x - x_0)^3 + \cdots + a_n (x - x_0)^n ,
denote the so-called partial sum. If for some ๐‘ฅ, the limit
lim ๐‘†๐‘› (๐‘ฅ) = lim
๐‘›→∞
๐‘›→∞
๐‘›
Õ
๐‘˜=0
๐‘Ž ๐‘˜ (๐‘ฅ − ๐‘ฅ0 ) ๐‘˜
exists, then we say that the series (7.1) converges at ๐‘ฅ. At ๐‘ฅ = ๐‘ฅ0 , the series always converges
to ๐‘Ž0 . When (7.1) converges at any other point ๐‘ฅ ≠ ๐‘ฅ 0 , we say that (7.1) is a convergent power
series, and we write
\sum_{k=0}^\infty a_k (x - x_0)^k = \lim_{n\to\infty} \sum_{k=0}^n a_k (x - x_0)^k .
If the series does not converge for any point ๐‘ฅ ≠ ๐‘ฅ0 , we say that the series is divergent.
Example 7.1.1: The series
\sum_{k=0}^\infty \frac{1}{k!} x^k = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots
is convergent for any ๐‘ฅ. Recall that ๐‘˜! = 1 · 2 · 3 · · · ๐‘˜ is the factorial. By convention we
define 0! = 1. You may recall that this series converges to ๐‘’ ๐‘ฅ .
We say that (7.1) converges absolutely at x whenever the limit
\lim_{n\to\infty} \sum_{k=0}^n |a_k| \, |x - x_0|^k
exists. That is, the series \sum_{k=0}^\infty |a_k| \, |x - x_0|^k is convergent. If (7.1) converges absolutely at x, then it converges at x. However, the opposite implication is not true.

Example 7.1.2: The series
\sum_{k=1}^\infty \frac{1}{k} x^k
converges absolutely for all x in the interval (-1, 1). It converges at x = -1, as \sum_{k=1}^\infty \frac{(-1)^k}{k} converges (conditionally) by the alternating series test. The power series does not converge absolutely at x = -1, because \sum_{k=1}^\infty \frac{1}{k} does not converge. The series diverges at x = 1.
7.1.2 Radius of convergence
If a power series converges absolutely at some x_1, then for all x such that |x - x_0| \leq |x_1 - x_0| (that is, x is closer than x_1 to x_0) we have |a_k (x - x_0)^k| \leq |a_k (x_1 - x_0)^k| for all k. As the numbers |a_k (x_1 - x_0)^k| sum to some finite limit, summing smaller positive numbers |a_k (x - x_0)^k| must also have a finite limit. Hence, the series must converge absolutely at x.
Theorem 7.1.1. For a power series (7.1), there exists a number ๐œŒ (we allow ๐œŒ = ∞) called the
radius of convergence such that the series converges absolutely on the interval (๐‘ฅ0 − ๐œŒ, ๐‘ฅ0 + ๐œŒ)
and diverges for ๐‘ฅ < ๐‘ฅ0 − ๐œŒ and ๐‘ฅ > ๐‘ฅ 0 + ๐œŒ. We write ๐œŒ = ∞ if the series converges for all ๐‘ฅ.
See Figure 7.1 on the facing page. In Example 7.1.1 the radius of convergence is ๐œŒ = ∞
as the series converges everywhere. In Example 7.1.2 the radius of convergence is ๐œŒ = 1.
We note that ๐œŒ = 0 is another way of saying that the series is divergent.
Figure 7.1: Convergence of a power series: the series diverges for x < x_0 - \rho, converges absolutely on (x_0 - \rho, x_0 + \rho), and diverges for x > x_0 + \rho.
A useful test for convergence of a series is the ratio test. Suppose that
\sum_{k=0}^\infty c_k
is a series and the limit
L = \lim_{k\to\infty} \left| \frac{c_{k+1}}{c_k} \right|
exists. Then the series converges absolutely if L < 1 and diverges if L > 1.
We apply this test to the series (7.1). Let c_k = a_k (x - x_0)^k in the test. Compute
L = \lim_{k\to\infty} \left| \frac{c_{k+1}}{c_k} \right| = \lim_{k\to\infty} \left| \frac{a_{k+1} (x - x_0)^{k+1}}{a_k (x - x_0)^k} \right| = \lim_{k\to\infty} \left| \frac{a_{k+1}}{a_k} \right| |x - x_0| .
Define A by
A = \lim_{k\to\infty} \left| \frac{a_{k+1}}{a_k} \right| .
Then if 1 > L = A |x - x_0|, the series (7.1) converges absolutely. If A = 0, then the series always converges. If A > 0, then the series converges absolutely if |x - x_0| < 1/A, and diverges if |x - x_0| > 1/A. That is, the radius of convergence is 1/A.
A similar test is the root test. Suppose
L = \lim_{k\to\infty} \sqrt[k]{|c_k|}
exists. Then \sum_{k=0}^\infty c_k converges absolutely if L < 1 and diverges if L > 1. We can use the same calculation as above to find A. Let us summarize.

Theorem 7.1.2 (Ratio and root tests for power series). Consider a power series
\sum_{k=0}^\infty a_k (x - x_0)^k
such that
A = \lim_{k\to\infty} \left| \frac{a_{k+1}}{a_k} \right| \qquad \text{or} \qquad A = \lim_{k\to\infty} \sqrt[k]{|a_k|}
exists. If A = 0, then the radius of convergence of the series is \infty. Otherwise, the radius of convergence is 1/A.
Example 7.1.3: Suppose we have the series
\sum_{k=0}^\infty 2^{-k} (x - 1)^k .
First we compute the limit in the ratio test,
A = \lim_{k\to\infty} \left| \frac{a_{k+1}}{a_k} \right| = \lim_{k\to\infty} \frac{2^{-k-1}}{2^{-k}} = \lim_{k\to\infty} 2^{-1} = 1/2.
Therefore the radius of convergence is 2, and the series converges absolutely on the interval (-1, 3). And we could just as well have used the root test:
A = \lim_{k\to\infty} \sqrt[k]{|a_k|} = \lim_{k\to\infty} \sqrt[k]{|2^{-k}|} = \lim_{k\to\infty} 2^{-1} = 1/2.

Example 7.1.4: Consider
\sum_{k=0}^\infty \frac{1}{k^k} x^k .
Compute the limit for the root test,
A = \lim_{k\to\infty} \sqrt[k]{|a_k|} = \lim_{k\to\infty} \sqrt[k]{\frac{1}{k^k}} = \lim_{k\to\infty} \frac{1}{k} = 0.
So the radius of convergence is ∞: the series converges everywhere. The ratio test would
also work here.
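Theorem 7.1.2 is easy to turn into a rough numerical estimate: take the ratio |a_{k+1}/a_k| at some large k as a stand-in for the limit. A small Python sketch (the helper radius_from_ratio and the cutoff kmax are our own illustration):

    def radius_from_ratio(a, kmax=60):
        # Approximate A = lim |a_{k+1}/a_k|; the radius of convergence is 1/A.
        A = abs(a(kmax + 1) / a(kmax))
        return float('inf') if A == 0 else 1 / A

    print(radius_from_ratio(lambda k: 2.0**(-k)))   # 2.0, matching Example 7.1.3
    print(radius_from_ratio(lambda k: 1.0 / k))     # about 1, matching Example 7.1.2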
The root or the ratio test does not always apply. That is, the limit of \bigl| \frac{a_{k+1}}{a_k} \bigr| or \sqrt[k]{|a_k|} might not exist. There exist more sophisticated ways of finding the radius of convergence, but those would be beyond the scope of this chapter. The two methods above cover many of the series that arise in practice. Often if the root test applies, so does the ratio test, and vice versa, though the limit might be easier to compute in one way than the other.
7.1.3 Analytic functions
Functions represented by power series are called analytic functions. Not every function is
analytic, although the majority of the functions you have seen in calculus are.
An analytic function f(x) is equal to its Taylor series∗ near a point x_0. That is, for x near x_0 we have
f(x) = \sum_{k=0}^\infty \frac{f^{(k)}(x_0)}{k!} (x - x_0)^k ,    (7.2)
where f^{(k)}(x_0) denotes the k-th derivative of f(x) at the point x_0.
∗Named after the English mathematician Sir Brook Taylor (1685–1731).
For example, sine is an analytic function and its Taylor series around ๐‘ฅ 0 = 0 is given by
sin(๐‘ฅ) =
∞
Õ
(−1)๐‘›
๐‘›=0
(2๐‘› + 1)!
๐‘ฅ 2๐‘›+1 .
In Figure 7.2 we plot sin(๐‘ฅ) and the truncations of the series up to degree 5 and 9. You can
see that the approximation is very good for ๐‘ฅ near 0, but gets worse for ๐‘ฅ further away
from 0. This is what happens in general. To get a good approximation far away from ๐‘ฅ0
you need to take more and more terms of the Taylor series.
Figure 7.2: The sine function and its Taylor approximations around x_0 = 0 of 5th and 9th degree.
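A numerical comparison makes the same point as the figure. Below is a minimal Python sketch (the helper sin_taylor is our own) that sums the Taylor series for sine up to a given degree; the degree-9 truncation is excellent near 0 and poor at x = 5:

    import math

    def sin_taylor(x, deg):
        # Partial sum of sum_n (-1)^n x^(2n+1)/(2n+1)! up to the given degree.
        total, term, n = 0.0, x, 0
        while 2 * n + 1 <= deg:
            total += term
            term *= -x * x / ((2 * n + 2) * (2 * n + 3))
            n += 1
        return total

    for x in (0.5, 2.0, 5.0):
        print(x, math.sin(x), sin_taylor(x, 5), sin_taylor(x, 9))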
7.1.4 Manipulating power series
One of the main properties of power series that we will use is that we can differentiate them term by term. That is, suppose that \sum a_k (x - x_0)^k is a convergent power series. Then for x in the radius of convergence we have
\frac{d}{dx} \left[ \sum_{k=0}^\infty a_k (x - x_0)^k \right] = \sum_{k=1}^\infty k a_k (x - x_0)^{k-1} .
Notice that the term corresponding to ๐‘˜ = 0 disappeared as it was constant. The radius of
convergence of the differentiated series is the same as that of the original.
Example 7.1.5: Let us show that the exponential y = e^x solves y' = y. First write
y = e^x = \sum_{k=0}^\infty \frac{1}{k!} x^k .
Now differentiate
y' = \sum_{k=1}^\infty k \frac{1}{k!} x^{k-1} = \sum_{k=1}^\infty \frac{1}{(k-1)!} x^{k-1} .
We reindex the series by simply replacing ๐‘˜ with ๐‘˜ + 1. The series does not change, what
changes is simply how we write it. After reindexing the series starts at ๐‘˜ = 0 again.
\sum_{k=1}^\infty \frac{1}{(k-1)!} x^{k-1} = \sum_{k+1=1}^\infty \frac{1}{\bigl((k+1)-1\bigr)!} x^{(k+1)-1} = \sum_{k=0}^\infty \frac{1}{k!} x^k .
That was precisely the power series for e^x we started with, so we showed that \frac{d}{dx}[e^x] = e^x.
Convergent power series can be added and multiplied together, and multiplied by
constants using the following rules. First, we can add series by adding term by term,
\left( \sum_{k=0}^\infty a_k (x - x_0)^k \right) + \left( \sum_{k=0}^\infty b_k (x - x_0)^k \right) = \sum_{k=0}^\infty (a_k + b_k) (x - x_0)^k .
We can multiply by constants,
\alpha \left( \sum_{k=0}^\infty a_k (x - x_0)^k \right) = \sum_{k=0}^\infty \alpha a_k (x - x_0)^k .
We can also multiply series together,
\left( \sum_{k=0}^\infty a_k (x - x_0)^k \right) \left( \sum_{k=0}^\infty b_k (x - x_0)^k \right) = \sum_{k=0}^\infty c_k (x - x_0)^k ,
where ๐‘ ๐‘˜ = ๐‘Ž 0 ๐‘ ๐‘˜ + ๐‘Ž1 ๐‘ ๐‘˜−1 + · · · + ๐‘Ž ๐‘˜ ๐‘0 . The radius of convergence of the sum or the product
is at least the minimum of the radii of convergence of the two series involved.
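The product rule above is just a finite convolution of the coefficient lists, so it is simple to compute. A short Python sketch (the function name series_product is our own):

    def series_product(a, b):
        # Cauchy product: c_k = a_0 b_k + a_1 b_{k-1} + ... + a_k b_0,
        # computed for as many coefficients as both inputs provide.
        n = min(len(a), len(b))
        return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(n)]

    # (1 + x + x^2 + ...)^2 = 1 + 2x + 3x^2 + 4x^3 + ... :
    print(series_product([1, 1, 1, 1], [1, 1, 1, 1]))   # [1, 2, 3, 4]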
7.1.5 Power series for rational functions
Polynomials are simply finite power series. That is, a polynomial is a power series where
the ๐‘Ž ๐‘˜ are zero for all ๐‘˜ large enough. We can always expand a polynomial as a power series
about any point ๐‘ฅ0 by writing the polynomial as a polynomial in (๐‘ฅ − ๐‘ฅ0 ). For example, let
us write 2๐‘ฅ 2 − 3๐‘ฅ + 4 as a power series around ๐‘ฅ0 = 1:
2๐‘ฅ 2 − 3๐‘ฅ + 4 = 3 + (๐‘ฅ − 1) + 2(๐‘ฅ − 1)2 .
In other words ๐‘Ž 0 = 3, ๐‘Ž1 = 1, ๐‘Ž 2 = 2, and all other ๐‘Ž ๐‘˜ = 0. To do this, we know that ๐‘Ž ๐‘˜ = 0
for all ๐‘˜ ≥ 3 as the polynomial is of degree 2. We write ๐‘Ž 0 + ๐‘Ž 1 (๐‘ฅ − 1) + ๐‘Ž 2 (๐‘ฅ − 1)2 , we
expand, and we solve for ๐‘Ž 0 , ๐‘Ž1 , and ๐‘Ž 2 . We could have also differentiated at ๐‘ฅ = 1 and
used the Taylor series formula (7.2).
Let us look at rational functions, that is, ratios of polynomials. An important fact is
that a series for a function only defines the function on an interval even if the function is
defined elsewhere. For example, for −1 < ๐‘ฅ < 1,
\frac{1}{1-x} = \sum_{k=0}^\infty x^k = 1 + x + x^2 + \cdots
This series is called the geometric series. The ratio test tells us that the radius of convergence is 1. The series diverges for x \leq -1 and x \geq 1, even though \frac{1}{1-x} is defined for all x \neq 1.
We can use the geometric series together with rules for addition and multiplication of
power series to expand rational functions around a point, as long as the denominator is
not zero at ๐‘ฅ 0 . Note that as for polynomials, we could equivalently use the Taylor series
expansion (7.2).
Example 7.1.6: Expand \frac{x}{1+2x+x^2} as a power series around the origin (x_0 = 0) and find the radius of convergence.
First, write 1 + 2x + x^2 = (1+x)^2 = \bigl(1 - (-x)\bigr)^2. Compute
\frac{x}{1+2x+x^2} = x \left( \frac{1}{1-(-x)} \right)^2 = x \left( \sum_{k=0}^\infty (-1)^k x^k \right)^2 = x \left( \sum_{k=0}^\infty c_k x^k \right) = \sum_{k=0}^\infty c_k x^{k+1} ,
where to get c_k, we use the formula for the product of series. We obtain c_0 = 1, c_1 = -1 - 1 = -2, c_2 = 1 + 1 + 1 = 3, etc. Therefore
\frac{x}{1+2x+x^2} = \sum_{k=1}^\infty (-1)^{k+1} k x^k = x - 2x^2 + 3x^3 - 4x^4 + \cdots
The radius of convergence is at least 1. We use the ratio test
\lim_{k\to\infty} \left| \frac{a_{k+1}}{a_k} \right| = \lim_{k\to\infty} \left| \frac{(-1)^{k+2} (k+1)}{(-1)^{k+1} k} \right| = \lim_{k\to\infty} \frac{k+1}{k} = 1.
So the radius of convergence is actually equal to 1.
When the rational function is more complicated, it is also possible to use the method of partial fractions. For example, to find the Taylor series for \frac{x^3+x}{x^2-1}, we write
\frac{x^3+x}{x^2-1} = x + \frac{1}{1+x} - \frac{1}{1-x} = x + \sum_{k=0}^\infty (-1)^k x^k - \sum_{k=0}^\infty x^k = -x + \sum_{\substack{k=3 \\ k \text{ odd}}}^\infty (-2) x^k .
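Both expansions in this subsection can be checked with SymPy's series routine (a quick verification sketch of our own, not part of the text):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.series(x / (1 + 2*x + x**2), x, 0, 5))
    # x - 2*x**2 + 3*x**3 - 4*x**4 + O(x**5)
    print(sp.series((x**3 + x) / (x**2 - 1), x, 0, 8))
    # -x - 2*x**3 - 2*x**5 - 2*x**7 + O(x**8)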
7.1.6 Exercises
Exercise 7.1.1: Is the power series \sum_{k=0}^\infty e^k x^k convergent? If so, what is the radius of convergence?

Exercise 7.1.2: Is the power series \sum_{k=0}^\infty k x^k convergent? If so, what is the radius of convergence?

Exercise 7.1.3: Is the power series \sum_{k=0}^\infty k! \, x^k convergent? If so, what is the radius of convergence?

Exercise 7.1.4: Is the power series \sum_{k=0}^\infty \frac{1}{(2k)!} (x-10)^k convergent? If so, what is the radius of convergence?
Exercise 7.1.5: Determine the Taylor series for sin ๐‘ฅ around the point ๐‘ฅ0 = ๐œ‹.
Exercise 7.1.6: Determine the Taylor series for ln ๐‘ฅ around the point ๐‘ฅ0 = 1, and find the radius of
convergence.
Exercise 7.1.7: Determine the Taylor series and its radius of convergence of \frac{1}{1+x} around x_0 = 0.

Exercise 7.1.8: Determine the Taylor series and its radius of convergence of \frac{x}{4-x^2} around x_0 = 0. Hint: You will not be able to use the ratio test.
Exercise 7.1.9: Expand ๐‘ฅ 5 + 5๐‘ฅ + 1 as a power series around ๐‘ฅ 0 = 5.
Exercise 7.1.10: Suppose that the ratio test applies to a series \sum_{k=0}^\infty a_k x^k. Show, using the ratio test, that the radius of convergence of the differentiated series is the same as that of the original series.
Exercise 7.1.11: Suppose that ๐‘“ is an analytic function such that ๐‘“ (๐‘›) (0) = ๐‘›. Find ๐‘“ (1).
Exercise 7.1.101: Is the power series \sum_{n=1}^\infty (0.1)^n x^n convergent? If so, what is the radius of convergence?
Exercise 7.1.102 (challenging): Is the power series \sum_{n=1}^\infty \frac{n!}{n^n} x^n convergent? If so, what is the radius of convergence?
Exercise 7.1.103: Using the geometric series, expand \frac{1}{1-x} around x_0 = 2. For what x does the series converge?
Exercise 7.1.104 (challenging): Find the Taylor series for ๐‘ฅ 7 ๐‘’ ๐‘ฅ around ๐‘ฅ0 = 0.
Exercise 7.1.105 (challenging): Imagine ๐‘“ and ๐‘” are analytic functions such that ๐‘“ (๐‘˜) (0) = ๐‘” (๐‘˜) (0)
for all large enough ๐‘˜. What can you say about ๐‘“ (๐‘ฅ) − ๐‘”(๐‘ฅ)?
7.2 Series solutions of linear second order ODEs
Note: 1 or 1.5 lecture, §8.2 in [EP], §5.2 and §5.3 in [BD]
Suppose we have a linear second order homogeneous ODE of the form
๐‘(๐‘ฅ)๐‘ฆ 00 + ๐‘ž(๐‘ฅ)๐‘ฆ 0 + ๐‘Ÿ(๐‘ฅ)๐‘ฆ = 0.
Suppose that ๐‘(๐‘ฅ), ๐‘ž(๐‘ฅ), and ๐‘Ÿ(๐‘ฅ) are polynomials. We will try a solution of the form
y = \sum_{k=0}^\infty a_k (x - x_0)^k
and solve for the ๐‘Ž ๐‘˜ to try to obtain a solution defined in some interval around ๐‘ฅ 0 .
The point ๐‘ฅ0 is called an ordinary point if ๐‘(๐‘ฅ0 ) ≠ 0. That is, the functions
๐‘ž(๐‘ฅ)
๐‘(๐‘ฅ)
and
๐‘Ÿ(๐‘ฅ)
๐‘(๐‘ฅ)
are defined for ๐‘ฅ near ๐‘ฅ0 . If ๐‘(๐‘ฅ0 ) = 0, then we say ๐‘ฅ 0 is a singular point. Handling singular
points is harder than ordinary points and so we now focus only on ordinary points.
Example 7.2.1: Let us start with a very simple example
๐‘ฆ 00 − ๐‘ฆ = 0.
Let us try a power series solution near ๐‘ฅ0 = 0, which is an ordinary point. Every point is an
ordinary point in fact, as the equation is constant coefficient. We already know we should
obtain exponentials or the hyperbolic sine and cosine, but let us pretend we do not know
this.
We try
y = \sum_{k=0}^\infty a_k x^k .
If we differentiate, the ๐‘˜ = 0 term is a constant and hence disappears. We therefore get
y' = \sum_{k=1}^\infty k a_k x^{k-1} .
We differentiate yet again to obtain (now the ๐‘˜ = 1 term disappears)
y'' = \sum_{k=2}^\infty k (k-1) a_k x^{k-2} .
We reindex the series (replace ๐‘˜ with ๐‘˜ + 2) to obtain
y'' = \sum_{k=0}^\infty (k+2)(k+1) a_{k+2} x^k .
Now we plug ๐‘ฆ and ๐‘ฆ 00 into the differential equation
0 = ๐‘ฆ 00 − ๐‘ฆ =
!
∞
Õ
(๐‘˜ + 2) (๐‘˜ + 1) ๐‘Ž ๐‘˜+2 ๐‘ฅ ๐‘˜ −
๐‘˜=0
=
=
!
๐‘Ž๐‘˜ ๐‘ฅ๐‘˜
๐‘˜=0
∞ Õ
๐‘˜=0
∞
Õ
∞
Õ
๐‘˜
(๐‘˜ + 2) (๐‘˜ + 1) ๐‘Ž ๐‘˜+2 ๐‘ฅ − ๐‘Ž ๐‘˜ ๐‘ฅ
๐‘˜
(๐‘˜ + 2) (๐‘˜ + 1) ๐‘Ž ๐‘˜+2 − ๐‘Ž ๐‘˜ ๐‘ฅ ๐‘˜ .
๐‘˜=0
As y'' - y is supposed to be equal to 0, we know that the coefficients of the resulting series must be equal to 0. Therefore,
(k+2)(k+1) a_{k+2} - a_k = 0, \qquad \text{or} \qquad a_{k+2} = \frac{a_k}{(k+2)(k+1)} .
The equation above is called a recurrence relation for the coefficients of the power series. It
did not matter what ๐‘Ž 0 or ๐‘Ž1 was. They can be arbitrary. But once we pick ๐‘Ž 0 and ๐‘Ž 1 , then
all other coefficients are determined by the recurrence relation.
Let us see what the coefficients must be. First, ๐‘Ž 0 and ๐‘Ž1 are arbitrary. Then,
a_2 = \frac{a_0}{2}, \quad a_3 = \frac{a_1}{(3)(2)}, \quad a_4 = \frac{a_2}{(4)(3)} = \frac{a_0}{(4)(3)(2)}, \quad a_5 = \frac{a_3}{(5)(4)} = \frac{a_1}{(5)(4)(3)(2)}, \quad \ldots
So for even k, that is k = 2n, we have
a_k = a_{2n} = \frac{a_0}{(2n)!} ,
and for odd k, that is k = 2n+1, we have
a_k = a_{2n+1} = \frac{a_1}{(2n+1)!} .
Let us write down the series
y = \sum_{k=0}^\infty a_k x^k = \sum_{n=0}^\infty \left( \frac{a_0}{(2n)!} x^{2n} + \frac{a_1}{(2n+1)!} x^{2n+1} \right) = a_0 \sum_{n=0}^\infty \frac{1}{(2n)!} x^{2n} + a_1 \sum_{n=0}^\infty \frac{1}{(2n+1)!} x^{2n+1} .
We recognize the two series as the hyperbolic sine and cosine. Therefore,
๐‘ฆ = ๐‘Ž 0 cosh ๐‘ฅ + ๐‘Ž 1 sinh ๐‘ฅ.
Of course, in general we will not be able to recognize the series that appears, since
usually there will not be any elementary function that matches it. In that case we will be
content with the series.
Example 7.2.2: Let us do a more complex example. Consider Airy's equation∗:
y'' - xy = 0,
near the point x_0 = 0. Note that x_0 = 0 is an ordinary point.
∗Named after the English mathematician Sir George Biddell Airy (1801–1892).
We try
y = \sum_{k=0}^\infty a_k x^k .
We differentiate twice (as above) to obtain
y'' = \sum_{k=2}^\infty k (k-1) a_k x^{k-2} .
We plug y into the equation
0 = y'' - xy = \left( \sum_{k=2}^\infty k (k-1) a_k x^{k-2} \right) - x \left( \sum_{k=0}^\infty a_k x^k \right) = \left( \sum_{k=2}^\infty k (k-1) a_k x^{k-2} \right) - \left( \sum_{k=0}^\infty a_k x^{k+1} \right) .
We reindex to make things easier to sum
0 = y'' - xy = 2a_2 + \left( \sum_{k=1}^\infty (k+2)(k+1) a_{k+2} x^k \right) - \left( \sum_{k=1}^\infty a_{k-1} x^k \right) = 2a_2 + \sum_{k=1}^\infty \Bigl( (k+2)(k+1) a_{k+2} - a_{k-1} \Bigr) x^k .
Again y'' - xy is supposed to be 0, so a_2 = 0, and
(k+2)(k+1) a_{k+2} - a_{k-1} = 0, \qquad \text{or} \qquad a_{k+2} = \frac{a_{k-1}}{(k+2)(k+1)} .
We jump in steps of three. First, since a_2 = 0 we must have a_5 = 0, a_8 = 0, a_{11} = 0, etc. In general, a_{3n+2} = 0.
The constants ๐‘Ž 0 and ๐‘Ž 1 are arbitrary and we obtain
๐‘Ž3 =
๐‘Ž0
,
(3)(2)
๐‘Ž4 =
๐‘Ž1
,
(4)(3)
๐‘Ž6 =
๐‘Ž3
๐‘Ž0
=
,
(6)(5) (6)(5)(3)(2)
๐‘Ž7 =
For ๐‘Ž ๐‘˜ where ๐‘˜ is a multiple of 3, that is ๐‘˜ = 3๐‘› we notice that
๐‘Ž 3๐‘› =
๐‘Ž0
.
(2)(3)(5)(6) · · · (3๐‘› − 1)(3๐‘›)
For ๐‘Ž ๐‘˜ where ๐‘˜ = 3๐‘› + 1, we notice
๐‘Ž3๐‘›+1 =
๐‘Ž1
.
(3)(4)(6)(7) · · · (3๐‘›)(3๐‘› + 1)
๐‘Ž4
๐‘Ž1
=
,
(7)(6) (7)(6)(4)(3)
...
In other words, if we write down the series for y, it has two parts
y = a_0 + \frac{a_0}{6} x^3 + \frac{a_0}{180} x^6 + \cdots + \frac{a_0}{(2)(3)(5)(6) \cdots (3n-1)(3n)} x^{3n} + \cdots
  + a_1 x + \frac{a_1}{12} x^4 + \frac{a_1}{504} x^7 + \cdots + \frac{a_1}{(3)(4)(6)(7) \cdots (3n)(3n+1)} x^{3n+1} + \cdots
= a_0 \left( 1 + \frac{1}{6} x^3 + \frac{1}{180} x^6 + \cdots + \frac{1}{(2)(3)(5)(6) \cdots (3n-1)(3n)} x^{3n} + \cdots \right)
  + a_1 \left( x + \frac{1}{12} x^4 + \frac{1}{504} x^7 + \cdots + \frac{1}{(3)(4)(6)(7) \cdots (3n)(3n+1)} x^{3n+1} + \cdots \right) .
We define
y_1(x) = 1 + \frac{1}{6} x^3 + \frac{1}{180} x^6 + \cdots + \frac{1}{(2)(3)(5)(6) \cdots (3n-1)(3n)} x^{3n} + \cdots ,
y_2(x) = x + \frac{1}{12} x^4 + \frac{1}{504} x^7 + \cdots + \frac{1}{(3)(4)(6)(7) \cdots (3n)(3n+1)} x^{3n+1} + \cdots ,
and write the general solution to the equation as y(x) = a_0 y_1(x) + a_1 y_2(x). If we plug in x = 0 into the power series for y_1 and y_2, we find y_1(0) = 1 and y_2(0) = 0. Similarly, y_1'(0) = 0 and y_2'(0) = 1. Therefore y = a_0 y_1 + a_1 y_2 is a solution that satisfies the initial conditions y(0) = a_0 and y'(0) = a_1.
Figure 7.3: The two solutions y_1 and y_2 to Airy's equation.
The functions ๐‘ฆ1 and ๐‘ฆ2 cannot be written in terms of the elementary functions that you
know. See Figure 7.3 for the plot of the solutions ๐‘ฆ1 and ๐‘ฆ2 . These functions have many
interesting properties. For example, they are oscillatory for negative ๐‘ฅ (like solutions to
๐‘ฆ 00 + ๐‘ฆ = 0) and for positive ๐‘ฅ they grow without bound (like solutions to ๐‘ฆ 00 − ๐‘ฆ = 0).
Sometimes a solution may turn out to be a polynomial.
Example 7.2.3: Let us find a solution to the so-called Hermite’s equation of order ๐‘› ∗ :
๐‘ฆ 00 − 2๐‘ฅ๐‘ฆ 0 + 2๐‘›๐‘ฆ = 0.
Let us find a solution around the point ๐‘ฅ 0 = 0. We try
y = \sum_{k=0}^\infty a_k x^k .
We differentiate (as above) to obtain
y' = \sum_{k=1}^\infty k a_k x^{k-1}, \qquad y'' = \sum_{k=2}^\infty k (k-1) a_k x^{k-2} .
Now we plug into the equation
0 = y'' - 2xy' + 2ny
= \left( \sum_{k=2}^\infty k(k-1) a_k x^{k-2} \right) - 2x \left( \sum_{k=1}^\infty k a_k x^{k-1} \right) + 2n \left( \sum_{k=0}^\infty a_k x^k \right)
= \left( \sum_{k=2}^\infty k(k-1) a_k x^{k-2} \right) - \left( \sum_{k=1}^\infty 2k a_k x^k \right) + \left( \sum_{k=0}^\infty 2n a_k x^k \right)
= \left( 2a_2 + \sum_{k=1}^\infty (k+2)(k+1) a_{k+2} x^k \right) - \left( \sum_{k=1}^\infty 2k a_k x^k \right) + \left( 2n a_0 + \sum_{k=1}^\infty 2n a_k x^k \right)
= 2a_2 + 2n a_0 + \sum_{k=1}^\infty \Bigl( (k+2)(k+1) a_{k+2} - 2k a_k + 2n a_k \Bigr) x^k .
As y'' - 2xy' + 2ny = 0 we have
(k+2)(k+1) a_{k+2} + (-2k + 2n) a_k = 0, \qquad \text{or} \qquad a_{k+2} = \frac{2k - 2n}{(k+2)(k+1)} a_k .
This recurrence relation actually includes ๐‘Ž2 = −๐‘›๐‘Ž 0 (which comes about from 2๐‘Ž2 + 2๐‘›๐‘Ž 0 =
0). Again ๐‘Ž0 and ๐‘Ž 1 are arbitrary.
−2๐‘›
2(1 − ๐‘›)
๐‘Ž0 ,
๐‘Ž3 =
๐‘Ž1 ,
(2)(1)
(3)(2)
2(2 − ๐‘›)
22 (2 − ๐‘›)(−๐‘›)
๐‘Ž4 =
๐‘Ž2 =
๐‘Ž0 ,
(4)(3)
(4)(3)(2)(1)
๐‘Ž2 =
∗ Named
after the French mathematician Charles Hermite (1822–1901).
a_5 = \frac{2(3-n)}{(5)(4)} a_3 = \frac{2^2 (3-n)(1-n)}{(5)(4)(3)(2)} a_1, \qquad \ldots
Let us separate the even and odd coefficients. We find that
a_{2m} = \frac{2^m (-n)(2-n) \cdots (2m-2-n)}{(2m)!} ,
a_{2m+1} = \frac{2^m (1-n)(3-n) \cdots (2m-1-n)}{(2m+1)!} .
Let us write down the two series, one with the even powers and one with the odd.
y_1(x) = 1 + \frac{2(-n)}{2!} x^2 + \frac{2^2 (-n)(2-n)}{4!} x^4 + \frac{2^3 (-n)(2-n)(4-n)}{6!} x^6 + \cdots ,
y_2(x) = x + \frac{2(1-n)}{3!} x^3 + \frac{2^2 (1-n)(3-n)}{5!} x^5 + \frac{2^3 (1-n)(3-n)(5-n)}{7!} x^7 + \cdots .
We then write
๐‘ฆ(๐‘ฅ) = ๐‘Ž0 ๐‘ฆ1 (๐‘ฅ) + ๐‘Ž 1 ๐‘ฆ2 (๐‘ฅ).
We remark that if ๐‘› is a positive even integer, then ๐‘ฆ1 (๐‘ฅ) is a polynomial as all the
coefficients in the series beyond a certain degree are zero. If ๐‘› is a positive odd integer,
then ๐‘ฆ2 (๐‘ฅ) is a polynomial. For example, if ๐‘› = 4, then
๐‘ฆ1 (๐‘ฅ) = 1 +
7.2.1
4
2(−4) 2 22 (−4)(2 − 4) 4
๐‘ฅ +
๐‘ฅ = 1 − 4๐‘ฅ 2 + ๐‘ฅ 4 .
2!
4!
3
Exercises
In the following exercises, when asked to solve an equation using power series methods,
you should find the first few terms of the series, and if possible find a general formula for
the ๐‘˜ th coefficient.
Exercise 7.2.1: Use power series methods to solve y'' + y = 0 at the point x_0 = 1.

Exercise 7.2.2: Use power series methods to solve y'' + 4xy = 0 at the point x_0 = 0.

Exercise 7.2.3: Use power series methods to solve y'' - xy = 0 at the point x_0 = 1.

Exercise 7.2.4: Use power series methods to solve y'' + x^2 y = 0 at the point x_0 = 0.

Exercise 7.2.5: The methods work for other orders than second order. Try the methods of this section to solve the first order system y' - xy = 0 at the point x_0 = 0.

Exercise 7.2.6 (Chebyshev's equation of order p):
a) Solve (1 - x^2) y'' - xy' + p^2 y = 0 using power series methods at x_0 = 0.
b) For what p is there a polynomial solution?

Exercise 7.2.7: Find a polynomial solution to (x^2 + 1) y'' - 2xy' + 2y = 0 using power series methods.

Exercise 7.2.8:
a) Use power series methods to solve (1 - x) y'' + y = 0 at the point x_0 = 0.
b) Use the solution to part a) to find a solution for xy'' + y = 0 around the point x_0 = 1.

Exercise 7.2.101: Use power series methods to solve y'' + 2x^3 y = 0 at the point x_0 = 0.

Exercise 7.2.102 (challenging): Power series methods also work for nonhomogeneous equations.
a) Use power series methods to solve y'' - xy = \frac{1}{1-x} at the point x_0 = 0. Hint: Recall the geometric series.
b) Now solve for the initial condition y(0) = 0, y'(0) = 0.

Exercise 7.2.103: Attempt to solve x^2 y'' - y = 0 at x_0 = 0 using the power series method of this section (x_0 is a singular point). Can you find at least one solution? Can you find more than one solution?
7.3 Singular points and the method of Frobenius
Note: 1 or 1.5 lectures, §8.4 and §8.5 in [EP], §5.4–§5.7 in [BD]
While behavior of ODEs at singular points is more complicated, certain singular points
are not especially difficult to solve. Let us look at some examples before giving a general
method. We may be lucky and obtain a power series solution using the method of the
previous section, but in general we may have to try other things.
7.3.1 Examples
Example 7.3.1: Let us first look at a simple first order equation
2xy' - y = 0.
Note that x = 0 is a singular point. If we try to plug in
y = \sum_{k=0}^\infty a_k x^k ,
we obtain
0 = 2xy' - y = 2x \left( \sum_{k=1}^\infty k a_k x^{k-1} \right) - \left( \sum_{k=0}^\infty a_k x^k \right) = -a_0 + \sum_{k=1}^\infty (2k a_k - a_k) x^k .
First, ๐‘Ž 0 = 0. Next, the only way to solve 0 = 2๐‘˜๐‘Ž ๐‘˜ − ๐‘Ž ๐‘˜ = (2๐‘˜ − 1) ๐‘Ž ๐‘˜ for ๐‘˜ = 1, 2, 3, . . . is for
๐‘Ž ๐‘˜ = 0 for all ๐‘˜. Therefore, in this manner we only get the trivial solution ๐‘ฆ = 0. We need a
nonzero solution to get the general solution to the equation.
Let us try ๐‘ฆ = ๐‘ฅ ๐‘Ÿ for some real number ๐‘Ÿ. Consequently our solution—if we can find
one—may only make sense for positive ๐‘ฅ. Then ๐‘ฆ 0 = ๐‘Ÿ๐‘ฅ ๐‘Ÿ−1 . So
0 = 2๐‘ฅ๐‘ฆ 0 − ๐‘ฆ = 2๐‘ฅ๐‘Ÿ๐‘ฅ ๐‘Ÿ−1 − ๐‘ฅ ๐‘Ÿ = (2๐‘Ÿ − 1)๐‘ฅ ๐‘Ÿ .
Therefore ๐‘Ÿ = 1/2, or in other words ๐‘ฆ = ๐‘ฅ 1/2 . Multiplying by a constant, the general
solution for positive ๐‘ฅ is
๐‘ฆ = ๐ถ๐‘ฅ 1/2 .
If ๐ถ ≠ 0, then the derivative of the solution “blows up” at ๐‘ฅ = 0 (the singular point). There
is only one solution that is differentiable at ๐‘ฅ = 0 and that’s the trivial solution ๐‘ฆ = 0.
Not every problem with a singular point has a solution of the form ๐‘ฆ = ๐‘ฅ ๐‘Ÿ , of course.
But perhaps we can combine the methods. What we will do is to try a solution of the form
๐‘ฆ = ๐‘ฅ ๐‘Ÿ ๐‘“ (๐‘ฅ),
where ๐‘“ (๐‘ฅ) is an analytic function.
Example 7.3.2: Consider the equation
4๐‘ฅ 2 ๐‘ฆ 00 − 4๐‘ฅ 2 ๐‘ฆ 0 + (1 − 2๐‘ฅ)๐‘ฆ = 0,
and again note that ๐‘ฅ = 0 is a singular point.
Let us try
y = x^r \sum_{k=0}^\infty a_k x^k = \sum_{k=0}^\infty a_k x^{k+r} ,
where ๐‘Ÿ is a real number, not necessarily an integer. Again if such a solution exists, it may
only exist for positive ๐‘ฅ. First let us find the derivatives
0
๐‘ฆ =
๐‘ฆ 00 =
∞
Õ
(๐‘˜ + ๐‘Ÿ) ๐‘Ž ๐‘˜ ๐‘ฅ ๐‘˜+๐‘Ÿ−1 ,
๐‘˜=0
∞
Õ
(๐‘˜ + ๐‘Ÿ) (๐‘˜ + ๐‘Ÿ − 1) ๐‘Ž ๐‘˜ ๐‘ฅ ๐‘˜+๐‘Ÿ−2 .
๐‘˜=0
Plugging into our equation we obtain
0 = 4๐‘ฅ 2 ๐‘ฆ 00 − 4๐‘ฅ 2 ๐‘ฆ 0 + (1 − 2๐‘ฅ)๐‘ฆ
!
∞
Õ
(๐‘˜ + ๐‘Ÿ) (๐‘˜ + ๐‘Ÿ − 1) ๐‘Ž ๐‘˜ ๐‘ฅ ๐‘˜+๐‘Ÿ−2 − 4๐‘ฅ 2
= 4๐‘ฅ 2
(๐‘˜ + ๐‘Ÿ) ๐‘Ž ๐‘˜ ๐‘ฅ ๐‘˜+๐‘Ÿ−1 + (1 − 2๐‘ฅ)
๐‘˜=0
=
∞
Õ
!
∞
Õ
๐‘˜=0
∞
Õ
!
๐‘Ž ๐‘˜ ๐‘ฅ ๐‘˜+๐‘Ÿ
๐‘˜=0
!
4(๐‘˜ + ๐‘Ÿ) (๐‘˜ + ๐‘Ÿ − 1) ๐‘Ž ๐‘˜ ๐‘ฅ ๐‘˜+๐‘Ÿ
๐‘˜=0
−
∞
Õ
!
∞
Õ
4(๐‘˜ + ๐‘Ÿ) ๐‘Ž ๐‘˜ ๐‘ฅ ๐‘˜+๐‘Ÿ+1 +
๐‘˜=0
=
∞
Õ
!
∞
Õ
๐‘Ž ๐‘˜ ๐‘ฅ ๐‘˜+๐‘Ÿ −
๐‘˜=0
!
2๐‘Ž ๐‘˜ ๐‘ฅ ๐‘˜+๐‘Ÿ+1
๐‘˜=0
!
4(๐‘˜ + ๐‘Ÿ) (๐‘˜ + ๐‘Ÿ − 1) ๐‘Ž ๐‘˜ ๐‘ฅ ๐‘˜+๐‘Ÿ
๐‘˜=0
−
∞
Õ
!
4(๐‘˜ + ๐‘Ÿ − 1) ๐‘Ž ๐‘˜−1 ๐‘ฅ ๐‘˜+๐‘Ÿ +
๐‘˜=1
∞
Õ
!
∞
Õ
๐‘Ž ๐‘˜ ๐‘ฅ ๐‘˜+๐‘Ÿ −
๐‘˜=0
= 4๐‘Ÿ(๐‘Ÿ − 1) ๐‘Ž 0 ๐‘ฅ ๐‘Ÿ + ๐‘Ž 0 ๐‘ฅ ๐‘Ÿ +
∞ Õ
!
2๐‘Ž ๐‘˜−1 ๐‘ฅ ๐‘˜+๐‘Ÿ
๐‘˜=1
4(๐‘˜ + ๐‘Ÿ) (๐‘˜ + ๐‘Ÿ − 1) ๐‘Ž ๐‘˜ − 4(๐‘˜ + ๐‘Ÿ − 1) ๐‘Ž ๐‘˜−1 + ๐‘Ž ๐‘˜ − 2๐‘Ž ๐‘˜−1 ๐‘ฅ ๐‘˜+๐‘Ÿ
๐‘˜=1
๐‘Ÿ
= 4๐‘Ÿ(๐‘Ÿ − 1) + 1 ๐‘Ž 0 ๐‘ฅ +
∞ Õ
4(๐‘˜ + ๐‘Ÿ) (๐‘˜ + ๐‘Ÿ − 1) + 1 ๐‘Ž ๐‘˜ − 4(๐‘˜ + ๐‘Ÿ − 1) + 2 ๐‘Ž ๐‘˜−1 ๐‘ฅ ๐‘˜+๐‘Ÿ .
๐‘˜=1
To have a solution we must first have 4๐‘Ÿ(๐‘Ÿ − 1) + 1 ๐‘Ž 0 = 0. Supposing that ๐‘Ž 0 ≠ 0 we obtain
4๐‘Ÿ(๐‘Ÿ − 1) + 1 = 0.
This equation is called the indicial equation. This particular indicial equation has a double
root at ๐‘Ÿ = 1/2.
OK, so we know what ๐‘Ÿ has to be. That knowledge we obtained simply by looking at
the coefficient of ๐‘ฅ ๐‘Ÿ . All other coefficients of ๐‘ฅ ๐‘˜+๐‘Ÿ also have to be zero so
\bigl( 4(k+r)(k+r-1) + 1 \bigr) a_k - \bigl( 4(k+r-1) + 2 \bigr) a_{k-1} = 0.
If we plug in r = 1/2 and solve for a_k, we get
a_k = \frac{4(k + 1/2 - 1) + 2}{4(k + 1/2)(k + 1/2 - 1) + 1} a_{k-1} = \frac{1}{k} a_{k-1} .
Let us set a_0 = 1. Then
a_1 = \frac{1}{1} a_0 = 1, \quad a_2 = \frac{1}{2} a_1 = \frac{1}{2}, \quad a_3 = \frac{1}{3} a_2 = \frac{1}{3 \cdot 2}, \quad a_4 = \frac{1}{4} a_3 = \frac{1}{4 \cdot 3 \cdot 2}, \quad \cdots
Extrapolating, we notice that
a_k = \frac{1}{k(k-1)(k-2) \cdots 3 \cdot 2} = \frac{1}{k!} .
In other words,
y = \sum_{k=0}^\infty a_k x^{k+r} = \sum_{k=0}^\infty \frac{1}{k!} x^{k+1/2} = x^{1/2} \sum_{k=0}^\infty \frac{1}{k!} x^k = x^{1/2} e^x .
That was lucky! In general, we will not be able to write the series in terms of elementary
functions.
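Still, this example gives an easy check of the Frobenius machinery: build a_k = a_{k-1}/k numerically and compare the partial sum with x^{1/2} e^x. A small Python sketch (our own verification):

    import math

    def frobenius_y1(x, N=25):
        # a_k = a_{k-1}/k with a_0 = 1, so a_k = 1/k!; y = x^{1/2} sum a_k x^k.
        s, ak = 0.0, 1.0
        for k in range(N):
            s += ak * x**k
            ak /= k + 1
        return math.sqrt(x) * s

    x = 0.7
    print(frobenius_y1(x), math.sqrt(x) * math.exp(x))   # should agree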
We have one solution, let us call it ๐‘ฆ1 = ๐‘ฅ 1/2 ๐‘’ ๐‘ฅ . But what about a second solution? If we
want a general solution, we need two linearly independent solutions. Picking ๐‘Ž0 to be a
different constant only gets us a constant multiple of ๐‘ฆ1 , and we do not have any other ๐‘Ÿ to
try; we only have one solution to the indicial equation. Well, there are powers of ๐‘ฅ floating
around and we are taking derivatives, perhaps the logarithm (the antiderivative of ๐‘ฅ −1 ) is
around as well. It turns out we want to try for another solution of the form
y_2 = \sum_{k=0}^\infty b_k x^{k+r} + (\ln x) y_1 ,
which in our case is
y_2 = \sum_{k=0}^\infty b_k x^{k+1/2} + (\ln x) x^{1/2} e^x .
We now differentiate this equation, substitute into the differential equation and solve for
๐‘ ๐‘˜ . A long computation ensues and we obtain some recursion relation for ๐‘ ๐‘˜ . The reader
can (and should) try this to obtain for example the first three terms
2๐‘ 1 − 1
6๐‘ 2 − 1
,
๐‘3 =
,
...
4
18
We then fix ๐‘ 0 and obtain a solution ๐‘ฆ2 . Then we write the general solution as ๐‘ฆ = ๐ด๐‘ฆ1 +๐ต๐‘ฆ2 .
๐‘ 1 = ๐‘ 0 − 1,
๐‘2 =
7.3.2 The method of Frobenius
Before giving the general method, let us clarify when the method applies. Let
๐‘(๐‘ฅ)๐‘ฆ 00 + ๐‘ž(๐‘ฅ)๐‘ฆ 0 + ๐‘Ÿ(๐‘ฅ)๐‘ฆ = 0
be an ODE. As before, if ๐‘(๐‘ฅ 0 ) = 0, then ๐‘ฅ0 is a singular point. If, furthermore, the limits
lim (๐‘ฅ − ๐‘ฅ0 )
๐‘ฅ→๐‘ฅ 0
๐‘ž(๐‘ฅ)
๐‘(๐‘ฅ)
and
lim (๐‘ฅ − ๐‘ฅ0 )2
๐‘ฅ→๐‘ฅ 0
๐‘Ÿ(๐‘ฅ)
๐‘(๐‘ฅ)
both exist and are finite, then we say that ๐‘ฅ0 is a regular singular point.
Example 7.3.3: Often, and for the rest of this section, ๐‘ฅ0 = 0. Consider
๐‘ฅ 2 ๐‘ฆ 00 + ๐‘ฅ(1 + ๐‘ฅ)๐‘ฆ 0 + (๐œ‹ + ๐‘ฅ 2 )๐‘ฆ = 0.
Write
\lim_{x \to 0} x \frac{q(x)}{p(x)} = \lim_{x \to 0} x \frac{x(1+x)}{x^2} = \lim_{x \to 0} (1 + x) = 1,
\lim_{x \to 0} x^2 \frac{r(x)}{p(x)} = \lim_{x \to 0} x^2 \frac{\pi + x^2}{x^2} = \lim_{x \to 0} (\pi + x^2) = \pi.
So ๐‘ฅ = 0 is a regular singular point.
On the other hand if we make the slight change
๐‘ฅ 2 ๐‘ฆ 00 + (1 + ๐‘ฅ)๐‘ฆ 0 + (๐œ‹ + ๐‘ฅ 2 )๐‘ฆ = 0,
then
\lim_{x \to 0} x \frac{q(x)}{p(x)} = \lim_{x \to 0} x \frac{1+x}{x^2} = \lim_{x \to 0} \frac{1+x}{x} = \text{DNE}.
Here DNE stands for does not exist. The point 0 is a singular point, but not a regular singular point.
Let us now discuss the general Method of Frobenius∗ . We only consider the method at
the point ๐‘ฅ = 0 for simplicity. The main idea is the following theorem.
Theorem 7.3.1 (Method of Frobenius). Suppose that
p(x) y'' + q(x) y' + r(x) y = 0    (7.3)
has a regular singular point at x = 0, then there exists at least one solution of the form
y = x^r \sum_{k=0}^\infty a_k x^k .
A solution of this form is called a Frobenius-type solution.
∗Named after the German mathematician Ferdinand Georg Frobenius (1849–1917).
The method usually breaks down like this:
(i) We seek a Frobenius-type solution of the form
y = \sum_{k=0}^\infty a_k x^{k+r} .
We plug this ๐‘ฆ into equation (7.3). We collect terms and write everything as a single
series.
(ii) The obtained series must be zero. Setting the first coefficient (usually the coefficient of
๐‘ฅ ๐‘Ÿ ) in the series to zero we obtain the indicial equation, which is a quadratic polynomial
in ๐‘Ÿ.
(iii) If the indicial equation has two real roots r_1 and r_2 such that r_1 - r_2 is not an integer, then we have two linearly independent Frobenius-type solutions. Using the first root, we plug in
y_1 = x^{r_1} \sum_{k=0}^\infty a_k x^k ,
and we solve for all a_k to obtain the first solution. Then using the second root, we plug in
y_2 = x^{r_2} \sum_{k=0}^\infty b_k x^k ,
and solve for all b_k to obtain the second solution.

(iv) If the indicial equation has a doubled root r, then we find one solution
y_1 = x^r \sum_{k=0}^\infty a_k x^k ,
and then we obtain a new solution by plugging
y_2 = x^r \sum_{k=0}^\infty b_k x^k + (\ln x) y_1 ,
into equation (7.3) and solving for the constants b_k.

(v) If the indicial equation has two real roots such that r_1 - r_2 is an integer, then one solution is
y_1 = x^{r_1} \sum_{k=0}^\infty a_k x^k ,
and the second linearly independent solution is of the form
y_2 = x^{r_2} \sum_{k=0}^\infty b_k x^k + C (\ln x) y_1 ,
where we plug y_2 into (7.3) and solve for the constants b_k and C.

(vi) Finally, if the indicial equation has complex roots, then solving for a_k in the solution
y = x^{r_1} \sum_{k=0}^\infty a_k x^k
results in a complex-valued function—all the a_k are complex numbers. We obtain our two linearly independent solutions∗ by taking the real and imaginary parts of y.
The main idea is to find at least one Frobenius-type solution. If we are lucky and find
two, we are done. If we only get one, we either use the ideas above or even a different
method such as reduction of order (see § 2.1) to obtain a second solution.
7.3.3 Bessel functions
An important class of functions that arises commonly in physics are the Bessel functions† .
For example, these functions appear when solving the wave equation in two and three
dimensions. First consider Bessel’s equation of order ๐‘:
๐‘ฅ 2 ๐‘ฆ 00 + ๐‘ฅ๐‘ฆ 0 + ๐‘ฅ 2 − ๐‘ 2 ๐‘ฆ = 0.
We allow ๐‘ to be any number, not just an integer, although integers and multiples of 1/2 are
most important in applications.
When we plug
y = \sum_{k=0}^\infty a_k x^{k+r}
into Bessel's equation of order p, we obtain the indicial equation
r(r-1) + r - p^2 = (r-p)(r+p) = 0.
We obtain two roots, ๐‘Ÿ1 = ๐‘ and ๐‘Ÿ2 = −๐‘. If ๐‘ is not an integer, then following the method
of Frobenius and setting ๐‘Ž0 = 1, we find linearly independent solutions of the form
๐‘ฆ1 = ๐‘ฅ ๐‘
๐‘ฆ2 = ๐‘ฅ
∞
Õ
(−1) ๐‘˜ ๐‘ฅ 2๐‘˜
,
22๐‘˜ ๐‘˜!(๐‘˜ + ๐‘)(๐‘˜ − 1 + ๐‘) · · · (2 + ๐‘)(1 + ๐‘)
๐‘˜=0
∞
Õ
−๐‘
๐‘˜=0
∗ See
(−1) ๐‘˜ ๐‘ฅ 2๐‘˜
.
22๐‘˜ ๐‘˜!(๐‘˜ − ๐‘)(๐‘˜ − 1 − ๐‘) · · · (2 − ๐‘)(1 − ๐‘)
Joseph L. Neuringera, The Frobenius method for complex roots of the indicial equation, International
Journal of Mathematical Education in Science and Technology, Volume 9, Issue 1, 1978, 71–77.
† Named after the German astronomer and mathematician Friedrich Wilhelm Bessel (1784–1846).
Exercise 7.3.1:
a) Verify that the indicial equation of Bessel’s equation of order ๐‘ is (๐‘Ÿ − ๐‘)(๐‘Ÿ + ๐‘) = 0.
b) Suppose ๐‘ is not an integer. Carry out the computation to obtain the solutions ๐‘ฆ1 and ๐‘ฆ2
above.
Bessel functions are convenient constant multiples of ๐‘ฆ1 and ๐‘ฆ2 . First we must define
the gamma function
\Gamma(x) = \int_0^\infty t^{x-1} e^{-t} \, dt.
Notice that Γ(1) = 1. The gamma function also has a wonderful property
Γ(๐‘ฅ + 1) = ๐‘ฅΓ(๐‘ฅ).
From this property, it follows that Γ(๐‘›) = (๐‘› − 1)! when ๐‘› is an integer. So the gamma
function is a continuous version of the factorial. We compute:
Γ(๐‘˜ + ๐‘ + 1) = (๐‘˜ + ๐‘)(๐‘˜ − 1 + ๐‘) · · · (2 + ๐‘)(1 + ๐‘)Γ(1 + ๐‘),
Γ(๐‘˜ − ๐‘ + 1) = (๐‘˜ − ๐‘)(๐‘˜ − 1 − ๐‘) · · · (2 − ๐‘)(1 − ๐‘)Γ(1 − ๐‘).
Exercise 7.3.2: Verify the identities above using Γ(๐‘ฅ + 1) = ๐‘ฅΓ(๐‘ฅ).
We define the Bessel functions of the first kind of order p and -p as
J_p(x) = \frac{1}{2^p \Gamma(1+p)} y_1 = \sum_{k=0}^\infty \frac{(-1)^k}{k! \, \Gamma(k+p+1)} \left( \frac{x}{2} \right)^{2k+p} ,
J_{-p}(x) = \frac{1}{2^{-p} \Gamma(1-p)} y_2 = \sum_{k=0}^\infty \frac{(-1)^k}{k! \, \Gamma(k-p+1)} \left( \frac{x}{2} \right)^{2k-p} .
As these are constant multiples of the solutions we found above, these are both solutions
to Bessel’s equation of order ๐‘. The constants are picked for convenience.
When ๐‘ is not an integer, ๐ฝ๐‘ and ๐ฝ−๐‘ are linearly independent. When ๐‘› is an integer we
obtain
J_n(x) = \sum_{k=0}^\infty \frac{(-1)^k}{k! \, (k+n)!} \left( \frac{x}{2} \right)^{2k+n} .
In this case
๐ฝ๐‘› (๐‘ฅ) = (−1)๐‘› ๐ฝ−๐‘› (๐‘ฅ),
and so ๐ฝ−๐‘› is not a second linearly independent solution. The other solution is the so-called
Bessel function of second kind. These make sense only for integer orders ๐‘› and are defined as
limits of linear combinations of ๐ฝ๐‘ (๐‘ฅ) and ๐ฝ−๐‘ (๐‘ฅ), as ๐‘ approaches ๐‘› in the following way:
๐‘Œ๐‘› (๐‘ฅ) = lim
๐‘→๐‘›
cos(๐‘๐œ‹)๐ฝ๐‘ (๐‘ฅ) − ๐ฝ−๐‘ (๐‘ฅ)
sin(๐‘๐œ‹)
.
Each linear combination of ๐ฝ๐‘ (๐‘ฅ) and ๐ฝ−๐‘ (๐‘ฅ) is a solution to Bessel’s equation of order ๐‘.
Then as we take the limit as ๐‘ goes to ๐‘›, we see that ๐‘Œ๐‘› (๐‘ฅ) is a solution to Bessel’s equation
of order ๐‘›. It also turns out that ๐‘Œ๐‘› (๐‘ฅ) and ๐ฝ๐‘› (๐‘ฅ) are linearly independent. Therefore when
๐‘› is an integer, we have the general solution to Bessel’s equation of order ๐‘›:
๐‘ฆ = ๐ด๐ฝ๐‘› (๐‘ฅ) + ๐ต๐‘Œ๐‘› (๐‘ฅ),
for arbitrary constants ๐ด and ๐ต. Note that ๐‘Œ๐‘› (๐‘ฅ) goes to negative infinity at ๐‘ฅ = 0. Many
mathematical software packages have these functions ๐ฝ๐‘› (๐‘ฅ) and ๐‘Œ๐‘› (๐‘ฅ) defined, so they
can be used just like say sin(๐‘ฅ) and cos(๐‘ฅ). In fact, Bessel functions have some similar
properties. For example, −๐ฝ1 (๐‘ฅ) is a derivative of ๐ฝ0 (๐‘ฅ), and in general the derivative of ๐ฝ๐‘› (๐‘ฅ)
can be written as a linear combination of ๐ฝ๐‘›−1 (๐‘ฅ) and ๐ฝ๐‘›+1 (๐‘ฅ). Furthermore, these functions
oscillate, although they are not periodic. See Figure 7.4 for graphs of Bessel functions.
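For instance, SciPy exposes these functions as scipy.special.jv and scipy.special.yv. A brief sketch (our own illustration) evaluating J_0 and Y_0, and checking that -J_1 is the derivative of J_0 by a finite difference:

    import numpy as np
    from scipy.special import jv, yv

    x = np.linspace(0.5, 10.0, 5)
    print(jv(0, x))   # J_0(x)
    print(yv(0, x))   # Y_0(x)

    # -J_1 is the derivative of J_0; compare with a centered difference.
    h = 1e-6
    print((jv(0, x + h) - jv(0, x - h)) / (2 * h) + jv(1, x))   # nearly zero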
Figure 7.4: Plot of the J_0(x) and J_1(x) in the first graph and Y_0(x) and Y_1(x) in the second graph.
Example 7.3.4: Other equations can sometimes be solved in terms of the Bessel functions. For example, given a positive constant \lambda,
xy'' + y' + \lambda^2 xy = 0,
can be changed to x^2 y'' + xy' + \lambda^2 x^2 y = 0. Then changing variables t = \lambda x, we obtain via chain rule the equation in y and t:
t^2 y'' + ty' + t^2 y = 0,
which we recognize as Bessel's equation of order 0. Therefore the general solution is y(t) = A J_0(t) + B Y_0(t), or in terms of x:
y = A J_0(\lambda x) + B Y_0(\lambda x).
This equation comes up, for example, when finding the fundamental modes of vibration of
a circular drum, but we digress.
7.3.4 Exercises
Exercise 7.3.3: Find a particular (Frobenius-type) solution of x^2 y'' + xy' + (1 + x) y = 0.

Exercise 7.3.4: Find a particular (Frobenius-type) solution of xy'' - y = 0.

Exercise 7.3.5: Find a particular (Frobenius-type) solution of y'' + \frac{1}{x} y' - xy = 0.

Exercise 7.3.6: Find the general solution of 2xy'' + y' - x^2 y = 0.

Exercise 7.3.7: Find the general solution of x^2 y'' - xy' - y = 0.

Exercise 7.3.8: In the following equations classify the point x = 0 as ordinary, regular singular, or singular but not regular singular.
a) x^2 (1 + x^2) y'' + xy = 0
b) x^2 y'' + y' + y = 0
c) xy'' + x^3 y' + y = 0
d) xy'' + xy' - e^x y = 0
e) x^2 y'' + x^2 y' + x^2 y = 0

Exercise 7.3.101: In the following equations classify the point x = 0 as ordinary, regular singular, or singular but not regular singular.
a) y'' + y = 0
b) x^3 y'' + (1 + x) y = 0
c) xy'' + x^5 y' + y = 0
d) \sin(x) y'' - y = 0
e) \cos(x) y'' - \sin(x) y = 0

Exercise 7.3.102: Find the general solution of x^2 y'' - y = 0.

Exercise 7.3.103: Find a particular solution of x^2 y'' + (x - 3/4) y = 0.

Exercise 7.3.104 (tricky): Find the general solution of x^2 y'' - xy' + y = 0.
Chapter 8
Nonlinear systems
8.1 Linearization, critical points, and equilibria
Note: 1 lecture, §6.1–§6.2 in [EP], §9.2–§9.3 in [BD]
Except for a few brief detours in chapter 1, we considered mostly linear equations.
Linear equations suffice in many applications, but in reality most phenomena require
nonlinear equations. Nonlinear equations, however, are notoriously more difficult to
understand than linear ones, and many strange new phenomena appear when we allow
our equations to be nonlinear.
Not to worry, we did not waste all this time studying linear equations. Nonlinear
equations can often be approximated by linear ones if we only need a solution “locally,” for
example, only for a short period of time, or only for certain parameters. Understanding
linear equations can also give us qualitative understanding about a more general nonlinear
problem. The idea is similar to what you did in calculus in trying to approximate a function
by a line with the right slope.
In § 2.4 we looked at the pendulum of length L. The goal was to solve for the angle \theta(t) as a function of the time t. The equation for the setup is the nonlinear equation
\theta'' + \frac{g}{L} \sin \theta = 0.
Instead of solving this equation, we solved the rather easier linear equation
\theta'' + \frac{g}{L} \theta = 0.
While the solution to the linear equation is not exactly what we were looking for, it is rather
close to the original, as long as the angle ๐œƒ is small and the time period involved is short.
You might ask: Why don’t we just solve the nonlinear problem? Well, it might be very
difficult, impractical, or impossible to solve analytically, depending on the equation in
question. We may not even be interested in the actual solution, we might only be interested
352
CHAPTER 8. NONLINEAR SYSTEMS
in some qualitative idea of what the solution is doing. For example, what happens as time
goes to infinity?
8.1.1 Autonomous systems and phase plane analysis
We restrict our attention to a two-dimensional autonomous system
๐‘ฅ 0 = ๐‘“ (๐‘ฅ, ๐‘ฆ),
๐‘ฆ 0 = ๐‘”(๐‘ฅ, ๐‘ฆ),
where ๐‘“ (๐‘ฅ, ๐‘ฆ) and ๐‘”(๐‘ฅ, ๐‘ฆ) are functions of two variables, and the derivatives are taken with
respect to time ๐‘ก. Solutions are functions ๐‘ฅ(๐‘ก) and ๐‘ฆ(๐‘ก) such that
๐‘ฅ 0(๐‘ก) = ๐‘“ ๐‘ฅ(๐‘ก), ๐‘ฆ(๐‘ก) ,
๐‘ฆ 0(๐‘ก) = ๐‘” ๐‘ฅ(๐‘ก), ๐‘ฆ(๐‘ก) .
The way we will analyze the system is very similar to § 1.6, where we studied a single
autonomous equation. The ideas in two dimensions are the same, but the behavior can be
far more complicated.
It may be best to think of the system of equations as the single vector equation
\begin{bmatrix} x \\ y \end{bmatrix}' = \begin{bmatrix} f(x,y) \\ g(x,y) \end{bmatrix} .    (8.1)
As in § 3.1 we draw the phase portrait (or phase diagram), where each point (x,y) corresponds to a specific state of the system. We draw the vector field given at each point (x,y) by the vector \bigl[ \begin{smallmatrix} f(x,y) \\ g(x,y) \end{smallmatrix} \bigr]. And as before if we find solutions, we draw the trajectories by plotting all points \bigl( x(t), y(t) \bigr) for a certain range of t.
Example 8.1.1: Consider the second order equation x'' = -x + x^2. Write this equation as a first order nonlinear system
x' = y, \qquad y' = -x + x^2.
The phase portrait with some trajectories is drawn in Figure 8.1 on the facing page.
From the phase portrait it should be clear that even this simple system has fairly
complicated behavior. Some trajectories keep oscillating around the origin, and some go
off towards infinity. We will return to this example often, and analyze it completely in this
(and the next) section.
If we zoom into the diagram near a point where \bigl[ \begin{smallmatrix} f(x,y) \\ g(x,y) \end{smallmatrix} \bigr] is not zero, then nearby the arrows point generally in essentially that same direction and have essentially the same magnitude. In other words the behavior is not that interesting near such a point. We are of course assuming that f(x,y) and g(x,y) are continuous.
Let us concentrate on those points in the phase diagram above where the trajectories
seem to start, end, or go around. We see two such points: (0, 0) and (1, 0). The trajectories
seem to go around the point (0, 0), and they seem to either go in or out of the point (1, 0).
Figure 8.1: Phase portrait with some trajectories of x' = y, y' = -x + x^2.

These points are precisely those points where the derivatives of both x and y are zero. Let us define the critical points as the points (x,y) such that
\begin{bmatrix} f(x,y) \\ g(x,y) \end{bmatrix} = \vec{0}.
In other words, these are the points where both f(x,y) = 0 and g(x,y) = 0.
The critical points are where the behavior of the system is in some sense the most complicated. If \bigl[ \begin{smallmatrix} f(x,y) \\ g(x,y) \end{smallmatrix} \bigr] is zero, then nearby, the vector can point in any direction whatsoever. Also, the trajectories are either going towards, away from, or around these points, so if we are looking for long-term qualitative behavior of the system, we should look at what is happening near the critical points.
Critical points are also sometimes called equilibria, since we have so-called equilibrium
solutions at critical points. If (๐‘ฅ0 , ๐‘ฆ0 ) is a critical point, then we have the solutions
๐‘ฅ(๐‘ก) = ๐‘ฅ0 ,
๐‘ฆ(๐‘ก) = ๐‘ฆ0 .
In Example 8.1.1 on the preceding page, there are two equilibrium solutions:
๐‘ฅ(๐‘ก) = 0,
๐‘ฆ(๐‘ก) = 0,
and
๐‘ฅ(๐‘ก) = 1,
๐‘ฆ(๐‘ก) = 0.
Compare this discussion on equilibria to the discussion in § 1.6. The underlying concept is
exactly the same.
8.1.2 Linearization
In § 3.5 we studied the behavior of a homogeneous linear system of two equations near a
critical point. For a linear system of two variables given by an invertible matrix, the only
354
CHAPTER 8. NONLINEAR SYSTEMS
critical point is the origin (0, 0). Let us put the understanding we gained in that section to
good use understanding what happens near critical points of nonlinear systems.
In calculus we learned to estimate a function by taking its derivative and linearizing.
We work similarly with nonlinear systems of ODE. Suppose (๐‘ฅ 0 , ๐‘ฆ0 ) is a critical point. First
change variables to (๐‘ข, ๐‘ฃ), so that (๐‘ข, ๐‘ฃ) = (0, 0) corresponds to (๐‘ฅ0 , ๐‘ฆ0 ). That is,
๐‘ข = ๐‘ฅ − ๐‘ฅ0 ,
๐‘ฃ = ๐‘ฆ − ๐‘ฆ0 .
Next we need to find the derivative. In multivariable calculus you may have seen that the several variables version of the derivative is the Jacobian matrix∗. The Jacobian matrix of the vector-valued function \bigl[ \begin{smallmatrix} f(x,y) \\ g(x,y) \end{smallmatrix} \bigr] at (x_0, y_0) is
\begin{bmatrix} \frac{\partial f}{\partial x}(x_0, y_0) & \frac{\partial f}{\partial y}(x_0, y_0) \\ \frac{\partial g}{\partial x}(x_0, y_0) & \frac{\partial g}{\partial y}(x_0, y_0) \end{bmatrix} .
This matrix gives the best linear approximation as u and v (and therefore x and y) vary. We define the linearization of the equation (8.1) as the linear system
\begin{bmatrix} u \\ v \end{bmatrix}' = \begin{bmatrix} \frac{\partial f}{\partial x}(x_0, y_0) & \frac{\partial f}{\partial y}(x_0, y_0) \\ \frac{\partial g}{\partial x}(x_0, y_0) & \frac{\partial g}{\partial y}(x_0, y_0) \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} .
Example 8.1.2: Let us keep with the same equations as Example 8.1.1: x' = y, y' = -x + x^2. There are two critical points, (0,0) and (1,0). The Jacobian matrix at any point is
\begin{bmatrix} \frac{\partial f}{\partial x}(x,y) & \frac{\partial f}{\partial y}(x,y) \\ \frac{\partial g}{\partial x}(x,y) & \frac{\partial g}{\partial y}(x,y) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 + 2x & 0 \end{bmatrix} .
Therefore at (0,0), we have u = x and v = y, and the linearization is
\begin{bmatrix} u \\ v \end{bmatrix}' = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} .
At the point (1,0), we have u = x - 1 and v = y, and the linearization is
\begin{bmatrix} u \\ v \end{bmatrix}' = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} .
The phase diagrams of the two linearizations at the point (0, 0) and (1, 0) are given in
Figure 8.2 on the facing page. Note that the variables are now ๐‘ข and ๐‘ฃ. Compare Figure 8.2
with Figure 8.1 on the previous page, and look especially at the behavior near the critical
points.
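These computations are easy to mechanize. Here is a minimal sketch, assuming Python with SymPy is available (this code is an illustration, not part of the original text), that finds the critical points and linearizations of Example 8.1.2:

import sympy as sp

x, y = sp.symbols('x y')
f = y             # x' = f(x, y)
g = -x + x**2     # y' = g(x, y)

# Critical points: solve f = 0 and g = 0 simultaneously.
critical_points = sp.solve([f, g], [x, y], dict=True)

# Jacobian matrix of the vector-valued function [f, g].
J = sp.Matrix([f, g]).jacobian([x, y])

for cp in critical_points:
    A = J.subs(cp)  # matrix of the linearization at this critical point
    print(cp, A.tolist(), A.eigenvals())
# Output: the critical points (0, 0) and (1, 0), with matrices
# [[0, 1], [-1, 0]] (eigenvalues ±i) and [[0, 1], [1, 0]] (eigenvalues ±1).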
∗ Named for the German mathematician Carl Gustav Jacob Jacobi (1804–1851).
Figure 8.2: Phase diagram with some trajectories of the linearizations at the critical points $(0,0)$ (left) and $(1,0)$ (right) of $x' = y$, $y' = -x + x^2$.
8.1.3 Exercises
Exercise 8.1.1: Sketch the phase plane vector field for:
a) $x' = x^2$, $y' = y^2$,   b) $x' = (x-y)^2$, $y' = -x$,   c) $x' = e^y$, $y' = e^x$.

Exercise 8.1.2: Match systems
1) $x' = x^2$, $y' = y^2$,   2) $x' = xy$, $y' = 1 + y^2$,   3) $x' = \sin(\pi y)$, $y' = x$,
to the vector fields below. Justify. (Vector field plots a), b), c) omitted.)
Exercise 8.1.3: Find the critical points and linearizations of the following systems.
a) ๐‘ฅ 0 = ๐‘ฅ 2 − ๐‘ฆ 2 , ๐‘ฆ 0 = ๐‘ฅ 2 + ๐‘ฆ 2 − 1,
b) ๐‘ฅ 0 = −๐‘ฆ, ๐‘ฆ 0 = 3๐‘ฅ + ๐‘ฆ๐‘ฅ 2 ,
c) ๐‘ฅ 0 = ๐‘ฅ 2 + ๐‘ฆ, ๐‘ฆ 0 = ๐‘ฆ 2 + ๐‘ฅ.
Exercise 8.1.4: For the following systems, verify that they have a critical point at $(0,0)$, and find the
linearization at $(0,0)$.
a) ๐‘ฅ 0 = ๐‘ฅ + 2๐‘ฆ + ๐‘ฅ 2 − ๐‘ฆ 2 , ๐‘ฆ 0 = 2๐‘ฆ − ๐‘ฅ 2
b) ๐‘ฅ 0 = −๐‘ฆ, ๐‘ฆ 0 = ๐‘ฅ − ๐‘ฆ 3
c) ๐‘ฅ 0 = ๐‘Ž๐‘ฅ + ๐‘๐‘ฆ + ๐‘“ (๐‘ฅ, ๐‘ฆ), ๐‘ฆ 0 = ๐‘๐‘ฅ + ๐‘‘๐‘ฆ + ๐‘”(๐‘ฅ, ๐‘ฆ), where ๐‘“ (0, 0) = 0, ๐‘”(0, 0) = 0, and all first
partial derivatives of ๐‘“ and ๐‘” are also zero at (0, 0), that is,
๐œ•๐‘”
(0, 0)
๐œ•๐‘ฆ
= 0.
๐œ•๐‘“
(0, 0)
๐œ•๐‘ฅ
=
๐œ•๐‘“
(0, 0)
๐œ•๐‘ฆ
=
๐œ•๐‘”
(0, 0)
๐œ•๐‘ฅ
=
Exercise 8.1.5: Take ๐‘ฅ 0 = (๐‘ฅ − ๐‘ฆ)2 , ๐‘ฆ 0 = (๐‘ฅ + ๐‘ฆ)2 .
a) Find the set of critical points.
b) Sketch a phase diagram and describe the behavior near the critical point(s).
c) Find the linearization. Is it helpful in understanding the system?
Exercise 8.1.6: Take ๐‘ฅ 0 = ๐‘ฅ 2 , ๐‘ฆ 0 = ๐‘ฅ 3 .
a) Find the set of critical points.
b) Sketch a phase diagram and describe the behavior near the critical point(s).
c) Find the linearization. Is it helpful in understanding the system?
Exercise 8.1.101: Find the critical points and linearizations of the following systems.
a) ๐‘ฅ 0 = sin(๐œ‹๐‘ฆ) + (๐‘ฅ − 1)2 , ๐‘ฆ 0 = ๐‘ฆ 2 − ๐‘ฆ,
b) ๐‘ฅ 0 = ๐‘ฅ + ๐‘ฆ + ๐‘ฆ 2 , ๐‘ฆ 0 = ๐‘ฅ,
c) ๐‘ฅ 0 = (๐‘ฅ − 1)2 + ๐‘ฆ, ๐‘ฆ 0 = ๐‘ฅ 2 + ๐‘ฆ.
Exercise 8.1.102: Match systems
1) ๐‘ฅ 0 = ๐‘ฆ 2 , ๐‘ฆ 0 = −๐‘ฅ 2 ,
2) ๐‘ฅ 0 = ๐‘ฆ, ๐‘ฆ 0 = (๐‘ฅ − 1)(๐‘ฅ + 1),
3) ๐‘ฅ 0 = ๐‘ฆ + ๐‘ฅ 2 , ๐‘ฆ 0 = −๐‘ฅ,
to the vector fields below. Justify. (Vector field plots a), b), c) omitted.)
Exercise 8.1.103: The idea of critical points and linearization works in higher dimensions as well.
You simply make the Jacobian matrix bigger by adding more functions and more variables. For the
following system of 3 equations find the critical points and their linearizations:
๐‘ฅ0 = ๐‘ฅ + ๐‘ง 2 ,
๐‘ฆ 0 = ๐‘ง 2 − ๐‘ฆ,
๐‘ง0 = ๐‘ง + ๐‘ฅ 2 .
Exercise 8.1.104: Any two-dimensional non-autonomous system ๐‘ฅ 0 = ๐‘“ (๐‘ฅ, ๐‘ฆ, ๐‘ก), ๐‘ฆ 0 = ๐‘”(๐‘ฅ, ๐‘ฆ, ๐‘ก)
can be written as a three-dimensional autonomous system (three equations). Write down this
autonomous system using the variables ๐‘ข, ๐‘ฃ, ๐‘ค.
8.2 Stability and classification of isolated critical points
Note: 1.5–2 lectures, §6.1–§6.2 in [EP], §9.2–§9.3 in [BD]
8.2.1 Isolated critical points and almost linear systems
A critical point is isolated if it is the only critical point in some small “neighborhood” of the
point. That is, if we zoom in far enough it is the only critical point we see. In the example
above, the critical point was isolated. If, on the other hand, there were a whole curve of
critical points, then they would not be isolated.
A system is called almost linear at a critical point (๐‘ฅ0 , ๐‘ฆ0 ), if the critical point is isolated
and the Jacobian matrix at the point is invertible, or equivalently if the linearized system
has an isolated critical point. In such a case, the nonlinear terms are very small and the
system behaves like its linearization, at least if we are close to the critical point.
For example, the system in Examples 8.1.1 and 8.1.2 has two isolated critical points,
$(0,0)$ and $(1,0)$, and is almost linear at both critical points, as the Jacobian matrices at the two
points, $\bigl[\begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix}\bigr]$ and $\bigl[\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\bigr]$, are invertible.
On the other hand, the system $x' = x^2$, $y' = y^2$ has an isolated critical point at $(0,0)$;
however, the Jacobian matrix
\[ \begin{bmatrix} 2x & 0 \\ 0 & 2y \end{bmatrix} \]
is zero when $(x, y) = (0,0)$. So the system is not almost linear. An even worse example is the
system $x' = x$, $y' = x^2$, which does not have isolated critical points; $x'$ and $y'$ are both zero
whenever $x = 0$, that is, on the entire $y$-axis.
Fortunately, most often critical points are isolated, and the system is almost linear at
the critical points. So if we learn what happens there, we will have figured out the majority
of situations that arise in applications.
8.2.2 Stability and classification of isolated critical points
Once we have an isolated critical point at which the system is almost linear, and we have
computed the associated linearized system, we can classify what happens to the
solutions. We more or less use the classification for linear two-variable systems from § 3.5,
with one minor caveat. Let us list the behaviors depending on the eigenvalues of the
Jacobian matrix at the critical point in Table 8.1 on the following page. This table is very
similar to Table 3.1 on page 150, with the exception of missing “center” points. We will
discuss centers later, as they are more complicated.
In the third column, we mark points as asymptotically stable or unstable. Formally, a
stable critical point (๐‘ฅ 0 , ๐‘ฆ0 ) is one where given any small distance ๐œ– to (๐‘ฅ0 , ๐‘ฆ0 ), and any initial
condition within a perhaps smaller radius around (๐‘ฅ0 , ๐‘ฆ0 ), the trajectory of the system
never goes further away from (๐‘ฅ0 , ๐‘ฆ0 ) than ๐œ–. An unstable critical point is one that is not
Eigenvalues of the Jacobian matrix     Behavior                  Stability
real and both positive                 source / unstable node    unstable
real and both negative                 sink / stable node        asymptotically stable
real and opposite signs                saddle                    unstable
complex with positive real part        spiral source             unstable
complex with negative real part        spiral sink               asymptotically stable

Table 8.1: Behavior of an almost linear system near an isolated critical point.
stable. Informally, a point is stable if, when we start close to a critical point and follow a trajectory,
we either go towards, or at least do not move away from, this critical point.
A stable critical point $(x_0, y_0)$ is called asymptotically stable if, given any initial condition
sufficiently close to $(x_0, y_0)$ and any solution $\bigl(x(t), y(t)\bigr)$ satisfying that condition,
\[ \lim_{t \to \infty} \bigl(x(t), y(t)\bigr) = (x_0, y_0). \]
That is, the critical point is asymptotically stable if any trajectory for a sufficiently close
initial condition goes towards the critical point $(x_0, y_0)$.
Example 8.2.1: Consider ๐‘ฅ 0 = −๐‘ฆ − ๐‘ฅ 2 , ๐‘ฆ 0 = −๐‘ฅ + ๐‘ฆ 2 . See Figure 8.3 on the facing page for
the phase diagram. Let us find the critical points. These are the points where −๐‘ฆ − ๐‘ฅ 2 = 0
and −๐‘ฅ + ๐‘ฆ 2 = 0. The first equation means ๐‘ฆ = −๐‘ฅ 2 , and so ๐‘ฆ 2 = ๐‘ฅ 4 . Plugging into the
second equation we obtain −๐‘ฅ + ๐‘ฅ 4 = 0. Factoring we obtain ๐‘ฅ(1 − ๐‘ฅ 3 ) = 0. Since we are
looking only for real solutions we get either ๐‘ฅ = 0 or ๐‘ฅ = 1. Solving for the corresponding
๐‘ฆ using ๐‘ฆ = −๐‘ฅ 2 , we get two critical points, one being (0, 0) and the other being (1, −1).
Clearly the critical points are isolated.
Let us compute the Jacobian matrix:
\[ \begin{bmatrix} -2x & -1 \\ -1 & 2y \end{bmatrix}. \]
At the point $(0,0)$ we get the matrix $\bigl[\begin{smallmatrix} 0 & -1 \\ -1 & 0 \end{smallmatrix}\bigr]$, and so the two eigenvalues are $1$ and $-1$. As
the matrix is invertible, the system is almost linear at $(0,0)$. As the eigenvalues are real and
of opposite signs, we get a saddle point, which is an unstable equilibrium point.

At the point $(1,-1)$ we get the matrix $\bigl[\begin{smallmatrix} -2 & -1 \\ -1 & -2 \end{smallmatrix}\bigr]$, and computing the eigenvalues we get $-1$ and
$-3$. The matrix is invertible, and so the system is almost linear at $(1,-1)$. As we have real
eigenvalues that are both negative, the critical point is a sink, and therefore an asymptotically
stable equilibrium point. That is, if we start with any point $(x_i, y_i)$ close to $(1,-1)$ as an
initial condition and plot a trajectory, it approaches $(1,-1)$. In other words,
\[ \lim_{t \to \infty} \bigl(x(t), y(t)\bigr) = (1,-1). \]
Figure 8.3: The phase portrait with a few sample trajectories of $x' = -y - x^2$, $y' = -x + y^2$.
As you can see from the diagram, this behavior is true even for some initial points quite far
from (1, −1), but it is definitely not true for all initial points.
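The eigenvalue bookkeeping of Table 8.1 is easy to mechanize. Here is a small sketch, assuming Python with NumPy (an illustration added here, with our own thresholds for "zero"), that classifies a critical point from the Jacobian matrix at that point:

import numpy as np

def classify(J):
    """Classify an almost linear system from the Jacobian at a critical point (Table 8.1)."""
    J = np.asarray(J, dtype=float)
    if abs(np.linalg.det(J)) < 1e-12:
        return "Jacobian not invertible: system is not almost linear here"
    ev = np.linalg.eigvals(J)
    if np.iscomplexobj(ev) and abs(ev[0].imag) > 1e-12:
        re = ev[0].real
        if abs(re) < 1e-12:
            return "center in the linearization: inconclusive"
        return "spiral source (unstable)" if re > 0 else "spiral sink (asymptotically stable)"
    re = np.sort(ev.real)
    if re[0] < 0 < re[1]:
        return "saddle (unstable)"
    return "source / unstable node (unstable)" if re[0] > 0 else "sink / stable node (asymptotically stable)"

# The two critical points of x' = -y - x^2, y' = -x + y^2:
print(classify([[0, -1], [-1, 0]]))    # saddle (unstable)
print(classify([[-2, -1], [-1, -2]]))  # sink / stable node (asymptotically stable)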
Example 8.2.2: Let us look at $x' = y + y^2 e^x$, $y' = x$. First let us find the critical points.
These are the points where $y + y^2 e^x = 0$ and $x = 0$. Simplifying we get $0 = y + y^2 = y(y+1)$.
So the critical points are $(0,0)$ and $(0,-1)$, and hence they are isolated. Let us compute the
Jacobian matrix:
\[ \begin{bmatrix} y^2 e^x & 1 + 2y e^x \\ 1 & 0 \end{bmatrix}. \]
At the point $(0,0)$ we get the matrix $\bigl[\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\bigr]$, and so the two eigenvalues are $1$ and $-1$. As
the matrix is invertible, the system is almost linear at $(0,0)$. And, as the eigenvalues are
real and of opposite signs, we get a saddle point, which is an unstable equilibrium point.

At the point $(0,-1)$ we get the matrix $\bigl[\begin{smallmatrix} 1 & -1 \\ 1 & 0 \end{smallmatrix}\bigr]$, whose eigenvalues are $\frac{1}{2} \pm i\frac{\sqrt{3}}{2}$. The matrix
is invertible, and so the system is almost linear at $(0,-1)$. As we have complex eigenvalues
with positive real part, the critical point is a spiral source, and therefore an unstable
equilibrium point.
See Figure 8.4 on the next page for the phase diagram. Notice the two critical points,
and the behavior of the arrows in the vector field around these points.
8.2.3 The trouble with centers
Recall, a linear system with a center means that trajectories travel in closed elliptical orbits
in some direction around the critical point. Such a critical point we call a center or a stable
center. It is not an asymptotically stable critical point, as the trajectories never approach the
critical point, but at least if you start sufficiently close to the critical point, you stay close to
the critical point. The simplest example of such behavior is the linear system with a center.
Another example is the critical point (0, 0) in Example 8.1.1 on page 352.
Figure 8.4: The phase portrait with a few sample trajectories of $x' = y + y^2 e^x$, $y' = x$.
The trouble with a center in a nonlinear system is that whether the trajectory goes
towards or away from the critical point is governed by the sign of the real part of the
eigenvalues of the Jacobian matrix, and the Jacobian matrix in a nonlinear system changes
from point to point. Since this real part is zero at the critical point itself, it can have either
sign nearby, meaning the trajectory could be pulled towards or away from the critical point.
Example 8.2.3: An example of such problematic behavior is the system $x' = y$, $y' = -x + y^3$.
The only critical point is the origin $(0,0)$. The Jacobian matrix is
\[ \begin{bmatrix} 0 & 1 \\ -1 & 3y^2 \end{bmatrix}. \]
At $(0,0)$ the Jacobian matrix is $\bigl[\begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix}\bigr]$, which has eigenvalues $\pm i$. So the linearization has a
center.

Using the quadratic formula, the eigenvalues of the Jacobian matrix at any point $(x, y)$
are
\[ \lambda = \frac{3}{2} y^2 \pm i\,\frac{\sqrt{4 - 9y^4}}{2}. \]
At any point where ๐‘ฆ ≠ 0 (so at most points near the origin), the eigenvalues have a positive
real part (๐‘ฆ 2 can never be negative). This positive real part pulls the trajectory away from
the origin. A sample trajectory for an initial condition near the origin is given in Figure 8.5
on the next page.
The moral of the example is that further analysis is needed when the linearization has a
center. The analysis will in general be more complicated than in the example above, and
is more likely to involve case-by-case consideration. Such a complication should not be
surprising to you. By now in your mathematical career, you have seen many places where
a simple test is inconclusive, such as the second derivative test for maxima or
minima, and a more careful, perhaps ad hoc, analysis of the situation is required.
Figure 8.5: An unstable critical point (spiral source) at the origin for $x' = y$, $y' = -x + y^3$, even if the linearization has a center.
8.2.4 Conservative equations
An equation of the form
๐‘ฅ 00 + ๐‘“ (๐‘ฅ) = 0
for an arbitrary function ๐‘“ (๐‘ฅ) is called a conservative equation. For example the pendulum
equation is a conservative equation. The equations are conservative as there is no friction
in the system so the energy in the system is “conserved.” Let us write this equation as a
system of nonlinear ODE.
๐‘ฅ 0 = ๐‘ฆ,
๐‘ฆ 0 = − ๐‘“ (๐‘ฅ).
These types of equations have the advantage that we can solve for their trajectories easily.
The trick is to first think of $y$ as a function of $x$ for a moment. Then use the chain rule
\[ x'' = y' = \frac{dy}{dx} x' = y \frac{dy}{dx}, \]
where the prime indicates a derivative with respect to $t$. We obtain $y \frac{dy}{dx} + f(x) = 0$. We
integrate with respect to $x$ to get
\[ \int y \frac{dy}{dx}\,dx + \int f(x)\,dx = C. \]
In other words,
\[ \frac{1}{2} y^2 + \int f(x)\,dx = C. \]
We obtained an implicit equation for the trajectories, with different $C$ giving different
trajectories. The value of $C$ is conserved on any trajectory. This expression is sometimes
called the Hamiltonian or the energy of the system. If you look back to § 1.8, you will notice
that $y \frac{dy}{dx} + f(x) = 0$ is an exact equation, and we just found a potential function.
Example 8.2.4: Let us find the trajectories for the equation ๐‘ฅ 00 + ๐‘ฅ − ๐‘ฅ 2 = 0, which is the
equation from Example 8.1.1 on page 352. The corresponding first order system is
๐‘ฅ 0 = ๐‘ฆ,
๐‘ฆ 0 = −๐‘ฅ + ๐‘ฅ 2 .
Trajectories satisfy
\[ \frac{1}{2} y^2 + \frac{1}{2} x^2 - \frac{1}{3} x^3 = C. \]
We solve for $y$:
\[ y = \pm\sqrt{ -x^2 + \frac{2}{3} x^3 + 2C }. \]
Plotting these graphs, we get exactly the trajectories in Figure 8.1 on page 353. In
particular, we notice that near the origin the trajectories are closed curves: they keep going
around the origin, never spiraling in or out. Therefore, we discovered a way to verify that
the critical point at $(0,0)$ is a stable center. The critical point at $(1,0)$ is a saddle, as we
already noticed. This example is typical for conservative equations.
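Here is a short sketch, assuming Python with NumPy and Matplotlib (an added illustration), that reproduces these trajectories as level curves of the energy:

import numpy as np
import matplotlib.pyplot as plt

# Energy for x'' + x - x^2 = 0:  E(x, y) = y^2/2 + x^2/2 - x^3/3.
X, Y = np.meshgrid(np.linspace(-2, 2, 400), np.linspace(-2, 2, 400))
E = Y**2 / 2 + X**2 / 2 - X**3 / 3

# Each level curve E = C is a trajectory; compare with Figure 8.1.
plt.contour(X, Y, E, levels=20)
plt.xlabel('x')
plt.ylabel('y')
plt.show()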
Consider an arbitrary conservative equation $x'' + f(x) = 0$. All critical points occur
when $y = 0$ (on the $x$-axis), that is, when $x' = 0$. The critical points are those points on the
$x$-axis where $f(x) = 0$. The trajectories are given by
\[ y = \pm\sqrt{ -2 \int f(x)\,dx + 2C }. \]
So all trajectories are mirrored across the $x$-axis. In particular, there can be no spiral sources
nor sinks. The Jacobian matrix is
\[ \begin{bmatrix} 0 & 1 \\ -f'(x) & 0 \end{bmatrix}. \]
The critical point is almost linear if $f'(x) \neq 0$ at the critical point. Let $J$ denote the Jacobian
matrix. The eigenvalues of $J$ are solutions to
\[ 0 = \det(J - \lambda I) = \lambda^2 + f'(x). \]
Therefore $\lambda = \pm\sqrt{-f'(x)}$. In other words, either we get real eigenvalues of opposite signs
(if $f'(x) < 0$), or we get purely imaginary eigenvalues (if $f'(x) > 0$). There are only two
possibilities for critical points: either an unstable saddle point, or a stable center. There are
never any sinks or sources.
8.2.5 Exercises
Exercise 8.2.1: For the systems below, find and classify the critical points; also indicate whether the
equilibria are stable, asymptotically stable, or unstable.
a) $x' = -x + 3x^2$, $y' = -y$,   b) $x' = x^2 + y^2 - 1$, $y' = x$,   c) $x' = ye^x$, $y' = y - x + y^2$.
Exercise 8.2.2: Find the implicit equations of the trajectories of the following conservative systems.
Next find their critical points (if any) and classify them.
a) ๐‘ฅ 00 + ๐‘ฅ + ๐‘ฅ 3 = 0
b) ๐œƒ00 + sin ๐œƒ = 0
c) ๐‘ง 00 + (๐‘ง − 1)(๐‘ง + 1) = 0
d) ๐‘ฅ 00 + ๐‘ฅ 2 + 1 = 0
Exercise 8.2.3: Find and classify the critical point(s) of ๐‘ฅ 0 = −๐‘ฅ 2 , ๐‘ฆ 0 = −๐‘ฆ 2 .
Exercise 8.2.4: Suppose ๐‘ฅ 0 = −๐‘ฅ๐‘ฆ, ๐‘ฆ 0 = ๐‘ฅ 2 − 1 − ๐‘ฆ.
a) Show there are two spiral sinks at (−1, 0) and (1, 0).
b) For any initial point of the form (0, ๐‘ฆ0 ), find what is the trajectory.
c) Can a trajectory starting at (๐‘ฅ0 , ๐‘ฆ0 ) where ๐‘ฅ0 > 0 spiral into the critical point at (−1, 0)?
Why or why not?
Exercise 8.2.5: In the example $x' = y$, $y' = y^3 - x$, show that for any trajectory, the distance from
the origin is an increasing function. Conclude that the origin behaves like a spiral source. Hint:
Consider $f(t) = x(t)^2 + y(t)^2$ and show it has positive derivative.
Exercise 8.2.6: Suppose ๐‘“ is always positive. Find the trajectories of ๐‘ฅ 00 + ๐‘“ (๐‘ฅ 0) = 0. Are there any
critical points?
Exercise 8.2.7: Suppose that $x' = f(x,y)$, $y' = g(x,y)$. Suppose that $g(x,y) > 1$ for all $x$ and $y$.
Are there any critical points? What can we say about the trajectories as $t$ goes to infinity?
Exercise 8.2.101: For the systems below, find and classify the critical points.
a) ๐‘ฅ 0 = −๐‘ฅ + ๐‘ฅ 2 , ๐‘ฆ 0 = ๐‘ฆ
b) ๐‘ฅ 0 = ๐‘ฆ − ๐‘ฆ 2 − ๐‘ฅ, ๐‘ฆ 0 = −๐‘ฅ
c) ๐‘ฅ 0 = ๐‘ฅ๐‘ฆ, ๐‘ฆ 0 = ๐‘ฅ + ๐‘ฆ − 1
Exercise 8.2.102: Find the implicit equations of the trajectories of the following conservative systems.
Next find their critical points (if any) and classify them.
a) ๐‘ฅ 00 + ๐‘ฅ 2 = 4
b) ๐‘ฅ 00 + ๐‘’ ๐‘ฅ = 0
c) ๐‘ฅ 00 + (๐‘ฅ + 1)๐‘’ ๐‘ฅ = 0
Exercise 8.2.103: The conservative system ๐‘ฅ 00 + ๐‘ฅ 3 = 0 is not almost linear. Classify its critical
point(s) nonetheless.
Exercise 8.2.104: Derive an analogous classification of critical points for equations in one dimension,
such as ๐‘ฅ 0 = ๐‘“ (๐‘ฅ) based on the derivative. A point ๐‘ฅ0 is critical when ๐‘“ (๐‘ฅ0 ) = 0 and almost linear
if in addition ๐‘“ 0(๐‘ฅ0 ) ≠ 0. Figure out if the critical point is stable or unstable depending on the sign
of ๐‘“ 0(๐‘ฅ0 ). Explain. Hint: see § 1.6.
8.3 Applications of nonlinear systems
Note: 2 lectures, §6.3–§6.4 in [EP], §9.3, §9.5 in [BD]
In this section we study two very standard examples of nonlinear systems. First, we
look at the nonlinear pendulum equation. We saw the pendulum equation’s linearization
before, but we noted it was only valid for small angles and short times. Now we find
out what happens for large angles. Next, we look at the predator-prey equation, which
finds various applications in modeling problems in biology, chemistry, economics, and
elsewhere.
8.3.1 Pendulum
๐‘”
The first example we study is the pendulum equation ๐œƒ00 + ๐ฟ sin ๐œƒ = 0. Here, ๐œƒ is the angular
displacement, ๐‘” is the gravitational acceleration, and ๐ฟ is the length of the pendulum. In
this equation we disregard friction, so we are talking about an idealized pendulum.
This equation is a conservative equation, so we can use our
analysis of conservative equations from the previous section. Let us
๐ฟ
change the equation to a two-dimensional system in variables (๐œƒ, ๐œ”)
๐œƒ
by introducing the new variable ๐œ”:
0
๐œƒ
๐œ”
๐œ”
=
.
๐‘”
− ๐ฟ sin ๐œƒ
๐‘š
๐‘”
The critical points of this system are when ๐œ” = 0 and − ๐ฟ sin ๐œƒ = 0, or in other words
if sin ๐œƒ = 0. So the critical points are when ๐œ” = 0 and ๐œƒ is a multiple of ๐œ‹. That is,
the points are . . . (−2๐œ‹, 0), (−๐œ‹, 0), (0, 0), (๐œ‹, 0), (2๐œ‹, 0) . . .. While there are infinitely many
critical points, they are all isolated. Let us compute the Jacobian matrix:
\[
\begin{bmatrix}
\frac{\partial}{\partial\theta}\,\omega & \frac{\partial}{\partial\omega}\,\omega \\[6pt]
\frac{\partial}{\partial\theta}\bigl(-\frac{g}{L}\sin\theta\bigr) & \frac{\partial}{\partial\omega}\bigl(-\frac{g}{L}\sin\theta\bigr)
\end{bmatrix}
=
\begin{bmatrix}
0 & 1 \\
-\frac{g}{L}\cos\theta & 0
\end{bmatrix}.
\]
For conservative equations, there are two types of critical points: either stable centers
or saddle points. The eigenvalues of the Jacobian matrix are $\lambda = \pm\sqrt{-\frac{g}{L}\cos\theta}$.

The eigenvalues are real when $\cos\theta < 0$. This happens at the odd multiples of $\pi$. The
eigenvalues are purely imaginary when $\cos\theta > 0$. This happens at the even multiples of $\pi$.
Therefore, the system has a stable center at the points $\ldots, (-2\pi, 0), (0,0), (2\pi, 0), \ldots$, and
it has an unstable saddle at the points $\ldots, (-3\pi, 0), (-\pi, 0), (\pi, 0), (3\pi, 0), \ldots$. Look at the
phase diagram in Figure 8.6 on the facing page, where for simplicity we let $\frac{g}{L} = 1$.
Figure 8.6: Phase plane diagram and some trajectories of the nonlinear pendulum equation.

In the linearized equation we have only a single critical point, the center at $(0,0)$. Now
we see more clearly what we meant when we said the linearization is good for small
angles. The horizontal axis is the deflection angle. The vertical axis is the angular velocity
of the pendulum. Suppose we start at ๐œƒ = 0 (no deflection), and we start with a small
angular velocity ๐œ”. Then the trajectory keeps going around the critical point (0, 0) in an
approximate circle. This corresponds to short swings of the pendulum back and forth.
When ๐œƒ stays small, the trajectories really look like circles and hence are very close to our
linearization.
When we give the pendulum a big enough push, it goes across the top and keeps
spinning about its axis. This behavior corresponds to the wavy curves that do not cross the
horizontal axis in the phase diagram. Let us suppose we look at the top curves, when the
angular velocity ๐œ” is large and positive. Then the pendulum is going around and around
its axis. The velocity is going to be large when the pendulum is near the bottom, and the
velocity is the smallest when the pendulum is close to the top of its loop.
At each critical point, there is an equilibrium solution. Consider the solution ๐œƒ = 0;
the pendulum is not moving and is hanging straight down. This is a stable place for the
pendulum to be, hence this is a stable equilibrium.
The other type of equilibrium solution is at the unstable point, for example ๐œƒ = ๐œ‹. Here
the pendulum is upside down. Sure you can balance the pendulum this way and it will
stay, but this is an unstable equilibrium. Even the tiniest push will make the pendulum
start swinging wildly.
See Figure 8.7 on the next page for a diagram. The first picture is the stable equilibrium
๐œƒ = 0. The second picture corresponds to those “almost circles” in the phase diagram
around ๐œƒ = 0 when the angular velocity is small. The next picture is the unstable
equilibrium ๐œƒ = ๐œ‹. The last picture corresponds to the wavy lines for large angular
velocities.
Figure 8.7: Various possibilities for the motion of the pendulum. (Panels: the stable equilibrium $\theta = 0$; small angular velocities; the unstable equilibrium $\theta = \pi$; large angular velocities.)

The quantity
\[ \frac{1}{2}\omega^2 - \frac{g}{L}\cos\theta \]
is conserved by any solution. This is the energy or the Hamiltonian of the system.
We have a conservative equation and so (exercise) the trajectories are given by
\[ \omega = \pm\sqrt{ \frac{2g}{L}\cos\theta + C }, \]
for various values of $C$. Let us look at the initial condition $(\theta_0, 0)$; that is, we take the
pendulum to angle $\theta_0$ and just let it go (initial angular velocity $0$). We plug the initial
conditions into the above and solve for $C$ to obtain
\[ C = -\frac{2g}{L}\cos\theta_0. \]
Thus the expression for the trajectory is
\[ \omega = \pm\sqrt{\frac{2g}{L}}\,\sqrt{\cos\theta - \cos\theta_0}. \]
Let us figure out the period, that is, the time it takes for the pendulum to swing back
and forth. We notice that the trajectory about the origin in the phase plane is symmetric
about both the $\theta$-axis and the $\omega$-axis. That is, in terms of $\theta$, the time it takes from $\theta_0$ to $-\theta_0$ is
the same as it takes from $-\theta_0$ back to $\theta_0$. Furthermore, the time it takes from $-\theta_0$ to $0$ is the
same as to go from $0$ to $\theta_0$. Therefore, let us find how long it takes for the pendulum to go
from angle $0$ to angle $\theta_0$, which is a quarter of the full oscillation, and then multiply by $4$.
We figure out this time by finding $\frac{dt}{d\theta}$ and integrating from $0$ to $\theta_0$. The period is four
times this integral. Let us stay in the region where $\omega$ is positive. Since $\omega = \frac{d\theta}{dt}$, inverting
we get
\[ \frac{dt}{d\theta} = \sqrt{\frac{L}{2g}}\,\frac{1}{\sqrt{\cos\theta - \cos\theta_0}}. \]
Therefore the period ๐‘‡ is given by
s
๐‘‡=4
๐ฟ
2๐‘”
∫
๐œƒ0
√
0
1
cos ๐œƒ − cos ๐œƒ0
๐‘‘๐œƒ.
The integral is an improper integral, and we cannot in general evaluate it symbolically. We
must resort to numerical approximation if we want to compute a particular ๐‘‡.
๐‘”
Recall from § 2.4, the linearized equation ๐œƒ00 + ๐ฟ ๐œƒ = 0 has period
s
๐‘‡linear = 2๐œ‹
๐ฟ
.
๐‘”
We plot ๐‘‡, ๐‘‡linear , and the relative error ๐‘‡−๐‘‡๐‘‡linear in Figure 8.8. The relative error says how far
is our approximation from the real period percentage-wise. Note that ๐‘‡linear is simply a
constant, it does not change with the initial angle ๐œƒ0 . The actual period ๐‘‡ gets larger and
larger as ๐œƒ0 gets larger. Notice how the relative error is small when ๐œƒ0 is small. It is still
only 15% when ๐œƒ0 = ๐œ‹2 , that is, a 90 degree angle. The error is 3.8% when starting at ๐œ‹4 , a 45
degree angle. At a 5 degree initial angle, the error is only 0.048%.
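Here is a sketch of such a numerical approximation, assuming Python with SciPy is available (an added illustration; the helper name `period` is ours, and we rely on `quad` coping with the integrable endpoint singularity):

import numpy as np
from scipy.integrate import quad

def period(theta0, gL=1.0):
    # T = 4 * sqrt(1/(2*gL)) * integral from 0 to theta0 of
    #     1 / sqrt(cos(theta) - cos(theta0))  d(theta)
    integrand = lambda th: 1.0 / np.sqrt(np.cos(th) - np.cos(theta0))
    value, _ = quad(integrand, 0.0, theta0)
    return 4.0 * np.sqrt(1.0 / (2.0 * gL)) * value

T_linear = 2 * np.pi  # the linearized period for g/L = 1
for theta0 in (np.radians(5), np.pi / 4, np.pi / 2):
    T = period(theta0)
    print(f"theta0 = {theta0:.4f}, T = {T:.4f}, relative error = {(T - T_linear) / T:.4%}")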
Figure 8.8: The plot of $T$ and $T_{\text{linear}}$ with $g/L = 1$ (left), and the plot of the relative error $\frac{T - T_{\text{linear}}}{T}$ (right), for $\theta_0$ between $0$ and $\pi/2$.
While it is not immediately obvious from the formula, it is true that
\[ \lim_{\theta_0 \uparrow \pi} T = \infty. \]
That is, the period goes to infinity as the initial angle approaches the unstable equilibrium
point. So if we put the pendulum almost upside down it may take a very long time before
it gets down. This is consistent with the limiting behavior, where the exactly upside down
pendulum never makes an oscillation, so we could think of that as infinite period.
8.3.2 Predator-prey or Lotka–Volterra systems
Among the most common simple applications of nonlinear systems are the so-called
predator-prey or Lotka–Volterra∗ systems. For example, these systems arise when two species
interact, one as the prey and one as the predator. It is then no surprise that the equations
also see applications in economics. The system also arises in chemical reactions. In biology,
this system of equations explains the natural periodic variations of populations of different
species in nature. Before the application of differential equations, these periodic variations
in the population baffled biologists.
We keep with the classical example of hares and foxes in a forest, as it is the easiest to
understand:
\[ x = \text{\# of hares (the prey)}, \qquad y = \text{\# of foxes (the predator)}. \]
When there are a lot of hares, there is plenty of food for the foxes, so the fox population
grows. However, when the fox population grows, the foxes eat more hares, so when there
are lots of foxes, the hare population should go down, and vice versa. The Lotka–Volterra
model proposes that this behavior is described by the system of equations
๐‘ฅ 0 = (๐‘Ž − ๐‘๐‘ฆ)๐‘ฅ,
๐‘ฆ 0 = (๐‘๐‘ฅ − ๐‘‘)๐‘ฆ,
where ๐‘Ž, ๐‘, ๐‘, ๐‘‘ are some parameters that describe the interaction of the foxes and hares† .
In this model, these are all positive numbers.
Let us analyze the idea behind this model. The model is a slightly more complicated
idea based on the exponential population model. First expand,
๐‘ฅ 0 = (๐‘Ž − ๐‘๐‘ฆ)๐‘ฅ = ๐‘Ž๐‘ฅ − ๐‘๐‘ฆ๐‘ฅ.
The hares are expected to simply grow exponentially in the absence of foxes, that is where
the ๐‘Ž๐‘ฅ term comes in, the growth in population is proportional to the population itself.
We are assuming the hares always find enough food and have enough space to reproduce.
However, there is another component −๐‘๐‘ฆ๐‘ฅ, that is, the population also is decreasing
proportionally to the number of foxes. Together we can write the equation as (๐‘Ž − ๐‘๐‘ฆ)๐‘ฅ, so
it is like exponential growth or decay but the constant depends on the number of foxes.
The equation for foxes is very similar, expand again
๐‘ฆ 0 = (๐‘๐‘ฅ − ๐‘‘)๐‘ฆ = ๐‘๐‘ฅ๐‘ฆ − ๐‘‘๐‘ฆ.
The foxes need food (hares) to reproduce: the more food, the bigger the rate of growth,
hence the ๐‘๐‘ฅ๐‘ฆ term. On the other hand, there are natural deaths in the fox population, and
hence the −๐‘‘๐‘ฆ term.
∗ Named for the American mathematician, chemist, and statistician Alfred James Lotka (1880–1949) and the Italian mathematician and physicist Vito Volterra (1860–1940).
† This interaction does not end well for the hare.
Without further delay, let us start with an explicit example. Suppose the equations are
๐‘ฅ 0 = (0.4 − 0.01๐‘ฆ)๐‘ฅ,
๐‘ฆ 0 = (0.003๐‘ฅ − 0.3)๐‘ฆ.
See Figure 8.9 for the phase portrait. In this example it makes sense to also plot ๐‘ฅ and ๐‘ฆ as
graphs with respect to time. Therefore the second graph in Figure 8.9 is the graph of ๐‘ฅ and
๐‘ฆ on the vertical axis (the prey ๐‘ฅ is the thinner line with taller peaks), against time on the
horizontal axis. The particular solution graphed was with initial conditions of 20 foxes and
50 hares.
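Here is a sketch, assuming Python with SciPy and Matplotlib (an added illustration), that reproduces this sample solution numerically:

import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def lotka_volterra(t, z):
    x, y = z
    return [(0.4 - 0.01 * y) * x, (0.003 * x - 0.3) * y]

# 50 hares and 20 foxes at t = 0, as in the graphed solution.
sol = solve_ivp(lotka_volterra, [0, 40], [50, 20], dense_output=True, rtol=1e-8)
t = np.linspace(0, 40, 1000)
x, y = sol.sol(t)

plt.plot(t, x, label='hares x')
plt.plot(t, y, label='foxes y')
plt.xlabel('t')
plt.legend()
plt.show()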
Figure 8.9: The phase portrait (left) and graphs of $x$ and $y$ for a sample solution (right).
Let us analyze what we see on the graphs. We work in the general setting rather than
putting in specific numbers. We start with finding the critical points. Set (๐‘Ž − ๐‘๐‘ฆ)๐‘ฅ = 0,
and (๐‘๐‘ฅ − ๐‘‘)๐‘ฆ = 0. The first equation is satisfied if either ๐‘ฅ = 0 or ๐‘ฆ = ๐‘Ž/๐‘ . If ๐‘ฅ = 0, the
second equation implies ๐‘ฆ = 0. If ๐‘ฆ = ๐‘Ž/๐‘ , the second equation implies ๐‘ฅ = ๐‘‘/๐‘ . There are
two equilibria: at (0, 0) when there are no animals at all, and at (๐‘‘/๐‘ , ๐‘Ž/๐‘ ). In our specific
example ๐‘ฅ = ๐‘‘/๐‘ = 100, and ๐‘ฆ = ๐‘Ž/๐‘ = 40. This is the point where there are 100 hares and 40
foxes.
We compute the Jacobian matrix:
\[ \begin{bmatrix} a - by & -bx \\ cy & cx - d \end{bmatrix}. \]
At the origin $(0,0)$ we get the matrix $\bigl[\begin{smallmatrix} a & 0 \\ 0 & -d \end{smallmatrix}\bigr]$, so the eigenvalues are $a$ and $-d$, hence real
and of opposite signs. So the critical point at the origin is a saddle. This makes sense. If
you started with some foxes but no hares, then the foxes would go extinct, that is, you
would approach the origin. If you started with no foxes and a few hares, then the hares
would keep multiplying without check, and so you would go away from the origin.
OK, how about the other critical point, at $(d/c, a/b)$? Here the Jacobian matrix becomes
\[ \begin{bmatrix} 0 & -\frac{bd}{c} \\[4pt] \frac{ac}{b} & 0 \end{bmatrix}. \]
The eigenvalues satisfy $\lambda^2 + ad = 0$. In other words, $\lambda = \pm i\sqrt{ad}$. The eigenvalues being
purely imaginary, we are in the case where we cannot quite decide using only linearization.
We could have a stable center, spiral sink, or a spiral source. That is, the equilibrium could
be asymptotically stable, stable, or unstable. Of course I gave you a picture above that
seems to imply it is a stable center. But never trust a picture only. Perhaps the oscillations
are getting larger and larger, but only very slowly. Of course this would be bad as it would
imply something will go wrong with our population sooner or later. And I only graphed a
very specific example with very specific trajectories.
How can we be sure we are in the stable situation? As we said before, in the case of
purely imaginary eigenvalues, we have to do a bit more work. Previously we found that for
conservative systems, there was a certain quantity that was conserved on the trajectories,
and hence the trajectories had to go in closed loops. We can use a similar technique here.
We just have to figure out what the conserved quantity is. After some trial and error we
find that the constant
\[ C = \frac{y^a x^d}{e^{cx+by}} = y^a x^d e^{-cx-by} \]
is conserved. Such a quantity is called the constant of motion. Let us check that $C$ really is a
constant of motion. How do we check, you say? Well, a constant is something that does
not change with time, so let us compute the derivative with respect to time:
\[ C' = a y^{a-1} y'\, x^d e^{-cx-by} + y^a d x^{d-1} x'\, e^{-cx-by} + y^a x^d e^{-cx-by} \bigl(-c x' - b y'\bigr). \]
Our equations give us what $x'$ and $y'$ are, so let us plug those in:
\[
\begin{aligned}
C' &= a y^{a-1} (cx-d) y\, x^d e^{-cx-by} + y^a d x^{d-1} (a-by) x\, e^{-cx-by} + y^a x^d e^{-cx-by} \bigl(-c(a-by)x - b(cx-d)y\bigr) \\
&= y^a x^d e^{-cx-by} \Bigl( a(cx-d) + d(a-by) + \bigl(-c(a-by)x - b(cx-d)y\bigr) \Bigr) \\
&= 0.
\end{aligned}
\]
๐‘ฆ๐‘Ž ๐‘ฅ๐‘‘
So along the trajectories ๐ถ is constant. In fact, the expression ๐ถ = ๐‘’ ๐‘๐‘ฅ+๐‘ ๐‘ฆ gives us an implicit
equation for the trajectories. In any case, once we have found this constant of motion, it
๐‘ฆ๐‘Ž ๐‘ฅ๐‘‘
must be true that the trajectories are simple curves, that is, the level curves of ๐‘’ ๐‘๐‘ฅ+๐‘ ๐‘ฆ . It turns
out, the critical point at (๐‘‘/๐‘ , ๐‘Ž/๐‘ ) is a maximum for ๐ถ (left as an exercise). So (๐‘‘/๐‘ , ๐‘Ž/๐‘ ) is a
stable equilibrium point, and we do not have to worry about the foxes and hares going
extinct or their populations exploding.
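A quick numerical sanity check, assuming Python with SciPy (an added illustration), that $C$ stays essentially constant along a computed trajectory:

import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 0.4, 0.01, 0.003, 0.3
rhs = lambda t, z: [(a - b * z[1]) * z[0], (c * z[0] - d) * z[1]]
sol = solve_ivp(rhs, [0, 40], [50, 20], rtol=1e-10, atol=1e-10)

x, y = sol.y
C = y**a * x**d * np.exp(-c * x - b * y)
print(C.min(), C.max())  # nearly identical: C is conserved along the trajectory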
One blemish on this wonderful model is that the numbers of foxes and hares are discrete
quantities, while we are modeling with continuous variables. Our model has no problem
with there being 0.1 fox in the forest for example, while in reality that makes no sense. The
approximation is a reasonable one as long as the number of foxes and hares are large, but
it does not make much sense for small numbers. One must be careful in interpreting any
results from such a model.
An interesting consequence (perhaps counterintuitive) of this model is that adding
animals to the forest might lead to extinction, because the variations will get too big, and
one of the populations will get close to zero. For example, suppose there are 20 foxes and
50 hares as before, but now we bring in more foxes, bringing their number to 200. If we
run the computation, we find the number of hares will plummet to just slightly more than
1 hare in the whole forest. In reality that most likely means the hares die out, and then the
foxes will die out as well as they will have nothing to eat.
Showing that a system of equations has a stable solution can be a very difficult problem.
When Isaac Newton put forth his laws of planetary motion, he proved that a single
planet orbiting a single sun is a stable system. But any solar system with more than one
planet proved very difficult indeed. In fact, such a system behaves chaotically (see § 8.5),
meaning small changes in initial conditions lead to very different long-term outcomes.
From numerical experimentation and measurements, we know the earth will not fly out
into the empty space or crash into the sun, for at least some millions of years or so. But we
do not know what happens beyond that.
8.3.3 Exercises
Exercise 8.3.1: Take the damped nonlinear pendulum equation ๐œƒ00 + ๐œ‡๐œƒ0 + ( ๐‘”/๐ฟ) sin ๐œƒ = 0 for
some ๐œ‡ > 0 (that is, there is some friction).
a) Suppose ๐œ‡ = 1 and ๐‘”/๐ฟ = 1 for simplicity, find and classify the critical points.
b) Do the same for any ๐œ‡ > 0 and any ๐‘” and ๐ฟ, but such that the damping is small, in particular,
๐œ‡2 < 4( ๐‘”/๐ฟ).
c) Explain what your findings mean, and if it agrees with what you expect in reality.
Exercise 8.3.2: Suppose the hares do not grow exponentially, but logistically. In particular consider
๐‘ฅ 0 = (0.4 − 0.01๐‘ฆ)๐‘ฅ − ๐›พ๐‘ฅ 2 ,
๐‘ฆ 0 = (0.003๐‘ฅ − 0.3)๐‘ฆ.
For the following two values of ๐›พ, find and classify all the critical points in the positive quadrant,
that is, for ๐‘ฅ ≥ 0 and ๐‘ฆ ≥ 0. Then sketch the phase diagram. Discuss the implication for the long
term behavior of the population.
a) ๐›พ = 0.001,
b) ๐›พ = 0.01.
Exercise 8.3.3:
a) Suppose $x$ and $y$ are positive variables. Show that $\frac{yx}{e^{x+y}}$ attains a maximum at $(1,1)$.
b) Suppose $a, b, c, d$ are positive constants, and also suppose $x$ and $y$ are positive variables.
Show that $\frac{y^a x^d}{e^{cx+by}}$ attains a maximum at $(d/c, a/b)$.
Exercise 8.3.4: Suppose that for the pendulum equation we take a trajectory giving the spinning-around motion, for example
\[ \omega = \sqrt{ \frac{2g}{L}\cos\theta + \frac{2g}{L} + \omega_0^2 }. \]
This is the trajectory where the lowest angular velocity is $\omega_0$. Find an integral expression for how
long it takes the pendulum to go all the way around.
Exercise 8.3.5 (challenging): Take the pendulum, and suppose the initial position is $\theta = 0$.
a) Find the expression for ๐œ” giving the trajectory with initial condition (0, ๐œ”0 ). Hint: Figure
out what ๐ถ should be in terms of ๐œ”0 .
b) Find the crucial angular velocity ๐œ”1 , such that for any higher initial angular velocity, the
pendulum will keep going around its axis, and for any lower initial angular velocity, the
pendulum will simply swing back and forth. Hint: When the pendulum doesn’t go over the
top the expression for ๐œ” will be undefined for some ๐œƒs.
c) What do you think happens if the initial condition is (0, ๐œ”1 ), that is, the initial angle is 0, and
the initial angular velocity is exactly ๐œ”1 .
Exercise 8.3.101: Take the damped nonlinear pendulum equation ๐œƒ00 + ๐œ‡๐œƒ0 + ( ๐‘”/๐ฟ) sin ๐œƒ = 0 for
some ๐œ‡ > 0 (that is, there is friction). Suppose the friction is large, in particular ๐œ‡2 > 4( ๐‘”/๐ฟ).
a) Find and classify the critical points.
b) Explain what your findings mean, and if it agrees with what you expect in reality.
Exercise 8.3.102: Suppose we have the predator-prey system where the foxes are also killed
at a constant rate $h$ ($h$ foxes killed per unit time): $x' = (a - by)x$, $y' = (cx - d)y - h$.
a) Find the critical points and the Jacobian matrices of the system.
b) Put in the constants ๐‘Ž = 0.4, ๐‘ = 0.01, ๐‘ = 0.003, ๐‘‘ = 0.3, โ„Ž = 10. Analyze the critical
points. What do you think it says about the forest?
Exercise 8.3.103 (challenging): Suppose the foxes never die. That is, we have the system
๐‘ฅ 0 = (๐‘Ž − ๐‘๐‘ฆ)๐‘ฅ, ๐‘ฆ 0 = ๐‘๐‘ฅ๐‘ฆ. Find the critical points and notice they are not isolated. What will
happen to the population in the forest if it starts at some positive numbers. Hint: Think of the
constant of motion.
8.4 Limit cycles
Note: less than 1 lecture, discussed in §6.1 and §6.4 in [EP], §9.7 in [BD]
For nonlinear systems, trajectories do not simply need to approach or leave a single
point. They may in fact approach a larger set, such as a circle or another closed curve.
Example 8.4.1: The Van der Pol oscillator∗ is the following equation
๐‘ฅ 00 − ๐œ‡(1 − ๐‘ฅ 2 )๐‘ฅ 0 + ๐‘ฅ = 0,
where ๐œ‡ is some positive constant. The Van der Pol oscillator originated with electrical
circuits, but finds applications in diverse fields such as biology, seismology, and other
physical sciences.
For simplicity, let us use ๐œ‡ = 1. A phase diagram is given in the left-hand plot in
Figure 8.10. Notice how the trajectories seem to very quickly settle on a closed curve. On
the right-hand side is the plot of a single solution for ๐‘ก = 0 to ๐‘ก = 30 with initial conditions
๐‘ฅ(0) = 0.1 and ๐‘ฅ 0(0) = 0.1. The solution quickly tends to a periodic solution.
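Here is a sketch, assuming Python with SciPy and Matplotlib (an added illustration), that reproduces this behavior:

from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

mu = 1.0

def van_der_pol(t, z):
    x, y = z  # y stands for x'
    return [y, mu * (1 - x**2) * y - x]

sol = solve_ivp(van_der_pol, [0, 30], [0.1, 0.1], max_step=0.01)
plt.plot(sol.y[0], sol.y[1])  # the trajectory settles onto the limit cycle
plt.xlabel('x')
plt.ylabel("x'")
plt.show()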
Figure 8.10: The phase portrait (left) and a graph of a sample solution of the Van der Pol oscillator (right).
The Van der Pol oscillator is an example of so-called relaxation oscillation. The word
relaxation comes from the sudden jump (the very steep part of the solution). For larger ๐œ‡
the steep part becomes even more pronounced, for small ๐œ‡ the limit cycle looks more like a
circle. In fact, setting ๐œ‡ = 0, we get ๐‘ฅ 00 + ๐‘ฅ = 0, which is a linear system with a center and
all trajectories become circles.
A trajectory in the phase portrait that is a closed curve (a curve that is a loop) is called a
closed trajectory. A limit cycle is a closed trajectory such that at least one other trajectory
spirals into it (or spirals out of it). For example, the closed curve in the phase portrait for
∗ Named for the Dutch physicist Balthasar van der Pol (1889–1959).
the Van der Pol equation is a limit cycle. If all trajectories that start near the limit cycle
spiral into it, the limit cycle is called asymptotically stable. The limit cycle in the Van der Pol
oscillator is asymptotically stable.
Given a closed trajectory on an autonomous system, any solution that starts on it is
periodic. Such a curve is called a periodic orbit.
More precisely, if ๐‘ฅ(๐‘ก), ๐‘ฆ(๐‘ก) is a solution
such that for some ๐‘ก0 the point ๐‘ฅ(๐‘ก0 ), ๐‘ฆ(๐‘ก0 ) lies on a periodic orbit, then both ๐‘ฅ(๐‘ก) and ๐‘ฆ(๐‘ก)
are periodic functions (with the same period). That is, there is some number ๐‘ƒ such that
๐‘ฅ(๐‘ก) = ๐‘ฅ(๐‘ก + ๐‘ƒ) and ๐‘ฆ(๐‘ก) = ๐‘ฆ(๐‘ก + ๐‘ƒ).
Consider the system
๐‘ฅ 0 = ๐‘“ (๐‘ฅ, ๐‘ฆ),
๐‘ฆ 0 = ๐‘”(๐‘ฅ, ๐‘ฆ),
(8.2)
where the functions ๐‘“ and ๐‘” have continuous derivatives in some region ๐‘… in the plane.
Theorem 8.4.1 (Poincaré–Bendixson∗ ). Suppose ๐‘… is a closed bounded region (a region in the
plane that includes
its boundary and does not have points arbitrarily far from the origin). Suppose
๐‘ฅ(๐‘ก), ๐‘ฆ(๐‘ก) is a solution of (8.2) in ๐‘… that exists for all ๐‘ก ≥ ๐‘ก0 . Then either the solution is a periodic
function, or the solution tends towards a periodic solution in ๐‘….
The main point of the theorem is that if you find one solution that exists for all ๐‘ก large
enough (that is, as ๐‘ก goes to infinity) and stays within a bounded region, then you have
found either a periodic orbit, or a solution that spirals towards a limit cycle or tends to a
critical point. That is, in the long term, the behavior is very close to a periodic function.
Note that a constant solution at a critical point is periodic (with any period). The theorem is
more a qualitative statement rather than something to help us in computations. In practice
it is hard to find analytic solutions and so hard to show rigorously that they exist for all time.
But if we think the solution exists we numerically solve for a large time to approximate the
limit cycle. Another caveat is that the theorem only works in two dimensions. In three
dimensions and higher, there is simply too much room.
The theorem applies to all solutions in the Van der Pol oscillator. Solutions that start at
any point except the origin $(0,0)$ will tend to the periodic solution around the limit cycle,
and the initial condition $(0,0)$ will lead to the constant solution $x = 0$, $y = 0$.
Example 8.4.2: Consider
\[ x' = y + (x^2+y^2-1)^2\, x, \qquad y' = -x + (x^2+y^2-1)^2\, y. \]
A vector field along with solutions with initial conditions $(1.02, 0)$, $(0.9, 0)$, and $(0.1, 0)$ is
drawn in Figure 8.11 on the next page.
Notice that points on the unit circle (distance one from the origin) satisfy ๐‘ฅ 2 + ๐‘ฆ 2 − 1 = 0.
And ๐‘ฅ(๐‘ก) = sin(๐‘ก), ๐‘ฆ = cos(๐‘ก) is a solution of the system. Therefore we have a closed
trajectory. For points off the unit circle, the second term in ๐‘ฅ 0 pushes the solution further
away from the ๐‘ฆ-axis than the system ๐‘ฅ 0 = ๐‘ฆ, ๐‘ฆ 0 = −๐‘ฅ, and ๐‘ฆ 0 pushes the solution further
away from the ๐‘ฅ-axis than the linear system ๐‘ฅ 0 = ๐‘ฆ, ๐‘ฆ 0 = −๐‘ฅ. In other words for all other
initial conditions the trajectory will spiral out.
∗ Ivar Otto Bendixson (1861–1935) was a Swedish mathematician.
Figure 8.11: Unstable limit cycle example.
This means that for initial conditions inside the unit circle, the solution spirals out
towards the periodic solution on the unit circle, and for initial conditions outside the unit
circle the solutions spiral off towards infinity. Therefore the unit circle is a limit cycle, but
not an asymptotically stable one. The Poincaré–Bendixson Theorem applies to the initial
points inside the unit circle, as those solutions stay bounded, but not to those outside, as
those solutions go off to infinity.
A very similar analysis applies to the system
๐‘ฅ 0 = ๐‘ฆ + (๐‘ฅ 2 + ๐‘ฆ 2 − 1)๐‘ฅ,
๐‘ฆ 0 = −๐‘ฅ + (๐‘ฅ 2 + ๐‘ฆ 2 − 1)๐‘ฆ.
We still obtain a closed trajectory on the unit circle, and points outside the unit circle spiral
out to infinity, but now points inside the unit circle spiral towards the critical point at the
origin. So this system does not have a limit cycle, even though it has a closed trajectory.
Due to the Picard theorem (Theorem 3.1.1 on page 125) we find that no matter where
we are in the plane we can always find a solution a little bit further in time, as long as ๐‘“ and
๐‘” have continuous derivatives. So if we find a closed trajectory in an autonomous system,
then for every initial point inside the closed trajectory, the solution will exist for all time
and it will stay bounded (it will stay inside the closed trajectory). So the moment we found
the solution above going around the unit circle, we knew that for every initial point inside
the circle, the solution exists for all time and the Poincaré–Bendixson theorem applies.
Let us next look for conditions when limit cycles (or periodic orbits) do not exist. We
assume the equation (8.2) is defined on a simply connected region, that is, a region with no
holes we can go around. For example the entire plane is a simply connected region, and
so is the inside of the unit disc. However, the entire plane minus a point is not a simply
connected domain as it has a “hole” at the origin.
Theorem 8.4.2 (Bendixson–Dulac∗). Suppose $R$ is a simply connected region, and the expression†
\[ \frac{\partial f}{\partial x} + \frac{\partial g}{\partial y} \]
is either always positive or always negative on $R$ (except perhaps on a small set such as isolated
points or curves), then the system (8.2) has no closed trajectory inside $R$.
∗ Henri Dulac (1870–1955) was a French mathematician.
† Usually the expression in the Bendixson–Dulac Theorem is $\frac{\partial(\varphi f)}{\partial x} + \frac{\partial(\varphi g)}{\partial y}$ for some continuously differentiable function $\varphi$. For simplicity, let us just consider the case $\varphi = 1$.
The theorem gives us a way of ruling out the existence of a closed trajectory, and hence
a way of ruling out limit cycles. The exception about points or curves means that we can
allow the expression to be zero at a few points, or perhaps on a curve, but not on any larger
set.
Example 8.4.3: Let us look at $x' = y + y^2 e^x$, $y' = x$ in the entire plane (see Example 8.2.2
on page 359). The entire plane is simply connected, so we can apply the theorem. We
compute $\frac{\partial f}{\partial x} + \frac{\partial g}{\partial y} = y^2 e^x + 0$. The function $y^2 e^x$ is always positive except on the line $y = 0$.
Therefore, via the theorem, the system has no closed trajectories.
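A one-line check of this computation, assuming Python with SymPy (an added illustration):

import sympy as sp

x, y = sp.symbols('x y')
f = y + y**2 * sp.exp(x)  # x'
g = x                     # y'

divergence = sp.diff(f, x) + sp.diff(g, y)
print(divergence)  # y**2*exp(x): nonnegative, zero only on the line y = 0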
In some books (or the internet) the theorem is not stated carefully and it concludes there
are no periodic solutions. That is not quite right. The example above has two critical points
and hence it has constant solutions, and constant functions are periodic. The conclusion of
the theorem should be that there exist no trajectories that form closed curves. Another
way to state the conclusion of the theorem would be to say that there exist no nonconstant
periodic solutions that stay in ๐‘….
Example 8.4.4: Let us look at a somewhat more complicated example. Take the system
$x' = -y - x^2$, $y' = -x + y^2$ (see Example 8.2.1 on page 358). We compute
\[ \frac{\partial f}{\partial x} + \frac{\partial g}{\partial y} = -2x + 2y = 2(-x + y). \]
This expression takes on both signs, so if we are talking about the
whole plane we cannot simply apply the theorem. However, we could apply it on the set
where $-x + y \geq 0$. Via the theorem, there is no closed trajectory in that set. Similarly, there
is no closed trajectory in the set where $-x + y \leq 0$. We cannot conclude (yet) that there is no
closed trajectory in the entire plane. Perhaps half of it is in the set where $-x + y \geq 0$ and
the other half is in the set where $-x + y \leq 0$.
The key is to look at the line where −๐‘ฅ+๐‘ฆ = 0, or ๐‘ฅ = ๐‘ฆ. On this line ๐‘ฅ 0 = −๐‘ฆ−๐‘ฅ 2 = −๐‘ฅ−๐‘ฅ 2
and ๐‘ฆ 0 = −๐‘ฅ + ๐‘ฆ 2 = −๐‘ฅ + ๐‘ฅ 2 . In particular, when ๐‘ฅ = ๐‘ฆ then ๐‘ฅ 0 ≤ ๐‘ฆ 0. That means that the
arrows, the vectors (๐‘ฅ 0 , ๐‘ฆ 0), always point into the set where −๐‘ฅ + ๐‘ฆ ≥ 0. There is no way we
can start in the set where −๐‘ฅ + ๐‘ฆ ≥ 0 and go into the set where −๐‘ฅ + ๐‘ฆ ≤ 0. Once we are in
the set where −๐‘ฅ + ๐‘ฆ ≥ 0, we stay there. So no closed trajectory can have points in both
sets.
Example 8.4.5: Consider $x' = y + (x^2+y^2-1)x$, $y' = -x + (x^2+y^2-1)y$, and consider
the region $R$ given by $x^2 + y^2 > \frac{1}{2}$. That is, $R$ is the region outside a circle of radius $\frac{1}{\sqrt{2}}$
centered at the origin. Then there is a closed trajectory in ๐‘…, namely ๐‘ฅ = cos(๐‘ก), ๐‘ฆ = sin(๐‘ก).
Furthermore,
\[ \frac{\partial f}{\partial x} + \frac{\partial g}{\partial y} = 4x^2 + 4y^2 - 2, \]
which is always positive on ๐‘…. So what is going on? The Bendixson–Dulac theorem does
not apply since the region ๐‘… is not simply connected—it has a hole, the circle we cut out!
8.4.1 Exercises
Exercise 8.4.1: Show that the following systems have no closed trajectories.
a) $x' = x^3 + y$, $y' = y^3 + x^2$,   b) $x' = e^{x-y}$, $y' = e^{x+y}$,   c) $x' = x + 3y^2 - y^3$, $y' = y^3 + x^2$.
Exercise 8.4.2: Formulate a condition for a 2-by-2 linear system $\vec{x}\,' = A\vec{x}$ to not be a center using
the Bendixson–Dulac theorem. That is, the theorem says something about certain elements of $A$.
Exercise 8.4.3: Explain why the Bendixson–Dulac Theorem does not apply for any conservative
system ๐‘ฅ 00 + โ„Ž(๐‘ฅ) = 0.
Exercise 8.4.4: A system such as ๐‘ฅ 0 = ๐‘ฅ, ๐‘ฆ 0 = ๐‘ฆ has solutions that exist for all time ๐‘ก, yet there are
no closed trajectories. Explain why the Poincaré–Bendixson Theorem does not apply.
Exercise 8.4.5: Differential equations can also be given in different coordinate systems. Suppose we
have the system ๐‘Ÿ 0 = 1 − ๐‘Ÿ 2 , ๐œƒ0 = 1 given in polar coordinates. Find all the closed trajectories and
check if they are limit cycles and if so, if they are asymptotically stable or not.
Exercise 8.4.101: Show that the following systems have no closed trajectories.
a) $x' = x + y^2$, $y' = y + x^2$,   b) $x' = -x\sin^2(y)$, $y' = e^x$,   c) $x' = xy$, $y' = x + x^2$.
Exercise 8.4.102: Suppose an autonomous system in the plane has a solution ๐‘ฅ = cos(๐‘ก) + ๐‘’ −๐‘ก ,
๐‘ฆ = sin(๐‘ก) + ๐‘’ −๐‘ก . What can you say about the system (in particular about limit cycles and periodic
solutions)?
Exercise 8.4.103: Show that the limit cycle of the Van der Pol oscillator (for ๐œ‡ > 0) must not lie
completely in the set where −1 < ๐‘ฅ < 1. Compare with Figure 8.10 on page 373.
Exercise 8.4.104: Suppose we have the system ๐‘Ÿ 0 = sin(๐‘Ÿ), ๐œƒ0 = 1 given in polar coordinates. Find
all the closed trajectories.
8.5 Chaos
Note: 1 lecture, §6.5 in [EP], §9.8 in [BD]
You have surely heard the story about the flap of a butterfly wing in the Amazon
causing hurricanes in the North Atlantic. In a prior section, we mentioned that a small
change in the initial conditions of the planets can lead to a very different configuration of the
planets in the long term. These are examples of chaotic systems. Mathematical chaos is
not really chaos; there is precise order behind the scenes. Everything is still deterministic.
However, a chaotic system is extremely sensitive to initial conditions. This also means even
small errors induced via numerical approximation create large errors very quickly, so it is
almost impossible to numerically approximate solutions for long times. This is a large part of the
trouble, as chaotic systems cannot in general be solved analytically.
Take the weather, the most well-known chaotic system. A small change in the initial
conditions (the temperature at every point of the atmosphere for example) produces
drastically different predictions in relatively short time, and so we cannot accurately
predict weather. And we do not actually know the exact initial conditions. We measure
temperatures at a few points with some error, and then we somehow estimate what is in
between. There is no way we can accurately measure the effects of every butterfly wing.
Then we solve the equations numerically introducing new errors. You should not trust
weather prediction more than a few days out.
Chaotic behavior was first noticed by Edward Lorenz∗ in the 1960s when trying to
model thermally induced air convection (movement). Lorenz was looking at the relatively
simple system:
\[ x' = -10x + 10y, \qquad y' = 28x - y - xz, \qquad z' = -\tfrac{8}{3}z + xy. \]
A small change in the initial conditions yields a very different solution after a reasonably
short time.
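Here is a sketch, assuming Python with SciPy (an added illustration), that exhibits this sensitivity by integrating two initial conditions differing by $10^{-8}$:

import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, u):
    x, y, z = u
    return [-10 * x + 10 * y, 28 * x - y - x * z, -8 / 3 * z + x * y]

t = np.linspace(0, 30, 3001)
a = solve_ivp(lorenz, [0, 30], [1.0, 1.0, 1.0], t_eval=t, rtol=1e-10, atol=1e-12)
b = solve_ivp(lorenz, [0, 30], [1.0 + 1e-8, 1.0, 1.0], t_eval=t, rtol=1e-10, atol=1e-12)

# The separation grows roughly exponentially before saturating.
print(np.abs(a.y[0] - b.y[0])[::500])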
A simple example the reader can experiment with, and which displays
chaotic behavior, is a double pendulum. The equations for this setup are
somewhat complicated, and their derivation is quite tedious, so we will not
bother to write them down. The idea is to put a pendulum on the end of
another pendulum. The movement of the bottom mass will appear chaotic.
This type of chaotic system is a basis for a whole number of office novelty
desk toys. It is simple to build a version. Take a piece of a string. Tie two
heavy nuts at different points of the string; one at the end, and one a bit
above. Now give the bottom nut a little push. As long as the swings are not too big and
the string stays tight, you have a double pendulum system.
∗ Edward Norton Lorenz (1917–2008) was an American mathematician and meteorologist.
8.5.1 Duffing equation and strange attractors
Let us study the so-called Duffing equation:
๐‘ฅ 00 + ๐‘Ž๐‘ฅ 0 + ๐‘๐‘ฅ + ๐‘๐‘ฅ 3 = ๐ถ cos(๐œ”๐‘ก).
Here ๐‘Ž, ๐‘, ๐‘, ๐ถ, and ๐œ” are constants. Except for the ๐‘๐‘ฅ 3 term, this equation looks like a
forced mass-spring system. The ๐‘๐‘ฅ 3 means the spring does not exactly obey Hooke’s law
(which no real-world spring actually does obey exactly). When ๐‘ is not zero, the equation
does not have a closed form solution, so we must resort to numerical solutions, as is usual
for nonlinear systems. Not all choices of constants and initial conditions exhibit chaotic
behavior. Let us study
๐‘ฅ 00 + 0.05๐‘ฅ 0 + ๐‘ฅ 3 = 8 cos(๐‘ก).
The equation is not autonomous, so we cannot draw the vector field in the phase plane.
We can still draw trajectories. In Figure 8.12, we plot trajectories for ๐‘ก going from 0 to 15 for
two very close initial conditions (2, 3) and (2, 2.9), and also the solutions in the (๐‘ฅ, ๐‘ก) space.
The two trajectories are close at first, but after a while diverge significantly. This sensitivity
to initial conditions is precisely what we mean by the system behaving chaotically.
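A sketch of how one might produce such trajectories numerically follows (assuming numpy and scipy; the book's figure was produced with a different setup, so this only approximates the same experiment):

```python
# Two solutions of x'' + 0.05 x' + x^3 = 8 cos(t), written as a first
# order system with y = x', from the nearby initial conditions (2, 3)
# and (2, 2.9).  A sketch; tolerances are our choices.
import numpy as np
from scipy.integrate import solve_ivp

def duffing(t, u):
    x, y = u
    return [y, 8*np.cos(t) - 0.05*y - x**3]

ts = np.linspace(0, 15, 1501)
sol1 = solve_ivp(duffing, (0, 15), [2, 3.0], t_eval=ts, rtol=1e-10, atol=1e-12)
sol2 = solve_ivp(duffing, (0, 15), [2, 2.9], t_eval=ts, rtol=1e-10, atol=1e-12)

print("x(15) from (2, 3):  ", sol1.y[0, -1])
print("x(15) from (2, 2.9):", sol2.y[0, -1])
# Plotting sol1.y[0] and sol2.y[0] against ts gives a picture like the
# right-hand panel of Figure 8.12.
```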
Figure 8.12: On the left, two trajectories in phase space for 0 ≤ t ≤ 15 for the Duffing equation: one with initial conditions (2, 3) and the other with (2, 2.9). On the right, the two solutions in (x, t)-space.
Let us see the long term behavior. In Figure 8.13 on the next page, we plot the behavior
of the system for initial conditions (2, 3) for a longer period of time. It is hard to see any
particular pattern in the shape of the solution except that it seems to oscillate, but each
oscillation appears quite unique. The oscillation is expected due to the forcing term. We
mention that to produce the picture accurately, a ridiculously large number of steps∗ had
to be used in the numerical algorithm, as even small errors quickly propagate in a chaotic
system.
∗ In fact, for reference, 30,000 steps were used with the Runge–Kutta algorithm; see the exercises in § 1.7.
Figure 8.13: The solution to the given Duffing equation for t from 0 to 100.
It is very difficult to analyze chaotic systems, or to find the order behind the madness,
but let us try to do something like we did for the standard mass-spring system. One
way we analyzed that system was to figure out its long-term behavior (not
dependent on initial conditions). From the figure above, it is clear that we will not get a
nice exact description of the long-term behavior for this chaotic system, but perhaps we
can find some order in what happens on each “oscillation” and in what these oscillations
have in common.
The concept we explore is that of a Poincaré section∗. Instead of looking at t in a certain
interval, we look at where the system is at a certain sequence of points in time. Imagine
flashing a strobe at a fixed frequency and drawing the points where the solution is during
the flashes. The right strobing frequency depends on the system in question. The correct
frequency for the forced Duffing equation (and other similar systems) is the frequency of
the forcing term. For the Duffing equation above, find a solution (x(t), y(t)), and look at
the points
\[ (x(0), y(0)), \quad (x(2\pi), y(2\pi)), \quad (x(4\pi), y(4\pi)), \quad (x(6\pi), y(6\pi)), \quad \ldots \]
As we are really not interested in the transient part of the solution, that is, the part of
the solution that depends on the initial condition, we skip some number of steps at the
beginning. For example, we might skip the first 100 such steps and start plotting points at
t = 100(2π), that is,
\[ (x(200\pi), y(200\pi)), \quad (x(202\pi), y(202\pi)), \quad (x(204\pi), y(204\pi)), \quad \ldots \]
The plot of these points is the Poincaré section. After plotting enough points, a curious
pattern emerges in Figure 8.14 on the facing page (the left-hand picture), a so-called strange
attractor.
∗ Named for the French polymath Jules Henri Poincaré (1854–1912).
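Numerically, the Poincaré section is just the solution sampled at the strobe times. A sketch (assuming numpy and scipy; the number of skipped and kept flashes is our own choice):

```python
# Strobe the forced Duffing equation at the forcing period 2*pi: skip the
# first 100 flashes (the transient) and keep the next 500.  A sketch; the
# counts and tolerances are illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

def duffing(t, u):
    x, y = u
    return [y, 8*np.cos(t) - 0.05*y - x**3]

strobe_times = 2*np.pi*np.arange(100, 600)
sol = solve_ivp(duffing, (0, strobe_times[-1]), [2, 3],
                t_eval=strobe_times, rtol=1e-12, atol=1e-12)

points = sol.y.T       # one strobed (x, y) point per row
print(points[:5])      # plotted, these points sketch out the attractor
```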
Figure 8.14: Strange attractor. The left plot is with no phase shift; the right plot has phase shift π/4.
Given a sequence of points, an attractor is a set towards which the points in the sequence
eventually get closer and closer; that is, they are attracted to it. The Poincaré section is not
really the attractor itself, but as the points are very close to it, we see its shape. The strange
attractor is a very complicated set. It has fractal structure; that is, if you zoom in as far as
you want, you keep seeing the same complicated structure.
The initial condition makes no difference. If we start with a different initial condition,
the points eventually gravitate towards the attractor, and so as long as we throw away
the first few points, we get the same picture. Similarly, small errors in the numerical
approximations do not matter here.
An amazing thing is that a chaotic system such as the Duffing equation is not random
at all. There is a very complicated order to it, and the strange attractor says something
about this order. We cannot quite say what state the system will be in eventually, but given
the fixed strobing frequency we narrow it down to the points on the attractor.
If we use a phase shift, for example π/4, and look at the times
\[ \pi/4, \quad 2\pi + \pi/4, \quad 4\pi + \pi/4, \quad 6\pi + \pi/4, \quad \ldots \]
we obtain a slightly different attractor. The picture is the right-hand side of Figure 8.14. It
is as if we had rotated, moved, and slightly distorted the original. For each phase shift, you
can find the set of points to which the system periodically keeps coming back.
Study the pictures and notice especially the scales, that is, where these attractors are located
in the phase plane. Notice the regions where the strange attractor lives and compare them to
the plot of the trajectories in Figure 8.12 on page 379.
Let us compare this section to the discussion in § 2.6 on forced oscillations. Consider
\[ x'' + 2px' + \omega_0^2 x = \frac{F_0}{m}\cos(\omega t). \]
This is like the Duffing equation, but with no x³ term. The steady periodic solution is of
the form
\[ x = C\cos(\omega t + \gamma). \]
Strobing using the frequency ω, we obtain a single point in the phase space. The attractor
in this setting is a single point—an expected result as the system is not chaotic. It was
the opposite of chaotic: Any difference induced by the initial conditions dies away very
quickly, and we always settle into the same steady periodic motion.
8.5.2 The Lorenz system
To find chaotic behavior in two dimensions, we must study forced, or non-autonomous,
systems such as the Duffing equation. The Poincaré–Bendixson Theorem says that a
solution to an autonomous two-dimensional system that exists for all time in the future
and does not go towards infinity is periodic or tends towards a periodic solution. Hardly
the chaotic behavior we are looking for.
In three dimensions, even autonomous systems can be chaotic. Let us very briefly
return to the Lorenz system
\[ x' = -10x + 10y, \qquad y' = 28x - y - xz, \qquad z' = -\tfrac{8}{3}z + xy. \]
The Lorenz system is an autonomous system in three dimensions exhibiting chaotic behavior.
See Figure 8.15 for a sample trajectory, which is now a curve in three-dimensional space.
Figure 8.15: A trajectory in the Lorenz system.
The solutions tend to an attractor in space, the so-called Lorenz attractor. In this case no
strobing is necessary. Again, we cannot quite see the attractor itself, but if we try to follow a
solution for long enough, as in the figure, we get a pretty good picture of what the attractor
looks like. The Lorenz attractor is also a strange attractor and has a complicated fractal
structure. And, just as for the Duffing equation, we do not want to draw the whole
trajectory; rather, we start drawing it after a while, once it is close to the attractor.
The path of the trajectory is not simply a repeating figure-eight. The trajectory spins
some seemingly random number of times on the left, then spins a number of times on the
right, and so on. As this system arose in weather prediction, one can perhaps imagine a few
days of warm weather and then a few days of cold weather, where it is not easy to predict
when the weather will change, just as it is not really easy to predict far in advance when
the solution will jump onto the other side. See Figure 8.16 for a plot of the ๐‘ฅ component of
the solution drawn above. A negative ๐‘ฅ corresponds to the left “loop” and a positive ๐‘ฅ
corresponds to the right “loop”.
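One can count these jumps with a few lines of code. Since x does not cross zero while the trajectory circles a single loop, counting the sign changes of x(t) roughly counts the jumps. A sketch, with our own choice of initial condition, time span, and tolerances:

```python
# Count how often the solution jumps between the left loop (x < 0) and
# the right loop (x > 0) by counting the sign changes of x(t).
# A sketch; the initial condition and times are illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, u):
    x, y, z = u
    return [-10*x + 10*y, 28*x - y - x*z, -(8/3)*z + x*y]

ts = np.linspace(0, 15, 15001)
sol = solve_ivp(lorenz, (0, 15), [1, 1, 1], t_eval=ts,
                rtol=1e-10, atol=1e-12)

x = sol.y[0]
jumps = np.where(np.diff(np.sign(x)) != 0)[0]   # indices where x changes sign
print("jumps between the loops:", len(jumps))
print("first few jump times:", np.round(ts[jumps[:6]], 2))
```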
Most of the mathematics we studied in this book is quite classical and well understood.
On the other hand, chaos, including the Lorenz system, continues to be the subject of
current research. Furthermore, chaos has found applications not just in the sciences, but
also in art.
Figure 8.16: Graph of the x(t) component of the solution.
8.5.3 Exercises
Exercise 8.5.1: For the non-chaotic equation \(x'' + 2px' + \omega_0^2 x = \frac{F_0}{m}\cos(\omega t)\), suppose we strobe
with frequency ω as we mentioned above. Use the known steady periodic solution to find precisely
the point which is the attractor for the Poincaré section.
Exercise 8.5.2 (project): A simple fractal attractor can be drawn via the following chaos game.
Draw the three vertices of a triangle and label them, say p₁, p₂, and p₃. Draw some random point p
(it does not have to be one of the three points above). Roll a die to pick one of p₁, p₂, or p₃ at random
(for example, 1 and 4 mean p₁; 2 and 5 mean p₂; and 3 and 6 mean p₃). Suppose we picked p₂; then
let p_new be the point exactly halfway between p and p₂. Draw this point and let p now refer to this
new point p_new. Rinse, repeat. Try to be precise and draw as many iterations as possible. Your
points will be attracted to the so-called Sierpinski triangle. A computer was used to run the game
for 10,000 iterations to obtain the picture in Figure 8.17.
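For the computer-assisted version, here is a sketch of the game in Python (the vertex coordinates and starting point are our arbitrary choices):

```python
# The chaos game: repeatedly move halfway towards a randomly chosen
# vertex of a triangle.  A sketch; the vertex coordinates and starting
# point are arbitrary.
import random

p1, p2, p3 = (0.0, 0.0), (1.0, 0.0), (0.5, 0.75)
vertices = [p1, p2, p3]
p = (0.2, 0.3)                        # any starting point
points = []
for _ in range(10000):
    vx, vy = random.choice(vertices)  # the roll of the die
    p = ((p[0] + vx)/2, (p[1] + vy)/2)
    points.append(p)
# Plotting `points` (for example with matplotlib) shows the Sierpinski
# triangle of Figure 8.17.
```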
Figure 8.17: 10,000 iterations of the chaos game producing the Sierpinski triangle.
Exercise 8.5.3 (project): Construct the double pendulum described in the text with a string and
two nuts (or heavy beads). Play around with the position of the middle nut, and perhaps use nuts of
different weights. Describe what you find.
Exercise 8.5.4 (computer project): Using computer software (such as Matlab, Octave, or perhaps
even a spreadsheet), plot the solution of the given forced Duffing equation with Euler's method.
Plot the solution for t from 0 to 100 with several different (small) step sizes. Discuss.
Exercise 8.5.101: Find critical points of the Lorenz system and the associated linearizations.
Appendix A
Linear algebra
A.1 Vectors, mappings, and matrices
Note: 2 lectures
In real life, there is most often more than one variable. We wish to organize dealing with
multiple variables in a consistent manner, and in particular organize dealing with linear
equations and linear mappings, as those are both rather useful and rather easy to handle.
Mathematicians joke that “to an engineer every problem is linear, and everything is a
matrix.” And well, they (the engineers) are not wrong. Quite often, solving an engineering
problem is figuring out the right finite-dimensional linear problem to solve, which is
then solved with some matrix manipulation. Most importantly, linear problems are the
ones that we know how to solve, and we have many tools to solve them. For engineers,
mathematicians, physicists, and anybody else in a technical field, it is absolutely vital to
learn linear algebra.
As motivation, suppose we wish to solve
๐‘ฅ − ๐‘ฆ = 2,
2๐‘ฅ + ๐‘ฆ = 4,
for ๐‘ฅ and ๐‘ฆ. That is, we desire numbers ๐‘ฅ and ๐‘ฆ such that the two equations are satisfied.
Let us perhaps start by adding the equations together to find
๐‘ฅ + 2๐‘ฅ − ๐‘ฆ + ๐‘ฆ = 2 + 4,
or
3๐‘ฅ = 6.
In other words, ๐‘ฅ = 2. Once we have that, we plug ๐‘ฅ = 2 into the first equation to find
2 − y = 2, so y = 0. OK, that was easy. What is all this fuss about linear equations? Well,
try doing this if you have 5000 unknowns†. Also, we may have such equations not just of
numbers, but of functions and derivatives of functions in differential equations. Clearly we
need a systematic way of doing things. A nice consequence of making things systematic
† One of the downsides of making everything look like a linear problem is that the number of variables tends to become huge.
and simpler to write down is that it becomes easier to have computers do the work for us.
Computers are rather stupid; they do not think, but they are very good at doing lots of repetitive
tasks precisely, as long as we figure out a systematic way for them to perform the tasks.
A.1.1 Vectors and operations on vectors
Consider ๐‘› real numbers as an ๐‘›-tuple:
(๐‘ฅ1 , ๐‘ฅ2 , . . . , ๐‘ฅ ๐‘› ).
The set of such ๐‘›-tuples is the so-called ๐‘›-dimensional space, often denoted by โ„ ๐‘› . Sometimes
we call this the ๐‘›-dimensional euclidean space∗ . In two dimensions, โ„2 is called the cartesian
plane† . Each such ๐‘›-tuple represents a point in the ๐‘›-dimensional space. For example, the
point (1, 2) in the plane โ„2 is one unit to the right and two units up from the origin.
When we do algebra with these n-tuples of numbers we call them vectors‡. Mathematicians
are keen on separating what is a vector and what is a point of the space or in the
plane, and it turns out to be an important distinction; however, for the purposes of linear
algebra we can think of everything as being represented by a vector. A way to think of a
vector, which is especially useful in calculus and differential equations, is an arrow. It is an
object that has a direction and a magnitude. For instance, the vector (1, 2) is the arrow from
the origin to the point (1, 2) in the plane. The magnitude is the length of the arrow. See
Figure A.1. If we think of vectors as arrows, the arrow doesn’t always have to start at the
origin. If we do move it around, however, it should always keep the same direction and
the same magnitude.
Figure A.1: The vector (1, 2) drawn as an arrow from the origin to the point (1, 2).
As vectors are arrows, when we want to give a name to a vector, we draw a little arrow
above it: \(\vec{x}\).
∗ Named after the ancient Greek mathematician Euclid of Alexandria (around 300 BC), possibly the most famous of mathematicians; even small towns often have Euclid Street or Euclid Avenue.
† Named after the French mathematician René Descartes (1596–1650). It is “cartesian” as his name in Latin is Renatus Cartesius.
‡ A common notation to distinguish vectors from points is to write (1, 2) for the point and ⟨1, 2⟩ for the vector. We write both as (1, 2).
Another popular notation is x, although we will use the little arrows. It may be easy to
write a bold letter in a book, but it is not so easy to write it by hand on paper or on the
board. Mathematicians often don’t even write the arrows. A mathematician would write ๐‘ฅ
and just remember that ๐‘ฅ is a vector and not a number. Just like you remember that Bob is
your uncle, and you don’t have to keep repeating “Uncle Bob” and you can just say “Bob.”
In this book, however, we will call Bob “Uncle Bob” and write vectors with the little arrows.
The magnitude can be computed using the Pythagorean theorem. The vector (1, 2)
drawn in the figure has magnitude \(\sqrt{1^2 + 2^2} = \sqrt{5}\). The magnitude is denoted by \(\|\vec{x}\|\), and,
in any number of dimensions, it can be computed in the same way:
\[ \|\vec{x}\| = \|(x_1, x_2, \ldots, x_n)\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}. \]
For reasons that will become clear in the next section, we often write vectors as so-called
column vectors:
๏ฃฎ ๐‘ฅ1 ๏ฃน
๏ฃฏ ๏ฃบ
๏ฃฏ ๐‘ฅ2 ๏ฃบ
๐‘ฅ® = ๏ฃฏ๏ฃฏ .. ๏ฃบ๏ฃบ .
.
๏ฃฏ ๏ฃบ
๏ฃฏ๐‘ฅ ๏ฃบ
๏ฃฐ ๐‘›๏ฃป
Don’t worry. It is just a different way of writing the same thing. For example, the vector
(1, 2) can be written as
1
.
2
The fact that we write arrows above vectors allows us to write several vectors ๐‘ฅ®1 , ๐‘ฅ®2 ,
etc., without confusing these with the components of some other vector ๐‘ฅ®.
So where is the algebra from linear algebra? Well, arrows can be added, subtracted, and
multiplied by numbers. First we consider addition. If we have two arrows, we simply move
along one, and then along the other. See Figure A.2.
Figure A.2: Adding the vectors (1, 2), drawn dotted, and (2, −3), drawn dashed. The result, (3, −1), is drawn as a solid arrow.
It is rather easy to see what it does to the numbers that represent the vectors. Suppose
we want to add (1, 2) to (2, −3) as in the figure. We travel along (1, 2) and then we travel
along (2, −3). What we did was travel one unit right, two units up, and then two
units right and three units down (the negative three). That means that we ended up
at (1 + 2, 2 + (−3)) = (3, −1). And that's how addition always works:
๏ฃฎ ๐‘ฅ 1 ๏ฃน ๏ฃฎ ๐‘ฆ1 ๏ฃน ๏ฃฎ ๐‘ฅ 1 + ๐‘ฆ1 ๏ฃน
๏ฃบ
๏ฃฏ ๏ฃบ ๏ฃฏ ๏ฃบ ๏ฃฏ
๏ฃฏ ๐‘ฅ 2 ๏ฃบ ๏ฃฏ ๐‘ฆ2 ๏ฃบ ๏ฃฏ ๐‘ฅ 2 + ๐‘ฆ2 ๏ฃบ
๏ฃฏ . ๏ฃบ+๏ฃฏ . ๏ฃบ = ๏ฃฏ . ๏ฃบ.
๏ฃฏ .. ๏ฃบ ๏ฃฏ .. ๏ฃบ ๏ฃฏ .. ๏ฃบ
๏ฃบ
๏ฃฏ ๏ฃบ ๏ฃฏ ๏ฃบ ๏ฃฏ
๏ฃฏ๐‘ฅ ๏ฃบ ๏ฃฏ ๐‘ฆ ๏ฃบ ๏ฃฏ๐‘ฅ + ๐‘ฆ ๏ฃบ
๐‘›
๐‘›
๐‘›
๐‘›
๏ฃป
๏ฃฐ ๏ฃป ๏ฃฐ ๏ฃป ๏ฃฐ
Subtracting is similar. What ๐‘ฅ® − ๐‘ฆ® means visually is that we first travel along ๐‘ฅ®, and then
we travel backwards along ๐‘ฆ®. See Figure A.3. It is like adding ๐‘ฅ® + (− ๐‘ฆ®) where − ๐‘ฆ® is the
arrow we obtain by erasing the arrow head from one side and drawing it on the other side,
that is, we reverse the direction. In terms of the numbers, we simply go backwards both
horizontally and vertically, so we negate both numbers. For instance, if ๐‘ฆ® is (−2, 1), then
− ๐‘ฆ® is (2, −1).
Figure A.3: Subtraction, the vector (1, 2), drawn dotted, minus (−2, 1), drawn dashed. The result, (3, 1), is drawn as a solid arrow.
Another intuitive thing to do to a vector is to scale it. We represent this by multiplication
of a number with a vector. Because of this, when we wish to distinguish between vectors
and numbers, we call the numbers scalars. For example, suppose we want to travel three
times further. If the vector is (1, 2), travelling 3 times further means going 3 units to the
right and 6 units up, so we get the vector (3, 6). We just multiply each number in the vector
by 3. If ๐›ผ is a number, then
๏ฃฎ ๐‘ฅ1 ๏ฃน ๏ฃฎ ๐›ผ๐‘ฅ1 ๏ฃน
๏ฃฏ ๏ฃบ ๏ฃฏ
๏ฃบ
๏ฃฏ ๐‘ฅ2 ๏ฃบ ๏ฃฏ ๐›ผ๐‘ฅ2 ๏ฃบ
๐›ผ ๏ฃฏ๏ฃฏ .. ๏ฃบ๏ฃบ = ๏ฃฏ๏ฃฏ .. ๏ฃบ๏ฃบ .
.
.
๏ฃฏ ๏ฃบ
๏ฃฏ๐‘ฅ ๏ฃบ
๏ฃฐ ๐‘›๏ฃป
๏ฃฏ
๏ฃบ
๏ฃฏ๐›ผ๐‘ฅ ๏ฃบ
๏ฃฐ ๐‘›๏ฃป
Scaling (by a positive number) multiplies the magnitude and leaves the direction untouched.
The magnitude of (1, 2) is \(\sqrt{5}\). The magnitude of 3 times (1, 2), that is, of (3, 6), is \(3\sqrt{5}\).
When the scalar is negative, then when we multiply a vector by it, the vector is not only
scaled, but it also switches direction. Multiplying (1, 2) by −3 means we should go 3 times
further but in the opposite direction, so 3 units to the left and 6 units down, or in other
words, (−3, −6). As we mentioned above, − ๐‘ฆ® is a reverse of ๐‘ฆ®, and this is the same as (−1) ๐‘ฆ®.
In Figure A.4, you can see a couple of examples of what scaling a vector means visually.
Figure A.4: A vector x⃗, the vector 2x⃗ (same direction, double the magnitude), and the vector −1.5x⃗ (opposite direction, 1.5 times the magnitude).
We put all of these operations together to work out more complicated expressions. Let
us compute a small example:
\[ 3\begin{bmatrix}1\\2\end{bmatrix} + 2\begin{bmatrix}-4\\-1\end{bmatrix} - 3\begin{bmatrix}-2\\2\end{bmatrix} = \begin{bmatrix} 3(1)+2(-4)-3(-2) \\ 3(2)+2(-1)-3(2) \end{bmatrix} = \begin{bmatrix} 1 \\ -2 \end{bmatrix}. \]
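The same computation in software, with numpy arrays standing in for vectors (a minimal sketch, assuming numpy is available):

```python
# The computation above, with numpy arrays standing in for vectors.
import numpy as np

print(3*np.array([1, 2]) + 2*np.array([-4, -1]) - 3*np.array([-2, 2]))
# prints [ 1 -2]
```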
As we said, a vector is a direction and a magnitude. Magnitude is easy to represent: it is
just a number. The direction is usually given by a vector with magnitude one. We call such
a vector a unit vector. That is, u⃗ is a unit vector when ‖u⃗‖ = 1. For instance, the vectors
(1, 0), \((1/\sqrt{2}, 1/\sqrt{2})\), and (0, −1) are all unit vectors.
To represent the direction of a vector x⃗, we need to find the unit vector in the same
direction. To do so, we simply rescale x⃗ by the reciprocal of the magnitude, that is, \(\frac{1}{\|\vec{x}\|}\vec{x}\), or
more concisely \(\frac{\vec{x}}{\|\vec{x}\|}\).
As an example, the unit vector in the direction of (1, 2) is the vector
\[ \frac{1}{\sqrt{1^2 + 2^2}}(1, 2) = \left(\frac{1}{\sqrt{5}}, \frac{2}{\sqrt{5}}\right). \]
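In code, normalizing a vector is exactly this rescaling (a sketch using numpy, where `np.linalg.norm` computes the magnitude):

```python
# Magnitude and the unit vector in the direction of (1, 2).
import numpy as np

x = np.array([1, 2])
magnitude = np.linalg.norm(x)    # sqrt(1**2 + 2**2) = sqrt(5)
unit = x / magnitude             # (1/sqrt(5), 2/sqrt(5))
print(magnitude, unit, np.linalg.norm(unit))   # the last value is 1.0
```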
A.1.2 Linear mappings and matrices
A vector-valued function ๐น is a rule that takes a vector ๐‘ฅ® and returns another vector ๐‘ฆ®. For
example, ๐น could be a scaling that doubles the size of vectors:
๐น( ๐‘ฅ®) = 2๐‘ฅ®.
Applied to, say, (1, 3) we get
\[ F\left( \begin{bmatrix}1\\3\end{bmatrix} \right) = 2\begin{bmatrix}1\\3\end{bmatrix} = \begin{bmatrix}2\\6\end{bmatrix}. \]
If ๐น is a mapping that takes vectors in โ„2 to โ„2 (such as the above), we write
๐น : โ„2 → โ„2 .
The words function and mapping are used rather interchangeably, although more often than
not, mapping is used when talking about a vector-valued function, and the word function is
often used when the function is scalar-valued.
A beginning student of mathematics (and many a seasoned mathematician) who sees
an expression such as
\[ f(3x + 8y) \]
yearns to write
\[ 3f(x) + 8f(y). \]
After all, who hasn't wanted to write \(\sqrt{x + y} = \sqrt{x} + \sqrt{y}\) or something like that at some
point in their mathematical lives? Wouldn't life be simple if we could do that? Of course
we can't always do that (for example, not with the square roots!) But there are many other
functions where we can do exactly the above. Such functions are called linear.
A mapping ๐น : โ„ ๐‘› → โ„ ๐‘š is called linear if
๐น(๐‘ฅ® + ๐‘ฆ®) = ๐น(๐‘ฅ®) + ๐น( ๐‘ฆ®),
for any vectors ๐‘ฅ® and ๐‘ฆ®, and also
๐น(๐›ผ ๐‘ฅ®) = ๐›ผ๐น(๐‘ฅ®),
for any scalar ๐›ผ. The ๐น we defined above that doubles the size of all vectors is linear. Let
us check:
๐น(๐‘ฅ® + ๐‘ฆ®) = 2(๐‘ฅ® + ๐‘ฆ®) = 2๐‘ฅ® + 2 ๐‘ฆ® = ๐น(๐‘ฅ®) + ๐น( ๐‘ฆ®),
and also
๐น(๐›ผ ๐‘ฅ®) = 2๐›ผ ๐‘ฅ® = ๐›ผ2๐‘ฅ® = ๐›ผ๐น(๐‘ฅ®).
We also call a linear function a linear transformation. If you want to be really fancy and
impress your friends, you can call it a linear operator. When a mapping is linear we often do
not write the parentheses. We write simply
๐น ๐‘ฅ®
instead of ๐น(๐‘ฅ®). We do this because linearity means that the mapping ๐น behaves like
multiplying ๐‘ฅ® by “something.” That something is a matrix.
A matrix is an m × n array of numbers (m rows and n columns). A 3 × 5 matrix is
\[ A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} & a_{15} \\ a_{21} & a_{22} & a_{23} & a_{24} & a_{25} \\ a_{31} & a_{32} & a_{33} & a_{34} & a_{35} \end{bmatrix}. \]
The numbers \(a_{ij}\) are called elements or entries.
A column vector is simply an ๐‘š × 1 matrix. Similarly to a column vector there is also a
row vector, which is a 1 × ๐‘› matrix. If we have an ๐‘› × ๐‘› matrix, then we say that it is a square
matrix.
Now how does a matrix ๐ด relate to a linear mapping? Well a matrix tells you where
certain special vectors go. Let’s give a name to those certain vectors. The standard basis
vectors of โ„ ๐‘› are
\[ \vec{e}_1 = \begin{bmatrix}1\\0\\0\\\vdots\\0\end{bmatrix}, \quad \vec{e}_2 = \begin{bmatrix}0\\1\\0\\\vdots\\0\end{bmatrix}, \quad \vec{e}_3 = \begin{bmatrix}0\\0\\1\\\vdots\\0\end{bmatrix}, \quad \cdots, \quad \vec{e}_n = \begin{bmatrix}0\\0\\0\\\vdots\\1\end{bmatrix}. \]
In โ„3 these vectors are
๏ฃฎ1๏ฃน
๏ฃฏ ๏ฃบ
๐‘’®1 = ๏ฃฏ๏ฃฏ0๏ฃบ๏ฃบ ,
๏ฃฏ0๏ฃบ
๏ฃฐ ๏ฃป
๏ฃฎ0๏ฃน
๏ฃฏ ๏ฃบ
๐‘’®2 = ๏ฃฏ๏ฃฏ1๏ฃบ๏ฃบ ,
๏ฃฏ0๏ฃบ
๏ฃฐ ๏ฃป
๏ฃฎ0๏ฃน
๏ฃฏ ๏ฃบ
๐‘’®3 = ๏ฃฏ๏ฃฏ0๏ฃบ๏ฃบ .
๏ฃฏ1๏ฃบ
๏ฃฐ ๏ฃป
You may recall from calculus of several variables that these are sometimes called \(\vec{\imath}\), \(\vec{\jmath}\), \(\vec{k}\).
The reason these are called a basis is that every other vector can be written as a linear
combination of them. For example, in ℝ³ the vector (4, 5, 6) can be written as
\[ 4\vec{e}_1 + 5\vec{e}_2 + 6\vec{e}_3 = 4\begin{bmatrix}1\\0\\0\end{bmatrix} + 5\begin{bmatrix}0\\1\\0\end{bmatrix} + 6\begin{bmatrix}0\\0\\1\end{bmatrix} = \begin{bmatrix}4\\5\\6\end{bmatrix}. \]
So how does a matrix represent a linear mapping? Well, the columns of the matrix are
the vectors where ๐ด as a linear mapping takes ๐‘’®1 , ๐‘’®2 , etc. For instance, consider
\[ M = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}. \]
As a linear mapping \(M \colon \mathbb{R}^2 \to \mathbb{R}^2\) takes \(\vec{e}_1 = \begin{bmatrix}1\\0\end{bmatrix}\) to \(\begin{bmatrix}1\\3\end{bmatrix}\) and \(\vec{e}_2 = \begin{bmatrix}0\\1\end{bmatrix}\) to \(\begin{bmatrix}2\\4\end{bmatrix}\). In other words,
\[ M\vec{e}_1 = \begin{bmatrix}1&2\\3&4\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}1\\3\end{bmatrix}, \qquad M\vec{e}_2 = \begin{bmatrix}1&2\\3&4\end{bmatrix}\begin{bmatrix}0\\1\end{bmatrix} = \begin{bmatrix}2\\4\end{bmatrix}. \]
More generally, if we have an ๐‘› × ๐‘š matrix ๐ด, that is, we have ๐‘› rows and ๐‘š columns,
then the mapping ๐ด : โ„ ๐‘š → โ„ ๐‘› takes ๐‘’®๐‘— to the ๐‘— th column of ๐ด. For example,
๏ฃฎ ๐‘Ž11 ๐‘Ž12 ๐‘Ž13 ๐‘Ž14 ๐‘Ž15 ๏ฃน
๏ฃฏ
๏ฃบ
๐ด = ๏ฃฏ๏ฃฏ ๐‘Ž 21 ๐‘Ž 22 ๐‘Ž 23 ๐‘Ž 24 ๐‘Ž25 ๏ฃบ๏ฃบ
๏ฃฏ ๐‘Ž31 ๐‘Ž32 ๐‘Ž33 ๐‘Ž34 ๐‘Ž35 ๏ฃบ
๏ฃฐ
๏ฃป
392
APPENDIX A. LINEAR ALGEBRA
represents a mapping from โ„5 to โ„3 that does
๏ฃฎ ๐‘Ž11 ๏ฃน
๏ฃฏ ๏ฃบ
๐ด®๐‘’1 = ๏ฃฏ๏ฃฏ ๐‘Ž 21 ๏ฃบ๏ฃบ ,
๏ฃฏ ๐‘Ž31 ๏ฃบ
๏ฃฐ ๏ฃป
๏ฃฎ ๐‘Ž12 ๏ฃน
๏ฃฏ ๏ฃบ
๐ด®๐‘’2 = ๏ฃฏ๏ฃฏ ๐‘Ž 22 ๏ฃบ๏ฃบ ,
๏ฃฏ ๐‘Ž32 ๏ฃบ
๏ฃฐ ๏ฃป
๏ฃฎ ๐‘Ž13 ๏ฃน
๏ฃฏ ๏ฃบ
๐ด®๐‘’3 = ๏ฃฏ๏ฃฏ ๐‘Ž 23 ๏ฃบ๏ฃบ ,
๏ฃฏ ๐‘Ž33 ๏ฃบ
๏ฃฐ ๏ฃป
๏ฃฎ ๐‘Ž14 ๏ฃน
๏ฃฏ ๏ฃบ
๐ด®๐‘’4 = ๏ฃฏ๏ฃฏ ๐‘Ž 24 ๏ฃบ๏ฃบ ,
๏ฃฏ ๐‘Ž34 ๏ฃบ
๏ฃฐ ๏ฃป
๏ฃฎ ๐‘Ž15 ๏ฃน
๏ฃฏ ๏ฃบ
๐ด®๐‘’5 = ๏ฃฏ๏ฃฏ ๐‘Ž25 ๏ฃบ๏ฃบ .
๏ฃฏ ๐‘Ž35 ๏ฃบ
๏ฃฐ ๏ฃป
What about another vector ๐‘ฅ®, which isn’t in the standard basis? Where does it go? We
use linearity. First, we write the vector as a linear combination of the standard basis vectors:
\[ \vec{x} = \begin{bmatrix}x_1\\x_2\\x_3\\x_4\\x_5\end{bmatrix} = x_1\begin{bmatrix}1\\0\\0\\0\\0\end{bmatrix} + x_2\begin{bmatrix}0\\1\\0\\0\\0\end{bmatrix} + x_3\begin{bmatrix}0\\0\\1\\0\\0\end{bmatrix} + x_4\begin{bmatrix}0\\0\\0\\1\\0\end{bmatrix} + x_5\begin{bmatrix}0\\0\\0\\0\\1\end{bmatrix} = x_1\vec{e}_1 + x_2\vec{e}_2 + x_3\vec{e}_3 + x_4\vec{e}_4 + x_5\vec{e}_5. \]
Then
\[ A\vec{x} = A(x_1\vec{e}_1 + x_2\vec{e}_2 + x_3\vec{e}_3 + x_4\vec{e}_4 + x_5\vec{e}_5) = x_1 A\vec{e}_1 + x_2 A\vec{e}_2 + x_3 A\vec{e}_3 + x_4 A\vec{e}_4 + x_5 A\vec{e}_5. \]
If we know where ๐ด takes all the basis vectors, we know where it takes all vectors.
Suppose ๐‘€ is the 2 × 2 matrix from above, then
\[ M\begin{bmatrix}-2\\0.1\end{bmatrix} = \begin{bmatrix}1&2\\3&4\end{bmatrix}\begin{bmatrix}-2\\0.1\end{bmatrix} = -2\begin{bmatrix}1\\3\end{bmatrix} + 0.1\begin{bmatrix}2\\4\end{bmatrix} = \begin{bmatrix}-1.8\\-5.6\end{bmatrix}. \]
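A quick numerical check of both facts, that a matrix takes e⃗ⱼ to its jth column and that M x⃗ is the matching combination of columns (a sketch, assuming numpy):

```python
# A matrix takes e_j to its j-th column, and M @ x is the matching
# linear combination of the columns.
import numpy as np

M = np.array([[1, 2],
              [3, 4]])
print(M @ np.array([1, 0]))        # [1 3], the first column
print(M @ np.array([0, 1]))        # [2 4], the second column
x = np.array([-2, 0.1])
print(M @ x)                       # [-1.8 -5.6]
print(-2*M[:, 0] + 0.1*M[:, 1])    # the same combination of columns
```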
Every linear mapping from ℝᵐ to ℝⁿ can be represented by an n × m matrix. You
just figure out where it takes the standard basis vectors. Conversely, every n × m matrix
represents a linear mapping. Hence, we may think of matrices as linear mappings, and of
linear mappings as matrices.
Or can we? In this book we study mostly linear differential operators, and linear
differential operators are linear mappings, although they are not acting on โ„ ๐‘› , but on an
infinite-dimensional space of functions:
๐ฟ ๐‘“ = ๐‘”.
For a function f we get a function g, and L is linear in the sense that
\[ L(f + h) = Lf + Lh \qquad\text{and}\qquad L(\alpha f) = \alpha Lf \]
for any number (scalar) α and all functions f and h.
So the answer is: not really. But if we consider vectors in the finite-dimensional spaces
ℝⁿ, then yes, every linear mapping is a matrix. We have mentioned at the beginning of
this section, that we can “make everything a vector.” That’s not strictly true, but it is true
approximately. Those “infinite-dimensional” spaces of functions can be approximated by a
finite-dimensional space, and then linear operators are just matrices. So approximately,
this is true. And as far as actual computations that we can do on a computer, we can work
only with finitely many dimensions anyway. If you ask a computer or your calculator to
plot a function, it samples the function at finitely many points and then connects the dots∗ .
It does not actually give you infinitely many values. The way that you have been using the
computer or your calculator so far has already been a certain approximation of the space
of functions by a finite-dimensional space.
To end the section, we notice how A x⃗ can be written more succinctly. Suppose
\[ A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix} \quad\text{and}\quad \vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}. \]
Then
\[ A\vec{x} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} a_{11}x_1 + a_{12}x_2 + a_{13}x_3 \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 \end{bmatrix}. \]
For example,
\[ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 2 \\ -1 \end{bmatrix} = \begin{bmatrix} 1\cdot2 + 2\cdot(-1) \\ 3\cdot2 + 4\cdot(-1) \end{bmatrix} = \begin{bmatrix} 0 \\ 2 \end{bmatrix}. \]
That is, you take the entries in a row of the matrix, you multiply them by the entries in
your vector, you add things up, and that's the corresponding entry in the resulting vector.
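That recipe, spelled out in code (a sketch assuming numpy; the loop exists only to make the row-by-row rule explicit, since `A @ x` does the same thing):

```python
# The row-by-row recipe for A times x, made explicit with a loop.
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
x = np.array([2, -1])
rows = np.array([A[i, :] @ x for i in range(A.shape[0])])  # dot each row with x
print(rows)     # [0 2]
print(A @ x)    # numpy's built-in product gives the same answer
```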
A.1.3 Exercises
Exercise A.1.1: On a piece of graph paper draw the vectors:
a) \(\begin{bmatrix}2\\5\end{bmatrix}\)   b) \(\begin{bmatrix}-2\\-4\end{bmatrix}\)   c) (3, −4)
Exercise A.1.2: On a piece of graph paper draw the vector (1, 2) starting at (based at) the given
point:
a) based at (0, 0)
b) based at (1, 2)
c) based at (0, −1)
Exercise A.1.3: On a piece of graph paper draw the following operations. Draw and label the
vectors involved in the operations as well as the result:
a) \(\begin{bmatrix}1\\-4\end{bmatrix} + \begin{bmatrix}2\\3\end{bmatrix}\)   b) \(\begin{bmatrix}-3\\2\end{bmatrix} - \begin{bmatrix}1\\3\end{bmatrix}\)   c) \(3\begin{bmatrix}2\\1\end{bmatrix}\)
Exercise A.1.4: Compute the magnitude of
a) \(\begin{bmatrix}7\\2\end{bmatrix}\)   b) \(\begin{bmatrix}-2\\3\\1\end{bmatrix}\)   c) (1, 3, −4)
∗ If you have ever used Matlab, you may have noticed that to plot a function, we take a vector of inputs, ask Matlab to compute the corresponding vector of values of the function, and then we ask it to plot the result.
Exercise A.1.5: Compute
a) \(\begin{bmatrix}2\\3\end{bmatrix} + \begin{bmatrix}7\\-8\end{bmatrix}\)   b) \(\begin{bmatrix}-2\\3\end{bmatrix} - \begin{bmatrix}6\\-4\end{bmatrix}\)   c) \(-\begin{bmatrix}-3\\2\end{bmatrix}\)   d) \(4\begin{bmatrix}-1\\5\end{bmatrix}\)   e) \(5\begin{bmatrix}1\\0\end{bmatrix} + 9\begin{bmatrix}0\\1\end{bmatrix}\)   f) \(3\begin{bmatrix}1\\-8\end{bmatrix} - 2\begin{bmatrix}3\\-1\end{bmatrix}\)
Exercise A.1.6: Find the unit vector in the direction of the given vector
a) \(\begin{bmatrix}1\\-3\end{bmatrix}\)   b) \(\begin{bmatrix}2\\1\\-1\end{bmatrix}\)   c) (3, 1, −2)
Exercise A.1.7: If ๐‘ฅ® = (1, 2) and ๐‘ฆ® are added together, we find ๐‘ฅ® + ๐‘ฆ® = (0, 2). What is ๐‘ฆ®?
Exercise A.1.8: Write (1, 2, 3) as a linear combination of the standard basis vectors ๐‘’®1 , ๐‘’®2 , and ๐‘’®3 .
Exercise A.1.9: If the magnitude of ๐‘ฅ® is 4, what is the magnitude of
a) 0๐‘ฅ®
b) 3๐‘ฅ®
c) −๐‘ฅ®
d) −4๐‘ฅ®
e) ๐‘ฅ® + ๐‘ฅ®
f) ๐‘ฅ® − ๐‘ฅ®
Exercise A.1.10: Suppose a linear mapping ๐น : โ„2 → โ„2 takes (1, 0) to (2, −1) and it takes (0, 1)
to (3, 3). Where does it take
a) (1, 1)
b) (2, 0)
c) (2, −1)
Exercise A.1.11: Suppose a linear mapping ๐น : โ„3 → โ„2 takes (1, 0, 0) to (2, 1), it takes (0, 1, 0)
to (3, 4), and it takes (0, 0, 1) to (5, 6). Write down the matrix representing the mapping ๐น.
Exercise A.1.12: Suppose that a mapping ๐น : โ„2 → โ„2 takes (1, 0) to (1, 2), (0, 1) to (3, 4), and
(1, 1) to (0, −1). Explain why ๐น is not linear.
Exercise A.1.13 (challenging): Let ℝ³ represent the space of quadratic polynomials in t: a point
(a₀, a₁, a₂) in ℝ³ represents the polynomial a₀ + a₁t + a₂t². Consider the derivative \(\frac{d}{dt}\) as a mapping
of ℝ³ to ℝ³, and note that \(\frac{d}{dt}\) is linear. Write down \(\frac{d}{dt}\) as a 3 × 3 matrix.
Exercise A.1.101: Compute the magnitude of
a) \(\begin{bmatrix}1\\3\end{bmatrix}\)   b) \(\begin{bmatrix}2\\3\\-1\end{bmatrix}\)   c) (−2, 1, −2)
Exercise A.1.102: Find the unit vector in the direction of the given vector
a) \(\begin{bmatrix}-1\\1\end{bmatrix}\)   b) \(\begin{bmatrix}1\\-1\\2\end{bmatrix}\)   c) (2, −5, 2)
Exercise A.1.103: Compute
a) \(\begin{bmatrix}3\\1\end{bmatrix} + \begin{bmatrix}6\\-3\end{bmatrix}\)   b) \(\begin{bmatrix}-1\\2\end{bmatrix} - \begin{bmatrix}2\\-1\end{bmatrix}\)   c) \(-\begin{bmatrix}-5\\3\end{bmatrix}\)   d) \(2\begin{bmatrix}-2\\4\end{bmatrix}\)   e) \(3\begin{bmatrix}1\\0\end{bmatrix} + 7\begin{bmatrix}0\\1\end{bmatrix}\)   f) \(2\begin{bmatrix}2\\-3\end{bmatrix} - 6\begin{bmatrix}2\\-1\end{bmatrix}\)
Exercise A.1.104: If the magnitude of ๐‘ฅ® is 5, what is the magnitude of
a) 4๐‘ฅ®
b) −2๐‘ฅ®
c) −4๐‘ฅ®
Exercise A.1.105: Suppose a linear mapping ๐น : โ„2 → โ„2 takes (1, 0) to (1, −1) and it takes (0, 1)
to (2, 0). Where does it take
a) (1, 1)
b) (0, 2)
c) (1, −1)
A.2 Matrix algebra
Note: 2–3 lectures
A.2.1 One-by-one matrices
Let us motivate what we want to achieve with matrices. Real-valued linear mappings of the
real line, linear functions that eat numbers and spit out numbers, are just multiplications
by a number. Consider a mapping defined by multiplying by a number. Let’s call this
number ๐›ผ. The mapping then takes ๐‘ฅ to ๐›ผ๐‘ฅ. We can add such mappings: If we have another
mapping ๐›ฝ, then
๐›ผ๐‘ฅ + ๐›ฝ๐‘ฅ = (๐›ผ + ๐›ฝ)๐‘ฅ.
We get a new mapping ๐›ผ + ๐›ฝ that multiplies ๐‘ฅ by, well, ๐›ผ + ๐›ฝ. If ๐ท is a mapping that doubles
its input, ๐ท๐‘ฅ = 2๐‘ฅ, and ๐‘‡ is a mapping that triples, ๐‘‡๐‘ฅ = 3๐‘ฅ, then ๐ท + ๐‘‡ is a mapping that
multiplies by 5, (๐ท + ๐‘‡)๐‘ฅ = 5๐‘ฅ.
Similarly we can compose such mappings, that is, we could apply one and then the other.
We take ๐‘ฅ, we run it through the first mapping ๐›ผ to get ๐›ผ times ๐‘ฅ, then we run ๐›ผ๐‘ฅ through
the second mapping ๐›ฝ. In other words,
๐›ฝ(๐›ผ๐‘ฅ) = (๐›ฝ๐›ผ)๐‘ฅ.
We just multiply those two numbers. Using our doubling and tripling mappings, if we
double and then triple, that is ๐‘‡(๐ท๐‘ฅ) then we obtain 3(2๐‘ฅ) = 6๐‘ฅ. The composition ๐‘‡๐ท is
the mapping that multiplies by 6. For larger matrices, composition also ends up being a
kind of multiplication.
A.2.2 Matrix addition and scalar multiplication
The mappings that multiply numbers by numbers are just 1 × 1 matrices. The number ๐›ผ
above could be written as a matrix [๐›ผ]. Perhaps we would want to do to all matrices the
same things that we did to those 1 × 1 matrices at the start of this section above. First, let
us add matrices. If we have a matrix ๐ด and a matrix ๐ต that are of the same size, say ๐‘š × ๐‘›,
then they are mappings from โ„ ๐‘› to โ„ ๐‘š . The mapping ๐ด + ๐ต should also be a mapping
from โ„ ๐‘› to โ„ ๐‘š , and it should do the following to vectors:
(๐ด + ๐ต)๐‘ฅ® = ๐ด ๐‘ฅ® + ๐ต ๐‘ฅ®.
It turns out you just add the matrices element-wise: If the ๐‘–๐‘— th entry of ๐ด is ๐‘Ž ๐‘–๐‘— , and the ๐‘–๐‘— th
entry of ๐ต is ๐‘ ๐‘–๐‘— , then the ๐‘–๐‘— th entry of ๐ด + ๐ต is ๐‘Ž ๐‘–๐‘— + ๐‘ ๐‘–๐‘— . If
\[ A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \end{bmatrix}, \]
then
\[ A + B = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} & a_{13}+b_{13} \\ a_{21}+b_{21} & a_{22}+b_{22} & a_{23}+b_{23} \end{bmatrix}. \]
Let us illustrate on a more concrete example:
\[ \begin{bmatrix}1&2\\3&4\\5&6\end{bmatrix} + \begin{bmatrix}7&8\\9&10\\11&-1\end{bmatrix} = \begin{bmatrix}1+7&2+8\\3+9&4+10\\5+11&6-1\end{bmatrix} = \begin{bmatrix}8&10\\12&14\\16&5\end{bmatrix}. \]
Let's check that this does the right thing to a vector. Let's use some of the vector algebra
that we already know, and regroup things:
\[ \begin{bmatrix}1&2\\3&4\\5&6\end{bmatrix}\begin{bmatrix}2\\-1\end{bmatrix} + \begin{bmatrix}7&8\\9&10\\11&-1\end{bmatrix}\begin{bmatrix}2\\-1\end{bmatrix} = \left( 2\begin{bmatrix}1\\3\\5\end{bmatrix} - \begin{bmatrix}2\\4\\6\end{bmatrix} \right) + \left( 2\begin{bmatrix}7\\9\\11\end{bmatrix} - \begin{bmatrix}8\\10\\-1\end{bmatrix} \right) \]
\[ = 2\left( \begin{bmatrix}1\\3\\5\end{bmatrix} + \begin{bmatrix}7\\9\\11\end{bmatrix} \right) - \left( \begin{bmatrix}2\\4\\6\end{bmatrix} + \begin{bmatrix}8\\10\\-1\end{bmatrix} \right) = 2\begin{bmatrix}8\\12\\16\end{bmatrix} - \begin{bmatrix}10\\14\\5\end{bmatrix} \]
\[ = \begin{bmatrix}8&10\\12&14\\16&5\end{bmatrix}\begin{bmatrix}2\\-1\end{bmatrix} \left( = \begin{bmatrix}2(8)-10\\2(12)-14\\2(16)-5\end{bmatrix} = \begin{bmatrix}6\\10\\27\end{bmatrix} \right). \]
If we replaced the numbers by letters that would constitute a proof! You’ll notice that we
didn’t really have to even compute what the result is to convince ourselves that the two
expressions were equal.
If the sizes of the matrices do not match, then addition is not defined. If ๐ด is 3 × 2 and ๐ต
is 2 × 5, then we cannot add these matrices. We don’t know what that could possibly mean.
It is also useful to have a matrix that when added to any other matrix does nothing.
This is the zero matrix, the matrix of all zeros:
\[ \begin{bmatrix}1&2\\3&4\end{bmatrix} + \begin{bmatrix}0&0\\0&0\end{bmatrix} = \begin{bmatrix}1&2\\3&4\end{bmatrix}. \]
We often denote the zero matrix by 0 without specifying size. We would then just write
A + 0, where we just assume that 0 is the zero matrix of the same size as A.
There are really two things we can multiply matrices by. We can multiply matrices by
scalars or we can multiply by other matrices. Let us first consider multiplication by scalars.
For a matrix A and a scalar α, we want αA to be the matrix that accomplishes
\[ (\alpha A)\vec{x} = \alpha(A\vec{x}). \]
That is just scaling the result by α. If you think about it, scaling every term in A by α
achieves just that: If
\[ A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix}, \quad\text{then}\quad \alpha A = \begin{bmatrix} \alpha a_{11} & \alpha a_{12} & \alpha a_{13} \\ \alpha a_{21} & \alpha a_{22} & \alpha a_{23} \end{bmatrix}. \]
For example,
\[ 2\begin{bmatrix}1&2&3\\4&5&6\end{bmatrix} = \begin{bmatrix}2&4&6\\8&10&12\end{bmatrix}. \]
Let us list some properties of matrix addition and scalar multiplication. Denote by 0
the zero matrix, by α, β scalars, and by A, B, C matrices. Then:
\[ A + 0 = A = 0 + A, \qquad A + B = B + A, \qquad (A + B) + C = A + (B + C), \]
\[ \alpha(A + B) = \alpha A + \alpha B, \qquad (\alpha + \beta)A = \alpha A + \beta A. \]
These rules should look very familiar.
A.2.3 Matrix multiplication
As we mentioned above, composition of linear mappings is also a multiplication of matrices.
Suppose ๐ด is an ๐‘š × ๐‘› matrix, that is, ๐ด takes โ„ ๐‘› to โ„ ๐‘š , and ๐ต is an ๐‘› × ๐‘ matrix, that is, ๐ต
takes โ„ ๐‘ to โ„ ๐‘› . The composition ๐ด๐ต should work as follows
๐ด๐ต ๐‘ฅ® = ๐ด(๐ต ๐‘ฅ®).
First, a vector ๐‘ฅ® in โ„ ๐‘ gets taken to the vector ๐ต ๐‘ฅ® in โ„ ๐‘› . Then the mapping ๐ด takes it to the
vector ๐ด(๐ต ๐‘ฅ®) in โ„ ๐‘š . In other words, the composition ๐ด๐ต should be an ๐‘š × ๐‘ matrix. In
terms of sizes we should have
\[ [m \times n]\,[n \times p] = [m \times p]. \]
Notice how the middle size must match.
OK, now we know what sizes of matrices we should be able to multiply, and what the
product should be. Let us see how to actually compute matrix multiplication. We start
with the so-called dot product (or inner product) of two vectors. Usually this is a row vector
multiplied with a column vector of the same size. Dot product multiplies each pair of
entries from the first and the second vector and sums these products. The result is a single
number. For example,
๐‘Ž1 ๐‘Ž2
๏ฃฎ ๏ฃน
๏ฃฏ๐‘ 1 ๏ฃบ
๐‘Ž 3 · ๏ฃฏ๏ฃฏ๐‘ 2 ๏ฃบ๏ฃบ = ๐‘Ž 1 ๐‘ 1 + ๐‘Ž 2 ๐‘ 2 + ๐‘Ž 3 ๐‘ 3 .
๏ฃฏ๐‘ 3 ๏ฃบ
๏ฃฐ ๏ฃป
And similarly for larger (or smaller) vectors. A dot product is really a product of two
matrices: a 1 × ๐‘› matrix and an ๐‘› × 1 matrix resulting in a 1 × 1 matrix, that is, a number.
Armed with the dot product we define the product of matrices. We denote by row๐‘– (๐ด)
the ๐‘– th row of ๐ด and by column ๐‘— (๐ด) the ๐‘— th column of ๐ด. For an ๐‘š × ๐‘› matrix ๐ด and an
๐‘› × ๐‘ matrix ๐ต we can compute the product ๐ด๐ต: The matrix ๐ด๐ต is an ๐‘š × ๐‘ matrix whose
๐‘–๐‘— th entry is the dot product
row๐‘– (๐ด) · column ๐‘— (๐ต).
For example, given a 2 × 3 and a 3 × 2 matrix we should end up with a 2 × 2 matrix:
\[ \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11}+a_{12}b_{21}+a_{13}b_{31} & a_{11}b_{12}+a_{12}b_{22}+a_{13}b_{32} \\ a_{21}b_{11}+a_{22}b_{21}+a_{23}b_{31} & a_{21}b_{12}+a_{22}b_{22}+a_{23}b_{32} \end{bmatrix}, \tag{A.1} \]
or with some numbers:
\[ \begin{bmatrix}1&2&3\\4&5&6\end{bmatrix} \begin{bmatrix}-1&2\\-7&0\\1&-1\end{bmatrix} = \begin{bmatrix} 1\cdot(-1)+2\cdot(-7)+3\cdot1 & 1\cdot2+2\cdot0+3\cdot(-1) \\ 4\cdot(-1)+5\cdot(-7)+6\cdot1 & 4\cdot2+5\cdot0+6\cdot(-1) \end{bmatrix} = \begin{bmatrix}-12&-1\\-33&2\end{bmatrix}. \]
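One can verify such products in software; for instance (a sketch assuming numpy, whose `@` operator is matrix multiplication):

```python
# Checking the product above with numpy's @ operator.
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[-1,  2],
              [-7,  0],
              [ 1, -1]])
print(A @ B)    # [[-12  -1]
                #  [-33   2]]
```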
A useful consequence of the definition is that the evaluation ๐ด ๐‘ฅ® for a matrix ๐ด and a
(column) vector ๐‘ฅ® is also matrix multiplication. That is really why we think of vectors as
column vectors, or n × 1 matrices. For example,
\[ \begin{bmatrix}1&2\\3&4\end{bmatrix}\begin{bmatrix}2\\-1\end{bmatrix} = \begin{bmatrix}1\cdot2+2\cdot(-1)\\3\cdot2+4\cdot(-1)\end{bmatrix} = \begin{bmatrix}0\\2\end{bmatrix}. \]
If you look at the last section, that is precisely the last example we gave.
You should stare at the computation of multiplication of matrices ๐ด๐ต and the previous
definition of ๐ด ๐‘ฆ® as a mapping for a moment. What we are doing with matrix multiplication
is applying the mapping ๐ด to the columns of ๐ต. This is usually written as follows. Suppose
we write the ๐‘› × ๐‘ matrix ๐ต = [๐‘®1 ๐‘®2 · · · ๐‘®๐‘ ], where ๐‘®1 , ๐‘®2 , . . . , ๐‘®๐‘ are the columns of ๐ต.
Then for an ๐‘š × ๐‘› matrix ๐ด,
๐ด๐ต = ๐ด[๐‘®1 ๐‘®2 · · · ๐‘®๐‘ ] = [๐ด๐‘®1 ๐ด๐‘®2 · · · ๐ด๐‘®๐‘ ].
The columns of the ๐‘š × ๐‘ matrix ๐ด๐ต are the vectors ๐ด๐‘®1 , ๐ด๐‘®2 , . . . , ๐ด๐‘®๐‘ . For example, in
(A.1), the columns of
\[ \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \end{bmatrix} \]
are
\[ \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix} \begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix} \begin{bmatrix} b_{12} \\ b_{22} \\ b_{32} \end{bmatrix}. \]
This is a very useful way to understand what matrix multiplication is. It should also make
it easier to remember how to perform matrix multiplication.
A.2.4 Some rules of matrix algebra
For multiplication we want an analogue of a 1. That is, we desire a matrix that just leaves
everything as it found it. This analogue is the so-called identity matrix. The identity matrix
is a square matrix with 1s on the main diagonal and zeros everywhere else. It is usually
denoted by I. For each size we have a different identity matrix and so sometimes we may
denote the size as a subscript. For example, I₃ is the 3 × 3 identity matrix
\[ I = I_3 = \begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}. \]
Let us see how the matrix works on a smaller example,
\[ \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix}1&0\\0&1\end{bmatrix} = \begin{bmatrix} a_{11}\cdot1 + a_{12}\cdot0 & a_{11}\cdot0 + a_{12}\cdot1 \\ a_{21}\cdot1 + a_{22}\cdot0 & a_{21}\cdot0 + a_{22}\cdot1 \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}. \]
Multiplication by the identity from the left looks similar, and also does not touch anything.
We have the following rules for matrix multiplication. Suppose that A, B, C are matrices
of the correct sizes so that the following make sense. Let α denote a scalar (number). Then
\[ A(BC) = (AB)C \quad \text{(associative law)}, \]
\[ A(B + C) = AB + AC \quad \text{(distributive law)}, \]
\[ (B + C)A = BA + CA \quad \text{(distributive law)}, \]
\[ \alpha(AB) = (\alpha A)B = A(\alpha B), \]
\[ IA = A = AI \quad \text{(identity)}. \]
Example A.2.1: Let us demonstrate a couple of these rules. For example, the associative
law:
\[ \underbrace{\begin{bmatrix}-3&3\\2&-2\end{bmatrix}}_{A} \underbrace{\begin{bmatrix}4&4\\1&-3\end{bmatrix}}_{B} \underbrace{\begin{bmatrix}-1&4\\5&2\end{bmatrix}}_{C} = \underbrace{\begin{bmatrix}-3&3\\2&-2\end{bmatrix}}_{A} \underbrace{\begin{bmatrix}16&24\\-16&-2\end{bmatrix}}_{BC} = \underbrace{\begin{bmatrix}-96&-78\\64&52\end{bmatrix}}_{A(BC)}, \]
and
\[ \underbrace{\begin{bmatrix}-3&3\\2&-2\end{bmatrix}}_{A} \underbrace{\begin{bmatrix}4&4\\1&-3\end{bmatrix}}_{B} \underbrace{\begin{bmatrix}-1&4\\5&2\end{bmatrix}}_{C} = \underbrace{\begin{bmatrix}-9&-21\\6&14\end{bmatrix}}_{AB} \underbrace{\begin{bmatrix}-1&4\\5&2\end{bmatrix}}_{C} = \underbrace{\begin{bmatrix}-96&-78\\64&52\end{bmatrix}}_{(AB)C}. \]
Or how about multiplication by scalars:
\[ 10\,\underbrace{\begin{bmatrix}-3&3\\2&-2\end{bmatrix}}_{A} \underbrace{\begin{bmatrix}4&4\\1&-3\end{bmatrix}}_{B} = 10\,\underbrace{\begin{bmatrix}-9&-21\\6&14\end{bmatrix}}_{AB} = \underbrace{\begin{bmatrix}-90&-210\\60&140\end{bmatrix}}_{10(AB)}, \]
\[ \underbrace{\begin{bmatrix}-30&30\\20&-20\end{bmatrix}}_{10A} \underbrace{\begin{bmatrix}4&4\\1&-3\end{bmatrix}}_{B} = \underbrace{\begin{bmatrix}-90&-210\\60&140\end{bmatrix}}_{(10A)B}, \]
and
\[ \underbrace{\begin{bmatrix}-3&3\\2&-2\end{bmatrix}}_{A} \underbrace{\begin{bmatrix}40&40\\10&-30\end{bmatrix}}_{10B} = \underbrace{\begin{bmatrix}-90&-210\\60&140\end{bmatrix}}_{A(10B)}. \]
A multiplication rule, one you have used since primary school on numbers, is quite
conspicuously missing for matrices. That is, matrix multiplication is not commutative.
Firstly, just because AB makes sense, it may be that BA is not even defined. For example, if
A is 2 × 3, and B is 3 × 4, then we can multiply AB but not BA.
Even if AB and BA are both defined, it does not mean that they are equal. For example,
take \(A = \begin{bmatrix}1&1\\1&1\end{bmatrix}\) and \(B = \begin{bmatrix}1&0\\0&2\end{bmatrix}\):
\[ AB = \begin{bmatrix}1&1\\1&1\end{bmatrix}\begin{bmatrix}1&0\\0&2\end{bmatrix} = \begin{bmatrix}1&2\\1&2\end{bmatrix} \ne \begin{bmatrix}1&1\\2&2\end{bmatrix} = \begin{bmatrix}1&0\\0&2\end{bmatrix}\begin{bmatrix}1&1\\1&1\end{bmatrix} = BA. \]
A.2.5 Inverse
A couple of other algebra rules you know for numbers do not quite work on matrices:
(i) ๐ด๐ต = ๐ด๐ถ does not necessarily imply ๐ต = ๐ถ, even if ๐ด is not 0.
(ii) ๐ด๐ต = 0 does not necessarily mean that ๐ด = 0 or ๐ต = 0.
For example:
\[ \begin{bmatrix}0&1\\0&0\end{bmatrix}\begin{bmatrix}0&1\\0&0\end{bmatrix} = \begin{bmatrix}0&0\\0&0\end{bmatrix} = \begin{bmatrix}0&1\\0&0\end{bmatrix}\begin{bmatrix}0&2\\0&0\end{bmatrix}. \]
To make these rules hold, we do not just need one of the matrices to not be zero, we
would need to “divide” by a matrix. This is where the matrix inverse comes in. Suppose
that ๐ด and ๐ต are ๐‘› × ๐‘› matrices such that
๐ด๐ต = ๐ผ = ๐ต๐ด.
Then we call B the inverse of A and we denote B by A⁻¹. Perhaps not surprisingly,
(A⁻¹)⁻¹ = A, since if the inverse of A is B, then the inverse of B is A. If the inverse of A
exists, then we say A is invertible. If A is not invertible, we say A is singular.
If A = [a] is a 1 × 1 matrix, then A⁻¹ is \(a^{-1} = \frac{1}{a}\). That is where the notation comes from.
The computation is not nearly as simple when A is larger.
The proper formulation of the cancellation rule is:
If ๐ด is invertible, then ๐ด๐ต = ๐ด๐ถ implies ๐ต = ๐ถ.
The computation is what you would do in regular algebra with numbers, but you have to
be careful never to commute matrices:
\[ AB = AC, \qquad A^{-1}AB = A^{-1}AC, \qquad IB = IC, \qquad B = C. \]
And similarly for cancellation on the right:
If ๐ด is invertible, then ๐ต๐ด = ๐ถ๐ด implies ๐ต = ๐ถ.
The rule says, among other things, that the inverse of a matrix is unique if it exists: If
๐ด๐ต = ๐ผ = ๐ด๐ถ, then ๐ด is invertible and ๐ต = ๐ถ.
We will see later how to compute an inverse of a matrix in general. For now, let us note
that there is a simple formula for the inverse of a 2 × 2 matrix:
\[ \begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}. \]
For example:
\[ \begin{bmatrix}1&1\\2&4\end{bmatrix}^{-1} = \frac{1}{1\cdot4 - 1\cdot2}\begin{bmatrix}4&-1\\-2&1\end{bmatrix} = \begin{bmatrix}2&-\frac{1}{2}\\-1&\frac{1}{2}\end{bmatrix}. \]
Let's try it:
\[ \begin{bmatrix}1&1\\2&4\end{bmatrix}\begin{bmatrix}2&-\frac{1}{2}\\-1&\frac{1}{2}\end{bmatrix} = \begin{bmatrix}1&0\\0&1\end{bmatrix} \quad\text{and}\quad \begin{bmatrix}2&-\frac{1}{2}\\-1&\frac{1}{2}\end{bmatrix}\begin{bmatrix}1&1\\2&4\end{bmatrix} = \begin{bmatrix}1&0\\0&1\end{bmatrix}. \]
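The 2 × 2 formula is easy to turn into code (a sketch assuming numpy; the function name is our own):

```python
# The 2-by-2 inverse formula; fails exactly when ad - bc = 0.
import numpy as np

def inverse_2x2(m):
    a, b = m[0]
    c, d = m[1]
    det = a*d - b*c
    if det == 0:
        raise ValueError("the matrix is singular")
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[1., 1.],
              [2., 4.]])
print(inverse_2x2(A))       # [[ 2.  -0.5]
                            #  [-1.   0.5]]
print(A @ inverse_2x2(A))   # the 2-by-2 identity
```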
Just as we cannot divide by every number, not every matrix is invertible. In the case of
matrices however we may have singular matrices that are not zero. For example,
\[ \begin{bmatrix}1&1\\2&2\end{bmatrix} \]
is a singular matrix. But didn't we just give a formula for an inverse? Let us try it:
\[ \begin{bmatrix}1&1\\2&2\end{bmatrix}^{-1} = \frac{1}{1\cdot2 - 1\cdot2}\begin{bmatrix}2&-1\\-2&1\end{bmatrix} = \;? \]
We get into a bit of trouble; we are trying to divide by zero.
So a 2 × 2 matrix A is invertible whenever
\[ ad - bc \ne 0 \]
and otherwise it is singular. The expression ad − bc is called the determinant and we will
look at it more carefully in a later section. There is a similar expression for a square matrix
of any size.
A.2.6 Diagonal matrices
A simple (and surprisingly useful) type of a square matrix is a so-called diagonal matrix. It
is a matrix whose entries are all zero except those on the main diagonal from top left to
bottom right. For example, a 4 × 4 diagonal matrix is of the form
\[ \begin{bmatrix} d_1 & 0 & 0 & 0 \\ 0 & d_2 & 0 & 0 \\ 0 & 0 & d_3 & 0 \\ 0 & 0 & 0 & d_4 \end{bmatrix}. \]
Such matrices have nice properties when we multiply by them. If we multiply them by a
vector, they multiply the kth entry by \(d_k\). For example,
\[ \begin{bmatrix}1&0&0\\0&2&0\\0&0&3\end{bmatrix}\begin{bmatrix}4\\5\\6\end{bmatrix} = \begin{bmatrix}1\cdot4\\2\cdot5\\3\cdot6\end{bmatrix} = \begin{bmatrix}4\\10\\18\end{bmatrix}. \]
Similarly, when they multiply another matrix from the left, they multiply the kth row by \(d_k\).
For example,
\[ \begin{bmatrix}2&0&0\\0&3&0\\0&0&-1\end{bmatrix}\begin{bmatrix}1&1&1\\1&1&1\\1&1&1\end{bmatrix} = \begin{bmatrix}2&2&2\\3&3&3\\-1&-1&-1\end{bmatrix}. \]
On the other hand, multiplying on the right, they multiply the columns:
\[ \begin{bmatrix}1&1&1\\1&1&1\\1&1&1\end{bmatrix}\begin{bmatrix}2&0&0\\0&3&0\\0&0&-1\end{bmatrix} = \begin{bmatrix}2&3&-1\\2&3&-1\\2&3&-1\end{bmatrix}. \]
And it is really easy to multiply two diagonal matrices together; we multiply the entries:
\[ \begin{bmatrix}1&0&0\\0&2&0\\0&0&3\end{bmatrix}\begin{bmatrix}2&0&0\\0&3&0\\0&0&-1\end{bmatrix} = \begin{bmatrix}1\cdot2&0&0\\0&2\cdot3&0\\0&0&3\cdot(-1)\end{bmatrix} = \begin{bmatrix}2&0&0\\0&6&0\\0&0&-3\end{bmatrix}. \]
For this last reason, they are easy to invert; you simply invert each diagonal element:
\[ \begin{bmatrix}d_1&0&0\\0&d_2&0\\0&0&d_3\end{bmatrix}^{-1} = \begin{bmatrix}d_1^{-1}&0&0\\0&d_2^{-1}&0\\0&0&d_3^{-1}\end{bmatrix}. \]
๏ฃฐ
Let us check an example
๏ฃฎ2
๏ฃฏ
๏ฃฏ0
๏ฃฏ
๏ฃฏ0
๏ฃฐ
|
0 0๏ฃน๏ฃบ
3 0๏ฃบ๏ฃบ
0 4๏ฃบ๏ฃป
{z
๐ด−1
−1
๏ฃฎ2
๏ฃฏ
๏ฃฏ0
๏ฃฏ
๏ฃฏ0
๏ฃฐ
}|
0 0๏ฃน๏ฃบ ๏ฃฎ๏ฃฏ 12 0 0 ๏ฃน๏ฃบ ๏ฃฎ๏ฃฏ2 0 0๏ฃน๏ฃบ ๏ฃฎ๏ฃฏ1 0 0๏ฃน๏ฃบ
3 0๏ฃบ๏ฃบ = ๏ฃฏ๏ฃฏ 0 13 0 ๏ฃบ๏ฃบ ๏ฃฏ๏ฃฏ0 3 0๏ฃบ๏ฃบ = ๏ฃฏ๏ฃฏ0 1 0๏ฃบ๏ฃบ .
0 4๏ฃบ๏ฃป ๏ฃฏ๏ฃฐ 0 0 41 ๏ฃบ๏ฃป ๏ฃฏ๏ฃฐ0 0 4๏ฃบ๏ฃป ๏ฃฏ๏ฃฐ0 0 1๏ฃบ๏ฃป
{z }
๐ด
| {z } | {z }
๐ด−1
๐ด
| {z }
๐ผ
It is no wonder that the way we solve many problems in linear algebra (and in differential
equations) is to try to reduce the problem to the case of diagonal matrices.
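Software reflects how easy diagonal matrices are to work with (a sketch assuming numpy, where `np.diag` builds a diagonal matrix from its diagonal entries):

```python
# Diagonal matrices: multiplication and inversion act entrywise on the
# diagonal.
import numpy as np

d = np.array([2., 3., 4.])
A = np.diag(d)                      # diagonal matrix with entries 2, 3, 4
print(A @ np.array([4., 5., 6.]))   # [ 8. 15. 24.]: k-th entry times d_k
print(np.diag(1/d))                 # the inverse: invert each diagonal entry
print(np.allclose(np.diag(1/d), np.linalg.inv(A)))   # True
```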
A.2.7 Transpose
Vectors do not always have to be column vectors; that is just a convention. Swapping rows
and columns is from time to time needed. The operation that swaps rows and columns is
the so-called transpose. The transpose of A is denoted by Aᵀ. Example:
\[ \begin{bmatrix}1&2&3\\4&5&6\end{bmatrix}^T = \begin{bmatrix}1&4\\2&5\\3&6\end{bmatrix}. \]
Transpose takes an ๐‘š × ๐‘› matrix to an ๐‘› × ๐‘š matrix.
A key feature of the transpose is that if the product ๐ด๐ต makes sense, then ๐ต๐‘‡ ๐ด๐‘‡ also
makes sense, at least from the point of view of sizes. In fact, we get precisely the transpose
of ๐ด๐ต. That is:
(๐ด๐ต)๐‘‡ = ๐ต๐‘‡ ๐ด๐‘‡ .
For example,
\[ \left( \begin{bmatrix}1&2&3\\4&5&6\end{bmatrix} \begin{bmatrix}0&1\\1&0\\2&-2\end{bmatrix} \right)^T = \begin{bmatrix}0&1&2\\1&0&-2\end{bmatrix} \begin{bmatrix}1&4\\2&5\\3&6\end{bmatrix}. \]
It is left to the reader to verify that computing the matrix product on the left and then
transposing is the same as computing the matrix product on the right.
If we have a column vector x⃗ to which we apply a matrix A and we transpose the result,
then the row vector x⃗ᵀ applies to Aᵀ from the left:
\[ (A\vec{x})^T = \vec{x}^T A^T. \]
Another place where transpose is useful is when we wish to apply the dot product∗ to
two column vectors:
\[ \vec{x} \cdot \vec{y} = \vec{y}^T \vec{x}. \]
That is the way that one often writes the dot product in software.
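For instance (a sketch assuming numpy, with column vectors represented as 2 × 1 matrices):

```python
# The dot product written as the matrix product y^T x.
import numpy as np

x = np.array([[1], [2]])        # column vectors, that is, 2-by-1 matrices
y = np.array([[3], [4]])
print(y.T @ x)                  # [[11]], a 1-by-1 matrix
print(float((y.T @ x)[0, 0]))   # 11.0, the dot product as a plain number
```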
We say a matrix ๐ด is symmetric if ๐ด = ๐ด๐‘‡ . For example,
\[ \begin{bmatrix}1&2&3\\2&4&5\\3&5&6\end{bmatrix} \]
is a symmetric matrix. Notice that a symmetric matrix is always square, that is, ๐‘› × ๐‘›.
Symmetric matrices have many nice properties† , and come up quite often in applications.
∗ As a side note, mathematicians write y⃗ᵀx⃗ and physicists write x⃗ᵀy⃗. Shhh. . . don't tell anyone, but the physicists are probably right on this.
† Although so far we have not learned enough about matrices to really appreciate them.
A.2.8 Exercises
Exercise A.2.1: Add the following matrices
a) \(\begin{bmatrix}-1&2&2\\5&8&-1\end{bmatrix} + \begin{bmatrix}3&2&3\\8&3&5\end{bmatrix}\)   b) \(\begin{bmatrix}1&2&4\\2&3&1\\0&5&1\end{bmatrix} + \begin{bmatrix}2&-8&-3\\3&1&0\\6&-4&1\end{bmatrix}\)
Exercise A.2.2: Compute
a) \(3\begin{bmatrix}0&3\\-2&2\end{bmatrix} + 6\begin{bmatrix}1&5\\-1&5\end{bmatrix}\)   b) \(2\begin{bmatrix}-3&1\\2&2\end{bmatrix} - 3\begin{bmatrix}2&-1\\3&2\end{bmatrix}\)
Exercise A.2.3: Multiply the following matrices
a) \(\begin{bmatrix}-1&2\\3&1\\5&8\end{bmatrix}\begin{bmatrix}3&-1&3&1\\8&3&2&-3\end{bmatrix}\)   b) \(\begin{bmatrix}1&2&3\\3&1&1\\1&0&3\end{bmatrix}\begin{bmatrix}2&3&1&7\\1&2&3&-1\\1&-1&3&0\end{bmatrix}\)
c) \(\begin{bmatrix}4&1&6&3\\5&6&5&0\\4&6&6&0\end{bmatrix}\begin{bmatrix}2&5\\1&2\\3&5\\5&6\end{bmatrix}\)   d) \(\begin{bmatrix}1&1&4\\0&5&1\end{bmatrix}\begin{bmatrix}2&2\\1&0\\6&4\end{bmatrix}\)
Exercise A.2.4: Compute the inverse of the given matrices
a) \(\begin{bmatrix}-3\end{bmatrix}\)   b) \(\begin{bmatrix}0&-1\\1&0\end{bmatrix}\)   c) \(\begin{bmatrix}1&4\\1&3\end{bmatrix}\)   d) \(\begin{bmatrix}2&2\\1&4\end{bmatrix}\)
Exercise A.2.5: Compute the inverse of the given matrices
a) \(\begin{bmatrix}-2&0\\0&1\end{bmatrix}\)   b) \(\begin{bmatrix}3&0&0\\0&-2&0\\0&0&1\end{bmatrix}\)   c) \(\begin{bmatrix}1&0&0&0\\0&-1&0&0\\0&0&0.01&0\\0&0&0&-5\end{bmatrix}\)
Exercise A.2.101: Add the following matrices
a) \(\begin{bmatrix}2&1&0\\1&1&-1\end{bmatrix} + \begin{bmatrix}5&3&4\\1&2&5\end{bmatrix}\)   b) \(\begin{bmatrix}6&-2&3\\7&3&3\\8&-1&2\end{bmatrix} + \begin{bmatrix}-1&-1&-3\\6&7&3\\-9&4&-1\end{bmatrix}\)
Exercise A.2.102: Compute
a) \(2\begin{bmatrix}1&2\\3&4\end{bmatrix} + 3\begin{bmatrix}-1&3\\1&2\end{bmatrix}\)   b) \(3\begin{bmatrix}2&-1\\1&3\end{bmatrix} - 2\begin{bmatrix}2&1\\-1&2\end{bmatrix}\)
Exercise A.2.103: Multiply the following matrices
a) \(\begin{bmatrix}2&1&4\\3&4&4\end{bmatrix}\begin{bmatrix}2&4\\6&3\\3&5\end{bmatrix}\)   b) \(\begin{bmatrix}0&3&3\\2&-2&1\\3&5&-2\end{bmatrix}\begin{bmatrix}0&2&5&0\\2&0&5&2\\3&6&1&6\end{bmatrix}\)
c) \(\begin{bmatrix}3&4&1\\2&-1&0\\4&-1&5\end{bmatrix}\begin{bmatrix}6&6&2\\4&6&0\\2&0&4\end{bmatrix}\)   d) \(\begin{bmatrix}-2&-2\\5&3\\2&1\end{bmatrix}\begin{bmatrix}0&3\\1&3\end{bmatrix}\)
Exercise A.2.104: Compute the inverse of the given matrices
a) \(\begin{bmatrix}2\end{bmatrix}\)   b) \(\begin{bmatrix}0&1\\1&0\end{bmatrix}\)   c) \(\begin{bmatrix}1&2\\3&5\end{bmatrix}\)   d) \(\begin{bmatrix}4&2\\4&4\end{bmatrix}\)
Exercise A.2.105: Compute the inverse of the given matrices
a) \(\begin{bmatrix}2&0\\0&3\end{bmatrix}\)   b) \(\begin{bmatrix}4&0&0\\0&5&0\\0&0&-1\end{bmatrix}\)   c) \(\begin{bmatrix}-1&0&0&0\\0&2&0&0\\0&0&3&0\\0&0&0&0.1\end{bmatrix}\)
A.3 Elimination
Note: 2–3 lectures
A.3.1 Linear systems of equations
One application of matrices is to solve systems of linear equations∗ . Consider the following
system of linear equations
2๐‘ฅ1 + 2๐‘ฅ2 + 2๐‘ฅ3 = 2,
๐‘ฅ1 + ๐‘ฅ2 + 3๐‘ฅ3 = 5,
(A.2)
๐‘ฅ1 + 4๐‘ฅ2 + ๐‘ฅ3 = 10.
There is a systematic procedure called elimination to solve such a system. In this
procedure, we attempt to eliminate each variable from all but one equation. We want to
end up with equations such as ๐‘ฅ3 = 2, where we can just read off the answer.
We write a system of linear equations as a matrix equation:
\[ A\vec{x} = \vec{b}. \]
The system (A.2) is written as
\[ \underbrace{\begin{bmatrix}2&2&2\\1&1&3\\1&4&1\end{bmatrix}}_{A} \underbrace{\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}}_{\vec{x}} = \underbrace{\begin{bmatrix}2\\5\\10\end{bmatrix}}_{\vec{b}}. \]
If we knew the inverse of A, then we would be done; we would simply solve the equation:
\[ \vec{x} = A^{-1}A\vec{x} = A^{-1}\vec{b}. \]
Well, but that is part of the problem; we do not know how to compute the inverse for
matrices bigger than 2 × 2. We will see later that to compute the inverse we are really
solving A x⃗ = b⃗ for several different b⃗. In other words, we will need to do elimination to
find A⁻¹. In addition, we may wish to solve A x⃗ = b⃗ if A is not invertible, or perhaps not
even square.
Let us return to the equations themselves and see how we can manipulate them. There
are a few operations we can perform on the equations that do not change the solution.
First, perhaps an operation that may seem stupid, we can swap two equations in (A.2):
๐‘ฅ1 + ๐‘ฅ2 + 3๐‘ฅ3 = 5,
2๐‘ฅ1 + 2๐‘ฅ2 + 2๐‘ฅ3 = 2,
๐‘ฅ1 + 4๐‘ฅ2 + ๐‘ฅ3 = 10.
∗ Although perhaps we have this backwards: quite often we solve a linear system of equations to find out something about matrices, rather than vice versa.
Clearly these new equations have the same solutions $x_1, x_2, x_3$. A second operation is that we can multiply an equation by a nonzero number. For example, we multiply the third equation in (A.2) by 3:
$$2x_1 + 2x_2 + 2x_3 = 2, \qquad x_1 + x_2 + 3x_3 = 5, \qquad 3x_1 + 12x_2 + 3x_3 = 30.$$
Finally, we can add a multiple of one equation to another equation. For instance, we add 3 times the third equation in (A.2) to the second equation:
$$2x_1 + 2x_2 + 2x_3 = 2, \qquad (1+3)x_1 + (1+12)x_2 + (3+3)x_3 = 5 + 30, \qquad x_1 + 4x_2 + x_3 = 10.$$
The same $x_1, x_2, x_3$ should still be solutions to the new equations. These were just examples; we did not get any closer to the solution. We must do these three operations in some more logical manner, but it turns out these three operations suffice to solve every linear system.
The first thing is to write the equations in a more compact manner. Given $A\vec{x} = \vec{b}$, we write down the so-called augmented matrix
$$[\,A \mid \vec{b}\,],$$
where the vertical line is just a marker for us to know where the "right-hand side" of the equation starts. For the system (A.2) the augmented matrix is
$$\left[\begin{array}{ccc|c} 2 & 2 & 2 & 2 \\ 1 & 1 & 3 & 5 \\ 1 & 4 & 1 & 10 \end{array}\right].$$
The entire process of elimination, which we will describe, is often applied to any sort of matrix, not just an augmented matrix. Simply think of the matrix as the $3 \times 4$ matrix
$$\begin{bmatrix} 2 & 2 & 2 & 2 \\ 1 & 1 & 3 & 5 \\ 1 & 4 & 1 & 10 \end{bmatrix}.$$
A.3.2 Row echelon form and elementary operations
We apply the three operations above to the matrix. We call these the elementary operations
or elementary row operations. Translating the operations to the matrix setting, the operations
become:
(i) Swap two rows.
(ii) Multiply a row by a nonzero number.
(iii) Add a multiple of one row to another row.
We run these operations until we get into a state where it is easy to read off the answer, or
until we get into a contradiction indicating no solution.
More specifically, we run the operations until we obtain the so-called row echelon form.
Let us call the first (from the left) nonzero entry in each row the leading entry. A matrix is
in row echelon form if the following conditions are satisfied:
(i) The leading entry in any row is strictly to the right of the leading entry of the row
above.
(ii) Any zero rows are below all the nonzero rows.
(iii) All leading entries are 1.
A matrix is in reduced row echelon form if furthermore the following condition is satisfied.
(iv) All the entries above a leading entry are zero.
Note that the definition applies to matrices of any size.
Example A.3.1: The following matrices are in row echelon form; the leading entries are the first nonzero entries of each row:
$$\begin{bmatrix} 1 & 2 & 9 & 3 \\ 0 & 0 & 1 & 5 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad \begin{bmatrix} 1 & -1 & -3 \\ 0 & 1 & 5 \\ 0 & 0 & 1 \end{bmatrix} \qquad \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix} \qquad \begin{bmatrix} 0 & 1 & -5 & 2 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
None of the matrices above are in reduced row echelon form. For example, in the first matrix
none of the entries above the second and third leading entries are zero; they are 9, 3, and 5.
The following matrices are in reduced row echelon form:
$$\begin{bmatrix} 1 & 3 & 0 & 8 \\ 0 & 0 & 1 & 6 \\ 0 & 0 & 0 & 0 \end{bmatrix} \qquad \begin{bmatrix} 1 & 0 & 2 & 0 \\ 0 & 1 & 3 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad \begin{bmatrix} 1 & 0 & 3 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{bmatrix} \qquad \begin{bmatrix} 0 & 1 & 2 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
The procedure we will describe to find a reduced row echelon form of a matrix is called
Gauss–Jordan elimination. The first part of it, which obtains a row echelon form, is called
Gaussian elimination or row reduction. For some problems, a row echelon form is sufficient,
and it is a bit less work to only do this first part.
To attain the row echelon form we work systematically. We go column by column, starting at the first column. We find the topmost entry in the first column that is not zero, and we call it the pivot. If there is no nonzero entry, we move to the next column. We swap rows
to put the row with the pivot as the first row. We divide the first row by the pivot to make
the pivot entry be a 1. Now look at all the rows below and subtract the correct multiple of
the pivot row so that all the entries below the pivot become zero.
After this procedure we forget that we had a first row (it is now fixed), and we forget
about the column with the pivot and all the preceding zero columns. Below the pivot row,
all the entries in these columns are just zero. Then we focus on the smaller matrix and we
repeat the steps above.
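The procedure is mechanical enough to hand to a computer. Below is a minimal sketch of Gauss–Jordan elimination in Python; the function name rref, the use of numpy, and the tolerance are our own illustrative choices, not anything prescribed by the text.

import numpy as np

def rref(M, tol=1e-12):
    """Return the reduced row echelon form of M (as floats)."""
    A = np.array(M, dtype=float)
    rows, cols = A.shape
    r = 0  # row where the next pivot will live
    for c in range(cols):
        # find a pivot: the topmost entry in column c, at row r or below, that is not zero
        piv = next((i for i in range(r, rows) if abs(A[i, c]) > tol), None)
        if piv is None:
            continue  # no nonzero entry, move to the next column
        A[[r, piv]] = A[[piv, r]]  # swap rows to bring the pivot up
        A[r] = A[r] / A[r, c]      # divide the row so the pivot becomes 1
        for i in range(rows):      # subtract multiples to clear the column
            if i != r:
                A[i] = A[i] - A[i, c] * A[r]
        r += 1
        if r == rows:
            break
    return A

print(rref([[2, 2, 2, 2], [1, 1, 3, 5], [1, 4, 1, 10]]))
# [[ 1.  0.  0. -4.]
#  [ 0.  1.  0.  3.]
#  [ 0.  0.  1.  2.]]

Running it on the augmented matrix of (A.2) reproduces the reduced row echelon form derived by hand below.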
It is best shown by example, so let us go back to the example from the beginning of the section. We keep the vertical line in the matrix, even though the procedure works on any matrix, not just an augmented matrix. We start with the first column and we locate the pivot, in this case the first entry of the first column. We multiply the first row by 1/2:
$$\left[\begin{array}{ccc|c} 2 & 2 & 2 & 2 \\ 1 & 1 & 3 & 5 \\ 1 & 4 & 1 & 10 \end{array}\right] \to \left[\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 1 & 1 & 3 & 5 \\ 1 & 4 & 1 & 10 \end{array}\right].$$
We subtract the first row from the second and third row (two elementary operations):
$$\left[\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 0 & 2 & 4 \\ 0 & 3 & 0 & 9 \end{array}\right].$$
We are done with the first column and the first row for now. We almost pretend the matrix doesn't have the first column and the first row:
$$\left[\begin{array}{ccc|c} * & * & * & * \\ * & 0 & 2 & 4 \\ * & 3 & 0 & 9 \end{array}\right].$$
OK, look at the second column, and notice that now the pivot is in the third row. We swap rows:
$$\left[\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 0 & 2 & 4 \\ 0 & 3 & 0 & 9 \end{array}\right] \to \left[\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 3 & 0 & 9 \\ 0 & 0 & 2 & 4 \end{array}\right].$$
And we divide the pivot row by 3:
$$\left[\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 2 & 4 \end{array}\right].$$
We do not need to subtract anything, as everything below the pivot is already zero. We move on: we again start ignoring the second row and second column and focus on
$$\left[\begin{array}{ccc|c} * & * & * & * \\ * & * & * & * \\ * & * & 2 & 4 \end{array}\right].$$
We find the pivot, then divide that row by 2:
$$\left[\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 2 & 4 \end{array}\right] \to \left[\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 2 \end{array}\right].$$
The matrix is now in row echelon form.
The equation corresponding to the last row is $x_3 = 2$. We know $x_3$ and we could substitute it into the first two equations to get equations for $x_1$ and $x_2$. Then we could do the same thing with $x_2$, until we solve for all three variables. This procedure is called backsubstitution, and we can achieve it via elementary operations. We start from the lowest pivot (leading entry in the row echelon form) and subtract the right multiple from the rows above to make all the entries above this pivot zero. Then we move to the next pivot, and so on. After we are done, we will have a matrix in reduced row echelon form.
We continue our example. Subtract the last row from the first to get
$$\left[\begin{array}{ccc|c} 1 & 1 & 0 & -1 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 2 \end{array}\right].$$
The entry above the pivot in the second row is already zero. So we move on to the next pivot, the one in the second row. We subtract this row from the top row to get
$$\left[\begin{array}{ccc|c} 1 & 0 & 0 & -4 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 2 \end{array}\right].$$
The matrix is in reduced row echelon form.
If we now write down the equations for $x_1, x_2, x_3$, we find
$$x_1 = -4, \qquad x_2 = 3, \qquad x_3 = 2.$$
In other words, we have solved the system.
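As a quick numerical cross-check (our own aside, assuming numpy is available), a library solver confirms the answer directly:

import numpy as np

A = np.array([[2., 2., 2.], [1., 1., 3.], [1., 4., 1.]])
b = np.array([2., 5., 10.])
print(np.linalg.solve(A, b))  # [-4.  3.  2.]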
A.3.3 Non-unique solutions and inconsistent systems
It is possible that the solution of a linear system of equations is not unique, or that no
solution exists. Suppose for a moment that the row echelon form we found was
$$\left[\begin{array}{ccc|c} 1 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{array}\right].$$
Then we have an equation $0 = 1$ coming from the last row. That is impossible and the equations are what we call inconsistent. There is no solution to $A\vec{x} = \vec{b}$.
On the other hand, if we find a row echelon form
$$\left[\begin{array}{ccc|c} 1 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 \end{array}\right],$$
then there is no issue with finding solutions. In fact, we will find way too many. Let us continue with backsubstitution (subtracting 3 times the second row from the first) to find the reduced row echelon form, and let us mark the pivots:
$$\left[\begin{array}{ccc|c} 1 & 2 & 0 & -5 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 \end{array}\right].$$
The last row is all zeros; it just says $0 = 0$ and we ignore it. The two remaining equations are
$$x_1 + 2x_2 = -5, \qquad x_3 = 3.$$
Let us solve for the variables that corresponded to the pivots, that is, $x_1$ and $x_3$, as there was a pivot in the first column and in the third column:
$$x_1 = -2x_2 - 5, \qquad x_3 = 3.$$
The variable $x_2$ can be anything you wish and we still get a solution. The $x_2$ is called a free variable. There are infinitely many solutions, one for every choice of $x_2$. If we pick $x_2 = 0$, then $x_1 = -5$ and $x_3 = 3$ give a solution. But we also get a solution by picking, say, $x_2 = 1$, in which case $x_1 = -7$ and $x_3 = 3$, or by picking $x_2 = -5$, in which case $x_1 = 5$ and $x_3 = 3$.
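A computer algebra system reports the same one-parameter family of solutions. A small sketch using sympy (our own choice of tool); the two equations encode the nonzero rows of the echelon form above:

from sympy import symbols, linsolve

x1, x2, x3 = symbols('x1 x2 x3')
# rows [1 2 3 | 4] and [0 0 1 | 3] as expressions equal to zero
sol = linsolve([x1 + 2*x2 + 3*x3 - 4, x3 - 3], [x1, x2, x3])
print(sol)  # {(-2*x2 - 5, x2, 3)}: x2 remains free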
The general idea is that if any row has all zeros in the columns corresponding to the variables, but a nonzero entry in the column corresponding to the right-hand side $\vec{b}$, then the system is inconsistent and has no solutions. In other words, the system is inconsistent if you find a pivot on the right side of the vertical line drawn in the augmented matrix. Otherwise, the system is consistent, and at least one solution exists.
Suppose the system is consistent (at least one solution exists):
(i) If every column corresponding to a variable has a pivot element, then the solution is
unique.
(ii) If there are columns corresponding to variables with no pivot, then those are free
variables that can be chosen arbitrarily, and there are infinitely many solutions.
When $\vec{b} = \vec{0}$, we have a so-called homogeneous matrix equation
$$A\vec{x} = \vec{0}.$$
There is no need to write an augmented matrix in this case. As the elementary operations do not do anything to a zero column, it always stays a zero column. Moreover, $A\vec{x} = \vec{0}$ always has at least one solution, namely $\vec{x} = \vec{0}$. Such a system is always consistent. It may have other solutions: If you find any free variables, then you get infinitely many solutions.

The set of solutions of $A\vec{x} = \vec{0}$ comes up quite often, so people give it a name. It is called the nullspace or the kernel of $A$. One place where the kernel comes up is invertibility of a square matrix $A$. If the kernel of $A$ contains a nonzero vector, then it contains infinitely many vectors (there was a free variable). But then it is impossible to invert: infinitely many vectors go to $\vec{0}$, so there is no unique vector that $A$ takes to $\vec{0}$. So if the kernel is nontrivial, that is, if there are any nonzero vectors in the kernel, in other words, if there are any free variables, or in yet other words, if the row echelon form of $A$ has columns without pivots, then $A$ is not invertible. We will return to this idea later.
A.3.4 Linear independence and rank
If rows of a matrix correspond to equations, it may be good to find out how many equations we really need to find the same set of solutions. Similarly, if we find a number of solutions to a linear equation $A\vec{x} = \vec{0}$, we may ask if we found enough so that all other solutions can be formed out of the given set. The concept we want is that of linear independence. That same concept is useful for differential equations, for example in chapter 2.
Given row or column vectors $\vec{y}_1, \vec{y}_2, \ldots, \vec{y}_n$, a linear combination is an expression of the form
$$\alpha_1 \vec{y}_1 + \alpha_2 \vec{y}_2 + \cdots + \alpha_n \vec{y}_n,$$
where $\alpha_1, \alpha_2, \ldots, \alpha_n$ are all scalars. For example, $3\vec{y}_1 + \vec{y}_2 - 5\vec{y}_3$ is a linear combination of $\vec{y}_1$, $\vec{y}_2$, and $\vec{y}_3$.
We have seen linear combinations before. The expression
$$A\vec{x}$$
is a linear combination of the columns of $A$, while
$$\vec{x}^{\,T} A = (A^T \vec{x})^T$$
is a linear combination of the rows of $A$.
The way linear combinations come up in our study of differential equations is similar to the following computation. Suppose that $\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_n$ are solutions to $A\vec{x}_1 = \vec{0}$, $A\vec{x}_2 = \vec{0}$, \ldots, $A\vec{x}_n = \vec{0}$. Then the linear combination
$$\vec{y} = \alpha_1 \vec{x}_1 + \alpha_2 \vec{x}_2 + \cdots + \alpha_n \vec{x}_n$$
is a solution to $A\vec{y} = \vec{0}$:
$$A\vec{y} = A(\alpha_1 \vec{x}_1 + \alpha_2 \vec{x}_2 + \cdots + \alpha_n \vec{x}_n) = \alpha_1 A\vec{x}_1 + \alpha_2 A\vec{x}_2 + \cdots + \alpha_n A\vec{x}_n = \alpha_1 \vec{0} + \alpha_2 \vec{0} + \cdots + \alpha_n \vec{0} = \vec{0}.$$
So if you have found enough solutions, you have them all. The question is, when did
we find enough of them?
We say the vectors $\vec{y}_1, \vec{y}_2, \ldots, \vec{y}_n$ are linearly independent if the only solution to
$$\alpha_1 \vec{y}_1 + \alpha_2 \vec{y}_2 + \cdots + \alpha_n \vec{y}_n = \vec{0}$$
is $\alpha_1 = \alpha_2 = \cdots = \alpha_n = 0$. Otherwise, we say the vectors are linearly dependent.

For example, the vectors $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ are linearly independent. Let's try:
$$\alpha_1 \begin{bmatrix} 1 \\ 2 \end{bmatrix} + \alpha_2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha_1 \\ 2\alpha_1 + \alpha_2 \end{bmatrix} = \vec{0} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$
So $\alpha_1 = 0$, and then it is clear that $\alpha_2 = 0$ as well. In other words, the two vectors are linearly independent.
If a set of vectors is linearly dependent, that is, some of the $\alpha_j$'s are nonzero, then we can solve for one vector in terms of the others. Suppose $\alpha_1 \neq 0$. Since $\alpha_1 \vec{x}_1 + \alpha_2 \vec{x}_2 + \cdots + \alpha_n \vec{x}_n = \vec{0}$, then
$$\vec{x}_1 = \frac{-\alpha_2}{\alpha_1} \vec{x}_2 + \frac{-\alpha_3}{\alpha_1} \vec{x}_3 + \cdots + \frac{-\alpha_n}{\alpha_1} \vec{x}_n.$$
For example,
$$2 \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} - 4 \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} + 2 \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},$$
and so
$$\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} = 2 \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}.$$
You may have noticed that solving for those $\alpha_j$'s is just solving linear equations, and so you may not be surprised that to check whether a set of vectors is linearly independent we use row reduction.
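For instance, here is a sketch of that check with sympy (our own illustration): place the vectors as the rows of a matrix and compute its rank; the vectors are linearly independent exactly when the rank equals the number of vectors.

from sympy import Matrix

# the three vectors from the dependence example above, as rows
V = Matrix([[1, 2, 3], [1, 1, 1], [1, 0, -1]])
print(V.rank())  # 2, less than 3, so the vectors are linearly dependent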
Given a set of vectors, we may not be interested in just finding if they are linearly independent or not; we may be interested in finding a linearly independent subset. Or perhaps we may want to find some other vectors that give the same linear combinations and are linearly independent. The way to figure this out is to form a matrix out of our vectors. If we have row vectors, we consider them as rows of a matrix. If we have column vectors, we consider them columns of a matrix. The set of all linear combinations of a set of vectors is called their span:
$$\operatorname{span}\{\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_n\} = \text{the set of all linear combinations of } \vec{x}_1, \vec{x}_2, \ldots, \vec{x}_n.$$
Given a matrix $A$, the maximal number of linearly independent rows is called the rank of $A$, and we write "$\operatorname{rank} A$" for the rank. For example,
$$\operatorname{rank} \begin{bmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \\ -1 & -1 & -1 \end{bmatrix} = 1.$$
The second and third rows are multiples of the first one. We cannot choose more than one row and still have a linearly independent set. But what is
$$\operatorname{rank} \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} = \;?$$
That seems to be a tougher question to answer. The first two rows are linearly independent (neither is a multiple of the other), so the rank is at least two. If we would set up the equations for the $\alpha_1$, $\alpha_2$, and $\alpha_3$, we would find a system with infinitely many solutions. One solution is
$$\begin{bmatrix} 1 & 2 & 3 \end{bmatrix} - 2 \begin{bmatrix} 4 & 5 & 6 \end{bmatrix} + \begin{bmatrix} 7 & 8 & 9 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}.$$
So the set of all three rows is linearly dependent, and the rank cannot be 3. Therefore the rank is 2.
But how can we do this in a more systematic way? We find the row echelon form! The row echelon form of
$$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} \quad \text{is} \quad \begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}.$$
The elementary row operations do not change the set of linear combinations of the rows (that was one of the main reasons for defining them as they were). In other words, the span of the rows of $A$ is the same as the span of the rows of the row echelon form of $A$. In particular, the number of linearly independent rows is the same. And in the row echelon form, all nonzero rows are linearly independent. This is not hard to see. Consider the two nonzero rows in the example above. Suppose we tried to solve for $\alpha_1$ and $\alpha_2$ in
$$\alpha_1 \begin{bmatrix} 1 & 2 & 3 \end{bmatrix} + \alpha_2 \begin{bmatrix} 0 & 1 & 2 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}.$$
Since the first column of the row echelon matrix has zeros except in the first row, we must have $\alpha_1 = 0$. For the same reason, $\alpha_2$ is zero. We only have two nonzero rows, and they are linearly independent, so the rank of the matrix is 2.
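In software the rank is a single call. A sketch assuming numpy (which judges rank numerically via the singular value decomposition rather than exact row reduction):

import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(np.linalg.matrix_rank(A))  # 2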
The span of the rows is called the row space. The row space of $A$ and the row space of the row echelon form of $A$ are the same. In the example,
$$\text{row space of } \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} = \operatorname{span}\left\{ \begin{bmatrix} 1 & 2 & 3 \end{bmatrix}, \begin{bmatrix} 4 & 5 & 6 \end{bmatrix}, \begin{bmatrix} 7 & 8 & 9 \end{bmatrix} \right\} = \operatorname{span}\left\{ \begin{bmatrix} 1 & 2 & 3 \end{bmatrix}, \begin{bmatrix} 0 & 1 & 2 \end{bmatrix} \right\}.$$
Similarly to row space, the span of columns is called the column space.
$$\text{column space of } \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} = \operatorname{span}\left\{ \begin{bmatrix} 1 \\ 4 \\ 7 \end{bmatrix}, \begin{bmatrix} 2 \\ 5 \\ 8 \end{bmatrix}, \begin{bmatrix} 3 \\ 6 \\ 9 \end{bmatrix} \right\}.$$
So it may also be good to find the number of linearly independent columns of $A$. One way to do that is to find the number of linearly independent rows of $A^T$. It is a tremendously useful fact that the number of linearly independent columns is always the same as the number of linearly independent rows:

Theorem A.3.1. $\operatorname{rank} A = \operatorname{rank} A^T$.
In particular, to find a set of linearly independent columns, we need to look at where the pivots were. If you recall, when solving $A\vec{x} = \vec{0}$ the key was finding the pivots; any non-pivot columns corresponded to free variables. That means we can solve for the non-pivot columns in terms of the pivot columns. Let's see an example. First we reduce some random matrix:
$$\begin{bmatrix} 1 & 2 & 3 & 4 \\ 2 & 4 & 5 & 6 \\ 3 & 6 & 7 & 8 \end{bmatrix}.$$
We find a pivot and reduce the rows below:
$$\begin{bmatrix} 1 & 2 & 3 & 4 \\ 2 & 4 & 5 & 6 \\ 3 & 6 & 7 & 8 \end{bmatrix} \to \begin{bmatrix} 1 & 2 & 3 & 4 \\ 0 & 0 & -1 & -2 \\ 3 & 6 & 7 & 8 \end{bmatrix} \to \begin{bmatrix} 1 & 2 & 3 & 4 \\ 0 & 0 & -1 & -2 \\ 0 & 0 & -2 & -4 \end{bmatrix}.$$
We find the next pivot, make it one, and rinse and repeat:
$$\begin{bmatrix} 1 & 2 & 3 & 4 \\ 0 & 0 & -1 & -2 \\ 0 & 0 & -2 & -4 \end{bmatrix} \to \begin{bmatrix} 1 & 2 & 3 & 4 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & -2 & -4 \end{bmatrix} \to \begin{bmatrix} 1 & 2 & 3 & 4 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$
The final matrix is the row echelon form of the matrix. Consider the pivots that we marked.
The pivot columns are the first and the third column. All other columns correspond to free
variables when solving $A\vec{x} = \vec{0}$, so all other columns can be solved in terms of the first and the third column. In other words,
$$\text{column space of } \begin{bmatrix} 1 & 2 & 3 & 4 \\ 2 & 4 & 5 & 6 \\ 3 & 6 & 7 & 8 \end{bmatrix} = \operatorname{span}\left\{ \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix}, \begin{bmatrix} 3 \\ 5 \\ 7 \end{bmatrix}, \begin{bmatrix} 4 \\ 6 \\ 8 \end{bmatrix} \right\} = \operatorname{span}\left\{ \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \begin{bmatrix} 3 \\ 5 \\ 7 \end{bmatrix} \right\}.$$
We could perhaps use another pair of columns to get the same span, but the first and the
third are guaranteed to work because they are pivot columns.
The discussion above could be expanded into a proof of the theorem if we wanted. As each nonzero row in the row echelon form contains a pivot, the rank is the number of pivots, which is the same as the maximal number of linearly independent columns.
The idea also works in reverse. Suppose we have a bunch of column vectors and we just need to find a linearly independent set. For example, suppose we started with the vectors
$$\vec{v}_1 = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \quad \vec{v}_2 = \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix}, \quad \vec{v}_3 = \begin{bmatrix} 3 \\ 5 \\ 7 \end{bmatrix}, \quad \vec{v}_4 = \begin{bmatrix} 4 \\ 6 \\ 8 \end{bmatrix}.$$
These vectors are not linearly independent, as we saw above. In particular, the span of $\vec{v}_1$ and $\vec{v}_3$ is the same as the span of all four of the vectors. So $\vec{v}_2$ and $\vec{v}_4$ can both be written as linear combinations of $\vec{v}_1$ and $\vec{v}_3$. A common thing that comes up in practice is that one gets a set of vectors whose span is the set of solutions of some problem. But perhaps we get way too many vectors and we want to simplify. For example, all vectors in the span of $\vec{v}_1, \vec{v}_2, \vec{v}_3, \vec{v}_4$ can be written $\alpha_1 \vec{v}_1 + \alpha_2 \vec{v}_2 + \alpha_3 \vec{v}_3 + \alpha_4 \vec{v}_4$ for some numbers $\alpha_1, \alpha_2, \alpha_3, \alpha_4$. But it is also true that every such vector can be written as $a\vec{v}_1 + b\vec{v}_3$ for two numbers $a$ and $b$. And one has to admit, that looks much simpler. Moreover, these numbers $a$ and $b$ are unique. More on that in the next section.
To find this linearly independent set we simply take our vectors and form the matrix $[\vec{v}_1 \;\, \vec{v}_2 \;\, \vec{v}_3 \;\, \vec{v}_4]$, that is, the matrix
$$\begin{bmatrix} 1 & 2 & 3 & 4 \\ 2 & 4 & 5 & 6 \\ 3 & 6 & 7 & 8 \end{bmatrix}.$$
We crank up the row-reduction machine, feed this matrix into it, find the pivot columns, and pick those. In this case, $\vec{v}_1$ and $\vec{v}_3$.
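Here is a sketch of that machine with sympy (our own illustration); its rref method returns the reduced row echelon form together with the indices of the pivot columns:

from sympy import Matrix

A = Matrix([[1, 2, 3, 4], [2, 4, 5, 6], [3, 6, 7, 8]])
R, pivots = A.rref()
print(pivots)  # (0, 2): the first and third columns
independent = [A.col(j) for j in pivots]  # a linearly independent spanning subset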
A.3.5 Computing the inverse
If the matrix $A$ is square and there exists a unique solution $\vec{x}$ to $A\vec{x} = \vec{b}$ for any $\vec{b}$ (there are no free variables), then $A$ is invertible. This is equivalent to the $n \times n$ matrix $A$ being of rank $n$.

In particular, if $A\vec{x} = \vec{b}$, then $\vec{x} = A^{-1}\vec{b}$. Now we just need to compute what $A^{-1}$ is. We can surely do elimination every time we want to find $A^{-1}\vec{b}$, but that would be ridiculous.
The mapping $A^{-1}$ is linear and hence given by a matrix, and we have seen that to figure out the matrix we just need to find where $A^{-1}$ takes the standard basis vectors $\vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n$. That is, to find the first column of $A^{-1}$, we solve $A\vec{x} = \vec{e}_1$, because then $A^{-1}\vec{e}_1 = \vec{x}$. To find the second column of $A^{-1}$, we solve $A\vec{x} = \vec{e}_2$. And so on. It is really just $n$ eliminations that we need to do. But it gets even easier. If you think about it, the elimination is the same for everything on the left side of the augmented matrix. Doing $n$ eliminations separately, we would redo most of the computations. It is best to do all of them at once.

Therefore, to find the inverse of $A$, we write an $n \times 2n$ augmented matrix $[\,A \mid I\,]$, where $I$ is the identity matrix, whose columns are precisely the standard basis vectors. We then perform row reduction until we arrive at the reduced row echelon form. If $A$ is invertible, then pivots can be found in every column of $A$, and so the reduced row echelon form of $[\,A \mid I\,]$ looks like $[\,I \mid A^{-1}\,]$. We then just read off the inverse $A^{-1}$. If you do not find a pivot in every one of the first $n$ columns of the augmented matrix, then $A$ is not invertible.
This is best seen by example. Suppose we wish to invert the matrix
$$\begin{bmatrix} 1 & 2 & 3 \\ 2 & 0 & 1 \\ 3 & 1 & 0 \end{bmatrix}.$$
We write the augmented matrix and we start reducing:
$$\left[\begin{array}{ccc|ccc} 1 & 2 & 3 & 1 & 0 & 0 \\ 2 & 0 & 1 & 0 & 1 & 0 \\ 3 & 1 & 0 & 0 & 0 & 1 \end{array}\right] \to \left[\begin{array}{ccc|ccc} 1 & 2 & 3 & 1 & 0 & 0 \\ 0 & -4 & -5 & -2 & 1 & 0 \\ 0 & -5 & -9 & -3 & 0 & 1 \end{array}\right] \to \left[\begin{array}{ccc|ccc} 1 & 2 & 3 & 1 & 0 & 0 \\ 0 & 1 & \frac{5}{4} & \frac{1}{2} & \frac{-1}{4} & 0 \\ 0 & -5 & -9 & -3 & 0 & 1 \end{array}\right] \to$$
$$\left[\begin{array}{ccc|ccc} 1 & 2 & 3 & 1 & 0 & 0 \\ 0 & 1 & \frac{5}{4} & \frac{1}{2} & \frac{-1}{4} & 0 \\ 0 & 0 & \frac{-11}{4} & \frac{-1}{2} & \frac{-5}{4} & 1 \end{array}\right] \to \left[\begin{array}{ccc|ccc} 1 & 2 & 3 & 1 & 0 & 0 \\ 0 & 1 & \frac{5}{4} & \frac{1}{2} & \frac{-1}{4} & 0 \\ 0 & 0 & 1 & \frac{2}{11} & \frac{5}{11} & \frac{-4}{11} \end{array}\right] \to$$
$$\left[\begin{array}{ccc|ccc} 1 & 2 & 0 & \frac{5}{11} & \frac{-15}{11} & \frac{12}{11} \\ 0 & 1 & 0 & \frac{3}{11} & \frac{-9}{11} & \frac{5}{11} \\ 0 & 0 & 1 & \frac{2}{11} & \frac{5}{11} & \frac{-4}{11} \end{array}\right] \to \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & \frac{-1}{11} & \frac{3}{11} & \frac{2}{11} \\ 0 & 1 & 0 & \frac{3}{11} & \frac{-9}{11} & \frac{5}{11} \\ 0 & 0 & 1 & \frac{2}{11} & \frac{5}{11} & \frac{-4}{11} \end{array}\right].$$
So
$$\begin{bmatrix} 1 & 2 & 3 \\ 2 & 0 & 1 \\ 3 & 1 & 0 \end{bmatrix}^{-1} = \begin{bmatrix} \frac{-1}{11} & \frac{3}{11} & \frac{2}{11} \\ \frac{3}{11} & \frac{-9}{11} & \frac{5}{11} \\ \frac{2}{11} & \frac{5}{11} & \frac{-4}{11} \end{bmatrix}.$$
Not too terrible, no? Perhaps harder than inverting a $2 \times 2$ matrix, for which we had a simple formula, but not too bad. Really, in practice this is done efficiently by a computer.
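Indeed, a library one-liner reproduces the elevenths (a numerical sketch, assuming numpy):

import numpy as np

A = np.array([[1., 2., 3.], [2., 0., 1.], [3., 1., 0.]])
print(np.linalg.inv(A) * 11)
# [[-1.  3.  2.]
#  [ 3. -9.  5.]
#  [ 2.  5. -4.]]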
A.3.6 Exercises
Exercise A.3.1: Compute the reduced row echelon form for the following matrices:
a) $\begin{bmatrix} 1 & 3 & 1 \\ 0 & 1 & 1 \end{bmatrix}$ \quad b) $\begin{bmatrix} 3 & 3 \\ 6 & -3 \end{bmatrix}$ \quad c) $\begin{bmatrix} 3 & 6 \\ -2 & -3 \end{bmatrix}$ \quad d) $\begin{bmatrix} 6 & 6 & 7 & 7 \\ 1 & 1 & 0 & 1 \end{bmatrix}$

e) $\begin{bmatrix} 9 & 3 & 0 & 2 \\ 8 & 6 & 3 & 6 \\ 7 & 9 & 7 & 9 \end{bmatrix}$ \quad f) $\begin{bmatrix} 2 & 1 & 3 & -3 \\ 6 & 0 & 0 & -1 \\ -2 & 4 & 4 & 3 \end{bmatrix}$ \quad g) $\begin{bmatrix} 6 & 6 & 5 \\ 0 & -2 & 2 \\ 6 & 5 & 6 \end{bmatrix}$ \quad h) $\begin{bmatrix} 0 & 2 & 0 & -1 \\ 6 & 6 & -3 & 3 \\ 6 & 2 & -3 & 5 \end{bmatrix}$
Exercise A.3.2: Compute the inverse of the given matrices
a) $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}$ \quad b) $\begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 1 \\ 0 & 0 & 1 \end{bmatrix}$ \quad c) $\begin{bmatrix} 1 & 2 & 3 \\ 2 & 0 & 1 \\ 0 & 2 & 1 \end{bmatrix}$
Exercise A.3.3: Solve (find all solutions), or show no solution exists
a) $4x_1 + 3x_2 = -2$, $\; -x_1 + x_2 = 4$

b) $x_1 + 5x_2 + 3x_3 = 7$, $\; 8x_1 + 7x_2 + 8x_3 = 8$, $\; 4x_1 + 8x_2 + 6x_3 = 4$

c) $4x_1 + 8x_2 + 2x_3 = 3$, $\; -x_1 - 2x_2 + 3x_3 = 1$, $\; 4x_1 + 8x_2 = 2$

d) $x + 2y + 3z = 4$, $\; 2x - y + 3z = 1$, $\; 3x + y + 6z = 6$
Exercise A.3.4: By computing the inverse, solve the following systems for $\vec{x}$.

a) $\begin{bmatrix} 4 & 1 \\ -1 & 3 \end{bmatrix} \vec{x} = \begin{bmatrix} 13 \\ 26 \end{bmatrix}$ \quad b) $\begin{bmatrix} 3 & 3 \\ 3 & 4 \end{bmatrix} \vec{x} = \begin{bmatrix} 2 \\ -1 \end{bmatrix}$
Exercise A.3.5: Compute the rank of the given matrices
a) $\begin{bmatrix} 6 & 3 & 5 \\ 1 & 4 & 1 \\ 7 & 7 & 6 \end{bmatrix}$ \quad b) $\begin{bmatrix} 5 & -2 & -1 \\ 3 & 0 & 6 \\ 2 & 4 & 5 \end{bmatrix}$ \quad c) $\begin{bmatrix} 1 & 2 & 3 \\ -1 & -2 & -3 \\ 2 & 4 & 6 \end{bmatrix}$
Exercise A.3.6: For the matrices in Exercise A.3.5, find a linearly independent set of row vectors
that span the row space (they don’t need to be rows of the matrix).
Exercise A.3.7: For the matrices in Exercise A.3.5, find a linearly independent set of columns that
span the column space. That is, find the pivot columns of the matrices.
Exercise A.3.8: Find a linearly independent subset of the following vectors that has the same span.
$$\begin{bmatrix} -1 \\ 1 \\ 2 \end{bmatrix}, \quad \begin{bmatrix} 2 \\ -2 \\ -4 \end{bmatrix}, \quad \begin{bmatrix} -2 \\ 4 \\ 1 \end{bmatrix}, \quad \begin{bmatrix} -1 \\ 3 \\ -2 \end{bmatrix}$$
Exercise A.3.101: Compute the reduced row echelon form for the following matrices:
a) $\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}$ \quad b) $\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ \quad c) $\begin{bmatrix} 1 & 1 \\ -2 & -2 \end{bmatrix}$ \quad d) $\begin{bmatrix} 1 & -3 & 1 \\ 4 & 6 & -2 \\ -2 & 6 & -2 \end{bmatrix}$

e) $\begin{bmatrix} 2 & 2 & 5 & 2 \\ 1 & -2 & 4 & -1 \\ 0 & 3 & 1 & -2 \end{bmatrix}$ \quad f) $\begin{bmatrix} -2 & 6 & 4 & 3 \\ 6 & 0 & -3 & 0 \\ 4 & 2 & -1 & 1 \end{bmatrix}$ \quad g) $\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$ \quad h) $\begin{bmatrix} 1 & 2 & 3 & 3 \\ 1 & 2 & 3 & 5 \end{bmatrix}$
Exercise A.3.102: Compute the inverse of the given matrices
a) $\begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ \quad b) $\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}$ \quad c) $\begin{bmatrix} 2 & 4 & 0 \\ 2 & 2 & 3 \\ 2 & 4 & 1 \end{bmatrix}$
Exercise A.3.103: Solve (find all solutions), or show no solution exists
a) $4x_1 + 3x_2 = -1$, $\; 5x_1 + 6x_2 = 4$

b) $5x + 6y + 5z = 7$, $\; 6x + 8y + 6z = -1$, $\; 5x + 2y + 5z = 2$

c) $-2x_1 + 2x_2 + 8x_3 = 6$, $\; x_2 + x_3 = 2$, $\; x_1 + 4x_2 + x_3 = 7$

d) $a + b + c = -1$, $\; a + 5b + 6c = -1$, $\; -2a + 5b + 6c = 8$
Exercise A.3.104: By computing the inverse, solve the following systems for $\vec{x}$.
a) $\begin{bmatrix} -1 & 1 \\ 3 & 3 \end{bmatrix} \vec{x} = \begin{bmatrix} 4 \\ 6 \end{bmatrix}$ \quad b) $\begin{bmatrix} 2 & 7 \\ 1 & 6 \end{bmatrix} \vec{x} = \begin{bmatrix} 1 \\ 3 \end{bmatrix}$
Exercise A.3.105: Compute the rank of the given matrices
a) $\begin{bmatrix} 7 & -1 & 6 \\ 7 & 7 & 7 \\ 7 & 6 & 2 \end{bmatrix}$ \quad b) $\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 2 & 2 & 2 \end{bmatrix}$ \quad c) $\begin{bmatrix} 0 & 3 & -1 \\ 6 & 3 & 1 \\ 4 & 7 & -1 \end{bmatrix}$
Exercise A.3.106: For the matrices in Exercise A.3.105, find a linearly independent set of row
vectors that span the row space (they don’t need to be rows of the matrix).
Exercise A.3.107: For the matrices in Exercise A.3.105, find a linearly independent set of columns
that span the column space. That is, find the pivot columns of the matrices.
Exercise A.3.108: Find a linearly independent subset of the following vectors that has the same
span.
$$\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}, \quad \begin{bmatrix} 3 \\ 1 \\ -5 \end{bmatrix}, \quad \begin{bmatrix} 0 \\ 3 \\ -1 \end{bmatrix}, \quad \begin{bmatrix} -3 \\ 2 \\ 4 \end{bmatrix}$$
A.4 Subspaces, dimension, and the kernel
Note: 1 lecture
A.4.1 Subspaces, basis, and dimension
We often find ourselves looking at the set of solutions of a linear equation $L\vec{x} = \vec{0}$ for some matrix $L$, that is, we are interested in the kernel of $L$. The set of all such solutions has a nice structure: It looks and acts a lot like some euclidean space $\mathbb{R}^k$.
We say that a set $S$ of vectors in $\mathbb{R}^n$ is a subspace if whenever $\vec{x}$ and $\vec{y}$ are members of $S$ and $\alpha$ is a scalar, then
$$\vec{x} + \vec{y} \quad \text{and} \quad \alpha\vec{x}$$
are also members of $S$. That is, we can add and multiply by scalars and we still land in $S$. So every linear combination of vectors of $S$ is still in $S$. That is really what a subspace is: a subset where we can take linear combinations and still end up in the subset. Consequently the span of a number of vectors is automatically a subspace.
Example A.4.1: If we let $S = \mathbb{R}^n$, then this $S$ is a subspace of $\mathbb{R}^n$. Adding any two vectors in $\mathbb{R}^n$ gets a vector in $\mathbb{R}^n$, and so does multiplying by scalars.

The set $S' = \{\vec{0}\}$, that is, the set of the zero vector by itself, is also a subspace of $\mathbb{R}^n$. There is only one vector in this subspace, so we only need to verify the definition for that one vector, and everything checks out: $\vec{0} + \vec{0} = \vec{0}$ and $\alpha\vec{0} = \vec{0}$.

The set $S''$ of all the vectors of the form $(a, a)$ for any real number $a$, such as $(1,1)$, $(3,3)$, or $(-0.5, -0.5)$, is a subspace of $\mathbb{R}^2$. Adding two such vectors, say $(1,1) + (3,3) = (4,4)$, again gets a vector of the same form, and so does multiplying by a scalar, say $8(1,1) = (8,8)$.
If ๐‘† is a subspace and we can find ๐‘˜ linearly independent vectors in ๐‘†
๐‘ฃ®1 , ๐‘ฃ®2 , . . . , ๐‘ฃ® ๐‘˜ ,
such that every other vector in ๐‘† is a linear combination of ๐‘ฃ®1 , ๐‘ฃ®2 , . . . , ๐‘ฃ® ๐‘˜ , then the set
{®
๐‘ฃ1 , ๐‘ฃ®2 , . . . , ๐‘ฃ® ๐‘˜ } is called a basis of ๐‘†. In other words, ๐‘† is the span of {®
๐‘ฃ1 , ๐‘ฃ®2 , . . . , ๐‘ฃ® ๐‘˜ }. We
say that ๐‘† has dimension ๐‘˜, and we write
dim ๐‘† = ๐‘˜.
Theorem A.4.1. If $S \subset \mathbb{R}^n$ is a subspace and $S$ is not the trivial subspace $\{\vec{0}\}$, then there exists a unique positive integer $k$ (the dimension) and a (not unique) basis $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$, such that every $\vec{w}$ in $S$ can be uniquely represented by
$$\vec{w} = \alpha_1 \vec{v}_1 + \alpha_2 \vec{v}_2 + \cdots + \alpha_k \vec{v}_k,$$
for some scalars $\alpha_1, \alpha_2, \ldots, \alpha_k$.
Just as a vector in $\mathbb{R}^k$ is represented by a $k$-tuple of numbers, so is a vector in a $k$-dimensional subspace of $\mathbb{R}^n$ represented by a $k$-tuple of numbers, at least once we have fixed a basis. A different basis would give a different $k$-tuple of numbers for the same vector.

We should reiterate that while $k$ is unique (a subspace cannot have two different dimensions), the set of basis vectors is not at all unique. There are lots of different bases for any given subspace. Finding just the right basis for a subspace is a large part of what one does in linear algebra. In fact, that is what we spend a lot of time on in linear differential equations, although at first glance it may not seem like that is what we are doing.
Example A.4.2: The standard basis
$$\vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n$$
is a basis of $\mathbb{R}^n$ (hence the name). So as expected,
$$\dim \mathbb{R}^n = n.$$
On the other hand, the subspace $\{\vec{0}\}$ is of dimension 0.

The subspace $S''$ from Example A.4.1, that is, the set of vectors $(a, a)$, is of dimension 1. One possible basis is simply $\{(1,1)\}$, the single vector $(1,1)$: every vector in $S''$ can be represented by $a(1,1) = (a,a)$. Similarly, another possible basis would be $\{(-1,-1)\}$. Then the vector $(a,a)$ would be represented as $(-a)(-1,-1)$.
Row and column spaces of a matrix are also examples of subspaces, as they are given as the span of vectors. We can use what we know about rank, row spaces, and column spaces from the previous section to find a basis.

Example A.4.3: In the last section, we considered the matrix
$$A = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 2 & 4 & 5 & 6 \\ 3 & 6 & 7 & 8 \end{bmatrix}.$$
Using row reduction to find the pivot columns, we found
$$\text{column space of } A = \operatorname{span}\left\{ \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \begin{bmatrix} 3 \\ 5 \\ 7 \end{bmatrix} \right\}.$$
What we did was find a basis of the column space. The basis has two elements, and so the column space of $A$ is two-dimensional. Notice that the rank of $A$ is two.
We would have followed the same procedure if we wanted to find the basis of the subspace $X$ spanned by
$$\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \quad \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix}, \quad \begin{bmatrix} 3 \\ 5 \\ 7 \end{bmatrix}, \quad \begin{bmatrix} 4 \\ 6 \\ 8 \end{bmatrix}.$$
We would have simply formed the matrix $A$ with these vectors as columns and repeated the computation above. The subspace $X$ is then the column space of $A$.
Example A.4.4: Consider the matrix
$$L = \begin{bmatrix} 1 & 2 & 0 & 0 & 3 \\ 0 & 0 & 1 & 0 & 4 \\ 0 & 0 & 0 & 1 & 5 \end{bmatrix}.$$
Conveniently, the matrix is in reduced row echelon form. The matrix is of rank 3. The column space is the span of the pivot columns. It is the 3-dimensional space
$$\text{column space of } L = \operatorname{span}\left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right\} = \mathbb{R}^3.$$
The row space is the 3-dimensional space
$$\text{row space of } L = \operatorname{span}\left\{ \begin{bmatrix} 1 & 2 & 0 & 0 & 3 \end{bmatrix}, \begin{bmatrix} 0 & 0 & 1 & 0 & 4 \end{bmatrix}, \begin{bmatrix} 0 & 0 & 0 & 1 & 5 \end{bmatrix} \right\}.$$
As these vectors have 5 components, we think of the row space of $L$ as a subspace of $\mathbb{R}^5$.
The way the dimensions worked out in the examples is not an accident. Since the number of vectors that we needed to take was always the same as the number of pivots, and the number of pivots is the rank, we get the following result.

Theorem A.4.2 (Rank). The dimension of the column space and the dimension of the row space of a matrix $A$ are both equal to the rank of $A$.
A.4.2 Kernel
The set of solutions of a linear equation $L\vec{x} = \vec{0}$, the kernel of $L$, is a subspace: If $\vec{x}$ and $\vec{y}$ are solutions, then
$$L(\vec{x} + \vec{y}) = L\vec{x} + L\vec{y} = \vec{0} + \vec{0} = \vec{0}, \quad \text{and} \quad L(\alpha\vec{x}) = \alpha L\vec{x} = \alpha\vec{0} = \vec{0}.$$
So $\vec{x} + \vec{y}$ and $\alpha\vec{x}$ are solutions. The dimension of the kernel is called the nullity of the matrix.
The same sort of idea governs the solutions of linear differential equations. We try to
describe the kernel of a linear differential operator, and as it is a subspace, we look for a
basis of this kernel. Much of this book is dedicated to finding such bases.
The kernel of a matrix is the same as the kernel of its reduced row echelon form. For a matrix in reduced row echelon form, the kernel is rather easy to find. If a vector $\vec{x}$ is applied to a matrix $L$, then each entry in $\vec{x}$ corresponds to a column of $L$: the column that the entry multiplies. To find the kernel, pick a non-pivot column and make a vector that has a $-1$ in the entry corresponding to this non-pivot column and zeros in all the other entries corresponding to the other non-pivot columns. Then for all the entries corresponding to pivot columns, use precisely the values in the corresponding rows of the chosen non-pivot column, so that the vector is a solution to $L\vec{x} = \vec{0}$. This procedure is best understood by example.
Example A.4.5: Consider
$$L = \begin{bmatrix} 1 & 2 & 0 & 0 & 3 \\ 0 & 0 & 1 & 0 & 4 \\ 0 & 0 & 0 & 1 & 5 \end{bmatrix}.$$
This matrix is in reduced row echelon form; the pivots are in the first, third, and fourth columns. There are two non-pivot columns, so the kernel has dimension 2, that is, it is the span of 2 vectors. Let us find the first vector. We look at the first non-pivot column, the 2nd column, and we put a $-1$ in the 2nd entry of our vector. We put a 0 in the 5th entry as the 5th column is also a non-pivot column:
$$\begin{bmatrix} ? \\ -1 \\ ? \\ ? \\ 0 \end{bmatrix}.$$
Let us fill in the rest. When this vector hits the first row, we get a $-2$ plus 1 times whatever the first question mark is. So make the first question mark 2. For the second and third rows, it is sufficient to make the question marks zero. We are really filling the non-pivot column into the remaining entries. Let us check:
$$\begin{bmatrix} 1 & 2 & 0 & 0 & 3 \\ 0 & 0 & 1 & 0 & 4 \\ 0 & 0 & 0 & 1 & 5 \end{bmatrix} \begin{bmatrix} 2 \\ -1 \\ 0 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.$$
Yay! How about the second vector? We start with
$$\begin{bmatrix} ? \\ 0 \\ ? \\ ? \\ -1 \end{bmatrix}.$$
We set the first question mark to 3, the second to 4, and the third to 5. Let us check:
$$\begin{bmatrix} 1 & 2 & 0 & 0 & 3 \\ 0 & 0 & 1 & 0 & 4 \\ 0 & 0 & 0 & 1 & 5 \end{bmatrix} \begin{bmatrix} 3 \\ 0 \\ 4 \\ 5 \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.$$
There are two non-pivot columns, so we only need two vectors. We have found the basis of the kernel. So,
$$\text{kernel of } L = \operatorname{span}\left\{ \begin{bmatrix} 2 \\ -1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 3 \\ 0 \\ 4 \\ 5 \\ -1 \end{bmatrix} \right\}.$$
What we did in finding a basis of the kernel is we expressed all solutions of $L\vec{x} = \vec{0}$ as a linear combination of some given vectors.
The procedure to find the basis of the kernel of a matrix $L$:
(i) Find the reduced row echelon form of $L$.
(ii) Write down the basis of the kernel as above, one vector for each non-pivot column.
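As a computational cross-check of Example A.4.5 (our own aside, using sympy): sympy normalizes with a +1 in each free entry, so it returns the negatives of the vectors found above, which span the same kernel.

from sympy import Matrix

L = Matrix([[1, 2, 0, 0, 3], [0, 0, 1, 0, 4], [0, 0, 0, 1, 5]])
for v in L.nullspace():
    print(v.T)  # Matrix([[-2, 1, 0, 0, 0]]) and Matrix([[-3, 0, -4, -5, 1]])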
The rank of a matrix is the dimension of the column space, which is the span of the pivot columns, while the kernel is spanned by one vector for each non-pivot column. So the two numbers must add up to the number of columns.
Theorem A.4.3 (Rank–Nullity). If a matrix $A$ has $n$ columns, rank $r$, and nullity $k$ (dimension of the kernel), then
$$n = r + k.$$
The theorem is immensely useful in applications. It allows one to compute the rank $r$ if one knows the nullity $k$ and vice versa, without doing any extra work.
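A quick sketch verifying the count on the matrix of Example A.4.3 (our own illustration with sympy):

from sympy import Matrix

A = Matrix([[1, 2, 3, 4], [2, 4, 5, 6], [3, 6, 7, 8]])
r = A.rank()
k = len(A.nullspace())  # nullity = number of kernel basis vectors
print(r, k, r + k)      # 2 2 4: rank plus nullity equals the number of columns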
Let us consider an example application, a simple version of the so-called Fredholm alternative. A similar result is true for differential equations. Consider
$$A\vec{x} = \vec{b},$$
where $A$ is a square $n \times n$ matrix. There are then two mutually exclusive possibilities:
(i) A nonzero solution $\vec{x}$ to $A\vec{x} = \vec{0}$ exists.
(ii) The equation $A\vec{x} = \vec{b}$ has a unique solution $\vec{x}$ for every $\vec{b}$.
How does the Rank–Nullity theorem come into the picture? Well, if $A$ has a nonzero solution $\vec{x}$ to $A\vec{x} = \vec{0}$, then the nullity $k$ is positive. But then the rank $r = n - k$ must be less than $n$. It means that the column space of $A$ is of dimension less than $n$, so it is a subspace that does not include everything in $\mathbb{R}^n$. So $\mathbb{R}^n$ has to contain some vector $\vec{b}$ not in the column space of $A$. In fact, most vectors in $\mathbb{R}^n$ are not in the column space of $A$.
A.4.3 Exercises
Exercise A.4.1: For the following sets of vectors, find a basis for the subspace spanned by the vectors,
and find the dimension of the subspace.
a) $\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$, $\begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}$ \quad b) $\begin{bmatrix} 1 \\ 0 \\ 5 \end{bmatrix}$, $\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$, $\begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix}$ \quad c) $\begin{bmatrix} -4 \\ -3 \\ 5 \end{bmatrix}$, $\begin{bmatrix} 2 \\ 4 \\ -4 \end{bmatrix}$, $\begin{bmatrix} -5 \\ -5 \\ -2 \end{bmatrix}$

d) $\begin{bmatrix} 1 \\ 3 \\ 0 \end{bmatrix}$, $\begin{bmatrix} 0 \\ 2 \\ 2 \end{bmatrix}$, $\begin{bmatrix} -1 \\ -1 \\ 2 \end{bmatrix}$ \quad e) $\begin{bmatrix} 1 \\ 3 \end{bmatrix}$, $\begin{bmatrix} 0 \\ 2 \end{bmatrix}$, $\begin{bmatrix} -1 \\ -1 \end{bmatrix}$ \quad f) $\begin{bmatrix} 3 \\ 1 \\ 3 \end{bmatrix}$, $\begin{bmatrix} 2 \\ 3 \\ 3 \end{bmatrix}$, $\begin{bmatrix} 2 \\ 0 \\ 2 \end{bmatrix}$
Exercise A.4.2: For the following matrices, find a basis for the kernel (nullspace).
a) $\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 5 \\ 1 & 1 & -4 \end{bmatrix}$ \quad b) $\begin{bmatrix} 2 & -1 & -3 \\ 4 & 0 & -4 \\ -1 & 1 & 2 \end{bmatrix}$ \quad c) $\begin{bmatrix} -4 & 4 & 4 \\ -1 & 1 & 1 \\ -5 & 5 & 5 \end{bmatrix}$ \quad d) $\begin{bmatrix} -2 & 1 & 1 & 1 \\ -4 & 2 & 2 & 2 \\ 1 & 0 & 4 & 3 \end{bmatrix}$

Exercise A.4.3: Suppose a $5 \times 5$ matrix $A$ has rank 3. What is the nullity?
Exercise A.4.4: Suppose that $X$ is the set of all the vectors of $\mathbb{R}^3$ whose third component is zero. Is $X$ a subspace? And if so, find a basis and the dimension.

Exercise A.4.5: Consider a square matrix $A$, and suppose that $\vec{x}$ is a nonzero vector such that $A\vec{x} = \vec{0}$. What does the Fredholm alternative say about invertibility of $A$?
Exercise A.4.6: Consider
$$M = \begin{bmatrix} 1 & 2 & 3 \\ 2 & ? & ? \\ -1 & ? & ? \end{bmatrix}.$$
If the nullity of this matrix is 2, fill in the question marks. Hint: What is the rank?
Exercise A.4.101: For the following sets of vectors, find a basis for the subspace spanned by the
vectors, and find the dimension of the subspace.
a) $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$, $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ \quad b) $\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$, $\begin{bmatrix} 2 \\ 2 \\ 2 \end{bmatrix}$, $\begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}$ \quad c) $\begin{bmatrix} 5 \\ 3 \\ 1 \end{bmatrix}$, $\begin{bmatrix} 5 \\ -1 \\ 5 \end{bmatrix}$, $\begin{bmatrix} -1 \\ 3 \\ -4 \end{bmatrix}$

d) $\begin{bmatrix} 2 \\ 2 \\ 4 \end{bmatrix}$, $\begin{bmatrix} 2 \\ 2 \\ 3 \end{bmatrix}$, $\begin{bmatrix} 4 \\ 4 \\ -3 \end{bmatrix}$ \quad e) $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$, $\begin{bmatrix} 2 \\ 0 \end{bmatrix}$, $\begin{bmatrix} 3 \\ 0 \end{bmatrix}$ \quad f) $\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$, $\begin{bmatrix} 2 \\ 0 \\ 0 \end{bmatrix}$, $\begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix}$
Exercise A.4.102: For the following matrices, find a basis for the kernel (nullspace).
a) $\begin{bmatrix} 2 & 6 & 1 & 9 \\ 1 & 3 & 2 & 9 \\ 3 & 9 & 0 & 9 \end{bmatrix}$ \quad b) $\begin{bmatrix} 2 & -2 & -5 \\ -1 & 1 & 5 \\ -5 & 5 & -3 \end{bmatrix}$ \quad c) $\begin{bmatrix} 1 & -5 & -4 \\ 2 & 3 & 5 \\ -3 & 5 & 2 \end{bmatrix}$ \quad d) $\begin{bmatrix} 0 & 4 & 4 \\ 0 & 1 & 1 \\ 0 & 5 & 5 \end{bmatrix}$
Exercise A.4.103: Suppose the column space of a $9 \times 5$ matrix $A$ is of dimension 3. Find:
a) the rank of $A$.
b) the nullity of $A$.
c) the dimension of the row space of $A$.
d) the dimension of the nullspace of $A$.
e) the size of a maximal subset of linearly independent rows of $A$.
A.5 Inner product and projections
Note: 1–2 lectures
A.5.1 Inner product and orthogonality
To do basic geometry, we need length, and we need angles. We have already seen the euclidean length, so let us figure out how to compute angles. Mostly, we are worried about the right angle∗.

Given two (column) vectors in $\mathbb{R}^n$, we define the (standard) inner product as the dot product:
$$\langle \vec{x}, \vec{y} \rangle = \vec{x} \cdot \vec{y} = \vec{y}^{\,T} \vec{x} = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n = \sum_{i=1}^{n} x_i y_i.$$
Why do we seemingly give a new notation for the dot product? Because there are other possible inner products, which are not the dot product, although we will not worry about others here. An inner product can even be defined on spaces of functions, as we do in chapter 4:
$$\langle f(t), g(t) \rangle = \int_a^b f(t)\, g(t)\, dt.$$
But we digress.
The inner product satisfies the following rules:
(i) $\langle \vec{x}, \vec{x} \rangle \geq 0$, and $\langle \vec{x}, \vec{x} \rangle = 0$ if and only if $\vec{x} = \vec{0}$,
(ii) $\langle \vec{x}, \vec{y} \rangle = \langle \vec{y}, \vec{x} \rangle$,
(iii) $\langle a\vec{x}, \vec{y} \rangle = \langle \vec{x}, a\vec{y} \rangle = a \langle \vec{x}, \vec{y} \rangle$,
(iv) $\langle \vec{x} + \vec{y}, \vec{z} \rangle = \langle \vec{x}, \vec{z} \rangle + \langle \vec{y}, \vec{z} \rangle$ and $\langle \vec{x}, \vec{y} + \vec{z} \rangle = \langle \vec{x}, \vec{y} \rangle + \langle \vec{x}, \vec{z} \rangle$.
Anything that satisfies the properties above can be called an inner product, although in this section we are concerned with the standard inner product in $\mathbb{R}^n$.
The standard inner product gives the euclidean length:
$$\|\vec{x}\| = \sqrt{\langle \vec{x}, \vec{x} \rangle} = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}.$$
How does it give angles? You may recall from multivariable calculus that in two or three dimensions, the standard inner product (the dot product) gives you the angle between the vectors:
$$\langle \vec{x}, \vec{y} \rangle = \|\vec{x}\| \|\vec{y}\| \cos\theta.$$
∗ When Euclid defined angles in his Elements, the only angle he ever really defined was the right angle.
That is, $\theta$ is the angle that $\vec{x}$ and $\vec{y}$ make when they are based at the same point.

In $\mathbb{R}^n$ (any dimension), we are simply going to say that $\theta$ from the formula is what the angle is. This makes sense as any two vectors based at the origin lie in a 2-dimensional plane (subspace), and the formula works in 2 dimensions. In fact, one could even talk about angles between functions this way, and we do in chapter 4, where we talk about orthogonal functions (functions at right angles to each other).

To compute the angle we compute
$$\cos\theta = \frac{\langle \vec{x}, \vec{y} \rangle}{\|\vec{x}\| \|\vec{y}\|}.$$
Our angles are always in radians. We are computing the cosine of the angle, which is really the best we can do. Given two vectors at an angle $\theta$, we can give the angle as $-\theta$, $2\pi - \theta$, etc.; see Figure A.5. Fortunately, $\cos\theta = \cos(-\theta) = \cos(2\pi - \theta)$. If we solve for $\theta$ using the inverse cosine $\cos^{-1}$, we can just decree that $0 \leq \theta \leq \pi$.
Figure A.5: Angle between vectors.
Example A.5.1: Let us compute the angle between the vectors $(3,0)$ and $(1,1)$ in the plane. Compute
$$\cos\theta = \frac{\langle (3,0), (1,1) \rangle}{\|(3,0)\| \|(1,1)\|} = \frac{3 + 0}{3\sqrt{2}} = \frac{1}{\sqrt{2}}.$$
Therefore $\theta = \pi/4$.
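The same computation in Python (our own aside, assuming numpy):

import numpy as np

x, y = np.array([3., 0.]), np.array([1., 1.])
cos_theta = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
print(np.arccos(cos_theta))  # 0.7853981633974484, which is pi/4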
As we said, the most important angle is the right angle. A right angle is $\pi/2$ radians, and $\cos(\pi/2) = 0$, so the formula is particularly easy in this case. We say vectors $\vec{x}$ and $\vec{y}$ are orthogonal if they are at right angles, that is, if
$$\langle \vec{x}, \vec{y} \rangle = 0.$$
The vectors $(1,0,0,1)$ and $(1,2,3,-1)$ are orthogonal. So are $(1,1)$ and $(1,-1)$. However, $(1,1)$ and $(1,2)$ are not orthogonal, as their inner product is 3 and not 0.
A.5.2 Orthogonal projection
A typical application of linear algebra is to take a difficult problem, write everything in the right basis, and in this new basis the problem becomes simple. A particularly useful basis is an orthogonal basis, that is, a basis where all the basis vectors are orthogonal. When we draw a coordinate system in two or three dimensions, we almost always draw our axes as orthogonal to each other.

Generalizing this concept to functions, it is particularly useful in chapter 4 to express a function using a particular orthogonal basis, the Fourier series.

To express one vector in terms of an orthogonal basis, we need to first project one vector onto another. Given a nonzero vector $\vec{v}$, we define the orthogonal projection of $\vec{w}$ onto $\vec{v}$ as
$$\operatorname{proj}_{\vec{v}}(\vec{w}) = \frac{\langle \vec{w}, \vec{v} \rangle}{\langle \vec{v}, \vec{v} \rangle}\, \vec{v}.$$
For the geometric idea, see Figure A.6. That is, we find the "shadow of $\vec{w}$" on the line spanned by $\vec{v}$ if the direction of the sun's rays were exactly perpendicular to the line. Another way of thinking about it is that the tip of the arrow of $\operatorname{proj}_{\vec{v}}(\vec{w})$ is the closest point on the line spanned by $\vec{v}$ to the tip of the arrow of $\vec{w}$. In terms of euclidean distance, $\vec{u} = \operatorname{proj}_{\vec{v}}(\vec{w})$ minimizes the distance $\|\vec{w} - \vec{u}\|$ among all vectors $\vec{u}$ that are multiples of $\vec{v}$. Because of this, the projection comes up often in applied mathematics in all sorts of contexts where we cannot solve a problem exactly: We can't always solve "Find $\vec{w}$ as a multiple of $\vec{v}$," but $\operatorname{proj}_{\vec{v}}(\vec{w})$ is the best "solution."
Figure A.6: Orthogonal projection.
The formula follows from basic trigonometry. The length of $\operatorname{proj}_{\vec{v}}(\vec{w})$ should be $\cos\theta$ times the length of $\vec{w}$, that is, $(\cos\theta)\|\vec{w}\|$. We take the unit vector in the direction of $\vec{v}$, that is, $\frac{\vec{v}}{\|\vec{v}\|}$, and we multiply it by the length of the projection. In other words,
$$\operatorname{proj}_{\vec{v}}(\vec{w}) = (\cos\theta)\|\vec{w}\| \frac{\vec{v}}{\|\vec{v}\|} = \frac{(\cos\theta)\|\vec{w}\|\|\vec{v}\|}{\|\vec{v}\|^2}\, \vec{v} = \frac{\langle \vec{w}, \vec{v} \rangle}{\langle \vec{v}, \vec{v} \rangle}\, \vec{v}.$$
Example A.5.2: Suppose we wish to project the vector (3, 2, 1) onto the vector (1, 2, 3).
Compute
$$\operatorname{proj}_{(1,2,3)}(3,2,1) = \frac{\langle (3,2,1), (1,2,3) \rangle}{\langle (1,2,3), (1,2,3) \rangle} (1,2,3) = \frac{3 \cdot 1 + 2 \cdot 2 + 1 \cdot 3}{1 \cdot 1 + 2 \cdot 2 + 3 \cdot 3} (1,2,3) = \frac{10}{14} (1,2,3) = \left( \frac{5}{7}, \frac{10}{7}, \frac{15}{7} \right).$$
Let us double-check that the projection is orthogonal. That is, $\vec{w} - \operatorname{proj}_{\vec{v}}(\vec{w})$ ought to be orthogonal to $\vec{v}$; see the right angle in Figure A.6. That is,
$$(3,2,1) - \operatorname{proj}_{(1,2,3)}(3,2,1) = \left( 3 - \frac{5}{7},\; 2 - \frac{10}{7},\; 1 - \frac{15}{7} \right) = \left( \frac{16}{7}, \frac{4}{7}, \frac{-8}{7} \right)$$
ought to be orthogonal to $(1,2,3)$. We compute the inner product and we had better get zero:
$$\left\langle \left( \frac{16}{7}, \frac{4}{7}, \frac{-8}{7} \right), (1,2,3) \right\rangle = \frac{16}{7} \cdot 1 + \frac{4}{7} \cdot 2 - \frac{8}{7} \cdot 3 = 0.$$
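The projection formula makes a tidy helper function. A sketch with numpy (the function name proj is our own):

import numpy as np

def proj(v, w):
    """Orthogonal projection of w onto the nonzero vector v."""
    return (w @ v) / (v @ v) * v

v = np.array([1., 2., 3.])
w = np.array([3., 2., 1.])
p = proj(v, w)
print(p)            # [0.71428571 1.42857143 2.14285714], that is (5/7, 10/7, 15/7)
print((w - p) @ v)  # 0.0 (up to rounding): the residual is orthogonal to v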
A.5.3 Orthogonal basis
As we said, a basis $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ is an orthogonal basis if all vectors in the basis are orthogonal to each other, that is, if
$$\langle \vec{v}_j, \vec{v}_k \rangle = 0$$
for all choices of $j$ and $k$ where $j \neq k$ (a nonzero vector cannot be orthogonal to itself). A basis is furthermore called an orthonormal basis if all the vectors in the basis are also unit vectors, that is, if all the vectors have magnitude 1. For example, the standard basis $\{(1,0,0), (0,1,0), (0,0,1)\}$ is an orthonormal basis of $\mathbb{R}^3$: Any pair is orthogonal, and each vector is of unit magnitude.
The reason why we are interested in orthogonal (or orthonormal) bases is that they make it really simple to represent a vector (or a projection onto a subspace) in the basis. The simple formula for the orthogonal projection onto a vector gives us the coefficients. In chapter 4, we use the same idea by finding the correct orthogonal basis for the set of solutions of a differential equation. We are then able to find any particular solution by simply applying the orthogonal projection formula, which is just a couple of inner products.
Let us come back to linear algebra. Suppose that we have a subspace and an orthogonal basis $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$. We wish to express $\vec{x}$ in terms of the basis. If $\vec{x}$ is not in the span of the basis (when it is not in the given subspace), then of course it is not possible, but the following formula gives us at least the orthogonal projection onto the subspace, or in other words, the best approximation in the subspace.

First suppose that $\vec{x}$ is in the span. Then it is the sum of the orthogonal projections:
$$\vec{x} = \operatorname{proj}_{\vec{v}_1}(\vec{x}) + \operatorname{proj}_{\vec{v}_2}(\vec{x}) + \cdots + \operatorname{proj}_{\vec{v}_n}(\vec{x}) = \frac{\langle \vec{x}, \vec{v}_1 \rangle}{\langle \vec{v}_1, \vec{v}_1 \rangle} \vec{v}_1 + \frac{\langle \vec{x}, \vec{v}_2 \rangle}{\langle \vec{v}_2, \vec{v}_2 \rangle} \vec{v}_2 + \cdots + \frac{\langle \vec{x}, \vec{v}_n \rangle}{\langle \vec{v}_n, \vec{v}_n \rangle} \vec{v}_n.$$
In other words, if we want to write $\vec{x} = a_1 \vec{v}_1 + a_2 \vec{v}_2 + \cdots + a_n \vec{v}_n$, then
$$a_1 = \frac{\langle \vec{x}, \vec{v}_1 \rangle}{\langle \vec{v}_1, \vec{v}_1 \rangle}, \quad a_2 = \frac{\langle \vec{x}, \vec{v}_2 \rangle}{\langle \vec{v}_2, \vec{v}_2 \rangle}, \quad \ldots, \quad a_n = \frac{\langle \vec{x}, \vec{v}_n \rangle}{\langle \vec{v}_n, \vec{v}_n \rangle}.$$
Another way to derive this formula is to work in reverse. Suppose that $\vec{x} = a_1 \vec{v}_1 + a_2 \vec{v}_2 + \cdots + a_n \vec{v}_n$. Take an inner product with $\vec{v}_j$, and use the properties of the inner product:
$$\langle \vec{x}, \vec{v}_j \rangle = \langle a_1 \vec{v}_1 + a_2 \vec{v}_2 + \cdots + a_n \vec{v}_n, \vec{v}_j \rangle = a_1 \langle \vec{v}_1, \vec{v}_j \rangle + a_2 \langle \vec{v}_2, \vec{v}_j \rangle + \cdots + a_n \langle \vec{v}_n, \vec{v}_j \rangle.$$
As the basis is orthogonal, $\langle \vec{v}_k, \vec{v}_j \rangle = 0$ whenever $k \neq j$. That means that only one of the terms, the $j$th one, on the right-hand side is nonzero, and we get
$$\langle \vec{x}, \vec{v}_j \rangle = a_j \langle \vec{v}_j, \vec{v}_j \rangle.$$
Solving for $a_j$, we find $a_j = \frac{\langle \vec{x}, \vec{v}_j \rangle}{\langle \vec{v}_j, \vec{v}_j \rangle}$ as before.
Example A.5.3: The vectors $(1,1)$ and $(1,-1)$ form an orthogonal basis of $\mathbb{R}^2$. Suppose we wish to represent $(3,4)$ in terms of this basis, that is, we wish to find $a_1$ and $a_2$ such that
$$(3,4) = a_1 (1,1) + a_2 (1,-1).$$
We compute:
$$a_1 = \frac{\langle (3,4), (1,1) \rangle}{\langle (1,1), (1,1) \rangle} = \frac{7}{2}, \qquad a_2 = \frac{\langle (3,4), (1,-1) \rangle}{\langle (1,-1), (1,-1) \rangle} = \frac{-1}{2}.$$
So
$$(3,4) = \frac{7}{2} (1,1) + \frac{-1}{2} (1,-1).$$
If the basis is orthonormal rather than orthogonal, then all the denominators are one. It is easy to make a basis orthonormal: divide all the vectors by their size. If you want to decompose many vectors, it may be better to find an orthonormal basis. In the example above, the orthonormal basis we would thus create is
$$\left( \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \right), \qquad \left( \frac{1}{\sqrt{2}}, \frac{-1}{\sqrt{2}} \right).$$
Then the computation would have been
$$(3,4) = \left\langle (3,4), \left( \tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}} \right) \right\rangle \left( \tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}} \right) + \left\langle (3,4), \left( \tfrac{1}{\sqrt{2}}, \tfrac{-1}{\sqrt{2}} \right) \right\rangle \left( \tfrac{1}{\sqrt{2}}, \tfrac{-1}{\sqrt{2}} \right) = \frac{7}{\sqrt{2}} \left( \tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}} \right) + \frac{-1}{\sqrt{2}} \left( \tfrac{1}{\sqrt{2}}, \tfrac{-1}{\sqrt{2}} \right).$$
Maybe the example is not so awe-inspiring, but given vectors in $\mathbb{R}^{20}$ rather than $\mathbb{R}^2$, then surely one would much rather do 20 inner products (or 40 if we did not have an orthonormal basis) rather than solving a system of twenty equations in twenty unknowns using row reduction of a $20 \times 21$ matrix.
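In code the recipe is a one-line loop over the basis. A sketch with numpy, using the orthogonal basis of Example A.5.3 (our own illustration):

import numpy as np

basis = [np.array([1., 1.]), np.array([1., -1.])]
x = np.array([3., 4.])
coeffs = [(x @ v) / (v @ v) for v in basis]
print(coeffs)  # [3.5, -0.5], that is 7/2 and -1/2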
As we said above, the formula still works even if $\vec{x}$ is not in the subspace, although then it does not get us the vector $\vec{x}$ but its projection. More concretely, suppose that $S$ is a subspace that is the span of $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ and $\vec{x}$ is any vector. Let $\operatorname{proj}_S(\vec{x})$ be the vector in $S$ that is the closest to $\vec{x}$. Then
$$\operatorname{proj}_S(\vec{x}) = \frac{\langle \vec{x}, \vec{v}_1 \rangle}{\langle \vec{v}_1, \vec{v}_1 \rangle} \vec{v}_1 + \frac{\langle \vec{x}, \vec{v}_2 \rangle}{\langle \vec{v}_2, \vec{v}_2 \rangle} \vec{v}_2 + \cdots + \frac{\langle \vec{x}, \vec{v}_n \rangle}{\langle \vec{v}_n, \vec{v}_n \rangle} \vec{v}_n.$$
Of course, if $\vec{x}$ is in $S$, then $\operatorname{proj}_S(\vec{x}) = \vec{x}$, as the closest vector in $S$ to $\vec{x}$ is $\vec{x}$ itself. But true utility is obtained when $\vec{x}$ is not in $S$. In much of applied mathematics, we cannot find an exact solution to a problem, but we try to find the best solution out of a small subset (subspace). The partial sums of Fourier series from chapter 4 are one example. Another example is least squares approximation to fit a curve to data. Yet another example is given by the most commonly used numerical methods to solve partial differential equations, the finite element methods.
Example A.5.4: The vectors $(1,2,3)$ and $(3,0,-1)$ are orthogonal, and so they are an orthogonal basis of a subspace $S$:
$$S = \operatorname{span}\{ (1,2,3), (3,0,-1) \}.$$
Let us find the vector in $S$ that is closest to $(2,1,0)$. That is, let us find $\operatorname{proj}_S\big((2,1,0)\big)$:
$$\operatorname{proj}_S\big((2,1,0)\big) = \frac{\langle (2,1,0), (1,2,3) \rangle}{\langle (1,2,3), (1,2,3) \rangle} (1,2,3) + \frac{\langle (2,1,0), (3,0,-1) \rangle}{\langle (3,0,-1), (3,0,-1) \rangle} (3,0,-1) = \frac{2}{7} (1,2,3) + \frac{3}{5} (3,0,-1) = \left( \frac{73}{35}, \frac{4}{7}, \frac{9}{35} \right).$$
A.5.4 The Gram–Schmidt process
Before leaving orthogonal bases, let us note a procedure for manufacturing them out of any old basis. It may not be difficult to come up with an orthogonal basis for a 2-dimensional subspace, but for a 20-dimensional subspace it seems a daunting task. Fortunately, the orthogonal projection can be used to "project away" the bits of the vectors that are making them not orthogonal. It is called the Gram–Schmidt process.

We start with a basis of vectors $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$. We construct an orthogonal basis $\vec{w}_1, \vec{w}_2, \ldots, \vec{w}_n$ as follows:
$$\begin{aligned}
\vec{w}_1 &= \vec{v}_1, \\
\vec{w}_2 &= \vec{v}_2 - \operatorname{proj}_{\vec{w}_1}(\vec{v}_2), \\
\vec{w}_3 &= \vec{v}_3 - \operatorname{proj}_{\vec{w}_1}(\vec{v}_3) - \operatorname{proj}_{\vec{w}_2}(\vec{v}_3), \\
\vec{w}_4 &= \vec{v}_4 - \operatorname{proj}_{\vec{w}_1}(\vec{v}_4) - \operatorname{proj}_{\vec{w}_2}(\vec{v}_4) - \operatorname{proj}_{\vec{w}_3}(\vec{v}_4), \\
&\;\;\vdots \\
\vec{w}_n &= \vec{v}_n - \operatorname{proj}_{\vec{w}_1}(\vec{v}_n) - \operatorname{proj}_{\vec{w}_2}(\vec{v}_n) - \cdots - \operatorname{proj}_{\vec{w}_{n-1}}(\vec{v}_n).
\end{aligned}$$
What we do is: at the $k$th step, we take $\vec{v}_k$ and subtract the projection of $\vec{v}_k$ onto the subspace spanned by $\vec{w}_1, \vec{w}_2, \ldots, \vec{w}_{k-1}$.
Example A.5.5: Consider the vectors (1, 2, −1), and (0, 5, −2) and call ๐‘† the span of the two
vectors. Let us find an orthogonal basis of ๐‘†:
® 1 = (1, 2, −1),
๐‘ค
® 2 = (0, 5, −2) − proj(1,2,−1) (0, 2, −2)
๐‘ค
= (0, 1, −1) −
h(0, 5, −2), (1, 2, −1)i
(1, 2, −1) = (0, 5, −2) − 2(1, 2, −1) = (−2, 1, 0).
h(1, 2, −1), (1, 2, −1)i
So (1, 2, −1) and (−2, 1, 0) span ๐‘† and are orthogonal. Let us check: h(1, 2, −1), (−2, 1, 0)i = 0.
Suppose we wish to find an orthonormal basis, not just an orthogonal one. Well, we
simply make the vectors into unit vectors by dividing them by their magnitude. The two
vectors making up the orthonormal basis of ๐‘† are:
1
1 2 −1
√ (1, 2, −1) = √ , √ , √ ,
6
6 6 6
A.5.5
1
−2 1
√ (−2, 1, 0) = √ , √ , 0 .
5
5 5
Exercises
Exercise A.5.1: Find the ๐‘  that makes the following vectors orthogonal: (1, 2, 3), (1, 1, ๐‘ ).
Exercise A.5.2: Find the angle ๐œƒ between (1, 3, 1), (2, 1, −1).
® = 3 and h®
Exercise A.5.3: Given that h®
๐‘ฃ , ๐‘คi
๐‘ฃ , ๐‘ข® i = −1 compute
a) h®
๐‘ข , 2®
๐‘ฃi
® + 3®
b) h®
๐‘ฃ , 2๐‘ค
๐‘ขi
® + 3®
c) h๐‘ค
๐‘ข , ๐‘ฃ® i
Exercise A.5.4: Suppose ๐‘ฃ® = (1, 1, −1). Find
a) proj๐‘ฃ® (1, 0, 0)
b) proj๐‘ฃ® (1, 2, 3)
c) proj๐‘ฃ® (1, −1, 0)
Exercise A.5.5: Consider the vectors (1, 2, 3), (−3, 0, 1), (1, −5, 3).
a) Check that the vectors are linearly independent and so form a basis.
b) Check that the vectors are mutually orthogonal, and are therefore an orthogonal basis.
c) Represent (1, 1, 1) as a linear combination
of this basis.
d) Make the basis orthonormal.
435
A.5. INNER PRODUCT AND PROJECTIONS
Exercise A.5.6: Let ๐‘† be the subspace spanned by (1, 3, −1), (1, 1, 1). Find an orthogonal basis of
๐‘† by the Gram-Schmidt process.
Exercise A.5.7: Starting with (1, 2, 3), (1, 1, 1), (2, 2, 0), follow the Gram-Schmidt process to find
an orthogonal basis of โ„3 .
Exercise A.5.8: Find an orthogonal basis of โ„3 such that (3, 1, −2) is one of the vectors. Hint:
First find two extra vectors to make a linearly independent set.
Exercise A.5.9: Using cosines and sines of ๐œƒ, find a unit vector ๐‘ข® in โ„2 that makes angle ๐œƒ with
®๐šค = (1, 0). What is h®๐šค , ๐‘ข® i?
Exercise A.5.101: Find the ๐‘  that makes the following vectors orthogonal: (1, 1, 1), (1, ๐‘ , 1).
Exercise A.5.102: Find the angle ๐œƒ between (1, 2, 3), (1, 1, 1).
® = 1 and h®
Exercise A.5.103: Given that h®
๐‘ฃ , ๐‘คi
๐‘ฃ , ๐‘ข® i = −1 and k®
๐‘ฃ k = 3 and
a) h3®
๐‘ข , 5®
๐‘ฃi
® + 3®
b) h®
๐‘ฃ , 2๐‘ค
๐‘ขi
® + 3®
c) h๐‘ค
๐‘ฃ , ๐‘ฃ® i
Exercise A.5.104: Suppose ๐‘ฃ® = (1, 0, −1). Find
a) proj๐‘ฃ® (0, 2, 1)
b) proj๐‘ฃ® (1, 0, 1)
c) proj๐‘ฃ® (4, −1, 0)
Exercise A.5.105: The vectors (1, 1, −1), (2, −1, 1), (1, −5, 3) form an orthogonal basis. Represent
the following vectors in terms of this basis:
a) (1, −8, 4)
b) (5, −7, 5)
c) (0, −6, 2)
Exercise A.5.106: Let ๐‘† be the subspace spanned by (2, −1, 1), (2, 2, 2). Find an orthogonal basis
of ๐‘† by the Gram-Schmidt process.
Exercise A.5.107: Starting with (1, 1, −1), (2, 3, −1), (1, −1, 1), follow the Gram-Schmidt process
to find an orthogonal basis of โ„3 .
436
A.6
APPENDIX A. LINEAR ALGEBRA
Determinant
Note: 1 lecture
For square matrices we define a useful quantity called the determinant. Define the
determinant of a 1 × 1 matrix as the value of its only entry
det
For a 2 × 2 matrix, define
det
๐‘Ž
๐‘Ž ๐‘
๐‘ ๐‘‘
def
= ๐‘Ž.
def
= ๐‘Ž๐‘‘ − ๐‘๐‘.
Before defining the determinant for larger matrices, we note the meaning of the
determinant. An ๐‘› × ๐‘› matrix gives a mapping of the ๐‘›-dimensional euclidean space โ„ ๐‘›
to itself. So a 2 × 2 matrix ๐ด is a mapping of the plane to itself. The determinant of ๐ด is the
factor by which the area of objects changes. If we take the unit square (square of side 1) in
the plane, then ๐ด takes the square to a parallelogram of area |det(๐ด)|. The sign of det(๐ด)
denotes a change of orientation (negative if the axes get flipped). For example, let
1 1
๐ด=
.
−1 1
Then det(๐ด) = 1 + 1 = 2. Let us see where ๐ด sends the unit square—the square with
vertices (0, 0), (1, 0), (0, 1), and (1, 1). The point (0, 0) gets sent to (0, 0).
1 1
−1 1
1
1
=
,
0
−1
1 1
−1 1
0
1
=
,
1
1
1 1
−1 1
1
2
=
.
1
0
The image of the square is another√square with vertices (0, 0), (1, −1), (1, 1), and (2, 0). The
image square has a side of length 2, and it is therefore of area 2. See Figure A.7.
1
1
0 0
0 0
1
2
1
−1
Figure A.7: Image of the unit quare via the mapping ๐ด.
In general, the image of a square is going to be a parallelogram. In high school geometry,
you may have seen a formula for computing the area of a parallelogram with vertices (0, 0),
437
A.6. DETERMINANT
(๐‘Ž, ๐‘), (๐‘, ๐‘‘) and (๐‘Ž + ๐‘, ๐‘ + ๐‘‘). The area is
det
๐‘Ž ๐‘
๐‘ ๐‘‘
= |๐‘Ž๐‘‘ − ๐‘๐‘|.
The vertical lines above mean absolute value. The matrix
the given parallelogram.
๐‘Ž ๐‘
๐‘ ๐‘‘
carries the unit square to
There are a number of ways to define the determinant for an ๐‘› × ๐‘› matrix. Let us use
the so-called cofactor expansion. We define ๐ด ๐‘–๐‘— as the matrix ๐ด with the ๐‘– th row and the ๐‘— th
column deleted. For example, if
If
๏ฃฎ1 2 3๏ฃน
๏ฃฏ
๏ฃบ
๐ด = ๏ฃฏ๏ฃฏ4 5 6๏ฃบ๏ฃบ ,
๏ฃฏ7 8 9๏ฃบ
๏ฃฐ
๏ฃป
๐ด12
then
4 6
=
7 9
and
๐ด23
1 2
=
.
7 8
We now define the determinant recursively
def
det(๐ด) =
๐‘›
Õ
(−1)1+๐‘— ๐‘Ž 1๐‘— det(๐ด1๐‘— ),
๐‘—=1
or in other words
(
det(๐ด) = ๐‘Ž 11 det(๐ด11 ) − ๐‘Ž 12 det(๐ด12 ) + ๐‘Ž 13 det(๐ด13 ) − · · ·
+๐‘Ž 1๐‘› det(๐ด1๐‘› ) if ๐‘› is odd,
−๐‘Ž 1๐‘› det(๐ด1๐‘› ) if ๐‘› even.
For a 3 × 3 matrix, we get det(๐ด) = ๐‘Ž 11 det(๐ด11 ) − ๐‘Ž 12 det(๐ด12 ) + ๐‘Ž 13 det(๐ด13 ). For example,
๏ฃฎ1 2 3๏ฃน
๏ฃบª
5 6
4 6
4 5
© ๏ฃฏ๏ฃฏ
๏ฃบ
− 2 · det
+ 3 · det
det ­ ๏ฃฏ4 5 6๏ฃบ ® = 1 · det
8 9
7 9
7 8
๏ฃฏ7 8 9๏ฃบ
«๏ฃฐ
๏ฃป¬
= 1(5 · 9 − 6 · 8) − 2(4 · 9 − 6 · 7) + 3(4 · 8 − 5 · 7) = 0.
It turns out that we did not have to necessarily use the first row. That is for any ๐‘–,
det(๐ด) =
๐‘›
Õ
(−1)๐‘–+๐‘— ๐‘Ž ๐‘–๐‘— det(๐ด ๐‘–๐‘— ).
๐‘—=1
It is sometimes useful to use a row other than the first. In the following example it is more
convenient to expand along the second row. Notice that for the second row we are starting
with a negative sign.
๏ฃฎ1 2 3๏ฃน
๏ฃบª
2
3
1
3
1
2
© ๏ฃฏ๏ฃฏ
+ 5 · det
− 0 · det
det ­ ๏ฃฏ0 5 0๏ฃบ๏ฃบ ® = −0 · det
8 9
7 9
7 8
๏ฃฏ7 8 9๏ฃบ
«๏ฃฐ
๏ฃป¬
= 0 + 5(1 · 9 − 3 · 7) + 0 = −60.
438
APPENDIX A. LINEAR ALGEBRA
Let us check if it is really the same as expanding along the first row,
๏ฃฎ1 2 3๏ฃน
๏ฃบª
5
0
0
0
0
5
© ๏ฃฏ๏ฃฏ
det ­ ๏ฃฏ0 5 0๏ฃบ๏ฃบ ® = 1 · det
− 2 · det
+ 3 · det
8 9
7 9
7 8
๏ฃฏ7 8 9๏ฃบ
«๏ฃฐ
¬
๏ฃป
= 1(5 · 9 − 0 · 8) − 2(0 · 9 − 0 · 7) + 3(0 · 8 − 5 · 7) = −60.
In computing the determinant, we alternately add and subtract the determinants of the
submatrices ๐ด ๐‘–๐‘— multiplied by ๐‘Ž ๐‘–๐‘— for a fixed ๐‘– and all ๐‘—. The numbers (−1)๐‘–+๐‘— det(๐ด ๐‘–๐‘— ) are
called cofactors of the matrix. And that is why this method of computing the determinant
is called the cofactor expansion.
Similarly we do not need to expand along a row, we can expand along a column. For
any ๐‘—,
det(๐ด) =
๐‘›
Õ
(−1)๐‘–+๐‘— ๐‘Ž ๐‘–๐‘— det(๐ด ๐‘–๐‘— ).
๐‘–=1
A related fact is that
det(๐ด) = det(๐ด๐‘‡ ).
A matrix is upper triangular if all elements below the main diagonal are 0. For example,
๏ฃฎ1 2 3๏ฃน
๏ฃฏ
๏ฃบ
๏ฃฏ0 5 6๏ฃบ
๏ฃฏ
๏ฃบ
๏ฃฏ0 0 9๏ฃบ
๏ฃฐ
๏ฃป
is upper triangular. Similarly a lower triangular matrix is one where everything above the
diagonal is zero. For example,
๏ฃฎ1 0 0๏ฃน
๏ฃฏ
๏ฃบ
๏ฃฏ4 5 0๏ฃบ .
๏ฃฏ
๏ฃบ
๏ฃฏ7 8 9๏ฃบ
๏ฃฐ
๏ฃป
The determinant for triangular matrices is very simple to compute. Consider the lower
triangular matrix. If we expand along the first row, we find that the determinant is 1
times the determinant of the lower triangular matrix 58 09 . So the deteriminant is just the
product of the diagonal entries:
๏ฃฎ1 0 0๏ฃน
๏ฃบª
© ๏ฃฏ๏ฃฏ
det ­ ๏ฃฏ4 5 0๏ฃบ๏ฃบ ® = 1 · 5 · 9 = 45.
๏ฃฏ
๏ฃบ
« ๏ฃฐ7 8 9๏ฃป ¬
Similarly for upper triangular matrices
๏ฃฎ1 2 3๏ฃน
๏ฃบª
© ๏ฃฏ๏ฃฏ
det ­ ๏ฃฏ0 5 6๏ฃบ๏ฃบ ® = 1 · 5 · 9 = 45.
๏ฃฏ
๏ฃบ
« ๏ฃฐ0 0 9๏ฃป ¬
439
A.6. DETERMINANT
In general, if ๐ด is triangular, then
det(๐ด) = ๐‘Ž 11 ๐‘Ž22 · · · ๐‘Ž ๐‘›๐‘› .
If ๐ด is diagonal, then it is also triangular (upper and lower), so same formula applies.
For example,
๏ฃฎ2 0 0๏ฃน
๏ฃบª
© ๏ฃฏ๏ฃฏ
det ­ ๏ฃฏ0 3 0๏ฃบ๏ฃบ ® = 2 · 3 · 5 = 30.
๏ฃฏ
๏ฃบ
« ๏ฃฐ0 0 5๏ฃป ¬
In particular, the identity matrix ๐ผ is diagonal, and the diagonal entries are all 1. Thus,
det(๐ผ) = 1.
The determinant is telling you how geometric objects scale. If ๐ต doubles the sizes of
geometric objects and ๐ด triples them, then ๐ด๐ต (which applies ๐ต to an object and then it
applies ๐ด) should make size go up by a factor of 6. This is true in general:
Theorem A.6.1.
det(๐ด๐ต) = det(๐ด) det(๐ต).
This property is one of the most useful, and it is employed often to actually compute
determinants. A particularly interesting consequence is to note what it means for the
existence of inverses. Take ๐ด and ๐ต to be inverses, that is ๐ด๐ต = ๐ผ. Then
det(๐ด) det(๐ต) = det(๐ด๐ต) = det(๐ผ) = 1.
Neither det(๐ด) nor det(๐ต) can be zero. This fact is an extremely useful property of the
determinant, and one which is used often in this book:
Theorem A.6.2. An ๐‘› × ๐‘› matrix ๐ด is invertible if and only if det(๐ด) ≠ 0.
In fact, det(๐ด−1 ) det(๐ด) = 1 says that
det(๐ด−1 ) =
1
.
det(๐ด)
So we know what the determinant of ๐ด−1 is without computing ๐ด−1 .
Let us return to the formula for the inverse of a 2 × 2 matrix:
๐‘Ž ๐‘
๐‘ ๐‘‘
−1
1
๐‘‘ −๐‘
=
.
๐‘Ž๐‘‘ − ๐‘๐‘ −๐‘ ๐‘Ž
Notice the determinant of the matrix [ ๐‘Ž๐‘ ๐‘๐‘‘ ] in the denominator of the fraction. The formula
only works if the determinant is nonzero, otherwise we are dividing by zero.
A common notation for the determinant is a pair of vertical lines:
๐‘Ž ๐‘
= det
๐‘ ๐‘‘
๐‘Ž ๐‘
๐‘ ๐‘‘
.
Personally, I find this notation confusing as vertical lines usually mean a positive quantity,
while determinants can be negative. Also think about how to write the absolute value of a
determinant. This notation is not used in this book.
440
APPENDIX A. LINEAR ALGEBRA
A.6.1
Exercises
Exercise A.6.1: Compute the determinant of the following matrices:
a) 3
1 3
b)
2 1
๏ฃฎ2 1 0๏ฃน
๏ฃฏ
๏ฃบ
e) ๏ฃฏ๏ฃฏ−2 7 −3๏ฃบ๏ฃบ
๏ฃฏ0 2 0๏ฃบ
๏ฃฐ
๏ฃป
๏ฃฎ2 1 3๏ฃน
๏ฃฏ
๏ฃบ
f) ๏ฃฏ๏ฃฏ8 6 3๏ฃบ๏ฃบ
๏ฃฏ7 9 7๏ฃบ
๏ฃฐ
๏ฃป
2 1
c)
4 2
๏ฃฎ0
๏ฃฏ
๏ฃฏ0
g) ๏ฃฏ๏ฃฏ
๏ฃฏ3
๏ฃฏ0
๏ฃฐ
๏ฃฎ1 2 3๏ฃน
๏ฃฏ
๏ฃบ
d) ๏ฃฏ๏ฃฏ0 4 5๏ฃบ๏ฃบ
๏ฃฏ0 0 6๏ฃบ
๏ฃฐ
๏ฃป
2
0
4
0
5 7 ๏ฃน๏ฃบ
2 −3๏ฃบ๏ฃบ
5 7 ๏ฃบ๏ฃบ
2 4 ๏ฃบ๏ฃป
๏ฃฎ0 1 2 0๏ฃน
๏ฃฏ
๏ฃบ
๏ฃฏ1 1 −1 2๏ฃบ
๏ฃบ
h) ๏ฃฏ๏ฃฏ
๏ฃบ
๏ฃฏ1 1 2 1๏ฃบ
๏ฃฏ2 −1 −2 3๏ฃบ
๏ฃฐ
๏ฃป
Exercise A.6.2: For which ๐‘ฅ are the following matrices singular (not invertible).
2 3
a)
2 ๐‘ฅ
2 ๐‘ฅ
b)
1 2
๐‘ฅ 1
c)
4 ๐‘ฅ
๏ฃฎ ๐‘ฅ 0 1๏ฃน
๏ฃบ
๏ฃฏ
d) ๏ฃฏ๏ฃฏ 1 4 2๏ฃบ๏ฃบ
๏ฃฏ 1 6 2๏ฃบ
๏ฃป
๏ฃฐ
Exercise A.6.3: Compute
without computing the inverse.
๏ฃฎ
© ๏ฃฏ2
­ ๏ฃฏ0
det ­­ ๏ฃฏ๏ฃฏ
­ ๏ฃฏ0
๏ฃฏ0
«๏ฃฐ
1
8
0
0
2
6
3
0
3๏ฃน๏ฃบ
5๏ฃบ๏ฃบ
9๏ฃบ๏ฃบ
1๏ฃบ๏ฃป
−1
ª
®
®
®
®
¬
Exercise A.6.4: Suppose
๏ฃฎ1
๏ฃฏ
๏ฃฏ2
๐ฟ = ๏ฃฏ๏ฃฏ
๏ฃฏ7
๏ฃฏ28
๏ฃฐ
0 0
1 0
๐œ‹ 1
5 −99
0๏ฃน๏ฃบ
0๏ฃบ๏ฃบ
0๏ฃบ๏ฃบ
1๏ฃบ๏ฃป
and
๏ฃฎ5
๏ฃฏ
๏ฃฏ0
๐‘ˆ = ๏ฃฏ๏ฃฏ
๏ฃฏ0
๏ฃฏ0
๏ฃฐ
9 1 − sin(1)๏ฃน๏ฃบ
1 88
−1 ๏ฃบ๏ฃบ
.
0 1
3 ๏ฃบ๏ฃบ
0 0
1 ๏ฃบ๏ฃป
Let ๐ด = ๐ฟ๐‘ˆ. Compute det(๐ด) in a simple way, without computing what is ๐ด. Hint: First read off
det(๐ฟ) and det(๐‘ˆ).
Exercise A.6.5: Consider the linear mapping from โ„2 to โ„2 given by the matrix ๐ด = 12 ๐‘ฅ1 for
some number ๐‘ฅ. You wish to make ๐ด such that it doubles the area of every geometric figure. What
are the possibilities for ๐‘ฅ (there are two answers).
Exercise A.6.6: Suppose ๐ด and ๐‘† are ๐‘› × ๐‘› matrices, and ๐‘† is invertible. Suppose that det(๐ด) = 3.
Compute det(๐‘† −1 ๐ด๐‘†) and det(๐‘†๐ด๐‘†−1 ). Justify your answer using the theorems in this section.
Exercise A.6.7: Let ๐ด be an ๐‘›×๐‘› matrix such that det(๐ด) = 1. Compute det(๐‘ฅ๐ด) given a number ๐‘ฅ.
Hint: First try computing det(๐‘ฅ๐ผ), then note that ๐‘ฅ๐ด = (๐‘ฅ๐ผ)๐ด.
441
A.6. DETERMINANT
Exercise A.6.101: Compute the determinant of the following matrices:
a) −2
2 −2
b)
1 3
๏ฃฎ 2 1 0๏ฃน
๏ฃฏ
๏ฃบ
e) ๏ฃฏ๏ฃฏ−2 7 3๏ฃบ๏ฃบ
๏ฃฏ 1 1 0๏ฃบ
๏ฃฐ
๏ฃป
2 2
c)
2 2
๏ฃฎ3
๏ฃฏ
๏ฃฏ0
g) ๏ฃฏ๏ฃฏ
๏ฃฏ0
๏ฃฏ2
๏ฃฐ
๏ฃฎ5 1 3๏ฃน
๏ฃฏ
๏ฃบ
f) ๏ฃฏ๏ฃฏ4 1 1๏ฃบ๏ฃบ
๏ฃฏ4 5 1๏ฃบ
๏ฃฐ
๏ฃป
๏ฃฎ2 9 −11๏ฃน
๏ฃฏ
๏ฃบ
d) ๏ฃฏ๏ฃฏ0 −1 5 ๏ฃบ๏ฃบ
๏ฃฏ0 0
3 ๏ฃบ๏ฃป
๏ฃฐ
2
0
4
1
5
2
5
2
7๏ฃน๏ฃบ
0๏ฃบ๏ฃบ
0๏ฃบ๏ฃบ
4๏ฃบ๏ฃป
๏ฃฎ0
๏ฃฏ
๏ฃฏ1
h) ๏ฃฏ๏ฃฏ
๏ฃฏ5
๏ฃฏ1
๏ฃฐ
2 1
0 ๏ฃน๏ฃบ
2 −3 4 ๏ฃบ๏ฃบ
6 −7 8 ๏ฃบ๏ฃบ
2 3 −2๏ฃบ๏ฃป
Exercise A.6.102: For which ๐‘ฅ are the following matrices singular (not invertible).
1 3
a)
1 ๐‘ฅ
3 ๐‘ฅ
b)
1 3
๐‘ฅ 3
c)
3 ๐‘ฅ
๏ฃฎ ๐‘ฅ 1 0๏ฃน
๏ฃฏ
๏ฃบ
d) ๏ฃฏ๏ฃฏ 1 4 0๏ฃบ๏ฃบ
๏ฃฏ 1 6 2๏ฃบ
๏ฃฐ
๏ฃป
Exercise A.6.103: Compute
without computing the inverse.
๏ฃฎ
๏ฃน −1
© ๏ฃฏ3 4 7 12 ๏ฃบ ª
­ ๏ฃฏ0 −1 9 −8๏ฃบ ®
๏ฃบ ®
det ­­ ๏ฃฏ๏ฃฏ
๏ฃบ ®
0
0
−2
4
­๏ฃฏ
๏ฃบ ®
๏ฃฏ0 0 0 2 ๏ฃบ
๏ฃป ¬
«๏ฃฐ
Exercise A.6.104 (challenging): Find all the ๐‘ฅ that make the matrix inverse
1 2
1 ๐‘ฅ
−1
have only integer entries (no fractions). Note that there are two answers.
442
APPENDIX A. LINEAR ALGEBRA
Appendix B
Table of Laplace Transforms
The function ๐‘ข is the Heaviside function, ๐›ฟ is the Dirac delta function, and
∞
∫
๐‘’
Γ(๐‘ก) =
−๐œ ๐‘ก−1
๐œ
0
๐‘‘๐œ,
2
erf(๐‘ก) = √
๐œ‹
∫
๐‘ก
2
๐‘’ −๐œ ๐‘‘๐œ,
๐‘“ (๐‘ก)
๐น(๐‘ ) = โ„’ ๐‘“ (๐‘ก) =
๐ถ
๐‘ข(๐‘ก − ๐‘Ž)
๐ถ
๐‘ 
1
๐‘ 2
2
๐‘ 3
๐‘›!
๐‘  ๐‘›+1
Γ(๐‘+1)
๐‘  ๐‘+1
1
๐‘ +๐‘Ž
๐œ”
๐‘  2 +๐œ” 2
๐‘ 
๐‘  2 +๐œ” 2
๐œ”
๐‘  2 −๐œ” 2
๐‘ 
๐‘  2 −๐œ” 2
๐‘’ −๐‘Ž๐‘ 
๐‘ 
๐›ฟ(๐‘ก)
1
๐›ฟ(๐‘ก − ๐‘Ž)
๐‘’ −๐‘Ž๐‘ 
๐‘ก
๐‘ก2
๐‘ก๐‘›
๐‘ก๐‘
(๐‘ > 0)
๐‘’ −๐‘Ž๐‘ก
sin(๐œ”๐‘ก)
cos(๐œ”๐‘ก)
sinh(๐œ”๐‘ก)
cosh(๐œ”๐‘ก)
erf
√1
๐œ‹๐‘ก
1
√
๐œ‹๐‘ก
๐‘ก
2๐‘Ž
1 (๐‘Ž๐‘ )2
๐‘ ๐‘’
2
−๐‘Ž
4๐‘ก
(๐‘Ž ≥ 0)
√
2
− ๐‘Ž๐‘’ ๐‘Ž ๐‘ก erfc(๐‘Ž ๐‘ก) (๐‘Ž > 0)
exp
erfc(๐‘ก) = 1 − erf(๐‘ก).
0
−๐‘Ž๐‘ 
๐‘’√
๐‘ 
1
√
๐‘ +๐‘Ž
erfc(๐‘Ž๐‘ )
∫∞
0
๐‘’ −๐‘ ๐‘ก ๐‘“ (๐‘ก) ๐‘‘๐‘ก
444
APPENDIX B. TABLE OF LAPLACE TRANSFORMS
∫∞
๐‘“ (๐‘ก)
๐น(๐‘ ) = โ„’ ๐‘“ (๐‘ก) =
๐‘Ž ๐‘“ (๐‘ก) + ๐‘ ๐‘”(๐‘ก)
๐‘Ž๐น(๐‘ ) + ๐‘๐บ(๐‘ )
๐‘“ (๐‘Ž๐‘ก)
๐‘“ (๐‘ก − ๐‘Ž)๐‘ข(๐‘ก − ๐‘Ž)
1
๐‘ 
๐‘Ž๐น ๐‘Ž
๐‘’ −๐‘Ž๐‘  ๐น(๐‘ )
๐‘’ −๐‘Ž๐‘ก ๐‘“ (๐‘ก)
๐น(๐‘  + ๐‘Ž)
๐‘” 0(๐‘ก)
๐‘ ๐บ(๐‘ ) − ๐‘”(0)
๐‘” 00(๐‘ก)
๐‘  2 ๐บ(๐‘ ) − ๐‘  ๐‘”(0) − ๐‘” 0(0)
๐‘” 000(๐‘ก)
๐‘  3 ๐บ(๐‘ ) − ๐‘  2 ๐‘”(0) − ๐‘  ๐‘” 0(0) − ๐‘” 00(0)
๐‘” (๐‘›) (๐‘ก)
๐‘  ๐‘› ๐บ(๐‘ ) − ๐‘  ๐‘›−1 ๐‘”(0) − · · · − ๐‘” (๐‘›−1) (0)
(๐‘Ž > 0)
( ๐‘“ ∗ ๐‘”)(๐‘ก) =
∫๐‘ก
0
๐‘“ (๐œ)๐‘”(๐‘ก − ๐œ) ๐‘‘๐œ
๐น(๐‘ )๐บ(๐‘ )
−๐น0(๐‘ )
๐‘ก ๐‘› ๐‘“ (๐‘ก)
(−1)๐‘› ๐น (๐‘›) (๐‘ )
0
๐‘“ (๐‘ก)
๐‘ก
๐‘“ (๐œ)๐‘‘๐œ
๐‘’ −๐‘ ๐‘ก ๐‘“ (๐‘ก) ๐‘‘๐‘ก
๐‘ก ๐‘“ (๐‘ก)
∫๐‘ก
0
1
๐‘  ๐น(๐‘ )
∫∞
๐น(๐œŽ)๐‘‘๐œŽ
๐‘ 
Further Reading
[BM] Paul W. Berg and James L. McGregor, Elementary Partial Differential Equations,
Holden-Day, San Francisco, CA, 1966.
[BD] William E. Boyce and Richard C. DiPrima, Elementary Differential Equations and
Boundary Value Problems, 11th edition, John Wiley & Sons Inc., New York, NY, 2017.
[EP]
C.H. Edwards and D.E. Penney, Differential Equations and Boundary Value Problems:
Computing and Modeling, 5th edition, Pearson, 2014.
[F]
Stanley J. Farlow, An Introduction to Differential Equations and Their Applications,
McGraw-Hill, Inc., Princeton, NJ, 1994. (Published also by Dover Publications, 2006.)
[I]
E.L. Ince, Ordinary Differential Equations, Dover Publications, Inc., New York, NY,
1956.
[T]
William F. Trench, Elementary Differential Equations with Boundary Value Problems.
Books and Monographs. Book 9. 2013. https://digitalcommons.trinity.edu/
mono/9
446
FURTHER READING
Solutions to Selected Exercises
0.2.101: Compute ๐‘ฅ 0 = −2๐‘’ −2๐‘ก and ๐‘ฅ 00 = 4๐‘’ −2๐‘ก . Then (4๐‘’ −2๐‘ก ) + 4(−2๐‘’ −2๐‘ก ) + 4(๐‘’ −2๐‘ก ) = 0.
0.2.102: Yes.
0.2.103: ๐‘ฆ = ๐‘ฅ ๐‘Ÿ is a solution for ๐‘Ÿ = 0 and ๐‘Ÿ = 2.
0.2.104: ๐ถ1 = 100, ๐ถ2 = −90
0.2.105: ๐œ‘ = −9๐‘’ 8๐‘ 
0.2.106: a) ๐‘ฅ = 9๐‘’ −4๐‘ก b) ๐‘ฅ = cos(2๐‘ก) + sin(2๐‘ก) c) ๐‘ = 4๐‘’ 3๐‘ž d) ๐‘‡ = 3 sinh(2๐‘ฅ)
0.3.101: a) PDE, equation, second order, linear, nonhomogeneous, constant coefficient.
b) ODE, equation, first order, linear, nonhomogeneous, not constant coefficient, not
autonomous.
c) ODE, equation, seventh order, linear, homogeneous, constant coefficient, autonomous.
d) ODE, equation, second order, linear, nonhomogeneous, constant coefficient, autonomous.
e) ODE, system, second order, nonlinear.
f) PDE, equation, second order, nonlinear.
0.3.102: equation: ๐‘Ž(๐‘ฅ)๐‘ฆ = ๐‘(๐‘ฅ), solution: ๐‘ฆ =
๐‘(๐‘ฅ)
.
๐‘Ž(๐‘ฅ)
0.3.103: ๐‘˜ = 0 or ๐‘˜ = 1
2
1.1.101: ๐‘ฆ = ๐‘’ ๐‘ฅ + ๐‘ฅ2 + 9
1.1.102:
1.1.103:
1.1.104:
๐‘ฅ = (3๐‘ก − 2)1/3
√ ๐‘ฅ = sin−1 ๐‘ก + 1/ 2
170
1/(1−๐‘›)
1.1.105: If ๐‘› ≠ 1, then ๐‘ฆ = (1 − ๐‘›)๐‘ฅ + 1
. If ๐‘› = 1, then ๐‘ฆ = ๐‘’ ๐‘ฅ .
1.1.106: The equation is ๐‘Ÿ 0 = −๐ถ for some constant ๐ถ. The snowball will be completely
melted in 25 minutes from time ๐‘ก = 0.
1.1.107: ๐‘ฆ = ๐ด๐‘ฅ 3 + ๐ต๐‘ฅ 2 + ๐ถ๐‘ฅ + ๐ท, so 4 constants.
1.2.101:
448
SOLUTIONS TO SELECTED EXERCISES
๐‘ฆ = 0 is a solution such that ๐‘ฆ(0) = 0.
1.2.102: Yes a solution exists. The equation is ๐‘ฆ 0 = ๐‘“ (๐‘ฅ, ๐‘ฆ) where ๐‘“ (๐‘ฅ, ๐‘ฆ) = ๐‘ฅ๐‘ฆ. The
๐œ•๐‘“
function ๐‘“ (๐‘ฅ, ๐‘ฆ) is continuous and ๐œ•๐‘ฆ = ๐‘ฅ, which is also continuous near (0, 0). So a
solution exists and is unique. (In fact, ๐‘ฆ = 0 is the solution.)
1.2.103:
No, the equation is not defined at (๐‘ฅ, ๐‘ฆ) = (1, 0).
1.2.104:
a) ๐‘ฆ 0 = cos ๐‘ฆ,
b) ๐‘ฆ 0 = ๐‘ฆ cos(๐‘ฅ),
c) ๐‘ฆ 0 = sin ๐‘ฅ.
Justification left to reader.
1.2.105: Picard does not apply as ๐‘“ is not continuous at ๐‘ฆ = 0. The equation does not have
a continuously differentiable solution. Suppose it did. Notice that ๐‘ฆ 0(0) = 1. By the first
derivative test, ๐‘ฆ(๐‘ฅ) > 0 for small positive ๐‘ฅ. But then for those ๐‘ฅ we would have ๐‘ฆ 0(๐‘ฅ) = 0,
so clearly the derivative cannot be continuous.
∫๐‘ฅ
1.2.106:
The solution is ๐‘ฆ(๐‘ฅ) =
1.3.101:
๐‘ฆ = ๐ถ๐‘’ ๐‘ฅ
1.3.102:
1.3.103:
1.3.104:
๐‘ฅ = ๐‘’๐‘ก + 1
๐‘ฅ3 + ๐‘ฅ = ๐‘ก + 2
1
๐‘ฆ = 1−ln
๐‘ฅ
1.3.105:
sin(๐‘ฆ) = − cos(๐‘ฅ) + ๐ถ
1.3.106:
The range is approximately 7.45 to 12.15 minutes.
๐‘ฅ0
๐‘“ (๐‘ ) ๐‘‘๐‘  + ๐‘ฆ0 , and this does indeed exist for every ๐‘ฅ.
2
3
๐‘ก
1.3.107: a) ๐‘ฅ = 1000๐‘’
. b) 102 rabbits after one month, 861 after 5 months, 999 after 10
๐‘’ ๐‘ก +24
months, 1000 after 15 months.
3
1.4.101:
๐‘ฆ = ๐ถ๐‘’ −๐‘ฅ + 1/3
1.4.102:
1.4.103:
๐‘ฆ = 2๐‘’ cos(2๐‘ฅ)+1 + 1
250 grams
1.4.104:
๐‘ƒ(5) = 1000๐‘’ 2×5−0.05×5 = 1000๐‘’ 8.75 ≈ 6.31 × 106
1.4.105:
๐ดโ„Ž 0 = ๐ผ − ๐‘˜ โ„Ž, where ๐‘˜ is a constant with units m2/s.
1.5.101:
๐‘ฆ=
2
3๐‘ฅ−2
1.5.102:
๐‘ฆ=
3−๐‘ฅ 2
2๐‘ฅ
1.5.103:
๐‘ฆ = 7๐‘’ 3๐‘ฅ + 3๐‘ฅ + 1
1.5.104:
๐‘ฆ=
2
p
1/3
๐‘ฅ 2 − ln(๐ถ − ๐‘ฅ)
1.6.101: a) 0, 1, 2 are critical points. b) ๐‘ฅ = 0 is unstable (semistable), ๐‘ฅ = 1 is stable,
and ๐‘ฅ = 2 is unstable. c) 1
1.6.102: a) There are no critical points. b) ∞
√
๐‘˜๐‘€+ (๐‘˜๐‘€)2 +4๐ด๐‘˜
๐‘‘๐‘ฅ
1.6.103: a) ๐‘‘๐‘ก = ๐‘˜๐‘ฅ(๐‘€ − ๐‘ฅ) + ๐ด b)
2๐‘˜
1.6.104:
a) ๐›ผ is a stable critical point, ๐›ฝ is an unstable one.
1.7.101:
Approximately: 1.0000, 1.2397, 1.3829
b) ๐›ผ,
c) ๐›ผ,
d) ∞ or DNE.
449
SOLUTIONS TO SELECTED EXERCISES
1.7.102: a) 0, 8, 12 b) ๐‘ฅ(4) = 16, so errors are: 16, 8, 4. c) Factors are 0.5, 0.5, 0.5.
1.7.103: a) 0, 0, 0 b) ๐‘ฅ = 0 is a solution so errors are: 0, 0, 0.
1.7.104: a) Improved Euler: ๐‘ฆ(1) ≈ 3.3897 for โ„Ž = 1/4, ๐‘ฆ(1) ≈ 3.4237 for โ„Ž = 1/8, b)
Standard Euler: ๐‘ฆ(1) ≈ 2.8828 for โ„Ž = 1/4, ๐‘ฆ(1) ≈ 3.1316 for โ„Ž = 1/8, c) ๐‘ฆ = 2๐‘’ ๐‘ฅ − ๐‘ฅ − 1, so
๐‘ฆ(2) is approximately 3.4366. d) Approximate errors for improved Euler: 0.046852 for
โ„Ž = 1/4, and 0.012881 for โ„Ž = 1/8. For standard Euler: 0.55375 for โ„Ž = 1/4, and 0.30499 for
โ„Ž = 1/8. Factor is approximately 0.27 for improved Euler, and 0.55 for standard Euler.
1.8.101:
a) ๐‘’ ๐‘ฅ ๐‘ฆ + sin(๐‘ฅ) = ๐ถ
b) ๐‘ฅ 2 + ๐‘ฅ๐‘ฆ − 2๐‘ฆ 2 = ๐ถ
c) ๐‘’ ๐‘ฅ + ๐‘’ ๐‘ฆ = ๐ถ
d) ๐‘ฅ 3 + 3๐‘ฅ๐‘ฆ + ๐‘ฆ 3 = ๐ถ
1.8.102: a) Integrating factor is ๐‘ฆ, equation becomes ๐‘‘๐‘ฅ + 3๐‘ฆ 2 ๐‘‘๐‘ฆ = 0.
b) Integrating
๐‘ฅ
๐‘ฅ
−๐‘ฆ
factor is ๐‘’ , equation becomes ๐‘’ ๐‘‘๐‘ฅ − ๐‘’ ๐‘‘๐‘ฆ = 0. c) Integrating factor is ๐‘ฆ 2 , equation
becomes (cos(๐‘ฅ) + ๐‘ฆ) ๐‘‘๐‘ฅ + ๐‘ฅ ๐‘‘๐‘ฆ = 0.
d) Integrating factor is ๐‘ฅ, equation becomes
(2๐‘ฅ ๐‘ฆ + ๐‘ฆ 2 ) ๐‘‘๐‘ฅ + (๐‘ฅ 2 + 2๐‘ฅ๐‘ฆ) ๐‘‘๐‘ฆ = 0.
1.8.103:
1
a) The equation is − ๐‘“ (๐‘ฅ) ๐‘‘๐‘ฅ+ ๐‘”(๐‘ฆ)
๐‘‘๐‘ฆ, and this is exact because ๐‘€ = − ๐‘“ (๐‘ฅ), ๐‘ =
1
,
๐‘”(๐‘ฆ)
so ๐‘€ ๐‘ฆ = 0 = ๐‘๐‘ฅ . b) −๐‘ฅ ๐‘‘๐‘ฅ + 1๐‘ฆ ๐‘‘๐‘ฆ = 0, leads to potential function ๐น(๐‘ฅ, ๐‘ฆ) = − ๐‘ฅ2 + ln|๐‘ฆ|,
solving ๐น(๐‘ฅ, ๐‘ฆ) = ๐ถ leads to the same solution as the example.
2
1.9.101:
a) ๐‘ข =
b) ๐‘ข = cos(๐‘ฅ − 2๐‘ก)
1
1+(๐‘ฅ+5๐‘ก)2
2
1.9.102: ๐‘ข = cos(๐‘ฅ − ๐‘ก)๐‘’ −๐‘ก /2
1.9.103: ๐‘ข = ๐‘ฅ + 4๐‘ก
2.1.101: Yes. To justify try to find a constant ๐ด such that sin(๐‘ฅ) = ๐ด๐‘’ ๐‘ฅ for all ๐‘ฅ.
2.1.102: No. ๐‘’ ๐‘ฅ+2 = ๐‘’ 2 ๐‘’ ๐‘ฅ .
2.1.103: ๐‘ฆ = 5
2.1.104:
๐‘ฆ = ๐ถ1 ln(๐‘ฅ) + ๐ถ2
2.1.105:
๐‘ฆ 00 − 3๐‘ฆ 0 + 2๐‘ฆ = 0
2.2.101:
๐‘ฆ = ๐ถ1 ๐‘’ (−2+
2.2.102:
๐‘ฆ = ๐ถ1 ๐‘’ 3๐‘ฅ + ๐ถ2 ๐‘ฅ๐‘’ 3๐‘ฅ
√
2)๐‘ฅ
√
+ ๐ถ2 ๐‘’ (−2−
2)๐‘ฅ
2.2.103:
√
√
√
๐‘ฆ = ๐‘’ −๐‘ฅ/4 cos ( 7/4)๐‘ฅ − 7๐‘’ −๐‘ฅ/4 sin ( 7/4)๐‘ฅ
2.2.104:
๐‘ฆ=
2.2.105:
๐‘ง(๐‘ก)
2(๐‘Ž−๐‘) −3๐‘ฅ/2
๐‘’
+ 3๐‘Ž+2๐‘
5
5
−๐‘ก
= 2๐‘’ cos(๐‘ก)
๐‘’๐‘ฅ
2.2.107:
๐‘Ž๐›ฝ−๐‘ ๐›ผ๐‘ฅ
๐›ฝ๐‘ฅ
+ ๐‘−๐‘Ž๐›ผ
๐›ฝ−๐›ผ ๐‘’
๐›ฝ−๐›ผ ๐‘’
๐‘ฆ 00 − ๐‘ฆ 0 − 6๐‘ฆ = 0
2.3.101:
๐‘ฆ = ๐ถ1 ๐‘’ ๐‘ฅ + ๐ถ2 ๐‘ฅ 3 + ๐ถ3 ๐‘ฅ 2 + ๐ถ4 ๐‘ฅ + ๐ถ5
2.2.106:
๐‘ฆ=
2.3.102: a) ๐‘Ÿ 3 − 3๐‘Ÿ 2 + 4๐‘Ÿ − 12 = 0
๐ถ3 cos(2๐‘ฅ)
2.3.103: ๐‘ฆ = 0
2.3.104: No. ๐‘’ 1 ๐‘’ ๐‘ฅ − ๐‘’ ๐‘ฅ+1 = 0.
b) ๐‘ฆ 000 − 3๐‘ฆ 00 + 4๐‘ฆ 0 − 12๐‘ฆ = 0
c) ๐‘ฆ = ๐ถ1 ๐‘’ 3๐‘ฅ + ๐ถ2 sin(2๐‘ฅ) +
450
SOLUTIONS TO SELECTED EXERCISES
2.3.105: Yes. (Hint: First note that sin(๐‘ฅ) is bounded. Then note that ๐‘ฅ and ๐‘ฅ sin(๐‘ฅ) cannot
be multiples of each other.)
2.3.106:
๐‘ฆ 000 − ๐‘ฆ 00 + ๐‘ฆ 0 − ๐‘ฆ = 0
2.4.101:
๐‘˜ = 8/9 (and larger)
√
b) ๐ผ = ๐ถ๐‘’ −๐‘ก cos( 3 ๐‘ก − ๐›พ)
2.4.102: a)
0.05๐ผ 00 + 0.1๐ผ 0 + (1/5)๐ผ = 0
√
10 −๐‘ก
√
๐‘’ sin( 3 ๐‘ก)
√
c) ๐ผ = 10๐‘’ −๐‘ก cos( 3 ๐‘ก) +
3
2.4.103: a) ๐‘˜ = 500000
b)
1
√
5 2
≈ 0.141
c) 45000 kg
d) 11250 kg
2.4.104:
๐‘š0 = 31 . If ๐‘š < ๐‘š0 , then the system is overdamped and will not oscillate.
2.5.101:
๐‘ฆ=
−16 sin(3๐‘ฅ)+6 cos(3๐‘ฅ)
73
2๐‘’ ๐‘ฅ +3๐‘ฅ 3 −9๐‘ฅ
6
2
= ๐‘ฅ − 4๐‘ฅ +
√
√
b) ๐‘ฆ = ๐ถ1 cos( 2๐‘ฅ) + ๐ถ2 sin( 2๐‘ฅ) +
2.5.102: a) ๐‘ฆ =
2.5.103:
๐‘ฆ(๐‘ฅ)
2.5.104:
๐‘ฆ=
2.5.105:
๐‘ฆ=
2.5.106:
๐‘ฆ=
2.6.101:
๐œ”=
2.6.102:
๐‘ฅ ๐‘ ๐‘ =
๐œ”0 =
q
6 + ๐‘’ −๐‘ฅ (๐‘ฅ − 5)
2๐‘ฅ๐‘’ ๐‘ฅ −(๐‘’ ๐‘ฅ +๐‘’ −๐‘ฅ ) log(๐‘’ 2๐‘ฅ +1)
4
√
− sin(๐‘ฅ+๐‘)
2๐‘ฅ +
+
๐ถ
๐‘’
1
3
๐‘ฅ 2 + 2๐‘ฅ + 3
√
31
√
4 2
(๐œ”02 −๐œ”2 )๐น0
๐‘š(2๐œ”๐‘)
√
๐ถ2 ๐‘’ −
๐ถ(๐œ”) =
≈ 0.984
2
2๐‘’ ๐‘ฅ +3๐‘ฅ 3 −9๐‘ฅ
6
2
+๐‘š(๐œ”02 −๐œ”2 )
2๐‘ฅ
16
√
3 7
≈ 2.016
cos(๐œ”๐‘ก) +
2๐œ”๐‘๐น0
2
๐‘š(2๐œ”๐‘)2 +๐‘š(๐œ”02 −๐œ”2 )
sin(๐œ”๐‘ก) + ๐ด๐‘˜ , where ๐‘ =
๐‘
2๐‘š
and
๐‘˜
๐‘š.
2.6.103: a) ๐œ” = 2
b) 25
๐ถ1 3๐‘ฅ
2 ๐‘’ ,
๐‘ฆ3 = ๐‘ฆ(๐‘ฅ) = ๐ถ3 ๐‘’ ๐‘ฅ +
๐ถ1 3๐‘ฅ
2 ๐‘’
3.1.101:
๐‘ฆ1 = ๐ถ1 ๐‘’ 3๐‘ฅ , ๐‘ฆ2 = ๐‘ฆ(๐‘ฅ) = ๐ถ2 ๐‘’ ๐‘ฅ +
3.1.102:
๐‘ฅ = 53 ๐‘’ 2๐‘ก − 23 ๐‘’ −๐‘ก , ๐‘ฆ = 53 ๐‘’ 2๐‘ก + 43 ๐‘’ −๐‘ก
3.1.103:
๐‘ฅ10 = ๐‘ฅ 2 , ๐‘ฅ 20 = ๐‘ฅ3 , ๐‘ฅ30 = ๐‘ฅ1 + ๐‘ก
3.1.104:
๐‘ฆ30 + ๐‘ฆ1 + ๐‘ฆ2 = ๐‘ก, ๐‘ฆ40 + ๐‘ฆ1 − ๐‘ฆ2 = ๐‘ก 2 , ๐‘ฆ10 = ๐‘ฆ3 , ๐‘ฆ20 = ๐‘ฆ4
3.1.105:
๐‘ฅ1 = ๐‘ฅ 2 = ๐‘Ž๐‘ก. Explanation of the intuition is left to reader.
3.1.106: a) Left to reader. b) ๐‘ฅ10 = ๐‘‰๐‘Ÿ (๐‘ฅ 2 − ๐‘ฅ 1 ), ๐‘ฅ20 = ๐‘‰๐‘Ÿ ๐‘ฅ1 − ๐‘Ÿ−๐‘ 
๐‘‰ ๐‘ฅ2 .
both ๐‘ฅ 1 and ๐‘ฅ2 go to zero, explanation is left to reader.
3.2.101: −15
3.2.102: −2
15 3.2.103: ๐‘ฅ® = −5
3.2.104: a)
h
1/๐‘Ž
0
0 1/๐‘
1
i
b)
/๐‘Ž 0 0
0 1/๐‘ 0
0 0 1/๐‘
๐‘ก
3.3.101: Yes.
3.3.102: No. 2
cosh(๐‘ก) 1
−
๐‘’
1
−
๐‘’ −๐‘ก
1
= 0®
c) As ๐‘ก goes to infinity,
451
SOLUTIONS TO SELECTED EXERCISES
๐‘ฅ 0
3.3.103:
๐‘ฆ
3.3.104: a)
๐‘ฅ® 0
=
3 −1 ๐‘ฅ =
0 2๐‘ก ๐‘ฅ®
0 2๐‘ก
๐‘ก
๐‘’
0
+
๐‘ฆ
๐‘ก 0
b) ๐‘ฅ® =
3.4.101: a) Eigenvalues: 4, 0, −1
b) ๐‘ฅ® = ๐ถ1
h1i
0
1
๐‘’ 4๐‘ก + ๐ถ2
h0i
1
0
3.4.102: a) Eigenvalues:
b) ๐‘ฅ® = ๐ถ1 ๐‘’ ๐‘ก/2
√
−2 cos 2
√
cos
+ 3 sin
๐‘ฅ® = ๐ถ1
1
3.4.104:
๐‘ฅ® = ๐ถ1
h
1
√
3๐‘ก
2
๐‘’ ๐‘ก + ๐ถ2
cos(๐‘ก)
− sin(๐‘ก)
i
h
i
+ ๐ถ2 ๐‘’ ๐‘ก/2
1
−1
h
+ ๐ถ2
h1i h0i h
,
0
1
√
sin
,
1
0
h
Eigenvectors:
3๐‘ก
i
Eigenvectors:
3
5 ๐‘’ −๐‘ก
−2
√
√
1+ 3๐‘– 1− 3๐‘–
,
2
2 ,
+ ๐ถ3
√
3๐‘ก
2
3.4.103:
2
๐ถ2 ๐‘’ ๐‘ก +๐ถ1
2
๐ถ2 ๐‘’ ๐‘ก
h
−2
√
1− 3๐‘–
−2 sin 2
√
− 3 cos
i
i h
,
−2
√
1+ 3๐‘–
i
3๐‘ก
√
3๐‘ก
2
3
5
−2
√
3๐‘ก
2
๐‘’ −๐‘ก
sin(๐‘ก)
cos(๐‘ก)
i
√
3.5.101: a) Two eigenvalues: ± 2 so the behavior is a saddle. b) Two eigenvalues: 1
and 2, so the behavior is a source. c) Two eigenvalues: ±2๐‘–, so the behavior is a center
(ellipses). d) Two eigenvalues: −1 and −2, so the behavior is a sink. e) Two eigenvalues:
5 and −3, so the behavior is a saddle.
3.5.102: Spiral source.
3.5.103:
The solution does not move anywhere if ๐‘ฆ = 0. When ๐‘ฆ is positive, the solution moves
(with constant speed) in the positive ๐‘ฅ direction. When ๐‘ฆ is negative, the solution moves
(with constant speed) in the negative ๐‘ฅ direction. It is not one of the behaviors we saw.
Note that the matrix has a double eigenvalue 0 and the general solution is ๐‘ฅ = ๐ถ1 ๐‘ก + ๐ถ2
and ๐‘ฆ = ๐ถ1 , which agrees with the description above.
h 1 i
√
√ √
√ h 0 i
3.6.101: ๐‘ฅ® = −1 ๐‘Ž 1 cos( 3 ๐‘ก) + ๐‘ 1 sin( 3 ๐‘ก) + 1
๐‘Ž 2 cos( 2 ๐‘ก) + ๐‘ 2 sin( 2 ๐‘ก) +
−2
1
h0i
0
1
๐‘Ž 3 cos(๐‘ก) + ๐‘ 3 sin(๐‘ก) +
3.6.102:
+
h
1
0
−1
i
3.6.103:
h๐‘š
0 0
0 ๐‘š 0
0 0 ๐‘š
๐‘Ž 2 cos(
p
i
๐‘ฅ® 00 =
๐‘˜/๐‘š ๐‘ก)
−1
1/2
2/3
p
cos(2๐‘ก)
๐‘˜ 0
๐‘˜ −2๐‘˜ ๐‘˜
0 ๐‘˜ −๐‘˜
h −๐‘˜
+ ๐‘ 2 sin(
๐‘ฅ2 = (2/5) cos(
1/6 ๐‘ก)
p
i
๐‘ฅ®. Solution: ๐‘ฅ® =
๐‘˜/๐‘š ๐‘ก)
+
h1i
− (2/5) cos(๐‘ก)
1
1
h
1
−2
1
๐‘Ž3 ๐‘ก + ๐‘3 .
i
p
๐‘Ž 1 cos(
p
3๐‘˜/๐‘š ๐‘ก)+ ๐‘ 1 sin(
3๐‘˜/๐‘š ๐‘ก)
452
SOLUTIONS TO SELECTED EXERCISES
3.7.101: a) 3, 0, 0
3.7.102: a) 1, 1, 2
b) Eigenvalue
h i 1 has a defect
h i of 1h
0
1
−1
c) ๐‘ฅ® = ๐ถ1
c) ๐‘ฅ® = ๐ถ1
b) No defects.
1
0
0
๐‘’ ๐‘ก + ๐ถ2
+๐‘ก
0
1
−1
i
๐‘’ ๐‘ก + ๐ถ3
h
h1i
๐‘’ 3๐‘ก
1
1
3
3
−2
i
+ ๐ถ2
h
1
0
−1
0
๐ด=
3.7.104:
๐‘’ 3๐‘ก +๐‘’ −๐‘ก
2
๐‘’ −๐‘ก −๐‘’ 3๐‘ก
2
"
2๐‘’ 3๐‘ก −4๐‘’ 2๐‘ก +3๐‘’ ๐‘ก
2๐‘’ ๐‘ก −2๐‘’ 2๐‘ก
3๐‘ก
2๐‘’ −5๐‘’ 2๐‘ก +3๐‘’ ๐‘ก
3.8.103: a) ๐‘’ ๐‘ก๐ด =
h
3.8.104:
h
๐‘’ −๐‘ก −๐‘’ 3๐‘ก
2
๐‘’ 3๐‘ก +๐‘’ −๐‘ก
2
1 0
c)
0 1
0
๐‘ก2
2
h 0 i
3
1
๐‘’ 2๐‘ก
3๐‘’ ๐‘ก
2
(๐‘ก+1) ๐‘’ 2๐‘ก −๐‘ก๐‘’ 2๐‘ก
๐‘ก๐‘’ 2๐‘ก (1−๐‘ก) ๐‘’ 2๐‘ก
1+2๐‘ก+5๐‘ก 2 3๐‘ก+6๐‘ก 2
2๐‘ก+4๐‘ก 2 1+2๐‘ก+5๐‘ก 2
i
3๐‘ก
−๐‘’ 3๐‘ก +4๐‘’ 2๐‘ก −3๐‘’ ๐‘ก
2๐‘’ 2๐‘ก −2๐‘’ ๐‘ก
3๐‘ก
−๐‘’ +5๐‘’ 2๐‘ก −3๐‘’ ๐‘ก
− 3๐‘’2
๐‘’๐‘ก
3๐‘’ ๐‘ก 3๐‘’ 3๐‘ก
2 − 2
i
h
b) ๐‘ฅ® =
๐‘’ 0.1๐ด ≈
#
(1−๐‘ก) ๐‘’ 2๐‘ก
(2−๐‘ก) ๐‘’ 2๐‘ก
i
1.25 0.36 0.24 1.25
5(3๐‘› ) − 2๐‘›+2 4(3๐‘› ) − 2๐‘›+2
a)
5(2๐‘› ) − 5(3๐‘› ) 5(2๐‘› ) − 4(3๐‘› )
0 1
if ๐‘› is even, and
if ๐‘› is odd.
1 0
3.8.105:
i
05
๐‘’ ๐‘ก๐ด =
3.8.102:
+ ๐ถ3
0
1
−1
5 5
๐‘’ ๐‘ก๐ด =
3.8.101:
0
1
h
๐‘’ 2๐‘ก
3.7.103: a) 2, 2, 2
b) Eigenvalue
h 0 i 2 has a defect
h 0 i of 2h 0 i h 1 i
h 0 i
2๐‘ก
c) ๐‘ฅ® = ๐ถ1 3 ๐‘’ + ๐ถ2 −1 + ๐‘ก 3 ๐‘’ 2๐‘ก + ๐ถ3 0 + ๐‘ก −1 +
1
i
3 − 2(3๐‘› ) 2(3๐‘› ) − 2
b)
3 − 3๐‘›+1 3๐‘›+1 − 2
3.9.101: The general solution is (particular solutions should agree with one of these):
๐‘ฅ(๐‘ก) = ๐ถ1 ๐‘’ 9๐‘ก + 4๐ถ2 ๐‘’ 4๐‘ก − ๐‘ก/3 − 5/54, ๐‘ฆ(๐‘ก) = ๐ถ1 ๐‘’ 9๐‘ก − ๐ถ2 ๐‘’ 4๐‘ก + ๐‘ก/6 + 7/216
3.9.102: The general solution is (particular solutions should agree with one of these):
๐‘ฅ(๐‘ก) = ๐ถ1 ๐‘’ ๐‘ก + ๐ถ2 ๐‘’ −๐‘ก + ๐‘ก๐‘’ ๐‘ก , ๐‘ฆ(๐‘ก) = ๐ถ1 ๐‘’ ๐‘ก − ๐ถ2 ๐‘’ −๐‘ก + ๐‘ก๐‘’ ๐‘ก
5 ๐‘ก
2๐‘’
3.9.103:
๐‘ฅ® =
1
3.9.104:
๐‘ฅ® =
1 +
1
−1
−9
4.1.101:
80
1
−๐‘ก−1 +
9
1
140
sin(2๐‘ก) +
1
30
๐œ”=๐œ‹
q
+
1√
120 6
cos(2๐‘ก) +
−1 −๐‘ก
2 ๐‘’
1
−1
√
๐‘’ 6๐‘ก
9๐‘ก
40
cos(๐‘ก)
30
−
+
+
1√
120 6
๐‘’−
√
6๐‘ก
−
๐‘ก
60
−
cos(๐‘ก)
70
15
2
4.1.102: ๐œ† ๐‘˜ = 4๐‘˜ 2 ๐œ‹2 for ๐‘˜ = 1, 2, 3, . . .
4.1.103:
1
140
๐‘ฅ ๐‘˜ = cos(2๐‘˜๐œ‹๐‘ก) + ๐ต sin(2๐‘˜๐œ‹๐‘ก)
(for any ๐ต)
๐‘ฅ(๐‘ก) = − sin(๐‘ก)
4.1.104: General solution is ๐‘ฅ = ๐ถ๐‘’ −๐œ†๐‘ก . Since ๐‘ฅ(0) = 0 then ๐ถ = 0, and so ๐‘ฅ(๐‘ก) = 0.
Therefore, the solution is always identically zero. One condition is always enough to
guarantee a unique solution for a first order equation.
√
4.1.105:
√
3
3 −3
๐œ†
2
๐‘’
3
√
−
3
3
cos
√ √
3
3 ๐œ†
2
+ sin
√ √
3
3 ๐œ†
2
=0
453
SOLUTIONS TO SELECTED EXERCISES
4.2.101: sin(๐‘ก)
4.2.102:
∞
Í
๐‘›=1
4.2.103:
1
2
4.2.104:
๐œ‹4
5
(๐œ‹−๐‘›) sin(๐œ‹๐‘›+๐œ‹2 )+(๐œ‹+๐‘›) sin(๐œ‹๐‘›−๐œ‹2 )
๐œ‹๐‘› 2 −๐œ‹3
− 12 cos(2๐‘ก)
∞
Í
+
๐‘›=1
4.3.101: a)
8
6
4.3.102: a)
∞
Í
4.3.103:
(−1)๐‘› (8๐œ‹2 ๐‘› 2 −48)
๐‘›4
∞
Í
+
16(−1)๐‘›
๐œ‹2 ๐‘› 2
๐‘›=1
๐‘›=1
(−1)๐‘›+1 2๐œ†
๐‘›๐œ‹
๐‘“ 0(๐‘ก) =
∞
Í
๐‘›=1
๐‘ก
2
4.3.104: a) ๐น(๐‘ก) =
4.3.105: a)
sin(๐‘›๐‘ก)
∞
Í
๐‘›=1
๐‘›๐œ‹
2 ๐‘ก
cos
๐‘›๐œ‹
๐œ† ๐‘ก
sin
๐œ‹
๐‘› 2 +1
b)
8
6
2๐œ†
๐œ‹
sin
b)
−
16
๐œ‹2
cos
๐œ‹
๐œ†๐‘ก
−
๐œ‹
2๐‘ก
๐œ†
๐œ‹
+
sin
4
๐œ‹2
2๐œ‹
๐œ† ๐‘ก
cos ๐œ‹๐‘ก −
16
9๐œ‹2
cos
2๐œ†
3๐œ‹
sin
3๐œ‹
๐œ† ๐‘ก
+
3๐œ‹
2 ๐‘ก
+···
−···
cos(๐‘›๐œ‹๐‘ก)
∞
Í
+๐ถ+
๐‘›=1
(−1)๐‘›+1
๐‘›
cos(๐‘›๐‘ก)
1
๐‘›4
b) no
b) ๐‘“ is continuous at ๐‘ก = ๐œ‹/2 so the Fourier series converges
sin(๐‘›๐‘ก)
to ๐‘“ (๐œ‹/2) = ๐œ‹/4. Obtain ๐œ‹/4 =
sin(๐‘›๐‘ก)
∞
Í
๐‘›=1
(−1)๐‘›+1
2๐‘›−1
= 1 − 1/3 + 1/5 − 1/7 + · · · .
c) Using the first 4
terms get 76/105 ≈ 0.72 (quite a bad approximation, you would have to take about 50 terms
to start to get to within 0.01 of ๐œ‹/4).
4.3.106: a) ๐น(0) = 1, b) ๐น(−1) = 0, c) ๐น(1) = 2, d) ๐น(−2) = 1, e) ๐น(4) = 1, f) ๐น(−9) = 0
4.4.101: a) 1/2 +
∞
Í
๐‘›=1
๐‘› odd
−4
๐œ‹2 ๐‘› 2
4.4.102: a) cos(2๐‘ก)
b)
4.4.103: a) ๐‘“ (๐‘ก)
4.4.104:
∞
Í
๐‘›=1
−1
๐‘› 2 (1+๐‘› 2 )
∞
Í
∞
Í
b)
−4๐‘›
๐œ‹๐‘› 2 −4๐œ‹
๐‘›=1
๐‘› odd
∞
Í
๐‘›=1
2(−1)๐‘›+1
๐œ‹๐‘›
sin(๐‘›๐‘ก)
sin(๐‘›๐‘ก)
๐‘ก
๐œ‹
4.5.101:
๐‘ฅ=
4.5.102:
๐‘ฅ=
๐‘’ −๐‘›
2
3−(2๐‘›)
๐‘›=1
4.5.103:
๐‘ฅ=
1
√
2 3
๐‘›=1
๐‘›๐œ‹
3 ๐‘ก
b) 0
4.4.105:
+
cos
1
2๐‘› (๐œ‹−๐‘› 2 )
√ 1
2−4๐œ‹2
sin(2๐œ‹๐‘ก) +
∞
Í
+
sin(๐‘›๐‘ก)
∞
Í
√ 0.1
2−100๐œ‹2
cos(10๐œ‹๐‘ก)
cos(2๐‘›๐‘ก)
√−4
2 ๐œ‹2 ( 3−๐‘› 2 ๐œ‹2 )
๐‘›
๐‘›=1
๐‘› odd
cos(๐‘›๐œ‹๐‘ก)
sin
๐‘›๐œ‹
3 ๐‘ก
454
SOLUTIONS TO SELECTED EXERCISES
2
๐‘ก
๐œ‹3
∞
Í
4.5.104:
๐‘ฅ=
4.6.101:
๐‘ข(๐‘ฅ, ๐‘ก) = 5 sin(๐‘ฅ) ๐‘’ −3๐‘ก + 2 sin(5๐‘ฅ) ๐‘’ −75๐‘ก
4.6.102:
๐‘ข(๐‘ฅ, ๐‘ก) = 1 + 2 cos(๐‘ฅ) ๐‘’ −0.1๐‘ก
4.6.103:
๐‘ข(๐‘ฅ, ๐‘ก) = ๐‘’ ๐œ†๐‘ก ๐‘’ ๐œ†๐‘ฅ for some ๐œ†
4.6.104:
๐‘ข(๐‘ฅ, ๐‘ก) = ๐ด๐‘’ ๐‘ฅ + ๐ต๐‘’ ๐‘ก
1
√
2 3
4.6.105: a) 0,
−
sin(๐œ‹๐‘ก) +
๐‘›=3
๐‘› odd
−4
๐‘› 2 ๐œ‹4 (1−๐‘› 2 )
cos(๐‘›๐œ‹๐‘ก)
b) minimum −100, maximum 100,
4.7.101:
๐‘ฆ(๐‘ฅ, ๐‘ก) = sin(๐‘ฅ) sin(๐‘ก) + cos(๐‘ก)
4.7.102:
๐‘ฆ(๐‘ฅ, ๐‘ก) =
4.7.103:
๐‘ฆ(๐‘ฅ, ๐‘ก) =
4.7.104:
๐‘ฆ(๐‘ฅ, ๐‘ก) = sin(2๐‘ฅ) + ๐‘ก sin(๐‘ฅ)
4.8.101:
๐‘ฆ(๐‘ฅ, ๐‘ก) =
c) ๐‘ก =
ln 2
.
4๐œ‹2
1
1
5๐œ‹ sin(๐œ‹๐‘ฅ) sin(5๐œ‹๐‘ก) + 100๐œ‹ sin(2๐œ‹๐‘ฅ) sin(10๐œ‹๐‘ก)
∞
√
Í
2(−1)๐‘›+1
2 ๐‘ก)
sin(๐‘›๐‘ฅ)
cos(๐‘›
๐‘›
๐‘›=1
sin(2๐œ‹(๐‘ฅ−3๐‘ก))+sin(2๐œ‹(3๐‘ก+๐‘ฅ))
2
+
cos(3๐œ‹(๐‘ฅ−3๐‘ก))−cos(3๐œ‹(3๐‘ก+๐‘ฅ))
18๐œ‹
๏ฃฑ
๏ฃด
๐‘ฅ − ๐‘ฅ 2 − 0.04 if 0.2 ≤ ๐‘ฅ ≤ 0.8
๏ฃด
๏ฃฒ
๏ฃด
4.8.102: a) ๐‘ฆ(๐‘ฅ, 0.1) = 0.6๐‘ฅ
b)
๐‘ฆ(๐‘ฅ, 1/2)
= −๐‘ฅ +
if ๐‘ฅ ≥ 0.8
c) ๐‘ฆ(๐‘ฅ, 1) = ๐‘ฅ − ๐‘ฅ 2
๐‘ฅ2
4.8.103: a) ๐‘ฆ(1, 1) = −1/2
4.9.101:
4.9.102:
4.10.101:
๐‘ข(๐‘ฅ, ๐‘ฆ) =
∞
Í
๐‘›=1
1
๐‘›2
∞
Í
sin(๐‘›๐œ‹๐‘ฅ)
๐‘›=1
1 ๐‘›
๐‘Ÿ
๐‘›2
4.10.102: ๐‘ข = 1 − ๐‘ฅ
2
4.10.103: a) ๐‘ข = −1
4 ๐‘Ÿ +
4.10.104:
b) ๐‘ฆ(4, 3) = 0
๐‘ข(๐‘ฅ, ๐‘ฆ) = 0.1 sin(๐œ‹๐‘ฅ)
๐‘ข =1+
1
๐‘ข(๐‘Ÿ, ๐œƒ) =
2๐œ‹
5.1.101: ๐œ†๐‘› =
(2๐‘›−1)๐œ‹
,
2
if ๐‘ฅ ≤ 0.2
๏ฃด
๏ฃด
๏ฃด 0.6 − 0.6๐‘ฅ
๏ฃณ
1
4
∫
c) ๐‘ฆ(3, 9) = 1/2
sinh(๐‘›๐œ‹(1−๐‘ฆ))
sinh(๐‘›๐œ‹)
sinh(๐œ‹(2−๐‘ฆ))
sinh(2๐œ‹)
sin(๐‘›๐œƒ)
b) ๐‘ข =
๐œ‹
−๐œ‹
−1 2
4 ๐‘Ÿ
+
1
4
+ ๐‘Ÿ 2 sin(2๐œƒ)
๐œŒ2 − ๐‘Ÿ 2
๐‘”(๐›ผ) ๐‘‘๐›ผ
๐œŒ − 2๐‘Ÿ๐œŒ cos(๐œƒ − ๐›ผ) + ๐‘Ÿ 2
๐‘› = 1, 2, 3, . . ., ๐‘ฆ๐‘› = cos
(2๐‘›−1)๐œ‹
๐‘ฅ
2
5.1.102: a) ๐‘(๐‘ฅ) = 1, ๐‘ž(๐‘ฅ) = 0, ๐‘Ÿ(๐‘ฅ) = ๐‘ฅ1 , ๐›ผ1 = 1, ๐›ผ2 = 0, ๐›ฝ 1 = 1, ๐›ฝ 2 = 0. The problem is not
regular. b) ๐‘(๐‘ฅ) = 1 + ๐‘ฅ 2 , ๐‘ž(๐‘ฅ) = ๐‘ฅ 2 , ๐‘Ÿ(๐‘ฅ) = 1, ๐›ผ1 = 1, ๐›ผ2 = 0, ๐›ฝ1 = 1, ๐›ฝ 2 = 1. The problem is
regular.
5.2.101:
๐‘ฆ(๐‘ฅ, ๐‘ก) = sin(๐œ‹๐‘ฅ) cos(2๐œ‹2 ๐‘ก)
5.2.102: 9๐‘ฆ ๐‘ฅ๐‘ฅ๐‘ฅ๐‘ฅ + ๐‘ฆ๐‘ก๐‘ก = 0 (0 < ๐‘ฅ < 10, ๐‘ก > 0), ๐‘ฆ(0, ๐‘ก) = ๐‘ฆ ๐‘ฅ (0, ๐‘ก) = 0,
๐‘ฆ ๐‘ฅ (10, ๐‘ก) = 0, ๐‘ฆ(๐‘ฅ, 0) = sin(๐œ‹๐‘ฅ), ๐‘ฆ๐‘ก (๐‘ฅ, 0) = ๐‘ฅ(10 − ๐‘ฅ).
๐‘ฆ(10, ๐‘ก) =
455
SOLUTIONS TO SELECTED EXERCISES
5.3.101:
∞
Í
๐‘ฆ ๐‘ (๐‘ฅ, ๐‘ก) =
๐‘›=1
๐‘› odd
−4
๐‘› 4 ๐œ‹4
cos(๐‘›๐œ‹)−1
sin(๐‘›๐œ‹)
cos(๐‘›๐œ‹๐‘ฅ) −
sin(๐‘›๐œ‹๐‘ฅ) − 1 cos(๐‘›๐œ‹๐‘ก).
5.3.102: Approximately 1991 centimeters
6.1.101:
8
๐‘ 3
8
๐‘ 2
+
+
4
๐‘ 
6.1.102: 2๐‘ก 2 − 2๐‘ก + 1 − ๐‘’ −2๐‘ก
6.1.103:
1
(๐‘ +1)2
6.1.104:
1
๐‘  2 +2๐‘ +2
6.2.101:
๐‘“ (๐‘ก) = (๐‘ก − 1) ๐‘ข(๐‘ก − 1) − ๐‘ข(๐‘ก − 2) + ๐‘ข(๐‘ก − 2)
6.2.102:
๐‘ฅ(๐‘ก) = (2๐‘’ ๐‘ก−1 − ๐‘ก 2 − 1)๐‘ข(๐‘ก − 1) − 12 ๐‘’ −๐‘ก + 32 ๐‘’ ๐‘ก
6.2.103:
๐ป(๐‘ ) =
6.3.101:
1
2 (cos ๐‘ก
1
๐‘ +1
+ sin ๐‘ก − ๐‘’ −๐‘ก )
6.3.102: 5๐‘ก − 5 sin ๐‘ก
6.3.103:
6.3.104:
1
2 (sin ๐‘ก − ๐‘ก cos ๐‘ก)
∫๐‘ก
๐‘“ (๐œ) 1 − cos(๐‘ก
0
− ๐œ) ๐‘‘๐œ
6.4.101:
๐‘ฅ(๐‘ก) = ๐‘ก
6.4.102:
๐‘ฅ(๐‘ก) = ๐‘’ −๐‘Ž๐‘ก
6.4.103:
๐‘ฅ(๐‘ก) = (cos ∗ sin)(๐‘ก) = 21 ๐‘ก sin(๐‘ก)
6.4.104:
๐›ฟ(๐‘ก) − sin(๐‘ก)
6.4.105: 3๐›ฟ(๐‘ก − 1) + 2๐‘ก
6.5.101:
๐‘ฆ = (๐‘ฅ − ๐‘ก)๐‘ข(๐‘ก − ๐‘ฅ) + ๐‘ก
6.5.102:
๐‘ฆ = ๐‘’ −๐‘๐‘ฅ sin(๐‘ก − ๐‘ฅ)๐‘ข(๐‘ก − ๐‘ฅ)
6.5.103:
๐‘  2๐‘Œ(๐‘ฅ) − ๐‘ (1 − ๐‘ฅ 2 ) + 3๐‘Œ 00(๐‘ฅ) + ๐‘Œ(๐‘ฅ) =
6.5.104:
๐‘ฆ = ๐‘ก๐‘ฅ 2 +
๐‘ฅ
๐‘ 
+
1
,
๐‘ 2
๐‘Œ(−1) = 0,
๐‘Œ(1) = 0.
๐‘ก3
3
7.1.101: Yes. Radius of convergence is 10.
7.1.102: Yes. Radius of convergence is ๐‘’.
7.1.103:
1
1−๐‘ฅ
7.1.104:
∞
Í
7.1.105:
๐‘›=7
1
= − 1−(2−๐‘ฅ)
so
1
1−๐‘ฅ
=
∞
Í
๐‘›=0
(−1)๐‘›+1 (๐‘ฅ − 2)๐‘› , which converges for 1 < ๐‘ฅ < 3.
1
๐‘ฅ๐‘›
(๐‘›−7)!
๐‘“ (๐‘ฅ) − ๐‘”(๐‘ฅ) is a polynomial. Hint: Use Taylor series.
๐‘˜−5
7.2.101: ๐‘Ž2 = 0, ๐‘Ž 3 = 0, ๐‘Ž4 = 0, recurrence relation (for ๐‘˜ ≥ 5): ๐‘Ž ๐‘˜ = −2๐‘Ž
, so:
๐‘˜(๐‘˜−1)
๐‘Ž0 5
๐‘Ž 0 10
๐‘Ž0
๐‘Ž1 6
๐‘Ž1 11
๐‘Ž1
15
16
๐‘ฆ(๐‘ฅ) = ๐‘Ž 0 + ๐‘Ž 1 ๐‘ฅ − 10 ๐‘ฅ − 15 ๐‘ฅ + 450 ๐‘ฅ + 825 ๐‘ฅ − 47250 ๐‘ฅ − 99000 ๐‘ฅ + · · ·
456
SOLUTIONS TO SELECTED EXERCISES
๐‘Ž ๐‘˜−3 +1
, so
๐‘˜(๐‘˜−1)
๐‘Ž
+1
๐‘Ž
+2
๐‘Ž
+1
3 5
5 8 ๐‘Ž 0 +3 9 ๐‘Ž 1 +3 10
๐‘ฆ(๐‘ฅ) = ๐‘Ž 0 + ๐‘Ž 1 ๐‘ฅ + 12 ๐‘ฅ 2 + 06 ๐‘ฅ 3 + 112 ๐‘ฅ 4 + 40
๐‘ฅ + 030 ๐‘ฅ 6 + ๐‘Ž142+2 ๐‘ฅ 7 + 112
๐‘ฅ + 72 ๐‘ฅ + 90 ๐‘ฅ + · · ·
1 3
1 4
3 5
1 6
1 7
5 8
1 9
1 10
1 2
๐‘ฅ +···
b) ๐‘ฆ(๐‘ฅ) = 2 ๐‘ฅ + 6 ๐‘ฅ + 12 ๐‘ฅ + 40 ๐‘ฅ + 15 ๐‘ฅ + 21 ๐‘ฅ + 112 ๐‘ฅ + 24 ๐‘ฅ + 30
7.2.102: a) ๐‘Ž 2 = 12 , and for ๐‘˜ ≥ 1 we have ๐‘Ž ๐‘˜ =
7.2.103: Applying the method of this section directly we obtain ๐‘Ž ๐‘˜ = 0 for all ๐‘˜ and so
๐‘ฆ(๐‘ฅ) = 0 is the only solution we find.
7.3.101: a) ordinary, b) singular but not regular singular, c) regular singular, d) regular
singular, e) ordinary.
√
1+ 5
2
+ ๐ต๐‘ฅ
√
1− 5
2
7.3.102:
๐‘ฆ = ๐ด๐‘ฅ
7.3.103:
๐‘ฆ = ๐‘ฅ 3/2
7.3.104:
๐‘ฆ = ๐ด๐‘ฅ + ๐ต๐‘ฅ ln(๐‘ฅ)
∞
Í
๐‘˜=0
(−1)−1 ๐‘˜
๐‘ฅ
๐‘˜! (๐‘˜+2)!
(Note that for convenience we did not pick ๐‘Ž0 = 1.)
8.1.101: a) Critical points (0, 0) and (0, 1). At (0, 0) using ๐‘ข = ๐‘ฅ, ๐‘ฃ = ๐‘ฆ the linearization
is ๐‘ข 0 = −2๐‘ข − (1/๐œ‹)๐‘ฃ, ๐‘ฃ 0 = −๐‘ฃ. At (0, 1) using ๐‘ข = ๐‘ฅ, ๐‘ฃ = ๐‘ฆ − 1 the linearization is
๐‘ข 0 = −2๐‘ข + (1/๐œ‹)๐‘ฃ, ๐‘ฃ 0 = ๐‘ฃ.
b) Critical point (0, 0). Using ๐‘ข = ๐‘ฅ, ๐‘ฃ = ๐‘ฆ the linearization is ๐‘ข 0 = ๐‘ข + ๐‘ฃ, ๐‘ฃ 0 = ๐‘ข.
c) Critical point (1/2, −1/4). Using ๐‘ข = ๐‘ฅ − 1/2, ๐‘ฃ = ๐‘ฆ + 1/4 the linearization is ๐‘ข 0 = −๐‘ข + ๐‘ฃ,
๐‘ฃ 0 = ๐‘ข + ๐‘ฃ.
8.1.102: 1) is c), 2) is a), 3) is b)
8.1.103: Critical points are (0, 0, 0), and (−1, 1, −1). The linearization at the origin using
variables ๐‘ข = ๐‘ฅ, ๐‘ฃ = ๐‘ฆ, ๐‘ค = ๐‘ง is ๐‘ข 0 = ๐‘ข, ๐‘ฃ 0 = −๐‘ฃ, ๐‘ง 0 = ๐‘ค. The linearization at the point
(−1, 1, −1) using variables ๐‘ข = ๐‘ฅ + 1, ๐‘ฃ = ๐‘ฆ − 1, ๐‘ค = ๐‘ง + 1 is ๐‘ข 0 = ๐‘ข − 2๐‘ค, ๐‘ฃ 0 = −๐‘ฃ − 2๐‘ค,
๐‘ค 0 = ๐‘ค − 2๐‘ข.
8.1.104: ๐‘ข 0 = ๐‘“ (๐‘ข, ๐‘ฃ, ๐‘ค), ๐‘ฃ 0 = ๐‘”(๐‘ข, ๐‘ฃ, ๐‘ค), ๐‘ค 0 = 1.
8.2.101: a) (0, 0): saddle (unstable), (1, 0): source (unstable),
b) (0, 0): spiral sink
(asymptotically stable), (0, 1): saddle (unstable),
c) (1, 0): saddle (unstable), (0, 1):
source (unstable)
8.2.102: a) 12 ๐‘ฆ 2 + 13 ๐‘ฅ 3 − 4๐‘ฅ = ๐ถ, critical points: (−2, 0), an unstable saddle, and (2, 0), a
stable center. b) 12 ๐‘ฆ 2 + ๐‘’ ๐‘ฅ = ๐ถ, no critical points. c) 12 ๐‘ฆ 2 + ๐‘ฅ๐‘’ ๐‘ฅ = ๐ถ, critical point at
(−1, 0) is a stable center.
p
8.2.103: Critical point at (0, 0). Trajectories are ๐‘ฆ = ± 2๐ถ − (1/2)๐‘ฅ 4 , for ๐ถ > 0, these give
closed curves around the origin, so the critical point is a stable center.
8.2.104: A critical point ๐‘ฅ0 is stable if ๐‘“ 0(๐‘ฅ0 ) < 0 and unstable when ๐‘“ 0(๐‘ฅ0 ) > 0.
8.3.101: a) Critical points are ๐œ” = 0, ๐œƒ = ๐‘˜๐œ‹ for any integer ๐‘˜. When ๐‘˜ is odd, we have a
saddle point. When ๐‘˜ is even we get a sink. b) The findings mean the pendulum will
simply go to one of the sinks, for example (0, 0) and it will not swing back and forth. The
friction is too high for it to oscillate, just like an overdamped mass-spring system.
8.3.102:
๐‘Ž
a) Solving for the critical points we get (0, − โ„Ž/๐‘‘) and ( ๐‘ โ„Ž+๐‘Ž๐‘‘
๐‘Ž๐‘ , ๐‘ ). The Jacobian
matrix at (0, − โ„Ž/๐‘‘) is
h
๐‘Ž+๐‘ โ„Ž/๐‘‘ 0
−๐‘ โ„Ž/๐‘‘ −๐‘‘
i
whose eigenvalues are ๐‘Ž + ๐‘ โ„Ž/๐‘‘ and −๐‘‘. The eigenvalues
457
SOLUTIONS TO SELECTED EXERCISES
are real of opposite signs and we get a saddle. (In the application, however, we are only
๐‘Ž
looking at the positive quadrant so this critical point is irrelevant.) At ( ๐‘ โ„Ž+๐‘Ž๐‘‘
๐‘Ž๐‘ , ๐‘ ) we get
Jacobian matrix
is
( 550
3 , 40)
๐‘(๐‘ โ„Ž+๐‘Ž๐‘‘)
๐‘Ž๐‘
๐‘ โ„Ž+๐‘Ž๐‘‘
๐‘Ž −๐‘‘
0 −
๐‘Ž๐‘
๐‘
the matrix is
h
. b) For the specific numbers given, the second critical point
0 −11/6
3/25 1/4
i
, which has eigenvalues
√
5±๐‘– 327
.
40
Therefore there is a
spiral source; the solution spirals outwards. The solution eventually hits one of the axes,
๐‘ฅ = 0 or ๐‘ฆ = 0, so something will die out in the forest.
8.3.103: The critical points are on the line ๐‘ฅ = 0. In the positive quadrant the ๐‘ฆ 0 is always
positive and so the fox population always grows. The constant of motion is ๐ถ = ๐‘ฆ ๐‘Ž ๐‘’ −๐‘๐‘ฅ−๐‘ ๐‘ฆ ,
for any ๐ถ this curve must hit the ๐‘ฆ-axis (why?), so the trajectory will simply approach a
point on the ๐‘ฆ axis somewhere and the number of hares will go to zero.
8.4.101: Use Bendixson–Dulac Theorem. a) ๐‘“๐‘ฅ + ๐‘” ๐‘ฆ = 1 + 1 > 0, so no closed trajectories.
b) ๐‘“๐‘ฅ + ๐‘” ๐‘ฆ = − sin2 (๐‘ฆ) + 0 < 0 for all ๐‘ฅ, ๐‘ฆ except the lines given by ๐‘ฆ = ๐‘˜๐œ‹ (where we get
zero), so no closed trajectories. c) ๐‘“๐‘ฅ + ๐‘” ๐‘ฆ = ๐‘ฆ + 0 > 0 for all ๐‘ฅ, ๐‘ฆ except the line given by
๐‘ฆ = 0 (where we get zero), so no closed trajectories.
8.4.102: Using Poincaré–Bendixson Theorem, the system has a limit cycle, which is the
unit circle centered at the origin, as ๐‘ฅ = cos(๐‘ก) + ๐‘’ −๐‘ก , ๐‘ฆ = sin(๐‘ก) + ๐‘’ −๐‘ก gets closer and closer
to the unit circle. Thus ๐‘ฅ = cos(๐‘ก), ๐‘ฆ = sin(๐‘ก) is the periodic solution.
8.4.103: ๐‘“ (๐‘ฅ, ๐‘ฆ) = ๐‘ฆ, ๐‘”(๐‘ฅ, ๐‘ฆ) = ๐œ‡(1 − ๐‘ฅ 2 )๐‘ฆ − ๐‘ฅ. So ๐‘“๐‘ฅ + ๐‘” ๐‘ฆ = ๐œ‡(1 − ๐‘ฅ 2 ). The Bendixson–Dulac
Theorem says there is no closed trajectory lying entirely in the set ๐‘ฅ 2 < 1.
8.4.104: The closed trajectories are those where sin(๐‘Ÿ) = 0, therefore, all the circles centered
at the origin with radius that is a multiple of ๐œ‹ are closed trajectories.
√ √
√
√
8.5.101: Critical points: (0, 0, 0), (3 8, 3 8, 27), (−3 8, −3 8, 27). Linearization at (0, 0, 0)
0
using
10๐‘ฃ, ๐‘ฃ 0 = 28๐‘ข − ๐‘ฃ, ๐‘ค 0 = −(8/3)๐‘ค. Linearization
√ ๐‘ข√= ๐‘ฅ, ๐‘ฃ = ๐‘ฆ, ๐‘ค = ๐‘ง is ๐‘ข√ = −10๐‘ข + √
√ at
0 = −10๐‘ข+10๐‘ฃ, ๐‘ฃ 0 = ๐‘ข−๐‘ฃ−3 8๐‘ค,
(3 8, 3√ 8, 27)√using ๐‘ข = ๐‘ฅ−3 8, ๐‘ฃ = ๐‘ฆ−3 8, ๐‘ค√= ๐‘ง−27
is
๐‘ข
√
√
√
๐‘ค 0 = 3 8๐‘ข +3 8๐‘ฃ −(8/3)๐‘ค. Linearization at (−3
√ 8, −3 8, 27)
√ using ๐‘ข√ = ๐‘ฅ +3 8, ๐‘ฃ = ๐‘ฆ +3 8,
๐‘ค = ๐‘ง − 27 is ๐‘ข 0 = −10๐‘ข + 10๐‘ฃ, ๐‘ฃ 0 = ๐‘ข − ๐‘ฃ + 3 8๐‘ค, ๐‘ค 0 = −3 8๐‘ข − 3 8๐‘ฃ − (8/3)๐‘ค.
√
√
A.1.101: a) 10 b) 14 c) 3
" −1 #
√
2
√1
2
A.1.102: a)
9
A.1.103: a)
−2
A.1.104: a) 20
๏ฃฎ √1 ๏ฃน
๏ฃฏ 6๏ฃบ
๏ฃฏ −1 ๏ฃบ
b) ๏ฃฏ √6 ๏ฃบ
๏ฃฏ2๏ฃบ
๏ฃฏ√ ๏ฃบ
๏ฃฐ 6๏ฃป
−3
b)
3
b) 10
A.1.105: a) (3, −1)
7 4 4
A.2.101: a)
2 3 4
√2 , √−5 , √2
33
33
33
5
c)
−3
c) (−1, −1)
๏ฃฎ 5 −3 0๏ฃน
๏ฃฏ
๏ฃบ
b) ๏ฃฏ๏ฃฏ 13 10 6๏ฃบ๏ฃบ
๏ฃฏ−1 3 1๏ฃบ
๏ฃฐ
๏ฃป
−4
d)
8
c) 20
b) (4, 0)
c)
3
e)
7
−8
f)
3
458
SOLUTIONS TO SELECTED EXERCISES
−1 13
A.2.102: a)
9 14
22 31
A.2.103: a)
42 44
A.2.104: a)
1/2
A.2.105: a)
2 −5
b)
5 5
๏ฃฎ18 18 12 ๏ฃน
๏ฃฏ
๏ฃบ
b) ๏ฃฏ๏ฃฏ 6 0 8 ๏ฃบ๏ฃบ
๏ฃฏ34 48 −2๏ฃบ
๏ฃฐ
๏ฃป
−5 2
c)
3 −1
0 1
b)
1 0
0
0
1/3
1 0 1
A.3.101: a)
0 1 0
๏ฃฎ1 0 −1/2 0 ๏ฃน
๏ฃฏ
๏ฃบ
f) ๏ฃฏ๏ฃฏ0 1 1/2 1/2๏ฃบ๏ฃบ
๏ฃฏ0 0
0
0 ๏ฃบ๏ฃป
๏ฃฐ
๏ฃฎ0 −1
๏ฃฏ
A.3.102: a) ๏ฃฏ๏ฃฏ1 0
๏ฃฏ0 0
๏ฃฐ
๏ฃฎ1/4
๏ฃฏ
b) ๏ฃฏ๏ฃฏ 0
๏ฃฏ0
๏ฃฐ
1/2
−1
A.3.104: a)
3
A.3.105: a) 3
b) 1
0 0 ๏ฃน๏ฃบ
1/5
0 ๏ฃบ๏ฃบ
0 −1๏ฃบ๏ฃป
1 0
b)
0 1
0๏ฃน๏ฃบ
0๏ฃบ๏ฃบ
1๏ฃบ๏ฃป
๏ฃฎ 11 12 36
๏ฃฏ
c) ๏ฃฏ๏ฃฏ−2 4 5
๏ฃฏ 13 38 20
๏ฃฐ
d)
๏ฃฎ−1
๏ฃฏ
๏ฃฏ0
c) ๏ฃฏ๏ฃฏ
๏ฃฏ0
๏ฃฏ0
๏ฃฐ
1 1
c)
0 0
0 0 0 0
g)
0 0 0 0
h)
−3
b)
1
−1/4
−1/2
1/2
0
1/2
0
0
๏ฃฎ−2 −12๏ฃน
๏ฃฏ
๏ฃบ
24 ๏ฃบ๏ฃบ
d) ๏ฃฏ๏ฃฏ 3
๏ฃฏ1
9 ๏ฃบ๏ฃป
๏ฃฐ
14 ๏ฃน๏ฃบ
−2๏ฃบ๏ฃบ
28 ๏ฃบ๏ฃป
1/2
0 0 ๏ฃน๏ฃบ
0 0 ๏ฃบ๏ฃบ
1/3 0 ๏ฃบ
๏ฃบ
0 10๏ฃบ๏ฃป
๏ฃฎ1 0
0 ๏ฃน๏ฃบ
๏ฃฏ
d) ๏ฃฏ๏ฃฏ0 1 −1/3๏ฃบ๏ฃบ
๏ฃฏ0 0
0 ๏ฃบ๏ฃป
๏ฃฐ
๏ฃฎ1 0 0 77/15 ๏ฃน
๏ฃฏ
๏ฃบ
e) ๏ฃฏ๏ฃฏ0 1 0 −2/15๏ฃบ๏ฃบ
๏ฃฏ0 0 1 −8/5 ๏ฃบ
๏ฃฐ
๏ฃป
1 2 3 0
0 0 0 1
๏ฃฎ 5/2
๏ฃฏ
c) ๏ฃฏ๏ฃฏ−1
๏ฃฏ−1
๏ฃฐ
๏ฃฎ0 0 1 ๏ฃน
๏ฃฏ
๏ฃบ
b) ๏ฃฏ๏ฃฏ0 1 −1๏ฃบ๏ฃบ
๏ฃฏ1 −1 0 ๏ฃบ
๏ฃฐ
๏ฃป
A.3.103: a) ๐‘ฅ1 = −2, ๐‘ฅ2 = 7/3
๐‘ฅ1 = −1 + 3๐‘ฅ 3 , ๐‘ฅ 2 = 2 − ๐‘ฅ3
1 −3๏ฃน๏ฃบ
−1/2 3/2 ๏ฃบ
๏ฃบ
0
1 ๏ฃบ๏ฃป
c) ๐‘Ž = −3, ๐‘ = 10, ๐‘ = −8
b) no solution
d) ๐‘ฅ3 is free,
c) 2
A.3.106: a) 1 0 0 , 0 1 0 , 0 0 1
๏ฃฎ7๏ฃน ๏ฃฎ−1๏ฃน
๏ฃฏ ๏ฃบ ๏ฃฏ ๏ฃบ
A.3.107: a) ๏ฃฏ๏ฃฏ7๏ฃบ๏ฃบ , ๏ฃฏ๏ฃฏ 7 ๏ฃบ๏ฃบ ,
๏ฃฏ7๏ฃบ ๏ฃฏ 6 ๏ฃบ
๏ฃฐ ๏ฃป ๏ฃฐ ๏ฃป
๏ฃฎ3๏ฃน ๏ฃฎ0๏ฃน
๏ฃฏ ๏ฃบ ๏ฃฏ ๏ฃบ
A.3.108: ๏ฃฏ๏ฃฏ 1 ๏ฃบ๏ฃบ , ๏ฃฏ๏ฃฏ 3 ๏ฃบ๏ฃบ
๏ฃฏ−5๏ฃบ ๏ฃฏ−1๏ฃบ
๏ฃฐ ๏ฃป ๏ฃฐ ๏ฃป
๏ฃฎ7๏ฃน
๏ฃฏ ๏ฃบ
๏ฃฏ6๏ฃบ
๏ฃฏ ๏ฃบ
๏ฃฏ2๏ฃบ
๏ฃฐ ๏ฃป
๏ฃฎ1๏ฃน
๏ฃฏ ๏ฃบ
b) ๏ฃฏ๏ฃฏ1๏ฃบ๏ฃบ
๏ฃฏ2๏ฃบ
๏ฃฐ ๏ฃป
A.4.101:
1
1
a)
,
dimension 2,
2
1
sion 3,
๏ฃฎ2๏ฃน ๏ฃฎ2๏ฃน
๏ฃฏ ๏ฃบ ๏ฃฏ ๏ฃบ
d) ๏ฃฏ๏ฃฏ2๏ฃบ๏ฃบ , ๏ฃฏ๏ฃฏ2๏ฃบ๏ฃบ dimension 2,
๏ฃฏ4๏ฃบ ๏ฃฏ3๏ฃบ
๏ฃฐ ๏ฃป ๏ฃฐ ๏ฃป
๏ฃฎ0๏ฃน
๏ฃฏ ๏ฃบ
c) ๏ฃฏ๏ฃฏ6๏ฃบ๏ฃบ ,
๏ฃฏ4๏ฃบ
๏ฃฐ ๏ฃป
b) 1 1 1
c) 1 0
1/3
, 0 1
−1/3
๏ฃฎ3๏ฃน
๏ฃฏ ๏ฃบ
๏ฃฏ3๏ฃบ
๏ฃฏ ๏ฃบ
๏ฃฏ7๏ฃบ
๏ฃฐ ๏ฃป
๏ฃฎ1๏ฃน ๏ฃฎ1๏ฃน
๏ฃฎ5๏ฃน ๏ฃฎ 5 ๏ฃน ๏ฃฎ−1๏ฃน
๏ฃฏ ๏ฃบ ๏ฃฏ ๏ฃบ
๏ฃฏ ๏ฃบ ๏ฃฏ ๏ฃบ ๏ฃฏ ๏ฃบ
b) ๏ฃฏ๏ฃฏ1๏ฃบ๏ฃบ , ๏ฃฏ๏ฃฏ1๏ฃบ๏ฃบ dimension 2, c) ๏ฃฏ๏ฃฏ3๏ฃบ๏ฃบ , ๏ฃฏ๏ฃฏ−1๏ฃบ๏ฃบ , ๏ฃฏ๏ฃฏ 3 ๏ฃบ๏ฃบ dimen๏ฃฏ1๏ฃบ ๏ฃฏ2๏ฃบ
๏ฃฏ1๏ฃบ ๏ฃฏ 5 ๏ฃบ ๏ฃฏ−4๏ฃบ
๏ฃฐ ๏ฃป ๏ฃฐ ๏ฃป
๏ฃฐ ๏ฃป ๏ฃฐ ๏ฃป ๏ฃฐ ๏ฃป
๏ฃฎ1๏ฃน ๏ฃฎ0๏ฃน
๏ฃฏ ๏ฃบ ๏ฃฏ ๏ฃบ
1
e)
dimension 1, f) ๏ฃฏ๏ฃฏ0๏ฃบ๏ฃบ , ๏ฃฏ๏ฃฏ1๏ฃบ๏ฃบ dimension 2
1
๏ฃฏ0๏ฃบ ๏ฃฏ2๏ฃบ
๏ฃฐ ๏ฃป ๏ฃฐ ๏ฃป
459
SOLUTIONS TO SELECTED EXERCISES
๏ฃฎ3๏ฃน
๏ฃฏ ๏ฃบ
๏ฃฏ−1๏ฃบ
A.4.102: a) ๏ฃฏ๏ฃฏ ๏ฃบ๏ฃบ ,
๏ฃฏ0๏ฃบ
๏ฃฏ0๏ฃบ
๏ฃฐ ๏ฃป
๏ฃฎ3๏ฃน
๏ฃฏ ๏ฃบ
๏ฃฏ0๏ฃบ
๏ฃฏ ๏ฃบ
๏ฃฏ3๏ฃบ
๏ฃฏ ๏ฃบ
๏ฃฏ−1๏ฃบ
๏ฃฐ ๏ฃป
๏ฃฎ−1๏ฃน
๏ฃฏ ๏ฃบ
b) ๏ฃฏ๏ฃฏ−1๏ฃบ๏ฃบ
๏ฃฏ0๏ฃบ
๏ฃฐ ๏ฃป
A.4.103: a) 3 b) 2 c) 3 d) 2
A.5.101: ๐‘  = −2
A.5.102: ๐œƒ ≈ 0.3876
A.5.103: a) -15 b) -1 c) 28
A.5.104: a) (−1/2, 0, 21 )
b) (0, 0, 0)
๏ฃฎ1๏ฃน
๏ฃฏ ๏ฃบ
c) ๏ฃฏ๏ฃฏ 1 ๏ฃบ๏ฃบ
๏ฃฏ−1๏ฃบ
๏ฃฐ ๏ฃป
๏ฃฎ−1๏ฃน
๏ฃฏ ๏ฃบ
d) ๏ฃฏ๏ฃฏ 0 ๏ฃบ๏ฃบ ,
๏ฃฏ0๏ฃบ
๏ฃฐ ๏ฃป
๏ฃฎ0๏ฃน
๏ฃฏ ๏ฃบ
๏ฃฏ1๏ฃบ
๏ฃฏ ๏ฃบ
๏ฃฏ−1๏ฃบ
๏ฃฐ ๏ฃป
e) 3
c) (2, 0, −2)
A.5.105: a) (1, 1, −1) − (2, −1, 1) + 2(1, −5, 3) b) 2(2, −1, 1) + (1, −5, 3)
2(2, −1, 1) + 2(1, −5, 3)
A.5.106: (2, −1, 1), (2/3 , 8/3 , 4/3)
A.5.107: (1, 1, −1), (0, 1, 1), (4/3, −2/3, 2/3)
A.6.101: a) −2 b) 8 c) 0 d) −6 e) −3 f) 28 g) 16 h) −24
A.6.102: a) 3 b) 9
A.6.103: 12
A.6.104: 1 and 3
c) 3
d) 1/4
c) 2(1, 1, −1) −
460
SOLUTIONS TO SELECTED EXERCISES
Index
absolute convergence, 328
acceleration, 24
adding vectors, 387
addition of matrices, 127
Airy’s equation, 336
algebraic multiplicity, 162
almost linear, 357
amplitude, 98
analytic functions, 330
angular frequency, 98
ansatz, 84
antiderivative, 21
antidifferentiate, 22
associated homogeneous equation, 104, 138
associative law, 400
asymptotically stable, 358
asymptotically stable limit cycle, 374
atan, 99
atan2, 99
attractor, 381
augmented matrix, 132, 408
autonomous, 19
autonomous equation, 51
autonomous system, 124
backsubstitution, 411
basis, 391, 421
beating, 112
Bendixson–Dulac Theorem, 375
Bernoulli equation, 47
Bessel function of second kind, 348
Bessel function of the first kind, 348
Bessel’s equation, 347
boundary conditions for a PDE, 232
boundary value problem, 189
Burger’s equation, 18
cartesian plane, 386
catenary, 14
Cauchy–Euler equation, 82, 89
center, 149, 359
cgs units, 291, 292
chaotic systems, 378
characteristic coordinates, 72, 253
characteristic equation, 85
characteristics, 72
Chebyshev’s equation of order ๐‘, 340
Chebyshev’s equation of order 1, 83
clamped end of beam, 284
closed curves, 362
closed trajectory, 373
coefficients, 18
cofactor, 130, 438
cofactor expansion, 130, 437
column space, 416
column vector, 127, 387
commute, 129
complementary solution, 104
complete eigenvalue, 162
complex conjugate, 143
complex number, 86
complex roots, 87
conservative equation, 361
conservative vector field, 64
consistent, 412
constant coefficient, 19, 84, 137
constant of motion, 370
convection, 73
convection equation, 320
convergence of a power series, 328
convergent power series, 328
converges absolutely, 328
461
462
convolution, 308
corresponding eigenfunction, 190
cosine series, 220
critical point, 51, 353
critically damped, 101
INDEX
elementary row operations, 408
elimination, 407
ellipses (vector field), 149
elliptic PDE, 232
endpoint problem, 189
entry of a matrix, 127, 390
d’Alembert solution to the wave equation, envelope curves, 101
252
equilibrium, 353
damped, 99
equilibrium solution, 51, 353
damped motion, 96
euclidean space, 386
damped nonlinear pendulum equation, 371 Euler’s equation, 82
defect, 163
Euler’s formula, 87
defective eigenvalue, 163
Euler’s method, 57
deficient matrix, 163
Euler–Bernoulli equation, 317
delta function, 314
even function, 202, 218
dependent variable, 10
even periodic extension, 218
determinant, 129, 436
exact equation, 63
diagonal matrix, 153, 403
existence and uniqueness, 30, 80, 90
matrix exponential of, 170
existence and uniqueness for systems, 125
diagonalization, 171
exponential growth, 17
differential equation, 10
exponential growth model, 12
Dirac delta function, 314
exponential of a matrix, 169
direction, 386, 389
exponential order, 296
direction field, 124
extend periodically, 198
Dirichlet boundary conditions, 221, 273
first order differential equation, 10
Dirichlet problem, 259
first order linear equation, 40
displacement vector, 153
first order linear system of ODEs, 136
distance, 24
first order method, 58
distributive law, 400
first order system, 119
divergent power series, 328
first shifting property, 298
dot product, 128, 199, 398
fixed end of beam, 284
Duffing equation, 379
forced motion, 96
dynamic damping, 161
systems, 159
echelon form, 409
four fundamental equations, 13
eigenfunction, 190, 274
Fourier series, 200
eigenfunction decomposition, 273, 278
fourth order method, 60
eigenvalue, 140, 274
Fredholm alternative, 425
eigenvalue of a boundary value problem, 190
simple case, 194
eigenvector, 140
Sturm–Liouville problems, 278
eigenvector decomposition, 179, 186
free end of beam, 284
element of a matrix, 127, 390
free motion, 96
elementary operations, 408
free variable, 133, 412
463
INDEX
frequency, 98
Frobenius method, 345
Frobenius-type solution, 345
fundamental frequency, 205, 248
fundamental matrix, 137
fundamental matrix solution, 137, 170
Gauss–Jordan elimination, 409
Gaussian elimination, 409
general solution, 13
general solution to a system, 120
generalized eigenvectors, 163, 166
generalized function, 314
Genius software, 9
geometric multiplicity, 162
geometric series, 270, 333
Gibbs phenomenon, 205
Gram–Schmidt process, 433
half period, 208
Hamiltonian, 361
harmonic conjugate, 71
harmonic function, 71, 258
harvesting, 53
heat equation, 17, 232
Heaviside function, 294
Hermite’s equation of order ๐‘›, 339
Hermite’s equation of order 2, 83
hinged end of beam, 284
homogeneous, 19
homogeneous equation, 48
homogeneous linear equation, 79
homogeneous matrix equation, 413
homogeneous side conditions, 233
homogeneous system, 137
Hooke’s law, 96, 152
hyperbolic cosine, 14
hyperbolic PDE, 232
hyperbolic sine, 14
identity matrix, 128, 400
ill-posed, 323
imaginary part, 87
implicit solution, 35
Improved Euler’s method, 62
impulse response, 313, 315
inconsistent, 412
inconsistent system, 133
indefinite integral, 21
independent variable, 10
indicial equation, 344
inhomogeneous, 19
initial condition, 13
initial condition for a PDE, 72
initial condition for a system, 120
initial conditions for a PDE, 232
inner product, 128, 398, 428
inner product of functions, 200, 277
integral equation, 305, 310
integrate, 22
integrating factor, 40
integrating factor method, 40
for systems, 177
inverse Laplace transform, 297
invertible matrix, 129, 401
IODE software, 9
isolated critical point, 357
Jacobian matrix, 354
kernel, 413
la vie, 106
Laplace equation, 232, 258
Laplace transform, 293
Laplacian, 258
Laplacian in polar coordinates, 265
leading entry, 133, 409
Leibniz notation, 23, 33
limit cycle, 373
linear combination, 80, 90, 391, 413
linear equation, 18, 40, 79
linear first order system, 120
linear mapping, 390
linear operator, 80, 104
linear PDE, 232
464
linear systems, 120
linearity of the Laplace transform, 295
linearization, 354
linearly dependent, 90, 414
linearly independent, 81, 90, 414
for vector-valued functions, 137
logistic equation, 51
with harvesting, 53
Lorenz attractor, 383
Lorenz system, 382
Lotka–Volterra, 368
lower triangular, 438
magnitude, 387
mass matrix, 153
mathematical model, 12
mathematical solution, 12
matrix, 127, 390
matrix exponential, 169
matrix inverse, 129, 401
matrix product, 399
matrix-valued function, 136
Maxwell’s equations, 17
mechanical vibrations, 17
method of characteristics, 72
Method of Frobenius, 345
method of partial fractions, 297
Mixed boundary conditions, 273
mks units, 99, 102, 227
multiplication of complex numbers, 86
multiplicity, 93
multiplicity of an eigenvalue, 162
๐‘›-dimensional space, 386
natural (angular) frequency, 98
natural frequency, 111, 155
natural mode of oscillation, 155
Neumann boundary conditions, 222, 273
Newton’s law of cooling, 17, 36, 44, 51
Newton’s second law, 96, 97, 122, 152
nilpotent, 171
nonhomogeneous, 19
nonlinear equation, 18
INDEX
normal mode of oscillation, 155
nullity, 423
nullspace, 413
odd function, 201, 218
odd periodic extension, 218
ODE, 11, 17
one-dimensional heat equation, 232
one-dimensional wave equation, 243
operator, 80, 390
order, 18
ordinary differential equation, 11
Ordinary differential equations, 17
ordinary point, 335
orthogonal, 429
functions, 193, 200
vectors, 198
with respect to a weight, 276
orthogonal basis, 431
orthogonal projection, 430
orthogonality, 193
orthonormal basis, 431
overdamped, 100
overtones, 205, 248
parabolic PDE, 232
parallelogram, 130, 436
partial differential equation, 11, 232
Partial differential equations, 17
partial sum, 327
particular solution, 13, 104
PDE, 11, 17, 232
period, 98
periodic, 198
periodic extension, 198
periodic orbit, 374
phase diagram, 52, 352
phase plane portrait, 124
phase portrait, 52, 124, 352
phase shift, 98
Picard’s theorem, 30, 125
piecewise continuous, 211
piecewise smooth, 211
INDEX
pivot, 409
Poincaré section, 380
Poincaré–Bendixson Theorem, 374
Poisson kernel, 269
potential function, 63
power series, 327
practical resonance, 115, 231
practical resonance amplitude, 115
practical resonance frequency, 115
predator-prey, 368
product of matrices, 128, 399
projection, 200
orthogonal, 430
proper rational function, 298
pseudo-frequency, 102
pure resonance, 113, 229
quadratic formula, 85
radius of convergence, 328
rank, 415
ratio test for series, 329
real part, 87
real-world problem, 12
recurrence relation, 336
reduced row echelon form, 133, 409
reduction of order method, 81
regular singular point, 345
regular Sturm–Liouville problem, 275
reindexing the series, 332
relaxation oscillation, 373
repeated roots, 92
resonance, 113, 160, 229, 310
RLC circuit, 96
Robin boundary conditions, 273
root test for series, 329
row echelon form, 409
row reduction, 409
row space, 416
row vector, 127, 391
Runge–Kutta method, 61
saddle point, 148
465
sawtooth, 201
scalar, 127, 388
scalar multiplication, 127
scale a vector, 388
second order differential equation, 14
second order linear differential equation, 79
second order method, 58
second order system, 119
second shifting property, 302
semistable critical point, 53
separable, 33
separation of variables, 234
shifting property, 298, 302
side conditions for a PDE, 232
Sierpinski triangle, 384
simple harmonic motion, 98
simply connected region, 375
sine series, 220
singular matrix, 129, 401
singular point, 335
singular solution, 35
sink, 147
slope field, 27
solution, 10
solution curve, 124
solution to a system, 120
source, 147
span, 415
spectrum, 205, 248
spiral sink, 149
spiral source, 149
square matrix, 391
square wave, 116, 203
stable center, 359
stable critical point, 51, 357
stable node, 147
standard basis vectors, 391
standard inner product, 428
steady periodic solution, 114, 226
steady state temperature, 241, 258
stiff problem, 61
stiffness matrix, 153
466
strange attractor, 380
Sturm–Liouville problem, 274
regular, 275
subspace, 421
subtracting vectors, 388
superposition, 79, 90, 137, 233
symmetric matrix, 193, 198, 404
system of differential equations, 17, 119
Taylor series, 330
tedious, 106, 108, 114, 182, 269, 378
thermal conductivity, 232
three mass system, 152
three-point beam bending, 317
timbre, 248, 284
total derivative, 63
trajectory, 124
transfer function, 304
transformation, 390
transient solution, 114
transport equation, 17, 72, 320
transpose, 128, 404
trigonometric series, 200
undamped, 98
undamped motion, 96
systems, 152
underdamped, 101
undetermined coefficients, 105
for second order systems, 159, 185
for systems, 182
unforced motion, 96
unit step function, 294
unit vector, 389
unstable critical point, 51, 357
unstable node, 147
upper triangular, 438
upper triangular matrix, 164
Van der Pol oscillator, 373
variation of parameters, 107
for systems, 184
vector, 127, 386
INDEX
vector field, 64, 124, 352
vector-valued function, 136, 389
velocity, 24
Volterra integral equation, 310
wave equation, 232, 243, 252
wave equation in 2 dimensions, 17
wavefronts, 256
weight function, 276
Download