Lecture Notes for Zill §11.1
Note: All page, theorem, example and figure numbers refer to the 9th edition of the text. Examples I myself introduce are not
numbered.
We bid a fond farewell to autonomous systems and stability theory and move to a completely different but
extremely important topic or, better, family of topics: Orthogonal functions/Fourier series/Boundary-Value
Problems (a.k.a. “Partial Differential Equations”). Let me just mention for those who may be interested that
there is of course much more to the ideas in Chapter 10 than what we covered; standard references are
Dynamics and Bifurcations by Hale and Koçak, and Differential Equations and Dynamical Systems by Perko;
also, here at UNO we offer a semester-long course on this topic.
Orthogonal Functions
Let us recall some ideas you have seen in Calculus III and in Linear Algebra. The “dot product” of two
vectors in, say, $\mathbb{R}^3$, $\mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2 + u_3 v_3$, is in fact only one example of a structure with which many
vector spaces can be equipped, the generalized inner product. A generalized inner product is any operation that
sends a pair of vectors $\mathbf{u}, \mathbf{v}$ to a real number $(\mathbf{u}, \mathbf{v})$ in a way that satisfies the following properties:
i) $(\mathbf{u}, \mathbf{v}) = (\mathbf{v}, \mathbf{u})$
ii) $(k\mathbf{u}, \mathbf{v}) = k(\mathbf{u}, \mathbf{v})$
iii) $(\mathbf{u}, \mathbf{u}) = 0$ if $\mathbf{u} = \mathbf{0}$ and $(\mathbf{u}, \mathbf{u}) > 0$ if $\mathbf{u} \neq \mathbf{0}$
iv) $(\mathbf{u} + \mathbf{v}, \mathbf{w}) = (\mathbf{u}, \mathbf{w}) + (\mathbf{v}, \mathbf{w})$
We recall also that two vectors $\mathbf{u}$ and $\mathbf{v}$ are orthogonal if $(\mathbf{u}, \mathbf{v}) = 0$. In the vector spaces $\mathbb{R}^2$ or $\mathbb{R}^3$ we
interpret orthogonality in a geometric sense as perpendicularity; in higher dimensions or for other kinds of
vector spaces this physical interpretation generally does not apply.
In particular we consider the (integral) inner product of two functions $f_1$ and $f_2$ defined on an interval $[a, b]$:
$$(f_1, f_2) = \int_a^b f_1(x)\, f_2(x)\, dx.$$
Naturally $f_1$ and $f_2$ are orthogonal on $[a, b]$ if $(f_1, f_2) = \int_a^b f_1(x)\, f_2(x)\, dx = 0$.
This definition of inner product can in fact be applied to any such interval as $[a, b]$, $(a, b)$, $[a, b)$, $(a, b]$,
$(-\infty, \infty)$, $[a, \infty)$, etc., etc., as long as the set of functions $S = \{f_k\}$ under consideration constitutes a vector
space (under addition and scalar multiplication) thereon and each of its members is continuous, and hence integrable, thereon¹. To
simplify the discussion we will generally refer to $f_1$ and $f_2$ as being defined on an interval $[a, b]$.
Example: The functions $f_1(x) = x$ and $f_2(x) = x^2$ are orthogonal on the interval $[-1, 1]$ whereas $f_1(x) = x$
and $f_2(x) = x^3$ are not orthogonal on the interval $[-1, 1]$. (Verify.)
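The verification is a quick direct integration (worked out here for convenience):
$$\int_{-1}^{1} x \cdot x^2\, dx = \frac{x^4}{4}\bigg|_{-1}^{1} = 0, \qquad \int_{-1}^{1} x \cdot x^3\, dx = \frac{x^5}{5}\bigg|_{-1}^{1} = \frac{2}{5} \neq 0.$$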
Example: The functions $f_1(x) = \cos x$ and $f_2(x) = \sin^2 x$ are orthogonal on the interval $[0, \pi]$ whereas this
$f_1(x)$ and $f_2(x)$ are not orthogonal on the interval $[0, \pi/2]$. (Verify.)
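Again, the verification is direct (worked out here for convenience), either by the substitution $u = \sin x$ or by recognizing the antiderivative:
$$\int_{0}^{\pi} \cos x \sin^2 x\, dx = \frac{\sin^3 x}{3}\bigg|_{0}^{\pi} = 0, \qquad \int_{0}^{\pi/2} \cos x \sin^2 x\, dx = \frac{\sin^3 x}{3}\bigg|_{0}^{\pi/2} = \frac{1}{3} \neq 0.$$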
¹ Here we use the notation $\{f_k\}$ to allow the possibility that there may be uncountably many such functions; however, for most of
what we will do the sets of functions will be countable and hence indexed as $\{f_n\} = \{f_0, f_1, f_2, \dots\}$.
We extend the definition of orthogonality to sets or “families” of functions: A set of functions
$S = \{\phi_0(x), \phi_1(x), \phi_2(x), \dots\}$ is orthogonal on $[a, b]$ if $(\phi_m, \phi_n) = \int_a^b \phi_m(x)\,\phi_n(x)\, dx = 0$ for $m \neq n$.
Another idea from Calculus III and Linear Algebra, which we adapt from vector spaces such as $\mathbb{R}^3$, is the
norm or length of a vector. Recall that for a vector $\mathbf{u}$ in, say, $\mathbb{R}^3$ we have $\|\mathbf{u}\| = \sqrt{\mathbf{u} \cdot \mathbf{u}}$ (the norm of $\mathbf{u}$) or,
equivalently, $\|\mathbf{u}\|^2 = \mathbf{u} \cdot \mathbf{u}$ (the square norm of $\mathbf{u}$). We will adapt this definition to our integral inner product,
and define the square norm of a function $\phi_n$ on $[a, b]$ to be $\|\phi_n(x)\|^2 = (\phi_n, \phi_n) = \int_a^b \phi_n^2(x)\, dx$, while the norm
of a function $\phi_n$ on $[a, b]$ is $\|\phi_n(x)\| = \sqrt{\int_a^b \phi_n^2(x)\, dx}$.
If $S = \{\phi_0(x), \phi_1(x), \phi_2(x), \dots\}$ is orthogonal on $[a, b]$ such that $\|\phi_n(x)\| = 1$ for all $n = 0, 1, 2, \dots$ then we
say that $S$ is an orthonormal set on $[a, b]$.
Read Example 2.
Example: (Exercise #8) a) Show that the set $S = \{\cos x, \cos 3x, \cos 5x, \dots\}$ is orthogonal on the interval
$[0, \pi/2]$.
Here we calculate $(\phi_m, \phi_n) = \int_0^{\pi/2} \cos nx \cos mx\, dx$ for $n \neq m$ and $n$ and $m$ both odd. We use the
product-to-sum formulae which you may have seen in a Calculus or Trigonometry course:
$$\sin A \cos B = \frac{1}{2}\left[\sin(A - B) + \sin(A + B)\right]$$
$$\sin A \sin B = \frac{1}{2}\left[\cos(A - B) - \cos(A + B)\right]$$
$$\cos A \cos B = \frac{1}{2}\left[\cos(A - B) + \cos(A + B)\right]$$
(We will make extensive use of these identities in this and the next few sections.)
Thus
$$\int_0^{\pi/2} \cos nx \cos mx\, dx = \int_0^{\pi/2} \frac{1}{2}\left[\cos(n - m)x + \cos(n + m)x\right] dx$$
$$= \frac{1}{2}\left[\int_0^{\pi/2} \cos(n - m)x\, dx + \int_0^{\pi/2} \cos(n + m)x\, dx\right]$$
$$= \frac{1}{2}\left[\frac{1}{n - m}\sin(n - m)x\,\Big|_0^{\pi/2} + \frac{1}{n + m}\sin(n + m)x\,\Big|_0^{\pi/2}\right]$$
Note that since 𝑛 and 𝑚 are both odd, 𝑛 − 𝑚 and 𝑛 + 𝑚 are both even. (Exercise: Show why this is true.) Thus,
since sin(0) = 0, the above becomes
$$= \frac{1}{2}\left[\frac{1}{n - m}\sin\!\left((n - m)\frac{\pi}{2}\right) + \frac{1}{n + m}\sin\!\left((n + m)\frac{\pi}{2}\right)\right] = \frac{1}{2}\left[\frac{1}{n - m}\sin\!\left(k_1\frac{\pi}{2}\right) + \frac{1}{n + m}\sin\!\left(k_2\frac{\pi}{2}\right)\right] = 0,$$
where $k_1 = n - m$ and $k_2 = n + m$ are both even integers, so each sine above is the sine of an integer multiple of $\pi$ and hence vanishes. Thus our family is indeed orthogonal.
b) Find the norm of each function in 𝑆.
Here, using the familiar power-reducing identity you learned in Calculus II we have
$$\|\phi_n(x)\| = \sqrt{\int_0^{\pi/2} \phi_n^2(x)\, dx} = \sqrt{\int_0^{\pi/2} \cos^2 nx\, dx} = \sqrt{\int_0^{\pi/2} \frac{1 + \cos 2nx}{2}\, dx}$$
$$= \sqrt{\frac{1}{2}\left[\,x\,\Big|_0^{\pi/2} + \frac{1}{2n}\sin 2nx\,\Big|_0^{\pi/2}\right]} = \sqrt{\frac{1}{2}\cdot\frac{\pi}{2}} = \frac{\sqrt{\pi}}{2}.$$
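For anyone who wants to double-check parts a) and b) by machine, here is a minimal sketch of my own (assuming SymPy is available) of the same computations:

```python
import sympy as sp

x = sp.symbols('x')

# Integral inner product on [0, pi/2], the interval used in Exercise #8
def inner(f, g):
    return sp.integrate(f * g, (x, 0, sp.pi / 2))

odd = [1, 3, 5, 7]

# Part a): distinct odd multiples give inner product 0 (orthogonality)
for n in odd:
    for m in odd:
        if n < m:
            assert inner(sp.cos(n * x), sp.cos(m * x)) == 0

# Part b): each norm should equal sqrt(pi)/2
for n in odd:
    norm = sp.sqrt(inner(sp.cos(n * x), sp.cos(n * x)))
    assert sp.simplify(norm - sp.sqrt(sp.pi) / 2) == 0
```

Of course this only checks finitely many members of the family; the calculation above handles all odd $n \neq m$ at once.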
Normalization
Any orthogonal set can be converted into an orthonormal set simply by dividing every member of the set by its
norm. Thus the normalized version of the set 𝑆 from the previous example is
$$\left\{\frac{\cos x}{\sqrt{\pi}/2},\; \frac{\cos 3x}{\sqrt{\pi}/2},\; \frac{\cos 5x}{\sqrt{\pi}/2},\; \dots\right\} = \left\{\frac{2\cos x}{\sqrt{\pi}},\; \frac{2\cos 3x}{\sqrt{\pi}},\; \frac{2\cos 5x}{\sqrt{\pi}},\; \dots\right\}.$$
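As a quick check (a remark of my own, using the norm computed above), each member of the normalized set does indeed have norm 1:
$$\int_0^{\pi/2}\left(\frac{2\cos nx}{\sqrt{\pi}}\right)^2 dx = \frac{4}{\pi}\int_0^{\pi/2}\cos^2 nx\, dx = \frac{4}{\pi}\cdot\frac{\pi}{4} = 1.$$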
Continuing with the extension of ideas from vector spaces such as $\mathbb{R}^3$ to vector spaces which are sets of
functions, recall that if, say, three vectors $\mathbf{v}_1$, $\mathbf{v}_2$ and $\mathbf{v}_3$ in $V^3$ (a three-dimensional vector space) are mutually
orthogonal, they form what is known as a basis for $V^3$. That is to say, every vector $\mathbf{u}$ in $V^3$ can be expressed
uniquely as $\mathbf{u} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3$ for real coefficients $c_1$, $c_2$, and $c_3$. How do we obtain these coefficients?
We note that by Axioms (iv) and (ii) for generalized inner products, we can write
$$(\mathbf{u}, \mathbf{v}_1) = (c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3,\; \mathbf{v}_1) = c_1(\mathbf{v}_1, \mathbf{v}_1) + c_2(\mathbf{v}_2, \mathbf{v}_1) + c_3(\mathbf{v}_3, \mathbf{v}_1).$$
Since we assumed that $\mathbf{v}_1$, $\mathbf{v}_2$ and $\mathbf{v}_3$ are mutually orthogonal, $(\mathbf{v}_2, \mathbf{v}_1) = (\mathbf{v}_3, \mathbf{v}_1) = 0$ and so we have
$$(\mathbf{u}, \mathbf{v}_1) = c_1(\mathbf{v}_1, \mathbf{v}_1) = c_1\|\mathbf{v}_1\|^2 \quad\Rightarrow\quad c_1 = \frac{(\mathbf{u}, \mathbf{v}_1)}{\|\mathbf{v}_1\|^2},$$
and similarly $c_2 = \dfrac{(\mathbf{u}, \mathbf{v}_2)}{\|\mathbf{v}_2\|^2}$ and $c_3 = \dfrac{(\mathbf{u}, \mathbf{v}_3)}{\|\mathbf{v}_3\|^2}$. Thus
$$\mathbf{u} = \frac{(\mathbf{u}, \mathbf{v}_1)}{\|\mathbf{v}_1\|^2}\,\mathbf{v}_1 + \frac{(\mathbf{u}, \mathbf{v}_2)}{\|\mathbf{v}_2\|^2}\,\mathbf{v}_2 + \frac{(\mathbf{u}, \mathbf{v}_3)}{\|\mathbf{v}_3\|^2}\,\mathbf{v}_3 = \sum_{n=1}^{3}\frac{(\mathbf{u}, \mathbf{v}_n)}{\|\mathbf{v}_n\|^2}\,\mathbf{v}_n.$$
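For instance, a small numerical illustration of my own (these particular vectors are not from the text): take the mutually orthogonal vectors $\mathbf{v}_1 = (1, 1, 0)$, $\mathbf{v}_2 = (1, -1, 0)$, $\mathbf{v}_3 = (0, 0, 1)$ and $\mathbf{u} = (3, 1, 2)$. Then
$$c_1 = \frac{(\mathbf{u}, \mathbf{v}_1)}{\|\mathbf{v}_1\|^2} = \frac{4}{2} = 2, \qquad c_2 = \frac{(\mathbf{u}, \mathbf{v}_2)}{\|\mathbf{v}_2\|^2} = \frac{2}{2} = 1, \qquad c_3 = \frac{(\mathbf{u}, \mathbf{v}_3)}{\|\mathbf{v}_3\|^2} = \frac{2}{1} = 2,$$
and indeed $2(1, 1, 0) + 1(1, -1, 0) + 2(0, 0, 1) = (3, 1, 2) = \mathbf{u}$.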
Orthogonal Series Expansions
Let’s apply the idea above to an inner product space of functions. Let $S = \{\phi_0(x), \phi_1(x), \phi_2(x), \dots\}$ be an
orthogonal family on $[a, b]$. For some arbitrary function $f$ defined on $[a, b]$, we seek coefficients $c_n$ such that
$$f(x) = c_0\phi_0 + c_1\phi_1 + c_2\phi_2 + \cdots + c_n\phi_n + \cdots = \sum_{n=0}^{\infty} c_n\phi_n.$$
Adapting the $\mathbb{R}^3$ inner product calculation above to our integral inner product, we multiply the above equation
through by $\phi_m(x)$ and integrate from $a$ to $b$ on both sides (term-by-term on the right) to obtain
$$\int_a^b f(x)\phi_m(x)\, dx = c_0\int_a^b \phi_0(x)\phi_m(x)\, dx + c_1\int_a^b \phi_1(x)\phi_m(x)\, dx + \cdots + c_n\int_a^b \phi_n(x)\phi_m(x)\, dx + \cdots$$
(There are obvious issues about convergence of the sum on the right; Jean-Baptiste Joseph Fourier, who
developed this technique, was not concerned about this, and so, neither are we, for now! We will address these
concerns in the next section.)
Writing the right-hand side above with inner product notation gives
$$\int_a^b f(x)\phi_m(x)\, dx = c_0(\phi_0, \phi_m) + c_1(\phi_1, \phi_m) + \cdots + c_n(\phi_n, \phi_m) + \cdots$$
By the fact that $S = \{\phi_0(x), \phi_1(x), \phi_2(x), \dots\}$ is orthogonal on $[a, b]$, all terms on the right are zero except one,
that where $n = m$, so we have
$$\int_a^b f(x)\phi_n(x)\, dx = c_n(\phi_n, \phi_n) = c_n\|\phi_n(x)\|^2$$
and finally
$$c_n = \frac{\int_a^b f(x)\phi_n(x)\, dx}{\int_a^b \phi_n^2(x)\, dx} = \frac{\int_a^b f(x)\phi_n(x)\, dx}{\|\phi_n(x)\|^2}.$$
To summarize and use the notation of Zill, we have a formula for the orthogonal series expansion or Fourier
series for $f(x)$:
$$f(x) = \sum_{n=0}^{\infty} c_n\phi_n(x)$$
for
$$c_n = \frac{\int_a^b f(x)\phi_n(x)\, dx}{\|\phi_n(x)\|^2}.$$
More concisely,
$$f(x) = \sum_{n=0}^{\infty} \frac{(f, \phi_n)}{\|\phi_n(x)\|^2}\,\phi_n(x).$$
Hurrah!
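To see the formula in action, here is a small symbolic sketch of my own (assuming SymPy, and using the hypothetical choice $f(x) = x$ expanded in the orthogonal family from Exercise #8 on $[0, \pi/2]$):

```python
import sympy as sp

x = sp.symbols('x')
a, b = 0, sp.pi / 2          # interval from Exercise #8
f = x                        # hypothetical function to expand

def coefficient(phi):
    """c_n = (f, phi_n) / ||phi_n||^2 with the integral inner product on [a, b]."""
    num = sp.integrate(f * phi, (x, a, b))
    den = sp.integrate(phi**2, (x, a, b))
    return sp.simplify(num / den)

# First few members of the orthogonal family {cos x, cos 3x, cos 5x, ...}
for n in (1, 3, 5):
    print(n, coefficient(sp.cos(n * x)))
```

Whether (and in what sense) the resulting series actually converges back to $f$ is exactly the question deferred to the next section.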
We will now fine-tune the notion of orthogonality a little. What comes next will be referred to in §11.4.
Basically we can insert another (fixed) function into the integral inner product.
Definition: A set of functions $S = \{\phi_0(x), \phi_1(x), \phi_2(x), \dots\}$ is orthogonal with respect to a “weight
function” $\omega$ on $[a, b]$ if $\int_a^b \omega(x)\,\phi_m(x)\,\phi_n(x)\, dx = 0$ when $m \neq n$. In this case the inner products are referred
to as weighted inner products. (Usually $\omega > 0$ on $[a, b]$; letting $\omega = 0$ would be a bit silly, wouldn’t it?)
Because the integrand now involves the product of three rather than two functions, verifying orthogonality
with respect to a weight function can be a somewhat lengthy calculation, hence I will not write out all of the
details for the following example.
Example: (Exercise #14)
Show that the set $\left\{L_0(x) = 1,\; L_1(x) = -x + 1,\; L_2(x) = \tfrac{1}{2}x^2 - 2x + 1\right\}$ is orthogonal with respect to
$\omega(x) = e^{-x}$ on the interval $[0, \infty)$. (Note the half-infinite interval of integration.)
Here we must calculate three weighted integral inner products: $(L_0, L_1)$, $(L_0, L_2)$, and $(L_1, L_2)$. (Note that by
axiom (i) for inner products the order of the pair in the inner product does not matter.)
Since $L_0 \cdot L_1 = L_1$,
$$(L_0, L_1) = \int_0^{\infty} \omega(x)\, L_1(x)\, dx = \int_0^{\infty} e^{-x}(-x + 1)\, dx = 0. \quad\text{(One integration by parts needed.)}$$
Since $L_0 \cdot L_2 = L_2$,
$$(L_0, L_2) = \int_0^{\infty} \omega(x)\, L_2(x)\, dx = \int_0^{\infty} e^{-x}\left(\tfrac{1}{2}x^2 - 2x + 1\right) dx = 0. \quad\text{(Two integrations by parts needed.)}$$
Since $L_1 \cdot L_2 = (-x + 1)\left(\tfrac{1}{2}x^2 - 2x + 1\right) = -\tfrac{1}{2}x^3 + \tfrac{5}{2}x^2 - 3x + 1$,
$$(L_1, L_2) = \int_0^{\infty} \omega(x)\, L_1(x)\, L_2(x)\, dx = \int_0^{\infty} e^{-x}\left(-\tfrac{1}{2}x^3 + \tfrac{5}{2}x^2 - 3x + 1\right) dx = 0. \quad\text{(Three integrations by parts needed.)}$$
Note that the evaluation of the above integrals often involves the use of L'Hôpital's Rule to evaluate limits such
as
$$\lim_{b \to \infty} \left(-x e^{-x}\right)\Big|_0^{b} = -\lim_{b \to \infty} \frac{b}{e^{b}} = -\lim_{b \to \infty} \frac{1}{e^{b}} = 0.$$
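Since the integration-by-parts details are omitted above, a quick symbolic cross-check may be reassuring; this is a sketch of my own, assuming SymPy:

```python
import sympy as sp

x = sp.symbols('x')
w = sp.exp(-x)                                # weight function on [0, oo)
L0, L1, L2 = 1, -x + 1, x**2 / 2 - 2*x + 1    # the three polynomials from Exercise #14

def winner(f, g):
    """Weighted inner product: integral of w*f*g over [0, oo)."""
    return sp.integrate(w * f * g, (x, 0, sp.oo))

# All three weighted inner products should come out to 0
print(winner(L0, L1), winner(L0, L2), winner(L1, L2))
```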
For $\{\phi_0(x), \phi_1(x), \phi_2(x), \dots\}$ orthogonal with respect to a weight function $\omega$ on $[a, b]$, calculations similar to
those for our unweighted inner product representation above yield
$$f(x) = \sum_{n=0}^{\infty} c_n\phi_n(x)$$
for
$$c_n = \frac{\int_a^b f(x)\,\omega(x)\,\phi_n(x)\, dx}{\|\phi_n(x)\|^2},$$
where
$$\|\phi_n(x)\|^2 = \int_a^b \omega(x)\,\phi_n^2(x)\, dx.$$
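For the weighted family of Exercise #14, for instance, the square norm of $L_0$ is easy to compute (a remark of my own):
$$\|L_0(x)\|^2 = \int_0^{\infty} e^{-x} \cdot 1^2\, dx = 1.$$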
Finally, read the discussion of complete sets on pp. 429-430. To summarize, if we let $\{\phi_n(x)\}$ be a subset of a
larger set $S$ such as $S = \{$all functions continuous on $[a, b]\}$, then $\{\phi_n(x)\}$ is said to be complete in $S$ if every
function $f \in S$ can be written as $f(x) = \sum_{n=0}^{\infty} c_n\phi_n(x)$. Will the functions we wish to represent live in sets for
which the orthogonal families used in their representations are complete? We close by quoting Zill: “We
assume for the remainder of the discussion in this chapter that any orthogonal set used in a series expansion of
a function is complete in some class of functions $S$.”