10.5 Fourier Series: Linear Algebra for Functions
This section goes from finite dimensions to infinite dimensions. I want to explain linear
algebra in infinite-dimensional space, and to show that it still works. First step: look back.
This book began with vectors and dot products and linear combinations. We begin by
converting those basic ideas to the infinite case—then the rest will follow.
What does it mean for a vector to have infinitely many components? There are two
different answers, both good:
1. The vector is infinitely long: $v = (v_1, v_2, v_3, \ldots)$. It could be $(1, \frac{1}{2}, \frac{1}{4}, \ldots)$.
2. The vector is a function $f(x)$. It could be $v = \sin x$.
We will go both ways. Then the idea of a Fourier series will connect them.
After vectors come dot products. The natural dot product of two infinite vectors $v = (v_1, v_2, \ldots)$ and $w = (w_1, w_2, \ldots)$ is an infinite series:

Dot product $\qquad v \cdot w = v_1 w_1 + v_2 w_2 + \cdots \qquad (1)$
This brings a new question, which never occurred to us for vectors in $\mathbf{R}^n$. Does this infinite sum add up to a finite number? Does the series converge? Here is the first and biggest difference between finite and infinite.
When $v = w = (1, 1, 1, \ldots)$, the sum certainly does not converge. In that case $v \cdot w = 1 + 1 + 1 + \cdots$ is infinite. Since $v$ equals $w$, we are really computing $v \cdot v = \|v\|^2$, the length squared. The vector $(1, 1, 1, \ldots)$ has infinite length. We don't want that vector. Since we are making the rules, we don't have to include it. The only vectors to be allowed are those with finite length:
DEFINITION The vector $v = (v_1, v_2, \ldots)$ and the function $f(x)$ are in our infinite-dimensional "Hilbert spaces" if and only if their lengths $\|v\|$ and $\|f\|$ are finite:

$\|v\|^2 = v \cdot v = v_1^2 + v_2^2 + v_3^2 + \cdots$ must add to a finite number.

$\|f\|^2 = (f, f) = \int_0^{2\pi} |f(x)|^2 \, dx$ must be a finite integral.
Example 1 The vector $v = (1, \frac{1}{2}, \frac{1}{4}, \ldots)$ is included in Hilbert space, because its length is $2/\sqrt{3}$. We have a geometric series that adds to $4/3$. The length of $v$ is the square root:

Length squared $\qquad v \cdot v = 1 + \dfrac{1}{4} + \dfrac{1}{16} + \cdots = \dfrac{1}{1 - \frac{1}{4}} = \dfrac{4}{3}.$
Question If $v$ and $w$ have finite length, how large can their dot product be?
Answer The sum $v \cdot w = v_1 w_1 + v_2 w_2 + \cdots$ also adds to a finite number. We can safely take dot products. The Schwarz inequality is still true:

Schwarz inequality $\qquad |v \cdot w| \le \|v\| \, \|w\|. \qquad (2)$

The ratio of $v \cdot w$ to $\|v\| \, \|w\|$ is still the cosine of $\theta$ (the angle between $v$ and $w$). Even in infinite-dimensional space, $|\cos \theta|$ is not greater than 1.
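These statements are easy to test numerically. Here is a small Python sketch (the truncation length N and the comparison vector w are illustrative choices, not from the text); the truncated sums settle to the predicted values:

```python
import numpy as np

N = 50                                # truncation length (illustrative choice)
v = 0.5 ** np.arange(N)               # v = (1, 1/2, 1/4, ...)
w = (1.0 / 3.0) ** np.arange(N)       # w = (1, 1/3, 1/9, ...), also finite length

print(v @ v)                          # 1.3333... = 4/3, as in Example 1
print(np.linalg.norm(v))              # 1.1547... = 2/sqrt(3)

# Schwarz inequality (2): |v.w| <= ||v|| ||w||, so |cos(theta)| <= 1
cos_theta = (v @ w) / (np.linalg.norm(v) * np.linalg.norm(w))
print(abs(cos_theta) <= 1.0)          # True
```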
Now change over to functions. Those are the "vectors." The space of functions $f(x)$, $g(x)$, $h(x), \ldots$ defined for $0 \le x \le 2\pi$ must be somehow bigger than $\mathbf{R}^n$. What is the dot product of $f(x)$ and $g(x)$? What is the length of $f(x)$?
Key point in the continuous case: Sums are replaced by integrals. Instead of a sum of $v_j$ times $w_j$, the dot product is an integral of $f(x)$ times $g(x)$. Change the "dot" to parentheses with a comma, and change the words "dot product" to inner product:
DEFINITION The inner product of $f(x)$ and $g(x)$, and the length squared of $f(x)$, are

$(f, g) = \displaystyle\int_0^{2\pi} f(x)\, g(x) \, dx \quad$ and $\quad \|f\|^2 = \displaystyle\int_0^{2\pi} f(x)^2 \, dx. \qquad (3)$
The interval [0, 2π] where the functions are defined could change to a different interval
like [0, 1] or (−∞, ∞). We chose 2π because our first examples are sin x and cos x.
Example 2 The length of $f(x) = \sin x$ comes from its inner product with itself:

$(f, f) = \displaystyle\int_0^{2\pi} (\sin x)^2 \, dx = \pi. \quad$ The length of $\sin x$ is $\sqrt{\pi}$.

That is a standard integral in calculus—not part of linear algebra. By writing $\sin^2 x$ as $\frac{1}{2} - \frac{1}{2} \cos 2x$, we see it go above and below its average value $\frac{1}{2}$. Multiply that average by the interval length $2\pi$ to get the answer $\pi$.
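A quick quadrature check of that integral, as a sketch assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import quad

# length squared of sin x = integral of sin^2 over [0, 2*pi]
length_squared, _ = quad(lambda x: np.sin(x) ** 2, 0, 2 * np.pi)
print(length_squared)               # 3.14159... = pi
print(np.sqrt(length_squared))      # 1.77245... = sqrt(pi), the length of sin x
```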
More important: $\sin x$ and $\cos x$ are orthogonal in function space: $(f, g) = 0$

Inner product is zero $\qquad \displaystyle\int_0^{2\pi} \sin x \cos x \, dx = \int_0^{2\pi} \tfrac{1}{2} \sin 2x \, dx = \left[ -\tfrac{1}{4} \cos 2x \right]_0^{2\pi} = 0. \qquad (4)$
This zero is no accident. It is highly important to science. The orthogonality goes beyond the two functions $\sin x$ and $\cos x$, to an infinite list of sines and cosines. The list contains $\cos 0x$ (which is 1), $\sin x$, $\cos x$, $\sin 2x$, $\cos 2x$, $\sin 3x$, $\cos 3x, \ldots$
Every function in that list is orthogonal to every other function in the list.
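That orthogonality can be verified numerically. A short sketch (assuming NumPy and SciPy; stopping the list at $k = 3$ is an arbitrary choice) integrates every distinct pair over $[0, 2\pi]$:

```python
import numpy as np
from scipy.integrate import quad

# the start of the list: 1, sin x, cos x, sin 2x, cos 2x, sin 3x, cos 3x
basis = [("1", lambda x: 1.0)]
for k in (1, 2, 3):
    basis.append((f"sin {k}x", lambda x, k=k: np.sin(k * x)))
    basis.append((f"cos {k}x", lambda x, k=k: np.cos(k * x)))

# every distinct pair integrates to zero (up to quadrature error)
for i, (name_f, f) in enumerate(basis):
    for name_g, g in basis[i + 1:]:
        inner, _ = quad(lambda x: f(x) * g(x), 0, 2 * np.pi)
        assert abs(inner) < 1e-8, (name_f, name_g)
print("all pairs orthogonal")
```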
Fourier Series
The Fourier series of a function $f(x)$ is its expansion into sines and cosines:

$f(x) = a_0 + a_1 \cos x + b_1 \sin x + a_2 \cos 2x + b_2 \sin 2x + \cdots. \qquad (5)$
We have an orthogonal basis! The vectors in “function space” are combinations of the sines
and cosines. On the interval from x = 2π to x = 4π, all our functions repeat what they did
from 0 to 2π. They are “periodic.” The distance between repetitions is the period 2π.
Remember: The list is infinite. The Fourier series is an infinite series. We avoided the vector $v = (1, 1, 1, \ldots)$ because its length is infinite. Now we avoid a function like $\frac{1}{2} + \cos x + \cos 2x + \cos 3x + \cdots$. (Note: This is $\pi$ times the famous delta function $\delta(x)$. It is an infinite "spike" above a single point. At $x = 0$ its height $\frac{1}{2} + 1 + 1 + \cdots$ is infinite. At all points inside $0 < x < 2\pi$ the series adds in some average way to zero.) The integral of $\delta(x)$ is 1. But $\int \delta^2(x) \, dx = \infty$, so delta functions are not allowed into Hilbert space.
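A small numerical experiment (our own illustrative sketch, not from the text) shows the spike forming: the partial sums blow up at $x = 0$ and stay bounded away from it.

```python
import numpy as np

def partial_sum(x, n):
    """First n cosine terms of 1/2 + cos x + cos 2x + ... at the point x."""
    return 0.5 + sum(np.cos(k * x) for k in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, partial_sum(0.0, n), partial_sum(1.0, n))
# at x = 0 the sum is n + 1/2, blowing up with n;
# at x = 1 it oscillates but stays bounded, averaging toward zero
```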
Compute the length of a typical sum $f(x)$:

$(f, f) = \displaystyle\int_0^{2\pi} (a_0 + a_1 \cos x + b_1 \sin x + a_2 \cos 2x + \cdots)^2 \, dx$

$\phantom{(f, f)} = \displaystyle\int_0^{2\pi} (a_0^2 + a_1^2 \cos^2 x + b_1^2 \sin^2 x + a_2^2 \cos^2 2x + \cdots) \, dx$

$\|f\|^2 = 2\pi a_0^2 + \pi (a_1^2 + b_1^2 + a_2^2 + \cdots). \qquad (6)$

The step from line 1 to line 2 used orthogonality. All products like $\cos x \cos 2x$ integrate to give zero. Line 2 contains what is left—the integrals of each sine and cosine squared. Line 3 evaluates those integrals. (The integral of $1^2$ is $2\pi$, when all other integrals give $\pi$.) If we divide by their lengths, our functions become orthonormal:
$\dfrac{1}{\sqrt{2\pi}}, \ \dfrac{\cos x}{\sqrt{\pi}}, \ \dfrac{\sin x}{\sqrt{\pi}}, \ \dfrac{\cos 2x}{\sqrt{\pi}}, \ldots$ is an orthonormal basis for our function space.
These are unit vectors. We could combine them with coefficients $A_0, A_1, B_1, A_2, \ldots$ to yield a function $F(x)$. Then the $2\pi$ and the $\pi$'s drop out of the formula for length.

Function length = vector length $\qquad \|F\|^2 = (F, F) = A_0^2 + A_1^2 + B_1^2 + A_2^2 + \cdots. \qquad (7)$
Here is the important point, for $f(x)$ as well as $F(x)$. The function has finite length exactly when the vector of coefficients has finite length. Fourier series gives us a perfect match between the Hilbert spaces for functions and for vectors. The function is in $L^2$; its Fourier coefficients are in $\ell^2$.
The function space contains $f(x)$ exactly when the Hilbert space contains the vector $v = (a_0, a_1, b_1, \ldots)$ of Fourier coefficients of $f(x)$. Both must have finite length.
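Equation (6) is easy to confirm for a short sample sum; the coefficients below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.integrate import quad

a0, a1, b1, a2 = 1.0, 2.0, -1.0, 0.5          # arbitrary sample coefficients
f = lambda x: a0 + a1*np.cos(x) + b1*np.sin(x) + a2*np.cos(2*x)

integral, _ = quad(lambda x: f(x) ** 2, 0, 2 * np.pi)
formula = 2*np.pi*a0**2 + np.pi*(a1**2 + b1**2 + a2**2)
print(integral, formula)    # both print 22.776...; the two sides of (6) agree
```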
Example 3 Suppose $f(x)$ is a "square wave," equal to 1 for $0 \le x < \pi$. Then $f(x)$ drops to $-1$ for $\pi \le x < 2\pi$. The $+1$ and $-1$ repeat forever. This $f(x)$ is an odd function like the sines, and all its cosine coefficients are zero. We will find its Fourier series, containing only sines:

Square wave $\qquad f(x) = \dfrac{4}{\pi} \left[ \dfrac{\sin x}{1} + \dfrac{\sin 3x}{3} + \dfrac{\sin 5x}{5} + \cdots \right]. \qquad (8)$

The length of this function is $\sqrt{2\pi}$, because at every point $f(x)^2$ is $(-1)^2$ or $(+1)^2$:

$\|f\|^2 = \displaystyle\int_0^{2\pi} f(x)^2 \, dx = \int_0^{2\pi} 1 \, dx = 2\pi.$
At $x = 0$ the sines are zero and the Fourier series gives zero. This is halfway up the jump from $-1$ to $+1$. The Fourier series is also interesting when $x = \frac{\pi}{2}$. At this point the square wave equals 1, and the sines in (8) alternate between $+1$ and $-1$:

Formula for $\pi$ $\qquad 1 = \dfrac{4}{\pi} \left( 1 - \dfrac{1}{3} + \dfrac{1}{5} - \dfrac{1}{7} + \cdots \right). \qquad (9)$

Multiply by $\pi$ to find a magical formula $\pi = 4 \left( 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots \right)$ for that famous number.
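That series converges slowly, but it does produce $\pi$. A quick numerical check (the number of terms kept is an arbitrary choice):

```python
import numpy as np

n = np.arange(200_000)                  # how many terms to keep (arbitrary)
terms = (-1.0) ** n / (2 * n + 1)       # 1 - 1/3 + 1/5 - 1/7 + ...
print(4 * terms.sum())                  # 3.1415... , creeping toward pi
```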
The Fourier Coefficients
How do we find the $a$'s and $b$'s which multiply the cosines and sines? For a given function $f(x)$, we are asking for its Fourier coefficients $a_k$ and $b_k$:

Fourier series $\qquad f(x) = a_0 + a_1 \cos x + b_1 \sin x + a_2 \cos 2x + \cdots.$
Here is the way to find $a_1$. Multiply both sides by $\cos x$. Then integrate from 0 to $2\pi$. The key is orthogonality! All integrals on the right side are zero, except for $\cos^2 x$:

For coefficient $a_1$ $\qquad \displaystyle\int_0^{2\pi} f(x) \cos x \, dx = \int_0^{2\pi} a_1 \cos^2 x \, dx = \pi a_1. \qquad (10)$
Divide by $\pi$ and you have $a_1$. To find any other $a_k$, multiply the Fourier series by $\cos kx$. Integrate from 0 to $2\pi$. Use orthogonality, so only the integral of $a_k \cos^2 kx$ is left. That integral is $\pi a_k$, and divide by $\pi$:

$a_k = \dfrac{1}{\pi} \displaystyle\int_0^{2\pi} f(x) \cos kx \, dx \quad$ and similarly $\quad b_k = \dfrac{1}{\pi} \displaystyle\int_0^{2\pi} f(x) \sin kx \, dx. \qquad (11)$
The exception is $a_0$. This time we multiply by $\cos 0x = 1$. The integral of 1 is $2\pi$:

Constant term $\qquad a_0 = \dfrac{1}{2\pi} \displaystyle\int_0^{2\pi} f(x) \cdot 1 \, dx = \text{average value of } f(x). \qquad (12)$
I used those formulas to find the Fourier coefficients for the square wave in equation (8). The integral of $f(x) \cos kx$ was zero. The integral of $f(x) \sin kx$ was $4/k$ for odd $k$.
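Formulas (11) and (12) can be checked numerically on the square wave. Here is a sketch assuming SciPy; the `points=[np.pi]` argument tells the quadrature routine where the jump is:

```python
import numpy as np
from scipy.integrate import quad

square = lambda x: 1.0 if x < np.pi else -1.0    # square wave on [0, 2*pi)

for k in range(1, 6):
    ak, _ = quad(lambda x: square(x) * np.cos(k * x), 0, 2*np.pi, points=[np.pi])
    bk, _ = quad(lambda x: square(x) * np.sin(k * x), 0, 2*np.pi, points=[np.pi])
    # a_k = 0; b_k = 4/(pi k) for odd k and 0 for even k, matching (8)
    print(k, ak / np.pi, bk / np.pi)
```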
Compare Linear Algebra in $\mathbf{R}^n$
Infinite-dimensional Hilbert space is very much like the $n$-dimensional space $\mathbf{R}^n$. Suppose the nonzero vectors $v_1, \ldots, v_n$ are orthogonal in $\mathbf{R}^n$. We want to write the vector $b$ (instead of the function $f(x)$) as a combination of those $v$'s:

Finite orthogonal series $\qquad b = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n. \qquad (13)$
Multiply both sides by $v_1^T$. Use orthogonality, so $v_1^T v_2 = 0$. Only the $c_1$ term is left:

Coefficient $c_1$ $\qquad v_1^T b = c_1 v_1^T v_1 + 0 + \cdots + 0. \quad$ Therefore $\ c_1 = v_1^T b / v_1^T v_1. \qquad (14)$

The denominator $v_1^T v_1$ is the length squared, like $\pi$ in equation (11). The numerator $v_1^T b$ is the inner product, like $\int f(x) \cos kx \, dx$. Coefficients are easy to find when the
basis vectors are orthogonal. We are just doing one-dimensional projections, to find the
components along each basis vector.
The formulas are even better when the vectors are orthonormal. Then we have unit vectors in $Q$. The denominators $v_k^T v_k$ are all 1. You know $c_k = v_k^T b$ in another form:

Equation for $c$'s $\qquad c_1 v_1 + \cdots + c_n v_n = b \quad$ or $\quad \begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix} \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} = b.$

$Qc = b$ yields $c = Q^T b$. Row by row this is $c_k = q_k^T b$.
Fourier series is like having a matrix with infinitely many orthogonal columns. Those columns are the basis functions $1, \cos x, \sin x, \ldots$. After dividing by their lengths we have an "infinite orthogonal matrix." Its inverse is its transpose, $Q^T$. Orthogonality is what reduces a series of terms to one single term, when we integrate.
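The finite-dimensional version fits in a few lines of NumPy (an illustrative sketch; here $Q$ comes from the QR factorization of a random matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # orthonormal columns q1..q4

b = rng.standard_normal(4)
c = Q.T @ b                     # c_k = q_k^T b: one projection per basis vector
print(np.allclose(Q @ c, b))    # True: b = c1 q1 + ... + c4 q4
```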
Problem Set 10.5
1. Integrate the trig identity $2 \cos jx \cos kx = \cos(j+k)x + \cos(j-k)x$ to show that $\cos jx$ is orthogonal to $\cos kx$, provided $j \ne k$. What is the result when $j = k$?
2. Show that $1$, $x$, and $x^2 - \frac{1}{3}$ are orthogonal, when the integration is from $x = -1$ to $x = 1$. Write $f(x) = 2x^2$ as a combination of those orthogonal functions.
3. Find a vector $w = (w_1, w_2, w_3, \ldots)$ that is orthogonal to $v = (1, \frac{1}{2}, \frac{1}{4}, \ldots)$. Compute its length $\|w\|$.
4. The first three Legendre polynomials are $1$, $x$, and $x^2 - \frac{1}{3}$. Choose $c$ so that the fourth polynomial $x^3 - cx$ is orthogonal to the first three. All integrals go from $-1$ to $1$.
5. For the square wave $f(x)$ in Example 3 jumping from 1 to $-1$, show that

$\displaystyle\int_0^{2\pi} f(x) \cos x \, dx = 0 \qquad \int_0^{2\pi} f(x) \sin x \, dx = 4 \qquad \int_0^{2\pi} f(x) \sin 2x \, dx = 0.$

Which three Fourier coefficients come from those integrals?
6. The square wave has $\|f\|^2 = 2\pi$. Then (6) gives what remarkable sum for $\pi^2$?
7. Graph the square wave. Then graph by hand the sum of two sine terms in its series, or graph by machine the sum of 2, 3, and 10 terms. The famous Gibbs phenomenon is the oscillation that overshoots the jump (this doesn't die down with more terms).
8. Find the lengths of these vectors in Hilbert space:

(a) $v = \left( \dfrac{1}{\sqrt{1}}, \dfrac{1}{\sqrt{2}}, \dfrac{1}{\sqrt{4}}, \dfrac{1}{\sqrt{8}}, \ldots \right)$
(b) $v = (1, a, a^2, \ldots)$
(c) $f(x) = 1 + \sin x$.
9. Compute the Fourier coefficients $a_k$ and $b_k$ for $f(x)$ defined from 0 to $2\pi$:

(a) $f(x) = 1$ for $0 \le x \le \pi$, $f(x) = 0$ for $\pi < x < 2\pi$
(b) $f(x) = x$.
10. When $f(x)$ has period $2\pi$, why is its integral from $-\pi$ to $\pi$ the same as from 0 to $2\pi$? If $f(x)$ is an odd function, $f(-x) = -f(x)$, show that $\int_{-\pi}^{\pi} f(x) \, dx$ is zero. Odd functions only have sine terms; even functions only have cosines.
11. Using trigonometric identities find the two terms in the Fourier series for $f(x)$:

(a) $f(x) = \cos^2 x$
(b) $f(x) = \cos\left(x + \frac{\pi}{3}\right)$
(c) $f(x) = \sin^3 x$
12. The functions $1, \cos x, \sin x, \cos 2x, \sin 2x, \ldots$ are a basis for Hilbert space. Write the derivatives of those first five functions as combinations of the same five functions. What is the 5 by 5 "differentiation matrix" for these functions?

13. Find the Fourier coefficients $a_k$ and $b_k$ of the square pulse $F(x)$ centered at $x = 0$: $F(x) = 1/h$ for $|x| \le h/2$ and $F(x) = 0$ for $h/2 < |x| \le \pi$. As $h \to 0$, this $F(x)$ approaches a delta function. Find the limits of $a_k$ and $b_k$.
Section 4.1 of Computational Science and Engineering explains the sine series, cosine series, complete series, and complex series $\sum c_k e^{ikx}$ on math.mit.edu/cse. Section 9.3 of this book explains the Discrete Fourier Transform. This is "Fourier series for vectors" and it is computed by the Fast Fourier Transform. That fast algorithm comes quickly from special complex numbers $z = e^{i\theta} = \cos \theta + i \sin \theta$ when the angle is $\theta = 2\pi k/n$.