Fourier Series Part 1: Introduction

Recall Taylor's theorem: Suppose $f(x)$ is a $C^{n+1}([a,b])$ function. Then

$$f(x) = f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n + R_{n+1}(x)$$

where

$$R_{n+1}(x) = \frac{1}{n!}\int_a^x (x-t)^n f^{(n+1)}(t)\,dt$$

In particular, if $f(x)$ is a $C^\infty([a,b])$ function then the Taylor series $T_f(x,a)$ for the function $f$ converges to the function $f$ for all $x \in [a,b]$ if and only if $\lim_{n\to\infty} R_{n+1}(x) = 0$ for all $x \in [a,b]$. As a corollary we see that every $C^{n+1}$ function can be approximated by a polynomial of degree $n$.

Example: Approximate $f(x) = \tan(x^2+1)$ by a second-degree polynomial near the origin.

We know that

$$\tan(x^2+1) \approx f(0) + f'(0)\,x + \frac{f''(0)}{2!}\,x^2$$

where

$$f(0) = \tan(x^2+1)\big|_{x=0}, \qquad f'(0) = 2x\sec^2(x^2+1)\big|_{x=0} = 0,$$
$$f''(0) = \left(2\sec^2(x^2+1) + 8x^2\sec^2(x^2+1)\tan(x^2+1)\right)\big|_{x=0}$$

so that

$$\tan(x^2+1) \approx 1.55741 + \frac{6.85104}{2!}\,x^2 = 1.55741 + 3.42552\,x^2$$

This is nice: if I do not like a function $f(x)$ I can instead approximate it by a polynomial (of degree $n$). So, instead of dealing with the complicated function $f(x) = \tan(x^2+1)$ I can deal with the (much simpler) function $p(x) = 1.55741 + 3.42552\,x^2$. As long as $x$ is close to zero, the center of my series approximation, the difference between the two functions should be small (and I could use the remainder to estimate the maximum error). If I want a better approximation, I could compute additional Taylor coefficients.

But this approach does not work so well if the original function is very "different" from a polynomial. For example, every nonconstant polynomial goes to infinity as $x$ goes to infinity, whereas periodic functions stay bounded as $x$ goes to plus or minus infinity and are therefore very different from polynomials. Thus, I try another approach: if $f$ is periodic, I want to approximate it by sums of periodic functions, like sine and cosine.
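As a quick sanity check (my own addition, plain Python rather than anything from the original notes), we can compare $f(x) = \tan(x^2+1)$ with the second-degree polynomial computed above near $x = 0$:

```python
import math

# Second-degree Taylor polynomial of f(x) = tan(x^2 + 1) about x = 0,
# using the derivatives computed above:
#   f(0)   = tan(1)       (about 1.55741)
#   f'(0)  = 0
#   f''(0) = 2 sec^2(1)   (about 6.85104)
f0 = math.tan(1.0)
f2 = 2.0 / math.cos(1.0) ** 2

def f(x):
    return math.tan(x * x + 1.0)

def taylor2(x):
    # p(x) = 1.55741 + 3.42552 x^2
    return f0 + (f2 / 2.0) * x * x

# Near the origin the polynomial tracks the function closely.
for x in (0.0, 0.05, 0.1):
    print(f"x={x:.2f}  f(x)={f(x):.5f}  p(x)={taylor2(x):.5f}  error={abs(f(x) - taylor2(x)):.1e}")
```

Since $f$ is an even function of $x$, the cubic Taylor coefficient also vanishes, so the error of this quadratic approximation is actually of order $x^4$.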
In other words, I want to write the (periodic) function $f(x)$ on, say, $[-\pi, \pi]$ as follows:

$$f(x) \approx a_0 + \sum_{n=1}^\infty a_n \cos(nx) + \sum_{n=1}^\infty b_n \sin(nx)$$

This type of series is called a Fourier series and the coefficients $a_0$, $a_n$, and $b_n$ are called Fourier coefficients. Note that at this point I do not know whether the function $f$ really has such a series expression, nor do I know whether such a series converges. But we can hope, can't we?

Definition: (Fourier Series) A series of the form

$$a_0 + \sum_{n=1}^\infty a_n \cos(nx) + \sum_{n=1}^\infty b_n \sin(nx)$$

for $-\pi \le x \le \pi$ is called a Fourier series and the coefficients $a_0$, $a_n$, and $b_n$ are called Fourier coefficients.

As mentioned, we don't worry about convergence at this point but simply investigate what would happen if a given function $f(x)$ could be written as a Fourier series. First, some preliminary calculations (for integers $n, m \ge 1$):

- $\int_{-\pi}^{\pi} \sin(nx)\,dx = -\frac{1}{n}\cos(nx)\big|_{-\pi}^{\pi} = 0$ for all $n$
- $\int_{-\pi}^{\pi} \cos(nx)\,dx = \frac{1}{n}\sin(nx)\big|_{-\pi}^{\pi} = 0$ for all $n$
- $\int_{-\pi}^{\pi} \sin(nx)\cos(mx)\,dx = 0$ for all $n, m$
- $\int_{-\pi}^{\pi} \sin(nx)\sin(mx)\,dx = 0$ if $n \ne m$
- $\int_{-\pi}^{\pi} \cos(nx)\cos(mx)\,dx = 0$ if $n \ne m$
- $\int_{-\pi}^{\pi} \sin(nx)\sin(nx)\,dx = \int_{-\pi}^{\pi} \sin^2(nx)\,dx = \pi$
- $\int_{-\pi}^{\pi} \cos(nx)\cos(nx)\,dx = \int_{-\pi}^{\pi} \cos^2(nx)\,dx = \pi$

Now, assuming that

$$f(x) = a_0 + \sum_{n=1}^\infty a_n \cos(nx) + \sum_{n=1}^\infty b_n \sin(nx) \qquad (*)$$

we multiply both sides by $\sin(mx)$ and integrate from $-\pi$ to $\pi$:

$$\int_{-\pi}^{\pi} f(x)\sin(mx)\,dx = \int_{-\pi}^{\pi} a_0 \sin(mx)\,dx + \sum_{n=1}^\infty \int_{-\pi}^{\pi} a_n \cos(nx)\sin(mx)\,dx + \sum_{n=1}^\infty \int_{-\pi}^{\pi} b_n \sin(nx)\sin(mx)\,dx = 0 + 0 + \pi\, b_m$$

because the first integral is zero, and the integrals in the first sum are all zero as well, according to our preliminary calculations above. In the last sum, the only non-zero integral occurs when $n = m$.
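These orthogonality relations are easy to confirm numerically. The following sketch (my own illustration; a simple midpoint quadrature is plenty accurate here because the integrands are trigonometric) checks them for small $n$ and $m$:

```python
import math

PI = math.pi

def integrate(g, a, b, steps=10000):
    # Midpoint-rule quadrature; essentially exact for trigonometric
    # integrands over a full period with this many panels.
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

# Orthogonality relations on [-pi, pi], checked for n, m = 1, 2, 3.
for n in range(1, 4):
    for m in range(1, 4):
        sc = integrate(lambda x: math.sin(n * x) * math.cos(m * x), -PI, PI)
        ss = integrate(lambda x: math.sin(n * x) * math.sin(m * x), -PI, PI)
        cc = integrate(lambda x: math.cos(n * x) * math.cos(m * x), -PI, PI)
        assert abs(sc) < 1e-6                 # sin * cos always integrates to 0
        expected = PI if n == m else 0.0      # pi on the diagonal, 0 otherwise
        assert abs(ss - expected) < 1e-6
        assert abs(cc - expected) < 1e-6
print("orthogonality relations verified")
```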
Next, we multiply both sides of the representation (*) for $f(x)$ by $\cos(mx)$ and integrate again from $-\pi$ to $\pi$:

$$\int_{-\pi}^{\pi} f(x)\cos(mx)\,dx = \int_{-\pi}^{\pi} a_0 \cos(mx)\,dx + \sum_{n=1}^\infty \int_{-\pi}^{\pi} a_n \cos(nx)\cos(mx)\,dx + \sum_{n=1}^\infty \int_{-\pi}^{\pi} b_n \sin(nx)\cos(mx)\,dx = 0 + \pi\, a_m + 0$$

because again most integrals work out to be zero according to our preliminary calculations above. Finally, we just integrate both sides of equation (*) to get:

$$\int_{-\pi}^{\pi} f(x)\,dx = \int_{-\pi}^{\pi} a_0\,dx + \sum_{n=1}^\infty \int_{-\pi}^{\pi} a_n \cos(nx)\,dx + \sum_{n=1}^\infty \int_{-\pi}^{\pi} b_n \sin(nx)\,dx = 2\pi a_0 + 0 + 0$$

Thus, we have proved the following theorem:

Theorem: If $f(x) = a_0 + \sum_{n=1}^\infty a_n \cos(nx) + \sum_{n=1}^\infty b_n \sin(nx)$ then

$$a_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,dx, \qquad a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos(nx)\,dx, \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin(nx)\,dx$$

Example: Find the Fourier series for $f(x) = x$ for $x \in [-\pi, \pi]$.

We use the above formulas to find the Fourier coefficients, assuming that the function can be written as a Fourier series:

$$a_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} x\,dx = 0 \quad\text{and}\quad a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} x\cos(nx)\,dx = 0$$

because the integrands are both odd functions, and

$$b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} x\sin(nx)\,dx = \frac{1}{\pi}\cdot\frac{2\pi}{n}(-1)^{n+1} = \frac{2}{n}(-1)^{n+1}$$

Thus:

$$x = \sum_{n=1}^\infty \frac{2}{n}(-1)^{n+1}\sin(nx)$$

Let's verify this by plotting both functions in one coordinate system (using only the first 10 terms of the series). Note that the Fourier series is actually periodic, as we can see if we plot it over the interval $[-3\pi, 3\pi]$.

Example: Find the Fourier series for $f(x) = x^2$ for $x \in [-\pi, \pi]$.

We again use the above formulas to find the Fourier coefficients. This time we get:

$$a_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} x^2\,dx = \frac{1}{2\pi}\cdot\frac{2}{3}\pi^3 = \frac{\pi^2}{3}$$

$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} x^2\cos(nx)\,dx = \frac{1}{\pi}\cdot\frac{4\pi}{n^2}\cos(n\pi) = \frac{4}{n^2}(-1)^n$$

and

$$b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} x^2\sin(nx)\,dx = 0$$

because this time the last integrand is odd. Thus, we have

$$x^2 = \frac{\pi^2}{3} + \sum_{n=1}^\infty \frac{4}{n^2}(-1)^n\cos(nx)$$

Let's again plot both functions in one coordinate system.
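In place of the plots, here is a small numerical check (plain Python, my own addition; the function names are hypothetical) that the partial sums of the two series above do approach $x$ and $x^2$ in the interior of the interval:

```python
import math

def fourier_x(x, N):
    # N-th partial sum of the Fourier series for f(x) = x on [-pi, pi]:
    #   sum over n of (2/n)(-1)^(n+1) sin(nx)
    return sum(2.0 / n * (-1) ** (n + 1) * math.sin(n * x) for n in range(1, N + 1))

def fourier_x2(x, N):
    # N-th partial sum for f(x) = x^2:
    #   pi^2/3 + sum over n of (4/n^2)(-1)^n cos(nx)
    return math.pi ** 2 / 3 + sum(
        4.0 / n ** 2 * (-1) ** n * math.cos(n * x) for n in range(1, N + 1)
    )

# For x strictly inside (-pi, pi) the partial sums converge to x and x^2.
for x in (-2.0, 0.5, 1.0):
    print(f"x={x:5.2f}  series for x: {fourier_x(x, 200):8.4f}   series for x^2: {fourier_x2(x, 200):8.4f}")
```

Note that the $x^2$ series converges noticeably faster (coefficients decay like $1/n^2$ rather than $1/n$), which reflects the fact that the periodic extension of $x^2$ is continuous while that of $x$ jumps at $\pm\pi$.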
Example: Now we will try to approximate a function that is not continuous by a Fourier series. Let

$$f(x) = \begin{cases} \pi, & 0 < x \le \pi \\ 0, & -\pi \le x \le 0 \end{cases}$$

and find once again the Fourier series of this function. Of course we compute:

$$a_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,dx = \frac{1}{2\pi}\int_0^{\pi} \pi\,dx = \frac{\pi^2}{2\pi} = \frac{\pi}{2}$$

$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos(nx)\,dx = \frac{1}{\pi}\int_0^{\pi} \pi\cos(nx)\,dx = 0$$

$$b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin(nx)\,dx = \frac{1}{\pi}\int_0^{\pi} \pi\sin(nx)\,dx = \frac{1}{n}\left(1 - \cos(n\pi)\right) = \frac{1}{n}\left(1 - (-1)^n\right)$$

so that

$$f(x) = \frac{\pi}{2} + \sum_{n=1}^\infty \frac{1}{n}\left(1 - (-1)^n\right)\sin(nx) = \frac{\pi}{2} + \sum_{k=0}^\infty \frac{2}{2k+1}\sin((2k+1)x)$$

Note on using Mathematica

So, finding Taylor series involves taking lots of derivatives, while finding Fourier series involves many integration problems. Mathematica can be very helpful for this and can, in fact, often solve the problem completely. Here are some steps for using Mathematica to find the Fourier series of, say,

$$f(x) = \begin{cases} -x, & x < 0 \\ x, & x \ge 0 \end{cases}$$

Step 1: Define the function and plot its graph.

Step 2: Define the coefficients of the Fourier series. Note that these coefficients could be simplified manually, since $\sin(n\pi) = 0$ and $\cos(n\pi) = (-1)^n$, but we don't have to do that for Mathematica.

Step 3: Verify the Fourier series by plotting the 10-th partial sum (say) together with the original function in one coordinate system.

Exercises:

1. Assuming that the functions below have Fourier series expansions, find them:

   a. $f(x) = |x|$

   b. $f(x) = \begin{cases} 0, & -\pi \le x < 0 \\ x, & 0 \le x \le \pi \end{cases}$

   c. $f(x) = \begin{cases} 2x, & -\pi \le x < 0 \\ 10x, & 0 \le x \le \pi \end{cases}$

2. Assuming that the function $f(x) = \begin{cases} -1, & -\pi \le x < 0 \\ 1, & 0 \le x \le \pi \end{cases}$ has a Fourier series, find it and use it to approximate the original function by plotting the function together with the $N$-th partial Fourier sums for $N = 1, 2, 4, 5, 10, 20$, and $100$. Describe in words how the $N$-th partial Fourier sum approximates the original function as $N$ increases.

3. Use a Taylor series and a Fourier series to approximate the function $f(x) = e^x$, $x \in [-\pi, \pi]$. Which one works better? How about for $f(x) = \sin(5.3\,x)$?
4. Prove that if $f(x)$ has a Fourier series and $f(x)$ is an odd function on $[-\pi, \pi]$, then $f(x) = \sum_{n=1}^\infty b_n \sin(nx)$, i.e. $a_n = 0$ for all $n$. What about even functions?
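As a supplement to the Note on using Mathematica above, the same three steps can be sketched in Python for readers without Mathematica (a hypothetical stand-in of my own, using numerical rather than symbolic integration), applied to the function $f(x) = |x|$ from the note:

```python
import math

PI = math.pi

def integrate(g, a, b, steps=20000):
    # Midpoint-rule quadrature; accurate for these piecewise-smooth integrands.
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

# Step 1: define the function (plotting omitted here).
def f(x):
    return abs(x)

# Step 2: compute the Fourier coefficients by numerical integration,
# using the formulas from the theorem above.
a0 = integrate(f, -PI, PI) / (2 * PI)
a = [integrate(lambda x, n=n: f(x) * math.cos(n * x), -PI, PI) / PI for n in range(1, 11)]
b = [integrate(lambda x, n=n: f(x) * math.sin(n * x), -PI, PI) / PI for n in range(1, 11)]

# Step 3: evaluate the 10-th partial sum and compare with f.
def partial_sum(x):
    return a0 + sum(
        a[n - 1] * math.cos(n * x) + b[n - 1] * math.sin(n * x) for n in range(1, 11)
    )

print("a0 =", a0)                                 # should be pi/2
print("largest |b_n| =", max(abs(v) for v in b))  # ~0, since |x| is even
print("f(1) =", f(1.0), " 10-th partial sum =", partial_sum(1.0))
```

Since $|x|$ is even, all $b_n$ vanish (compare exercise 4), and the numerically computed $a_n$ match the closed forms one gets by hand: $a_0 = \pi/2$ and $a_n = \frac{2}{\pi n^2}\left((-1)^n - 1\right)$.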