Math Methods for Polymer Science
Lecture 2: Fourier Transforms, Delta Functions and Gaussian Integrals

In the first lecture, we reviewed the Taylor and Fourier series. These were both essentially ways of decomposing a given function into a different, more convenient, or more meaningful form. In this lecture, we review the generalization of the Fourier series to the Fourier transform. In this context, it is also natural to review two special functions, the Dirac delta function and the Gaussian function, as these commonly arise in problems of Fourier analysis and are otherwise essential in polymer physics. For additional reading on Fourier transforms, delta functions and Gaussian integrals, see Chapters 15, 1 and 8 of Arfken and Weber's text, Mathematical Methods for Physicists.

1 Fourier Transforms

Conceptually, the Fourier transform is a straightforward generalization of the Fourier series, which represents a function on a finite domain of size L by an infinite sum over a discrete set of functions, \sin(2\pi n x/L) and \cos(2\pi n x/L). In the Fourier transform, the size of the domain is taken to infinity, so that the domain becomes all positive and negative values of x (see Fig. 1).

The key differences between Taylor series, Fourier series and Fourier transforms are summarized as follows:

Taylor series - series representation in polynomials; a "local" representation.

Fourier series - series representation in sines and cosines; a "global" representation (over a finite or periodic domain).

Fourier transform - integral representation in sines and cosines over an infinite domain (L \to \infty).

How do we generalize a Fourier series to an infinite domain? For the Fourier transform it is more convenient to use the complex representation of sine and cosine:
\[
e^{ix} = \cos x + i \sin x \qquad (1)
\]

Figure 1: Schematic of the Fourier transform of a function f(x), where f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left(\frac{2\pi n}{L}x\right) + \sum_{n=1}^{\infty} b_n \sin\left(\frac{2\pi n}{L}x\right).

Using this we can rewrite the Fourier series:
\[
f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos\left(\frac{2\pi n}{L}x\right) + b_n \sin\left(\frac{2\pi n}{L}x\right) \right]
     = \sum_{n=-\infty}^{\infty} c_n \exp\left(\frac{i 2\pi n x}{L}\right) \qquad (2)
\]
where
\[
c_n = \begin{cases} \dfrac{a_n - i b_n}{2} & n > 0 \\[6pt] \dfrac{a_{|n|} + i b_{|n|}}{2} & n < 0 \end{cases} \qquad (3)
\]
\[
c_0 = \frac{a_0}{2}. \qquad (4)
\]
Notice also that the complex functions e^{i 2\pi n x/L} are orthogonal:
\[
\int_{-L/2}^{L/2} dx \, \exp\left(\frac{i 2\pi n}{L}x\right) \exp\left(\frac{i 2\pi m}{L}x\right)
= \left. \frac{L}{2\pi (m+n) i} \exp\left(\frac{2\pi (m+n) i}{L}x\right) \right|_{-L/2}^{L/2}
= L \, \frac{\sin\left[\pi(m+n)\right]}{\pi(m+n)}
= \begin{cases} 0 & m + n \neq 0 \\ L & m + n = 0 \end{cases} \qquad (5)
\]
Using this, we can extract the Fourier coefficients:
\[
c_n = \frac{1}{L} \int_{-L/2}^{L/2} dx \, \exp\left(-\frac{i 2\pi n}{L}x\right) f(x) \qquad (6)
\]
Note that the complex notation takes care of factors of 2, etc.

We want to define the Fourier transform as the L \to \infty limit of a Fourier series. This limit is unusual because we take L \to \infty while
\[
\lim_{L \to \infty} \frac{2\pi n}{L} \equiv k \quad (\text{finite}). \qquad (7)
\]
Here, k is referred to as the wavenumber of the Fourier mode e^{ikx}. The Fourier transform of a function f(x) is defined as
\[
\tilde{f}(k) = \lim_{L \to \infty} (c_n L) = \int_{-\infty}^{\infty} dx \, e^{-ikx} f(x). \qquad (8)
\]
Often the Fourier transform is written as
\[
\mathcal{F}[f(x)] = \tilde{f}(k) \qquad (9)
\]
where \mathcal{F} means Fourier transform. Notice that the Fourier transform takes f(x), a function of the "real space" variable x, and outputs \tilde{f}(k), a function of the "Fourier space" variable k.

The Fourier transform can be inverted using the definition of the Fourier series:
\[
f(x) = \sum_{n=-\infty}^{\infty} c_n e^{i k(n) x} = \sum_{n=-\infty}^{\infty} \frac{1}{L} (c_n L) e^{i k(n) x}. \qquad (10)
\]
Now as L \to \infty, k takes on a continuum of values:
\[
\lim_{L \to \infty} \Delta k = k(n+1) - k(n) = \frac{2\pi}{L}\left[(n+1) - n\right] = \frac{2\pi}{L} \to 0
\]
That is, \Delta k becomes infinitesimally small in the L \to \infty limit. This means we can write the sum as
\[
\lim_{L \to \infty} \sum_{n=-\infty}^{\infty} \frac{1}{L} = \frac{1}{2\pi} \lim_{\Delta k \to 0} \sum_{n=-\infty}^{\infty} \Delta k = \frac{1}{2\pi} \int_{-\infty}^{\infty} dk \qquad (11)
\]
The last step is Riemann's definition of an integral (see Fig. 2). And thus we get
\[
f(x) = \int_{-\infty}^{\infty} \frac{dk}{2\pi} \, \tilde{f}(k) e^{ikx} \qquad (12)
\]
Eq. (12) is the inverse Fourier transform, or
\[
\mathcal{F}^{-1}\left[\tilde{f}(k)\right] = f(x) \qquad (13)
\]
where \mathcal{F}^{-1} means inverse Fourier transform, \tilde{f}(k) is a function of "Fourier space", and f(x) is a function of "real space".

Figure 2: A figure showing that the Fourier series becomes an integral of the continuous function \tilde{f}(k) in the limit \Delta k \to 0 (the sum of rectangular areas becomes the area under the curve).
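As a quick sanity check of the transform pair, eqs. (8) and (12), the short Python sketch below (not part of the original notes; the test function, grid limits, and resolutions are arbitrary illustrative choices) computes \tilde{f}(k) by direct quadrature and verifies that the inverse transform recovers f(x) to within quadrature error.

    import numpy as np

    # Minimal numerical sketch of the transform pair, eqs. (8) and (12).
    # The test function and grids below are arbitrary illustrative choices.
    x = np.linspace(-30, 30, 4001)
    k = np.linspace(-30, 30, 4001)
    dx, dk = x[1] - x[0], k[1] - k[0]

    f = np.exp(-x**2 / 2)   # a smooth, localized test function

    # Forward transform, eq. (8): f~(k) = int dx e^{-ikx} f(x)
    f_tilde = np.array([np.sum(np.exp(-1j * kk * x) * f) * dx for kk in k])

    # Inverse transform, eq. (12): f(x) = int dk/(2 pi) e^{ikx} f~(k)
    f_back = np.array([np.sum(np.exp(1j * k * xx) * f_tilde) * dk
                       for xx in x]) / (2 * np.pi)

    # Residual is pure quadrature/truncation error; it shrinks as the grids refine.
    print(np.max(np.abs(f_back.real - f)))

The error printed at the end decreases as the x and k ranges are widened and the spacings reduced, which is the numerical counterpart of taking L \to \infty and \Delta k \to 0.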
1.1 Delta Function

Related to the Fourier transform is a special function called the Dirac delta function, \delta(x). Its essential properties can be deduced from the Fourier transform and inverse Fourier transform. Here, we simply insert the definition of the Fourier transform, eq. (8), into the equation for the inverse transform, eq. (12):
\[
f(x) = \int_{-\infty}^{\infty} \frac{dk}{2\pi} \, \tilde{f}(k) e^{ikx}
     = \int_{-\infty}^{\infty} \frac{dk}{2\pi} \, e^{ikx} \int_{-\infty}^{\infty} dx' \, e^{-ikx'} f(x')
     = \int_{-\infty}^{\infty} dx' \left[ \int_{-\infty}^{\infty} \frac{dk}{2\pi} \, e^{ik(x - x')} \right] f(x')
     = \int_{-\infty}^{\infty} dx' \, \delta(x - x') f(x') = f(x) \qquad (14)
\]
This last line defines the properties of the delta function, which is defined implicitly by the integral in brackets on the second line,
\[
\delta(x - x') = \int_{-\infty}^{\infty} \frac{dk}{2\pi} \, e^{ik(x - x')}.
\]
When you integrate the product of the Dirac delta function with another function, it returns the value of that function at the point where the argument of \delta vanishes. Geometrically, you can think of it as an infinitely tall and narrowly peaked function with unit area under the curve (see Fig. 3).

Figure 3: Sketch of a Dirac delta function.

Using this definition of \delta(x) we can derive the Fourier transform of oscillatory functions.

Example 1: Compute the Fourier transform of A \cos(qx).
\[
\tilde{f}(k) = \int_{-\infty}^{\infty} dx \, A \cos(qx) \, e^{-ikx}
= \frac{A}{2} \int_{-\infty}^{\infty} dx \left( e^{iqx} + e^{-iqx} \right) e^{-ikx}
= \frac{A}{2} \int_{-\infty}^{\infty} dx \left[ e^{i(q-k)x} + e^{-i(q+k)x} \right]
= \pi A \left[ \delta(q - k) + \delta(q + k) \right]
\]
which is only non-zero for k = \pm q.

The Fourier transform is particularly useful for studying the properties of a function which is non-zero only over a finite region of space, for example, the density or probability distribution of a polymer chain (see Fig. 4). For this case, we can use a Taylor series expansion to cast light on what the Fourier transform tells us:
\[
\tilde{f}(k) = \int_{-\infty}^{\infty} dx \, e^{-ikx} f(x)
= \int_{-\infty}^{\infty} dx \left[ 1 - ikx - \frac{1}{2!} k^2 x^2 + \frac{i}{3!} k^3 x^3 + \frac{1}{4!} k^4 x^4 + \ldots \right] f(x) \qquad (15)
\]

Figure 4: Plot of f(x), which is only non-zero near x = 0.

Clearly,
\[
\tilde{f}(k = 0) = \int_{-\infty}^{\infty} dx \, f(x) \qquad (16)
\]
which is the total area under the curve, which we name N. But we see from the Taylor series that the various powers of k represent certain averages. That is,
\[
\tilde{f}(k) = N \left[ 1 - ik\langle x \rangle - \frac{1}{2!} k^2 \langle x^2 \rangle + \frac{i}{3!} k^3 \langle x^3 \rangle + \frac{1}{4!} k^4 \langle x^4 \rangle + \ldots \right] \qquad (17)
\]
where
\[
\langle x^n \rangle = \frac{\displaystyle\int_{-\infty}^{\infty} dx \, x^n f(x)}{\displaystyle\int_{-\infty}^{\infty} dx \, f(x)} \qquad (18)
\]
is the average of x^n weighted by f(x). This is what we mean when we say that \tilde{f}(k) is a global representation: \tilde{f}(k) encodes properties of the function over its entire range, not just locally.

Let's try an example. Compute the Fourier transform of
\[
f(x) = \begin{cases} A & |x| \leq a \\ 0 & |x| > a \end{cases}
\]

Figure 5: Plot of f(x).

\[
\tilde{f}(k) = \int_{-\infty}^{\infty} dx \, f(x) e^{-ikx} = A \int_{-a}^{a} dx \, e^{-ikx}
= \frac{A}{ik} \left[ e^{ika} - e^{-ika} \right] = \frac{2A \sin(ka)}{k} \qquad (19)
\]
Now that we have the full expression, let's examine the small-k behavior:
\[
\lim_{k \to 0} \tilde{f}(k) = \frac{2A \sin(ka)}{k} = 2A \, \frac{ka - \frac{1}{3!}(ka)^3 + \ldots}{k}
= 2Aa \left[ 1 - \frac{1}{3!} (ka)^2 + \ldots \right] \qquad (20)
\]
This is just the form we derived above. Note that 2Aa = N (the area), and you can check that
\[
\langle x^2 \rangle = \frac{\displaystyle\int_{-a}^{a} dx \, x^2}{\displaystyle\int_{-a}^{a} dx} = \frac{\frac{1}{3} x^3 \big|_{-a}^{a}}{2a} = \frac{a^2}{3} \qquad (21)
\]
so that N\left(1 - \frac{1}{2}k^2 \langle x^2 \rangle + \ldots\right) = 2Aa\left(1 - \frac{1}{6}(ka)^2 + \ldots\right), in agreement with eq. (20).
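The following short numerical sketch (again not from the notes; the values of A, a, k and the grid are arbitrary illustrative choices) confirms the box-function result, eq. (19), and the small-k moment form of eqs. (17) and (20).

    import numpy as np

    # Numerical check of the box-function transform, eq. (19), and of the
    # small-k moment expansion, eqs. (17) and (20). Illustrative values only.
    A, a = 1.5, 2.0
    x = np.linspace(-a, a, 200001)      # f(x) = A here, zero outside
    dx = x[1] - x[0]
    f = np.full_like(x, A)

    def ft(k):
        # eq. (8), restricted to |x| <= a where f(x) is non-zero
        return np.sum(f * np.exp(-1j * k * x)) * dx

    k = 0.3
    print(ft(k).real, 2 * A * np.sin(k * a) / k)      # eq. (19): should agree

    # Small-k form: f~(k) ~ N (1 - k^2 <x^2> / 2), with N = 2Aa and <x^2> = a^2/3
    N, x2 = 2 * A * a, a**2 / 3
    print(ft(k).real, N * (1 - k**2 * x2 / 2))        # agrees up to O(k^4) terms

The second comparison differs only by the O(k^4) terms dropped from the expansion, so the agreement improves as k is made smaller.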
1.2 Gaussian Integral

Let's use the Fourier transform to study an important function, the Gaussian bump
\[
f(x) = A e^{-x^2 / 2a^2} \qquad (22)
\]
This function is very important in random systems, especially in polymer physics.

Figure 6: Plot of a Gaussian function.

Aside: Integrating a Gaussian function (a trick!).
\[
I = \int_{-\infty}^{\infty} dx \, \exp\left(-\frac{x^2}{2a^2}\right) \qquad (23)
\]
\[
I^2 = \left[ \int_{-\infty}^{\infty} dx \, \exp\left(-\frac{x^2}{2a^2}\right) \right]^2
= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} dx \, dy \, \exp\left(-\frac{x^2 + y^2}{2a^2}\right) \qquad (24)
\]
This double integral is carried out over the whole x-y plane. Let's do the same integral in polar coordinates: r = \sqrt{x^2 + y^2}, x = r\cos\phi and y = r\sin\phi.
\[
I^2 = \int_0^{2\pi} d\phi \int_0^{\infty} dr \, r \exp\left(-\frac{r^2}{2a^2}\right)
= 2\pi \int_0^{\infty} dr \, \frac{d}{dr}\left[ -a^2 \exp\left(-\frac{r^2}{2a^2}\right) \right]
= 2\pi \left[ -a^2 \exp\left(-\frac{r^2}{2a^2}\right) \right]_0^{\infty} = 2\pi a^2 \qquad (25)
\]
thus, I = \sqrt{2\pi}\, a.

Let's go back to the Fourier transform of a Gaussian bump:
\[
\tilde{f}(k) = A \int_{-\infty}^{\infty} dx \, e^{-x^2/2a^2} e^{-ikx} \qquad (26)
\]
This can be done by "completing the square" of the argument of the exponential:
\[
\frac{x^2}{2a^2} + ikx = \frac{1}{2a^2}\left(x + ika^2\right)^2 + \frac{k^2 a^2}{2} \qquad (27)
\]
then
\[
\tilde{f}(k) = A \int_{-\infty}^{\infty} dx \, \exp\left[-\frac{(x + ika^2)^2}{2a^2}\right] \exp\left(-\frac{k^2 a^2}{2}\right)
= A \exp\left(-\frac{k^2 a^2}{2}\right) \int_{-\infty}^{\infty} du \, \exp\left(-\frac{u^2}{2a^2}\right)
= A \sqrt{2\pi}\, a \, \exp\left(-\frac{k^2 a^2}{2}\right) \qquad (28)
\]
Note that this was done by changing variables u = x + ika^2, du = dx.

Once again we can learn something about f(x) by examining the small-k properties of \tilde{f}(k):
\[
\tilde{f}(k) = A a \sqrt{2\pi} \left[ 1 - \frac{k^2}{2!} a^2 + \frac{k^4}{4!} \left(3a^4\right) + \ldots \right] \qquad (29)
\]
There are no odd terms, since \langle x^n \rangle = 0 for n odd because f(x) = f(-x). By taking the Fourier transform of the Gaussian function, we have automatically calculated all the moments, or averages, of the distribution. The most important is the second moment
\[
\langle x^2 \rangle = \frac{\displaystyle\int_{-\infty}^{\infty} dx \, x^2 \exp\left(-\frac{x^2}{2a^2}\right)}{\displaystyle\int_{-\infty}^{\infty} dx \, \exp\left(-\frac{x^2}{2a^2}\right)} = a^2 \qquad (30)
\]
We'll use this extensively.

Finally, we conclude our discussion of Fourier transforms with a discussion of convolution:
\[
(f * g)(x) = \int_{-\infty}^{\infty} dy \, f(y) g(x - y) \qquad (31)
\]
where f * g stands for the convolution of f(x) and g(x), and is itself a function of x only. The convolution theorem says
\[
\mathcal{F}\{f * g\} = \tilde{f}(k)\, \tilde{g}(k) = \mathcal{F}\{f\}\, \mathcal{F}\{g\} \qquad (32)
\]
The Fourier transform of a convolution is the product of the Fourier transforms of f(x) and g(x). This is fairly straightforward to prove: insert
\[
g(x - y) = \int_{-\infty}^{\infty} \frac{dk}{2\pi} \, e^{ik(x - y)} \tilde{g}(k) \qquad (33)
\]
into eq. (31) and take the Fourier transform.

What is this good for? Say you have a density (probability) distribution created by many copies of the same thing (identical proteins floating in solution). For example, a given macromolecule creates a Gaussian electron distribution, \rho_{mol}(x), around its center of mass.

Figure 7: Schematic of an electron density distribution.

The total density is computed by summing the individual distributions around the position of each molecule. With
\[
\rho_{cm}(x) = \delta(x - x_0) + \delta(x - x_1) + \ldots \qquad (34)
\]
where \rho_{cm}(x) is the density of molecular centers of mass, the total density is the convolution
\[
\rho_{tot}(x) = \int_{-\infty}^{\infty} dy \, \rho_{cm}(y)\, \rho_{mol}(x - y) \qquad (35)
\]
Its Fourier transform (the quantity needed for scattering) is then simply the product of the individual Fourier transforms, \tilde{\rho}_{tot}(k) = \tilde{\rho}_{cm}(k)\, \tilde{\rho}_{mol}(k).
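To close, here is a small numerical sketch (not from the notes; A, a, k and the grid are arbitrary illustrative values) that checks the Gaussian transform, eq. (28), and the convolution theorem, eq. (32), by convolving the Gaussian bump with itself on a discrete grid.

    import numpy as np

    # Numerical sketch of the Gaussian transform, eq. (28), and of the
    # convolution theorem, eq. (32), using a Gaussian convolved with itself.
    A, a = 1.0, 2.0
    x = np.linspace(-40, 40, 8001)      # wide enough that f(x) ~ 0 at the edges
    dx = x[1] - x[0]
    f = A * np.exp(-x**2 / (2 * a**2))  # Gaussian bump, eq. (22)

    def ft(values, k):
        # direct quadrature of eq. (8)
        return np.sum(values * np.exp(-1j * k * x)) * dx

    k = 0.7
    print(ft(f, k).real,
          A * np.sqrt(2 * np.pi) * a * np.exp(-k**2 * a**2 / 2))   # eq. (28)

    # Convolution theorem: F{f * f} should equal f~(k)^2.
    conv = np.convolve(f, f, mode="same") * dx   # discrete version of eq. (31)
    print(ft(conv, k).real, (ft(f, k) ** 2).real)

Because the grid is symmetric about x = 0 and the Gaussians decay well inside the box, the discretized convolution lines up with the continuous one and the two printed pairs agree to within discretization error.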