14. The Heisenberg Uncertainty Principle

14.1. The uncertainty inequality

If $f(t)$ is a function that "lives" in some region of time – a signal that dies away as $t$ tends to $\pm\infty$ – then we can look at it and tell, in some crude sense, when this signal occurs. Similarly, we can look at its Fourier transform $\hat f(\nu)$ and in nearly the same way tell just what its frequency is. Of course, we can't specify the "when" perfectly, since the signal lasts for some length of time; we can't specify the frequency exactly, for very nearly the same reasons. We have seen that a function that is very narrow in time has a Fourier transform that is very wide in frequency, and vice versa. There seems to be some kind of reciprocal relationship between the widths (the uncertainties) of the time and the frequency. The quantitative version of this is the following inequality.

The Heisenberg Uncertainty Theorem: Suppose that $f$ is a function in $L^2$ of the line with all of the following properties: $\|f(t)\|_2 = 1$, $t f(t) \in L^2$, and $\nu \hat f(\nu) \in L^2$. Define the following numbers:
\[
\text{The average value of } t:\quad m_t = \int_{-\infty}^{\infty} t\,|f(t)|^2\,dt
\]
\[
\text{The uncertainty of } t:\quad \sigma_t = \left(\int_{-\infty}^{\infty} (t - m_t)^2\,|f(t)|^2\,dt\right)^{1/2}
\]
\[
\text{The average value of } \nu:\quad m_\nu = \int_{-\infty}^{\infty} \nu\,|\hat f(\nu)|^2\,d\nu
\]
\[
\text{The uncertainty of } \nu:\quad \sigma_\nu = \left(\int_{-\infty}^{\infty} (\nu - m_\nu)^2\,|\hat f(\nu)|^2\,d\nu\right)^{1/2}
\]
Then
\[
\sigma_t\,\sigma_\nu \ge \frac{1}{4\pi} \tag{14.1}
\]

The fact that $\|f(t)\|_2 = 1$ – that is, $\int_{-\infty}^{\infty} |f(t)|^2\,dt = 1$ – makes the function $|f(t)|^2$ a probability density function for $t$ considered as a random variable. Given that, the definitions of the average value of $t$ and the uncertainty of $t$ are the standard probabilistic definitions of mean and standard deviation. Plancherel's Theorem now tells us that $\int_{-\infty}^{\infty} |\hat f(\nu)|^2\,d\nu$ also equals 1, so that $|\hat f(\nu)|^2$ is also a probability density function for $\nu$ as a random variable, and we can compute its mean and standard deviation. The use of the square of a function, rather than the function itself, as the probability density function is justified in a course on quantum mechanics, but it makes some sense to us here – how else would we get a positive function?

We will do the proof twice. The first time, we will need to assume that both the average time and the average frequency are zero. The second time, we will be both clever and careful enough to take care of the general case.

Case 1: assume that both $m_t$ and $m_\nu$ are zero. Then
\[
1 = \int_{-\infty}^{\infty} |f(t)|^2\,dt
\]
Integrate this by parts, letting
\[
u = |f(t)|^2 = f(t)\overline{f(t)} \quad\text{so that}\quad du = \left(f'(t)\overline{f(t)} + f(t)\overline{f'(t)}\right)dt = 2\,\mathrm{Re}\!\left(f(t)\overline{f'(t)}\right)dt
\]
\[
dv = dt \quad\text{so that}\quad v = t
\]
The boundary terms of this integration by parts will have $uv = t\,|f(t)|^2$; the integrability of $t^2|f(t)|^2$ pretty much forces this to go to zero as $t \to \pm\infty$, at least in any meaningful sense, so these boundary terms contribute zero. We have
\[
1 = \int_{-\infty}^{\infty} |f(t)|^2\,dt = -2\,\mathrm{Re}\int_{-\infty}^{\infty} t\,f(t)\overline{f'(t)}\,dt
\]
\[
\le 2\left|\int_{-\infty}^{\infty} t\,f(t)\overline{f'(t)}\,dt\right| \qquad\text{since } |\mathrm{Re}\,z| \le |z|
\]
\[
\le 2\left(\int_{-\infty}^{\infty} |t f(t)|^2\,dt\right)^{1/2}\left(\int_{-\infty}^{\infty} |f'(t)|^2\,dt\right)^{1/2} \tag{14.2}
\]
The last line above is justified by the Cauchy-Schwarz inequality (12.19). The first factor of (14.2) is just what we are looking for – it is $\sigma_t$, since $m_t = 0$. To understand the second factor, we apply Plancherel's Theorem to it, so that it becomes the $L^2$ norm of the Fourier transform of $f'$, which by (7.12) is $2\pi i\nu\,\hat f(\nu)$. Thus
\[
1 \le 2\left(\int_{-\infty}^{\infty} t^2|f(t)|^2\,dt\right)^{1/2}\left(\int_{-\infty}^{\infty} |2\pi i\nu\,\hat f(\nu)|^2\,d\nu\right)^{1/2}
= 4\pi\left(\int_{-\infty}^{\infty} t^2|f(t)|^2\,dt\right)^{1/2}\left(\int_{-\infty}^{\infty} \nu^2|\hat f(\nu)|^2\,d\nu\right)^{1/2}
= 4\pi\,\sigma_t\,\sigma_\nu
\]
Dividing, we get
\[
\sigma_t\,\sigma_\nu \ge \frac{1}{4\pi} \qquad\text{q.e.d.}
\]
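Before moving on to the general case, it may help to see the Case 1 computation carried out numerically for one concrete function. The sketch below (Python with NumPy; the grid, step size, and variable names are illustrative choices of ours, not part of the text) checks the identity $1 = -2\,\mathrm{Re}\int t f(t)\overline{f'(t)}\,dt$ and the bound (14.2) for the normalized Gaussian $f(t) = 2^{1/4}e^{-\pi t^2}$, approximating all integrals by simple Riemann sums.

    import numpy as np

    # Numerical check of the Case 1 computation for a concrete f with m_t = m_nu = 0.
    # We use the normalized Gaussian f(t) = 2^(1/4) exp(-pi t^2); the grid and step
    # size are illustrative choices, and the integrals are approximated by Riemann sums.
    t = np.arange(-20.0, 20.0, 1e-4)
    dt = t[1] - t[0]
    f = 2**0.25 * np.exp(-np.pi * t**2)     # real-valued, with ||f||_2 = 1
    f_prime = -2 * np.pi * t * f            # derivative of this particular f

    lhs = np.sum(np.abs(f)**2) * dt                      # integral of |f|^2, should be 1
    middle = -2 * np.sum(t * f * f_prime) * dt           # -2 Re of the integral of t f conj(f')
    rhs = 2 * np.sqrt(np.sum((t * f)**2) * dt) * np.sqrt(np.sum(f_prime**2) * dt)  # right side of (14.2)

    print(lhs, middle, rhs)   # all approximately 1.0

That the two sides of (14.2) agree here is not an accident: as Section 14.2 points out, Gaussians are exactly the functions for which the inequality becomes an equality.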
Case 2: the general case. We modify the proof of Case 1 in two stages. In the first, we let $g(t)$ be the function given by $g(t) = e^{-2\pi i m_\nu t} f(t)$. The important thing to notice is that $|f(t)| = |g(t)|$ for all $t$. That means that
\[
1 = \int_{-\infty}^{\infty} |f(t)|^2\,dt = \int_{-\infty}^{\infty} |g(t)|^2\,dt
\]
The second modification is to do the integration by parts just slightly differently, by choosing $v = t - m_t$ instead of $t$. This leads to the following modified version of (14.2):
\[
1 = -2\,\mathrm{Re}\int_{-\infty}^{\infty} (t - m_t)\,g(t)\overline{g'(t)}\,dt
\le 2\left(\int_{-\infty}^{\infty} (t - m_t)^2|g(t)|^2\,dt\right)^{1/2}\left(\int_{-\infty}^{\infty} |g'(t)|^2\,dt\right)^{1/2} \tag{14.3}
\]
The first factor of (14.3) is now just what we need, since we can replace $|g(t)|$ by $|f(t)|$, which makes it $\sigma_t$. We work on the second. By Plancherel's Theorem,
\[
\left(\int_{-\infty}^{\infty} |g'(t)|^2\,dt\right)^{1/2} = \left(\int_{-\infty}^{\infty} |2\pi i\nu\,\hat g(\nu)|^2\,d\nu\right)^{1/2}
\]
But from (7.7), $\hat g(\nu) = \hat f(\nu + m_\nu)$, so
\[
\left(\int_{-\infty}^{\infty} |2\pi i\nu\,\hat g(\nu)|^2\,d\nu\right)^{1/2}
= 2\pi\left(\int_{-\infty}^{\infty} \nu^2|\hat f(\nu + m_\nu)|^2\,d\nu\right)^{1/2}
= 2\pi\left(\int_{-\infty}^{\infty} (\nu - m_\nu)^2|\hat f(\nu)|^2\,d\nu\right)^{1/2}
\]
by a change of variables. Finally,
\[
1 \le 4\pi\left(\int_{-\infty}^{\infty} (t - m_t)^2|f(t)|^2\,dt\right)^{1/2}\left(\int_{-\infty}^{\infty} (\nu - m_\nu)^2|\hat f(\nu)|^2\,d\nu\right)^{1/2} = 4\pi\,\sigma_t\,\sigma_\nu
\]
We divide to get
\[
\sigma_t\,\sigma_\nu \ge \frac{1}{4\pi} \qquad\text{q.e.d.}
\]

14.2. Examples of the inequality

Inequality (14.1) is definitely an inequality and not an equality. To show this, we need only let $f$ be a box function, in which case the time uncertainty is finite but the frequency uncertainty is infinite. It is also sharp, meaning it cannot be improved. We know this since there exist extremal functions: functions for which the inequality does turn out to be an equality. All Gaussians (all normal distributions) are such extremal functions. The following table lists these and a few other examples.

Table 14.4
\[
\begin{array}{c|c|c|c|c}
f(t) & \hat f(\nu) & \sigma_t & \sigma_\nu & \sigma_t\sigma_\nu \\
\hline
\begin{cases} \dfrac{1}{\sqrt{2}} & \text{if } -1 \le t \le 1 \\ 0 & \text{otherwise} \end{cases}
& \dfrac{1}{\sqrt{2}}\,\dfrac{\sin 2\pi\nu}{\pi\nu}
& \dfrac{1}{\sqrt{3}} & \infty & \infty \\[3ex]
\begin{cases} \sqrt{\tfrac{3}{2}}\,(1 - |t|) & \text{if } -1 \le t \le 1 \\ 0 & \text{otherwise} \end{cases}
& \sqrt{\tfrac{3}{2}}\,\dfrac{1 - \cos 2\pi\nu}{2\pi^2\nu^2}
& \dfrac{1}{\sqrt{10}} & \dfrac{\sqrt{3}}{2\pi} & \dfrac{1}{4\pi}\sqrt{\dfrac{6}{5}} \\[3ex]
\dfrac{\sqrt{2}}{\sqrt{\pi}\,(1 + t^2)}
& \sqrt{2\pi}\,e^{-2\pi|\nu|}
& 1 & \dfrac{1}{2\sqrt{2}\,\pi} & \dfrac{\sqrt{2}}{4\pi} \\[3ex]
\sqrt[4]{2}\,e^{-\pi t^2}
& \sqrt[4]{2}\,e^{-\pi\nu^2}
& \dfrac{1}{2\sqrt{\pi}} & \dfrac{1}{2\sqrt{\pi}} & \dfrac{1}{4\pi}
\end{array}
\]
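As a rough consistency check on one row of Table 14.4 (this sketch is an illustration of ours, not part of the original text), the following Python code computes $\sigma_t$ and $\sigma_\nu$ for the triangle function by Riemann sums, using the closed-form transform listed in the table; the grids and the truncation of the frequency integral are arbitrary illustrative choices.

    import numpy as np

    # Check the triangle-function row of Table 14.4 numerically:
    # sigma_t should be 1/sqrt(10) and sigma_nu should be sqrt(3)/(2*pi).
    # Grids and truncation limits are illustrative choices; integrals are Riemann sums.
    t = np.arange(-1.0, 1.0, 1e-5)
    dt = 1e-5
    f = np.sqrt(1.5) * (1 - np.abs(t))                     # f(t) on [-1, 1], zero elsewhere
    sigma_t = np.sqrt(np.sum(t**2 * f**2) * dt)            # m_t = 0 by symmetry

    nu = np.arange(-200.0, 200.0, 1e-3) + 0.5e-3           # offset grid avoids nu = 0 exactly
    dnu = 1e-3
    f_hat = np.sqrt(1.5) * (1 - np.cos(2 * np.pi * nu)) / (2 * np.pi**2 * nu**2)
    sigma_nu = np.sqrt(np.sum(nu**2 * f_hat**2) * dnu)     # m_nu = 0 by symmetry

    print(sigma_t, 1 / np.sqrt(10))                        # both about 0.316
    print(sigma_nu, np.sqrt(3) / (2 * np.pi))              # both about 0.276
    print(sigma_t * sigma_nu, 1 / (4 * np.pi))             # about 0.0872 versus the bound 0.0796

The product exceeds $1/(4\pi)$, as it must for any function that is not a Gaussian.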
14.3. An application to music

It might be interesting to try to apply this principle to music: why is it that bass instruments always seem to play their notes more slowly than treble instruments? In a Sousa march, why do the tubas play those solid, slow notes while the piccolos get to play fast and fancy runs and trills? Part of the reason is style and taste, but part of the reason is the physics of the uncertainty principle.

Consider trying to trill the two lowest notes on a piano. These notes are an A whose frequency should be approximately 27.5 Hz and an A# whose frequency should be approximately 29.1 Hz. At least, those would be the frequencies if we could hold the notes for an infinitely long time; in practice, there will be an uncertainty in the frequencies. The difference between the frequencies of these two notes is 1.6 Hz. If the uncertainty of each frequency were as large as that – as large as 1.6 Hz – then our perceptions of the two notes would overlap somewhat. We wouldn't quite be completely sure that these were two different notes. Setting a criterion for separation is an arbitrary matter, but just for the sake of the argument, let me say that we would cleanly perceive these as two separate notes if the uncertainty of the frequency of each were no more than half of the difference – in this case, 0.8 Hz. In that case, the Heisenberg uncertainty theorem tells us that the smallest possible value of the uncertainty in the time is
\[
\frac{1}{4\pi\,(0.8\ \mathrm{Hz})} \approx 0.1 \text{ second.}
\]
But this minimum is for perfectly shaped functions, such as Gaussians. For a function shaped like the actual sound created by a hammer striking a piano string – with a sudden attack – the product is likely to be bigger. The duration of the note will be more than twice the uncertainty, since the uncertainty is a measure from the center outwards. Thus each note needs to last for perhaps a quarter of a second, and we won't be able to play more than about 4 of these notes a second – and that's awfully slow for a "trill". In fact, this calculation is quite generous – if you try to play these two notes on a piano even as fast as four notes per second, you're likely to consider the resulting sound to be a muddy mess.

If we try the same calculation for middle C and the C# immediately above it, the frequency difference is now 15.6 Hz (from 261.6 Hz to 277.2 Hz). Using the same criterion for separation, we get that the uncertainty in time must be at least 0.01 seconds – a tenth the size of the previous calculation. This means that we can play ten times as fast – perhaps as many as 40 notes per second. Since no one's fingers can move fast enough to play 40 notes per second, we can play notes near middle C as fast as is humanly possible without really encountering the Heisenberg limitation.

14.4. Exercises

Activities:

14.1. Find a piano. Try trilling the bottom two notes (the A and the A#). Listen to what it sounds like. Now repeat with middle C and the C# immediately above it.

14.2. As long as you have a piano, try this exercise from the textbook Waves: Berkeley Physics Course – Volume 3, by Frank S. Crawford, Jr. (McGraw-Hill, 1965): Hold down the damper pedal. Shout "hey" into the region of the strings and sounding board. Listen. Shout "ooh." Try all vowels. The piano strings are picking up (in somewhat distorted form) and preserving the Fourier analysis of your voice! Notice that the recognizable vowel sound persists for several seconds. What does that tell you about the importance to your ear and brain of the relative phases of the Fourier components that make up the sound? Now clap your hands. The handclap has a very short duration (that is, a very small time uncertainty), so it should have a large frequency uncertainty. What do you hear back from the piano?
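If you would like to redo the arithmetic of Section 14.3 for other pairs of notes before (or after) trying Activity 14.1, here is a small Python sketch of ours; the note frequencies are the approximate values quoted above, and the "half the frequency difference" separation criterion is the same arbitrary choice made in the text.

    from math import pi

    # Back-of-envelope bound from Section 14.3: take the allowed frequency uncertainty
    # to be half the difference between the two note frequencies (the separation
    # criterion chosen in the text), then bound the time uncertainty below via (14.1).
    def min_time_uncertainty(freq1_hz, freq2_hz):
        sigma_nu_max = abs(freq2_hz - freq1_hz) / 2
        return 1 / (4 * pi * sigma_nu_max)

    for name, f1, f2 in [("lowest A / A#", 27.5, 29.1), ("middle C / C#", 261.6, 277.2)]:
        print(f"{name}: time uncertainty at least {min_time_uncertainty(f1, f2):.3f} s")

    # Prints roughly 0.099 s for the low pair and 0.010 s for the middle-C pair,
    # matching the estimates in Section 14.3.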