18.303 Problem Set 6 Solutions
Problem 1: Distributions
Let f(x) = 1/√x for x > 0, and f(x) = 0 for x ≤ 0.
(a) Explain why f defines a regular distribution, even though f (x) blows up as x → 0+ .
(b) Let g(x) = −1/(2x^{3/2}) for x > 0, and g(x) = 0 for x ≤ 0: g(x) matches the ordinary derivative f′(x) everywhere f′(x) is defined (i.e. everywhere but x = 0). Explain why g(x) does not correspond to any regular distribution.
(c) Viewed as a distribution, f must have a derivative. Give an explicit formula for f′{φ} in terms of an integral of φ(x) − φ(0) (not φ′). Hint: f{φ} = lim_{ε→0⁺} ∫_ε^∞ φ(x)/√x dx (why does this limit exist?), and integrate by parts using φ′(x) = d/dx [φ(x) − φ(0)]. How is this different from trying to define a distribution directly from g(x)?
(d) Give a similar formula for f″{φ} in terms of φ(x) − ⋯ (no φ′ or φ″), and compare to the 18.01 f″(x) (which exists for x ≠ 0 only).
Problem 1: (5+5+10+5 points)
(a) As pointed out in the notes, for any f(x) to define a regular distribution, it is necessary and sufficient for ∫_a^b |f(x)| dx to be finite for any [a, b]. Here, even though f(x) diverges as x → 0⁺, the integral of f(x) does not: ∫_a^b dx/√x = 2√x |_a^b, which is finite for any a, b ≥ 0 (and f = 0 for x < 0, so negative a, b are not a problem).
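A quick numerical sanity check (not part of the original solutions): the integral ∫_a^1 dx/√x stays bounded as a → 0⁺ even though the integrand blows up.

```python
import numpy as np

# Part (a) check: f(x) = 1/sqrt(x) blows up as x -> 0+, yet its integral over
# [a, 1] (antiderivative 2*sqrt(x)) stays bounded as a -> 0+.
def trap(y, x):
    # simple trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

vals = []
for a in (1e-1, 1e-2, 1e-3):
    x = np.linspace(a, 1.0, 200001)
    vals.append(trap(1.0 / np.sqrt(x), x))
# vals increase toward 2 = 2*sqrt(1), matching the exact 2*sqrt(1) - 2*sqrt(a)
```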
(b) In this case, the integral does diverge: if a, b ≥ 0, then ∫_a^b |g(x)| dx = −1/√x |_a^b, which blows up as a → 0⁺. Therefore, if you have a test function which does not go to zero as x → 0, ∫ g(x)φ(x) dx won’t exist.
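The divergence is easy to see numerically (a small check, not part of the original solutions):

```python
import math

# Part (b) check: |g(x)| = x^(-3/2)/2 is not locally integrable at 0:
# int_a^1 |g| dx = 1/sqrt(a) - 1, which grows without bound as a -> 0+.
divergent = [1.0 / math.sqrt(a) - 1.0 for a in (1e-2, 1e-4, 1e-6)]
# divergent == [9.0, 99.0, 999.0] up to roundoff
```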
(c) As a distribution, f{φ} = lim_{ε→0⁺} ∫_ε^∞ φ(x)/√x dx, where the limit exists since, for small x, the integrand is ≈ φ(0)/√x (since φ is continuous), which is a finite integral from above. The definition of the distributional derivative is

f′{φ} = f{−φ′} = lim_{ε→0⁺} ∫_ε^∞ −φ′(x)/√x dx
      = lim_{ε→0⁺} ∫_ε^∞ (−1/√x) (d/dx)[φ(x) − φ(0)] dx
      = lim_{ε→0⁺} (−1/√x)[φ(x) − φ(0)] |_ε^∞ + lim_{ε→0⁺} ∫_ε^∞ (−1/(2x^{3/2})) [φ(x) − φ(0)] dx
      = lim_{ε→0⁺} ∫_ε^∞ g(x) [φ(x) − φ(0)] dx = ∫_{0⁺}^∞ g(x) [φ(x) − φ(0)] dx,

where in the second line we have used the hint, in the third line we have integrated by parts, and in the fourth line we have used the fact that φ(x) vanishes at ∞ (by definition of the space of test functions) and that

[φ(ε) − φ(0)]/√ε = √ε · [φ(ε) − φ(0)]/ε → √ε φ′(0) → 0 as ε → 0⁺

(by virtue of φ being differentiable). By the same token, the integrand g(x)[φ(x) − φ(0)] is asymptotically proportional to φ′(0)/(2√x) in the limit of small x, and hence has a finite integral. This is not a regular distribution—it is not an integral of φ(x) multiplied by any function—it is a singular distribution. It is, however, the same as the naive integration against g(x) for any φ(x) with φ(0) = 0.
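The integration by parts can be verified numerically (a sketch, not part of the original solutions, using the rapidly decaying trial function φ(x) = exp(−(x−1)²) rather than a compactly supported one):

```python
import numpy as np

# Part (c) check: compare the two expressions
#   f'{phi} = lim_{eps->0+} int_eps^inf -phi'(x)/sqrt(x) dx   and
#   f'{phi} = int_{0+}^inf g(x) [phi(x) - phi(0)] dx,  g(x) = -x^(-3/2)/2.
# The substitution x = u^2 removes the integrable singularity at x = 0, and
# the x > 64 tail of the second integral (where phi ~ 0) is added analytically.
phi = lambda x: np.exp(-(x - 1.0) ** 2)
dphi = lambda x: -2.0 * (x - 1.0) * np.exp(-(x - 1.0) ** 2)
phi0 = float(phi(0.0))

u = np.linspace(1e-8, 8.0, 400001)
trap = lambda y: float(np.sum((y[1:] + y[:-1]) * (u[1] - u[0])) / 2)

lhs = trap(-2.0 * dphi(u ** 2))                 # -phi'(x)/sqrt(x) dx, x = u^2
core = trap(-(phi(u ** 2) - phi0) / u ** 2)     # g(x)[phi - phi(0)] dx, x = u^2
rhs = core + phi0 / np.sqrt(8.0 ** 2)           # analytic tail: phi0/sqrt(X)
# lhs ~ rhs, as the integration by parts predicts
```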
(d) To get f″ as a distribution, we use the definition f″{φ} = f′{−φ′} = f{φ″}. Exploiting our result from the previous part, we have

f″{φ} = f′{−φ′} = lim_{ε→0⁺} ∫_ε^∞ g(x) [−φ′(x) + φ′(0)] dx
      = −lim_{ε→0⁺} ∫_ε^∞ g(x) (d/dx)[φ(x) − φ(0) − φ′(0)x] dx
      = −lim_{ε→0⁺} g(x) [φ(x) − φ(0) − φ′(0)x] |_ε^∞ + lim_{ε→0⁺} ∫_ε^∞ g′(x) [φ(x) − φ(0) − φ′(0)x] dx
      = ∫_{0⁺}^∞ (3/(4x^{5/2})) [φ(x) − φ(0) − φ′(0)x] dx.

In the fourth line, we can eliminate the boundary term similar to the previous part: at ∞, φ = 0, whereas in the ε → 0⁺ limit we can use the finite-difference approximations (from the fact that φ is infinitely differentiable):

φ(ε) − φ(0) = φ′(ε/2)ε + O(ε³) ⟹ φ(ε) − φ(0) − φ′(0)ε = ([φ′(ε/2) − φ′(0)]/(ε/2)) ε²/2 + O(ε³) = φ″(0)ε²/2 + O(ε³)

and hence

g(ε) [φ(ε) − φ(0) − φ′(0)ε] = −(φ″(0)/4)√ε + O(ε^{3/2}) → 0.

For the same reasons, the resulting integral is well-defined, since the integrand is proportional to 1/√x for small x.

Many of you were probably tempted to use a simpler derivation: Taylor-expand φ(x) around φ(0), and it follows that φ(x) − φ(0) − φ′(0)x = φ″(0)x²/2 + O(x³), at which point you get a zero limit as above. This is the right idea in spirit, but technically it is problematic because the test functions are required to be infinitely differentiable but are not required to have a convergent Taylor series (they may not be “analytic”).¹

We see that the result is the same as the 18.01 derivative (1/√x)″ = 3/(4x^{5/2}) for x > 0 if we have a test function such that φ(0) and φ′(0) are both zero. Otherwise, we get a regularized integral (a “Hadamard finite part”) by essentially subtracting off the singular terms from the test function.
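As a numerical sanity check (a sketch, not part of the original solutions, again with the rapidly decaying trial function φ(x) = exp(−(x−1)²)), the Hadamard finite-part formula should agree with f{φ″} = ∫ φ″(x)/√x dx:

```python
import numpy as np

# Part (d) check: compare f{phi''} = int_0^inf phi''(x)/sqrt(x) dx with the
# finite-part formula int_0^inf (3/(4 x^{5/2})) [phi - phi(0) - phi'(0) x] dx.
# Substituting x = u^2 tames the singularity; a small cutoff u0 and an
# analytic large-x tail keep the quadrature well behaved.
phi = lambda x: np.exp(-(x - 1.0) ** 2)
d2phi = lambda x: (4.0 * (x - 1.0) ** 2 - 2.0) * np.exp(-(x - 1.0) ** 2)
phi0, dphi0 = float(phi(0.0)), 2.0 * np.exp(-1.0)  # phi(0) and phi'(0)

u = np.linspace(0.005, 8.0, 800001)
trap = lambda y: float(np.sum((y[1:] + y[:-1]) * (u[1] - u[0])) / 2)

lhs = trap(2.0 * d2phi(u ** 2))                 # phi''(x)/sqrt(x) dx, x = u^2
core = trap(1.5 * (phi(u ** 2) - phi0 - dphi0 * u ** 2) / u ** 4)
X = 8.0 ** 2                                    # analytic tail for x > X, phi ~ 0
tail = -(phi0 / 2.0) * X ** -1.5 - 1.5 * dphi0 / np.sqrt(X)
rhs = core + tail
# lhs and rhs agree to a couple of digits, as the derivation predicts
```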
Problem 2: (5+5+10+10+5 points)
Recall that the displacement u(x, t) of a stretched string [with fixed ends: u(0, t) = u(L, t) = 0] satisfies the wave equation ∂²u/∂x² + f(x, t) = ∂²u/∂t², where f(x, t) is an external force density (pressure) on the string.
(a) Suppose that ũ solves ∂²ũ/∂x² + f̃(x, t) = ∂²ũ/∂t² and satisfies ũ(0, t) = ũ(L, t) = 0. Now, consider u = Re ũ = (ũ + ũ*)/2, where * denotes the complex conjugate. Clearly, u(0, t) = u(L, t) = 0, so u satisfies the same boundary conditions. It also satisfies the PDE:

∂²u/∂t² = (1/2)[∂²ũ/∂t² + ∂²ũ*/∂t²]
        = (1/2)[∂²ũ/∂x² + f̃ + ∂²ũ*/∂x² + f̃*]
        = ∂²u/∂x² + f,

since f = (f̃ + f̃*)/2 = Re f̃. The key factors that allowed us to do this are (i) linearity, and (ii) the real-ness of the PDE (the PDE itself contains no i factors or other complex coefficients).
(b) Plugging u(x, t) = v(x)e^{−iωt} and f(x, t) = g(x)e^{−iωt} into the PDE, we obtain

(∂²v/∂x²) e^{−iωt} + g e^{−iωt} = −ω² v e^{−iωt},

and hence

Âv = (−∂²/∂x² − ω²) v = g

with Â = −∂²/∂x² − ω². The boundary conditions are v(0) = v(L) = 0, from the boundary conditions on u. Since ω² is real, this is in the general Sturm–Liouville form that we showed in class is self-adjoint.

Subtracting a constant from an operator just shifts all of the eigenvalues by that constant, keeping the eigenfunctions the same. Thus Â is still positive-definite if ω² is < the smallest eigenvalue of −∂²/∂x², and positive semidefinite if ω² = the smallest eigenvalue. In this case, we know analytically that the eigenvalues of −∂²/∂x² with these boundary conditions are (nπ/L)² for n = 1, 2, …. So Â is positive-definite if ω² < (π/L)², it is positive-semidefinite if ω² = (π/L)², and it is indefinite otherwise.
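This eigenvalue picture can be checked against the finite-difference discretization used later in part (d). The posted notebook is in Julia; the following is an equivalent Python sketch (not from the original solutions), with L = 1:

```python
import numpy as np

# Part (b) check: eigenvalues of the finite-difference -d^2/dx^2 (Dirichlet,
# L = 1) approach (n*pi/L)^2, so A = D^T D - omega^2 I is positive-definite
# exactly when omega^2 is below the smallest eigenvalue ~ (pi/L)^2.
N = 200
h = 1.0 / (N + 1)
# first-difference matrix D ((N+1) x N), so D^T D = tridiag(-1, 2, -1)/h^2
D = (np.eye(N + 1, N) - np.eye(N + 1, N, k=-1)) / h
lam = np.linalg.eigvalsh(D.T @ D)   # ascending: lam[0] ~ pi^2, lam[1] ~ 4 pi^2
omega = 0.5 * np.pi                 # below the first resonance pi/L
is_posdef = bool(np.all(lam - omega ** 2 > 0))
```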
¹ For the same reason, even the finite-difference expressions above are not quite right (since the error analysis was done only for analytic functions): technically, we should write φ″(0)ε²/2 + o(ε²), where o(ε²) denotes terms that go to zero faster than ε² (but perhaps not as fast as ε³).
(c) We know that ÂG(x, x′) = 0 for x ≠ x′. Also, just as in class and as in the notes, the fact that ÂG(x, x′) = δ(x − x′) means that G must be continuous (otherwise there would be a δ′ factor) and ∂G/∂x must have a jump discontinuity:

∂G/∂x |_{x=x′⁺} − ∂G/∂x |_{x=x′⁻} = −1

for −∂²G/∂x² to give δ(x − x′). We could also show this more explicitly by integrating:

∫_{x′−0⁺}^{x′+0⁺} ÂG dx = ∫_{x′−0⁺}^{x′+0⁺} δ(x − x′) dx = 1 = −∂G/∂x |_{x′−0⁺}^{x′+0⁺} − ω² ∫_{x′−0⁺}^{x′+0⁺} G dx,

which gives the same result (the ω² term vanishes in the limit because G is continuous).
Now, similar to class, we will solve it separately for x < x′ and for x > x′, and then impose the continuity requirements at x = x′ to find the unknown coefficients.
For x < x′, ÂG = 0 means that ∂²G/∂x² = −ω²G, hence G(x, x′) is some sine or cosine of ωx. But since G(0, x′) = 0, we must therefore have G(x, x′) = α sin(ωx) for some coefficient α. Similarly, for x > x′, we also have a sine or cosine of ωx. To get G(L, x′) = 0, the simplest choice is a sine with a phase shift: G(x, x′) = β sin(ω[L − x]) for some coefficient β.
Continuity now gives two equations in the two unknowns α and β:

α sin(ωx′) = β sin(ω[L − x′])
αω cos(ωx′) = −βω cos(ω[L − x′]) + 1,

which has the solution

α = (1/ω) sin(ω[L − x′]) / (cos(ωx′) sin(ω[L − x′]) + sin(ωx′) cos(ω[L − x′])),
β = (1/ω) sin(ωx′) / (cos(ωx′) sin(ω[L − x′]) + sin(ωx′) cos(ω[L − x′])).
This simplifies a bit from the identity sin A cos B + cos A sin B = sin(A + B), and hence

G(x, x′) = 1/(ω sin(ωL)) × { sin(ωx) sin(ω[L − x′]),  x < x′
                             sin(ωx′) sin(ω[L − x]),  x ≥ x′,

which obviously obeys reciprocity.
Note that if ω is an eigenfrequency, i.e. ω = nπ/L for some n, then this G blows up. The reason is that Â in that case is singular (the n-th eigenvalue was shifted to zero), and defining Â⁻¹ is more problematic. (Physically, this corresponds to driving the oscillating string at a resonance frequency, which generally leads to a diverging solution unless there is dissipation in the system.)
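A quick check of this closed form (a sketch, not part of the original solutions, with L = 1 and a non-resonant ω): it should vanish at both endpoints, be symmetric in (x, x′), and have a slope jump of −1 across x = x′.

```python
import math

# Part (c) check of the closed-form Green's function, L = 1, omega = 2.5
# (not a multiple of pi, so sin(omega*L) != 0).
L, omega = 1.0, 2.5

def G(x, xp):
    # piecewise formula from the solution
    if x < xp:
        return math.sin(omega * x) * math.sin(omega * (L - xp)) / (omega * math.sin(omega * L))
    return math.sin(omega * xp) * math.sin(omega * (L - x)) / (omega * math.sin(omega * L))

recip_gap = abs(G(0.3, 0.7) - G(0.7, 0.3))      # ~ 0 (reciprocity)
h, xp = 1e-6, 0.4
jump = (G(xp + 2 * h, xp) - G(xp + h, xp)) / h \
     - (G(xp - h, xp) - G(xp - 2 * h, xp)) / h  # ~ -1 (slope discontinuity)
```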
(d) (For this part, see also the IJulia notebook posted with the solutions.)

It is critical to get the signs right. Recall that we approximated d²/dx² by −DᵀD for a 1st-derivative matrix D. Therefore, you want to make a matrix A = DᵀD − ω²I. In Julia, A = D'*D - omega^2 * eye(N) where N is the size of the matrix D = diff1(N) / dx with dx = 1/(N+1). We make dk by dk = zeros(N); dk[k] = 1/dx. I used N = 100.
The resulting plots are shown in figure 1, for ω = 0.4π/L (left) and ω = 5.4π/L (right) and x′ = 0.5L and 0.75L. As expected, for ω < π/L where it is positive-definite, the Green’s function is positive and qualitatively
similar to that for ω = 0, except that it is slightly curved. For larger ω, though, it becomes oscillatory and much
more interesting in shape. The exact G and the finite-difference G match very well, as they should, although the
match becomes worse at higher frequencies—the difference approximations become less and less accurate as the
function becomes more and more oscillatory relative to the grid ∆x, just as was the case for the eigenfunctions
as discussed in class.
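For readers without the notebook, here is an equivalent Python sketch of this computation (not the original Julia code), with L = 1, comparing the finite-difference solve against the exact G from part (c):

```python
import numpy as np

# Part (d) sketch: build A = D^T D - omega^2 I, solve A g = dk for the
# discrete delta dk, and compare with the exact Green's function at x' = x_k.
N = 400
h = 1.0 / (N + 1)
x = h * np.arange(1, N + 1)                     # interior grid points
D = (np.eye(N + 1, N) - np.eye(N + 1, N, k=-1)) / h
omega = 0.4 * np.pi                              # below the first resonance
A = D.T @ D - omega ** 2 * np.eye(N)

k = N // 2                                       # x' ~ 0.5
dk = np.zeros(N)
dk[k] = 1.0 / h                                  # discrete delta function
g_fd = np.linalg.solve(A, dk)

xp = x[k]
g_exact = np.where(x < xp,
                   np.sin(omega * x) * np.sin(omega * (1 - xp)),
                   np.sin(omega * xp) * np.sin(omega * (1 - x))) / (omega * np.sin(omega))
err = float(np.max(np.abs(g_fd - g_exact)))
# err is small, and shrinks as N grows (second-order convergence)
```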
Figure 1: Exact (lines) and finite-difference (dots) Green’s functions G(x, x′) for Â = −∂²/∂x² − ω², for two different ω values (left: ω = 0.4π/L; right: ω = 5.4π/L) and two different x′ values (red and blue).
[If you try to plug in an eigenfrequency (e.g. 4π/L) for ω, you may notice that the exact G blows up but the finite-difference G is finite. The reason for this is simple: the eigenvalues of the finite-difference matrix DᵀD are not exactly (nπ/L)², so the matrix is not exactly singular, nor is it realistically possible to make it exactly singular thanks to rounding errors. If you plug in something very close to an eigenfrequency, then G becomes very close to an eigenfunction multiplied by a large amplitude. It is easy to see why this is if you look at the expansion of the solution in terms of the eigenfunctions.]
(e) For small ω, sin(ωy) ≈ ωy for any y, and hence

G(x, x′) → 1/(ω²L) × { ω² x(L − x′),  x < x′
                       ω² x′(L − x),  x ≥ x′,

which matches the Green’s function of −d²/dx² from class. (Equivalently, you get 0/0 as ω → 0 and hence must use L’Hôpital’s rule or similar to take the limit.)
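The limit is easy to see numerically (a small check, not part of the original solutions, with L = 1):

```python
import math

# Part (e) check: as omega -> 0+, the closed-form G (L = 1) approaches the
# omega = 0 Green's function min(x, x') * (L - max(x, x')) / L.
L = 1.0

def G(x, xp, omega):
    if x < xp:
        return math.sin(omega * x) * math.sin(omega * (L - xp)) / (omega * math.sin(omega * L))
    return math.sin(omega * xp) * math.sin(omega * (L - x)) / (omega * math.sin(omega * L))

x, xp = 0.3, 0.8
limit = x * (L - xp) / L                       # = 0.06 for these x, x'
errs = [abs(G(x, xp, w) - limit) for w in (1.0, 0.1, 0.01)]
# errs shrink (roughly like omega^2) as omega decreases
```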