[Figure: left panel, fN(x)/N versus x for N = 10, 100, 1000; right panel, zoom near x = 0.5]
Figure 1: Cosine series fN(x) = a0/2 + Σ_{n=1}^N an cos(nπx/L) for an = cos(nπ/2), for N = 10, 100, 1000; to fit them on the same plot, we instead show fN(x)/N. Right plot is the same thing, but zoomed in on the peak around x = 0.5.
18.303 Problem Set 3 Solutions
Due Wednesday, 29 September 2010.
Problem 1: (5+10+10+5 points)
(a) The plot is shown in fig. 1. To show them all on the same plot, I rescaled the series by 1/N
[i.e. plotting fN(x)/N]. The right panel shows a zoomed version of the same plot on the peak
around x = 0.5.
(b) ∫0^L f(x) dx = a0 L/2 = L/2, independent of N, since all of the n > 0 terms integrate to
zero. f(L/2) = 1/2 + Σn cos(nπ/2)². Since cos(nπ/2) = 0 if n is odd and ±1 if n is even,
f(L/2) = 1/2 + ⌊N/2⌋, where ⌊N/2⌋ is N/2 rounded down, i.e. the number of even positive
integers ≤ N. Since the area L/2 is fixed and the height of the peak in the center grows
as ≈ N/2, the width of the peak must get narrower and narrower to have the same area,
i.e. a width of roughly (L/2)/(N/2) = L/N. [This series in fact approaches a Dirac "delta function"
[Figure: u(x, t) versus x for t = 0.0001, 0.001, 0.01, 0.1, 1]
Figure 2: Solution u(x, t) of the diffusion equation for an initial condition u(x, 0) = δ(x − 0.5).
Physically, this corresponds to dumping mass into the system at x = 0.5, and then watching it
diffuse and spread out equally over the whole length.
δ(x − 0.5), something that will come up soon in 18.303, although for this notion (and this
limit N → ∞) to be well defined we will need to modify our definition of “function.”]
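Both facts in (b) are easy to spot-check numerically. Here is a short Python sketch (not part of the original Matlab solution), taking L = 1 as in the plots, that evaluates the partial series directly and confirms the N-independent area L/2 and the peak value 1/2 + ⌊N/2⌋:

```python
import math

def f_N(x, N, L=1.0):
    """Partial cosine series f_N(x) = a_0/2 + sum_{n=1}^N a_n cos(n pi x/L),
    with a_n = cos(n pi / 2) as in Problem 1."""
    return 0.5 + sum(math.cos(n * math.pi / 2) * math.cos(n * math.pi * x / L)
                     for n in range(1, N + 1))

# Peak value at x = L/2 is exactly 1/2 + floor(N/2): only even n contribute.
for N in (10, 100, 1000):
    assert abs(f_N(0.5, N) - (0.5 + N // 2)) < 1e-9

# Area by the trapezoidal rule: each cos(n pi x) term integrates to ~0,
# leaving a_0 L / 2 = 1/2 independent of N.
M = 4096
dx = 1.0 / M
vals = [f_N(m * dx, 100) for m in range(M + 1)]
area = dx * (vals[0] / 2 + sum(vals[1:-1]) + vals[-1] / 2)
print(area)  # ~0.5
```

(The trapezoidal rule happens to annihilate each cosine term almost exactly on a uniform grid, so the area comes out essentially exact here.)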
(c) The plot is shown in fig. 2; on this scale, you cannot really distinguish t = 0.1 from t = 1,
since in both cases u(x, t) is basically a constant. Physically, this represents a situation where
you have injected a big concentration of salt right at x = 0.5 at t = 0, and then for t > 0
the concentration is diffusing outwards. As time goes on, the diffusion makes the salt spread
out to eventually have an equal concentration everywhere. As we derived in class, Neumann
R
boundary conditions represent zero mass flux at the ends, so that the total mass of salt ( u)
is conserved. Hence, it should spread out to a constant u(x, ∞) = 1/2 to have the correct
area, and of course it does (as we could see analytically from the cosine series in this case
anyway).
(d) Running the sum function for t = 0.0001, 0.001, 0.01, 0.1, 1, we indeed find that the answer
is always L/2 = 0.5 (to at least 3 significant digits, which is good enough since this crude
summation isn’t an exact integral).
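The same crude-summation check can be sketched in Python (the pset's actual sum function is Matlab and is not reproduced here; the series form of u below is the standard separation-of-variables solution implied by the earlier parts, with L = 1):

```python
import math

def u(x, t, N=200, L=1.0):
    """Truncated cosine-series solution of the diffusion equation with
    Neumann BCs and u(x,0) ~ delta(x - 0.5), normalized to area L/2:
    u(x,t) = 1/2 + sum_n cos(n pi/2) cos(n pi x/L) exp(-(n pi/L)^2 t)."""
    return 0.5 + sum(math.cos(n * math.pi / 2) * math.cos(n * math.pi * x / L)
                     * math.exp(-((n * math.pi / L) ** 2) * t)
                     for n in range(1, N + 1))

# Crude trapezoid sum of the total "mass" at several times: each cosine
# term integrates to ~0, so the answer is always L/2 = 0.5.
M = 500
dx = 1.0 / M
for t in (1e-4, 1e-3, 1e-2, 1e-1, 1.0):
    vals = [u(m * dx, t) for m in range(M + 1)]
    mass = dx * (vals[0] / 2 + sum(vals[1:-1]) + vals[-1] / 2)
    print(t, mass)  # ~0.5 at every t: Neumann BCs conserve mass
```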
Problem 2: (10+5+5+10+5 points)
In class, we considered d²/dx² for x ∈ [0, L] and Dirichlet boundary conditions u(0) = u(L) = 0.
We approximated u(x) by its values um ≈ u(m∆x) on a grid of M points, with ∆x = L/(M + 1)
and u0 = u_{M+1} = 0. We then approximated d²/dx² in two center-difference steps: first, we
approximated d/dx by u′_{m+1/2} ≈ (u_{m+1} − u_m)/∆x; then, we approximated d²/dx² by u″_m ≈
(u′_{m+1/2} − u′_{m−1/2})/∆x. This gave us the wonderful discrete Laplacian matrix A (with 1, −2, 1 on
the diagonals), which we could also write as A = −DᵀD/∆x² for the D matrix given in class (and
computed by the diff1.m file from pset 2). Now, in this problem we will do the same thing for the
Neumann boundary conditions u′(0) = u′(L) = 0.
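For concreteness, the Dirichlet construction from class can be sketched in Python (diff1.m itself is Matlab; the D below is an assumed reconstruction of it, without the 1/∆x factor folded in):

```python
import numpy as np

def forward_diff(M):
    """Assumed reconstruction of the (M+1) x M difference matrix D from
    class: row m computes u_{m+1} - u_m, with u_0 = u_{M+1} = 0 implied
    (so the first row is just +u_1 and the last row is just -u_M)."""
    D = np.zeros((M + 1, M))
    for m in range(M):
        D[m, m] = 1.0       # +u_{m+1} entry
        D[m + 1, m] = -1.0  # -u_m entry
    return D

M, dx = 5, 1.0
D = forward_diff(M)
A = -D.T @ D / dx**2   # discrete Dirichlet Laplacian: 1, -2, 1 diagonals
print(A)
```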
(a) The center-difference steps from class are

    u′_{m+0.5} ≈ (u_{m+1} − u_m)/∆x  =⇒  u″_m ≈ (u′_{m+0.5} − u′_{m−0.5})/∆x ≈ (u_{m+1} − 2u_m + u_{m−1})/∆x².

The boundary conditions only come in for m = 1 and m = M (the first and last rows of the
matrix). For Dirichlet boundary conditions in class, we just set u_0 = u_{M+1} = 0. Now, however,
we'll set u′_{0.5} = u′_{M+0.5} = 0, obtaining

    u″_1 ≈ (u′_{1.5} − 0)/∆x ≈ (u_2 − u_1)/∆x²,
    u″_M ≈ (0 − u′_{M−0.5})/∆x ≈ (u_{M−1} − u_M)/∆x².
All of the other u″_m equations (the middle rows of the matrix) are unchanged. This gives the
Laplacian matrix

    Ã = (1/∆x²) [ −1   1                  ]
                [  1  −2   1              ]
                [      ⋱    ⋱    ⋱        ]
                [           1  −2   1     ]
                [               1  −1     ] .
(b) In the first step, computing u′ = Du/∆x, the only difference from before is that we no longer
compute u′_{0.5} and u′_{M+0.5}, which were computed by the first and last rows of D. Thus, we
can omit the first and last rows of D to obtain the (M − 1) × M matrix

    D̃ = [ −1   1               ]
        [      −1   1          ]
        [           ⋱    ⋱     ]
        [               −1   1 ] .
In the second step, computing u″ = −Dᵀu′/∆x, since we no longer have the first and last
rows of u′ we should omit the first and last columns of Dᵀ. But this is exactly the same
as omitting the first and last rows of D, hence Ã = −D̃ᵀD̃/∆x². In Matlab, we can modify
diff1.m to do this simply by adding one line at the end to delete the first and last rows:
D = D(2:end-1,:); (and also renaming the function to diff1n). A quick check, e.g.
-diff1n(5)' * diff1n(5), shows that this yields the expected matrix Ã above.
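The same quick check can be mirrored in Python (a numpy sketch; the internals of diff1n are assumed to match the class D with its first and last rows deleted):

```python
import numpy as np

def diff1n(M):
    """Neumann difference matrix D-tilde: the class D with first and last
    rows deleted (the Matlab one-liner D = D(2:end-1,:)), so (M-1) x M."""
    D = np.zeros((M + 1, M))
    for m in range(M):
        D[m, m], D[m + 1, m] = 1.0, -1.0
    return D[1:-1, :]   # drop the rows for u'_{1/2} and u'_{M+1/2}

Dn = diff1n(5)
A = -Dn.T @ Dn        # the analogue of -diff1n(5)' * diff1n(5)
print(A)              # -1 in the corners, 1, -2, 1 in the middle rows
```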
(c) In this case D̃ is already in upper-triangular form with M − 1 pivots, and hence rank(D̃) =
rank(Ã) = M − 1 . Since à is M × M , the dimension of its nullspace is 1. Recall that
the nullspace of the exact d2 /dx2 operator consists of the constant functions (which can be
nonzero with Neumann boundaries), and it is easy to see that N (Ã) is similarly the constant
vectors. That is, N (Ã) is spanned by n = (1, 1, 1, . . . , 1, 1, 1)T : multiplying Ãn simply sums
the rows of Ã, which gives zero by inspection.
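This rank/nullspace claim is also easy to verify numerically; a Python sketch (again mirroring -diff1n(M)' * diff1n(M) with numpy, an assumption about diff1n's contents):

```python
import numpy as np

# Build A-tilde (up to the 1/dx^2 factor) for M = 100, as in part (b).
M = 100
D = np.zeros((M + 1, M))
for m in range(M):
    D[m, m], D[m + 1, m] = 1.0, -1.0
Dn = D[1:-1, :]
A = -Dn.T @ Dn

print(np.linalg.matrix_rank(A))   # M - 1 = 99
n = np.ones(M)                    # the constant vector
print(np.abs(A @ n).max())        # 0: each row of A-tilde sums to zero
```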
(d) As in pset 2, let's pick L = 1 and set M = 100 to start with [thus ∆x = L/(M + 1)]; the other
commands are the same as in pset 2 except that we don't have the e^{−x} weighting factor and
we use diff1n. The smallest-|λ| three eigenvalues are ≈ 0, −10.0672, and −40.2587, similar to
but not quite on the exact values of 0, −π² ≈ −9.8696, and −(2π)² ≈ −39.4784. The corresponding
eigenvectors of Ã, along with the exact eigenfunctions cos(nπx), are shown in fig. 3; again,
close but with noticeable small errors.¹ The error |λn + (nπ)²| is plotted versus M on a log–log
scale for the n = 1, 2 eigenvalues in figure 4, and for comparison a line ∼ ∆x is also shown: the
errors clearly decrease almost exactly in proportion to ∆x, not ∆x², even though we nominally
Figure 3: Finite-difference eigenvectors (symbols) for the 100 × 100 Ã matrix, corresponding to the
three smallest-magnitude eigenvalues, along with the exact d2 /dx2 eigenfunctions cos(nπx) (lines).
Figure 4: Errors in eigenvalues for discrete d²/dx² versus the number M of grid points, for the smallest two nonzero eigenvalues. Top curves: part (d), Neumann approximated by u′_{1/2} = u′_{M+1/2} = 0.
Bottom curves: part (e), re-stretched to remove the first-order errors from the boundaries. For
reference, lines ∼ 1/M ∼ ∆x and ∼ 1/M² ∼ ∆x² are also shown.
have a second-order accurate center-difference approximation! This is because we are setting
u′_{1/2} = (u1 − u0)/∆x = 0, which is a forward difference approximation for u′(0) = u′_0, and also
set u′_{M+1/2} = (u_{M+1} − u_M)/∆x = 0, a backward difference approximation for u′(L) = u′_{M+1}.
Forward/backward differences are only first-order accurate, and all it takes is one first-order
error to screw up the convergence of the entire solution.
(e) Changing to ∆x = L/M , we are now effectively putting the boundary conditions at the correct
place, removing our first-order error from before. As seen in figure 4, the errors now scale as
∼ ∆x2 . Second-order accuracy is restored! (The moral of this story is that it is very easy for
errors in boundary conditions and other interfaces to dominate; unfortunately, these are not
always so easy to correct in higher dimensions.)
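Both convergence rates can be reproduced with a short Python sketch (numpy's symmetric eigensolver standing in for the Matlab commands; the matrix is the Ã from part (a), whose construction is repeated here):

```python
import numpy as np

def neumann_eigs(M):
    """Eigenvalues of the M x M Neumann matrix A-tilde from part (a),
    without the 1/dx^2 factor (sorted ascending; the last one is ~0)."""
    D = np.zeros((M + 1, M))
    for m in range(M):
        D[m, m], D[m + 1, m] = 1.0, -1.0
    Dn = D[1:-1, :]              # drop first/last rows as in part (b)
    return np.linalg.eigvalsh(-Dn.T @ Dn)

L = 1.0
lam100, lam200 = neumann_eigs(100), neumann_eigs(200)

def err1(lam, dx):
    """Error in the n = 1 eigenvalue versus the exact -pi^2."""
    return abs(lam[-2] / dx**2 + np.pi**2)

# part (d): dx = L/(M+1) -> boundary error is first order, ratio ~2
r_d = err1(lam100, L / 101) / err1(lam200, L / 201)
# part (e): dx = L/M -> second-order accuracy restored, ratio ~4
r_e = err1(lam100, L / 100) / err1(lam200, L / 200)
print(r_d, r_e)
```

Doubling M shrinks the part-(d) error by ~2 but the part-(e) error by ~4, exactly the first- versus second-order behavior seen in figure 4.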
Problem 3: (10+5+10+5 points)
(a) Our boundary conditions give αJm(kR_{1,2}) + βYm(kR_{1,2}) = 0, i.e. E(α, β)ᵀ = 0 where

    E = [ Jm(kR1)   Ym(kR1) ]
        [ Jm(kR2)   Ym(kR2) ] ,

and hence fm(k) = det E = Jm(kR1)Ym(kR2) − Ym(kR1)Jm(kR2) must satisfy fm(k) = 0.
Assuming we find a k such that det E = 0, we must then have αJm(kR1) = −βYm(kR1),
and hence a basis for the nullspace will be α = Ym(kR1), β = −Jm(kR1) [there are several
other correct answers, of course]. Note that the nullspace is 1-dimensional (it would only be
2-dimensional if E = 0, which is clearly never true), so we only need one (α, β) ≠ 0 solution
to span it.
(b) In Matlab, fm(k) would be written as @(k) besselj(m,k).*bessely(m,k*2)-bessely(m,k).*besselj(m,k*2)
(here R1 = 1 and R2 = 2); note that we use .*, not *, since many of the Matlab functions need to handle the case
where k is a vector, not a number. The plot for m = 0, 1, 2 is shown in figure 5.
(c) From the graph, a rough estimate for the first three roots of f0 is k = 3, 6, 9. This is good
enough for fzero to find the exact roots (to machine precision, ≈ 15 decimal places). In particular, we just call k1 = fzero(@(k)..., 3), k2 = fzero(@(k)..., 6), k3 = fzero(@(k)...,
9), where @(k)... is the same as what you used for fplot in the previous part (with m=0).
This gives k1 ≈ 3.1230, k2 ≈ 6.2734, k3 ≈ 9.4182. We plot the corresponding function, using
our α and β from part (a), by fplot(@(r) besselj(m,k*r).*bessely(m,k)-bessely(m,k*r).*besselj(m,k),
[1,2]), and the result is shown in figure 6. As desired, they satisfy both boundary conditions.
Comment: If we made R1 very big and kept R2 − R1 fixed, then the curvature of the annulus
would vanish and this problem would reduce for m = 0 to the 1d problem d2 /dx2 with Dirichlet boundaries, having eigenfunctions sin[nπ(r − R1 )/(R2 − R1 )]. Even for R1 = 1, here, the
eigenfunctions are already strikingly close to the R1 → ∞ sine solutions.
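A Python equivalent of this root-finding can be sketched as follows (assuming scipy is available: scipy.special.jv/yv and scipy.optimize.brentq play the roles of besselj/bessely and fzero; the brackets around 3, 6, 9 come from the graph):

```python
from scipy.special import jv, yv
from scipy.optimize import brentq

R1, R2 = 1.0, 2.0

def f(k, m=0):
    """f_m(k) = J_m(k R1) Y_m(k R2) - Y_m(k R1) J_m(k R2)."""
    return jv(m, k * R1) * yv(m, k * R2) - yv(m, k * R1) * jv(m, k * R2)

# Rough estimates 3, 6, 9 from the graph give sign-change brackets; refine:
roots = [brentq(f, a, b) for a, b in [(2.5, 3.5), (5.5, 6.5), (8.5, 9.5)]]
print(roots)  # ~[3.1230, 6.2734, 9.4182]
```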
(d) Performing the integral with the commands given, I get ≈ 2.53 × 10⁻⁸, which is zero to within
the accuracy we requested. (You can sort of see that they are orthogonal-ish just by looking
at the graphs in figure 6, since the larger k's oscillate more, but it is quite another matter to
get such precise cancellation in the integrals!)
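The orthogonality check can likewise be sketched in Python, with scipy.integrate.quad standing in for the pset's quadrature commands (an assumption; the original used different commands). The weight r in the integrand is the Sturm–Liouville weight for the radial Bessel operator:

```python
from scipy.special import jv, yv
from scipy.optimize import brentq
from scipy.integrate import quad

R1, R2 = 1.0, 2.0

f0 = lambda k: jv(0, k * R1) * yv(0, k * R2) - yv(0, k * R1) * jv(0, k * R2)
k1 = brentq(f0, 2.5, 3.5)   # ~3.1230
k2 = brentq(f0, 5.5, 6.5)   # ~6.2734

def u(r, k):
    """alpha J0(kr) + beta Y0(kr) with alpha = Y0(k R1), beta = -J0(k R1),
    the nullspace basis from part (a); vanishes at r = R1 and r = R2."""
    return yv(0, k * R1) * jv(0, k * r) - jv(0, k * R1) * yv(0, k * r)

# Eigenfunctions for distinct k are orthogonal under the weight r:
overlap, _ = quad(lambda r: r * u(r, k1) * u(r, k2), R1, R2)
print(overlap)  # ~0, up to quadrature/roundoff error
```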
1 Note that the scaling and sign of the eigenvectors are somewhat arbitrary in Matlab (since of course we can
multiply them by any nonzero constants we wish). To compare with cos(nπx), I rescaled the eigenvectors so that
their maximum component is 1, and flipped the sign as needed.
Figure 5: Plot of fm(k) for m = 0, 1, 2, where the roots fm(k) = 0 correspond to k values where a
Bessel solution can satisfy the Dirichlet boundary conditions.
Figure 6: The radial part of the m = 0 eigenfunctions in the annulus, for the first three
roots k1,2,3 of our equation f0(k) = 0. By construction, these vanish at r = R1 and r = R2 = 2R1.