18.303 Problem Set 2 Solutions

Problem 1: (10+10 points)

We are using a difference approximation of the form:
\[
u'(x) \approx \frac{-u(x+2\Delta x) + c\,u(x+\Delta x) - c\,u(x-\Delta x) + u(x-2\Delta x)}{d\cdot\Delta x}.
\]

(a) First, we Taylor expand:
\[
u(x+\Delta x) = \sum_{n=0}^{\infty} \frac{u^{(n)}(x)}{n!}\,\Delta x^n.
\]
The numerator of the difference formula flips sign if $\Delta x \to -\Delta x$, which means that when you plug in the Taylor series all of the even powers of $\Delta x$ must cancel! To get 4th-order accuracy, the $\Delta x^3$ term in the numerator (which would give an error $\sim \Delta x^2$) must cancel as well, and this determines our choice of $c$: the $\Delta x^3$ term in the numerator is
\[
u'''(x)\,\frac{\Delta x^3}{3!}\left(-2^3 + c + c - 2^3\right),
\]
and hence we must have $c = 2^3 = 8$. The remaining terms in the numerator are the $\Delta x$ term and the $\Delta x^5$ term:
\[
u'(x)\,\Delta x\left[-2 + c + c - 2\right] + u^{(5)}(x)\,\frac{\Delta x^5}{5!}\left(-2^5 + c + c - 2^5\right) = 12\,u'(x)\,\Delta x - \frac{2}{5}\,u^{(5)}(x)\,\Delta x^5 + \cdots.
\]
Clearly, to get the correct $u'(x)$ as $\Delta x \to 0$, we must have $d = 12$. Hence, the error is approximately $-\frac{1}{30}u^{(5)}(x)\,\Delta x^4$, which is $\sim \Delta x^4$ as desired.

(b) The Matlab code is the same as in the handout, except now we compute our difference approximation by the command:

    d = (-sin(x+2*dx) + 8*sin(x+dx) - 8*sin(x-dx) + sin(x-2*dx)) ./ (12 * dx);

The result is plotted in Fig. 1. Note that the error falls as a straight line on the log-log plot (a power law), until it reaches $\sim 10^{-15}$, when it starts becoming dominated by roundoff errors (and actually gets worse). To verify the order of accuracy, it would be sufficient to check the slope of the straight-line region, but it is more fun to plot the actual predicted error from the previous part, where $\frac{d^5}{dx^5}\sin(x) = \cos(x)$. Clearly the predicted error is almost exactly right (until roundoff errors take over).

[Figure 1: Actual vs. predicted error for problem 1(b), using the fourth-order difference approximation for $u'(x)$ with $u(x) = \sin(x)$, at $x = 1$: error in $d(\sin)/dx$ at $x = 1$ versus $\Delta x$, showing the measured error in $du/dx$ and the predicted $\sim \Delta x^4$ error.]
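For reference, here is a minimal Matlab sketch of the convergence check described in part (b). It is not the handout's exact script (the range of step sizes and the plotting commands are my own assumptions), but it reproduces the comparison in Fig. 1 between the measured error and the predicted $-\frac{1}{30}\cos(x)\,\Delta x^4$:

    % Sketch of the part-(b) check (not the handout's exact script):
    % compare the fourth-order difference approximation of u'(x) for
    % u(x) = sin(x) at x = 1 against the exact derivative cos(1).
    x = 1;
    dx = logspace(-8, -1, 50);       % assumed range of step sizes
    d = (-sin(x+2*dx) + 8*sin(x+dx) - 8*sin(x-dx) + sin(x-2*dx)) ./ (12*dx);
    err = abs(d - cos(x));           % actual error
    pred = abs(cos(x)/30 .* dx.^4);  % predicted error magnitude |u^(5)(x)| dx^4 / 30
    loglog(dx, err, 'o', dx, pred, 'r-');
    xlabel('\Delta x'); ylabel('error in d(sin)/dx at x=1');
    legend('error in du/dx', 'predicted \sim \Delta x^4');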
Problem 2: (10+10+(5+10) points)

(a) As in class, we integrate by parts:
\[
\langle u, v''\rangle = \int_0^L u v'' = \left. u v'\right|_0^L - \int_0^L u' v' = -\left. u' v\right|_0^L + \int_0^L u'' v = \langle u'', v\rangle,
\]
where the boundary terms vanish because $u'$ and $v'$ are zero at the endpoints. As in class, integrating by parts only a single time gives $\langle u, -u''\rangle = \langle u', u'\rangle = \|u'\|^2 \ge 0$, so this is positive semidefinite. It is not positive definite because it can $= 0$ for $u(x) = \mathrm{constant} \ne 0$, which satisfies the boundary conditions.

(b) As in class, we integrate by parts twice, but this time we also have to move the $1/w(x)$ factor over (since this is just a real number at each $x$, it can be moved to the other side of the inner product). (The $w$ factor doesn't change the fact that the boundary terms vanish, since $u$ and $v$ are still zero there regardless of what you multiply them by.)
\[
\left\langle u, -\frac{1}{w}v''\right\rangle = \left\langle \frac{u}{w}, -v''\right\rangle = \left\langle \frac{d}{dx}\frac{u}{w},\, v'\right\rangle = \left\langle -\frac{d^2}{dx^2}\frac{u}{w},\, v\right\rangle,
\]
and so
\[
\left[-\frac{1}{w(x)}\frac{d^2}{dx^2}\right]^* = -\frac{d^2}{dx^2}\frac{1}{w(x)}.
\]
This is not the same unless $1/w$ commutes with the derivative, which only happens if $w$ is a constant. More explicitly, let $\frac{1}{w} = c(x)$. Then
\[
\hat{A}^* u = -\frac{d^2}{dx^2}(c u) = -c'' u - 2c' u' - c u'',
\]
which is $\ne -c u''$ for all $u(x)$ unless $c'$ and $c''$ are zero, i.e. $c(x)$ is constant (and hence $w$ is too).

(c) Suppose that $V$ consists of functions with boundary conditions $u(0) = u(L) = 0$, and the inner product is $\langle u, v\rangle = \int_0^L w(x)\,u(x)\,v(x)\,dx$ for some function $w(x) > 0$.

(i) The first two properties follow by inspection: $\langle u, v\rangle = \int w u v = \int w v u = \langle v, u\rangle$, and $\langle \alpha u_1 + \beta u_2, v\rangle = \int w(\alpha u_1 + \beta u_2)v = \alpha \int w u_1 v + \beta \int w u_2 v = \alpha\langle u_1, v\rangle + \beta\langle u_2, v\rangle$. Clearly, $\langle u, u\rangle = \int w|u|^2 \ge 0$, since $w > 0$ and $|u|^2 \ge 0$. Since the integrand is everywhere non-negative, the only way to have $\int w|u|^2 = 0$ is if $u(x) = 0$ except at isolated points covering zero area (technically, on a set of measure zero).

(ii) With this inner product, $\langle u, \hat{A}v\rangle = -\int u\,\frac{d^2 v}{dx^2}$ (the $w$ factors cancel), and then by integrating by parts twice as in class we obtain $-\int \frac{d^2 u}{dx^2}\,v = \int w\left(-\frac{1}{w}\frac{d^2 u}{dx^2}\right)v = \langle \hat{A}u, v\rangle$, hence $\hat{A}$ is symmetric. By the same argument, except integrating $\langle u, \hat{A}u\rangle$ by parts only once, we obtain $\int |u'|^2 \ge 0$, which is $= 0$ only for $u = 0$ as in class, hence it is positive definite. Hence, as in class, the eigenvalues are real and positive, and the eigenfunctions are orthogonal. And, as usual for real-symmetric operators, we expect that it is probably diagonalizable as long as $w(x)$ is not too crazy (diagonalizability only fails for weird nonphysical operators with infinities somewhere, as mentioned in class, although defining this precisely takes a lot of functional analysis).

Problem 3: (5+5+5+10 points)

(a) There are 100 eigenvalues; rather than looking at them one by one, I'll check that they are real by asking Matlab for the maximum imaginary part: max(abs(imag(diag(S)))), which returns 0 as expected. The plot of $\lambda$ is shown in Fig. 2. (If you had problems with plotting the two functions together, you could plot each one individually and use the hold on command in between to superimpose the plots.) Evidently, this $w(x)$ factor makes the eigenvalues bigger; this should not be too surprising since we are multiplying $-\frac{d^2}{dx^2}$ by $e^x \ge 1$. For comparison, multiplying by a constant $> 1$ would just increase the eigenvalues by that factor, so it is not surprising that multiplying by a function $\ge 1$ does something similar.

[Figure 2: Finite-difference eigenvalues $\lambda_k$ versus index $k$ for the discretized $-e^x \frac{d^2}{dx^2}$ (weighted Laplacian) and $-\frac{d^2}{dx^2}$ (ordinary Laplacian) operators, for problem 3(a).]

(b) These are plotted in Fig. 3. They are somewhat similar to $\sin(n\pi x/L)$ for $n = 1, 2, 3$, except that they oscillate faster towards the left-hand side (where $w$ is larger).

[Figure 3: Plot of the first three eigenfunctions $u_1$, $u_2$, $u_3$ of $e^x \frac{d^2}{dx^2}$ on $[0, 1]$ with zero boundary conditions.]

(c) $u_1^T u_2 \approx 0.1762$, $u_1^T u_3 \approx -0.0276$, and $u_2^T u_3 \approx -0.1912$. These are far larger than mere roundoff errors, so they are definitely not orthogonal in the unweighted inner product.

(d) A discrete analogue of the weighted inner product from 2(c) is $\langle u, v\rangle = \sum_i w(x_i)\,u_i v_i$ (if we wanted to be picky we could multiply by $\Delta x$ to make the sum approximate the integral), which in Matlab is u' * (v .* exp(-x)). Using this inner product, I get $\langle u_1, u_2\rangle \approx -2.0610\times 10^{-13}$, $\langle u_1, u_3\rangle \approx -2.4631\times 10^{-14}$, and $\langle u_2, u_3\rangle \approx 1.2313\times 10^{-13}$, which are $\ne 0$ but only because of roundoff errors; they are orthogonal to within the accuracy of the computer arithmetic.
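Since the pset's posted Matlab script is not reproduced here, the following is a minimal sketch of the kind of computation behind parts (a)-(d). The grid, the 3-point second-difference matrix, and the variable names are my assumptions, so the numbers it produces will not match the values quoted above exactly:

    % Sketch (not the pset's posted code) of the problem-3 computation:
    % discretize -exp(x) d^2/dx^2 on [0,1] with u(0) = u(1) = 0, using
    % N = 100 interior points and the usual 3-point second difference.
    N = 100;
    dx = 1 / (N + 1);
    x = dx * (1:N)';                     % interior grid points
    D2 = (diag(ones(N-1,1),1) - 2*eye(N) + diag(ones(N-1,1),-1)) / dx^2;
    A = -diag(exp(x)) * D2;              % discretized -e^x d^2/dx^2
    [V, S] = eig(A);                     % columns of V = eigenvectors, diag(S) = eigenvalues
    max(abs(imag(diag(S))))              % = 0: the eigenvalues are real

    [lambda, idx] = sort(diag(S));       % sort eigenvalues in increasing order
    u1 = V(:,idx(1)); u2 = V(:,idx(2)); u3 = V(:,idx(3));
    u1' * u2                             % not small: not orthogonal in the unweighted inner product
    u1' * (u2 .* exp(-x))                % ~ roundoff: orthogonal in the w = e^{-x} weighted inner product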