18.311 — MIT (Spring 2015)
Rodolfo R. Rosales (MIT, Math. Dept., E17-410, Cambridge, MA 02139).
May 1, 2015.
Answers to Problem Set # 7.
Contents
1 Haberman Book problem 77.03
  1.1 Statement: Haberman problem 77.03 (shock speed for weak shocks)
  1.2 Answer: Haberman problem 77.03

2 Haberman Book problem 78.03
  2.1 Statement: Haberman problem 78.03 (invariant density)
  2.2 Answer: Haberman problem 78.03

3 Short notes on separation of variables
  3.1 Example: heat equation in a square, with zero boundary conditions
  3.2 Example: heat equation in a circle, with zero boundary conditions
  3.3 Example: Laplace equation in a circle sector, with Dirichlet boundary conditions, non-zero on one side

4 ExPD21a. Laplace equation in a rectangle (Dirichlet)
  4.1 ExPD21a statement: Laplace equation in a rectangle (Dirichlet)
  4.2 ExPD21a answer: Laplace equation in a rectangle (Dirichlet)

5 ExPD40a. Heat equation with mixed boundary conditions
  5.1 ExPD40a statement: Heat equation with mixed boundary conditions
  5.2 ExPD40a answer: Heat equation with mixed boundary conditions

6 ExPD50. Normal modes do not always work. #01
  6.1 ExPD50 statement: Normal modes do not always work. #01
  6.2 ExPD50 answer: Normal modes do not always work. #01

7 Introduction. Experimenting with Numerical Schemes

8 Problem GBNS02 scheme B. Backward differences for ut + ux = 0
  8.1 Statement: GBNS02 Scheme B. Backward differences for ut + ux = 0
  8.2 Answer: GBNS02 Scheme B. Backward differences for ut + ux = 0

9 Problem GBNS03 scheme C. Centered differences for ut + ux = 0
  9.1 Statement: GBNS03 Scheme C. Centered differences for ut + ux = 0
  9.2 Answer: GBNS03 Scheme C. Centered differences for ut + ux = 0
1 Haberman Book problem 77.03

1.1 Statement: Haberman problem 77.03 (shock speed for weak shocks)
A weak shock is a shock in which the shock strength (the difference in densities) is small. For a weak
shock, show that the shock velocity is approximately the average of the density wave velocities associated
with the two densities. [Hint: Use Taylor series methods.]
1.2 Answer: Haberman problem 77.03
The shock speed s is given by the formula

    s = [q]/[ρ] = (q1 − q0)/(ρ1 − ρ0),                                            (1.1)

where q0 = q(ρ0) and q1 = q(ρ1). Introduce now ∆ρ = ρ1 − ρ0, and expand both q1 and c1 = c(ρ1) in a Taylor series centered at ρ0 (here, as usual, c = dq/dρ is the density wave speed). Then

    q1 = q0 + c0 ∆ρ + (1/2) d0 (∆ρ)² + O((∆ρ)³),                                  (1.2)
    c1 = c0 + d0 ∆ρ + O((∆ρ)²),                                                   (1.3)

where c0 = c(ρ0), and d0 = (d²q/dρ²)(ρ0). Introducing these expansions into equation (1.1), we obtain:

    s = c0 + (1/2) d0 ∆ρ + O((∆ρ)²) = (1/2)(c0 + c1) + O((∆ρ)²).                  (1.4)
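As a quick numerical illustration (not part of the original problem), the error in the weak-shock approximation (1.4) should shrink like (∆ρ)² for any smooth flux. The MATLAB sketch below uses an arbitrarily chosen, non-quadratic flux purely to exhibit this; all names and values in it are made up here.

    % Check of the weak-shock result (1.4): the shock speed s = [q]/[rho]
    % differs from (c0 + c1)/2 by O((Delta rho)^2). The flux below is an
    % arbitrary smooth, non-quadratic choice (not from the problem).
    q = @(rho) rho.*exp(-rho);          % flux q(rho)
    c = @(rho) (1 - rho).*exp(-rho);    % density wave speed c = dq/drho
    rho0 = 0.3;
    for drho = [0.2, 0.1, 0.05, 0.025]
        rho1   = rho0 + drho;
        s      = (q(rho1) - q(rho0))/(rho1 - rho0);  % exact shock speed, eq. (1.1)
        s_weak = 0.5*(c(rho0) + c(rho1));            % weak-shock value, eq. (1.4)
        fprintf('drho = %5.3f   |s - (c0+c1)/2| = %.2e\n', drho, abs(s - s_weak));
    end
    % The printed error should drop by roughly a factor of 4 each time
    % drho is halved, consistent with the O((Delta rho)^2) estimate.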
2 Haberman Book problem 78.03

2.1 Statement: Haberman problem 78.03 (invariant density)
Assume that u = umax (1 − ρ/ρmax) and at t = 0 the traffic density is

    ρ(x, 0) = (1/3) ρmax   for x < 0,
              (2/3) ρmax   for x > 0.                                             (2.1)
Why does the density not change in time?
2.2 Answer: Haberman problem 78.03
In this case the car flow is given by q = ρ u = umax (1 − ρ/ρmax) ρ and the density wave velocity by c = dq/dρ = umax (1 − 2 ρ/ρmax). Since c(ρmax/3) > c(2 ρmax/3), the initial conditions in (2.1) above give rise to a shock. Since q(ρmax/3) = q(2 ρmax/3), this shock propagates at zero velocity. It follows that the solution (for all times) of the problem with the initial conditions above in (2.1) is:

    ρ(x, t) = (1/3) ρmax   for x < 0,
              (2/3) ρmax   for x > 0.                                             (2.2)
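For completeness (this arithmetic is implicit in the original answer), with q = umax (1 − ρ/ρmax) ρ one has

    c(ρmax/3) = umax (1 − 2/3) = umax/3 > 0,        c(2 ρmax/3) = umax (1 − 4/3) = −umax/3 < 0,
    q(ρmax/3) = umax (2/3)(ρmax/3) = (2/9) umax ρmax = umax (1/3)(2 ρmax/3) = q(2 ρmax/3),

so the characteristics on the two sides indeed run into each other, and the shock speed s = [q]/[ρ] from (1.1) is zero.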
3 Short notes on separation of variables.
THIS IS NOT A PROBLEM. This section is for background.
Imagine that you are trying to solve a problem whose solution is a function of several variables, e.g.: u = u(x, y, z). Finding this solution directly may be very hard, but you may be able to find other solutions with the simpler structure

    u = f(x) g(y) h(z),                                                           (3.3)
where f , g, and h are some functions. This is called a separated variables solution.
If the problem that you are trying to solve is linear, and you can find enough solutions of the form in (3.3),
then you may be able to solve the problem using linear combinations of separated variables solutions.
The technique described in the paragraph above is called separation of variables. Note that
1. When the problem is a pde and solutions of the form in (3.3) are allowed, the pde reduces to three
ode — one for each of f , g, and h. Thus the solution process is enormously simplified.
2. In (3.3) all three variables are separated. But it is also possible to seek partially separated solutions. For example

    u = f(x) v(y, z).                                                             (3.4)
This is what happens when you look for normal mode solutions to time evolution equations of the form

    ut = L u,                                                                     (3.5)

where L is a linear operator acting on the space variables ~r = (x, y, . . . ) only — for example: L = ∂x² + ∂y² + . . .. The normal mode solutions have time separated,

    u = eλ t φ(~r),    where    λ = constant,                                     (3.6)

and reduce the equation to an eigenvalue problem in space only,

    λ φ = L φ.                                                                    (3.7)
3. Separation of variables does not always work. In fact, it rarely works if you just think of random
problems. But it works for many problems of physical interest. For example: it works for the heat
equation, but only for a few very symmetrical domains (rectangles, circles, cylinders, ellipses). But
these are enough to build intuition as to how the equation works. Many properties valid for generic
domains can be gleaned from the solutions in these domains.
4. Even if you cannot find enough separated solutions to write all the solutions as linear combinations
of them, or if the problem is nonlinear and you can get just a few separated solutions, sometimes
this is enough to discover interesting physical effects, or gain intuition as to the system behavior.
3.1 Example: heat equation in a square, with zero boundary conditions.
Consider the problem

    Tt = ∆T = Txx + Tyy,                                                          (3.8)

in the square domain 0 < x, y < π, with T vanishing along the boundary, and with some initial data T(x, y, 0) = W(x, y). To solve this problem by separation of variables, we first look for solutions of the form

    T = f(t) g(x) h(y),                                                           (3.9)
which satisfy the boundary conditions, but not the initial data. Why this? This is important!:
A. We would like to solve the problem for generic initial data, while solutions of the form (3.9) can only
achieve initial data for which W = f (0) g(x) h(y). This is very restrictive, even more so because (as
we will soon see) the functions g and h will be very special. To get generic initial data, we have no
hope unless we use arbitrary linear combinations of solutions like (3.9).
B. Arbitrary linear combinations of solutions like (3.9) will satisfy the boundary conditions if and only if each of them satisfies them.
Substituting (3.9) into (3.8) yields

    f′ g h = f g″ h + f g h″,                                                     (3.10)

where the primes indicate derivatives with respect to the respective variables. Dividing this through by T = f g h now yields

    f′/f = g″/g + h″/h.                                                           (3.11)

Since each of the terms in this last equation is a function of a different independent variable, the equation can be satisfied only if each term is a constant. Thus

    g″/g = c1,    h″/h = c2,    and    f′/f = c1 + c2,                            (3.12)

where c1 and c2 are constants. Now the problem has been reduced to a set of three simple ode. Furthermore, for (3.9) to satisfy the boundary conditions in (3.8), we need:

    g(0) = g(π) = h(0) = h(π) = 0,                                                (3.13)

which restricts the possible choices for the constants c1 and c2. The equations in (3.12 – 3.13) are easily solved, and yield^1
    g = sin(n x),    with c1 = −n²,                                               (3.14)
    h = sin(m y),    with c2 = −m²,                                               (3.15)
    f = e^{−(n² + m²) t},                                                         (3.16)

where n = 1, 2, 3, . . . and m = 1, 2, 3, . . . are natural numbers. The solution to the problem in (3.8) can then be written in the form

    T = Σ_{n, m = 1}^{∞} wnm sin(n x) sin(m y) e^{−(n² + m²) t},                  (3.17)

where the coefficients wnm follow from the double sine-Fourier series expansion (of the initial data)

    W = Σ_{n, m = 1}^{∞} wnm sin(n x) sin(m y).                                   (3.18)

^1 We set the arbitrary multiplicative constants in each of these solutions to one. Given (3.17 – 3.18), there is no loss of generality in this.
That is

    wnm = (4/π²) ∫_0^π dx ∫_0^π dy W(x, y) sin(n x) sin(m y).                     (3.19)
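To make the formulas concrete, here is a minimal MATLAB sketch (not part of the original notes) that approximates the coefficients (3.19) with a simple trapezoidal rule and sums a truncated version of (3.17). The initial data W, the truncation N, the grid, and all variable names are arbitrary choices made here for illustration.

    % Truncated double sine series solution of (3.8) on the square (0,pi)^2.
    N = 8;                                   % number of modes kept in each direction
    M = 200;                                 % grid used for quadrature and plotting
    x = linspace(0, pi, M);  y = x;
    [X, Y] = meshgrid(x, y);
    W = X.*(pi - X).*sin(Y);                 % example initial data, zero on the boundary
    t = 0.1;                                 % time at which to evaluate T
    T = zeros(size(X));
    for n = 1:N
        for m = 1:N
            w_nm = (4/pi^2)*trapz(x, trapz(y, W.*sin(n*X).*sin(m*Y), 1));  % eq. (3.19)
            T    = T + w_nm*sin(n*X).*sin(m*Y)*exp(-(n^2 + m^2)*t);        % eq. (3.17)
        end
    end
    surf(X, Y, T), shading interp            % snapshot of the temperature at time t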
3.2 Example: heat equation in a circle, with zero boundary conditions.
We now, again, consider the heat equation with zero boundary conditions, but on a circle instead of a square. That is, using polar coordinates, we want to solve the problem

    Tt = ∆T = (1/r²) ( r (r Tr)r + Tθθ ),                                         (3.20)

for 0 ≤ r < 1, with T(1, θ, t) = 0, and some initial data T(r, θ, 0) = W(r, θ). To solve the problem using separation of variables, we look for solutions of the form

    T = f(t) g(r) h(θ),                                                           (3.21)
which satisfy the boundary conditions, but not the initial data. The reasons for this are the same as in
items A and B above — see § 3.1. In addition, note that
C. We must use polar coordinates, otherwise solutions of the form (3.21) cannot satisfy the boundary
conditions. This exemplifies an important feature of the separation of variables method:
The separation must be done in a coordinate system where the boundaries are coordinate surfaces.
Substituting (3.21) into (3.20) yields

    f′ g h = (1/r²) ( f r (r g′)′ h + f g h″ ),                                   (3.22)

where, as before, the primes indicate derivatives. Dividing this through by T = f g h yields

    f′/f = (r g′)′/(r g) + h″/(r² h).                                             (3.23)

Since the left side in this equation is a function of time only, while the right side is a function of space only, the two sides must be equal to the same constant. Thus

    f′ = −λ f                                                                     (3.24)

and

    r (r g′)′/g + λ r² + h″/h = 0,                                                (3.25)

where λ is a constant. Here, again, we have a situation involving two functions of different variables being equal. Hence

    h″ = µ h,                                                                     (3.26)

and

    (1/r) (r g′)′ + (λ + µ/r²) g = 0,                                             (3.27)

where µ is another constant. The problem has now been reduced to a set of three ode. Furthermore, from the boundary conditions and the fact that θ is the polar angle, we need:

    g(1) = 0    and    h is periodic of period 2π.                                (3.28)
In addition, g must be non-singular at r = 0 — the singularity that appears for r = 0 in equation (3.27) is due to the coordinate system singularity; equation (3.20) is perfectly fine there. It follows that it should be µ = −n² and

    h = e^{i n θ},    where n is an integer.                                      (3.29)
Notes:
– As in § 3.1, here and below, we set the arbitrary multiplicative constants in each of the ode solutions
to one. As before, there is no loss of generality in this.
– Instead of complex exponentials, the solutions to (3.26) could be written in terms of sines and cosines.
But complex exponentials provide a more compact notation.
Then (3.27) takes the form

    (1/r) (r g′)′ + (λ − n²/r²) g = 0.                                            (3.30)

This is a Bessel equation of integer order. The non-singular (at r = 0) solutions of this equation are proportional to the Bessel function of the first kind J_{|n|}. Thus we can write

    g = J_{|n|}(κ_{|n| m} r),    and    λ = κ²_{|n| m},                           (3.31)

where m = 1, 2, 3, . . ., and κ_{|n| m} > 0 is the m-th zero of J_{|n|}.
Remark 3.1 That (3.30) turns out to be a well known equation should not be a surprise. Bessel functions,
and many other special functions, were first introduced in the context of problems like the one here — i.e.,
solving pde (such as the heat or Laplace equations) using separation of variables.
Putting it all together, we see that the solution to the problem in (3.20) can be written in the form

    T = Σ_{n = −∞}^{∞} Σ_{m = 1}^{∞} w_{n m} J_{|n|}(κ_{|n| m} r) exp( i n θ − κ²_{|n| m} t ),    (3.32)

where the coefficients w_{n m} follow from the double (Complex Fourier) – (Fourier – Bessel) expansion

    W = Σ_{n = −∞}^{∞} Σ_{m = 1}^{∞} w_{n m} J_{|n|}(κ_{|n| m} r) e^{i n θ}.       (3.33)
That is:

    w_{n m} = ( 1/(π J²_{|n|+1}(κ_{|n| m})) ) ∫_0^{2π} dθ ∫_0^1 r dr W(r, θ) J_{|n|}(κ_{|n| m} r) e^{−i n θ}.    (3.34)
Remark 3.2 You may wonder how (3.34) arises. Here is a sketch:
(i) For θ we use a Complex-Fourier series expansion. For any 2π-periodic function

    G(θ) = Σ_{n = −∞}^{∞} a_n e^{i n θ},    where    a_n = (1/2π) ∫_0^{2π} G(θ) e^{−i n θ} dθ.    (3.35)

(ii) For r we use the Fourier-Bessel series expansion explained in item (iii).
(iii) Note that (3.30), for any fixed n, is an eigenvalue problem in 0 < r < 1. Namely

    L g = λ g,    where    L g = −(1/r) (r g′)′ + (n²/r²) g,                      (3.36)

g is regular for r = 0, and g(1) = 0. Without loss of generality, assume that n ≥ 0, and consider the set of all the (real valued) functions such that ∫_0^1 g̃²(r) r dr < ∞. Then define the scalar product

    ⟨f̃, g̃⟩ = ∫_0^1 f̃(r) g̃(r) r dr.                                              (3.37)

With this scalar product L is self-adjoint, and it yields a complete set of orthonormal eigenfunctions

    φ_m = J_n(κ_{n m} r)    and    λ_m = κ²_{n m},    where    m = 1, 2, 3, . . .  (3.38)

Thus one can expand

    F(r) = Σ_{m = 1}^{∞} b_m φ_m(r),    where    b_m = ( ∫_0^1 F(r) φ_m(r) r dr ) / ( ∫_0^1 r φ_m²(r) dr ).    (3.39)

Finally, note that ∫_0^1 r φ_m²(r) dr follows from

    ∫_0^1 r J_n²(κ_{n m} r) dr = (1/2) J²_{n+1}(κ_{n m}).                         (3.40)

We will not prove this identity here.
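Since the zeros κ_{n m} have no closed form, the following is a minimal MATLAB sketch (not part of the original notes) of how they can be computed numerically. The starting guess and the "consecutive zeros are roughly π apart" step are simple heuristics chosen here; they are adequate for small n and m.

    % Compute the first few zeros kappa_{n m} of J_n, which set the decay
    % rates in (3.31)-(3.32).
    n      = 0;                            % angular mode number (|n| in the notes)
    nzeros = 4;                            % how many radial zeros to compute
    kappa  = zeros(1, nzeros);
    guess  = n + 2.5;                      % rough location of the first zero of J_n
    for m = 1:nzeros
        kappa(m) = fzero(@(r) besselj(n, r), guess);
        guess    = kappa(m) + pi;          % consecutive zeros are roughly pi apart
    end
    disp(kappa)   % for n = 0: approximately 2.4048, 5.5201, 8.6537, 11.7915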
3.3 Example: Laplace equation in a circle sector, with Dirichlet boundary conditions, non-zero on one side.
Consider the problem

    0 = ∆u = (1/r²) ( r (r ur)r + uθθ ),    0 < r < 1    and    0 < θ < α,        (3.41)

where α is a constant, with 0 < α < 2π. The boundary conditions are

    u(1, θ) = 0,    u(r, 0) = 0,    and    u(r, α) = w(r),                        (3.42)
for some given function w. To solve the problem using separation of variables, we look for solutions of the form

    u = g(r) h(θ),                                                                (3.43)
which satisfy the boundary conditions at r = 1 and θ = 0, but not the boundary condition at θ = α. The
reasons are as before: we aim to obtain the solution for general w using linear combinations of solutions
of the form (3.43) — see items A and B in § 3.1, as well as item C in § 3.2.
Substituting (3.43) into (3.41) yields, after a bit of manipulation,

    r (r g′)′/g + h″/h = 0.                                                       (3.44)
Using the same argument as in the prior examples, we conclude that

    r (r g′)′ − µ g = 0    and    h″ + µ h = 0,                                   (3.45)

where µ is some constant, g(1) = 0, and h(0) = 0. Again: the problem is reduced to solving ode. In fact, it is easy to see that it should be^2

    h = (1/s) sinh(s θ)    and    g = ( r^{i s} − r^{−i s} )/s,    where    µ = −s²,    (3.46)

and as yet we know nothing about s, other than it is some (possibly complex) constant.

^2 The multiplicative constant in these solutions is selected so that the correct solution is obtained for s = 0.
In § 3.2, we argued that g should be non-singular at r = 0, since the origin was no different from any other
point inside the circle for the problem in (3.20) — the singularity in the equation for g in (3.27) is merely
a consequence of the coordinate system singularity at r = 0. We cannot make this argument here, since
the origin is on the boundary for the problem in (3.41 – 3.42) — there is no reason why the solutions should
be differentiable across the boundary! We can only state that
    g should be bounded    ⇐⇒    s ≠ 0 is real.                                   (3.47)
Furthermore, replacing s by −s does not change the answer in (3.46). Thus

    0 < s < ∞.                                                                    (3.48)
In this example we end up with a continuum of separated solutions,
as opposed to the two prior examples, where discrete sets occurred.
Putting it all together, we now write the solution to the problem in (3.41 – 3.42) as follows

    u = ∫_0^∞ ( sinh(s θ)/sinh(s α) ) ( r^{i s} − r^{−i s} ) W(s) ds,             (3.49)
where W is computed in (3.52) below.
Remark 3.3 Start with the complex Fourier Transform

    f(ζ) = ∫_{−∞}^{∞} f̂(s) e^{i s ζ} ds,    where    f̂(s) = (1/2π) ∫_{−∞}^{∞} f(ζ) e^{−i s ζ} dζ.    (3.50)
Apply it to an odd function. The answer can then be manipulated into the sine Fourier Transform

    f(ζ) = ∫_0^∞ F(s) sin(s ζ) ds,    where    F(s) = (2/π) ∫_0^∞ f(ζ) sin(s ζ) dζ,    (3.51)

and 0 < ζ, s < ∞. Change variables, with 0 < r = e^{−ζ} < 1, w(r) = f(ζ), and W(s) = −(1/2i) F(s). Then

    w(r) = ∫_0^∞ W(s) ( r^{i s} − r^{−i s} ) ds,    where    W(s) = (1/2π) ∫_0^1 ( r^{−i s} − r^{i s} ) w(r) (dr/r),    (3.52)

which is another example of a transform pair associated with the spectrum of an operator (see below).
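As a brief elaboration (this step is only implied in the original remark), the substitution r = e^{−ζ} turns (3.51) into (3.52) because

    sin(s ζ) = sin(−s ln r) = −( r^{i s} − r^{−i s} )/(2i)    and    dζ = −dr/r,

so the factor −1/(2i) is absorbed into W(s) = −(1/2i) F(s), and the range ζ ∈ (0, ∞) maps onto r ∈ (0, 1).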
Finally: What is behind (3.52)? Why should we expect something like this? Note that the problem for g can be written in the form

    L g = µ g,    where    L g = r (r g′)′,                                       (3.53)

g(1) = 0 and g is bounded (more accurately: the inequality in (3.54) applies). This is an eigenvalue problem in 0 < r < 1. Further, consider the set of all the functions such that

    ∫_0^1 |g̃|²(r) (dr/r) < ∞,                                                    (3.54)

and define the scalar product

    ⟨f̃, g̃⟩ = ∫_0^1 f̃*(r) g̃(r) (dr/r).                                           (3.55)

With this scalar product L is self-adjoint. However, it does not have any discrete spectrum,^3 only continuum spectrum — with the pseudo-eigenfunctions given in (3.46), for 0 < s < ∞. This continuum spectrum is what is associated with the formulas in (3.52). In particular, note the presence of the scalar product (3.55) in the formula for W.

^3 No solutions that satisfy g(1) = 0 and (3.54).
4 ExPD21a. Laplace equation in a rectangle (Dirichlet).

4.1 ExPD21a statement: Laplace equation in a rectangle (Dirichlet).
Consider the question of determining the steady state temperature in a thin rectangular plate such that: (a) The temperature is prescribed at the edges of the plate, and (b) The facets of the plate (top and bottom) are insulated. The solution to this problem can be written as the sum of the solutions to four problems of the form^4

    Txx + Tyy = 0,    for 0 ≤ x ≤ π    and    0 ≤ y ≤ L,                          (4.1)

with boundary conditions

    T(0, y) = T(π, y) = T(x, 0) = 0    and    T(x, L) = h(x),                     (4.2)

where L > 0 is a constant, and h = h(x) is some prescribed function. In turn, the solution to (4.1 – 4.2) can be written as a (possibly infinite) sum of separation of variables solutions.

^4 Here we use non-dimensional variables, selected so that one side of the plate has length π.
Separation of variables solutions to (4.1 – 4.2) are product solutions of the form

    T(x, y) = φ(x) ψ(y),                                                          (4.3)

where φ and ψ are some functions. In particular, from (4.2), it must be that

    φ(0) = φ(π) = ψ(0) = 0    and    h(x) = φ(x) ψ(L).                            (4.4)
However, for the separated solutions, the function h is NOT prescribed: It is whatever the form in (4.3)
allows. The general h is obtained by adding separation of variables solutions.
These are the tasks in this exercise:
1. Find ALL the separation of variables solutions.
2. In exercise ExPD04 we stated that all the solutions to the Laplace equation — i.e.: (4.1) — must have
the form T = f (x − i y) + g(x + i y), for some functions f and g. Note that these two functions
must make sense for complex arguments, and have derivatives in terms of these complex arguments — which means that f and g must be analytic functions.
Show that the result in the paragraph above applies to all the separated solutions found in item 1.
Namely: find the functions f and g that correspond to each separated solution.
4.2 ExPD21a answer: Laplace equation in a rectangle (Dirichlet).
Substituting (4.3) into (4.1), and dividing the result by T = φ ψ, yields

    φ″/φ + ψ″/ψ = 0,                                                              (4.5)

where we use primes to denote derivatives. Since the first term in (4.5) is a function of x only, and the second a function of y only, it follows that both must be constant. Hence

    φ″ + µ φ = 0,    and    ψ″ − µ ψ = 0,                                         (4.6)

where µ is a constant. In addition φ(0) = φ(π) = 0, from which we conclude — upon use of the additional condition ψ(0) = 0 — that: µ = n², n is a natural number,^5

    φ = a sin(n x),    and    ψ = b sinh(n y),                                    (4.7)

where both a and b are constants. In particular h = a b sinh(n L) sin(n x). It follows that

    T = a b sin(n x) sinh(n y) = (a b/2i) ( cos(n (x − i y)) − cos(n (x + i y)) ).    (4.8)

The answers to the tasks 1 and 2 should be obvious from (4.7) and (4.8).

^5 That is: n = 1, 2, 3, . . .
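As a small amplification (not spelled out in the original answer), the identity used in (4.8) follows from cos(a ∓ i b) = cos(a) cosh(b) ± i sin(a) sinh(b), so that

    cos(n (x − i y)) − cos(n (x + i y)) = 2 i sin(n x) sinh(n y).

Dividing by 2i gives sin(n x) sinh(n y); hence, for each separated solution, one may take f(z) = (a b/2i) cos(n z) and g(z) = −(a b/2i) cos(n z), evaluated at z = x − i y and z = x + i y respectively, which answers task 2 explicitly.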
5 ExPD40a. Heat equation with mixed boundary conditions.

5.1 ExPD40a statement: Heat equation with mixed boundary conditions.
Consider the problem

    Tt = Txx    for 0 < x < π,                                                    (5.1)

with boundary conditions^6 T(0, t) = 0 and Tx(π, t) = 0. Then

1. Find all the normal mode solutions — that is: non-trivial solutions of the form T = eλ t φ(x), where λ is a constant.

2. Show that the associated eigenvalue problem^7 is self-adjoint. Hence the spectrum is real, and the eigenfunctions can be used to represent “arbitrary” functions. In this case the spectrum is a discrete set of eigenvalues {λn}, and the eigenfunctions form an orthogonal basis.

3. Given initial conditions T(x, 0) = T0(x), write the solution to (5.1) as a series involving the normal modes

       T = Σ_n an e^{λn t} φn(x).                                                 (5.2)

   Write explicit expressions^8 for the coefficients an.

^6 Fixed temperature on the left and no heat flux on the right.
^7 That is: λ φ = φxx, with the appropriate boundary conditions.
^8 They should be given by integrals involving T0 and the eigenfunctions φn.
5.2 ExPD40a answer: Heat equation with mixed boundary conditions.
1. The equation for λ and φ is the eigenvalue problem

       λ φ = φxx,    with    φ(0) = φx(π) = 0.                                    (5.3)

   From the equation and the boundary condition at x = 0, φ must have the form φ(x) = sin(c x) for some (possibly complex^9) constant c ≠ 0, with λ = −c². The boundary condition at x = π then reduces to cos(c π) = 0, from which we see that the normal modes are

       λn = −(n + 1/2)²    and    φn = sin( (n + 1/2) x ),                        (5.4)

   where n = 0, 1, 2, . . .

   ^9 Since λ = −c², from part 2 we know that c has to be either real or pure imaginary.
2. Let A be the second derivative operator, and let f and g be functions satisfying the boundary conditions f(0) = g(0) = f′(π) = g′(π) = 0, where the prime denotes differentiation. Then

       ⟨A f, g⟩ = ∫_0^π (A f)(x) g(x) dx = ∫_0^π f″(x) g(x) dx
                = −∫_0^π f′(x) g′(x) dx = ∫_0^π f(x) g″(x) dx = ⟨f, A g⟩,         (5.5)

   where the end point contributions in the integrations by parts vanish because of the boundary conditions. Hence A = A†.
3. Given initial conditions T(x, 0) = T0(x) for (5.1), the solution can be written in the form

       T = Σ_{n = 0}^{∞} an e^{λn t} φn(x),                                       (5.6)

   where the λn and φn are as in (5.4), and the coefficients an are given by

       an = ( ∫_0^π T0(x) φn(x) dx ) / ( ∫_0^π φn²(x) dx ) = (2/π) ∫_0^π T0(x) φn(x) dx.    (5.7)
Note: The sense in which series like the one in (5.7) converge is subtle. For example: (i) If T0 is square
integrable, then for t = 0 the series converges in the square integral sense — L2 sense. (ii) If T0 is
continuous and satisfies the boundary conditions, then point-wise convergence also occurs. Finally,
because of the exponential decay (as n → ∞) provided by the terms eλn t for t > 0, convergence for
t > 0 is as “good” as one could desire.
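As an illustration only (nothing here is requested by the exercise), the following minimal MATLAB sketch evaluates the series (5.6) with the coefficients (5.7) for an arbitrarily chosen initial temperature; T0, the truncation N, and all variable names are choices made here.

    % Evaluate the normal mode series (5.6)-(5.7) for T_t = T_xx with
    % T(0,t) = 0 and T_x(pi,t) = 0. The initial data T0 is an arbitrary
    % example that satisfies both boundary conditions.
    N  = 40;                               % number of modes kept
    x  = linspace(0, pi, 400);
    T0 = x.*(2*pi - x)/pi^2;               % example initial data: T0(0) = 0, T0'(pi) = 0
    t  = 0.05;                             % time at which to evaluate T
    T  = zeros(size(x));
    for n = 0:N-1
        lam = -(n + 1/2)^2;                % eigenvalue, eq. (5.4)
        phi = sin((n + 1/2)*x);            % eigenfunction, eq. (5.4)
        a_n = (2/pi)*trapz(x, T0.*phi);    % coefficient, eq. (5.7)
        T   = T + a_n*exp(lam*t)*phi;      % add the mode, eq. (5.6)
    end
    plot(x, T0, '--', x, T, '-'), legend('T(x,0)', 'T(x,t)')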
6 ExPD50. Normal modes do not always work. #01.

6.1 ExPD50 statement: Normal modes do not always work. #01.
Normal mode expansions are not guaranteed to work when the associated eigenvalue problem is not self-adjoint (more generally, not normal^10). Here is an example: consider the problem

    ut + ux = 0    for 0 < x < 1,    with boundary condition u(0, t) = 0.         (6.1)
Find all the normal mode solutions to this problem,^11 and show that the general solution to (6.1) cannot be written as a linear combination of these modes.

^10 An operator A is normal if it commutes with its adjoint: A A† = A† A.
^11 That is, non-trivial solutions of the form u = eλ t φ(x), where λ is a constant.
6.2 ExPD50 answer: Normal modes do not always work. #01.
Substitute u = eλ t φ(x) into (6.1). This yields the eigenvalue problem

    λ φ + φ′ = 0,    for 0 < x < 1,    with φ(0) = 0,                             (6.2)

where a prime denotes a derivative. The general solution of the ode in (6.2) is φ = C e^{−λ x}, and the boundary condition φ(0) = 0 forces C = 0. Hence φ ≡ 0 for any λ. Thus there are no normal modes at all!
7 Introduction. Experimenting with Numerical Schemes.
Consider the numerical schemes that follow after this introduction, for the specified equations. YOUR TASK here is to experiment (numerically) with them so as to answer the question: Are they sensible? Specifically:

(i 1) Which schemes give rise to the type of behavior illustrated by the “bad” scheme in the GBNS lecture script^12 in the 18.311 MatLab Toolkit?

(i 2) Which ones behave properly as ∆x and ∆t vanish?

SHOW THAT they all arise from some approximation of the derivatives in the equations (i.e.: the schemes are CONSISTENT), similar to the approximations used to derive the “good” and “bad” schemes used in the GBNS lecture script^13 in the 18.311 MatLab Toolkit.

I recommend that you read the Stability of Numerical Schemes for PDE’s notes in the course WEB page before you do these problems.

^12 Alternatively: check the Stability of Numerical Schemes for PDE’s notes in the course WEB page.
^13 Alternatively: check the Stability of Numerical Schemes for PDE’s notes in the course WEB page.
Remark 7.1 Some of these schemes are “good” and some are not. For the “good” schemes you will
find that — as ∆x gets small — restrictions are needed on ∆t to avoid bad behavior (i.e.: fast growth of
grid–scale oscillations).
Specifically: in all the schemes a parameter appears, the parameter being λ = ∆t/∆x in some cases and ν = ∆t/(∆x)² in others. You will need to keep this parameter smaller than some constant to get the “good” schemes to behave. That is: λ < λc, or ν < νc. For the “bad” schemes, it will not matter how small λ (or ν) is. Figuring out these constants is ALSO PART OF THE PROBLEM. For the assigned schemes the constants, when they exist (“good” schemes), are simple O(1) numbers, somewhere between 1/4 and 2. That is, stuff like 1/2, 2/3, 1, 3/2, etc., not things like λc = π/4 or νc = √e. You should be able to, easily, find them by careful numerical experimentation!
Remark 7.2 In order to do these problems you may need to write your own programs. If you choose
to use MatLab for this purpose, there are several scripts in the 18.311 MatLab Toolkit that can easily be
adapted for this. The relevant scripts are:
• The schemes used by the GBNS lecture MatLab script are implemented in the script InitGBNS.
• The two series of scripts
PS311 Scheme A, PS311 Scheme B, . . . and PS311 SchemeDIC A, PS311 SchemeDIC B, . . . ,
have several examples of schemes already set up in an easy-to-use format. Most of the algorithms here
are already implemented in these scripts. The scripts are written so that modifying them to use with
a different scheme involves editing only a few (clearly indicated) lines of the code.
• Note that the scripts in the 18.311 MatLab Toolkit are written avoiding the use of “for” loops and
making use of the vector forms MatLab allows — things run a lot faster this way. Do your programs
this way too; it is good practice.
Remark 7.3 Do not put lots of graphs and numerical output in your answers. Explain what you did
and how you arrived at your conclusions, and illustrate your points with only a few selected graphs.
Remark 7.4 In all cases the notation:

    x_n = x_0 + n ∆x,    t_k = t_0 + k ∆t,    and    u^k_n = u(x_n, t_k),

is used.
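For concreteness, here is a minimal MATLAB driver of the kind Remark 7.2 refers to, written from scratch here rather than taken from the 18.311 MatLab Toolkit. It advances scheme B of § 8 on a periodic grid; the grid size, the value of λ, the initial data, and all variable names are choices made here for illustration.

    % Minimal experiment driver: advance u_t + u_x = 0 with scheme B (backward
    % differences in space, forward in time) on a periodic grid, and watch
    % whether grid scale oscillations grow. Edit lambda and the scheme line
    % to experiment with other schemes and parameter values.
    Nx     = 200;                           % number of grid points
    dx     = 2*pi/Nx;
    lambda = 0.8;                           % lambda = dt/dx; try values above and below 1
    dt     = lambda*dx;
    x      = dx*(0:Nx-1);
    u      = exp(-10*cos(x/2).^2);          % smooth 2*pi-periodic initial data (arbitrary)
    nsteps = round(2*pi/dt);                % integrate for roughly one period
    for k = 1:nsteps
        u = u - lambda*(u - u([end, 1:end-1]));   % scheme B (see Section 8)
    end
    plot(x, u), title(sprintf('scheme B, lambda = %g', lambda))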
8 Problem GBNS02 scheme B. Backward differences for ut + ux = 0.

8.1 Statement: GBNS02 Scheme B. Backward differences for ut + ux = 0.

Equation: ut + ux = 0.

Scheme: u^{k+1}_n = u^k_n − λ (u^k_n − u^k_{n−1}),    where λ = ∆t/∆x.
8.2 Answer: GBNS02 Scheme B. Backward differences for ut + ux = 0.
This is the scheme implemented by the PS311 Scheme B and the PS311 SchemeDIC B scripts in the 18.311 MatLab Toolkit. You should have found that this scheme is stable: for λ < λc = 1 the numerical scheme does not amplify the noise. Furthermore, this scheme is convergent — as ∆x → 0 the numerical solution converges to the exact solution, provided that λ < λc — because it is also consistent. Consistency follows because the scheme equations can be written in the form

    (u^{k+1}_n − u^k_n)/∆t + (u^k_n − u^k_{n−1})/∆x = 0,

which are satisfied with errors of O(∆t, ∆x) by any smooth solution of the equation ut + ux = 0.
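The experimental threshold λc = 1 can also be confirmed by a von Neumann calculation (not required by the problem, but a useful cross-check). Substituting u^k_n = g^k e^{i θ n} into the scheme gives the amplification factor

    g = 1 − λ (1 − e^{−i θ}),    so that    |g|² = 1 − 2 λ (1 − λ) (1 − cos θ).

For 0 < λ ≤ 1 this yields |g| ≤ 1 for every θ (no growth of grid scale modes), while for λ > 1 one gets |g| > 1 whenever θ ≠ 0. This matches λc = 1.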
9 Problem GBNS03 scheme C. Centered differences for ut + ux = 0.

9.1 Statement: GBNS03 Scheme C. Centered differences for ut + ux = 0.

Equation: ut + ux = 0.
Scheme: u^{k+1}_n = u^k_n − (1/2) λ (u^k_{n+1} − u^k_{n−1}),    where λ = ∆t/∆x.

9.2 Answer: GBNS03 Scheme C. Centered differences for ut + ux = 0.
This is the scheme implemented by the PS311 Scheme C and the PS311 SchemeDIC C scripts in the 18.311 MatLab Toolkit. You should have found that this scheme is unstable: there is no value λc > 0 such that (for λ < λc) the numerical solution does not develop exponentially large, grid scale, oscillations as ∆x → 0. On the other hand, the scheme is consistent. Its equations can be written in the form

    (u^{k+1}_n − u^k_n)/∆t + (u^k_{n+1} − u^k_{n−1})/(2 ∆x) = 0,

which are satisfied with errors of O(∆t, (∆x)²) by any smooth solution of the equation ut + ux = 0.
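Again as a cross-check beyond the numerical experiments, substituting u^k_n = g^k e^{i θ n} into scheme C gives

    g = 1 − i λ sin θ,    so that    |g|² = 1 + λ² sin²θ > 1    for θ ≠ 0,

for every λ > 0. Hence no choice of λc controls the growth of the grid scale modes, consistent with the observed instability.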
THE END.