18.311 — MIT (Spring 2015)
Rodolfo R. Rosales (MIT, Math. Dept., E17-410, Cambridge, MA 02139).
May 1, 2015.
Problem Set # 7. Due: Last day of classes.
Turn it in before 3:00 PM, in the boxes provided in Room E18-366.
IMPORTANT:
— Turn in the regular and the special problems stapled in two SEPARATE packages.
— Print your name on EACH PAGE of your answers (pages, quite often, become separated from the rest of the package).
— On page one of each package print the names of the other members of your group.
Contents
1 Short notes on separation of variables.
  1.1 Example: heat equation in a square, with zero boundary conditions.
  1.2 Example: heat equation in a circle, with zero boundary conditions.
  1.3 Example: Laplace equation in a circle sector, with Dirichlet boundary conditions, non-zero on one side.
2 Regular Problems.
  2.1 Statement: Haberman problem 77.03 (shock speed for weak shocks)
  2.2 Statement: Haberman problem 78.03 (invariant density)
  2.3 ExPD50 statement: Normal modes do not always work. # 01.
  2.4 Introduction. Experimenting with Numerical Schemes.
  2.5 Statement: GBNS02 Scheme B. Backward differences for ut + ux = 0
  2.6 Statement: GBNS03 Scheme C. Centered differences for ut + ux = 0
3 Special Problems.
  3.1 ExPD21a statement: Laplace equation in a rectangle (Dirichlet).
  3.2 ExPD40a statement: Heat equation with mixed boundary conditions.
1 Short notes on separation of variables.
THIS IS NOT A PROBLEM. This section is for background.
Imagine that you are trying to solve a problem whose solution is a function of several variables, e.g.: u = u(x, y, z). Finding this solution directly may be very hard, but you may be able to find other solutions with the simpler structure
u = f(x) g(y) h(z),                                (1.1)
where f, g, and h are some functions. This is called a separated variables solution.
If the problem that you are trying to solve is linear, and you can find enough solutions of the form in (1.1), then you may be able to solve the problem using linear combinations of separated variables solutions.
The technique described in the paragraph above is called separation of variables. Note that
1. When the problem is a pde and solutions of the form in (1.1) are allowed, the pde reduces to three
ode — one for each of f , g, and h. Thus the solution process is enormously simplified.
2. In (1.1) all three variables are separated. But it is also possible to seek partially separated solutions. For example
u = f(x) v(y, z).                                (1.2)
This is what happens when you look for normal mode solutions to time evolution equations of the form
u_t = L u,                                (1.3)
where L is a linear operator acting on the space variables \vec{r} = (x, y, . . .) only — for example: L = ∂_x^2 + ∂_y^2 + . . . The normal mode solutions have the time variable separated,
u = e^{λ t} φ(\vec{r}),    where    λ = constant,                                (1.4)
and reduce the equation to an eigenvalue problem in space only
λ φ = L φ.                                (1.5)
(A brief one-dimensional illustration is given right after this list.)
3. Separation of variables does not always work. In fact, it rarely works if you just think of random
problems. But it works for many problems of physical interest. For example: it works for the heat
equation, but only for a few very symmetrical domains (rectangles, circles, cylinders, ellipses). But
these are enough to build intuition as to how the equation works. Many properties valid for generic
domains can be gleaned from the solutions in these domains.
4. Even if you cannot find enough separated solutions to write all the solutions as linear combinations
of them, or if the problem is nonlinear and you can get just a few separated solutions, sometimes
this is enough to discover interesting physical effects, or gain intuition as to the system behavior.
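To make item 2 concrete, here is a brief one-dimensional illustration (it is not one of the examples below, and the details are standard): for the heat equation u_t = u_{xx} on 0 < x < π, with u(0, t) = u(π, t) = 0, the normal modes are
u = e^{λ t} φ(x),    where    λ φ = φ''    and    φ(0) = φ(π) = 0,
so that φ = sin(n x) and λ = -n^2, with n = 1, 2, 3, . . .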
1.1 Example: heat equation in a square, with zero boundary conditions.
Consider the problem
T_t = Δ T = T_{xx} + T_{yy},                                (1.6)
in the square domain 0 < x, y < π, with T vanishing along the boundary, and with some initial data T(x, y, 0) = W(x, y). To solve this problem by separation of variables, we first look for solutions of the form
T = f(t) g(x) h(y),                                (1.7)
which satisfy the boundary conditions, but not the initial data. Why? This is important:
A. We would like to solve the problem for generic initial data, while solutions of the form (1.7) can only
achieve initial data for which W = f (0) g(x) h(y). This is very restrictive, even more so because (as
we will soon see) the functions g and h will be very special. To get generic initial data, we have no
hope unless we use arbitrary linear combinations of solutions like (1.7).
B. Arbitrary linear combinations of solutions like (1.7) will satisfy the boundary conditions if and only if each of them satisfies them.
Substituting (1.7) into (1.6) yields
f' g h = f g'' h + f g h'',                                (1.8)
where the primes indicate derivatives with respect to the respective variables. Dividing through by T = f g h now yields
\frac{f'}{f} = \frac{g''}{g} + \frac{h''}{h}.                                (1.9)
Since each of the terms in this last equation is a function of a different independent variable, the equation
can be satisfied only if each term is a constant. Thus
\frac{g''}{g} = c_1,    \frac{h''}{h} = c_2,    and    \frac{f'}{f} = c_1 + c_2,                                (1.10)
where c_1 and c_2 are constants. Now the problem has been reduced to a set of three simple ode. Furthermore, for (1.7) to satisfy the boundary conditions in (1.6), we need:
g(0) = g(π) = h(0) = h(π) = 0,                                (1.11)
which restricts the possible choices for the constants c_1 and c_2. The equations in (1.10 – 1.11) are easily solved, and yield
g = sin(n x),    with c_1 = -n^2,                                (1.12)
h = sin(m y),    with c_2 = -m^2,                                (1.13)
f = e^{-(n^2+m^2) t},                                (1.14)
where n = 1, 2, 3, . . . and m = 1, 2, 3, . . . are natural numbers. (We set the arbitrary multiplicative constant in each of these solutions to one; given (1.15 – 1.16), there is no loss of generality in this.) The solution to the problem in (1.6) can then be written in the form
T = \sum_{n,m=1}^{∞} w_{nm} sin(n x) sin(m y) \, e^{-(n^2+m^2) t},                                (1.15)
where the coefficients w_{nm} follow from the double sine-Fourier series expansion (of the initial data)
W = \sum_{n,m=1}^{∞} w_{nm} sin(n x) sin(m y).                                (1.16)
That is
w_{nm} = \frac{4}{π^2} \int_0^π dx \int_0^π dy \, W(x, y) sin(n x) sin(m y).                                (1.17)
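As a quick illustration of (1.15 – 1.17), and not part of the problem set, here is a minimal MatLab sketch that computes the coefficients w_{nm} by numerical quadrature for a sample (hypothetical) initial condition, and then evaluates the truncated series at a later time.

    % Illustration only: truncate (1.15) and compute w_nm from (1.17) by
    % 2D trapezoidal quadrature. The initial condition W is a hypothetical choice.
    N = 20;                                  % series truncation (n, m = 1..N)
    x = linspace(0, pi, 101);  y = x;
    [X, Y] = meshgrid(x, y);
    W = X.*(pi - X).*Y.*(pi - Y);            % sample initial data W(x, y)
    t = 0.1;                                 % evaluation time
    T = zeros(size(X));
    for n = 1:N
      for m = 1:N
        wnm = (4/pi^2) * trapz(y, trapz(x, W.*sin(n*X).*sin(m*Y), 2));
        T   = T + wnm * sin(n*X).*sin(m*Y) * exp(-(n^2 + m^2)*t);
      end
    end
    surf(X, Y, T), xlabel('x'), ylabel('y')  % plot T(x, y, t) at time t

Increasing N and the grid resolution improves the approximation; the same structure works for any initial data W.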
1.2 Example: heat equation in a circle, with zero boundary conditions.
We now, again, consider the heat equation with zero boundary conditions, but on a circle instead of a
square. That is, using polar coordinates, we want to solve the problem
T_t = Δ T = \frac{1}{r^2} \left( r (r T_r)_r + T_{θθ} \right),                                (1.18)
for 0 ≤ r < 1, with T (1, θ, t) = 0, and some initial data T (r, θ, 0) = W (r, θ). To solve the problem using
separation of variables, we look for solutions of the form
T = f(t) g(r) h(θ),                                (1.19)
which satisfy the boundary conditions, but not the initial data. The reasons for this are the same as in
items A and B above — see § 1.1. In addition, note that
C. We must use polar coordinates, otherwise solutions of the form (1.19) cannot satisfy the boundary
conditions. This exemplifies an important feature of the separation of variables method:
The separation must be done in a coordinate system where the boundaries are coordinate surfaces.
Substituting (1.19) into (1.18) yields
f' g h = \frac{1}{r^2} \left( f r (r g')' h + f g h'' \right),                                (1.20)
where, as before, the primes indicate derivatives. Dividing through by T = f g h yields
\frac{f'}{f} = \frac{(r g')'}{r g} + \frac{h''}{r^2 h}.                                (1.21)
Since the left side in this equation is a function of time only, while the right side is a function of space
only, the two sides must be equal to the same constant. Thus
f' = -λ f,                                (1.22)
and
\frac{r (r g')'}{g} + λ r^2 + \frac{h''}{h} = 0,                                (1.23)
where λ is a constant. Here, again, we have a situation involving two functions of different variables being
equal. Hence
h'' = μ h,                                (1.24)
and
\frac{1}{r} (r g')' + \left( λ + \frac{μ}{r^2} \right) g = 0,                                (1.25)
where µ is another constant. The problem has now been reduced to a set of three ode. Furthermore, from
the boundary conditions and the fact that θ is the polar angle, we need:
g(1) = 0    and    h is periodic of period 2π.                                (1.26)
In addition, g must be non-singular at r = 0 — the singularity that appears at r = 0 in equation (1.25) is due to the coordinate system singularity; equation (1.18) is perfectly fine there.
It follows that we must have μ = -n^2 and
h = e^{i n θ},    where n is an integer.                                (1.27)
Notes:
– As in § 1.1, here and below, we set the arbitrary multiplicative constants in each of the ode solutions
to one. As before, there is no loss of generality in this.
– Instead of complex exponentials, the solutions to (1.24) could be written in terms of sines and cosines. But complex exponentials provide a more compact notation.
Then (1.25) takes the form
\frac{1}{r} (r g')' + \left( λ - \frac{n^2}{r^2} \right) g = 0.                                (1.28)
This is a Bessel equation of integer order. The non-singular (at r = 0) solutions of this equation are proportional to the Bessel function of the first kind J_{|n|}. Thus we can write
g = J_{|n|}(κ_{|n| m} r)    and    λ = κ_{|n| m}^2,                                (1.29)
where m = 1, 2, 3, . . ., and κ_{|n| m} > 0 is the m-th zero of J_{|n|}.
Remark 1.1 That (1.28) turns out to be a well known equation should not be a surprise. Bessel functions,
and many other special functions, were first introduced in the context of problems like the one here — i.e.,
solving pde (such as the heat or Laplace equations) using separation of variables.
Putting it all together, we see that the solution to the problem in (1.18) can be written in the form
T = \sum_{n=-∞}^{∞} \sum_{m=1}^{∞} w_{n m} J_{|n|}(κ_{|n| m} r) \exp\left( i n θ - κ_{|n| m}^2 t \right),                                (1.30)
where the coefficients w_{n m} follow from the double (Complex Fourier) – (Fourier – Bessel) expansion
W = \sum_{n=-∞}^{∞} \sum_{m=1}^{∞} w_{n m} J_{|n|}(κ_{|n| m} r) e^{i n θ}.                                (1.31)
That is:
w_{n m} = \frac{1}{π J_{|n|+1}^2(κ_{|n| m})} \int_0^{2π} dθ \int_0^1 r \, dr \, W(r, θ) J_{|n|}(κ_{|n| m} r) e^{-i n θ}.                                (1.32)
Remark 1.2 You may wonder how (1.32) arises. Here is a sketch:
(i) For θ we use a Complex-Fourier series expansion. For any 2 π-periodic function
G(θ) = \sum_{n=-∞}^{∞} a_n e^{i n θ},    where    a_n = \frac{1}{2π} \int_0^{2π} G(θ) e^{-i n θ} dθ.                                (1.33)
(ii) For r we use the Fourier-Bessel series expansion explained in item (iii).
(iii) Note that (1.28), for any fixed n, is an eigenvalue problem in 0 < r < 1. Namely
L g = λ g,    where    L g = -\frac{1}{r} (r g')' + \frac{n^2}{r^2} g,                                (1.34)
g is regular for r = 0, and g(1) = 0. Without loss of generality, assume that n ≥ 0, and consider the set of all the (real valued) functions such that \int_0^1 g̃^2(r) \, r \, dr < ∞. Then define the scalar product
⟨f̃, g̃⟩ = \int_0^1 f̃(r) g̃(r) \, r \, dr.                                (1.35)
18.311 MIT, (Rosales)
Short notes on separation of variables.
6
With this scalar product L is self-adjoint, and it yields a complete set of orthogonal eigenfunctions
φ_m = J_n(κ_{n m} r)    and    λ_m = κ_{n m}^2,    where    m = 1, 2, 3, . . .                                (1.36)
Thus one can expand
F(r) = \sum_{m=1}^{∞} b_m φ_m(r),    where    b_m = \frac{1}{\int_0^1 r φ_m^2(r) \, dr} \int_0^1 F(r) φ_m(r) \, r \, dr.                                (1.37)
Finally, note that \int_0^1 r φ_m^2(r) \, dr follows from
\int_0^1 r J_n^2(κ_{n m} r) \, dr = \frac{1}{2} J_{n+1}^2(κ_{n m}).                                (1.38)
We will not prove this identity here.
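As an aside (not part of the problem set), the zeros κ_{n m} and the identity (1.38) are easy to check numerically. A minimal MatLab sketch, assuming the sample order n = 2 and rough initial guesses for the first three zeros of J_2:

    n = 2;                                    % sample order (hypothetical choice)
    guess = [5 8 12];                         % rough initial guesses for zeros of J_2
    for m = 1:3
      kappa = fzero(@(z) besselj(n, z), guess(m));          % m-th zero of J_n
      lhs   = integral(@(r) r.*besselj(n, kappa*r).^2, 0, 1);
      rhs   = 0.5*besselj(n+1, kappa)^2;                     % right side of (1.38)
      fprintf('m = %d: kappa = %.4f, lhs = %.6e, rhs = %.6e\n', m, kappa, lhs, rhs);
    end

The two printed values should agree to quadrature accuracy for each m.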
1.3 Example: Laplace equation in a circle sector, with Dirichlet boundary conditions, non-zero on one side.
Consider the problem
0 = Δ u = \frac{1}{r^2} \left( r (r u_r)_r + u_{θθ} \right),    0 < r < 1    and    0 < θ < α,                                (1.39)
where α is a constant, with 0 < α < 2 π. The boundary conditions are
u(1, θ) = 0,    u(r, 0) = 0,    and    u(r, α) = w(r),                                (1.40)
for some given function w. To solve the problem using separation of variables, we look for solutions of the
form
u = g(r) h(θ),                                (1.41)
which satisfy the boundary conditions at r = 1 and θ = 0, but not the boundary condition at θ = α. The
reasons are as before: we aim to obtain the solution for general w using linear combinations of solutions
of the form (1.41) — see items A and B in § 1.1, as well as item C in § 1.2.
Substituting (1.41) into (1.39) yields, after a bit of manipulation,
\frac{r (r g')'}{g} + \frac{h''}{h} = 0.                                (1.42)
Using the same argument as in the prior examples, we conclude that
r (r g')' - μ g = 0    and    h'' + μ h = 0,                                (1.43)
where μ is some constant, g(1) = 0, and h(0) = 0. Again: the problem is reduced to solving ode. In fact, it is easy to see (selecting the multiplicative constants so that the correct solution is obtained for s = 0) that we must have
h = \frac{1}{s} sinh(s θ)    and    g = \frac{r^{i s} - r^{-i s}}{s},    where    μ = -s^2,                                (1.44)
and as yet we know nothing about s, other than it is some (possibly complex) constant.
In § 1.2, we argued that g should be non-singular at r = 0, since the origin was no different from any other
point inside the circle for the problem in (1.18) — the singularity in the equation for g in (1.25) is merely
a consequence of the coordinate system singularity at r = 0. We cannot make this argument here, since
the origin is on the boundary for the problem in (1.39 – 1.40) — there is no reason why the solutions should
be differentiable across the boundary! We can only state that
g should be bounded    ⟺    s ≠ 0 is real.                                (1.45)
Furthermore, replacing s by −s does not change the answer in (1.44). Thus
0 < s < ∞.                                (1.46)
In this example we end up with a continuum of separated solutions,
as opposed to the two prior examples, where discrete sets occurred.
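As a quick sanity check (not part of the problem set), one can verify numerically that a separated solution built from (1.44) satisfies Laplace's equation. A minimal MatLab sketch, with a hypothetical value of s and a sample interior point, and with the constant factors 1/s dropped (they do not affect the equation):

    s  = 1.7;                                  % sample (hypothetical) value of s
    us = @(r, th) sinh(s*th).*(r.^(1i*s) - r.^(-1i*s));   % separated solution, constants dropped
    r0 = 0.6;  th0 = 0.8;  h = 1e-4;           % sample interior point and step size
    urr = (us(r0+h, th0) - 2*us(r0, th0) + us(r0-h, th0))/h^2;
    ur  = (us(r0+h, th0) - us(r0-h, th0))/(2*h);
    utt = (us(r0, th0+h) - 2*us(r0, th0) + us(r0, th0-h))/h^2;
    lap = urr + ur/r0 + utt/r0^2;              % polar Laplacian, as in (1.39)
    disp(abs(lap))                             % near zero, up to finite-difference and roundoff error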
Putting it all together, we now write the solution to the problem in (1.39 – 1.40) as follows
u = \int_0^{∞} \frac{sinh(s θ)}{sinh(s α)} \left( r^{i s} - r^{-i s} \right) W(s) \, ds,                                (1.47)
where W is computed in (1.50) below.
Remark 1.3 Start with the complex Fourier Transform
f(ζ) = \int_{-∞}^{∞} \hat{f}(s) e^{i s ζ} ds,    where    \hat{f}(s) = \frac{1}{2π} \int_{-∞}^{∞} f(ζ) e^{-i s ζ} dζ.                                (1.48)
Apply it to an odd function. The answer can then be manipulated into the sine Fourier Transform
f(ζ) = \int_0^{∞} F(s) sin(s ζ) \, ds,    where    F(s) = \frac{2}{π} \int_0^{∞} f(ζ) sin(s ζ) \, dζ,                                (1.49)
and 0 < ζ, s < ∞. Change variables, with 0 < r = e^{-ζ} < 1, w(r) = f(ζ), and W(s) = -\frac{1}{2i} F(s). Then
w(r) = \int_0^{∞} W(s) \left( r^{i s} - r^{-i s} \right) ds,    where    W(s) = \frac{1}{2π} \int_0^1 \frac{r^{-i s} - r^{i s}}{r} \, w(r) \, dr,                                (1.50)
which is another example of a transform pair associated with the spectrum of an operator (see below).
Finally: What is behind (1.50)? Why should we expect something like this? Note that the problem for g can
be written in the form
L g = μ g,    where    L g = r (r g')',                                (1.51)
g(1) = 0 and g is bounded (more accurately: the inequality in (1.52) applies). This is an eigenvalue problem in 0 < r < 1. Further, consider the set of all the functions such that
\int_0^1 |g̃(r)|^2 \, \frac{dr}{r} < ∞,                                (1.52)
and define the scalar product
⟨f̃, g̃⟩ = \int_0^1 f̃^*(r) g̃(r) \, \frac{dr}{r}.                                (1.53)
With this scalar product L is self-adjoint. However, it does not have any discrete spectrum (there are no solutions that satisfy g(1) = 0 and (1.52)), only continuum spectrum — with the pseudo-eigenfunctions given in (1.44), for 0 < s < ∞. This continuum spectrum is what is associated with the formulas in (1.50). In particular, note the presence of the scalar product (1.53) in the formula for W.
2 Regular Problems.
2.1 Statement: Haberman problem 77.03 (shock speed for weak shocks)
A weak shock is a shock in which the shock strength (the difference in densities) is small. For a weak
shock, show that the shock velocity is approximately the average of the density wave velocities associated
with the two densities. [Hint: Use Taylor series methods.]
2.2 Statement: Haberman problem 78.03 (invariant density)
Assume that u = u_{max} (1 - ρ/ρ_{max}) and at t = 0 the traffic density is
ρ(x, 0) = \begin{cases} (1/3) ρ_{max} & for x < 0, \\ (2/3) ρ_{max} & for x > 0. \end{cases}                                (2.54)
Why does the density not change in time?
2.3 ExPD50 statement: Normal modes do not always work. # 01.
Normal mode expansions are not guaranteed to work when the associated eigenvalue problem is not self-adjoint (more generally, not normal: an operator A is normal if it commutes with its adjoint, A A† = A† A). Here is an example: consider the problem
u_t + u_x = 0    for 0 < x < 1,    with boundary condition u(0, t) = 0.                                (2.55)
Find all the normal mode solutions to this problem (that is, non-trivial solutions of the form u = e^{λ t} φ(x), where λ is a constant), and show that the general solution to (2.55) cannot be written as a linear combination of these modes.
2.4 Introduction. Experimenting with Numerical Schemes.
Consider the numerical schemes that follow after this introduction, for the specified equations. YOUR
TASK here is to experiment (numerically) with them so as to answer the question: Are they sensible?
Specifically:
(i 1) Which schemes give rise to the type of behavior illustrated by the “bad” scheme in the GBNS lecture script in the 18.311 MatLab Toolkit? (Alternatively: check the Stability of Numerical Schemes for PDE’s notes in the course WEB page.)
(i 2) Which ones behave properly as ∆x and ∆t vanish?
SHOW THAT they all arise from some approximation of the derivatives in the equations (i.e.: the schemes are CONSISTENT), similar to the approximations used to derive the “good” and “bad” schemes used in the GBNS lecture script in the 18.311 MatLab Toolkit.
I recommend that you read the Stability of Numerical Schemes for PDE’s notes in the course WEB page
before you do these problems.
Remark 2.4 Some of these schemes are “good” and some are not. For the “good” schemes you will
find that — as ∆x gets small — restrictions are needed on ∆t to avoid bad behavior (i.e.: fast growth of
grid–scale oscillations).
Specifically: in all the schemes a parameter appears, the parameter being λ = ∆t/∆x in some cases and ν = ∆t/(∆x)^2 in others. You will need to keep this parameter smaller than some constant to get the “good” schemes to behave. That is: λ < λ_c, or ν < ν_c. For the “bad” schemes, it will not matter how small λ (or ν) is. Figuring out these constants is ALSO PART OF THE PROBLEM. For the assigned schemes the constants, when they exist (“good” schemes), are simple O(1) numbers, somewhere between 1/4 and 2. That is, stuff like 1/2, 2/3, 1, 3/2, etc., not things like λ_c = π/4 or ν_c = √e. You should be able to find them easily by careful numerical experimentation!
Remark 2.5 In order to do these problems you may need to write your own programs. If you choose
to use MatLab for this purpose, there are several scripts in the 18.311 MatLab Toolkit that can easily be
adapted for this. The relevant scripts are:
• The schemes used by the GBNS lecture MatLab script are implemented in the script InitGBNS.
• The two series of scripts
PS311 Scheme A, PS311 Scheme B, . . . and PS311 SchemeDIC A, PS311 SchemeDIC B, . . . ,
have several examples of schemes already set up in an easy-to-use format. Most of the algorithms here are already implemented in these scripts. The scripts are written so that modifying them for use with a different scheme involves editing only a few (clearly indicated) lines of the code.
• Note that the scripts in the 18.311 MatLab Toolkit are written avoiding the use of “for” loops and making use of the vector forms MatLab allows — things run a lot faster this way. Write your programs this way too; it is good practice.
Remark 2.6 Do not put lots of graphs and numerical output in your answers. Explain what you did and how you arrived at your conclusions, and illustrate your points with a very few selected graphs.
Remark 2.7 In all cases the notation
x_n = x_0 + n ∆x,    t_k = t_0 + k ∆t,    and    u_n^k = u(x_n, t_k),
is used.
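To show how little code such an experiment takes, here is a minimal MatLab sketch in the spirit of the PS311 scripts (it is not one of them). It assumes a periodic domain and a hypothetical Gaussian initial profile, and uses the backward-differences update of section 2.5 below purely as an illustration; only the line marked SCHEME changes from one scheme to another.

    Nx  = 200;  dx = 2*pi/Nx;  x = dx*(0:Nx-1)';    % periodic grid on [0, 2*pi)
    lam = 0.5;  dt = lam*dx;                        % lambda = dt/dx; try various values
    u   = exp(-10*(x - pi).^2);                     % hypothetical initial data u(x, 0)
    for k = 1:round(1/dt)                           % advance to t of about 1
      um = circshift(u, 1);                         % u_{n-1}^k (periodic wrap-around)
      u  = u - lam*(u - um);                        % SCHEME: backward differences update
    end
    plot(x, u), xlabel('x'), ylabel('u')            % inspect the numerical solution

Varying lam, and replacing the update line with the scheme under study, is all that the experiments described above require.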
2.5 Statement: GBNS02 Scheme B. Backward differences for ut + ux = 0.
Equation: ut + ux = 0.
Scheme: u_n^{k+1} = u_n^k - λ (u_n^k - u_{n-1}^k),    where    λ = ∆t/∆x.

2.6 Statement: GBNS03 Scheme C. Centered differences for ut + ux = 0.
Equation: ut + ux = 0.
Scheme: u_n^{k+1} = u_n^k - \frac{1}{2} λ (u_{n+1}^k - u_{n-1}^k),    where    λ = ∆t/∆x.

3 Special Problems.
3.1 ExPD21a statement: Laplace equation in a rectangle (Dirichlet).
Consider the question of determining the steady state temperature in a thin rectangular plate such that: (a) The temperature is prescribed at the edges of the plate, and (b) The faces of the plate (top and bottom) are insulated. The solution to this problem can be written as the sum of the solutions to four problems of the form (here we use non-dimensional variables, selected so that one side of the plate has length π)
T_{xx} + T_{yy} = 0,    for 0 ≤ x ≤ π and 0 ≤ y ≤ L,                                (3.56)
with boundary conditions
T(0, y) = T(π, y) = T(x, 0) = 0    and    T(x, L) = h(x),                                (3.57)
where L > 0 is a constant, and h = h(x) is some prescribed function. In turn, the solution to (3.56 – 3.57)
can be written as a (possibly infinite) sum of separation of variables solutions.
Separation of variables solutions to (3.56 – 3.57) are product solutions of the form
T(x, y) = φ(x) ψ(y),                                (3.58)
where φ and ψ are some functions. In particular, from (3.57), it must be that
φ(0) = φ(π) = ψ(0) = 0    and    h(x) = φ(x) ψ(L).                                (3.59)
However, for the separated solutions, the function h is NOT prescribed: It is whatever the form in (3.58)
allows. The general h is obtained by adding separation of variables solutions.
These are the tasks in this exercise:
1. Find ALL the separation of variables solutions.
2. In exercise ExPD04 we stated that all the solutions to the Laplace equation — i.e.: (3.56) — must have the form T = f(x − i y) + g(x + i y), for some functions f and g. Note that these two functions must make sense for complex arguments, and have derivatives in terms of these complex arguments — which means that f and g must be analytic functions.
Show that the result in the paragraph above applies to all the separated solutions found in item 1.
Namely: find the functions f and g that correspond to each separated solution.
3.2 ExPD40a statement: Heat equation with mixed boundary conditions.
Consider the problem
T_t = T_{xx}    for 0 < x < π,                                (3.60)
with boundary conditions T(0, t) = 0 and T_x(π, t) = 0 (fixed temperature on the left and no heat flux on the right). Then
1. Find all the normal mode solutions — that is: non-trivial solutions of the form T = e^{λ t} φ(x), where λ is a constant.
2. Show that the associated eigenvalue problem (that is: λ φ = φ_{xx}, with the appropriate boundary conditions) is self-adjoint. Hence the spectrum is real, and the eigenfunctions can be used to represent “arbitrary” functions. In this case the spectrum is a discrete set of eigenvalues {λ_n}, and the eigenfunctions form an orthogonal basis.
3. Given initial conditions T(x, 0) = T_0(x), write the solution to (3.60) as a series involving the normal modes
T = \sum_n a_n e^{λ_n t} φ_n(x).                                (3.61)
Write explicit expressions for the coefficients a_n (they should be given by integrals involving T_0 and the eigenfunctions φ_n).
THE END.