Introduction to PDE’s: (Strauss, 1.1) (Haberman, 1.1)
• PDE’s vs. ODE’s ← methods of solution. For ODE’s:
const coeff (e^{λt})
undetermined coefficients
variation of parameters
series solutions
reduction of order
Laplace transform
PDE’s have the complication of more than 1 independent variable
For PDE’s: more difficult to “guess” the form of solution as in ODE
• Definitions:
linear vs. nonlinear: u_t = u_x (linear); u_t = u u_x (nonlinear)
order of the equation: u_t = u_xx, u_tt = u_xx (both 2nd order)
const. coefficient vs. non-const. coefficient: u_t = c u_xx vs. u_t = c(x, t) u_xx
• Analogies for linear ODE’s to linear PDE’s: REVIEW
ODE’s: Superposition: if e^{λ1 t}, e^{λ2 t} are solutions of a u'' + b u' + c u = 0,
then c1 e^{λ1 t} + c2 e^{λ2 t} is also a solution.
This uses concepts of linear algebra for vector spaces:
For a vector space V, there is associative and commutative addition, a multiplicative identity (1), and an additive identity (0).
Basis = smallest # of nonzero linearly independent elements from which we generate the rest.
e.g. if solutions are 1, sin²x, cos²x for a linear PDE,
then c1 + c2 sin²x + c3 cos²x is a vector space,
but the basis is (sin²x, cos²x) (since 1 = sin²x + cos²x),
so we can write any solution as d1 sin²x + d2 cos²x.
Complication of PDE’s: more variation in the additional independent variables makes solution methods more complicated.
[Figure: for an ODE, U = U(x) solves U' = f(x, U) starting from x0; for a PDE, U = U(x, y) satisfies e.g. U_x = h(x, y, U, U_y), and U_x, U_y differ at different points.]
Additional material: (from Weinberger)
Classification of 2nd order PDE’s. Examples:
Elliptic: u_xx + u_yy = 0  (∇²u = 0, Laplace’s equation)
Parabolic: u_t = u_xx  (Heat Equation)
Hyperbolic: u_tt = u_xx  (Wave Equation)
General 2nd order PDE: a11 u_xx + 2 a12 u_xy + a22 u_yy + a1 u_x + a2 u_y + a0 u = 0
Classification:
a12² < a11 a22  elliptic
a12² = a11 a22  parabolic
a12² > a11 a22  hyperbolic
Ex (from Weinberger):

Lu = A ∂²u/∂t² + 2B ∂²u/∂x∂t + C ∂²u/∂x² = 0   (2nd order part only)

Make a real change of variables:
ξ = αx + βt,  η = γx + δt
∂_t = β∂_ξ + δ∂_η,  ∂_x = α∂_ξ + γ∂_η

⇒ [Aβ² + 2Bαβ + Cα²] u_ξξ + [2Aβδ + 2B(αδ + βγ) + 2Cαγ] u_ξη + [Aδ² + 2Bγδ + Cγ²] u_ηη = 0

If 4B² − 4AC > 0 (hyperbolic), we can find α, β, γ, δ to set the coefficients of u_ξξ, u_ηη to 0, ⇒ u_ξη = 0. If 4B² − 4AC < 0 (elliptic), we can instead find α, β, γ, δ to write the PDE as u_ξξ + u_ηη = 0.
Similarly for the parabolic case ⇒ one can find a change of variables to put the 2nd order part in the form of the wave, heat, or Laplace equation.
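The classification rule above can be sketched in code; the function name `classify` and the example coefficients are illustrative, not from the notes:

```python
def classify(a11, a12, a22):
    """Classify a11*u_xx + 2*a12*u_xy + a22*u_yy + (lower order) = 0."""
    d = a12 ** 2 - a11 * a22          # discriminant of the principal part
    if d < 0:
        return "elliptic"
    elif d == 0:
        return "parabolic"
    else:
        return "hyperbolic"

# Laplace: u_xx + u_yy = 0  -> a11 = 1, a12 = 0, a22 = 1
# Heat:    u_t  = u_xx      -> a11 = 1, a12 = 0, a22 = 0  (taking y = t)
# Wave:    u_tt = u_xx      -> a11 = 1, a12 = 0, a22 = -1
print(classify(1, 0, 1))    # elliptic
print(classify(1, 0, 0))    # parabolic
print(classify(1, 0, -1))   # hyperbolic
```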
Derivation of heat equation: 1-D
(Strauss 1.3, Haberman 1.2)
Conservation of heat energy in a 1-D rod, a < x < b (lateral sides insulated):

d/dt ∫_a^b e dx = φ(a, t) − φ(b, t) + ∫_a^b Q dx

change in heat energy = [flux at x = a] − [flux at x = b] (flux φ flowing to the right) + internal sources

⇒ ∫_a^b [ ∂e/∂t + ∂φ/∂x − Q ] dx = 0

for arbitrary a, b  ⇒  ∂e/∂t = −∂φ/∂x + Q

Use: 1) heat energy/unit mass = specific heat · temperature = c(x) · u(x, t)
⇒ e = c(x) u(x, t) ρ   (ρ = mass density)

2) Fourier’s law of heat conduction: φ = −K0 u_x
(K0 = thermal conductivity, material dependent; heat flows from hot to cold, with greater flow for a greater temperature difference)

⇒ c(x) ρ ∂u/∂t = ∂/∂x (K0 ∂u/∂x) + Q

c, ρ, K0 all depend on the material. If they are constants:

u_t = k u_xx,   k = K0/(cρ) ← thermal diffusivity

Boundary conditions:
Dirichlet: u is specified at a, b; e.g. u(a, t) = f(t)
Neumann: the normal derivative is specified, e.g. ∂u/∂x |_{x=a} = g(t)
(recall ∂u/∂x is proportional to flux, from Fourier’s law)
(f = 0, g = 0: homogeneous)
Robin: u_x |_{x=a} ± α u |_{x=a} = h(t)
Obtained from Newton’s law of cooling, with a bath at temperature u_B(t) at x = a:

−K0(a) ∂u/∂x (a, t) = −H[u(a, t) − u_B(t)]

If the bath is at x = b:

−K0(b) u_x(b, t) = +H[u(b, t) − u_B(t)]

H = heat transfer coefficient
Compare with the equation for diffusion of particles in a fluid (e.g. a dye) (Strauss, 1.3)
Conservation of dye on a < x < b:

∂/∂t ∫_a^b u(x, t) dx = k( u_x(b, t) − u_x(a, t) )

change in concentration = flow in − flow out
Fick’s law: rate of motion proportional to the concentration gradient (k may be const. or variable)
Again this yields (assuming k = k(x)):

∫_a^b [ ∂u/∂t − ∂/∂x (k u_x) ] dx = 0

If there is an internal source F(x, t):

∫_a^b [ u_t − ∂/∂x (k u_x) − F(x, t) ] dx = 0

⇒ again, the “heat” equation: u_t = ∂/∂x (k u_x) + F(x, t)
Equilibrium distribution:
No longer depends on time ( lim_{t→∞} u(x, t) = u(x), the equilibrium ).
For Dirichlet b.c.’s u(a, t) = T1, u(b, t) = T2, the equilibrium solution satisfies
u_xx = 0 ⇒ u = c1 x + c2 ← a straight line through the b.c.’s
[Figure: straight line from T1 at x = a to T2 at x = b.]
Neumann b.c. (insulated): u_x = 0 ⇒ u = c2, a flat line (u = const): undetermined!
How might it be determined? By the initial distribution u(x, 0) = f(x).
Hint: c2 = 1/(b − a) ∫_a^b f(x) dx, the average!
Inhomogeneous (Review of ODE’s): u_xx = x², u(0) = T, u_x(L) = 0
⇒ u_x = x³/3 + c1,  u = x⁴/12 + c1 x + c2
u(0) = T = c2
u_x(L) = 0 = L³/3 + c1 ⇒ c1 = −L³/3

⇒ u = x⁴/12 − (L³/3) x + T
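The equilibrium computation above is easy to check numerically; a small sketch (the values of L and T are illustrative, and the finite-difference derivatives are only approximate):

```python
L, T = 2.0, 5.0

def u(x):
    # candidate equilibrium solution u = x^4/12 - (L^3/3) x + T
    return x ** 4 / 12 - (L ** 3 / 3) * x + T

h = 1e-4

def d1(f, x):                     # centered first derivative
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x):                     # centered second derivative
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

print(u(0.0))        # = T       (Dirichlet condition at x = 0)
print(d1(u, L))      # ≈ 0       (Neumann condition at x = L)
print(d2(u, 1.3))    # ≈ 1.69    (u'' = x^2 at the sample point x = 1.3)
```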
Other issues to keep in mind:
Well-Posedness: Existence, Uniqueness, Stability
• Uniqueness: Already seen: if u_xx = 0 with u_x = 0 at 0, L has a solution v, then v + c is also a solution (non-unique solution).
“Usually” (non)uniqueness is demonstrated as follows:
Example: u_xx = F(x), u_x = 0 at x = 0, L
Assume this has 2 solutions v, w; then y = v − w satisfies y_xx = 0, y_x = 0 at x = 0, L.
If y has only the trivial solution, then u has a unique solution (i.e. v = w); otherwise not unique.
Here y_xx = 0 with y_x = 0 at both ends gives y = const ⇒ v ≠ w in general, so the solution for u is not unique!
• Existence: We have seen solutions exist for u_xx = 0 on 0 < x < L, + various b.c.
What about the equilibrium distribution for:

u_t = u_xx + 1,  u(x, 0) = f(x),  u_x(0, t) = 1,  u_x(L, t) = β

For which values of β does an equilibrium solution exist?
Equilibrium: u_xx = −1 ⇒ u = −x²/2 + c1 x + c2
u_x |_{x=0} = 1 = c1
u_x |_{x=L} = −L + c1 = β ⇒ an equilibrium exists only if β = 1 − L
In general, the conditions for the heat equation are:
One initial condition (not an end condition)
Two b.c.’s (in the 1-D case)
Why initial condition? diffusion is a smoothing process
Physical intuition: If we try to solve u_t = u_xx with u(x, T_end) = f(x), we ask the question “How do we get to a given equilibrium distribution?”. But f(x) could have been obtained from a variety of initial distributions, or it might be the wrong equilibrium distribution! The problem is ill-posed (physically) with an end condition.
Later we will see mathematically that the heat equation with an end condition is ill-posed.
[Figure: two possible initial conditions f(x), g(x) evolving as t → ∞ to the same equilibrium solution u(x, ∞).]
• Stability: Does a small perturbation to the problem make a big difference in the solution?
Ex: u_xx − n²u = 0
If we specify both u and u_x on the boundary, e.g. u(0) = 0, u_x(0) = e^{−√n}, then one solution is u = (e^{−√n}/n) sinh nx.
The boundary data vanish as n → ∞, but the solution u does not vanish for x ≠ 0. Compare with the case u_x(0) = 0, for which u(x) ≡ 0. In this example a small change in the b.c. data gives a large change in the solution ⇒ unstable!
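A quick numerical illustration of this instability (the particular values of n are arbitrary):

```python
import math

def bdata(n):
    # the prescribed boundary derivative u_x(0) = exp(-sqrt(n))
    return math.exp(-math.sqrt(n))

def u_at_1(n):
    # the solution u = exp(-sqrt(n))/n * sinh(n x), evaluated at x = 1
    return bdata(n) / n * math.sinh(n)

for n in (4, 16, 64):
    print(n, bdata(n), u_at_1(n))
# the boundary data tend to 0 while u(1) grows without bound
```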
Time dependent Heat Equation (Haberman Chpt. 2, Strauss Chpt. 4)
u_t = k u_xx,  0 < x < L,  t > 0
u(x, 0) = f(x)
u(0, t) = 0,  u(L, t) = 0
(no sources: Q = 0)
Separation of Variables: assume a product form of solution u = X(x)T(t); substituting in the equation yields

T'/(kT) = X''/X = const = −λ

(a function of t = a function of x, so both must equal a constant; choose −λ for convenience)
Boundary value problem:

X'' + λX = 0,  X(0) = X(L) = 0
X = A cos √λ x + B sin √λ x   if λ > 0
X = A cosh √(−λ) x + B sinh √(−λ) x   if λ < 0
If λ < 0: only the trivial solution A = B = 0 (no eigenvalues, eigenfunctions).
If λ > 0: A = 0 and B sin √λ L = 0 ⇒ λ = n²π²/L², n = 1, 2, 3, ... ← eigenvalues
(note: n = 0 is the trivial solution)
(If λ = 0, the solution is X = Ax + B; by the b.c.’s again A = B = 0, the trivial solution.)
Then X = B_n sin (nπ/L) x ⇐ eigenfunction (coefficient undetermined)
The eigenfunction is a special homogeneous solution.
[Figure: eigenfunctions sin πx, sin 2πx, sin 3πx on 0 ≤ x ≤ 1 (L = 1).]
What about T? For λ = n²π²/L²:

T' + λkT = 0 ⇒ T = C e^{−λkt} = C e^{−(n²π²/L²)kt}

X · T = A_n e^{−(n²π²/L²)kt} sin (nπ/L) x

Since u_t = k u_xx is a linear equation, and u(x, 0) = f(x), u(0, t) = u(L, t) = 0 are linear conditions, superposition applies:

A_1 e^{−(π²/L²)kt} sin (π/L)x + A_2 e^{−(4π²/L²)kt} sin (2π/L)x   etc.

is also a solution. So

u(x, t) = Σ_{n=1}^∞ A_n e^{−(n²π²/L²)kt} sin (nπ/L) x   ← general solution (sum over the index of the eigenvalues)

What about the initial condition?
First note that the solution decays in time. Physically we expect this, since we expect an equilibrium solution, and the b.c.’s are 0.
But how does the eigenfunction expansion reflect the general solution?
Example: If u(x, 0) = f(x) = 4 sin (3π/L) x, then A_3 = 4, A_n = 0 for n ≠ 3 (by inspection).
In general:

f(x) = u(x, 0) = Σ_{n=1}^∞ A_n sin (nπ/L) x    ← Fourier (sine) series for f(x)
f(x) can be expressed as an infinite sum of the eigenfunctions sin (nπ/L) x.
What are the A_n’s?
Note the following orthogonality property of sin (nπ/L) x:

∫_0^L sin (mπ/L)x sin (nπ/L)x dx
  = ∫_0^L (1/2)[ cos ((n−m)π/L)x − cos ((n+m)π/L)x ] dx   for n ≠ m
  = ∫_0^L [ 1/2 − (1/2) cos (2mπ/L)x ] dx                 for n = m
  = { 0     n ≠ m
    { L/2   n = m

← orthogonality of sin (nπ/L) x
Then, we can use orthogonality to solve for the coefficients:

∫_0^L sin (mπ/L)x f(x) dx = ∫_0^L Σ_{n=1}^∞ A_n sin (nπ/L)x sin (mπ/L)x dx = A_m · (L/2)

(all other terms have 0 coefficient by orthogonality)
Use the initial condition ⇒ A_m = (2/L) ∫_0^L sin (mπ/L)x f(x) dx = (2/L) ∫_0^L u(x, 0) sin (mπ/L)x dx.
This completes the solution for u(x, t). Review the steps.
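As a numerical illustration of the last two steps, a sketch with assumed example data (f(x) = x(L − x), k = L = 1, series truncated at N = 50 terms, coefficients by midpoint-rule quadrature):

```python
import math

L, k, N = 1.0, 1.0, 50

def f(x):                      # assumed initial data
    return x * (L - x)

def sine_coeff(n, m=2000):
    # A_n = (2/L) * integral of f(x) sin(n pi x / L), midpoint rule
    dx = L / m
    return (2 / L) * sum(
        f((j + 0.5) * dx) * math.sin(n * math.pi * (j + 0.5) * dx / L) * dx
        for j in range(m))

A = [sine_coeff(n) for n in range(1, N + 1)]

def u(x, t):
    # truncated series solution of u_t = k u_xx with u = 0 at x = 0, L
    return sum(A[n - 1] * math.exp(-((n * math.pi / L) ** 2) * k * t)
               * math.sin(n * math.pi * x / L) for n in range(1, N + 1))

print(u(0.5, 0.0))   # ≈ f(0.5) = 0.25: the series reproduces the initial condition
print(u(0.5, 0.1))   # smaller: the solution decays toward the equilibrium u = 0
```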
Neumann b.c.’s (Hints for homework: (Exercise 2.3.7 of Haberman))
u_t = k u_xx
u_x(0, t) = u_x(L, t) = 0
u(x, 0) = f(x)
Separation of variables leaves the boundary value problem

X'' + λX = 0,  X'(0) = X'(L) = 0

What are the eigenfunctions, eigenvalues? Using the b.c.’s:

X = A cos √λ x,  X'(L) = −A√λ sin √λ L = 0

What are the e.v.’s? λ = n²π²/L² again, with n = 0, 1, 2, ...
Note: n = 0 does not yield the trivial solution.
Eigenfunctions: A_n cos √λ_n x for n ≠ 0; A_0 for n = 0.
[Figure: eigenfunctions cos 0πx = 1, cos πx, cos 2πx on 0 ≤ x ≤ 1 (L = 1).]
Use superposition and orthogonality to complete (exercise 2.3.6).
Mixed b.c.’s
k u_xx = u_t
u_x(L) = u(0) = 0,  u(x, 0) = f(x)
Separation of variables: u = X(x)T(t) ⇒

X''/X = T'/(kT) = −λ

X'' + λX = 0 ⇒ X = A cos √λ x + B sin √λ x   (λ > 0)
A = 0 by X(0) = 0
B cos √λ L = 0 by u_x(L) = 0
⇒ √λ = (2n + 1)π/(2L),  n = 0, 1, 2, ...

λ_n = (2n + 1)²π²/(4L²) = (n + 1/2)²π²/L²   ← eigenvalues
B_n sin ((n + 1/2)π/L) x   ← eigenfunctions
T = e^{−λ_n kt}

⇒ u = Σ_{n=0}^∞ A_n sin ((n + 1/2)π/L) x · e^{−(n + 1/2)²π²kt/L²}

f(x) = u(x, 0) = Σ_{n=0}^∞ A_n sin ((n + 1/2)π/L) x
Orthogonality:

∫_0^L sin ((n + 1/2)π/L)x sin ((m + 1/2)π/L)x dx = { 0     n ≠ m
                                                   { L/2   n = m

⇒ (2/L) ∫_0^L sin ((n + 1/2)π/L)x f(x) dx = A_n
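A quick numerical check of this orthogonality relation (the value L = 1 and the grid size are arbitrary choices):

```python
import math

L, m = 1.0, 4000
dx = L / m
xs = [(j + 0.5) * dx for j in range(m)]

def inner(n1, n2):
    # midpoint-rule approximation of
    # integral over (0, L) of sin((n1+1/2) pi x/L) sin((n2+1/2) pi x/L) dx
    return sum(math.sin((n1 + 0.5) * math.pi * x / L)
               * math.sin((n2 + 0.5) * math.pi * x / L) * dx for x in xs)

print(inner(0, 1))   # ≈ 0 (orthogonal)
print(inner(2, 2))   # ≈ L/2
```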
(More on separation of variables)
Robin B.C.’s (Strauss, 4.3)
u_t = k u_xx
u_x − a_0 u = 0 at x = 0
u_x + a_ℓ u = 0 at x = ℓ
u(x, 0) = f(x)
Eigenvalue problem:
X'' + λX = 0
X' − a_0 X = 0 at x = 0
X' + a_ℓ X = 0 at x = ℓ
Recall the physical interpretation of the b.c.’s: a bath at temperature u_B at x = ℓ (shift so that u_B = 0):
u_x = −a_ℓ (u − u_B);  a_ℓ > 0 radiating, a_ℓ < 0 absorbing
similarly at x = 0: u_x = a_0 (u − u_B);  a_0 > 0 radiating, a_0 < 0 absorbing
What is the boundary value problem (bvp), the eigenvalues, eigenfunctions?
Separation of Variables:

X''/X = T'/(kT) = −λ

For which values of λ are there eigenvalues, eigenfunctions?
X'' + λX = 0,  X' − a_0 X = 0 at x = 0,  X' + a_ℓ X = 0 at x = ℓ
For λ > 0, X = A cos √λ x + B sin √λ x
at x = 0:  −a_0 A + √λ B = 0
at x = ℓ:  (−A√λ + a_ℓ B) sin √λ ℓ + (√λ B + a_ℓ A) cos √λ ℓ = 0

⇒ [ −a_0                               √λ                           ] [ A ]
  [ −√λ sin √λ ℓ + a_ℓ cos √λ ℓ     a_ℓ sin √λ ℓ + √λ cos √λ ℓ ] [ B ]  = 0

Either A = B = 0 or the determinant of the matrix = 0.
Setting the determinant = 0:

√λ a_ℓ cos √λ ℓ − λ sin √λ ℓ + a_0 √λ cos √λ ℓ + a_0 a_ℓ sin √λ ℓ = 0
⇒ eigenvalue condition:  tan √λ ℓ = (a_0 + a_ℓ)√λ / (λ − a_0 a_ℓ) ≡ g(λ),  for λ > 0

Graphical: for a_0, a_ℓ > 0, the intersections of tan √λ ℓ with g(λ) give the eigenvalues.
Note: as λ → ∞, nπ/ℓ < √λ_n < (n + 1)π/ℓ, and √λ_n → nπ/ℓ.
[Figure: branches of tan √λ ℓ between π/ℓ, 2π/ℓ, 3π/ℓ, ... intersecting g(λ); in this case both end-pts are radiating (a_0, a_ℓ > 0).]
If a_0, a_ℓ have opposite signs (some radiation, some absorption), the picture is similar:
[Figure: intersections λ_1, λ_2, λ_3 near π/ℓ, 2π/ℓ, 3π/ℓ.]
If there are only positive eigenvalues (λ_n > 0), then

u = Σ e^{−λ_n kt} X_n(x) → 0 as t → ∞

where the eigenfunctions are X_n(x) = B_n sin √λ_n x + A_n cos √λ_n x.
e.g. if both b.c.’s are radiating, we would expect this (we show this below).
Looking for negative e.v.’s, λ < 0:
The solution of X'' + λX = 0 is X = A cosh √(−λ) x + B sinh √(−λ) x.
Using the b.c.’s as before, and letting λ = −μ (μ > 0) for notation:

⇒ tanh √μ ℓ = −(a_0 + a_ℓ)√μ / (μ + a_0 a_ℓ) ≡ g(μ)

Are there intersections (and thus eigenvalues) for μ > 0?
For a_0, a_ℓ > 0: no intersections, i.e. no negative e.v.’s (λ < 0), so u → 0 as t → ∞, as expected for only radiating b.c.’s.
[Figure: tanh √μ ℓ vs. g(μ) for a_0, a_ℓ > 0: the curves meet only at μ = 0.]
For a_0, a_ℓ of opposite signs: if the slope of g(μ) near the origin (μ ≪ 1) is less than the slope of tanh √μ ℓ, then the curves intersect.
[Figure: tanh √μ ℓ vs. g(μ) for a_0, a_ℓ of opposite signs.]
That is, if ℓ > −(a_0 + a_ℓ)/(a_0 a_ℓ) (comparing slopes for μ small, near the origin), i.e. −a_0 a_ℓ ℓ > a_0 + a_ℓ (recall a_0 a_ℓ < 0 here), then there is a negative eigenvalue.
This can only happen if a_0, a_ℓ have opposite signs (both absorption and radiation).
For a zero eigenvalue, the solution of X'' = 0 is X = Ax + B.
The b.c.’s give:
A − a_0 B = 0
A + a_ℓ (Aℓ + B) = 0

⇒ [ 1            −a_0 ] [ A ]
  [ 1 + a_ℓ ℓ     a_ℓ ] [ B ]  = 0

⇒ a_ℓ + a_0 + a_0 a_ℓ ℓ = 0 ⇒ a_ℓ + a_0 = −a_0 a_ℓ ℓ   ← condition for a zero e.v. in the Robin case
Summary: for Robin B.C.’s
• For a_0, a_ℓ > 0 (radiation only): only positive eigenvalues,
eigenfunctions X_n(x) = A_n cos √λ_n x + B_n sin √λ_n x with −a_0 A_n + √λ_n B_n = 0,
and u = Σ_{n=1}^∞ e^{−λ_n kt} X_n(x).
• For a_0 a_ℓ < 0 (some absorption, some radiation):
There are no negative e.v.’s if −a_0 a_ℓ ℓ < a_0 + a_ℓ.
Otherwise, for −a_0 a_ℓ ℓ > a_0 + a_ℓ:

u = e^{−λ_{−1} kt} X_{−1}(x) + Σ_{n=1}^∞ e^{−λ_n kt} X_n(x)

where λ_{−1} < 0, so the first term grows in time
(X_{−1}(x) = A cosh √(−λ_{−1}) x + B sinh √(−λ_{−1}) x).
• There is a zero eigenvalue if −a_0 a_ℓ ℓ = a_0 + a_ℓ (absorption and radiation balance):

u = A_0 (x + 1/a_0) + Σ_{n=1}^∞ e^{−λ_n kt} X_n(x),  with X_0(x) = x + 1/a_0,

and as t → ∞, u → A_0 X_0(x) (steady state).
For both ends absorbing (a_0, a_ℓ < 0): hmwk.
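The positive Robin eigenvalues can be located numerically from the determinant condition above, written in the pole-free form (λ − a_0 a_ℓ) sin √λ ℓ − (a_0 + a_ℓ)√λ cos √λ ℓ = 0; the parameter values below are illustrative, with both ends radiating:

```python
import math

a0, al, l = 1.0, 2.0, 1.0      # illustrative values, both ends radiating

def det(s):
    # s = sqrt(lambda); a zero of det corresponds to an eigenvalue lambda = s^2
    return (s * s - a0 * al) * math.sin(s * l) - (a0 + al) * s * math.cos(s * l)

def bisect(f, a, b, tol=1e-12):
    fa = f(a)
    while b - a > tol:
        mid = 0.5 * (a + b)
        if fa * f(mid) <= 0:
            b = mid
        else:
            a, fa = mid, f(mid)
    return 0.5 * (a + b)

# scan for sign changes of det on a grid of s > 0, then refine by bisection
roots, s = [], 0.05
prev = det(s)
while s < 12 and len(roots) < 3:
    s += 0.01
    cur = det(s)
    if prev * cur < 0:
        roots.append(bisect(det, s - 0.01, s))
    prev = cur

print([r * r for r in roots])  # the first three eigenvalues lambda_n = s_n^2
```

Consistent with the graphical picture, each root s_n falls between consecutive multiples of π/ℓ.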
General results about Fourier series: (Strauss Chpt. 5, Haberman Chpt. 3)
How can we expand functions in series of eigenfunctions (special solutions to bvp’s):

f(x) = Σ f_n X_n(x),   e.g. f(x) = Σ_{n=1}^∞ f_n sin (nπ/ℓ) x with f_n = (2/ℓ) ∫_0^ℓ f(x) sin (nπ/ℓ)x dx

(Later: u(x, t) = Σ u_n(t) X_n(x))
? Does this series converge to f(x), i.e. lim_{N→∞} Σ_{j=1}^N f_j sin (jπ/ℓ) x = f(x)?
Types of convergence: pointwise, uniform, L²
• A series Σ f_n(x) converges pointwise to f(x) in (a, b) if it converges at each a < x < b:  |Σ^N f_n(x) − f(x)| → 0 as N → ∞.
• Convergence is uniform if max_{a≤x≤b} |f(x) − Σ^N f_n(x)| → 0 as N → ∞.
Note: uniform is stronger; it also includes the end-pts!
• L²-convergence: ∫_a^b |f(x) − Σ^N f_n(x)|² dx → 0 as N → ∞.
Uniform ⇒ pointwise and L².
Conditions for convergence:
- If f(x), f'(x) are piecewise continuous, then we have pointwise convergence to f(x).
- If f(x) only is piecewise continuous, convergence is to the average (f(x⁺) + f(x⁻))/2 when there is a jump f(x⁺) ≠ f(x⁻).
- If f(x), f'(x), f''(x) exist and are continuous for all a ≤ x ≤ b, then we have uniform convergence, assuming f(x) satisfies the b.c. at a, b.
Details in Weinberger - discussion of convergence, demonstrating in Sections 14–20 the need for conditions.
Examples of pointwise, uniform convergence
Expanding x as a Fourier sine or cosine series on [a, b] (hmwk).
What type of convergence? Note the behavior at the end points!
(For the sine series of x on (0, 1) the coefficients are (−1)^{n+1} 2/(nπ).)
Contrast with the Fourier sine series of x(x − 1) on [0, 1]:

x(x − 1) = Σ_{n=1}^∞ a_n sin nπx,   a_n = 2 ∫_0^1 x(x − 1) sin nπx dx
How do we know such an expansion yields the correct function?
Can we expand any function in such a series?
Completeness: (Strauss)
Use mean square convergence (L²).
1st show: ‖f(x) − Σ c_n X_n(x)‖ is minimized for c_n = ∫_a^b f(x)X_n(x)dx / ‖X_n‖², i.e. the c_n are the Fourier coefficients (here X_n is an arbitrary set of orthogonal functions).
Show this by calling the error E_N = ‖f − Σ_{n≤N} c_n X_n‖². By definition,

E_N = ∫_a^b |f(x)|² dx − 2 Σ_{n≤N} c_n ∫_a^b f(x)X_n(x) dx + Σ_n Σ_m c_n c_m ∫_a^b X_n X_m dx

By orthogonality,

E_N = ‖f‖² − 2 Σ_{n≤N} c_n ∫_a^b f(x)X_n(x) dx + Σ_{n≤N} c_n² ‖X_n‖²

By completing the square,

E_N = Σ_{n≤N} ‖X_n‖² [ c_n − ∫_a^b f(x)X_n(x)dx / ‖X_n‖² ]² + ‖f‖² − Σ_{n≤N} ( ∫_a^b f X_n dx )² / ‖X_n‖²

So choose c_n = ∫_a^b f(x)X_n(x)dx / ‖X_n‖², i.e. choose the Fourier coefficient to minimize the error E_N.
Then, using this expression for c_n, since E_N ≥ 0:

Σ_{n=1}^∞ c_n² ∫_a^b |X_n(x)|² dx ≤ ∫_a^b |f(x)|² dx    ← Bessel’s inequality

Therefore the Fourier series of f(x) converges to f(x) in the L² sense ⇐⇒ = replaces ≤ in Bessel’s inequality, which is then Parseval’s equality.
Definition: The infinite orthogonal set of functions {X_n} is called complete if Parseval’s equality is true for f(x) in L².
Theorem: We have L² convergence (of the series) for any f(x) in L². (Details in Strauss and Weinberger)
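A numerical illustration of Bessel’s inequality approaching Parseval’s equality, for the sine series of f(x) = x on (0, 1); the closed-form coefficients c_n = 2(−1)^{n+1}/(nπ) and the truncation points are assumptions for this sketch:

```python
import math

def c(n):
    # sine coefficients of f(x) = x on (0, 1)
    return 2 * (-1) ** (n + 1) / (n * math.pi)

norm2 = 0.5          # ||sin(n pi x)||^2 on (0, 1)
target = 1 / 3       # integral of x^2 over (0, 1)

partial = 0.0
sums = {}
for n in range(1, 20001):
    partial += c(n) ** 2 * norm2
    if n in (10, 100, 20000):
        sums[n] = partial

print(sums)      # increasing toward 1/3, always below it (Bessel)
print(target)
```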
Comments on orthogonality:
How do we know we will have orthogonal functions for a given problem (bvp), that is, for a particular equation + b.c.’s?
We showed orthogonality for sin, cos series.
Instead we now consider a general Sturm-Liouville problem (linear, of course) (2nd order):

(p(x)u')' − q(x)u + λρ(x)u = 0   [we will see the general form later for other geometries]

So far we’ve seen the particular case X'' + λX = 0 (p = 1, ρ = 1, q = 0).
In general we need p, q, ρ > 0 (≥ 0 in some cases).
Boundary conditions: general, symmetric, linear at x = a, b:

α_1 X(a) + β_1 X(b) + γ_1 X'(a) + δ_1 X'(b) = 0
α_2 X(a) + β_2 X(b) + γ_2 X'(a) + δ_2 X'(b) = 0

(Dirichlet, Neumann, Robin included; periodic also included)
The problem = equation (involving λ) + b.c.
To show orthogonality of the eigenfunctions X_n, X_m, n ≠ m, for λ_n ≠ λ_m:

equation for X_n:  (pX_n')' − qX_n + λ_n ρX_n = 0   (1)
equation for X_m:  (pX_m')' − qX_m + λ_m ρX_m = 0   (2)

X_n, X_m are eigenfunctions for the eigenvalues λ_n, λ_m, respectively; both satisfy the b.c.’s at x = a, b.
Multiply, subtract, and integrate: ∫_a^b [ (1) · X_m − X_n · (2) ] dx = 0:

∫_a^b [ (pX_n')'X_m − (pX_m')'X_n ] dx − ∫_a^b [ qX_n X_m − qX_n X_m ] dx + ∫_a^b (λ_n − λ_m) ρX_n X_m dx = 0
                                          (= 0: the q terms cancel)

Integrate by parts (IBP) ⇒

∫_a^b [ −pX_n'X_m' + pX_m'X_n' ] dx + [ pX_n'X_m − pX_m'X_n ]_a^b + (λ_n − λ_m) ∫_a^b ρX_n X_m dx = 0

(the first integral also vanishes)
Use the b.c.’s to get the boundary terms to disappear, e.g.:
Dirichlet: X_n(a) = X_n(b) = X_m(a) = X_m(b) = 0, and/or
Neumann: X_n'(a) = X_n'(b) = X_m'(a) = X_m'(b) = 0.
For Robin b.c.’s α_1 X(a) + X'(a) = 0, α_2 X(b) + X'(b) = 0, the boundary terms reduce to

−p[−α_1 X_m(a)X_n(a)] + p[−α_2 X_m(b)X_n(b)] − p[−α_2 X_n(b)X_m(b)] + p[−α_1 X_n(a)X_m(a)]

which vanishes!

⇒ (λ_n − λ_m) ∫_a^b ρ(x) X_n X_m dx = 0   (notation: ⟨X_n, X_m⟩_ρ)

⇒ either λ_n = λ_m, or X_n and X_m are orthogonal.
This implies orthogonality for the eigenfunctions of Sturm-Liouville problems.
Comments on non-uniform convergence
Approximations, Gibbs Phenomenon:
We have already seen only pointwise convergence for x expanded as a sine series on (0, 1).
That is, at x = 1, sin nπx = 0, but x = 1 ≠ 0.
What does the finite series look like?
Simple example: expand f(x) = 1 in a sine series on (0, π], with eigenfunctions sin nx:

f(x) ≈ Σ_{n=1}^N ( (2/π) ∫_0^π 1 · sin nξ dξ ) sin nx
     = (1/π) ∫_0^π Σ_{n=1}^N [ cos n(x − ξ) − cos n(x + ξ) ] dξ
       (←→ interchange of sum and integral, assuming uniform convergence, as N → ∞!)
     = C ∫_0^π [ sin(N + 1/2)(x − ξ)/sin((x − ξ)/2) − sin(N + 1/2)(x + ξ)/sin((x + ξ)/2) ] dξ   (C a constant)

( Using Σ_{n=1}^N cos nz sin(z/2) = (1/2) Σ_{n=1}^N [ sin(n + 1/2)z − sin(n − 1/2)z ] = (1/2)[ sin(N + 1/2)z − sin(z/2) ] )

Rewrite the limits (x − ξ = −z in the 1st integral, x + ξ = z in the 2nd):

C [ ∫_{−x}^{π−x} sin(N + 1/2)z / sin(z/2) dz − ∫_x^{π+x} sin(N + 1/2)z / sin(z/2) dz ]
  = 2C [ ∫_0^x sin(N + 1/2)z / sin(z/2) dz − ∫_{π−x}^π sin(N + 1/2)z / sin(z/2) dz ]   (using symmetry about 0, π)

Both remaining integrals → 0 as x → 0.
Fixing N, this expression does not give f(x) = 1 at the boundary as x → 0.
But for x small (not quite 0),

∫_0^x sin(N + 1/2)z / sin(z/2) dz → 1 as N → ∞   (up to the normalizing constant),

so we get f(x) = 1 at the boundary if we let x → 0 after N → ∞.
Gibbs phenomenon: oscillations in a small region around a discontinuity, or where the eigenfunctions satisfy different b.c.’s than the function f(x).
In this case, near x = 0, sin nx → 0 as x → 0, so the finite series (fixed N) gives 0 at x = 0 and ≈ 1 everywhere else, for f(x) = 1.
[Figure: a partial sum for f(x) = 1, showing the area of oscillations near x = 0.]
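A small numerical demonstration of the Gibbs phenomenon for this very example (f(x) = 1 on (0, π)); the sampling grid is an arbitrary choice:

```python
import math

def partial_sum(x, N):
    # sine coefficients of f = 1 on (0, pi): b_n = 4/(n pi) for odd n, 0 for even n
    return sum((4 / (n * math.pi)) * math.sin(n * x) for n in range(1, N + 1, 2))

def peak(N):
    xs = [j * 0.0005 for j in range(1, 2000)]   # sample points near x = 0
    return max(partial_sum(x, N) for x in xs)

for N in (51, 201, 801):
    print(N, peak(N))   # peak ≈ 1.18 for every N: the overshoot does not shrink
```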
Discussion of symmetries/eigenfunction expansions:
Eigenfunctions are found as solutions to X'' + λX = 0 with b.c.’s that determine λ_n, X_n, e.g.:
X(0) = X(ℓ) = 0  (Dirichlet)  ⇒ X_n = B_n sin (nπ/ℓ) x,  n = 1, 2, ...
X'(0) = X'(ℓ) = 0  (Neumann)  ⇒ X_n = A_n cos (nπ/ℓ) x,  n = 0, 1, ...
X'(0) = X(ℓ) = 0  (mixed)  ⇒ X_n = A_n cos ((n + 1/2)π/ℓ) x,  n = 0, 1, ...
X(−ℓ) = X(ℓ), X'(−ℓ) = X'(ℓ)  (periodic)  ⇒ X_n = A_n cos (nπ/ℓ) x + B_n sin (nπ/ℓ) x
Symmetry of functions satisfying Dirichlet b.c.’s: sin (nπ/ℓ) x is odd about x = 0, ℓ,
so f(x) = Σ_{n=1}^∞ C_n sin (nπ/ℓ) x is also odd about x = 0, ℓ.
That is, if we extend f(x) outside the range 0 < x < ℓ, then the series is odd about x = 0 and x = ℓ, so f(x) is also odd (f(x) = −f(−x), f(x) = −f(2ℓ − x)).
What about Neumann b.c.’s? Symmetry about x = 0, ℓ: the series extension is even about x = 0, ℓ:
f(x) = f(−x),  f(x) = f(2ℓ − x)
[Figure: cos (πx/ℓ) and cos (2πx/ℓ) on 0 ≤ x ≤ ℓ.]
What about mixed? Periodic?
For periodic conditions we have the “full” series f(x) = A_0 + Σ_{n=1}^∞ A_n cos (nπ/ℓ) x + B_n sin (nπ/ℓ) x.
How can the series for periodic conditions and the series for Neumann both be complete?
Using the full series, the coefficients are

A_0 = (1/2ℓ) ∫_{−ℓ}^ℓ f(x) dx
A_n = (1/ℓ) ∫_{−ℓ}^ℓ f(x) cos (nπ/ℓ)x dx,   B_n = (1/ℓ) ∫_{−ℓ}^ℓ f(x) sin (nπ/ℓ)x dx,   n ≥ 1

For Neumann, A_n = (2/ℓ) ∫_0^ℓ f(x) cos (nπ/ℓ)x dx,  B_n = 0.
If we try to construct a “full” Fourier series for a function that is even about x = 0, ℓ:

f_ext(x) = { f(x),    0 < x < ℓ
           { f(−x),  −ℓ < x < 0

Full Fourier series for the extended function:

f_ext(x) = Ã_0 + Σ_{n=1}^∞ Ã_n cos (nπ/ℓ) x + B̃_n sin (nπ/ℓ) x

Ã_n = (1/ℓ) ∫_{−ℓ}^0 f(−x) cos (nπ/ℓ)x dx + (1/ℓ) ∫_0^ℓ f(x) cos (nπ/ℓ)x dx = (2/ℓ) ∫_0^ℓ f(x) cos (nπ/ℓ)x dx

B̃_n = (1/ℓ) ∫_{−ℓ}^0 f(−x) sin (nπ/ℓ)x dx + (1/ℓ) ∫_0^ℓ f(x) sin (nπ/ℓ)x dx = 0!

(the first integral in B̃_n equals (1/ℓ) ∫_0^ℓ f(x)(−sin (nπ/ℓ)x) dx)
That is, this yields the same coefficients as for Neumann b.c.’s. Similarly for Dirichlet b.c.’s. Therefore both series are complete, but they have different symmetries.
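The claim that the even extension kills the sine coefficients and reproduces the Neumann cosine coefficients can be checked numerically; the test function f(x) = x² + x and the grid sizes are arbitrary choices:

```python
import math

l, m = 1.0, 4000
dx = 2 * l / m

def f(x):
    return x ** 2 + x

def fext(x):
    # even extension of f about x = 0
    return f(x) if x >= 0 else f(-x)

def full_coeffs(n):
    # full Fourier coefficients of fext on (-l, l), midpoint rule
    An = Bn = 0.0
    for j in range(m):
        x = -l + (j + 0.5) * dx
        An += fext(x) * math.cos(n * math.pi * x / l) * dx / l
        Bn += fext(x) * math.sin(n * math.pi * x / l) * dx / l
    return An, Bn

def neumann_coeff(n, mm=2000):
    # cosine-series (Neumann) coefficient of f on (0, l)
    ddx = l / mm
    return (2 / l) * sum(f((j + 0.5) * ddx) * math.cos(n * math.pi * (j + 0.5) * ddx / l) * ddx
                         for j in range(mm))

A3, B3 = full_coeffs(3)
print(B3)                       # ≈ 0
print(A3, neumann_coeff(3))     # equal
```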
Laplace’s equation on a finite domain
Where does Laplace’s equation arise?
Electrostatics: ∇ × E = 0, E = −∇φ (electric potential), ∇ · ∇φ = −4πρ (ρ = charge density), i.e. ∇²φ = −4πρ.
Steady state for the heat equation (in higher dimensions): U_t = U_xx + U_yy; at equilibrium, ∇²U = 0.
Irrotational flow: ∇ × v = 0, v = ∇φ, ∇ · v = 0 (incompressible) ⇒ ∇²φ = 0.
Mean exit time of a particle from a region, under diffusion: ∇²v = −1, with v = 0 on the boundary.
In Cartesian coordinates: ∇²u = u_xx + u_yy
In polar coordinates:

u_rr + (1/r) u_r + (1/r²) u_θθ = 0

In spherical coordinates (r, θ, φ):

u_rr + (2/r) u_r + (1/r²) [ u_θθ + cot θ u_θ + (1/sin²θ) u_φφ ] = 0
Example 1: ∇²U = 0 on the rectangle 0 < x < ℓ, 0 < y < 1, with
U = 0 on y = 0 and y = 1,  U = f(y) at x = 0,  U_x = g(y) at x = ℓ.
Separation of Variables: X''/X = −Y''/Y. Should we use X or Y for the eigenfunctions satisfying a bvp?
If X, then X''/X = −Y''/Y = −λ with X(0) = f(y)?, X'(ℓ) = g(y)? - obviously this does not make sense.
Instead, try X''/X = −Y''/Y = λ
⇒ Y'' + λY = 0,  Y(0) = Y(1) = 0 ⇒ Y_n = A_n sin nπy,  λ_n = n²π²
⇒ u(x, y) = Σ_{n=1}^∞ A_n X(x) sin nπy, where
X'' − λ_n X = 0 ⇒ X = C cosh nπx + D sinh nπx

⇒ u(x, y) = Σ_{n=1}^∞ [ C_n cosh nπx + D_n sinh nπx ] sin nπy

Determine the constants - use orthogonality:

u(0, y) = f(y) = Σ_{n=1}^∞ [ C_n cosh 0 + 0 ] sin nπy = Σ C_n sin nπy
⇒ ∫_0^1 sin mπy f(y) dy = C_m · (1/2)

u_x(ℓ, y) = g(y) = Σ_{n=1}^∞ nπ [ C_n sinh nπℓ + D_n cosh nπℓ ] sin nπy
⇒ ∫_0^1 sin mπy g(y) dy = mπ ( C_m sinh mπℓ + D_m cosh mπℓ ) · (1/2)
? Could we use eigenfunctions in x rather than y?
If so, they must satisfy a homogeneous bvp:

X''/X = −λ,  X(0) = X'(ℓ) = 0   (same type of b.c.’s as the original, but homogeneous)
which has solutions X_n = A_n sin ((n + 1/2)π/ℓ) x.
If we had u as such an expansion, it would look like

u(x, y) = Σ_{n=1}^∞ Y_n(y) sin ((n + 1/2)π/ℓ) x

But we want to find Y_n, so we need an equation for Y_n.
Express Y_n in terms of U - how? - orthogonality:

Y_n(y) = (2/ℓ) ∫_0^ℓ sin ((n + 1/2)π/ℓ) x · u(x, y) dx   (normalized)

Y_n(y) is the finite Fourier transform of u(x, y) (the Fourier coefficient).
To get an equation for Y_n, transform the equation for u:

(2/ℓ) ∫_0^ℓ sin ((n + 1/2)π/ℓ) x [ u_xx + u_yy ] dx = 0

⇒ (2/ℓ) ∫_0^ℓ u_xx sin ((n + 1/2)π/ℓ) x dx + Y_n''(y) = 0

Integrating by parts twice:

(2/ℓ) [ u_x sin ((n + 1/2)π/ℓ)x |_0^ℓ − ((n + 1/2)π/ℓ) u cos ((n + 1/2)π/ℓ)x |_0^ℓ ] − ((n + 1/2)²π²/ℓ²) Y_n(y) + Y_n''(y) = 0

⇒ (2/ℓ) [ g(y) sin (n + 1/2)π − 0 − ((n + 1/2)π/ℓ)( 0 − f(y) ) ] − ((n + 1/2)²π²/ℓ²) Y_n(y) + Y_n''(y) = 0

Y_n''(y) − ((n + 1/2)²π²/ℓ²) Y_n(y) = −(2/ℓ) [ g(y) sin (n + 1/2)π + f(y) (n + 1/2)π/ℓ ]

homogeneous solutions: sinh ((n + 1/2)π/ℓ) y, cosh ((n + 1/2)π/ℓ) y
We can solve this using variation of parameters.
Beware of possible Gibbs phenomenon: we are using eigenfunctions which vanish on the boundaries, when f and g do not!
Note: By using the transform, we get an inhomogeneous equation for Y_n(y). The boundary conditions become the inhomogeneous part via the integration by parts. Some of the boundary terms (from the integration) do not contribute, since the eigenfunctions disappear at these points. That is, only those terms contribute which correspond to the known boundary conditions. This tells us we used the right eigenfunction expansion/transform in x.
Shortcut:
[Same rectangle as above: U = 0 on y = 0 and y = 1, U = f(y) at x = 0, U_x = g(y) at x = ℓ.]
If we expect the eigenfunction expansion in y to be U = Σ_{n=1}^∞ B_n(x) sin nπy
(we obtained these eigenfunctions by separation of variables),
then we would also expect

f(y) = Σ_{n=1}^∞ f_n sin nπy,   g(y) = Σ_{n=1}^∞ g_n sin nπy

with f_n = 2 ∫_0^1 f(z) sin nπz dz,  g_n = 2 ∫_0^1 g(z) sin nπz dz, and

u_xx + u_yy = Σ_{n=1}^∞ B_n''(x) sin nπy − Σ_{n=1}^∞ n²π² B_n(x) sin nπy = 0

Then, setting the coefficient of sin nπy to 0 (by orthogonality):

B_n''(x) − n²π² B_n = 0

and substituting in the b.c.’s B_n(0) = f_n, B_n'(ℓ) = g_n:

B_n = f_n cosh nπx + [ g_n/(nπ cosh nπℓ) − f_n sinh nπℓ/cosh nπℓ ] sinh nπx

yielding the same result as on the previous page.
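One can check numerically that this B_n satisfies the ODE and both b.c.’s; the values of n, ℓ, f_n, g_n below are arbitrary illustrative choices:

```python
import math

n, l, fn, gn = 2, 1.0, 0.7, -0.3      # arbitrary illustrative values
w = n * math.pi

def B(x):
    # the shortcut formula for B_n(x)
    cc = gn / (w * math.cosh(w * l)) - fn * math.sinh(w * l) / math.cosh(w * l)
    return fn * math.cosh(w * x) + cc * math.sinh(w * x)

h = 1e-5
Bp_at_l = (B(l + h) - B(l - h)) / (2 * h)                 # B_n'(l)
Bpp = (B(0.4 + h) - 2 * B(0.4) + B(0.4 - h)) / h ** 2     # B_n''(0.4)

print(B(0.0))                # = f_n
print(Bp_at_l)               # ≈ g_n
print(Bpp - w * w * B(0.4))  # ≈ 0, i.e. B'' = (n pi)^2 B
```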
- However, if we try this with the eigenfunctions in x, sin ((n + 1/2)π/ℓ) x:

v(x, y) = Σ_{n=1}^∞ C_n(y) sin ((n + 1/2)π/ℓ) x

and substituting in the equation yields C_n'' − ((n + 1/2)²π²/ℓ²) C_n = 0.
But the boundary conditions do not enter the equation! For example, u(0, y) = 0 if we substitute x = 0 in the series, which does not yield the boundary condition u(0, y) = f(y).
So, in general, the finite Fourier transform with integration by parts must be used, as shown previously for the eigenfunctions in x.
- What if we use a different set of eigenfunctions (for the finite transform)?
e.g. what if we use an expansion

u(x, y) = Σ_{n=1}^∞ C_n(y) sin (nπ/ℓ) x

Note: sin (nπ/ℓ) x are not the eigenfunctions obtained from the separation of variables procedure for the homogeneous problem.
Then C_n(y) = (2/ℓ) ∫_0^ℓ u(x, y) sin (nπ/ℓ) x dx, from the orthogonality of the eigenfunctions.
Transforming the equation:

(2/ℓ) ∫_0^ℓ u_xx sin (nπ/ℓ) x dx + (2/ℓ) ∫_0^ℓ u_yy sin (nπ/ℓ) x dx = 0

The second integral is C_n''(y). Integrating the first by parts:

(2/ℓ) [ u_x sin (nπ/ℓ)x |_0^ℓ − (nπ/ℓ) u cos (nπ/ℓ)x |_0^ℓ ] − (n²π²/ℓ²) C_n(y) + C_n''(y) = 0

The u_x terms vanish since sin nπ = sin 0 = 0, so

⇒ 0 = (2/ℓ)(nπ/ℓ) f(y) − (2/ℓ)(nπ/ℓ) u(ℓ, y) cos nπ + C_n''(y) − (n²π²/ℓ²) C_n(y)

Compare with the equation for Y_n(y) on the previous page.
The equation for C_n(y) involves f(y), but not g(y), and it involves u(ℓ, y), which is unknown.
So the equation for C_n misses some information and requires unknown information.
This is typical if one uses the wrong finite Fourier transform, based on eigenfunctions which don’t come from the correct homogeneous problem!
Summary:
For inhomogeneous problems:
First find the correct eigenfunction expansion from the homogeneous problem (separation of variables); the correct finite Fourier transform is given by the coefficients in this expansion.
Use the transform on the equation to find the correct equation for the coefficients in the e.f. expansion.
The shortcut using substitution (the “quick and dirty method”: substituting the eigenfunctions in the equation and differentiating term-by-term) must be used carefully.
Note about differentiating term-by-term:
If f'(x) is piecewise smooth, the Fourier sine series of f(x) can not in general be differentiated term-by-term.
Ex:

x = 2 Σ_{n=1}^∞ (ℓ/(nπ)) (−1)^{n+1} sin (nπ/ℓ) x

Now x' = 1, but the term-by-term differentiation of the series is

2 Σ_{n=1}^∞ (−1)^{n+1} cos (nπ/ℓ) x ≠ 1

Why? Let’s consider what the series for x gives - it gives the odd extension of x about 0, ℓ:
[Figure: the odd periodic (sawtooth) extension of x, with jumps at x = ±ℓ, ±3ℓ, ...]
which is not continuous.
(Haberman 3.4, 3.5)
More inhomogeneous problems
So far we’ve seen u_t = u_xx, u(x, 0) = f(x), + homogeneous b.c. at 0, ℓ.
Note: there is no “alternative” eigenfunction expansion in t - there’s not a bvp in t; the condition there is an initial condition.
(Inhomogeneous problem: Homework)
We’ve also considered ∇²u = 0 on a rectangle.
Now let’s consider other geometries: Laplace’s equation on a disk r < a, in polar coordinates x = r cos θ, y = r sin θ:

∇²u = (1/r)(r u_r)_r + (1/r²) u_θθ = 0

Does Separation of Variables work here? Substituting u = R(r)Θ(θ):

(1/(Rr)) (rR')' + (1/r²) (Θ''/Θ) = 0
⇒ (r/R)(rR')' = −Θ''/Θ = const   - the ansatz looks promising!

If we let −Θ''/Θ = λ ⇒ Θ'' + λΘ = 0.
What are the b.c.’s? Recall this is Laplace’s equation on a disk: u(r = a, θ) = f(θ).
What are the b.c.’s for u in terms of θ? (2π)-periodic!
Θ'' + λΘ = 0,  Θ(0) = Θ(2π),  Θ'(0) = Θ'(2π)
So the eigenfunctions in Θ give a Fourier series:

⇒ Θ = { cos nθ,  n = 0, 1, 2, ...
      { sin nθ,  n = 1, 2, ...
Aside: Note that one can write a Fourier series in real form

u(r, θ) = a_0 + Σ_{n=1}^∞ a_n cos nθ + b_n sin nθ

or complex form

u(r, θ) = Σ_{n=−∞}^∞ c_n e^{inθ}

Using cos nθ = (e^{inθ} + e^{−inθ})/2 and sin nθ = (e^{inθ} − e^{−inθ})/(2i), letting

c_n = { a_n/2 + b_n/(2i)           n > 0
      { a_{−n}/2 − b_{−n}/(2i)     n < 0
      { a_0                        n = 0

yields the complex form.
So we could take Θ = e^{inθ}, n = 0, ±1, ±2, ... as the eigenfunctions, with λ_n = n².
Then

r(rR')' − n²R = 0 ⇒ r²R'' + rR' − n²R = 0   ← Euler’s equation

⇒ R = r^α ⇒ α(α − 1) + α − n² = 0 ⇒ α = ±n,  R = C r^{±n}  for n ≠ 0
For n = 0: (rR')' = 0 ⇒ R = C_1 log r or C_2.
Note: There are 2 unknown constants in r, but only one b.c. for u in r, i.e. u(r = a, θ) = f(θ).
There is an implied b.c. at r = 0: u is bounded (physical).
⇒ We cannot have log r, or r^{−n} for n > 0 (r^{n} for n < 0).
So, in the expansion for u,

u(r, θ) = A_0 + Σ_{n=1}^∞ A_n r^n cos nθ + B_n r^n sin nθ

Using orthogonality and the b.c. u(a, θ) = f(θ):

A_0 = (1/2π) ∫_0^{2π} f(θ) dθ
A_n = (1/(π aⁿ)) ∫_0^{2π} f(θ) cos nθ dθ
B_n = (1/(π aⁿ)) ∫_0^{2π} f(θ) sin nθ dθ
Aside: Or, in the complex form,

u(r, θ) = c_0 + Σ_{n=1}^∞ c_n r^n e^{inθ} + Σ_{n=−∞}^{−1} c_n r^{−n} e^{inθ}
        = c_0 + Σ_{n=1}^∞ r^n [ c_n e^{inθ} + c_{−n} e^{−inθ} ]

Orthogonality for the complex eigenfunctions (using the complex conjugate of the eigenfunction):

(1/2π) ∫_0^{2π} e^{inθ} e^{−imθ} dθ = { 1 if n = m
                                      { 0 if n ≠ m

Note: Once again the form of the eigenvalue equation is of Sturm-Liouville form, Θ'' + λΘ = 0.
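A sketch of the disk solution in the real form; the test data f(θ) = 1 + cos θ is chosen (an assumption for this example) so the exact solution u = 1 + (r/a) cos θ is known for comparison:

```python
import math

a, N, m = 2.0, 20, 2000
dth = 2 * math.pi / m
ths = [(j + 0.5) * dth for j in range(m)]

def f(th):                     # assumed boundary data
    return 1 + math.cos(th)

# coefficients by midpoint quadrature (note the 1/a^n factor from u(a, theta) = f)
A0 = sum(f(th) for th in ths) * dth / (2 * math.pi)
A = [sum(f(th) * math.cos(n * th) for th in ths) * dth / (math.pi * a ** n)
     for n in range(1, N + 1)]
B = [sum(f(th) * math.sin(n * th) for th in ths) * dth / (math.pi * a ** n)
     for n in range(1, N + 1)]

def u(r, th):
    return A0 + sum(r ** n * (A[n - 1] * math.cos(n * th) + B[n - 1] * math.sin(n * th))
                    for n in range(1, N + 1))

r, th = 1.0, 0.7
print(u(r, th))
print(1 + (r / a) * math.cos(th))   # exact solution: same value
```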
Aside: Laplace’s equation + Neumann b.c.:
What if we solve ∇²u = 0 on the disk r < a with ∂u/∂r |_{r=a} = f(θ)? Is there a solution?
Or, in general: ∇²u = 0 in R, with ∂u/∂n = f(s) on the boundary ∂R
(the derivative in the direction of the outward normal n̄ on ∂R = outflow; f is variable on the boundary).
Recall Green’s theorem:

0 = ∫_R ∇ · ∇u dR   (integrate the equation)
  = ∫_{∂R} n · ∇u dS   (integral on the boundary)
  = ∫_{∂R} f(θ) dθ   (in the case of the disk)

⇒ ∫_{∂R} f(θ) dθ = 0!
Or, if there is an internal source, ∇²u = F(x, y), then we have the general result:

∫_R F(x) dR = ∫_{∂R} f(s) ds

⇒ Balance of outflow and source!
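The solvability condition for the disk can be checked by quadrature; the two data functions below are illustrative choices:

```python
import math

m = 10000
dth = 2 * math.pi / m

def integral(f):
    # midpoint rule around the boundary circle
    return sum(f((j + 0.5) * dth) for j in range(m)) * dth

print(integral(math.cos))                    # ≈ 0: Neumann data cos(theta) is admissible
print(integral(lambda t: 1 + math.cos(t)))  # ≈ 2 pi ≠ 0: no solution exists
```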
The Wave Equation
So far, we’ve seen the heat equation (parabolic) and Laplace’s equation (elliptic).
Now - a hyperbolic equation - the wave equation.
Derivation of the wave equation - “vibrating string”:
u(x, t) = displacement from horizontal.
The transverse force on a small piece of string of length ds, with tension F and the string making angle ψ with the horizontal at one end and ψ + dψ at the other:

F sin(ψ + dψ) − F sin ψ ≈ ρ ds u_tt   (ρ = density)

For small dψ (over a small length of string) the equation becomes

F cos ψ dψ = ρ ds u_tt,  with ds = dx/cos ψ

Use du/dx = tan ψ, du/ds = sin ψ, dx/ds = cos ψ, and dψ/dx = cos²ψ u_xx:

⇒ F u_xx / sec⁴ψ = ρ u_tt,  i.e.  F u_xx / (1 + (du/dx)²)² = ρ u_tt

For small u_x (ψ small), linearize: F u_xx = ρ u_tt, usually written as u_tt = c² u_xx (with c² = F/ρ).
In higher dimensions, the derivation is from acoustics (see Strauss).
What do the Dirichlet, Neumann, and Robin b.c. now mean physically?
Example
utt − c2 uxx = 0 ux (0, t) = 0 = ux (`, t) ← free ends
u(x, 0) = φ(x), ut (x, 0) = ψ(x) initial conditions
Note: 2 initial conditions for well-posedness
We will see we have 2 unknown constants which will be determined by these
conditions.
Back to Separation of Variables: Substitute u = X(x)T (t)
Then
(1/c²) T''/T = X''/X = −λ
X'' + λX = 0,  X'(0) = X'(ℓ) = 0
As before: ⇒ X = cos(nπx/ℓ), n = 0, 1, 2, ...
Again Neumann b.c. yields even eigenfunctions
Then, the equation for T is:
T'' + c²λT = 0 ⇒ T = A cos(nπct/ℓ) + B sin(nπct/ℓ), n ≠ 0
For n = 0, T = A₀ + B₀t
Then
u = A₀ + B₀t + Σ_{n=1}^∞ (A_n cos(nπct/ℓ) + B_n sin(nπct/ℓ)) cos(nπx/ℓ)
(note 2 unknown coefficients in each term)
Apply initial conditions:
u(x, 0) = φ(x) = A₀ + Σ_{n=1}^∞ (A_n cos 0) cos(nπx/ℓ),   cos 0 = 1
Orthogonality gives: A_n (ℓ/2) = ∫₀^ℓ φ(x) cos(nπx/ℓ) dx,  n ≠ 0
A₀ ℓ = ∫₀^ℓ φ(x) dx
ψ(x) = u_t(x, 0) = B₀ + Σ_{n=1}^∞ B_n (nπc/ℓ) cos(nπx/ℓ)
so B_n (nπc/ℓ) = (2/ℓ) ∫₀^ℓ ψ(x) cos(nπx/ℓ) dx,  n ≠ 0
B₀ = (1/ℓ) ∫₀^ℓ ψ(x) dx
What about the inhomogeneous problem?
utt − c2 uxx = f (x, t)
u(0, t) = h(t) u(`, t) = k(t)
u(x, 0) = φ(x) ut (x, 0) = ψ(x)
Note: even the equation is inhomogeneous!
Same procedure holds as before.
One simplification: Aside about superposition
We've already solved (Neumann b.c.):
ũ_tt − c²ũ_xx = 0,  ũ_x(0, t) = 0,  ũ_x(ℓ, t) = 0,  ũ(x, 0) = φ(x),  ũ_t(x, 0) = ψ(x)
Let's assume we can also solve (using a procedure similar to that used to solve for ũ) the Dirichlet problem:
w_tt − c²w_xx = 0,  w(0, t) = 0,  w(ℓ, t) = 0,  w(x, 0) = φ(x),  w_t(x, 0) = ψ(x)
Then, could we write u = w + v, and what equation does v solve? For the linear equation:
w_tt − c²w_xx = 0                    v_tt − c²v_xx = f(x, t)          add ⇒ u_tt − c²u_xx = f(x, t)
w(0, t) = 0,  w(ℓ, t) = 0            v(0, t) = h(t),  v(ℓ, t) = k(t)  add ⇒ u(0, t) = h(t),  u(ℓ, t) = k(t)
w(x, 0) = φ(x),  w_t(x, 0) = ψ(x)    v(x, 0) = 0,  v_t(x, 0) = 0      add ⇒ u(x, 0) = φ(x),  u_t(x, 0) = ψ(x)
Then, if we can find w separately, we solve for v:
v_tt − c²v_xx = f(x, t),  v(0, t) = h(t),  v(ℓ, t) = k(t),  v(x, 0) = 0,  v_t(x, 0) = 0
First, we can write down the solution for w - the method is again separation of variables, but we use eigenfunctions sin(nπx/ℓ) rather than cos(nπx/ℓ) (used to find ũ):
w = Σ_{n=1}^∞ (A_n cos(nπct/ℓ) + B_n sin(nπct/ℓ)) sin(nπx/ℓ)
A_n = (2/ℓ) ∫₀^ℓ φ(x) sin(nπx/ℓ) dx
B_n (nπc/ℓ) = (2/ℓ) ∫₀^ℓ ψ(x) sin(nπx/ℓ) dx
What about v - what method for the inhomogeneous problem, and which eigenfunctions?
Same type of b.c.'s as for w, so we look for a solution of the form
v(x, t) = Σ_{n=1}^∞ c_n(t) sin(nπx/ℓ),   c_n(t) = (2/ℓ) ∫₀^ℓ v(x, t) sin(nπx/ℓ) dx
Then use a finite sine transform to transform the inhomogeneous equation:
(2/ℓ) ∫₀^ℓ sin(nπx/ℓ) v_tt dx − c² (2/ℓ) ∫₀^ℓ sin(nπx/ℓ) v_xx dx = (2/ℓ) ∫₀^ℓ f(x, t) sin(nπx/ℓ) dx ≡ f_n(t)
The first term is c_n''(t). Integrating the second term by parts twice,
∫₀^ℓ sin(nπx/ℓ) v_xx dx = [v_x sin(nπx/ℓ)]₀^ℓ − (nπ/ℓ)[v cos(nπx/ℓ)]₀^ℓ − (n²π²/ℓ²) ∫₀^ℓ sin(nπx/ℓ) v(x, t) dx
so, using the boundary values v(0, t) = h(t), v(ℓ, t) = k(t),
c_n''(t) + c² (n²π²/ℓ²) c_n(t) + c² (2/ℓ)(nπ/ℓ)[(cos nπ) k(t) − h(t)] = f_n(t)
⇒ c_n''(t) + c² (n²π²/ℓ²) c_n(t) = −(2c²nπ/ℓ²)[(−1)ⁿ k(t) − h(t)] + f_n(t) ≡ F(t)
This is an inhomogeneous ODE for cn (t)
We can use variation of parameters to write down the solution, using the i.c. c_n(0) = c_n'(0) = 0 (since v has homogeneous initial conditions):
c_n(t) = A sin(cnπt/ℓ) + B cos(cnπt/ℓ)
         + sin(cnπt/ℓ) ∫₀^t cos(cnπτ/ℓ) F(τ) / (cnπ/ℓ) dτ − cos(cnπt/ℓ) ∫₀^t sin(cnπτ/ℓ) F(τ) / (cnπ/ℓ) dτ
       = (ℓ/cnπ) ∫₀^t sin(cnπ(t − τ)/ℓ) F(τ) dτ     (A = B = 0 by the i.c.'s)
Then u = v + w:
u = Σ_{n=1}^∞ [(A_n cos(nπct/ℓ) + B_n sin(nπct/ℓ)) sin(nπx/ℓ) + c_n(t) sin(nπx/ℓ)]
A similar approach works for other b.c.'s:
Neumann:  u_x(0, t) = r(t),  u_x(ℓ, t) = s(t)  - which eigenfunctions?
mixed:    u(0, t) = q(t),  u_x(ℓ, t) = m(t)  - which eigenfunctions?
etc.
Note:
We don’t have the option of using eigenfunction expansion in t - there are no t
eigenfunctions!
(initial value problem in t, not bvp)
Other approaches for solving the wave equation: (unbounded domain first)
Note that the wave equation can be written as
u_tt − c²u_xx = (∂²/∂t² − c² ∂²/∂x²) u = (∂/∂t − c ∂/∂x)(∂/∂t + c ∂/∂x) u = 0
for c = constant.
- Therefore we could consider u as a solution to either
(∂/∂t − c ∂/∂x) u = 0   or   (∂/∂t + c ∂/∂x) u = 0
that is, a first order linear pde, e.g. u_t − cu_x = 0
We can view this as a directional derivative in the direction (1, −c) in the x-t plane, that is,
(1, −c)·(u_t, u_x) = 0
Therefore, since this directional derivative = 0, u is constant in the direction (1, −c).
Lines parallel to (1, −c) have the equation
x + ct = constant   - characteristic lines, or characteristics
u is constant on these lines, so u = f(x + ct), where f is an arbitrary function.
If we have an initial condition u(x, 0) = g(x), then at t = 0,
f(x + c·0) = g(x) ⇒ f = g ⇒ u = g(x + ct)
Aside: for a general first order, linear pde (from Strauss, 1.2):
a(x, y)u_x + b(x, y)u_y = 0
Recall from the example above: we found characteristic lines s = x + ct, along which dx/dt = −c, x = −ct + s (s const.); that is, on the characteristic curves, s is a constant.
In general, for the equation above we look for curves with dx/dy = a(x, y)/b(x, y); these are the characteristic curves (not necessarily straight lines).
Example: u_x + y u_y = 0   (here a = 1, b = y)
dy/dx = y ⇒ y = Ce^x   (or x = ln y − K)
The (constant) characteristic variable is C = ye^{−x}.
In terms of the characteristic variable, u = u(C) = u(ye^{−x}); that is, u is constant on the characteristic curves ye^{−x} = C.
And indeed,
u_x + y u_y = u'(ye^{−x})·(−ye^{−x}) + y·u'(ye^{−x})·e^{−x} = 0
so u = f(ye^{−x}), where f is an arbitrary function.
For an initial condition, u(0, y) = y² ⇒ f(y) = y²
⇒ u = (ye^{−x})² = y²e^{−2x}
Back to the wave equation:
Recall, we have
u_tt − c²u_xx = (∂/∂t − c ∂/∂x)(∂/∂t + c ∂/∂x) u = 0
so a solution of (∂/∂t − c ∂/∂x)u = 0, or of (∂/∂t + c ∂/∂x)u = 0, will satisfy the wave equation:
solution of 1st: u = f(x + ct)
solution of 2nd: u = g(x − ct)
⇒ u = f(x + ct) + g(x − ct)
What about i.c.'s? u(x, 0) = φ(x), u_t(x, 0) = ψ(x)
Apply i.c.'s:
u(x, 0) = f(x) + g(x) = φ(x),   u_t(x, 0) = c(f'(x) − g'(x)) = ψ(x)
combine: ⇒ 2cf'(x) = ψ(x) + cφ'(x),   2cg'(x) = cφ'(x) − ψ(x)
Then, integrating,
f(x) = (1/2)φ(x) + (1/2c) ∫^x ψ dx + A
g(x) = (1/2)φ(x) − (1/2c) ∫^x ψ dx + B,   A and B are constants
Now replace x with x + ct in the 1st equation, x with x − ct in the second, and adjust A, B so that the i.c. are satisfied (combining the integrals):
u(x, t) = (1/2)[φ(x + ct) + φ(x − ct)] + (1/2c) ∫₀^{x+ct} ψ(x')dx' − (1/2c) ∫₀^{x−ct} ψ(x')dx'
        = (1/2)[φ(x + ct) + φ(x − ct)] + (1/2c) ∫_{x−ct}^{x+ct} ψ(x')dx'
d'Alembert's solution!
What does this solution tell us? What about characteristics?
Consider first the case ψ = 0. Then the solution at (x*, t*) is u(x*, t*) = (1/2)[φ(x* + ct*) + φ(x* − ct*)].
Picture this in the x-t plane: the 2 contributions on the initial line t = 0, at (x* − ct*, 0) and (x* + ct*, 0), propagate along characteristics to give u(x*, t*). Note that φ(x* + ct*) is constant on x + ct = const.; likewise φ(x* − ct*) is constant on x − ct = const., and the contributions combine at (x*, t*). So u(x*, t*) depends on the initial conditions at the 2 locations x* + ct*, x* − ct*.
Similarly, information from x₀ travels along the characteristics x + ct = x₀ and x − ct = x₀ to influence the solution at a later time t at 2 different locations (e.g. it reaches (0, x₀/c) and (2x₀, x₀/c)).
Now consider ψ ≠ 0. Then we have influence from ψ for its argument anywhere between the characteristics (see the integral above), so an initial condition at x₀ influences the solution at all points in the wedge between the characteristics through (x₀, 0): the domain of influence.
Similarly, the domain of dependence corresponds to all initial conditions between the characteristics x − ct = x* − ct* and x + ct = x* + ct* drawn back from a point (x*, t*): the solution u(x*, t*) depends on initial conditions for x* − ct* ≤ x ≤ x* + ct* (the whole interval when ψ ≠ 0); initial conditions at x < x* − ct* or x > x* + ct* do not influence u(x*, t*). The interval from (x* − ct*, 0) to (x* + ct*, 0) is the domain of dependence for the solution at (x*, t*).
Example: For φ(x) = 0, ψ(x) = { 1, −a ≤ x ≤ a;  0, otherwise }
Then formally u(x, t) = (1/2c) ∫_{x−ct}^{x+ct} ψ(s)ds.
Practically, for x − ct < s < x + ct we need to determine where ψ = 1 or 0.
For example, for t ≤ a/c the limits satisfy x + ct ≤ x + a, x − ct ≥ x − a, and we must consider separate regions in space:
x − ct > a or x + ct < −a  ⇒ u = 0
−a < x − ct and x + ct < a  ⇒ u = (1/2c) ∫_{x−ct}^{x+ct} ds = t
x − ct < −a < x + ct < a  ⇒ u = (1/2c) ∫_{−a}^{x+ct} ds = (x + ct + a)/(2c)
−a < x − ct < a < x + ct  ⇒ u = (1/2c) ∫_{x−ct}^{a} ds = (a − x + ct)/(2c)
[Figure: for t < a/c the profile u(x, t) is a trapezoid, equal to t for −a + ct < x < a − ct and dropping linearly to 0 at x = ±(a + ct); at t = a/c the flat top shrinks to a point.]
Now take t > a/c. Then, for x > 0 and x + ct > a:
u(x, t) = (a − x + ct)/(2c)   for ct − a < x < a + ct
For x − ct < −a (and x + ct > a):
u(x, t) = (1/2c) ∫_{−a}^{a} ds = a/c
and by symmetry, we can infer the result for x < 0.
[Figure: for t > a/c the profile is flat, u = a/c, for |x| < ct − a, dropping linearly to 0 between |x| = ct − a and |x| = ct + a.]
We can ask: when does the wave hit a location x*? When x* = ct + a, i.e. t = (x* − a)/c, as seen in the previous figure.
What is the physical intuition? How do we use characteristics to explain how the initial condition propagates?
Compare with the complementary example:
u(x, 0) = φ(x) = { 1, −a < x < a;  0, otherwise },   u_t(x, 0) = 0
Graphically, the initial box splits into two boxes of half the height, one propagating left and one right along the characteristics.
(Exercise: mathematically describe the solution corresponding to this behavior.)
Propagation of discontinuities
Note that the discontinuity in ψ(x) (which appears in u_x) propagates along the characteristics.
For the solution for t ≤ a/c, the derivative u_x is zero for |x| > a + ct and for |x| < a − ct, and |u_x| = 1/(2c) for a − ct < |x| < a + ct. Then there is a discontinuity in the derivative along the lines
x − ct = ±a,   x + ct = ±a
[Figure: these four characteristics divide the x-t plane; u_x = 0 outside them and in the central region, u_x = +1/(2c) in the strip −a < x + ct < a (with x − ct < −a), and u_x = −1/(2c) in the strip −a < x − ct < a (with x + ct > a).]
Similarly, for t > a/c:
u_x = 0 for |x| < ct − a and for |x| > ct + a, and |u_x| = 1/(2c) elsewhere,
so again the discontinuity propagates along the characteristics.
Similarly, if the initial condition u(x, 0) = φ(x) has a discontinuity, it will propagate along the characteristics, and u will not be continuous across those characteristics.
Such a solution is known as a "weak" solution, since it does not satisfy the wave equation u_tt − c²u_xx = 0 everywhere: its derivatives do not exist everywhere. It solves the wave equation in the regions bounded by the characteristics, but there will be different values on either side of the characteristics.
In the previous problems there were no boundary conditions: −∞ < x < ∞, with characteristics x ± ct = const.
Now consider the problem on a semi-infinite interval (x > 0):
u_tt = c²u_xx,   u(x, 0) = φ(x),   u_t(x, 0) = ψ(x),   u(0, t) = 0   (fixed end at 0)
Again, consider u(x*, t*), and recall the domain of dependence.
Note: there is no problem if x* − ct* > 0 - the domain of dependence falls in the domain x > 0, where the initial conditions are defined (only on x > 0).
But what if x* − ct* < 0, that is, t* > x*/c? Then one characteristic meets the boundary at (0, t* − x*/c).
Let's go back to d'Alembert's solution:
u = (1/2)[φ(x + ct) + φ(x − ct)] + (1/2c) ∫_{x−ct}^{x+ct} ψ(s)ds = f(x + ct) + g(x − ct)
and use the boundary condition u(0, t) = f(ct) + g(−ct) = 0.
For x > ct, g(x − ct) is defined: φ(x), ψ(x) are defined for x > 0, and
g(x − ct) = (1/2)φ(x − ct) − (1/2c) ∫₀^{x−ct} ψ(s)ds
But what if x < ct - in other words, what is g(z) for z < 0? (for x = 0 we have g(−ct))
From the boundary condition at x = 0, we have g(−z) = −f(z), so we define (for negative argument):
φ(x − ct) = −φ(ct − x),   ψ(x − ct) = −ψ(ct − x)   for x < ct
Then, for x < ct,
u(x, t) = (1/2)[φ(x + ct) − φ(ct − x)] + (1/2c) ∫₀^{x+ct} ψ(s)ds − (1/2c) ∫_{x−ct}^{0} ψ(−s)ds
        = (1/2)[φ(x + ct) − φ(ct − x)] + (1/2c) ∫₀^{x+ct} ψ(s)ds − (1/2c) ∫₀^{ct−x} ψ(z)dz
        = (1/2)[φ(x + ct) − φ(ct − x)] + (1/2c) ∫_{ct−x}^{x+ct} ψ(s)ds
(and the usual d'Alembert solution for x > ct)
What we have done in defining φ, ψ for negative arguments is extend them as odd functions about zero:
φ_ext(x) = −φ(−x) for x < 0, and similarly for ψ(x).
Graphically, for x* − ct* < 0 the contributions to u(x*, t*) result from initial conditions at ct* − x* and x* + ct* (and, through ψ, in between). This dependence can be drawn from the characteristics through (x*, t*): one of them, x − ct = x* − ct*, hits the boundary at (0, t* − x*/c) and is then "reflected" onto the characteristic x + ct = ct* − x*, eventually heading back to ct* − x* on the initial line.
Note the term (1/2)[φ(x + ct) − φ(ct − x)] in the solution: the minus sign is a result of the odd reflection about x = 0 of the initial conditions.
Thus the solution is the usual d'Alembert solution for the extended initial conditions:
u = (1/2)[φ_ext(x + ct) + φ_ext(x − ct)] + (1/2c) ∫_{x−ct}^{x+ct} ψ_ext(s)ds
where
φ_ext = { φ(x), x > 0;  −φ(−x), x < 0 },   ψ_ext = { ψ(x), x > 0;  −ψ(−x), x < 0 }
Is it similar for Neumann b.c.'s? u_tt − c²u_xx = 0 + i.c.'s, with u_x(0, t) = 0.
We would expect an even extension:
φ_ext = { φ(x), x > 0;  φ(−x), x < 0 },   ψ_ext = { ψ(x), x > 0;  ψ(−x), x < 0 }
Then the d'Alembert solution is
u = (1/2)[φ(x + ct) + φ(x − ct)] + (1/2c) ∫_{x−ct}^{x+ct} ψ(s)ds   for x − ct > 0
u = (1/2)[φ(x + ct) + φ(ct − x)] + (1/2c) ∫_{x−ct}^{0} ψ(−y)dy + (1/2c) ∫₀^{x+ct} ψ(y)dy
  = (1/2)[φ(x + ct) + φ(ct − x)] + (1/c) ∫₀^{ct−x} ψ(s)ds + (1/2c) ∫_{ct−x}^{x+ct} ψ(s)ds   for x − ct < 0
Note that the boundary conditions and initial conditions are satisfied:
ux (0, t) = 0, u(x, 0) = φ(x), ut (x, 0) = ψ(x)
And of course the solution is of the form which satisfies the equation!
Does this continue for finite intervals, 0 ≤ x ≤ ℓ? For example:
u_tt − c²u_xx = 0,   0 < x < ℓ
u(x, 0) = φ(x),   u_t(x, 0) = ψ(x),   0 < x < ℓ
u(0, t) = u(ℓ, t) = 0
We have "odd" boundary conditions at both 0 and ℓ.
Already we've seen φ(x), ψ(x) must be extended to be odd about x = 0: φ_ext(−x) = −φ_ext(x). Similarly, we can define the extension as odd about x = ℓ:
φ_ext = { φ(x), 0 < x < ℓ;  −φ(−x), x < 0;  −φ(2ℓ − x), x > ℓ }   etc.
Note: the extended function φ_ext is periodic with period 2ℓ.
ψ(x) is extended in a similar way.
Then
u = (1/2)[φ_ext(x + ct) + φ_ext(x − ct)] + (1/2c) ∫_{x−ct}^{x+ct} ψ_ext(s)ds
where, for a given x and t, the functions φ_ext, ψ_ext must be evaluated appropriately.
Ex:
u_tt = u_xx,   u(x, 0) = 0,   u_t(x, 0) = { 1, 1/2 < x < 3/2;  0, otherwise },   u(0, t) = u(2, t) = 0
Then the extended function ψ_ext (odd about 0 and about 2, with period 4) equals 1 on (1/2, 3/2), −1 on (−3/2, −1/2) and on (5/2, 7/2), etc.
Let's evaluate u(1/2, 3 1/2) and u(1, 2 1/2).
First, evaluate u(1, 2 1/2) (c = 1):
u = (1/2) ∫_{1−5/2}^{1+5/2} ψ_ext dx = (1/2) ∫_{−3/2}^{7/2} ψ_ext dx = (1/2) ∫_{5/2}^{7/2} ψ_ext dx = −1/2
(the contributions from (−3/2, −1/2) and (1/2, 3/2) cancel).
Graphically: following the characteristics, with reflections off the boundary, the solution u(1, 2 1/2) depends on the initial data between 1/2 and 3/2. The reflection off the boundary gives a (−) sign, so
u(1, 2 1/2) = −(1/2) ∫_{1/2}^{3/2} ψ(s)ds = −1/2
Similarly, for u(1/2, 3 1/2):
u(1/2, 3 1/2) = (1/2) ∫_{1/2−7/2}^{1/2+7/2} ψ_ext dx = (1/2) ∫_{−3}^{4} ψ_ext dx
             = (1/2) ∫_{−3}^{−5/2} 1 dx + 0 + (1/2) ∫_{5/2}^{7/2} (−1) dx = −1/4
and graphically, following the characteristics with reflections off both boundaries,
u(1/2, 3 1/2) = −(1/2) ∫₀^{1} ψ(s)ds = −1/4
For mixed conditions, e.g.
u_tt − c²u_xx = 0,   u(x, 0) = φ(x),   u_t(x, 0) = ψ(x),   u(0, t) = u_x(ℓ, t) = 0
we should extend φ(x), ψ(x) as odd functions about x = 0 (Dirichlet b.c.) and even about x = ℓ (Neumann b.c.), which gives 4ℓ-periodic functions.
What about inhomogeneous b.c.’s? Also inhomogeneous equations? (source)
First: Source on boundary (inhomogeneous b.c.)
utt − c2 uxx = 0
u(x, 0) = φ(x), ut (x, 0) = ψ(x)
u(0, t) = h(t)
Recall superposition:
u = v + w, where v satisfies
vtt − c2 vxx = 0, v(x, 0) = φ(x), vt (x, 0) = ψ(x), v(0, t) = 0
and w satisfies wtt − c2 wxx = 0 w(x, 0) = 0 = wt (x, 0)
w(0, t) = h(t)
We know the solution for v (d’Alembert’s, with extended odd function φ(x))
As always, w = f (x + ct) + g(x − ct)
Then w(0, t) = f (ct) + g(−ct) = h(t)
w(x, 0) = f (x) + g(x) = 0
wt (x, 0) = (f 0 (x) − g 0 (x))c = 0
for x > 0
The second 2 equations (the i.c.) ⇒ f(x) = −g(x) = const for x > 0; but in order to satisfy the b.c., we must have f(x) = g(x) = 0 for x > 0 (the constant can be taken to be zero).
Then, for t > 0, f (ct) = 0,
⇒ g(−ct) = h(t)
Then w = f(x + ct) + g(x − ct):
For x > ct, w = 0 (both arguments > 0), so there is no contribution from the source at the boundary.
For x < ct,
w = g(x − ct)   (x + ct > 0, so f(x + ct) = 0)
  = h((ct − x)/c) = h(t − x/c)   ← contribution from the boundary
so
u = (1/2)[φ_ext(x − ct) + φ_ext(x + ct)] + (1/2c) ∫_{x−ct}^{x+ct} ψ_ext(s)ds + h(t − x/c)
where the h(t − x/c) term is present for x < ct only.
[Figure: for a point (x₁, t₁) with x₁ − ct₁ > 0 there is no contribution from the boundary - only the usual d'Alembert solution; for (x₂, t₂) with x₂ − ct₂ < 0 the solution combines contributions from the i.c. and the b.c., with the reflected characteristic meeting the boundary at (0, t₂ − x₂/c).]
Finally, if we consider a source in the wave equation:
u_tt − c²u_xx = F(x, t),   u(x, 0) = φ(x),   u_t(x, 0) = ψ(x),   u(0, t) = h(t)
Once again, we can construct the solution by superposition, u = v + w, where for v there are inhomogeneous i.c. and b.c.'s, and w satisfies:
w_tt − c²w_xx = F(x, t),   w(x, 0) = w_t(x, 0) = 0,   w(0, t) = 0
First, consider the equation with no b.c. (−∞ < x < ∞):
w_tt − c²w_xx = F(x, t),   w(x, 0) = w_t(x, 0) = 0
Then, under the change of variables ξ = x + ct, η = x − ct,
(∂/∂t − c ∂/∂x)(∂/∂t + c ∂/∂x)u = −4c² ∂²u/∂ξ∂η = F((ξ + η)/2, (ξ − η)/2c)
⇒ u = −(1/4c²) ∫∫ F dη' dξ'
Choosing the limits so that the initial conditions are satisfied,
u = +(1/4c²) ∫_η^ξ ∫_{η'}^{ξ} F((ξ' + η')/2, (ξ' − η')/2c) dξ' dη',   ξ' = x' + ct',  η' = x' − ct'
then η < η' < ξ' < ξ ⇒ x − ct < x' − ct' < x' + ct' < x + ct
⇒ x − c(t − t') < x' < x + c(t − t')   and   0 < t' < t
So, writing the solution back in terms of x, t yields
u = (1/4c²) ∫₀^t ∫_{x−c(t−t')}^{x+c(t−t')} (2c) F(x', t') dx' dt'
since dξ' dη' = |det( [1, c; 1, −c] )| dx' dt' = 2c dx' dt', i.e.
u(x, t) = (1/2c) ∫₀^t ∫_{x−c(t−t')}^{x+c(t−t')} F(x', t') dx' dt'
Graphically, this is integrating over the triangle in the x'-t' plane bounded by the characteristics x' − ct' = x − ct and x' + ct' = x + ct: the triangle with apex (x, t) and base from (x − ct, 0) to (x + ct, 0) on the initial line.
Of course, with non-zero initial conditions, the solution is
u(x, t) = (1/2)(φ(x − ct) + φ(x + ct)) + (1/2c) ∫_{x−ct}^{x+ct} ψ(s)ds + (1/2c) ∫₀^t ∫_{x−c(t−t')}^{x+c(t−t')} F(x', t') dx' dt'
Now add boundary conditions:
The same rule applies for Dirichlet and/or Neumann conditions on the boundary,
e.g. x = a, x = b.
For Dirichlet conditions at x = a, the functions φ(x), ψ(x), F (x, t) are extended
as odd functions (in space) about x = a
For Neumann conditions, e.g. at x = b, the extensions are as even functions, etc.
Then, using the appropriately extended functions φ_ext(x), ψ_ext(x), F_ext(x, t),
u(x, t) = (1/2)[φ_ext(x + ct) + φ_ext(x − ct)] + (1/2c) ∫_{x−ct}^{x+ct} ψ_ext(s)ds + (1/2c) ∫₀^t ∫_{x−c(t−t')}^{x+c(t−t')} F_ext(x', t') dx' dt'
One more method for solving the inhomogeneous problem:
Duhamel's principle
For u_tt − c²u_xx = g(x, t), we consider a related problem:
v_tt − c²v_xx = 0,   v(x, t = τ; τ) = 0,   v_t(x, t = τ; τ) = g(x, τ)
So τ is a parameter, with t > τ in the solution for v(x, t; τ).
Then, letting T = t − τ and ṽ(x, T; τ) = v(x, t; τ):
ṽ_TT − c²ṽ_xx = 0,   ṽ(x, 0; τ) = 0,   ṽ_T(x, 0; τ) = g(x, τ)
so, by d'Alembert,
ṽ(x, T; τ) = (1/2c) ∫_{x−cT}^{x+cT} g(s, τ)ds   ⇒   v(x, t; τ) = (1/2c) ∫_{x−c(t−τ)}^{x+c(t−τ)} g(s, τ)ds
Now the claim is u(x, t) = ∫₀^t v(x, t; τ)dτ.
To verify:
u_t(x, t) = v(x, t; t) + ∫₀^t v_t(x, t; τ)dτ
u_tt(x, t) = (∂/∂t)[v(x, t; t)] + v_t(x, t; t) + ∫₀^t v_tt(x, t; τ)dτ,   with v_t(x, t; t) = g(x, t)
and u_xx = ∫₀^t v_xx(x, t; τ)dτ
so
u_tt − c²u_xx = (∂/∂t)[v(x, t; t)] + g(x, t) + ∫₀^t (v_tt − c²v_xx)dτ
The first term is 0 by the i.c. (v(x, t; t) = 0), and the integral is 0 since v solves the wave equation. Therefore u_tt − c²u_xx = g(x, t), and the claim is verified.
Compare with d'Alembert's solution!
Heat Equation on an infinite domain
Solution of IVP for heat equation, −∞ < x < ∞
ut = kuxx
t > 0,
−∞ < x < ∞
u(x, 0) = f (x)
"Review" of Laplace transform:
U(x, s) = ∫₀^∞ e^{−st} u(x, t)dt ≡ L(u(x, t))
For ODE's (constant coefficients): a u_tt + b u_t + c u = 0,   u(0) = u₀,   u_t(0) = v₀
U(s) = ∫₀^∞ e^{−st} u(t)dt ≡ L(u(t))
Steps: transform the equation and integrate by parts (IBP), assuming e^{−st}u(t) → 0 as t → ∞:
∫₀^∞ e^{−st} a u_tt dt + ∫₀^∞ e^{−st} b u_t dt + ∫₀^∞ c e^{−st} u dt = 0
[a e^{−st} u_t]₀^∞ + s ∫₀^∞ e^{−st} a u_t dt + [b e^{−st} u]₀^∞ + bs ∫₀^∞ e^{−st} u dt + c ∫₀^∞ e^{−st} u dt = 0
⇒ −a u_t(0) + s[a e^{−st} u]₀^∞ + s²a U(s) − b u(0) + bs U(s) + c U = 0
⇒ (s²a + bs + c) U(s) = a v₀ + b u₀ + s a u₀
Now we can solve for U(s) algebraically ⇒ the transform converts differentiation to algebra:
U(s) = (a v₀ + b u₀ + s a u₀)/(s²a + bs + c)   ⇒   u(t) = L⁻¹[(a v₀ + b u₀ + s a u₀)/(s²a + bs + c)]
The inversion can be accomplished by complex integration. Many tables of Laplace transforms exist!
So what about PDE's? Let U(x, s) = L(u(x, t)) and apply the transform to u_t = ku_xx, t > 0, −∞ < x < ∞, u(x, 0) = f(x).
Using the same transform (once again assuming e^{−st}u(x, t) → 0 as t → ∞):
∫₀^∞ e^{−st} u_t(x, t)dt = [e^{−st} u(x, t)]₀^∞ + s ∫₀^∞ e^{−st} u(x, t)dt = −u(x, 0) + sU(x, s)
k ∫₀^∞ e^{−st} u_xx(x, t)dt = kU_xx(x, s)
⇒ −u(x, 0) + sU(x, s) = kU_xx(x, s)
⇒ kU_xx − sU = −f(x)
This is an ODE in the variable x, treating s as a parameter
How do you solve such an ODE?
Variation of parameters solution:
With the homogeneous solutions e^{±√(s/k) x},
U(x, s) = ∫_{−∞}^x (f(y)/k) e^{√(s/k)(y−x)} / (2√(s/k)) dy + ∫_x^∞ (f(y)/k) e^{√(s/k)(x−y)} / (2√(s/k)) dy + c₁ e^{√(s/k)x} + c₂ e^{−√(s/k)x}
A more compact form (taking c₁ = c₂ = 0):
U(x, s) = ∫_{−∞}^∞ (1/2)√(k/s) e^{−√(s/k)|y−x|} (f(y)/k) dy
The function (1/2)√(k/s) e^{−√(s/k)|y−x|} is known as G(x, y): the kernel, or Green's function, for the operator in the equation for U(x, s).
How does one obtain the Green's function? (for an ODE, for now)
Let's consider a general ODE, Lu = −f(x). We've seen the Sturm-Liouville form,
Lu = d/dx[p(x) du/dx] + qu = −f(x)
with some boundary conditions u(α) = u(β) = 0.
We've seen the solution when there are eigenfunctions; let's consider the general case, using variation of parameters. Variation of parameters says that if we know 2 solutions of the homogeneous problem, u₁(x), u₂(x), then
u(x) = −∫_α^x [(u₁(x)u₂(ξ) − u₂(x)u₁(ξ)) f(ξ)] / (p(x)[u₁'(x)u₂(x) − u₂'(x)u₁(x)]) dξ + c₁u₁(x) + c₂u₂(x)
     = −∫_α^x R(x, ξ) f(ξ)dξ + c₁u₁(x) + c₂u₂(x)
where the denominator K ≡ p(x)[u₁'(x)u₂(x) − u₂'(x)u₁(x)] (call it K for short), R(x, ξ) is the "influence function", the integral is the particular solution, and c₁u₁ + c₂u₂ is the homogeneous solution.
Then one can solve for c₁, c₂ by satisfying the boundary conditions. After some algebra, we can write the solution as
u(x) = −∫_α^x [(u₁(α)u₂(ξ) − u₂(α)u₁(ξ))(u₁(x)u₂(β) − u₂(x)u₁(β)) / (KD)] f(ξ)dξ
       + ∫_x^β [(u₁(x)u₂(α) − u₂(x)u₁(α))(u₁(ξ)u₂(β) − u₂(ξ)u₁(β)) / (KD)] f(ξ)dξ
where D ≡ u₁(α)u₂(β) − u₂(α)u₁(β)  (call this D).
Some things to note, with K = p(x)[u₁'(x)u₂(x) − u₂'(x)u₁(x)]:
• K = const:  K'(x) = p'(x)[u₁'u₂ − u₂'u₁] + p(x)[u₁''u₂ − u₂''u₁] = −q u₁u₂ + q u₂u₁ = 0,
  using the (homogeneous) equation Lu = 0 for u₁, u₂.
• D is of course a const.
• the 2 "parts" (integrals) for u(x) are symmetric in α, β.
Then we can write the solution compactly as
u(x) = ∫_α^β G(x, ξ) f(ξ)dξ   with
G(x, ξ) = { (1/KD)[u₁(ξ)u₂(α) − u₁(α)u₂(ξ)][u₁(x)u₂(β) − u₂(x)u₁(β)],  ξ ≤ x
            (1/KD)[u₁(x)u₂(α) − u₂(x)u₁(α)][u₁(ξ)u₂(β) − u₂(ξ)u₁(β)],  x ≤ ξ }
Note the following properties of G(x, ξ):
• boundary conditions satisfied: G|_{x=α} = G|_{x=β} = 0
• jump in the derivative:
G_x|_{x=ξ+} − G_x|_{x=ξ−} ≡ lim_{x↓ξ} G_x(x, ξ) − lim_{x↑ξ} G_x(x, ξ)
= (1/KD)[−u₁'(ξ)u₂(ξ)u₁(α)u₂(β) − u₂'(ξ)u₁(ξ)u₂(α)u₁(β) + u₂'(ξ)u₁(ξ)u₁(α)u₂(β) + u₁'(ξ)u₂(ξ)u₂(α)u₁(β)]
= −(K/p(ξ)) D / (KD) = −1/p(ξ)   (after some algebra, using the definitions of K, D)
• continuity: G|_{x=ξ+} − G|_{x=ξ−} = 0
So, in summary, one can solve the problem
Lu = −f(x),   u(α) = u(β) = 0
using the Green's function G(x, ξ):
u(x) = ∫_α^β G(x, ξ) f(ξ)dξ
where G(x, ξ) is constructed using the homogeneous solutions u₁, u₂.
From the expression above, we see G consists of 2 parts, say G₁, G₂:
G(x, ξ) = { G₁, x ≤ ξ;  G₂, x ≥ ξ }
G₁ satisfies the boundary condition at x = α (for x ≤ ξ);
G₂ satisfies the boundary condition at x = β (for x ≥ ξ).
G₁, G₂ are both linear combinations of u₁(x), u₂(x),
so
G(x, ξ) = { Au₁ + Bu₂, x ≤ ξ;  Cu₁ + Du₂, x ≥ ξ }
and we have 4 unknown constants, 2 boundary conditions, and 2 jump conditions, i.e.
G₂|_{x=ξ} − G₁|_{x=ξ} = 0,   G₂ₓ|_{x=ξ} − G₁ₓ|_{x=ξ} = −1/p(ξ)
So we have 4 conditions to find the 4 unknown constants.
Ex: u'' = −f(x),   u(0) = 0,  u(1) = 0
The solution can be found from
u = ∫₀^1 G(x, ξ) f(ξ)dξ
In this problem, p(x) = 1, q = 0 in Lu = (pu')' + qu = −f(x).
The homogeneous solutions are u₁ = 1, u₂ = x.
G(x, ξ) = { A + Bx, x < ξ;  C + Dx, x > ξ }
To satisfy the boundary conditions, A + B·0 = 0, C + D·1 = 0
⇒ G(x, ξ) = { Bx, x < ξ;  C(1 − x), x > ξ }
Then, to satisfy the jump conditions,
[C(1 − x)]|_{x=ξ} − [Bx]|_{x=ξ} = C − Cξ − Bξ = 0
−C − B = −1 ⇒ B = 1 − C
so C(1 − ξ) = (1 − C)ξ ⇒ C = ξ,  B = 1 − ξ
⇒ G(x, ξ) = { (1 − ξ)x, x ≤ ξ;  ξ(1 − x), x ≥ ξ }   ⇐ Note: G(x, ξ) = G(ξ, x)
so u(x) = ∫₀^x ξ(1 − x)f(ξ)dξ + ∫_x^1 (1 − ξ)x f(ξ)dξ
Note:
u(0) = u(1) = 0
u' = x(1 − x)f(x) − (1 − x)x f(x) − ∫₀^x ξ f(ξ)dξ + ∫_x^1 (1 − ξ)f(ξ)dξ
u'' = −x f(x) − (1 − x)f(x) = −f(x)
Ex: This also works for other "symmetric" boundary conditions (Dirichlet, Neumann, and Robin are included in these):
u'' = −f(x),   u(0) = 0,   u'(1) + βu(1) = 0
Again, assume a linear combination of the homogeneous solutions of u'' = 0:
G(x, ξ) = { Ax + B, x ≤ ξ;  Cx + D, x ≥ ξ }
u(0) = 0 ⇒ B = 0
u'(1) + βu(1) = 0 ⇒ C + β(C + D) = 0 ⇒ D = −C(1 + β)/β
⇒ G(x, ξ) = { Ax, x ≤ ξ;  C(x − (1 + β)/β), x ≥ ξ }
The jump conditions give
Aξ = C(ξ − (1 + β)/β),   C − A = −1 ⇒ A = 1 + C
so (1 + C)ξ = C(ξ − (1 + β)/β) ⇒ C = −βξ/(1 + β),   A = 1 + C = (1 + β(1 − ξ))/(1 + β)
⇒ G(x, ξ) = { [(1 + β(1 − ξ))/(1 + β)] x,  x ≤ ξ
              [(1 + β(1 − x))/(1 + β)] ξ,  x ≥ ξ }
Note: Again G(x, ξ) is symmetric in x, ξ.
More properties of G(x, ξ) discussed later.
Now let's return to the problem of solving the heat equation:
u_t = ku_xx,   u(x, 0) = f(x),   −∞ < x < ∞, t > 0
We had used the Laplace transform to "simplify" the equation to an inhomogeneous ODE for U ≡ L(u):
U_xx − (s/k)U = −f(x)/k
This is an inhomogeneous ODE! We can use a Green's function, but what are the "boundary conditions"?
As x → ±∞, physically we expect U to remain bounded (in fact, it will decay). So
we will use this assumption as “boundary conditions”.
√s
First, we need the homogeneous solutions, which are e ± k x
√s
√s
(
Ae− √ k x + Be √ k x x ≤ ξ
So G(x, ξ) =
s
s
Ce− k x + De k x x ≥ ξ
Using the “boundary conditions”, A = 0, D = 0 to assure that G(x, ξ) (and U )
remains bounded(as x →
√ ±∞.
+ s
Be √ k x x ≤ ξ
So G(x, ξ) =
− s
Ce k x x ≥ ξ
48
The jump conditions yield:
Be^{√(s/k)ξ} = Ce^{−√(s/k)ξ}
−√(s/k) Ce^{−√(s/k)ξ} − √(s/k) Be^{√(s/k)ξ} = −1
⇒ 2C = √(k/s) e^{√(s/k)ξ},   2B = √(k/s) e^{−√(s/k)ξ}
⇒ G(x, ξ) = { (1/2)√(k/s) e^{√(s/k)(x−ξ)},  x ≤ ξ
              (1/2)√(k/s) e^{√(s/k)(ξ−x)},  x ≥ ξ }
This yields the Laplace transform of the solution u:
U = L(u) = ∫_{−∞}^∞ (f(ξ)/k) G(x, ξ)dξ
Note that G(x, ξ) can be written as (1/2)√(k/s) e^{−√(s/k)|x−ξ|}, so
u(x, t) = L⁻¹(U) = L⁻¹[ ∫_{−∞}^∞ (f(ξ)/k)(1/2)√(k/s) e^{−√(s/k)|x−ξ|} dξ ]
The inversion is defined as
(1/2πi) ∫_{α−i∞}^{α+i∞} e^{st} U(s)ds = u(t),   α > 0,
where the (complex) integration is over a closed curve which has a vertical part lying to the right of the imaginary axis, and is "closed" on the positive real side.
Another way to view the inversion is to use tables, and write the integral as a limit of sums:
u(x, t) = L⁻¹[ ∫_{−∞}^∞ (f(ξ)/2) e^{−√(s/k)|x−ξ|} / √(ks) dξ ] = L⁻¹[ lim_{n→∞} Σ_{i=−n}^n (f(y_i)/2) e^{−√(s/k)|x−y_i|} / √(ks) Δy ]
Assuming this "sum" (integral) converges uniformly, and that the L⁻¹ of each term in the series exists, yields
u(x, t) = lim_{n→∞} Σ_{i=−n}^n (f(y_i)/2) L⁻¹[ e^{−√(s/k)|x−y_i|} / √(sk) ] Δy_i
        = (1/2) lim_{n→∞} Σ_{i=−n}^n f(y_i) (e^{−(x−y_i)²/4kt} / √(kπt)) Δy   (from tables)
        = (1/(2√(kπt))) ∫_{−∞}^∞ e^{−(x−y)²/4kt} f(y)dy   (assuming the sum converges)
That is, for the heat equation
u_t = ku_xx,   −∞ < x < ∞, t > 0
with initial condition u(x, 0) = f(x), the solution is
u(x, t) = ∫_{−∞}^∞ G(x, t; ξ) f(ξ)dξ,   G(x, t; ξ) = (1/√(4kπt)) e^{−(x−ξ)²/4kt}
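This convolution formula is easy to test: a Gaussian initial condition stays Gaussian under the heat flow. For f(x) = e^{−x²}, the exact solution is u(x, t) = e^{−x²/(1+4kt)} / √(1 + 4kt); the values k, x0, t0 below are illustrative.

```python
import numpy as np

def trap(y, x):
    # simple trapezoid rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

k = 0.5
xi = np.linspace(-20.0, 20.0, 200001)
f = np.exp(-xi ** 2)                      # Gaussian initial condition

def u(x, t):
    # convolve the heat kernel G(x, t; xi) with f
    Gk = np.exp(-(x - xi) ** 2 / (4 * k * t)) / np.sqrt(4 * np.pi * k * t)
    return trap(Gk * f, xi)

x0, t0 = 0.7, 1.3
exact = np.exp(-x0 ** 2 / (1 + 4 * k * t0)) / np.sqrt(1 + 4 * k * t0)
err = abs(u(x0, t0) - exact)
print(err < 1e-8)
```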
where G(x, t; ξ) is known as the fundamental solution, Green’s function, or kernel,
for the heat equation.
Now that we know the Green’s function for the heat equation, we can use it to
solve similar problems on semi-infinite domains
Ex
ut = kuxx , 0 < x < ∞, t > 0
u(x, 0) = f (x) x ≥ 0
u(0, t) = 0
Note: The initial condition is as before, but now there is a Dirichlet b.c. at x = 0.
This suggests u should be an odd function about x = 0, if we were to "extend" it as in the wave equation. That is, we extend the i.c. as
f_ext(x) = { f(x), x > 0;  −f(−x), x < 0 }
Then we expect that the solution will be
u(x, t) = ∫_{−∞}^∞ G(x, t; ξ) f_ext(ξ)dξ
        = (1/√(4kπt)) ∫₀^∞ e^{−(x−ξ)²/4kt} f(ξ)dξ − (1/√(4kπt)) ∫_{−∞}^0 e^{−(x−ξ)²/4kt} f(−ξ)dξ
        = (1/√(4πkt)) [ ∫₀^∞ e^{−(x−ξ)²/4kt} f(ξ)dξ − ∫₀^∞ e^{−(x+ξ)²/4kt} f(ξ)dξ ]
        = (1/√(4πkt)) ∫₀^∞ [ e^{−(x−ξ)²/4kt} − e^{−(x+ξ)²/4kt} ] f(ξ)dξ
So
G_odd(x, t; ξ) = (1/√(4πkt)) [ e^{−(x−ξ)²/4kt} − e^{−(x+ξ)²/4kt} ]
is the Green's function, or kernel, for the heat equation with Dirichlet b.c. at x = 0 (on the semi-infinite interval).
One can view these Green's functions from the following viewpoint. First, note that (in the infinite case)
lim_{t→0} ∫_{−∞}^∞ G(x, t; ξ) f(ξ)dξ = u(x, 0) = f(x)
This implies that as t → 0, G(x, t; ξ) → δ(x − ξ).
The δ-function δ(x − ξ) has the following properties:
δ(x − ξ) is the derivative of the Heaviside (step) function H(x − ξ):
H(x − ξ) = { 0, x < ξ;  1, x ≥ ξ },   H'(x − ξ) = δ(x − ξ) = { 0, x ≠ ξ;  ∞, x = ξ }
∫_{−∞}^∞ δ(x − ξ)dx = 1,   ∫_{−∞}^∞ g(x) δ(x − ξ)dx = g(ξ)
δ(x − ξ) is known as a "generalized" function; that is, it is not (finitely) defined everywhere, but it has meaning when integrated against a "nice" test function. We will look further at generalized functions later.
Let's return to the function G(x, t; ξ). It has this "sharply peaked", pulse-shaped behavior as t → 0 (the smaller t is, the narrower and taller the pulse about x = ξ). In fact, G(x, t; ξ) is a solution of
G_t = kG_xx,   G(x, 0; ξ) = δ(x − ξ)
That is, as t → 0, G(x, t; ξ) is more and more sharply peaked, and behaves as δ(x − ξ).
Thus G(x, t; ξ) is sometimes called the "source function". Then the solution u(x, t) can be viewed as being "built up" from sources as described by f(x). At t = 0,
u(x, 0) = ∫_{−∞}^∞ δ(x − ξ) f(ξ)dξ = f(x)
That is, it is a combination of point sources (δ(x − ξ)) with strengths f(ξ).
Then, as t increases, these sources "spread out" as described by G(x, t; ξ), and the resulting convolution is u(x, t) = ∫_{−∞}^∞ G(x, t; ξ) f(ξ)dξ.
So the solution might look something like this: at t = 0, u(x, 0) = f(x); for t > 0, u(x, t) spreads out.
Then, the solution on a semi-infinite region for
u_t = ku_xx,   u(x, 0) = f(x),   u(0, t) = 0
is
u(x, t) = ∫₀^∞ G_odd(x, t; ξ) f(ξ)dξ
G_odd = (1/√(4πkt)) [ e^{−(x−ξ)²/4kt} − e^{−(x+ξ)²/4kt} ] = G(x, t; ξ) − G(x, t; −ξ)
where G is the "free-space" Green's function. G_odd(x, t; ξ) can be viewed as the difference of 2 source functions: it is the solution if we had a + point source (δ-function) at x = ξ and a − point source at x = −ξ.
So, if there is a (+) source at ξ, an "image" source is placed at x = −ξ, with − strength, so that the boundary condition at x = 0 is satisfied: G_odd(0, t; ξ) = 0. The contributions from the 2 sources cancel at x = 0.
Then the solution u(x, t) is built up from sources and images at all values 0 < ξ < ∞, with strengths f(ξ), as above.
Constructing a solution in this way is known as the method of images, for obvious reasons. Let's apply it to the Neumann problem. It's easy to show that the solution to
u_t = ku_xx,   u_x(0, t) = 0,   u(x, 0) = f(x)
is
u(x, t) = ∫₀^∞ G_even(x, t; ξ) f(ξ)dξ,   G_even(x, t; ξ) = G(x, t; ξ) + G(x, t; −ξ)
which corresponds to placing a + source at −ξ, to satisfy the boundary condition.
For Robin b.c.'s:
u_t = ku_xx,   u(x, 0) = f(x),   0 < x < ∞, t > 0,   u_x(0, t) − hu(0, t) = 0
How can one write the solution in terms of a Green's function?
Consider v = u_x − hu. Then
v_t = kv_xx,   0 < x < ∞, t > 0
v(x, 0) = u_x(x, 0) − hu(x, 0) = f'(x) − hf(x) ≡ g(x)
v(0, t) = 0
Then the solution for v is found by extending g(x) as an odd function about x = 0:
v(x, t) = (1/√(4πkt)) ∫_{−∞}^∞ e^{−(x−ξ)²/4kt} g_ext(ξ)dξ,   g_ext(x) = { g(x), x > 0;  −g(−x), x < 0 }
Therefore, this suggests that for u(x, t) we can write the solution as
u(x, t) = (1/√(4πkt)) ∫_{−∞}^∞ e^{−(x−ξ)²/4kt} f_ext(ξ)dξ
if we can find the right extension f_ext(x). Note,
v(x, t) = u_x − hu = (1/√(4πkt)) ∫_{−∞}^∞ (e^{−(x−ξ)²/4kt})_x f_ext(ξ)dξ − h ∫_{−∞}^∞ G f_ext(ξ)dξ
Since (G)_x = −(G)_ξ, integrating the first term by parts gives ∫_{−∞}^∞ G f_ext'(ξ)dξ.
Then v(x, t) = ∫_{−∞}^∞ G(x, t; ξ)[f_ext'(ξ) − h f_ext(ξ)]dξ, which suggests how to extend f properly:
⇒ f_ext' − h f_ext = g_ext
⇒ fext (x) =
(
f (x) x > 0
ˆ
f(x)
x<0
where −[f 0 (−x) − hf (−x)] = −g(−x), i.e.
fˆ0 (x) − hfˆ(x) = −g(−x), x < 0
(Strauss, Ex. 3.1.4)
For example, if f(x) = x, then g(x) = f′(x) − h f(x) = 1 − hx, and f̂(x) = x + 2/h works:

f̂′(x) − h f̂(x) = 1 − h(x + 2/h) = −1 − hx = −g(−x)
Summary of the results for the Green's function (ODE's and the heat equation)
Let's recall the process:
1. Using the Laplace transform U = L(u), we found an ODE for U, which was inhomogeneous: LU = −f in general.
2. To solve for U, we constructed a Green's function

G(x;ξ) = { A u₁ + B u₂, x ≤ ξ;  C u₁ + D u₂, x ≥ ξ }

where u₁, u₂ are homogeneous solutions to LU = 0, and A, B, C, D are constants (depending on ξ, not x). Once we know G(x;ξ), U = ∫_a^b G(x;ξ) f(ξ) dξ. The constants are determined by the boundary conditions, for example

G(a;ξ) = 0,  G(b;ξ) = 0,

and jump conditions

G(ξ⁺;ξ) = G(ξ⁻;ξ)
G_x(ξ⁺;ξ) − G_x(ξ⁻;ξ) = −1/p(ξ),  with p(x) from the equation LU = (p u_x)_x + q u = 0

(Weinberger)
Note: In the examples we saw G(x, ξ) = G(ξ; x). This is always true if the operator
is of Sturm-Liouville form ((pux )x + qu) and the boundary conditions are symmetric
(Dirichlet, Neumann, Robin are included)
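A concrete instance of this construction (my own illustration, not an example from the notes): take L = d²/dx² on (0,1) with Dirichlet conditions, so p = 1, q = 0 and LG = −δ gives G(x,ξ) = x(1−ξ) for x ≤ ξ, ξ(1−x) for x ≥ ξ, built from u₁ = x and u₂ = 1 − x. The code checks symmetry, the derivative jump −1/p(ξ) = −1, and that U = ∫G f reproduces the exact solution of LU = −f for f = π² sin(πx):

```python
import math

def G(x, xi):
    # Green's function for U'' with U(0)=U(1)=0 and G'' = -delta(x - xi):
    # u1 = x satisfies the left b.c., u2 = 1 - x satisfies the right b.c.
    return x*(1 - xi) if x <= xi else xi*(1 - x)

print(G(0.3, 0.7), G(0.7, 0.3))   # symmetry G(x,xi) = G(xi,x)

# jump in G_x at x = xi equals -1/p(xi) = -1 (one-sided differences)
xi, eps = 0.4, 1e-6
jump = (G(xi + 2*eps, xi) - G(xi + eps, xi))/eps \
     - (G(xi - eps, xi) - G(xi - 2*eps, xi))/eps
print(jump)

# U(x) = int_0^1 G(x,xi) f(xi) dxi solves U'' = -f;
# with f = pi^2 sin(pi x) the exact solution is U = sin(pi x)
def U(x, n=2000):
    h = 1.0/n
    return sum(G(x, (j+0.5)*h) * math.pi**2 * math.sin(math.pi*(j+0.5)*h)
               for j in range(n)) * h

print(U(0.25), math.sin(math.pi*0.25))
```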
We can now say that the Green’s function satisfies:
LG = −δ(x − ξ),  G(a;ξ) = G(b;ξ) = 0
But where are the jump conditions? They are in the equation! This can be seen by integrating the equation over ξ − ε < x < ξ + ε, and letting ε → 0:

lim_{ε→0} ∫_{ξ−ε}^{ξ+ε} LG dx = lim_{ε→0} ∫_{ξ−ε}^{ξ+ε} [(p(x)G_x)_x + qG] dx = lim_{ε→0} [ p(x)G_x(x;ξ)|_{ξ−ε}^{ξ+ε} + ∫_{ξ−ε}^{ξ+ε} qG dx ]

lim_{ε→0} ∫_{ξ−ε}^{ξ+ε} −δ(x−ξ) dx = −1

(since lim_{ε→0} ∫_{ξ−ε}^{ξ+ε} δ(x−ξ) dx = ∫_{−∞}^∞ δ(x−ξ) dx = 1)

Now, if G_x(ξ+ε;ξ) = G_x(ξ−ε;ξ) as ε → 0, then G is smooth and all terms on the left-hand side will be zero as ε → 0, which does not equal −1. However, if

G(ξ+ε;ξ) = G(ξ−ε;ξ) as ε → 0, but
G_x(ξ+ε;ξ) ≠ G_x(ξ−ε;ξ) as ε → 0,

then we can equate

lim_{ε→0} p(x)G_x(x;ξ)|_{ξ−ε}^{ξ+ε} = −1   (this is the jump condition for the derivative)

Then, for G continuous at x = ξ, ∫_{ξ−ε}^{ξ+ε} qG dx → 0 as ε → 0.
Therefore LG = −δ(x − ξ) is the equation exactly corresponding to the jump conditions.
Analogously, we have shown that for the heat equation, the Green’s function
satisfies
Gt (x, t, ξ) = kGxx (x, t; ξ)
G(x, 0; ξ) = δ(x − ξ)
The equation above for G(x;ξ) (the Green's function for the ODE for U) comes from the Laplace transform of the heat equation (a PDE!).
Another way to "verify" the correct equation for the Green's function: for the ODE problem

(p u_x)_x − q u = f(x),  u(a) = u(b) = 0,

the Green's function satisfies

(p G_x)_x − q G = δ(x − ξ),  G(a;ξ) = G(b;ξ) = 0.
Multiply the equation for u by G, and the equation for G by u, subtract, and integrate by parts:

∫_a^b G(p u_x)_x dx − ∫_a^b u(p G_x)_x dx − ∫_a^b G q u dx + ∫_a^b u q G dx = ∫_a^b f(x)G(x;ξ) dx − ∫_a^b u(x)δ(x−ξ) dx

⇒ G p u_x|_a^b − u p G_x|_a^b − ∫_a^b G_x p u_x dx + ∫_a^b u_x p G_x dx = ∫_a^b f(x)G(x;ξ) dx − u(ξ)

The first boundary term → 0 by the b.c. for G, the second → 0 by the b.c. for u, and the two remaining integrals cancel, so

⇒ u(ξ) = ∫_a^b f(x)G(x;ξ) dx  ⇒  u(x) = ∫_a^b f(ξ)G(x;ξ) dξ,

by interchanging the roles of x, ξ, and using reciprocity G(ξ;x) = G(x;ξ).

Note that the b.c. and the operator for G had the "right" symmetries so that all terms cancel in the integration on the left-hand side. In the case when u ≠ 0 at the boundary, there is a contribution from u p G_x|_a^b.
Aside
Similarly, for the full heat equation with a source,

u_t − k u_xx = f(x,t),  u(x,0) = g(x),

we consider the equation for G*(x,t;ξ,τ):

−G*_t − k G*_xx = δ(x−ξ)δ(t−τ)   (the backward heat equation! discussed later)
G*(x,T;ξ,τ) = 0   (end condition at t = T; implicit is that G* = 0 for t > τ)
Using the same procedure as above (integrating over −∞ < x < ∞, 0 < t < T):

∫_0^T ∫_{−∞}^∞ [ u(−G*_t − kG*_xx) − G*(u_t − ku_xx) ] dx dt = ∫_0^T ∫_{−∞}^∞ u δ(x−ξ)δ(t−τ) dx dt − ∫_0^T ∫_{−∞}^∞ G*(x,t;ξ,τ) f(x,t) dx dt
= u(ξ,τ) − ∫_0^T ∫_{−∞}^∞ G*(x,t;ξ,τ) f(x,t) dx dt

On the left-hand side, assuming u, G* → 0 as x → ±∞ (physical considerations), the x-integrations by parts leave no boundary terms:

−k ∫_0^T [ u G*_x − u_x G* ]_{−∞}^∞ dt = 0,

while the t-terms give

∫_{−∞}^∞ [ −u G* ]_0^T dx = ∫_{−∞}^∞ g(x) G*(x,0;ξ,τ) dx − ∫_{−∞}^∞ u(x,T) G*(x,T;ξ,τ) dx,

and the last integral vanishes using G*|_{t=T} = 0. Note: the "end condition" is necessary to make sense of the backward equation, and is also necessary to define the Green's function (adjoint problem): otherwise the equation would involve u(x,T), which is unknown. So u(ξ,τ) has 2 parts: one is the integral of the source with the kernel G*, the other is the integral of the initial distribution with G*|_{t=0}:

u(ξ,τ) = ∫_{−∞}^∞ g(x) G*(x,0;ξ,τ) dx + ∫_0^T ∫_{−∞}^∞ G*(x,t;ξ,τ) f(x,t) dx dt

We can again interchange x, ξ and t, τ:

⇒ u(x,t) = ∫_{−∞}^∞ g(ξ) G*(ξ,0;x,t) dξ + ∫_0^T ∫_{−∞}^∞ G*(ξ,τ;x,t) f(ξ,τ) dξ dτ
(We can also show this with the Fourier transform, or using some methods in Haberman similar to the above.)
Later, we will show explicitly that G*(ξ,τ;x,t) = G(x,t;ξ,τ), where

G_t − k G_xx = δ(x−ξ)δ(t−τ),  G(x,0;ξ,τ) = 0,
G(x,t;ξ,τ) = (1/√(4πk(t−τ))) e^{−(x−ξ)²/(4k(t−τ))} for τ < t,

and G(x,t;ξ,τ) = 0 for τ > t, in order to satisfy the end condition, so
u(x,t) = ∫_{−∞}^∞ g(ξ) G(x,t;ξ,0) dξ + ∫_0^t ∫_{−∞}^∞ G(x,t;ξ,τ) f(ξ,τ) dξ dτ

G(x,t;ξ,τ) = (1/√(4πk(t−τ))) e^{−(x−ξ)²/(4k(t−τ))},  t > τ > 0

The first kernel, G(x,t;ξ,0), is the same Green's function as for the IVP; the second is derived in homework (similar to the Green's function form before, with t − τ rather than t, to allow for an arbitrary time of the source).
For the Neumann and Dirichlet b.c. at x = 0, the same function G(x, t, ξ, τ )
satisfies
Gt − kGxx = δ(x − ξ)δ(t − τ )
G(x, 0; ξ, τ ) = 0
with either G(0, t; ξ, τ ) = 0 or Gx (0, t; ξ, τ ) = 0
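A quick numerical sanity check on the free-space kernel above (a sketch, not from the notes: it assumes k = 1 and Gaussian initial data, and truncates the ξ-integral at ±20): convolving a Gaussian initial profile of variance σ² with G(x,t;ξ,0) gives a Gaussian of variance σ² + 2kt, a closed form we can compare against.

```python
import math

def G(x, t, xi, tau, k=1.0):
    # heat-equation Green's function with source time tau (valid for t > tau)
    return math.exp(-(x - xi)**2 / (4*k*(t - tau))) / math.sqrt(4*math.pi*k*(t - tau))

def gauss(x, var):
    return math.exp(-x*x/(2*var)) / math.sqrt(2*math.pi*var)

def u(x, t, var0=0.5, k=1.0, L=20.0, n=4000):
    # u(x,t) = int G(x,t;xi,0) g(xi) d xi with Gaussian initial data g
    h = 2*L/n
    return sum(G(x, t, -L + (j+0.5)*h, 0.0, k) * gauss(-L + (j+0.5)*h, var0)
               for j in range(n)) * h

x, t, var0 = 0.7, 0.3, 0.5
print(u(x, t, var0), gauss(x, var0 + 2*t))   # the two should agree
```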
Comparison of results
Let’s compare the results we have for the heat equation and the wave equation
on finite and infinite domains.
For the heat equation on an infinite domain (u_t = k u_xx, u(x,0) = f(x)), we write the solution as

u(x,t) = ∫_{−∞}^∞ G(x,t;ξ) f(ξ) dξ,

using a Green's function.
For the heat equation on a finite domain, we used an eigenfunction expansion.
For example, for u_t = k u_xx, u(0,t) = u(ℓ,t) = 0, u(x,0) = f(x),
u = Σ_{n=1}^∞ A_n sin(nπx/ℓ) e^{−n²π²kt/ℓ²},  where A_n = (2/ℓ) ∫_0^ℓ f(ξ) sin(nπξ/ℓ) dξ
So we can write

u(x,t) = Σ_{n=1}^∞ [ (2/ℓ) ∫_0^ℓ f(ξ) sin(nπξ/ℓ) dξ ] sin(nπx/ℓ) e^{−n²π²kt/ℓ²}
= ∫_0^ℓ [ (2/ℓ) Σ_{n=1}^∞ sin(nπx/ℓ) sin(nπξ/ℓ) e^{−n²π²kt/ℓ²} ] f(ξ) dξ   (assuming the sum converges)

and we can identify this sum as the eigenfunction expansion of the Green's function for the finite domain problem - note the symmetry in x and ξ.
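This identification can be tested numerically (a sketch assuming ℓ = k = 1, truncating the series at 200 terms and using a midpoint rule for the ξ-integral): for single-mode data f = sin(πx) the kernel reproduces the exact decay e^{−π²kt}, and the kernel is symmetric in x and ξ.

```python
import math

def G_finite(x, xi, t, ell=1.0, k=1.0, nmax=200):
    # eigenfunction expansion of the Green's function on [0, ell]:
    # (2/ell) sum_n sin(n pi x/ell) sin(n pi xi/ell) exp(-n^2 pi^2 k t/ell^2)
    return (2.0/ell) * sum(
        math.sin(n*math.pi*x/ell) * math.sin(n*math.pi*xi/ell)
        * math.exp(-(n*math.pi/ell)**2 * k * t) for n in range(1, nmax+1))

def u(x, t, f, ell=1.0, n=2000):
    h = ell/n
    return sum(G_finite(x, (j+0.5)*h, t) * f((j+0.5)*h) for j in range(n)) * h

f = lambda x: math.sin(math.pi*x)          # single-mode initial data
u_val = u(0.3, 0.1, f)
exact = math.sin(math.pi*0.3) * math.exp(-math.pi**2 * 0.1)
print(u_val, exact)
print(G_finite(0.2, 0.6, 0.05), G_finite(0.6, 0.2, 0.05))   # symmetry in x, xi
```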
For the wave equation, we had d'Alembert's solution (on an unbounded domain) for

u_tt − c² u_xx = 0,  u(x,0) = φ(x),  u_t(x,0) = ψ(x):

u(x,t) = ½[φ(x+ct) + φ(x−ct)] + (1/2c) ∫_{x−ct}^{x+ct} ψ(s) ds
If we have a bounded domain, then φ, ψ must be extended as odd or even functions
to satisfy the b.c.’s, e.g. for u(0, t) = u(π, t) = 0, φ, ψ are extended as odd functions
about 0, π, so that φ, ψ are 2π - periodic functions.
In the particular example of φ(x) = sin x, ψ(x) = sin 3x, these functions are already odd about 0, π, so d'Alembert's solution holds with φ_ext = φ and ψ_ext = ψ.
Now, let's compare with the eigenfunction expansion that we could also use to obtain the solution of

u_tt − c² u_xx = 0,  u(x,0) = sin x,  u_t(x,0) = sin 3x,  u(0,t) = u(π,t) = 0
⇒ u(x,t) = Σ_{n=1}^∞ (A_n sin nct + B_n cos nct) sin nx

where

B_n = (2/π) ∫_0^π φ(x) sin nx dx = { 1, n = 1;  0, otherwise }

and

A_n = (2/(πnc)) ∫_0^π ψ(x) sin nx dx = { 1/(nc), n = 3;  0, otherwise }

So

u(x,t) = (1/3c) sin 3ct sin 3x + cos ct sin x
= ½(sin(x+ct) + sin(x−ct)) + (1/2c)·(1/3)[cos 3(x−ct) − cos 3(x+ct)]
= ½[φ(x+ct) + φ(x−ct)] + (1/2c) ∫_{x−ct}^{x+ct} sin 3s ds
= ½[φ(x+ct) + φ(x−ct)] + (1/2c) ∫_{x−ct}^{x+ct} ψ(s) ds
which leads us back to d'Alembert's solution.
Of course, in more complicated examples, where there is an infinite number of terms in the eigenfunction expansion and more complicated extended functions are used in d'Alembert's solution, it is not so easy to show the exact correspondence. However, in each case we see how the solution is expressed in terms of functions with the correct symmetry.
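The equivalence above is an exact trigonometric identity, so the two forms can be compared pointwise (a sketch with an assumed wave speed c = 2):

```python
import math

c = 2.0   # wave speed (assumed value for the check)

def u_series(x, t):
    # two-term eigenfunction solution: cos(ct) sin x + (1/3c) sin(3ct) sin 3x
    return math.cos(c*t)*math.sin(x) + math.sin(3*c*t)*math.sin(3*x)/(3*c)

def u_dalembert(x, t):
    # d'Alembert with phi = sin x, psi = sin 3x:
    # (1/2c) int_{x-ct}^{x+ct} sin 3s ds = (cos 3(x-ct) - cos 3(x+ct))/(6c)
    return 0.5*(math.sin(x + c*t) + math.sin(x - c*t)) \
         + (math.cos(3*(x - c*t)) - math.cos(3*(x + c*t)))/(6*c)

for (x, t) in [(0.3, 0.1), (1.2, 0.8), (2.9, 2.5)]:
    print(u_series(x, t), u_dalembert(x, t))   # identical to rounding
```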
Summary of 9.4
One more note about Green's functions: the Fredholm Alternative Theorem.
Consider

Lu = f,  u = 0 on ∂D, the boundary of domain D.

For example: u_xx = f, u = 0 at x = 0, L. Then either u = 0 is the only homogeneous solution to

Lu = 0,  u = 0 on ∂D,

and there is a unique solution to the inhomogeneous problem (which can be given in terms of the Green's function), or there are nontrivial homogeneous solutions φ_n(x) (eigenfunctions). Then the inhomogeneous problem has either no solution (and no Green's function), when ⟨f, φ_n⟩ ≠ 0, or an infinite number of solutions, given by the inhomogeneous solution + c_n φ_n. Then we find a modified Green's function which satisfies

LG = δ(x − x₀) − φ_n(x)φ_n(x₀)/∫_a^b φ_n²(x) dx

(see Stakgold: Green's Functions and Boundary Value Problems).
Properties of PDE's
The maximum principle
Heat equation: If u(x,t) satisfies the diffusion equation in a rectangle in space-time (0 ≤ x ≤ L, 0 ≤ t ≤ T), then the maximum value is attained either initially at t = 0 or on the lateral sides x = 0 or x = L. The same property holds for the minimum.
[Figure: space-time rectangle 0 ≤ x ≤ L, 0 ≤ t ≤ T, with initial conditions on t = 0 and boundary conditions on the sides x = 0, x = L]
This is not a surprise from what we’ve seen - the solution “spreads out” in time.
Proof:
Part I:
Consider v_t − k v_xx = −F(x,t), where F(x,t) > 0. If v attains its maximum v(x₀,t₀) at an interior point, 0 < x₀ < L, 0 < t₀ < T, then v_t = 0 there, and v_xx ≤ 0 at (x₀,t₀).
⇒ −k v_xx(x₀,t₀) = −F(x₀,t₀) ≥ 0, which contradicts F(x,t) > 0.
If v has a maximum at t = T, then v_x(x₀,T) = 0, v_xx(x₀,T) ≤ 0, and v_t(x₀,T) ≥ 0 (increasing to t = T). Again, this yields v_t(x₀,T) − k v_xx(x₀,T) ≥ 0, contradicting −F(x₀,T) < 0.
Part II: Now consider u_t − k u_xx = 0, with v(x,t) = u(x,t) + ε x² (0 < ε ≪ 1). Let M be the maximum of u on the "sides" t = 0, x = 0, L. Note that if u_t − k u_xx = 0, then

v_t − k(v_xx − 2ε) = v_t − k v_xx + 2εk = 0  ⇒  v_t − k v_xx = −2εk.

Using the argument from Part I (with F = 2εk > 0), v attains its maximum on the boundary x = 0, L or t = 0, so v(x,t) ≤ M + εL² throughout the rectangle, hence u(x,t) ≤ M + ε(L² − x²) for any ε > 0, which implies u ≤ M throughout the rectangle. (Similarly for the minimum principle.)
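The maximum principle is easy to observe numerically (my own illustration, not from the notes: an explicit finite-difference scheme with k = L = 1, zero side conditions, and a stable step k·dt/dx² = 0.4; the scheme's update is a convex combination of old values, so it obeys a discrete maximum principle):

```python
import math

# explicit finite differences for u_t = k u_xx on 0 <= x <= L, 0 <= t <= T
k, L, T = 1.0, 1.0, 0.5
nx = 50
dx = L/nx
dt = 0.4*dx*dx/k                 # stable: k dt/dx^2 <= 1/2
u = [math.sin(math.pi*j*dx)**2 for j in range(nx+1)]    # initial data
boundary_max = max(u)            # parabolic boundary: t = 0 values (sides are 0)
interior_max = -float("inf")
t = 0.0
while t < T:
    u = [0.0] + [u[j] + k*dt/dx**2*(u[j+1] - 2*u[j] + u[j-1])
                 for j in range(1, nx)] + [0.0]
    interior_max = max(interior_max, max(u[1:nx]))
    t += dt
print(interior_max, boundary_max)   # interior max never exceeds boundary max
```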
Uniqueness can be shown using the max/min principle. For

u_t − k u_xx = 0,  u(0,t) = g(t),  u(L,t) = h(t),  u(x,0) = f(x),

let u₁, u₂ be 2 solutions to this problem. Then v = u₁ − u₂ satisfies

v_t − k v_xx = 0,  v(0,t) = v(L,t) = 0,  v(x,0) = 0.

v reaches its maximum on either the initial line t = 0 or at the boundaries, so v ≤ 0. Similarly, using the minimum principle, v ≥ 0,
so v = 0, u₁ = u₂, and the solution u is unique.
Laplace's equation has a similar max/min principle: the maximum of u satisfying ∇²u = 0 in a domain D is attained on the boundary of D. And the max/min principle can be used to show uniqueness, as above.
Laplace's equation also has the mean-value property: we've already seen that for ∇²u = 0 in a circle of radius R, with u = f(θ) on the boundary,

u(0,θ) = (1/2π) ∫_{−π}^π f(θ) dθ,

i.e. the value at the center is equal to the average of the values on the boundary. The same is true for ∇²u = 0 in a region D: for any circle R in D, the value at the center of R is equal to the average of the values on the boundary of R.
[Figure: a circle R contained in the domain D]
Uniqueness for the wave equation
For the wave equation, there is no maximum/minimum principle as in the heat
equation or Laplace’s equation. Instead, an energy method is used to show uniqueness.
Let's recall the wave equation ρ[u_tt − c²u_xx] = 0, T = ρc². Multiply by u_t and integrate over x:

0 = ∫_0^L ρ u_t [u_tt − c² u_xx] dx = ∫_0^L [ ½ ∂/∂t (ρ u_t² + T u_x²) − ∂/∂x (T u_x u_t) ] dx

⇒ ½ d/dt ∫_0^L (ρ u_t² + T u_x²) dx − T u_x u_t|_0^L = 0
Aside: on an infinite domain, replace 0, L with −∞, ∞. Then, for T u_x and u_t vanishing at infinity (physically motivated),

E = ½ ∫_{−∞}^∞ (ρ u_t² + T u_x²) dx   (kinetic energy + potential energy)

is a constant, since dE/dt = 0: it does not vary in time.
To show uniqueness, let v = u₁ − u₂, where u₁ and u₂ are 2 possible solutions to the wave equation. Then ρ[v_tt − c²v_xx] = 0 with vanishing initial and boundary conditions, and

0 = ½ d/dt ∫_0^L (ρ v_t² + T v_x²) dx − T v_x v_t|_0^L

The boundary term is 0, since v_t = 0 at x = 0, L (by differentiating the boundary conditions with respect to t). Integrating from 0 to t:

0 = ½ ∫_0^L (ρ v_t² + T v_x²) dx |_0^t

At t = 0, v_t = 0 and v_x = 0 (by differentiating the initial conditions with respect to x), so

½ ∫_0^L (ρ v_t² + T v_x²) dx = 0 for all t.

If v is not zero somewhere, then v_t, v_x ≠ 0 at some point (for t > 0), and the equality will not hold.
⇒ v = 0 everywhere, u₁ = u₂, and the solution is unique.
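Energy conservation can also be seen in a discrete simulation (my own sketch, not part of the notes: a leapfrog scheme with assumed values ρ = T = L = 1, fixed ends, CFL number 0.5, and a staggered-in-time discrete energy):

```python
import math

# leapfrog scheme for rho*u_tt = T*u_xx on [0, L], fixed ends
rho, Twave, L = 1.0, 1.0, 1.0
c = math.sqrt(Twave/rho)
nx = 100
dx = L/nx
dt = 0.5*dx/c                                  # CFL-stable
xs = [j*dx for j in range(nx+1)]
u_old = [math.sin(math.pi*xj) for xj in xs]    # u at t = 0, u_t(x,0) = 0
# first step by Taylor expansion: u(dt) = u(0) + (dt^2/2) c^2 u_xx
u_now = [0.0]*(nx+1)
for j in range(1, nx):
    u_now[j] = u_old[j] + 0.5*(c*dt/dx)**2*(u_old[j+1] - 2*u_old[j] + u_old[j-1])

def energy(unew, uold):
    # E = (1/2) int rho u_t^2 + T u_x^2 dx, centered-in-time u_t
    e = 0.0
    for j in range(nx):
        ut = ((unew[j] - uold[j]) + (unew[j+1] - uold[j+1]))/(2*dt)
        ux = ((unew[j+1] - unew[j]) + (uold[j+1] - uold[j]))/(2*dx)
        e += 0.5*(rho*ut*ut + Twave*ux*ux)*dx
    return e

E0 = energy(u_now, u_old)
for step in range(400):
    u_new = [0.0]*(nx+1)
    for j in range(1, nx):
        u_new[j] = 2*u_now[j] - u_old[j] \
                 + (c*dt/dx)**2*(u_now[j+1] - 2*u_now[j] + u_now[j-1])
    u_old, u_now = u_now, u_new
E1 = energy(u_now, u_old)
print(E0, E1)   # discrete energy stays (approximately) constant
```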
Fourier transform
Fourier series → Fourier transform
For f(x) = a₀ + Σ_{n=1}^∞ a_n cos(nπx/ℓ) + b_n sin(nπx/ℓ), 2ℓ-periodic, we can write f(x) in complex form using e^{±inπx/ℓ} = cos(nπx/ℓ) ± i sin(nπx/ℓ):

f(x) = (1/2ℓ) Σ_{n=−∞}^∞ ∫_{−ℓ}^ℓ f(t) e^{−i(nπ/ℓ)(x−t)} dt

Then, defining λ_n = πn/ℓ, λ_{n+1} − λ_n = Δλ_n = π/ℓ, and rewriting:

f(x) = (1/2π) Σ_{n=−∞}^∞ [ ∫_{−ℓ}^ℓ f(t) e^{iλ_n t} dt ] e^{−iλ_n x} Δλ_n
= (1/√(2π)) Σ_{n=−∞}^∞ f̂(λ_n) e^{−iλ_n x} Δλ_n,  f̂(λ_n) = (1/√(2π)) ∫_{−ℓ}^ℓ f(t) e^{iλ_n t} dt

As ℓ → ∞, Δλ_n → 0:

(1/√(2π)) Σ_{n=−∞}^∞ f̂(λ_n) e^{−iλ_n x} Δλ_n → (1/√(2π)) ∫_{−∞}^∞ f̂(λ) e^{−iλx} dλ,

where f̂(λ) is the Fourier transform of f(x): f̂ = F(f(x)).
We can use the Fourier transform to solve

u_t − D u_xx = 0,  u(x,0) = f(x).

Multiply by (1/√(2π)) e^{ikx} and integrate; with U(k,t) = F(u), U(k,0) = F(f):

(1/√(2π)) ∫_{−∞}^∞ e^{ikx} u_t dx − D (1/√(2π)) ∫_{−∞}^∞ e^{ikx} u_xx dx = 0

The first term is U_t; integrating the second by parts twice (the boundary terms vanish),

∫_{−∞}^∞ e^{ikx} u_xx dx = e^{ikx}u_x|_{−∞}^∞ − ik [ e^{ikx}u|_{−∞}^∞ − ik ∫_{−∞}^∞ e^{ikx} u dx ] = −k² ∫_{−∞}^∞ e^{ikx} u dx,

which gives the equation for U:

U_t + Dk²U = 0,  U(k,0) = F(f)

Solving, and using the definition of the Fourier transform, we get

U = F(f) e^{−Dk²t}

u = F^{−1}(F(f) e^{−Dk²t}) = (1/2π) ∫_{−∞}^∞ f(ξ) [ ∫_{−∞}^∞ e^{ikξ} e^{−ikx} e^{−Dk²t} dk ] dξ

where the inner integral is the heat kernel.
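The inner integral can be evaluated numerically and compared with the Gaussian heat kernel (a sketch assuming D = 1, truncating the k-integral at |k| = 40, where e^{−Dk²t} is negligible; only the cosine part survives by symmetry):

```python
import math

def kernel_from_fourier(z, t, D=1.0, K=40.0, n=20000):
    # numerically evaluate (1/2 pi) int e^{-ikz} e^{-D k^2 t} dk;
    # the odd (sine) part cancels, leaving the cosine integral
    h = 2*K/n
    s = 0.0
    for j in range(n):
        kk = -K + (j + 0.5)*h
        s += math.cos(kk*z) * math.exp(-D*kk*kk*t)
    return s*h/(2*math.pi)

def heat_kernel(z, t, D=1.0):
    return math.exp(-z*z/(4*D*t)) / math.sqrt(4*math.pi*D*t)

z, t = 0.8, 0.25
print(kernel_from_fourier(z, t), heat_kernel(z, t))   # the two agree
```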
2-D Fourier Transform

u_t − k(u_xx + u_yy) = 0,  −∞ < x < ∞, −∞ < y < ∞,  u(x,y,0) = f(x,y)

Same approach as above, using the Fourier transform 2 times! First

U = F_x(u) = (1/2π) ∫_{−∞}^∞ e^{iw₁x} u(x,y,t) dx
V = F_y(U) = (1/2π) ∫_{−∞}^∞ e^{iw₂y} U(w₁,y,t) dy

Applying the first transform:

U_t + k w₁² U − k U_yy = 0,  U(w₁,y,0) = F_x(f(x,y))

Applying the second transform:

V_t + k w₁² V + k w₂² V = 0,  V(w₁,w₂,0) = F_y(F_x(f)) = (1/4π²) ∫_{−∞}^∞ ∫_{−∞}^∞ f(x,y) e^{iw₁x + iw₂y} dx dy

Now solve: V = e^{−k(w₁²+w₂²)t} F_y(F_x(f)), and invert:

u(x,y,t) = ∫_{−∞}^∞ ∫_{−∞}^∞ f(ξ,η) (1/4π²) ∫_{−∞}^∞ e^{−iw₁(x−ξ)} e^{−kw₁²t} dw₁ ∫_{−∞}^∞ e^{−iw₂(y−η)} e^{−kw₂²t} dw₂ dξ dη   (product of two inverted transforms)

= ∫_{−∞}^∞ ∫_{−∞}^∞ f(ξ,η) [ e^{−(x−ξ)²/4kt − (y−η)²/4kt} / (4πkt) ] dξ dη   (free-space Green's function)
What about the Fourier transform on a semi-infinite interval?

u_t = D u_xx,  0 < x < ∞, t > 0;  u(x,0) = f(x), x ≥ 0;  u(0,t) = 0

[Figure: initial distribution f(x) on x > 0]

Consider Dirichlet conditions at x = 0 on a semi-infinite domain. On a bounded domain [0,ℓ] we would expect

u(x,t) = (2/ℓ) Σ_{n=1}^∞ [ ∫_0^ℓ u(ξ,t) sin(nπξ/ℓ) dξ ] sin(nπx/ℓ)

What happens as ℓ → ∞? Same process as Fourier series → Fourier transform!
Fourier sine transform

u(x,t) = √(2/π) ∫_0^∞ U(λ,t) sin λx dλ,  U(λ,t) = √(2/π) ∫_0^∞ sin(λξ) u(ξ,t) dξ = Fourier sine transform of u(x,t)

Apply this transform to the heat equation:

U(k,t) = F_s(u) = √(2/π) ∫_0^∞ sin(kξ) u(ξ,t) dξ
Multiply by sin kx and integrate by parts:

∫_0^∞ sin kx u_t dx − D ∫_0^∞ u_xx sin kx dx = 0

∫_0^∞ u_xx sin kx dx = u_x sin kx|_0^∞ − k u cos kx|_0^∞ − k² ∫_0^∞ u sin kx dx,

where the boundary terms vanish using u(0,t) = 0 and decay at infinity. Then

U_t + Dk²U = 0,  U(k,0) = F_s(f)  ⇒  U(k,t) = F_s(f) e^{−Dk²t}

u(x,t) = √(2/π) ∫_0^∞ F_s(f) e^{−Dk²t} sin kx dk = (2/π) ∫_0^∞ f(ξ) [ ∫_0^∞ sin kξ sin kx e^{−Dk²t} dk ] dξ
Then the Green's function is

G_D(x,ξ,t) = (2/π) ∫_0^∞ sin kξ sin kx e^{−Dk²t} dk

Using

sin kξ sin kx = (e^{ikξ} − e^{−ikξ})(e^{ikx} − e^{−ikx}) / (−4),

each half-line integral combines with its complex conjugate into an integral over the whole line:

G_D(x,ξ,t) = (1/2π) ∫_{−∞}^∞ e^{−ik(ξ−x)} e^{−Dk²t} dk − (1/2π) ∫_{−∞}^∞ e^{−ik(x+ξ)} e^{−Dk²t} dk

From tables of inverse transforms:

(1/2π) ∫_{−∞}^∞ e^{−ikz} e^{−Dk²t} dk = e^{−z²/4Dt} / √(4πDt)

So

G_D(x,ξ,t) = e^{−(x−ξ)²/4Dt}/√(4πDt) − e^{−(x+ξ)²/4Dt}/√(4πDt),

the image solution: a (+) source at ξ and a (−) source at −ξ.
Similarly, for Neumann b.c.'s, use the cosine transform U = √(2/π) ∫_0^∞ cos kx u(x,t) dx, or images.
Note: we can obtain G directly from the equation for G:

G_t − k G_xx = δ(x − x₀)δ(t − τ),  G(x,0) = 0,

or, alternatively, we could also solve the equation for G* directly:

−G*_t − k G*_xx = δ(x − x₀)δ(t − τ),  G*(x,T;x₀,τ) = 0,  0 < t < T

Using the Fourier transform U(w,t) = ∫_{−∞}^∞ e^{iwx} G*(x,t;x₀,τ) dx, and transforming the equation and the end condition:

−U_t + kw²U = e^{iwx₀} δ(t−τ),  U(w,T) = 0

⇒ U = e^{kw²t} ∫_t^T e^{iwx₀} δ(t′−τ) e^{−kw²t′} dt′ = { e^{kw²(t−τ)} e^{iwx₀}, t < τ;  0, t > τ }

⇒ G*(x,t;x₀,τ) = (1/2π) ∫_{−∞}^∞ e^{−iwx} e^{iwx₀} e^{kw²(t−τ)} dw = (1/√(4πk(τ−t))) e^{−(x−x₀)²/(4k(τ−t))}  (for t < τ),

and G(x,t;x₀,τ) = G*(x₀,τ;x,t).
Review of solving Laplace's equation on a circle

∇²u = 0 in D (x² + y² ≤ a²),  u = h(θ) on ∂D (r = a)

⇓

u_rr + (1/r) u_r + (1/r²) u_θθ = 0

Method of solution? Separation of variables: u(r,θ) = R(r)Θ(θ):

r² R″/R + r R′/R + Θ″/Θ = 0  ⇒  Θ″ + λΘ = 0  ⇒  cos nθ, sin nθ,  λ = n²,  n = 0, 1, 2, ...

r² R″ + r R′ − λR = 0  ⇒  R = r^{±n} for n ≠ 0;  R = c or log r for n = 0

How do we choose R? So far we have solutions like

r^n (A_n cos nθ + B_n sin nθ),  A₀,  A₀ log r,  r^{−n}(A_{−n} cos nθ + B_{−n} sin nθ)

Eliminating the solutions that are not bounded at r = 0, and combining coefficients into a more compact form:

u(r,θ) = A₀ + Σ_{n=1}^∞ r^n (A_n cos nθ + B_n sin nθ)

Now use the boundary condition at r = a:

h(θ) = A₀ + Σ_{n=1}^∞ a^n (A_n cos nθ + B_n sin nθ)

and use orthogonality of the eigenfunctions:

A₀ = (1/2π) ∫_0^{2π} h(θ) dθ,  A_n = (1/(a^n π)) ∫_0^{2π} h(θ) cos nθ dθ,  B_n = (1/(a^n π)) ∫_0^{2π} h(θ) sin nθ dθ
Now plug in to the expression for u:

u(r,θ) = (1/2π) ∫_0^{2π} h(φ) dφ + Σ_{n=1}^∞ (r^n/(πa^n)) ∫_0^{2π} h(φ) [cos nθ cos nφ + sin nθ sin nφ] dφ
= (1/2π) ∫_0^{2π} h(φ) dφ + Σ_{n=1}^∞ (r^n/(πa^n)) ∫_0^{2π} h(φ) cos n(θ−φ) dφ
= (1/2π) ∫_0^{2π} h(φ) [ 1 + Σ_{n=1}^∞ (r/a)^n ( e^{in(θ−φ)} + e^{−in(θ−φ)} ) ] dφ

Recall that Σ_{n=1}^∞ x^n = 1/(1−x) − 1 = x/(1−x) for |x| < 1. Then

u(r,θ) = (1/2π) ∫_0^{2π} h(φ) [ 1 + ((r/a)e^{i(θ−φ)})/(1 − (r/a)e^{i(θ−φ)}) + ((r/a)e^{−i(θ−φ)})/(1 − (r/a)e^{−i(θ−φ)}) ] dφ

Combine the integrand to get h(φ)(a² − r²)/(a² + r² − 2ar cos(θ−φ)). Details: over the common denominator,

(1 − (r/a)e^{i(θ−φ)})(1 − (r/a)e^{−i(θ−φ)}) + (r/a)e^{i(θ−φ)}(1 − (r/a)e^{−i(θ−φ)}) + (r/a)e^{−i(θ−φ)}(1 − (r/a)e^{i(θ−φ)}) = 1 − r²/a²,

and the denominator is (1 − (r/a)e^{i(θ−φ)})(1 − (r/a)e^{−i(θ−φ)}) = (a² + r² − 2ar cos(θ−φ))/a². Define

(a² − r²) / (2π(a² + r² − 2ar cos(θ−φ))) ≡ G(r,θ,φ,a)

Then

u(r,θ) = ∫_0^{2π} h(φ) G(r,θ,φ,a) dφ

and we have the Green's function for Laplace's equation on a disk!
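The Poisson kernel just derived can be checked directly (a sketch with a = 1 and sample boundary data h(φ) = 2 + cos 3φ, whose exact interior solution is 2 + r³ cos 3θ; the periodic trapezoidal rule is essentially exact here). In particular u(0) equals the boundary average, the mean-value property:

```python
import math

def poisson_kernel(r, theta, a, phi):
    # (1/2 pi)(a^2 - r^2)/(a^2 + r^2 - 2 a r cos(theta - phi))
    return (a*a - r*r) / (2*math.pi*(a*a + r*r - 2*a*r*math.cos(theta - phi)))

def u(r, theta, h, a=1.0, n=2000):
    dphi = 2*math.pi/n
    return sum(h(j*dphi) * poisson_kernel(r, theta, a, j*dphi)
               for j in range(n)) * dphi

h = lambda phi: 2.0 + math.cos(3*phi)       # sample boundary data (assumed)
print(u(0.0, 0.0, h))                       # center value = boundary average = 2
print(u(0.5, 0.7, h), 2.0 + 0.5**3*math.cos(3*0.7))   # matches exact solution
```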
Laplace's equation on an open domain, 2-D, 3-D, or general domain

∇²u = 0  (or ∇²u = f)

u is a harmonic function: for example, if z = u + iv is analytic, then u_x = v_y, v_x = −u_y ⇒ u_xx + u_yy = 0; or, for a potential φ with E = −∇φ and ∇·E = 0, ∇²φ = 0.
The Green's function satisfies

∇²G = δ(x − x₀),

the higher-dimensional analog of the 1-D case G_xx = δ(x − x₀). Recall that the homogeneous solutions for the 1-D case are 1, x; then it can be shown that G(x,x₀) = |x − x₀|/2.
How can we characterize G in general? Let's review some properties of harmonic functions. Divergence (Green's) theorem:

∫∫_D ∇·(v∇u) = ∫∫_D ∇v·∇u + ∫∫_D v∇²u

∫_{∂D} n·(v∇u) dS = ∫∫_D ∇v·∇u + ∫∫_D v∇²u
n̄
So, if we have ∇2 u = f
Multiply and integrate
RRR
D
2
u ∇
| {zG}
δ(x−x0 )
⇒ u(x0 ) =
−G∇2 udx
RRR
=
,
∇2 G = δ(x − x0 )
Z Z
Z Z
|
∂D
D G(x, ξ)f (x)dx +
n · (u∇G)dS −
R R ∂G {zR R
u ∂n −
Z Z
|
u
∂D
∂G
dS −
∂n
{z
D
∂D
n · (G∇u)dS
}
∂u
G ∂n
Z Z
G
∂D
∂u
dS
∂n
use boundary contributions
}
(Note: If G was harmonic, we would get no contribution from ∇ 2 G)
G is harmonic except at x = x0
We've already seen one solution like this; recall the problem of Laplace's equation on a disk: ∇²u = 0 in D = circle, u = h(θ) on ∂D = boundary:

u(r,θ) = ∫_0^{2π} h(φ) (1/2π) (a² − r²)/(a² + r² − 2ra cos(θ−φ)) dφ

where the kernel is ∂G(r,θ;a,φ)/∂r₀.
What is the equation for the Green's function?

∇²G = δ(x − x₀),  G = 0 on the boundary

How can we solve it? Again use an eigenfunction expansion: the finite Fourier transform (complex form). In polar coordinates:

(1/r)(rG_r)_r + (1/r²)G_θθ = (1/r) δ(r − r₀) δ(θ − θ₀)

G(r,θ;r₀,θ₀) = Σ_{n=−∞}^∞ g_n e^{inθ},  g_n = (1/2π) ∫_0^{2π} e^{−inθ} G(r,θ;r₀,θ₀) dθ

so multiply by e^{−inθ} and integrate by parts:

(r g_n′)′ − (n²/r) g_n = (1/2π) δ(r − r₀) e^{−inθ₀}
Hints for homework: we have to solve the inhomogeneous ODE with b.c. g_n = 0 at r = a and g_n bounded at r = 0. The homogeneous problem is

r² g_n″ + r g_n′ − n² g_n = 0  ⇒  g_n = r^{±n}, written as r^{|n|}, r^{−|n|}

⇒ g_n = { A u₁ + B u₂, r < r₀;  C u₁ + D u₂, r > r₀ }

To satisfy the left b.c. (bounded at r = 0): A r^{|n|}. To satisfy the right b.c. (g_n = 0 at r = a): C a^{|n|} + D a^{−|n|} = 0, i.e. C[ r^{|n|} − a^{2|n|} r^{−|n|} ]. Jump conditions:

A r₀^{|n|} − C[ r₀^{|n|} − a^{2|n|} r₀^{−|n|} ] = 0
C[ |n| r₀^{|n|−1} + a^{2|n|} |n| r₀^{−|n|−1} ] − A |n| r₀^{|n|−1} = (1/2π) e^{−inθ₀} / r₀

Solving for A and C ⇒

g_n = (e^{−inθ₀}/(4π|n|)) { (r r₀/a²)^{|n|} − (r/r₀)^{|n|}, r < r₀;  (r r₀/a²)^{|n|} − (r₀/r)^{|n|}, r > r₀ }   (n ≠ 0)

After some algebra:

G(r,θ;r₀,θ₀) = (1/4π) log|r − r₀|² − (1/4π) log|r − r₀*|² − (1/2π) log(r₀/a),  where r₀* = (a²/r₀²) r₀.
Algebra details: use g_n (as above) in the eigenfunction expansion; for n = 0,

g₀ = { (1/2π) log(r₀/a), r < r₀;  (1/2π) log(r/a), r > r₀ }   (e^{−i0·θ₀} = 1)

so that

G(r,θ;r₀,θ₀) = (1/2π) log(r_>/a) + (1/4π) Σ_{n=1}^∞ (1/n) [ e^{in(θ−θ₀)} + e^{−in(θ−θ₀)} ] [ (r_< r_>/a²)^n − (r_</r_>)^n ]

where r_> = max(r,r₀) and r_< = min(r,r₀). To sum the series, use

−log(1 − x) = Σ_{n=1}^∞ x^n/n,  |x| < 1.
Then

G(r,θ;r₀,θ₀) = (1/2π) log(r_>/a) − (1/4π) [ log(1 − e^{i(θ−θ₀)} r_> r_</a²) + log(1 − e^{−i(θ−θ₀)} r_> r_</a²) − log(1 − e^{i(θ−θ₀)} r_</r_>) − log(1 − e^{−i(θ−θ₀)} r_</r_>) ]

For r < r₀ (so r_< = r, r_> = r₀), using log(1 − se^{iψ}) + log(1 − se^{−iψ}) = log(1 − 2s cos ψ + s²):

G = (1/2π) log(r₀/a) − (1/4π) log( 1 − 2 cos(θ−θ₀) r r₀/a² + r² r₀²/a⁴ ) + (1/4π) log( 1 − 2 cos(θ−θ₀) r/r₀ + r²/r₀² )

Factor each argument:

1 − 2 cos(θ−θ₀) r r₀/a² + r² r₀²/a⁴ = (r₀²/a⁴) ( (a²/r₀)² − 2 cos(θ−θ₀)(a²/r₀) r + r² ) = (r₀²/a⁴) |r − r₀*|²,

where r₀* is the image point at radius a²/r₀, angle θ₀, and

1 − 2 cos(θ−θ₀) r/r₀ + r²/r₀² = (1/r₀²) ( r₀² − 2 cos(θ−θ₀) r r₀ + r² ) = |r − r₀|²/r₀²,

since

|r − r₀|² = (r cos θ − r₀ cos θ₀)² + (r sin θ − r₀ sin θ₀)² = r² + r₀² − 2 r r₀ cos(θ−θ₀).

Collecting the constant terms, (1/2π) log(r₀/a) − (1/2π) log(r₀/a²) − (1/2π) log r₀ = (1/2π) log(a/r₀), so

⇒ G(r,θ;r₀,θ₀) = (1/2π) log(a/r₀) − (1/2π) log|r − r₀*| + (1/2π) log|r − r₀|

where the last term is the free-space Green's function.
We can also check that the b.c. is satisfied:

G|_{r=a} = (1/2π) log(a/r₀) − (1/2π) log √( (a cos θ − (a²/r₀) cos θ₀)² + (a sin θ − (a²/r₀) sin θ₀)² ) + (1/2π) log √( a² + r₀² − 2ar₀ cos(θ−θ₀) )

and the middle square root is √( (a²/r₀²)[ r₀² + a² − 2ar₀ cos(θ−θ₀) ] ) = (a/r₀) √( a² + r₀² − 2ar₀ cos(θ−θ₀) ), so the three terms cancel and G|_{r=a} = 0.
We can interpret the expression for G in the following way:

G(r,θ;r₀,θ₀) = G_freespace(r,θ;r₀,θ₀) − G_freespace(r,θ; image point) + harmonic part which solves ∇²v = 0 (to satisfy the b.c.)

with the image point r₀* = (a²/r₀, θ₀), at the same angle θ₀ as the source (r₀,θ₀).
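The closed form just derived is easy to verify numerically (a sketch with a = 1 assumed): it vanishes on the boundary r = a for every angle, and it is symmetric in source and observation points, as a Dirichlet Green's function must be.

```python
import math

def G_disk(r, theta, r0, theta0, a=1.0):
    # G = (1/2 pi)[ log(a/r0) - log|r - r0*| + log|r - r0| ], image at r0* = a^2/r0
    def dist(rr, tt, ss, pp):
        return math.sqrt(rr*rr + ss*ss - 2*rr*ss*math.cos(tt - pp))
    rstar = a*a/r0
    return (math.log(a/r0) - math.log(dist(r, theta, rstar, theta0))
            + math.log(dist(r, theta, r0, theta0))) / (2*math.pi)

r0, th0 = 0.6, 1.1
for th in [0.0, 0.9, 2.0, 3.1]:
    print(G_disk(1.0, th, r0, th0))               # = 0 on the boundary r = a
print(G_disk(0.3, 0.2, r0, th0), G_disk(r0, th0, 0.3, 0.2))   # reciprocity
```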
Now return to the result for u(r,θ). With G = 0 on ∂D and u = h(θ) on ∂D,

u(r,θ) = ∫_{∂D} h(φ) ∂G/∂n dS − ∫_{∂D} G ∂u/∂n dS = ∫_0^{2π} (1/2πa) (a² − r²)/(a² + r² − 2ar cos(θ−φ)) h(φ) a dφ

since the second integral drops (G = 0 on ∂D) and

∂G/∂r₀|_{r₀=a} = (1/2πa) (a² − r²)/(a² + r² − 2ar cos(θ−φ))

for the G given above.
Now what happens for Neumann b.c.'s? For ∇²u = 0 with ∂u/∂r|_{r=a} = f(θ), we would expect to look for a Green's function satisfying

∇²G = δ(r − r₀),  ∂G/∂r|_{r=a} = 0

Let's check consistency:

1 = ∫∫_D δ(r − r₀) dx = ∫∫_D ∇²G dx = ∫_{∂D} n·∇G dS = 0 by the b.c.!

This is inconsistent! To fix it, take

∂G/∂n = 1/(length of S)

as the correct b.c.; for example, for a circle, ∂G/∂r|_{r=a} = 1/2πa. Then

u = ∫_{∂D} u ∂G/∂n dS − ∫_{∂D} (∂u/∂n) G dS = C − ∫_{∂D} (∂u/∂n) G dS

where ∂u/∂n is known and C = (1/|S|) ∫_{∂D} u dS is an unknown constant - the solution is non-unique (up to an additive constant).
So, in summary, for ∇²u = f in D with b.c. on ∂D, u = ∫∫_D G f dx + ∫_{∂D} u ∂G/∂n dS − ∫_{∂D} G ∂u/∂n dS, where

1) ∆G = 0 in D except at x = x₀
2) G = 0 on ∂D ("homogeneous" b.c.), or ∂G/∂n = 1/|S| on ∂D
3) G(x,ξ) = free-space Green's function (singular at x = x₀, otherwise harmonic) + harmonic part (to satisfy the b.c.)
The singularities which appear in G (∇²G = δ(x − x₀)) are:

1-D ⇒ G = + |x − x₀|/2
2-D ⇒ G = (1/2π) log|x − x₀|
3-D ⇒ G = − 1/(4π|x − x₀|)
Let's see how these singularities give contributions. Consider D a small region around x₀, (x₀ − ε < x < x₀ + ε) in 1-D, as ε → 0. Then:

u(x₀) = ∫_{∂D} u ∂G/∂n ds − ∫_{∂D} G ∂u/∂n ds = u ∂G/∂n |_{x₀−ε}^{x₀+ε} + 0   (adding up the 2 boundary contributions)

= u(x₀+ε) · ½ + u(x₀−ε) · ½ → u(x₀) as ε → 0,

since ∂/∂x (|x − x₀|/2) = ±½ on either side of x₀, and the G ∂u/∂n terms are O(ε).
x0 + In 2D, the contribution at r0 is obtained similarly. Consider a ball B about
r = r0 , with radius R → 0:
Z
Z
∂ 1
∂u 1
u
u(x0 ) = lim
log |r − r0 | dS −
log(r − r0 )dS
R→0 ∂B ∂r 2π
∂B ∂r 2π
Z 2π 1
∂u 1
−
log
R
2πRdθ
= lim
u
R→0 0
∂r ∂B 2π
∂B 2πR
= hui
R
r0
∂B
as R → 0
For hui
∂B
, the average over ∂B, this is u(r0 ) as R → 0.
Building the Green's function using images

Example: ∇²u = f in the quarter disk x, y > 0, r < a, with ∂u/∂n = 0 on x = 0, u = 0 on y = 0, and u = 0 on r = a.

Construct G using sources as follows. First satisfy ∂u/∂n = 0 on x = 0 and u = 0 on y = 0 with the images

r₀ = (x₀, y₀)  (+)
r₁ = (−x₀, y₀)  (+)
r₂ = (−x₀, −y₀)  (−)
r₃ = (x₀, −y₀)  (−)

Then satisfy u = 0 on r² = a², x, y > 0, by adding the inverse (image) points across the circle,

|r₄| = a²/|r₀|,  |r₅| = a²/|r₁|,  |r₆| = a²/|r₂|,  |r₇| = a²/|r₃|,

each with the opposite sign of its source; with r₀ = (r₀ cos θ₀, r₀ sin θ₀):

r₄ = ((a²/r₀) cos θ₀, (a²/r₀) sin θ₀)  (−)
r₅ = (−(a²/r₀) cos θ₀, (a²/r₀) sin θ₀)  (−)
r₆ = (−(a²/r₀) cos θ₀, −(a²/r₀) sin θ₀)  (+)
r₇ = ((a²/r₀) cos θ₀, −(a²/r₀) sin θ₀)  (+)

3D example: ∇²u = F(x, y, z) in z > 0, with ∂u/∂z|_{z=0} = 0; a source at (x₀, y₀, z₀) gets an image at (x₀, y₀, −z₀).

Note: We need to find the correct b.c. for the Green's function. For Neumann
boundary conditions, as the boundary extends to infinity,

lim_{X₀,Y₀→∞} ∫_{−X₀}^{X₀} ∫_{−Y₀}^{Y₀} (∂G/∂n) dx dy = 1,  and  ∂G/∂n = 1/|A_n| → 0 as |A_n| → ∞,

so the b.c. for G is Neumann and homogeneous. Then

G = − 1/(4π √((x−x₀)² + (y−y₀)² + (z−z₀)²)) − 1/(4π √((x−x₀)² + (y−y₀)² + (z+z₀)²)),

putting the image source (with the same sign, so that ∂G/∂z = 0 on z = 0) at −z₀.
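The half-space image construction can be checked at the boundary plane (a small sketch with an assumed source location): the z-derivative of G vanishes on z = 0 because the source and its equal image contribute opposite slopes there.

```python
import math

def G_halfspace(x, y, z, x0, y0, z0):
    # Neumann Green's function for z > 0: free-space source at (x0,y0,z0)
    # plus an equal image source at (x0,y0,-z0)
    d1 = math.sqrt((x-x0)**2 + (y-y0)**2 + (z-z0)**2)
    d2 = math.sqrt((x-x0)**2 + (y-y0)**2 + (z+z0)**2)
    return -1.0/(4*math.pi*d1) - 1.0/(4*math.pi*d2)

# dG/dz = 0 on z = 0, checked by a central difference across the plane
x0, y0, z0 = 0.4, -0.2, 1.5   # assumed source position
eps = 1e-6
flux = (G_halfspace(1.0, 2.0, eps, x0, y0, z0)
        - G_halfspace(1.0, 2.0, -eps, x0, y0, z0)) / (2*eps)
print(flux)   # ~0
```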
Aside: Sources in higher dimensions
In 2-D polar coordinates, the Green's function equation (∇²G = δ(x − x₀)) is:

(1/r)(rG_r)_r + (1/r²)G_θθ = δ(r − r₀)δ(θ − θ₀)/r

That is, the unit source in polar coordinates is δ(r − r₀)δ(θ − θ₀)/r, so that

∫_0^{2π} ∫_0^a [ δ(r − r₀)δ(θ − θ₀)/r ] r dr dθ = 1

In the θ-independent (radial only) case:

2D:  (1/r)(rG_r)_r = δ(r − r₀)/(2πr)   (the homogeneous equation is the Euler equation)

3D:  (1/r²)(r²G_r)_r = δ(r − r₀)/(4πr²)  ⇒  G = − 1/(4π|r − r₀|)

Again, to see that this Green's function has the correct singularity, consider the solution over a ball B around r₀ with radius R → 0:

u(r₀) = ∫_{∂B} u (∂G/∂r) dS − ∫_{∂B} G (∂u/∂r) dS = ∫_{∂B} u · (1/(4πR²)) dS + O(R) = ⟨u⟩_{∂B} → u(r₀) as R → 0,

since ∂G/∂r = 1/(4πr²) on ∂B, and the G ∂u/∂r term is (1/4πR)·O(1)·4πR² = O(R).