Laplace’s Equation
1. Equilibrium Phenomena
Consider a general conservation statement for a region U in Rⁿ containing a material which is being transported through U by a flux field, F⃗ = F⃗(x⃗, t). Let u = u(x⃗, t) denote the scalar concentration field for the material (u equals the concentration at (x⃗, t)). Note that u is a scalar valued function while F⃗(x⃗, t) is a vector valued function whose value at each (x⃗, t) is a vector whose direction is the direction of the material flow at (x⃗, t) and whose magnitude is proportional to the speed of the flow at (x⃗, t). In addition, suppose there is a scalar source density field denoted by s(x⃗, t). The value of this scalar at (x⃗, t) indicates the rate at which material is being created or destroyed at (x⃗, t). If B denotes an arbitrary ball inside U, then for any time interval (t₁, t₂) conservation of material requires that
∫_B u(x⃗, t₂) dx = ∫_B u(x⃗, t₁) dx − ∫_{t₁}^{t₂} ∫_{∂B} F⃗(x⃗, t) ⋅ n⃗(x⃗) dS(x) dt + ∫_{t₁}^{t₂} ∫_B s(x⃗, t) dx dt.
Now
∫_B u(x⃗, t₂) dx − ∫_B u(x⃗, t₁) dx = ∫_{t₁}^{t₂} ∫_B ∂_t u(x⃗, t) dx dt
and
∫_{t₁}^{t₂} ∫_{∂B} F⃗(x⃗, t) ⋅ n⃗(x⃗) dS(x) dt = ∫_{t₁}^{t₂} ∫_B div F⃗(x⃗, t) dx dt,
hence
∫_{t₁}^{t₂} ∫_B (∂_t u(x⃗, t) + div F⃗(x⃗, t) − s(x⃗, t)) dx dt = 0        (1.1)
for all B ⊂ U, and all t₁, t₂.
Since the integrand here is assumed to be continuous, it follows that
∂_t u(x⃗, t) + div F⃗(x⃗, t) − s(x⃗, t) = 0   for all x⃗ ∈ U, and all t.        (1.2)
Equation (1.1) is the integral form of the conservation statement, while (1.2) is the differential form of the same statement. This conservation statement describes a large number of physical processes. We consider now a few special cases:
a) Transport
u = u(x⃗, t),   F⃗(x⃗, t) = u(x⃗, t) V⃗,   s(x⃗, t) = 0,   where V⃗ = constant.
In this case, the equation becomes
∂_t u(x⃗, t) + V⃗ ⋅ grad u(x⃗, t) = 0.
b) Steady Diffusion
u = u(x⃗),   F⃗(x⃗, t) = −K ∇u(x⃗),   s = s(x⃗),   where K = constant > 0.
In this case, the equation becomes
−K div grad u(x⃗) = s(x⃗)
or
−K ∇²u(x⃗) = s(x⃗).
This is the equation that governs steady state diffusion of the contaminant through the region U. The equation is called Poisson's equation if s(x⃗) ≠ 0, and Laplace's equation when s(x⃗) = 0. These are the equations we will study in this section.
Another situation which leads to Laplace's equation involves a steady state vector field V⃗ = V⃗(x⃗) having the property that div V⃗(x⃗) = 0. When V⃗ denotes the velocity field for an incompressible fluid, the vanishing divergence expresses that V⃗ conserves mass. When V⃗ denotes the magnetic force field in a magnetostatic field, the vanishing divergence asserts that there are no magnetic sources. In the case that V⃗ represents the vector field of electric force, the equation is the statement that U contains no electric charges. In addition to the equation div V⃗(x⃗) = 0, it may happen that V⃗ satisfies the equation curl V⃗(x⃗) = 0. This condition asserts that the field V⃗ is conservative (energy conserving). Moreover, it is a standard result in vector calculus that curl V⃗(x⃗) = 0 implies that V⃗ = −grad u(x⃗) for some scalar field u = u(x⃗). Then the pair of equations
div V⃗(x⃗) = 0
and
curl V⃗(x⃗) = 0,
taken together, imply that
∇²u(x⃗) = 0
and
V⃗ = −grad u(x⃗).
We say that the conservative field V⃗ is "derivable from the potential u = u(x⃗)". To say that u is a potential is to say that it satisfies Laplace's equation.
The unifying feature of all of these physical models that lead to Laplace’s equation is the
fact that they are all in a state of equilibrium. Whatever forces are acting in each model,
they have come to a state of equilibrium so that the state of the system remains constant in
time. If the balance of the system is disturbed then it will have to go through another
transient process until the forces once again all balance each other and the system is in a
new equilibrium state.
2. Harmonic Functions
A function u = ux is said to be harmonic in U ⊂ R n if:
i) u ∈ C 2 U; i.e., u, together with all its derivatives of order ≤ 2, is continuous in U
ii) ∇ 2 ux⃗ = 0 at each point in U
Note that in Cartesian coordinates,
∂u/∂x 1
div ∇ux⃗ = ∂/∂x 1 , ... , ∂/∂x n  ⋅
⋮
= ∂ 2 u/∂x 21 + ... + ∂ 2 u/∂x 2n
∂u/∂x n
= ∂  ∂ ux⃗ = ∇ 2 ux⃗
It is clear from this that all linear functions are harmonic.
A function depending on x⃗ only through the radial variable, r = (x₁² + ... + xₙ²)^{1/2}, is said to be a radial function. If u is a radial function then
∂u/∂xᵢ = u′(r) ∂r/∂xᵢ,   where   ∂r/∂xᵢ = ½ (x₁² + ... + xₙ²)^{−1/2} (2xᵢ) = xᵢ/r,
and
∂²u/∂xᵢ² = u″(r) (∂r/∂xᵢ)² + u′(r) (r − xᵢ ⋅ xᵢ/r)/r² = u″(r) (∂r/∂xᵢ)² + u′(r) (1/r − xᵢ²/r³),
and
∇²u(x⃗) = Σ_{i=1}^{n} ∂²u/∂xᵢ² = u″(r) Σ_{i=1}^{n} (xᵢ/r)² + u′(r) Σ_{i=1}^{n} (1/r − xᵢ²/r³)
= u″(r) + u′(r) (n/r − 1/r)
= u″(r) + ((n − 1)/r) u′(r).
We see from this computation that the radial function u = u_n(r) is harmonic for various n if:
n = 1:   u₁″(r) = 0;   i.e.,   u₁(r) = A r + B
n = 2:   u₂″(r) + (1/r) u₂′(r) = (1/r) d/dr (r u₂′(r)) = 0;   i.e.,   u₂(r) = C ln r
n > 2:   u_n″(r) + ((n − 1)/r) u_n′(r) = r^{1−n} d/dr (r^{n−1} u_n′(r)) = 0;   i.e.,   u_n(r) = C r^{2−n}
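As a quick sanity check of these formulas, the short computation below (a sketch using Python with the sympy library; it is not part of the original notes) verifies symbolically that ln r is annihilated by the radial Laplacian for n = 2, that r^{2−n} is annihilated for several n > 2, and, directly in Cartesian coordinates, that 1/r is harmonic in R³ away from the origin.

# Symbolic check that u2(r) = log r is harmonic in R^2 and
# un(r) = r^(2-n) is harmonic in R^n (n > 2), away from the origin.
import sympy as sp

def radial_laplacian(u_of_r, n):
    """Return u''(r) + (n-1)/r * u'(r), the Laplacian of a radial function."""
    r = sp.symbols('r', positive=True)
    u = u_of_r(r)
    return sp.simplify(sp.diff(u, r, 2) + (n - 1)/r * sp.diff(u, r))

print(radial_laplacian(lambda r: sp.log(r), 2))                    # expect 0
for n in (3, 4, 5):
    print(n, radial_laplacian(lambda r, n=n: r**(2 - n), n))       # expect 0 each time

# Direct Cartesian check for n = 3: 1/r with r = sqrt(x^2 + y^2 + z^2).
x, y, z = sp.symbols('x y z', positive=True)
r3 = sp.sqrt(x**2 + y**2 + z**2)
print(sp.simplify(sp.diff(1/r3, x, 2) + sp.diff(1/r3, y, 2) + sp.diff(1/r3, z, 2)))  # expect 0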
Note also that since ∇²(∂u/∂xᵢ) = ∂/∂xᵢ (∇²u), for any i, it follows that every derivative of a harmonic function is itself harmonic. Of course this presupposes that the derivative exists, but it will be shown that every harmonic function is automatically infinitely differentiable, so every derivative exists and is therefore harmonic.
It is interesting to note that if u and u² are both harmonic, then u must be constant. To see this, write
∇²(u²) = div(grad u²) = div(2u ∇u) = 2∇u ⋅ ∇u + 2u ∇²u = 2|∇u|².
Then ∇²(u²) = 0 implies |∇u|² = 0, which is to say, u is constant. Evidently, then, the product of harmonic functions need not be harmonic.
It is easy to see that any linear combination of harmonic functions is harmonic, so the harmonic functions form a linear space. It is also easy to see that if u = u(x) is harmonic on Rⁿ then for any z ∈ Rⁿ the translate v(x) = u(x − z) is harmonic, as is the scaled function w(x) = u(λx) for all scalars λ. Finally, ∇² is invariant under orthogonal transformations. To see this, suppose coordinates x⃗ and y⃗ are related by
y⃗ = Qx⃗,   i.e.,   yₖ = Q_{k1} x₁ + ... + Q_{kn} xₙ,   1 ≤ k ≤ n.
Then
∇_x = (∂/∂x₁, ..., ∂/∂xₙ)
and
∂_{xᵢ} = (∂y₁/∂xᵢ) ∂_{y₁} + ... + (∂yₙ/∂xᵢ) ∂_{yₙ} = Q_{1i} ∂_{y₁} + ... + Q_{ni} ∂_{yₙ} = (i-th column of Q) ⋅ ∇_y,
i.e.,
∇_x = Qᵀ ∇_y
and
∇_xᵀ = (Qᵀ ∇_y)ᵀ = ∇_yᵀ Q.
Then
∇²_x = ∇_xᵀ ∇_x = ∇_yᵀ Q Qᵀ ∇_y = ∇_yᵀ ∇_y = ∇²_y,   since Q Qᵀ = I. A transformation Q with this property, Q Qᵀ = I, is said to be an orthogonal transformation. Such transformations include rotations and reflections.
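The orthogonal invariance can also be confirmed symbolically. The sketch below (Python with sympy; the particular test function and the use of a planar rotation are arbitrary choices, not taken from the text) checks that the Laplacian of u(Qx) agrees with (∇²u)(Qx) for a rotation Q of R².

# Check that the Laplacian commutes with a planar rotation Q.
import sympy as sp

x, y, a = sp.symbols('x y a', real=True)      # a = rotation angle
s, t = sp.symbols('s t', real=True)

u = s**3*t - sp.exp(s)*sp.cos(t)              # an arbitrary smooth test function u(s, t)

def laplacian(expr, v1, v2):
    return sp.diff(expr, v1, 2) + sp.diff(expr, v2, 2)

# Rotated coordinates (y1, y2) = Q(x, y), Q = rotation by angle a.
y1 = sp.cos(a)*x - sp.sin(a)*y
y2 = sp.sin(a)*x + sp.cos(a)*y

w = u.subs([(s, y1), (t, y2)])                             # w(x, y) = u(Q(x, y))
lhs = laplacian(w, x, y)                                   # Laplacian of u composed with Q
rhs = laplacian(u, s, t).subs([(s, y1), (t, y2)])          # (Laplacian of u) composed with Q

print(sp.simplify(lhs - rhs))                              # expect 0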
Problem 6 Suppose u and v are both harmonic on R³. Show that, in general, the product of u times v is not harmonic. Give one or more examples of a special case where the product does turn out to be harmonic.
3. Integral Identities
Let U denote a bounded, open, connected set in Rⁿ having a smooth boundary, ∂U. This is sufficient in order for the divergence theorem to be valid on U. That is, if F⃗(x⃗) denotes a smooth vector field over U (i.e., F⃗ ∈ C(Ū) ∩ C¹(U)) and if n⃗(x) denotes the outward unit normal to ∂U at x ∈ ∂U, then the divergence theorem asserts that
∫_U div F⃗ dx = ∫_{∂U} F⃗ ⋅ n⃗ dS(x)        (3.1)
⃗ x = ∇ux for
Consider the integral identity (3.1) in the special case that F
u ∈ C 1 Ū ∩ C 2 U. Then
⃗ x = div ∇ux = ∇ 2 ux
div F
and
⃗⋅⃗
F
n = ∇u ⋅ ⃗
n = ∂ N ux the normal derivative of u)
Then (3.1) becomes
∫U
∇ 2 ux dx =
∫∂U ∂ N ux dSx
3.2
The identity (3.2) is known as Green's first identity. If functions u and v both belong to C¹(Ū) ∩ C²(U) and if F⃗(x) = v(x) ∇u(x), then
div F⃗(x) = div(v(x) ∇u(x)) = v(x) ∇²u(x) + ∇u ⋅ ∇v
and
F⃗ ⋅ n⃗ = v(x) ∇u ⋅ n⃗ = v(x) ∂_N u(x),
and, with this choice for F⃗, (3.1) becomes Green's second identity,
∫_U (v(x) ∇²u(x) + ∇u ⋅ ∇v) dx = ∫_{∂U} v(x) ∂_N u(x) dS(x)        (3.3)
Finally, writing (3.3) with u and v reversed, and subtracting the result from (3.3), we obtain Green's symmetric identity,
∫_U (v(x) ∇²u(x) − u(x) ∇²v(x)) dx = ∫_{∂U} (v(x) ∂_N u(x) − u(x) ∂_N v(x)) dS(x)        (3.4)
Problem 7 Let u = ux, y, z be a smooth function on R 3 and let A denote a 3 by 3 matrix
⃗ = A ∇u. If U denotes a bounded open
whose entries are all smooth functions on R 3 Let F
3
set in R having smooth boundary ∂U, then find a surface integral over the boundary whose
⃗ over U. If v = vx, y, z is also a smooth
value equals the integral of the divergence of F
3
⃗
function on R then write the integral of v div F over U as the sum of 2 integrals, one of which
4
is a surface integral over ∂U.
4. The Mean Value Theorem for Harmonic Functions
We begin by introducing some notation:
B r a =
B̄ r a =
S r a =
x ∈ R n : |x − a| < r
x ∈ R n : |x − a| ≤ r
x ∈ R n : |x − a| = r
the open ball of radius r with center at x=a
the closed ball of radius r with center at x=a
the surface of the ball of radius r with center at x=a
Let A n denote the n-dimensional volume of B 1 0. Then A 2 = π, A 3 = 4π/3, and, in
general A n = π n/2 /Γn/2 + 1. Then the volume of the n-ball of radius r is r n A n . Also let S n
denote the area of the (n-1)-dimensional surface of B 1 0 in R n , (i.e, S n is the area of
∂B 1 0).Then S n = nA n and the area of ∂B r 0 is equal to nA n r n−1 . In particular,
S 2 r = 2πr, S 3 r 2 = 4πr 2 , etc.
We will also find it convenient to introduce the notation
∫_{B_r(a)} f(x) dx̂ = (1/(A_n rⁿ)) ∫_{B_r(a)} f(x) dx = average value of f(x) over B_r(a)
and
∫_{∂B_r(a)} f(x) dŜ(x) = (1/(S_n r^{n−1})) ∫_{∂B_r(a)} f(x) dS(x) = average value of f(x) over ∂B_r(a).
Recall that it follows from Green's first identity that if u(x) is harmonic in U, then for any ball B_r(a) contained in U, we have
∫_{∂B_r(a)} ∂_N u(x) dS(x) = ∫_{B_r(a)} ∇²u(x) dx = 0.
This simple observation is the key to the proof of the following theorem.
Theorem 4.1 (Mean Value Theorem for Harmonic Functions)
Suppose u ∈ C 2 U and ∇ 2 ux = 0 for every x in the bounded, open set U in R n . Then for
every B r x ⊂ U,
ux = ∫
∂B r x
uy dŜy = ∫
B r x
uy dŷ
4.1
i.e., 4.1 asserts that for every x in U, and r > 0, sufficiently small that B r x is contained in
U, ux is equal to the average value of u over the surface, ∂B r x, and ux is also equal to
the average value of u over the entire ball, B r x. A function with the property asserted by
(4.1) is said to have the mean value property.
Proof- Fix a point x in U and an r > 0 such that B_r(x) is contained in the open set U. Let
g(r) = ∫_{∂B_r(x)} u(y) dŜ(y) = ∫_{∂B₁(0)} u(x + rz) dŜ(z).
Here we used the change of variable y = x + rz, or z = (y − x)/r, so as y ranges over ∂B_r(x), z ranges over ∂B₁(0). Then
g′(r) = ∫_{∂B₁(0)} ∇u(x + rz) ⋅ z dŜ(z) = ∫_{∂B_r(x)} ∇u(y) ⋅ (y − x)/r dŜ(y).
It is evident that as y ranges over ∂B_r(x), |y − x| = r, hence (y − x)/r is just the outward unit normal to the surface ∂B_r(x), which means that
∇u(y) ⋅ (y − x)/r = ∂_N u(y).
Then
g′(r) = ∫_{∂B_r(x)} ∂_N u(y) dŜ(y) = ∫_{B_r(x)} ∇²u(y) dŷ = 0   (since u is harmonic in U).
Now g′(r) = 0 implies that g(r) = constant, which leads to
g(r) = lim_{t→0} g(t) = lim_{t→0} ∫_{∂B₁(0)} u(x + tz) dŜ(z) = u(x);
i.e.,
u(x) = ∫_{∂B_r(x)} u(y) dŜ(y)   for all r > 0 such that B_r(x) ⊂ U.
Notice that this result also implies
∫_{B_r(x)} u(y) dy = ∫₀^r ∫_{∂B_t(x)} u(y) dS(y) dt = ∫₀^r u(x) S_n t^{n−1} dt = u(x) A_n rⁿ,
or
u(x) = (1/(A_n rⁿ)) ∫_{B_r(x)} u(y) dy = ∫_{B_r(x)} u(y) dŷ,
which completes the proof of the theorem.■
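The mean value property is easy to test numerically. The sketch below (not part of the original text) averages the harmonic function u(x, y) = eˣ cos y over a circle and over a disc centered at an arbitrarily chosen point and compares both averages with the value at the center.

# Numerical check of the mean value property for the harmonic function
# u(x, y) = exp(x) * cos(y), at the (arbitrary) center (0.3, -0.2).
import numpy as np

u = lambda x, y: np.exp(x) * np.cos(y)
a, b, r = 0.3, -0.2, 0.7                      # center and radius

# Average over the circle of radius r (uniform in angle).
theta = np.linspace(0.0, 2*np.pi, 20001)[:-1]
circle_avg = u(a + r*np.cos(theta), b + r*np.sin(theta)).mean()

# Average over the solid disc (Monte Carlo with points uniform in area).
rng = np.random.default_rng(0)
n = 400_000
rho = r * np.sqrt(rng.random(n))              # sqrt gives uniform area density
phi = 2*np.pi * rng.random(n)
disc_avg = u(a + rho*np.cos(phi), b + rho*np.sin(phi)).mean()

print(u(a, b), circle_avg, disc_avg)          # all three should nearly agree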
The converse of theorem 4.1 is also true.
Theorem 4.2 Suppose U is a bounded open, connected set in Rⁿ and u ∈ C²(U) has the mean value property; i.e., for every x in U and for each r > 0 such that B_r(x) ⊂ U,
u(x) = ∫_{∂B_r(x)} u(y) dŜ(y).
Then ∇²u(x) = 0 in U.
Proof- If it is not the case that ∇²u(x) = 0 throughout U, then there is some B_r(x) ⊂ U such that ∇²u(x) is (say) positive on B_r(x). Then for g(r) as in the proof of theorem 4.1,
0 = g′(r) = ∫_{∂B_r(x)} ∂_N u(y) dŜ(y) = (r/n) ∫_{B_r(x)} ∇²u(y) dŷ > 0.
This contradiction shows there can be no B_r(x) ⊂ U on which ∇²u(x) > 0 (and, by the same argument applied to −u, none on which ∇²u(x) < 0), and hence there is no point in U where ∇²u(x) is different from zero.■
For u = ux, y a smooth function of two variables, we have
∂ xx ux, y  ux + h, y − 2ux, y + ux − h, y/h 2
∂ yy ux, y  ux, y + h − 2ux, y + ux, y − h/h 2
hence
h 2 ∇ 2 ux, y  −4ux, y + ux + h, y + ux − h, y + ux, y + h + ux, y − h
Then the equation, ∇ 2 ux, y = 0 in U, is approximated by the equation,
ux, y = ux + h, y + ux − h, y + ux, y + h + ux, y − h/4.
The expression on the right side of this equation is recognizable as an approximation for
∫∂B x uy dŜy.
r
Thus, in the discrete setting, the connection between the property of being harmonic and
6
the mean value property is more immediate.
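The discrete picture can be explored directly: repeatedly replacing each interior grid value by the average of its four neighbors (Jacobi iteration) drives the grid toward a discrete harmonic function. In the sketch below (an illustration, not from the text), the boundary values are taken from the harmonic polynomial x² − y², which the five-point scheme reproduces exactly in the interior.

# Solve the discrete Laplace equation on a square grid by repeatedly
# averaging each interior point over its four neighbors (Jacobi iteration).
import numpy as np

N = 41                                   # grid points per side
xs = np.linspace(0.0, 1.0, N)
X, Y = np.meshgrid(xs, xs, indexing='ij')
exact = X**2 - Y**2                      # a harmonic function, used for the boundary data

U = np.zeros((N, N))
U[0, :], U[-1, :], U[:, 0], U[:, -1] = exact[0, :], exact[-1, :], exact[:, 0], exact[:, -1]

for _ in range(20000):                   # enough sweeps for this grid size
    U[1:-1, 1:-1] = 0.25 * (U[2:, 1:-1] + U[:-2, 1:-1] + U[1:-1, 2:] + U[1:-1, :-2])

print(np.max(np.abs(U - exact)))         # tiny: the discrete solution matches x^2 - y^2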
5. Maximum-minimum Principles
The following theorem, known as the strong maximum-minimum principle, is an immediate consequence of the mean value property.
Theorem 5.1 (strong maximum-minimum principle) Suppose U is a bounded open, connected set in Rⁿ and u is harmonic in U and continuous on Ū, the closure of U. Let M and m denote, respectively, the maximum and minimum values of u on ∂U. Then either u(x) is constant on Ū (so then u(x) = m = M), or else for every x in U we have
m < u(x) < M.
Proof Let M denote the maximum value of u(x) on Ū and suppose u(x₀) = M. If x₀ is inside U then there exists an r > 0 such that B_r(x₀) ⊂ U and u(x) ≤ u(x₀) for all x ∈ B_r(x₀). Suppose there is some y₀ in B_r(x₀) such that u(y₀) < u(x₀). But this contradicts the mean value property since it implies
M = u(x₀) = ∫_{B_r(x₀)} u(y) dŷ < M.
It follows that u(x) = u(x₀) for all x in B_r(x₀). Similarly, for any other point y₀ ∈ U, the assumption that u(y₀) < u(x₀) leads to a contradiction of the mean value property. Then if x₀ is an interior point of U we are forced to conclude that u(x) is identically equal to M on U and, by continuity, on the closure Ū. On the other hand, if u is not constant on U, then x₀ must lie on the boundary of U.■
Note that if u = ux, y satisfies the discrete Laplace equation,
ux, y = ux + h, y + ux − h, y + ux, y + h + ux, y − h/4,
on a square grid, then u can have neither a max nor a min at an interior point of the grid
since at such a point, the left side of the equation could not equal the right side. At an
interior maximum, the left side would be greater than all four of the values on the right side,
preventing equality. A similar situation would apply at an interior minimum. Unless u is
constant on the grid, the only possible location for an extreme value is at a boundary point
of the grid.
There is a weaker version of theorem 5.1 that is based on simple calculus arguments.
Theorem 5.2 (Weak Maximum-minimum principle) Suppose U is a bounded open, connected set in Rⁿ and u ∈ C(Ū) ∩ C²(U). Let M and m denote, respectively, the maximum and minimum values of u on ∂U. Then
(a) −∇²u(x) ≤ 0 in U   implies   u(x) ≤ M for all x ∈ Ū,
(b) −∇²u(x) ≥ 0 in U   implies   u(x) ≥ m for all x ∈ Ū,
(c) −∇²u(x) = 0 in U   implies   m ≤ u(x) ≤ M for all x ∈ Ū.
Proof of (a): The argument we plan to use cannot be applied directly to u(x). Instead, let
v(x) = u(x) + ε|x|²   for x ∈ Ū and ε > 0,
and note that
−∇²v(x) = −∇²u(x) − 2nε < 0   for all x in U.
It follows that v(x) can have no interior maximum, since at such a point x₀ we would have
∂v/∂xᵢ = 0   and   ∂²v/∂xᵢ² ≤ 0,   for 1 ≤ i ≤ n, at x = x₀.
This is in contradiction to the previous inequality since it implies −∇²v(x₀) ≥ 0. This allows us to conclude that v(x) has no interior max, and v(x) must therefore assume its maximum value at a point on the boundary of U.
Now U is bounded, so for some R sufficiently large we have U ⊂ B_R(0), and this implies the following bound on max_{x∈Ū} v(x):
max_{x∈Ū} v(x) ≤ max_{x∈∂U} v(x) ≤ M + ε max_{x∈∂U} |x|² ≤ M + εR².
Finally, we have u(x) ≤ v(x) ≤ M + εR² for all x in Ū and all ε > 0. Since this holds for all ε > 0, it follows that u(x) ≤ M for all x in Ū.
Statement (b) can be proved by a similar argument, or by applying (a) to −u. Then (c) follows from (a) and (b).■
In the special case n = 1, it is easy to see why theorem 5.2 holds. In that case U = (a, b) and ∇²u = u″(x), and the figure illustrates (a), (b) and (c).
[Figure: graphs over (a, b) illustrating (a) u(x) ≤ M, (b) u(x) ≥ m, (c) m ≤ u(x) ≤ M.]
The following figure illustrates why it is necessary to have both of the hypotheses u ∈ C(Ū) and u ∈ C²(U).
[Figure: one function with u ∈ C(Ū) but u ∉ C²(U), and another with u ∉ C(Ū) but u ∈ C²(U).]
If U is not bounded, then the max-min principle fails in general. For example, if U denotes the unbounded wedge {(x, y) : y > |x|} in R², then u(x, y) = y² − x² is harmonic in U, equals zero on the boundary of U, but is not the zero function inside U. An extended version of the max-min principle, due to E. Hopf, is frequently useful.
Theorem 5.3 Suppose U is a bounded open, connected set in Rⁿ and u ∈ C(Ū) ∩ C²(U). Suppose also that ∇²u(x) = 0 in U and that u is not constant. Finally, suppose U is such that for each point y on the boundary of U, there is a ball contained in U with y lying on the boundary of the ball. If u(y) = M, then ∂_N u(y) > 0, and if u(y) = m, then ∂_N u(y) < 0.
(i.e., at a point on the boundary of U where u(x) assumes an extreme value, the normal derivative does not vanish).
Problem 8 Let ux be harmonic on U and let vx = |∇ux| 2 . Show that
vx ≤ max x∈∂U vx for x ∈ Ū.
(Hint: compute ∇ 2 v and show that it is non-negative on U)
6. Consequences of the Mean Value Theorem and M-m Principles
Throughout this section, U is assumed to be a bounded open, connected set in R n . We list
now several consequences of the results of the previous two sections.
It is a standard result in elementary real analysis that if a sequence of continuous functions {u_m} converges uniformly to a limit u on a compact set K, then u is also continuous. Moreover, for any open subset W in K, we have
lim_{m→∞} ∫_W u_m dx = ∫_W u dx.
Lemma 6.1 Suppose u m x is a sequence of functions which are harmonic in U and
which converge uniformly on Ū. Then u = lim m→∞ u m is harmonic in U.
Proof Since each u m is harmonic in U, theorem 4.1 implies that for every ball, B r x ⊂ U ,
we have
u m x = ∫
∂B r x
u m y dŜy = ∫
B r x
u m y dŷ.
The uniform convergence of the sequence on U implies that
u m x → ux,
∫∂B x u m y dŜy → ∫∂B x uy dŜy,
r
r
∫B x u m y dŷ → ∫B x uy dŷ
r
r
hence
ux = ∫
∂B r x
uy dŜy = ∫
B r x
uy dŷ.
But this says u has the mean value property and so, by theorem 4.2, u is harmonic.■
Lemma 6.2 Suppose u ∈ CŪ ∩ C 2 U satisfies the conditions
∇ 2 ux = 0,
in U,
and
ux = 0, on ∂U.
Then ux = 0 for all x in U.
Proof- The hypotheses, u ∈ CŪ ∩ C 2 U and ∇ 2 ux = 0, in U, imply that m ≤ ux ≤ M,
in Ū. Then ux = 0, on ∂U implies m = M = 0.■
Lemma 6.2 asserts that the so called Dirichlet boundary value problem
∇²u(x) = F(x), x ∈ U,  and  u(x) = g(x), x ∈ ∂U,
has at most one solution in the class C(Ū) ∩ C²(U). Solutions having this degree of smoothness are called classical solutions of the Dirichlet boundary value problem. The partial differential equation is satisfied at each point of U and the boundary condition is satisfied at each point of the boundary. Later we are going to consider solutions in a wider sense.
Lemma 6.3 For any F ∈ CU and g ∈ C∂U, there exists at most one u ∈ CŪ ∩ C 2 U
satisfying
−∇ 2 ux = F,
in U,
and
ux = g, on ∂U.
Proof Suppose u 1 , u 2 ∈ CŪ ∩ C 2 U both satisfy the conditions of the boundary value
problem. Then w = u 1 − u 2 satisfies the hypotheses of lemma 6.2 and is therefore zero on
the closure of U. Then u 1 = u 2 on the closure of U.■
Lemma 6.4 Suppose u ∈ CŪ ∩ C 2 U satisfies
∇ 2 ux = 0,
in U,
ux = g, on ∂U,
and
where gx ≥ 0. If gx 0  > 0 at some point x 0 ∈ ∂U then ux > 0 at every x ∈ U.
Proof First, gx ≥ 0 implies that m = 0. Then gx 0  > 0 at some point x 0 ∈ ∂U implies
M > 0. It follows now from the strong M-m principle that 0 < ux < M at every x ∈ U.■
Note that lemma 6.4 asserts that if a harmonic function that is non-negative on the boundary of its domain is positive at some point of the boundary, then it must be positive at every point inside the domain; i.e., a local stimulus applied to the "skin" of the body produces a global response felt everywhere inside the body. This could be referred to as the organic behavior of harmonic functions. This mathematical behavior is related to the fact that Laplace's equation models physical systems that are in a state of equilibrium. If the boundary state of a system in equilibrium is disturbed, even if the disturbance is very local, then the system must readjust itself at each point inside the boundary to achieve a new state of equilibrium. This is the physical interpretation of "organic behavior".
Lemma 6.5 For F ∈ CŪ and g ∈ C∂U, suppose u ∈ CŪ ∩ C 2 U satisfies
−∇ 2 ux = Fx, x ∈ U,
Then
where
ux = gx, x ∈ ∂U.
and
max x∈U |ux| ≤ C g + M C F
C g = max x∈∂U |gx|,
C F = max x∈U |Fx|,
M = a constant depending on U.
Proof The estimate asserts that −C g + M C F  ≤ ux ≤ C g + M C F for x ∈ Ū. First, let
vx = ux + |x| 2
Then
and
CF
2n
−∇ 2 vx = −∇ 2 ux − C F = Fx − C F ≤ 0
vx ≤ max x∈∂U ux + |x| 2
CF
2n
in U
 for x ∈ Ū.
Since U is bounded, there exists some R > 0 such that |x| 2 ≤ R 2 for x ∈ U. Then
10
vx ≤ C g + R 2
CF
2n
ux ≤ vx ≤ C g + M C F
and
forx ∈ Ū.
Similarly, let
wx = ux − |x| 2
CF
2n
and show that ux ≥ wx ≥ −C g + M C F 
for x ∈ Ū.■
If we define a mapping S : C(Ū) × C(∂U) → C(Ū) ∩ C²(U) that associates the data pair (F, g) for the boundary value problem of lemma 6.5 to the solution u(x), then we would write u = S(F, g). Evidently, lemma 6.5 asserts that the mapping S is continuous. To make this statement precise, we must explain how to measure distance between data pairs (F₁, g₁), (F₂, g₂) in the data space C(Ū) × C(∂U) and between solutions u₁, u₂ in the solution space C(Ū). Although we know that the solutions belong to the space C(Ū) ∩ C²(U), this is a subspace of the larger space C(Ū), so we are entitled to view the solutions as belonging to this larger space. We are using the term "space" to mean a linear space of functions; that is, a set that is closed under the operation of forming linear combinations.
Define the distance between u₁, u₂ in the solution space C(Ū) as follows:
||u₁ − u₂||_{C(Ū)} = max_{x∈Ū} |u₁(x) − u₂(x)|.
Similarly, define the distance from (F₁, g₁) to (F₂, g₂) in the data space C(Ū) × C(∂U) by
||(F₁, g₁) − (F₂, g₂)||_{C(Ū)×C(∂U)} = max_{x∈Ū} |F₁(x) − F₂(x)| + max_{x∈∂U} |g₁(x) − g₂(x)|.
Each of these "distance functions" defines what is called a norm on the linear space where it has been defined. In order to be called a norm, the functions have to satisfy the following conditions:
i) ||αu|| = |α| ||u|| for all scalars α and for all functions u,
ii) ||u + v|| ≤ ||u|| + ||v|| for all functions u, v,
iii) ||u|| ≥ 0 for all u, and ||u|| = 0 if and only if u = 0.
One can check that the distance functions defined above both satisfy all three of these conditions and they therefore qualify as norms on the spaces where they have been defined. Now the estimate of lemma 6.5 asserts that if u_j solves the boundary value problem with data (F_j, g_j), j = 1, 2, then
max_{x∈Ū} |u₁(x) − u₂(x)| ≤ max_{x∈∂U} |g₁(x) − g₂(x)| + M max_{x∈Ū} |F₁(x) − F₂(x)|,
i.e.,
||u₁ − u₂||_{C(Ū)} ≤ max(1, M) ||(F₁, g₁) − (F₂, g₂)||_{C(Ū)×C(∂U)}.
Evidently, if the data pairs are close in the data space, then the solutions are
correspondingly close in the solution space. This is what is meant by continuous
dependence of the solution on the data. Note that if we were to change the definition of the
norm in one or the other (or both) of the spaces, the solution might no longer depend
continuously on the data.
Consider the solution for the following boundary value problem
∇²u(x, y) = 0   for 0 < x < π, y > 0,
u(x, 0) = 0,   ∂_y u(x, 0) = g(x) = (1/n) sin nx,   0 < x < π,
u(0, y) = u(π, y) = 0,   y > 0.
For any integer n, the solution is given by
u(x, y) = (1/n²) sin nx sinh ny.
Evidently, the distance between g and zero in the data space is
||g(x) − 0||_{C(R)} = max_x |(1/n) sin nx| ≤ 1/n,
while the distance between u(x, y) and zero in the solution space is
||u(x, y) − 0||_{C(0<x<π, y>0)} = max_{0<x<π, y>0} |(1/n²) sin nx sinh ny| ≈ e^{ny}/n².
This means that the data can be made arbitrarily close to zero by choosing n large, while the solution can simultaneously be made as far from zero as we like by choosing y > 0 large. Then the solution to this problem does not depend continuously on the data since arbitrarily small data errors could lead to arbitrarily large solution errors. This problem is said to be "not well posed".
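The loss of continuous dependence is easy to tabulate. The sketch below (not part of the original text) compares the data size max|g| = 1/n with the size of the corresponding solution at the fixed height y = 1; the former shrinks while the latter grows without bound.

# Hadamard-type example: data of size 1/n produces a solution of size
# roughly sinh(n*y)/n^2, which blows up as n grows.
import numpy as np

y = 1.0                                   # fixed height at which the solution is measured
for n in (1, 2, 5, 10, 20):
    data_size = 1.0 / n                   # max |g| = max |(1/n) sin(n x)|
    x = np.linspace(0.0, np.pi, 2001)
    sol_size = np.max(np.abs(np.sin(n*x) * np.sinh(n*y) / n**2))
    print(f"n = {n:3d}   max|g| = {data_size:.4f}   max|u(., 1)| = {sol_size:.4e}")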
7. Uniqueness from Integral Identities
Integral identities can be used to prove that various boundary value problems cannot have more than one solution. For example, consider the following boundary value problem
∇²u(x) = F(x), x ∈ U,   ∂_N u(x) = g(x), x ∈ ∂U.
This is known as the Neumann boundary value problem for Poisson's equation. Green's first identity leads to
∫_U F(x) dx = ∫_U ∇²u(x) dx = ∫_{∂U} ∂_N u(x) dS(x) = ∫_{∂U} g(x) dS(x).
Then a necessary condition for the existence of a solution to this problem is that the data F, g satisfies
∫_U F(x) dx = ∫_{∂U} g(x) dS(x).
If this condition is satisfied, and if u₁, u₂ denote two solutions to the problem, then w = u₁ − u₂ satisfies the problem with F = g = 0. Then we have
0 = ∫_U w ∇²w dx = ∫_{∂U} w ∂_N w dS(x) − ∫_U ∇w ⋅ ∇w dx = −∫_U |∇w|² dx.
But this implies that |∇w| = 0, which is to say, w is constant in U. Then the solutions to this boundary value problem may differ by a constant; they are not unique. We should point out that in order for the equation and the boundary condition to have meaning in the classical sense, we must assume that the solutions to this problem belong to the class C¹(Ū) ∩ C²(U).
On the other hand, consider the problem
∇²u(x) = F(x), x ∈ U,
u(x) = g₁(x), x ∈ ∂U₁,   ∂_N u(x) = g₂(x), x ∈ ∂U₂,
where ∂U is composed of two distinct pieces, ∂U₁ and ∂U₂. Now if u₁, u₂ denote two solutions to the problem, and w = u₁ − u₂, then we have, as before,
0 = ∫_U w ∇²w dx = ∫_{∂U} w ∂_N w dS(x) − ∫_U ∇w ⋅ ∇w dx = ∫_{∂U₁} w ∂_N w dS(x) + ∫_{∂U₂} w ∂_N w dS(x) − ∫_U |∇w|² dx.
In this case, w = 0 on ∂U₁ and ∂_N w = 0 on ∂U₂, so we again reach the conclusion that w is constant in U. Since w ∈ C¹(Ū) ∩ C²(U), it follows that if w = 0 on ∂U₁, then w = 0 on Ū. Then the solution to this problem is unique.
Finally, consider the Dirichlet problem for the so called Helmholtz equation,
−∇²u(x) + c(x) u(x) = F(x), x ∈ U,   u(x) = g(x), x ∈ ∂U,
where we suppose that c(x) ≥ C₀ > 0 for x ∈ U. We can use integral identities to show that this problem has at most one smooth solution. As usual, we begin by supposing the problem has two solutions and we let w(x) denote their difference. Then
−∇²w(x) + c(x) w(x) = 0, x ∈ U,   w(x) = 0, x ∈ ∂U,
and
0 = ∫_U w(x) (−∇²w(x) + c(x) w(x)) dx = −∫_{∂U} w ∂_N w dS(x) + ∫_U ∇w ⋅ ∇w dx + ∫_U c(x) w(x)² dx.
Since w = 0 on ∂U, it follows that
0 = ∫_U (|∇w|² + c(x) w(x)²) dx ≥ C₀ ∫_U w(x)² dx,
and this implies that w(x) vanishes at every point of Ū. Notice that this proof of uniqueness doesn't work if we don't know that the coefficient c(x) is strictly positive. (How would the proof have to be modified if we knew only that c(x) ≥ 0?)
Problem 9 Prove that the following problem has at most one smooth solution:
−∇²u(x) = F(x), x ∈ U,  and  u(x) = g(x), x ∈ ∂U.
Use first the Green's identity approach and then use the result in lemma 6.5. Note that this result was already established by means of the M-m principle.
Problem 10 Prove that the following problem has at most one smooth solution:
−∇²u(x) = F(x) in U,  and  u(x) + ∂_N u(x) = g(x) on ∂U.
Eigenvalues for the Laplacian
The eigenvalues for the Dirichlet problem for the Laplace operator are any scalars λ for which there exist nontrivial solutions to the Dirichlet boundary value problem,
−∇²u(x) = λ u(x), x ∈ U,   u(x) = 0, x ∈ ∂U.
Note that if u(x) = 0 then any choice of λ will satisfy the conditions of the problem. Therefore we allow only nontrivial solutions and we refer to these as eigenfunctions. If u(x) is an eigenfunction for this problem corresponding to an eigenvalue λ, then
λ ∫_U u(x)² dx = −∫_U u(x) ∇²u(x) dx = −∫_{∂U} u ∂_N u dS(x) + ∫_U |∇u|² dx.
Then λ satisfies
λ = ∫_U |∇u|² dx / ∫_U u(x)² dx > 0.
Note that |∇u| ≠ 0 since this would lead to u = 0, which is not allowed if u is an eigenfunction. We have shown that all eigenvalues of the Dirichlet problem for the Laplace operator are strictly positive.
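A finite-dimensional analogue of this positivity is easy to observe. In the sketch below (an illustration, not from the text), the matrix of the second-difference approximation to −u″ on (0, 1) with zero boundary values has only positive eigenvalues, the smallest of which approximate (kπ)².

# Eigenvalues of the discrete Dirichlet Laplacian -u'' on (0, 1):
# the matrix (1/h^2) * tridiag(-1, 2, -1) has only positive eigenvalues.
import numpy as np

m = 200                                   # number of interior grid points
h = 1.0 / (m + 1)
A = (np.diag(2.0*np.ones(m)) - np.diag(np.ones(m-1), 1) - np.diag(np.ones(m-1), -1)) / h**2

evals = np.sort(np.linalg.eigvalsh(A))
print(evals[:4])                          # approximately pi^2, 4 pi^2, 9 pi^2, 16 pi^2
print((np.pi * np.arange(1, 5))**2)
print(evals.min() > 0)                    # True: all eigenvalues are strictly positive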
Problem 11 Show that the Neumann problem,
−∇²u(x) = λ u(x), x ∈ U,   ∂_N u(x) = 0, x ∈ ∂U,
has a zero eigenvalue with the corresponding eigenfunction u(x) = constant.
Problem 12 Under what conditions on the function α(x) does the boundary value problem,
−∇²u(x) = λ u(x), x ∈ U,   α(x) u(x) + ∂_N u(x) = 0, x ∈ ∂U,
have only positive eigenvalues?
Problem 13 Show that for each of the eigenvalue problems considered here, if u(x) is an eigenfunction corresponding to an eigenvalue λ, then for any nonzero constant k, v(x) = k u(x) is also an eigenfunction corresponding to the eigenvalue λ.
8. Fundamental Solutions for the Laplacian
Let δx denote the ”function” with the property that for any continuous function, fx,
∫R n δx fx dx = f0, or, equivalently, ∫R n δx − y fy dy = fx
Of course this is a purely formal definition since there is no function δx which could have
this property. Later, we will see that δx can be given a rigorous, consistent meaning in the
context of generalized functions. However, using the delta in this formal way, we can give a
formal definition of a fundamental solution for the negative Laplacian as the solution of,
−∇ 2x Ex − y = δx − y,
x, y ∈ R n .
8.1
Formally, this definition implies
−∇ 2x ∫ n Ex − yfy dy = ∫ n δx − y fy dy = fx
R
R
Then the solution of the equation
−∇ 2 ux = fx,
x ∈ Rn,
is given by
ux = ∫ n Ex − yfy dy.
R
8.2
Although these steps are only formal, they can be made rigorous. Note that since there are
no side conditions imposed on Ex or on ux neither of these functions is unique. For
example, any harmonic function could be added to either of them and the resulting function
would still satisfy the same equation.
14
Since δx and ∇ 2 are both radially symmetric, it seems reasonable to assume that Ex is
radially symmetric as well; i.e., Ex = Er, for r = x 21 + ... + x 2n . Then a definition for Ex
which does not make use of δx can be stated as follows:
E n x is a fundamental solution for −∇ 2 on R n if,
i
E n r ∈ C 2 R n \0
ii ∇ 2 E n r = 0,
iii lim →0 ∫
∂B  0
for r > 0
8.3
∂ N E n xdSx = −1
The properties i) and ii) in the definition imply that
∇ 2 E n r = E n ”r + n −r 1 E ′n r = 0,
i.e.,
for r > 0
E n ”r/E ′n r = −n − 1/r
log E ′n r = −n − 1 log r + C,
E ′n r = C r 1−n ,
E n r =
if n = 2
C 2 log r
Cn r
if n > 2
2−n
.
The constant C_n can be determined from part iii) of the definition. It is this part of the definition that causes −∇²E_n(x) to behave like δ(x).
For n = 2 we have
∫_{∂B_ε(0)} ∂_N E₂(x) dS(x) = ∫₀^{2π} ∂_r(C₂ log r)|_{r=ε} ε dθ = C₂ ∫₀^{2π} (1/ε) ε dθ = 2πC₂.
Then
lim_{ε→0} ∫_{∂B_ε(0)} ∂_N E₂(x) dS(x) = 2πC₂ = −1,
so C₂ = −1/(2π) and E₂(r) = −(1/2π) log r.
When n = 3 we have
∫_{∂B_ε(0)} ∂_N E₃(x) dS(x) = ∫_{∂B₁(0)} ∂_r(C₃/r)|_{r=ε} ε² dω = −C₃ ∫_{∂B₁(0)} (1/ε²) ε² dω = −4πC₃.
Then
lim_{ε→0} ∫_{∂B_ε(0)} ∂_N E₃(x) dS(x) = −4πC₃ = −1,
so C₃ = 1/(4π) and E₃(r) = 1/(4πr).
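Both requirements in (8.3) can be confirmed symbolically. The sketch below (Python with sympy, not part of the original text) checks that E₂ and E₃ are harmonic away from the origin and that the flux of each through a circle or sphere of radius ε equals −1, independently of ε.

# Check the two defining properties of the fundamental solutions
# E2(r) = -(1/2 pi) log r  and  E3(r) = 1/(4 pi r).
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
r = sp.symbols('r', positive=True)
x, y, z = sp.symbols('x y z', real=True, nonzero=True)

# n = 2: harmonicity away from 0, in Cartesian coordinates.
E2 = -sp.log(sp.sqrt(x**2 + y**2)) / (2*sp.pi)
print(sp.simplify(sp.diff(E2, x, 2) + sp.diff(E2, y, 2)))          # expect 0

# n = 2: flux through the circle of radius eps equals dE2/dr * (2 pi eps).
E2r = -sp.log(r) / (2*sp.pi)
print(sp.simplify(sp.diff(E2r, r).subs(r, eps) * 2*sp.pi*eps))     # expect -1

# n = 3: harmonicity, and flux through the sphere of radius eps (area 4 pi eps^2).
E3 = 1 / (4*sp.pi*sp.sqrt(x**2 + y**2 + z**2))
print(sp.simplify(sp.diff(E3, x, 2) + sp.diff(E3, y, 2) + sp.diff(E3, z, 2)))  # expect 0
E3r = 1 / (4*sp.pi*r)
print(sp.simplify(sp.diff(E3r, r).subs(r, eps) * 4*sp.pi*eps**2))  # expect -1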
We will now show that condition (8.3)(iii) really does produce the δ behavior for −∇²E_n. Of course we can't try to show that −∇²E_n = δ(x) since we are not allowed to refer to δ(x). Instead, we will show equivalently that −∇²u(x) = f(x), for u given by (8.2). Here, we suppose that f(x) is continuous, together with all its derivatives of order less than or equal to 2, and we suppose further that f(x) has compact support; i.e., for some positive K, f(x) vanishes for |x| > K. The notation for this class of functions is C²_c(Rⁿ).
Theorem 8.1 Let E n r denote a fundamental solution for −∇ 2 on R n . Then, for any
f ∈ C 2c R n ,
ux = ∫ n E n x − yfy dy,
R
satisfies
u ∈ C 2 R n ,
− ∇ 2 ux = fx
for any x ∈ R n .
Proof The smoothness of f implies the smoothness of u; i.e., for i = 1, 2, ..., n
ux⃗ + he⃗i  − ux⃗
fx⃗ + he⃗i − z − fx⃗ − z
= lim h→0 ∫ n Ez
dz,
R
h
h
fx⃗ + he⃗i − z − fx⃗ − z
converges uniformly to ∂f/∂x i and it follows that for each i,
h
∂u/∂x i = lim h→0
Now
∂u/∂x i = ∫ n Ez ∂ x i fx − z dy,
R
Similarly, ∂ ux/∂x i ∂x j exists for each i and j since the corresponding derivatives of f all
exist.
To show the second assertion, write
−∇²_x u(x) = ∫_{Rⁿ} −E_n(z) ∇²_x f(x − z) dz = ∫_{Rⁿ} −E_n(z) ∇²_z f(x − z) dz.
Since E_n(z) tends to infinity as |z| tends to zero, we treat this as an improper integral;
∫_{Rⁿ} E_n(z) ∇²_z f(x − z) dz = ∫_{B_ε(0)} E_n(z) ∇²_z f(x − z) dz + ∫_{Rⁿ\B_ε(0)} E_n(z) ∇²_z f(x − z) dz.
First, note that
|∫_{B_ε(0)} E_n(z) ∇²_z f(x − z) dz| ≤ max_{B_ε(0)} |∇²_z f(x − z)| ∫_{B_ε(0)} |E_n(z)| dz.
But
∫_{B_ε(0)} |E_n(z)| dz = (1/2π) ∫₀^{2π} ∫₀^ε |log r| r dr dθ ≤ C ε² |log ε|   if n = 2,
∫_{B_ε(0)} |E_n(z)| dz = C_n ∫_ω ∫₀^ε r^{2−n} r^{n−1} dr dω = C ε²   if n > 2,
hence
lim_{ε→0} ∫_{B_ε(0)} E_n(z) ∇²_z f(x − z) dz = 0.
Next,
∫_{Rⁿ\B_ε(0)} E_n(z) ∇²_z f(x − z) dz = ∫_{∂(Rⁿ\B_ε(0))} E_n(z) ∂_N f(x − z) dS(z) − ∫_{Rⁿ\B_ε(0)} ∇E_n(z) ⋅ ∇_z f(x − z) dz,
and
|∫_{∂(Rⁿ\B_ε(0))} E_n(z) ∂_N f(x − z) dS(z)| ≤ max_{z∈∂B_ε(0)} |∂_N f(x − z)| ∫_{−∂B_ε(0)} |E_n(z)| dS(z)
≤ C₁ (1/2π) ∫₀^{2π} ε |log ε| dθ = C₂ ε |log ε|   if n = 2,
≤ C₁ C_n ∫_ω ε^{2−n} ε^{n−1} dω = C₃ ε   if n > 2.
We used the fact that ∂(Rⁿ\B_ε(0)) = −∂B_ε(0). Finally, since E_n(z) is harmonic in Rⁿ\B_ε(0),
∫_{Rⁿ\B_ε(0)} ∇E_n(z) ⋅ ∇_z f(x − z) dz = ∫_{−∂B_ε(0)} ∂_N E_n(z) f(x − z) dS(z) − ∫_{Rⁿ\B_ε(0)} ∇²E_n(z) f(x − z) dz
= ∫_{−∂B_ε(0)} ∂_N E_n(z) f(x − z) dS(z).
Now we can write
∫_{−∂B_ε(0)} ∂_N E_n(z) f(x − z) dS(z) = ∫_{−∂B_ε(0)} ∂_N E_n(z) (f(x − z) − f(x)) dS(z) + ∫_{−∂B_ε(0)} ∂_N E_n(z) f(x) dS(z),
and note that because f(x) is continuous,
|∫_{−∂B_ε(0)} ∂_N E_n(z) (f(x − z) − f(x)) dS(z)| ≤ C max_{z∈∂B_ε(0)} |f(x − z) − f(x)| → 0 as ε → 0.
In addition,
∫_{−∂B_ε(0)} ∂_N E_n(z) dS(z) = −∫_{∂B_ε(0)} ∂_N E_n(z) dS(z) → 1 as ε → 0
because of (8.3)(iii), and then it follows that
−∇²u(x) = lim_{ε→0} ∫_{−∂B_ε(0)} ∂_N E_n(z) f(x − z) dS(z) = f(x)   ∀x ∈ Rⁿ.■
We remark again that since no side conditions have been imposed on u(x), this solution is not unique. Any harmonic function could be added to u(x) and the sum would also satisfy −∇²u(x) = f(x).
9. Green’s Functions for the Laplacian
Throughout this section, U is assumed to be a bounded open, connected set in Rⁿ, whose boundary ∂U is sufficiently smooth that the divergence theorem holds. Consider the Dirichlet boundary value problem for Poisson's equation,
−∇²u(x) = F(x)   for x ∈ U,  and  u(x) = g(x)   for x ∈ ∂U.        (9.1)
We know that
u(x) = ∫_{Rⁿ} E_n(x − y) F(y) dy
satisfies the partial differential equation, but this function does not, in general, satisfy the Dirichlet boundary condition. In order to find a function which satisfies both the equation and the boundary condition, recall that for smooth functions u(x) and v(x),
∫_U (v(y) ∇²_y u(y) − u(y) ∇²_y v(y)) dy = ∫_{∂U} (v(y) ∂_N u(y) − u(y) ∂_N v(y)) dS(y).        (9.2)
For x in U fixed but arbitrary, let v(y) = E_n(x − y) − φ(y) in (9.2), where φ denotes a yet to be specified function that is harmonic in U. Then since E_n(x − y) is a fundamental solution and φ is harmonic in U,
−∫_U u(y) ∇²_y v(y) dy = ∫_U u(y) (−∇²_y E_n(x − y) − 0) dy = u(x).
Since ux solves the Dirichlet problem, (9.2) becomes now,
ux = − ∫ vy ∇ 2y uy dy + ∫ vy ∂ N uy − uy ∂ N vy dSy
∂U
U
= ∫ vy Fydy − ∫
U
∂U
gy ∂ N vy dSy + ∫
∂U
vy ∂ N uydSy
If the values of ∂ N uy were known on ∂U then this would be an expression for the solution
ux in terms of the data in the problem. Since ∂ N uy on the boundary is not given, we
instead choose the harmonic function φ in such a way as to make the integral containing
this term disappear. Let φ be the solution of the following Dirichlet problem,
∇ 2y φy = 0
for y ∈ U,
φy = E n x − y,
for y ∈ ∂U
where we recall that x denotes some fixed but arbitrary point in U. Then
vy = E n x − y − φy = 0 on the boundary and the previous expression for ux reduces to
ux = ∫ Gx, y fy dy − ∫
U
where
∂U
∂ N Gx, y gy dSy
9.3
Gx, y = E n x − y − φy. Formally, Gx, y solves
−∇ 2 Gx, y = −∇ 2 E n x − y − 0 = δx − y
Gx − y = 0,
for x, y ∈ U,
9.4
for x ∈ U, y ∈ ∂U
and G(x,y) is known as the Green’s function for the Dirichlet problem for the Laplacian, or,
alternatively, as the Green’s function of the first kind. Note that if there are two Green’s
functions then their difference satisfies a completely homogeneous Dirichlet problem. This
would seem to imply uniqueness for the Green’s function except for the fact that the
uniqueness proofs were for the class of functions C 2 U ∩ CU and it is not known that
Gx, y is in this class. This point will be cleared up later.
It can be shown rigorously that G(x, y) = G(y, x) for all x, y ∈ U. However, a formal demonstration based on (9.4) proceeds as follows. For x, z ∈ U (be careful to note that x and z are fixed points in Rⁿ while y is the variable of integration), apply (9.2) with u(y) = G(y, z) and v(y) = G(y, x):
∫_U (u(y) ∇²_y v(y) − v(y) ∇²_y u(y)) dy = −∫_U (G(y⃗, z⃗) δ(y⃗ − x⃗) − G(y⃗, x⃗) δ(y⃗ − z⃗)) dy,
∫_{∂U} (u(y) ∂_N v(y) − v(y) ∂_N u(y)) dS(y) = ∫_{∂U} (G(y⃗, z⃗) ∂_N G(y⃗, x⃗) − G(y⃗, x⃗) ∂_N G(y⃗, z⃗)) dS(y) = 0.
The last integral vanishes because G(y, z) = G(y, x) = 0 for y ∈ ∂U. Then (9.2) implies
0 = ∫_U (G(y, z) δ(y − x) − G(y, x) δ(y − z)) dy = G(x, z) − G(z, x)
for all x, z ∈ U.
This proof will become rigorous when we have developed the generalized function framework in which this argument has meaning.
Example 9.1 Let U = {(x₁, x₂) ∈ R² : x₂ > 0}. The half space is the simplest example of a set having a boundary (i.e., the boundary of the half space is the x₁-axis, x₂ = 0) and we will be able to construct the Green's function of the first kind for this simple set. Note that the half space is not a bounded set (having a boundary is not the same as being bounded!).
Since n = 2, we write
E(x⃗ − y⃗) = −(1/2π) log |x⃗ − y⃗| = −(1/2π) log √((x₁ − y₁)² + (x₂ − y₂)²).
For x⃗ = (x₁, x₂) ∈ U, let x⃗* = (x₁, −x₂) and let
v(y⃗) = −(1/2π) log |x⃗* − y⃗| = −(1/2π) log √((x₁ − y₁)² + (x₂ + y₂)²).
Then v = v(y⃗) is a harmonic function of y⃗ for y⃗ ∈ U. Moreover, v reduces to v(y⃗) = E(x⃗ − y⃗) for y⃗ ∈ ∂U; i.e., v(y₁, 0) = E((x₁, x₂) − (y₁, 0)). Then
G(x⃗, y⃗) = −(1/2π) (log |x⃗ − y⃗| − log |x⃗* − y⃗|) = −(1/2π) log (|x⃗ − y⃗| / |x⃗* − y⃗|)
= −(1/2π) log √(((x₁ − y₁)² + (x₂ − y₂)²) / ((x₁ − y₁)² + (x₂ + y₂)²)).        (9.5)
Note that G(x⃗, y⃗) = 0 for y⃗ ∈ ∂U; i.e., G(x₁, x₂, y₁, 0) = 0. It is clear from the construction that for each fixed x⃗ = (x₁, x₂) ∈ U, G(x⃗, y⃗) is a harmonic function of y⃗ for y⃗ ∈ U.
Problem 14 Show that for the half-space U = {(x₁, x₂) ∈ R² : x₂ > 0},
∂_N G(x⃗, y⃗)|_{y⃗∈∂U} = −(1/π) x₂/((x₁ − y₁)² + x₂²),
so that
u(x₁, x₂) = (1/π) ∫_{−∞}^{∞} x₂/((x₁ − y₁)² + x₂²) g(y₁) dy₁
solves
∇²u(x₁, x₂) = 0 in U,  and  u(x₁, 0) = g(x₁), x₁ ∈ R.
Example 9.2 Let U = {(r, θ) : 0 < r < R, |θ| < π} = D_R(0). Suppose u = u(r, θ) satisfies
−∇²u(r, θ) = 0   in U,  and  u(R, θ) = g(θ)   on ∂U = {(r, θ) : r = R, |θ| < π}.
In an elementary course on PDEs we would show that for all choices of the constants a_n, b_n,
u(r, θ) = ½ a₀ + Σ_{n=1}^{∞} rⁿ (a_n cos nθ + b_n sin nθ)
solves Laplace's equation in the disc, U. Moreover, the boundary condition is satisfied if
u(R, θ) = ½ a₀ + Σ_{n=1}^{∞} Rⁿ (a_n cos nθ + b_n sin nθ) = g(θ).        (9.6)
Then we would appeal to the theory of Fourier series, which asserts that any continuous g can be expressed as
g(θ) = ½ A₀ + Σ_{n=1}^{∞} (A_n cos nθ + B_n sin nθ),        (9.7)
where
A_n = (1/π) ∫_{−π}^{π} g(s) cos ns ds,   B_n = (1/π) ∫_{−π}^{π} g(s) sin ns ds.
Then, comparing (9.6) with (9.7), it follows that Rⁿ a_n = A_n, Rⁿ b_n = B_n, and so
u(r, θ) = ½ A₀ + Σ_{n=1}^{∞} (r/R)ⁿ (A_n cos nθ + B_n sin nθ)
satisfies both the PDE and the boundary condition. By uniqueness, this must be the solution of the boundary value problem. If we write
A_n cos nθ + B_n sin nθ = (1/π) ∫_{−π}^{π} g(s) cos ns ds cos nθ + (1/π) ∫_{−π}^{π} g(s) sin ns ds sin nθ
= (1/π) ∫_{−π}^{π} g(s) (cos ns cos nθ + sin ns sin nθ) ds
= (1/π) ∫_{−π}^{π} g(s) cos n(θ − s) ds,
then ur, θ can be written as
π
∞
−π
n=1
ur, θ = 1/π ∫  12 + ∑r/R n cosnθ − s gs ds,
=
1
2π
π
∫ −π
R2 − r2
gs ds
R 2 − 2Rr cosθ − s + r 2
Here the series in n was summed by writing cosnθ − s in terms of exp±inθ − s and
recognizing that the series is a geometric series. Then
ur, θ =
1
2π
π
∫ −π
π
R2 − r2
gs ds = ∫ ∂ N Gr, θ, R, s gs ds
2
−π
R − 2Rr cosθ − s + r
2
where Gr, θ, R, s denotes the Green’s function for this problem. This representation is
often called the Poisson integral formula.
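As a numerical check of the Poisson integral formula (a sketch, not in the original text), take R = 1 and the boundary data g(θ) = cos θ + 2 sin 3θ; by the series above its harmonic extension is r cos θ + 2r³ sin 3θ, and quadrature of the Poisson kernel reproduces these values.

# Numerical check of the Poisson integral formula on the unit disc (R = 1)
# for g(theta) = cos(theta) + 2 sin(3 theta); the series solution is
# u(r, theta) = r cos(theta) + 2 r^3 sin(3 theta).
import numpy as np

R = 1.0
g = lambda s: np.cos(s) + 2.0*np.sin(3*s)

def poisson_disc(r, theta, m=4000):
    """(1/2 pi) * integral over (-pi, pi) of the Poisson kernel times g."""
    s = np.linspace(-np.pi, np.pi, m, endpoint=False)     # periodic trapezoid rule
    kernel = (R**2 - r**2) / (R**2 - 2*R*r*np.cos(theta - s) + r**2)
    return (kernel * g(s)).mean()

exact = lambda r, theta: r*np.cos(theta) + 2.0*r**3*np.sin(3*theta)

for (r, th) in [(0.3, 0.7), (0.8, -1.2), (0.5, 2.5)]:
    print(poisson_disc(r, th), exact(r, th))               # each pair should agree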
10. The Inverse Laplace Operator
We are all familiar with problems of the form Ax⃗ = f⃗, where A denotes an n by n matrix and x⃗, f⃗ denote vectors in the linear space Rⁿ. In this situation, A can be viewed as a linear operator from the linear space Rⁿ into Rⁿ. If the only solution of Ax⃗ = 0⃗ is x⃗ = 0⃗, then Ax⃗ = f⃗ has a unique solution x⃗ for every data vector f⃗. This solution can be expressed as x⃗ = A⁻¹f⃗, where A⁻¹ denotes the inverse of the matrix A. There are strong analogies between the problem Ax⃗ = f⃗ on Rⁿ and the problem (9.1).
Consider problem (9.1) in the special case g = 0; i.e.,
−∇²u(x⃗) = f(x), x ∈ U,   u(x) = 0, x ∈ ∂U.        (10.1)
Recall that we showed that the only solution of (9.1) when g = f = 0 is u = 0, so the solution to (10.1) is unique. In fact, the unique solution u = u(x⃗) can be expressed in terms of the Green's function by
u(x⃗) = ∫_U G(x⃗, y⃗) f(y⃗) dy⃗.        (10.2)
If we define
K f(x⃗) = ∫_U G(x⃗, y⃗) f(y⃗) dy⃗   for any f ∈ C(Ū),
then it is clear that
K(C₁ f₁ + C₂ f₂) = C₁ K(f₁) + C₂ K(f₂)   for all f₁, f₂ ∈ C(Ū), C₁, C₂ ∈ R.
We say that K is a linear operator on the linear space C(Ū). We recall that to say that C(Ū) is a linear space is to say that for all f₁, f₂ ∈ C(Ū) and for all C₁, C₂ ∈ R, the linear combination C₁ f₁ + C₂ f₂ is also in C(Ū).
The problem (10.1) can be expressed in operator notation. Define an operator L by
Lu(x⃗) = −∇²u(x⃗)   for any u ∈ D = {u ∈ C²(Ū) : u(x⃗) = 0 for x⃗ ∈ ∂U}.
Then for any u ∈ D it follows that Lu(x⃗) ∈ C(Ū), so L can be viewed as a function defined on D with values in C(Ū). Since D is a subspace of C(Ū) we can even say that L is a function from C(Ū) into C(Ū), but we should note that L is not defined on all of C(Ū).
It is also easy to check that L is a linear operator from D into C(Ū), and (10.1) can be expressed in terms of this linear operator as follows:
find u ∈ D such that Lu = f ∈ C(Ū).
The uniqueness for (10.1), stated in the operator terminology, becomes:
Lu = 0 if and only if u = 0.
Evidently, the operators K and L are related by:
a) for any f ∈ C(Ū), K f(x⃗) ∈ D and L(K f)(x⃗) = f(x⃗);
b) for any u ∈ D, Lu(x⃗) ∈ C(Ū) and K(Lu)(x⃗) = u(x⃗).
These two statements together assert that K = L⁻¹; K is the operator inverse to L.
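A finite-dimensional model makes the relationship between K and L concrete. In the sketch below (an illustration, not from the text), L is the usual second-difference matrix for −u″ on (0, 1) with zero Dirichlet values; its inverse K plays the role of the Green's operator, and K turns out to be symmetric, mirroring G(x, y) = G(y, x).

# Discrete model of L = -d^2/dx^2 with zero Dirichlet boundary values on (0, 1),
# and its inverse K (the discrete Green's function).
import numpy as np

m = 50
h = 1.0 / (m + 1)
L = (np.diag(2.0*np.ones(m)) - np.diag(np.ones(m-1), 1) - np.diag(np.ones(m-1), -1)) / h**2

K = np.linalg.inv(L)                       # discrete analogue of u = K f

print(np.allclose(K, K.T))                 # True: K is symmetric, like G(x, y) = G(y, x)
print(np.allclose(L @ K, np.eye(m)))       # True: L K = I

# For -u'' = 1, u(0) = u(1) = 0 the exact solution is u(x) = x(1 - x)/2.
x = np.linspace(h, 1.0 - h, m)
u = K @ np.ones(m)
print(np.max(np.abs(u - x*(1.0 - x)/2.0))) # essentially zero: the scheme is exact for this quadratic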
If we use the notation ⟨x⃗, z⃗⟩ to denote the usual inner product between two vectors x⃗, z⃗, then
⟨Ax⃗, z⃗⟩ = ⟨x⃗, Aᵀz⃗⟩   for all x⃗, z⃗ ∈ Rⁿ.
Here Aᵀ denotes the matrix transpose of A. It is a fact from linear algebra that the dimension of the null space of the matrix A is equal to the dimension of the null space of the transpose matrix, Aᵀ. If the null space of A has positive dimension then the solution of Ax⃗ = f⃗ is not unique. What is more, if z⃗ denotes any vector in the null space of Aᵀ then
⟨f⃗, z⃗⟩ = ⟨Ax⃗, z⃗⟩ = ⟨x⃗, Aᵀz⃗⟩ = 0,
and it is then evident that a necessary condition for the existence of a solution for Ax⃗ = f⃗ is that ⟨f⃗, z⃗⟩ = 0 for all z⃗ in the null space of Aᵀ. The matrix A is said to be symmetric if either of the following equivalent conditions applies: A = Aᵀ or ⟨Ax⃗, z⃗⟩ = ⟨x⃗, Az⃗⟩ for all x⃗, z⃗. When A is symmetric, the null space of A not only has the same dimension as that of Aᵀ, the two null spaces are actually the same. In this case, Ax⃗ = f⃗ has no solution unless ⟨f⃗, z⃗⟩ = 0 for all z⃗ in the null space of A. If this condition is satisfied, then any two solutions of Ax⃗ = f⃗ differ by an element from the null space of A.
We will now consider the analogue of these last results for the case of a boundary value problem for Laplace's equation. First, we have to have an inner product on the function space C(Ū). The essential properties of the inner product are:
i) ⟨x⃗, z⃗⟩ = ⟨z⃗, x⃗⟩   for all x⃗, z⃗;
ii) ⟨Cx⃗, z⃗⟩ = C ⟨x⃗, z⃗⟩   for all x⃗, z⃗, and all C ∈ R;
iii) ⟨x⃗ + y⃗, z⃗⟩ = ⟨x⃗, z⃗⟩ + ⟨y⃗, z⃗⟩   for all x⃗, y⃗, z⃗;
iv) ⟨x⃗, x⃗⟩ ≥ 0 for all x⃗, and ⟨x⃗, x⃗⟩ = 0 if and only if x⃗ = 0⃗;
and any mapping from Rⁿ × Rⁿ to R having these four properties is called an inner product on the linear space Rⁿ.
We can define an inner product on the function space C(Ū) by letting
⟨f₁, f₂⟩ = ∫_U f₁(x⃗) f₂(x⃗) dx⃗   for all f₁, f₂ ∈ C(Ū).
This is just a generalization of the vector inner product for vectors on Rⁿ and it is easy to check that the four properties given above are all satisfied for this product.
We observe now that
⟨Kf₁, f₂⟩ = ∫_U Kf₁(x⃗) f₂(x⃗) dx⃗ = ∫_U (∫_U G(x⃗, y⃗) f₁(y⃗) dy⃗) f₂(x⃗) dx⃗   for all f₁, f₂ ∈ C(Ū).
Note further that
∫_U (∫_U G(x⃗, y⃗) f₁(y⃗) dy⃗) f₂(x⃗) dx⃗ = ∫_U (∫_U G(x⃗, y⃗) f₂(x⃗) dx⃗) f₁(y⃗) dy⃗ = ⟨f₁, K*f₂⟩,
where K*f is defined by
K*f(y⃗) = ∫_U G(x⃗, y⃗) f(x⃗) dx⃗   for any f ∈ C(Ū).
Clearly, K* defines another linear operator on C(Ū). When
⟨Kf₁, f₂⟩ = ⟨f₁, K*f₂⟩   for all f₁, f₂ ∈ C(Ū),
we say that K* is the adjoint of the operator K. Since we know that
G(x⃗, y⃗) = G(y⃗, x⃗)   for all x⃗, y⃗ ∈ U,
it follows that
Kf = K*f   for any f ∈ C(Ū).
We say that the operator K is symmetric. Since
⟨Kf₁, f₂⟩ = ⟨f₁, Kf₂⟩   for all f₁, f₂ ∈ C(Ū),
and K = L⁻¹, it seems reasonable to expect that ⟨Lu, v⟩ = ⟨u, Lv⟩ for all u, v ∈ D. That this is, in fact, the case follows from (3.4).
That is,
⟨Lu, v⟩ = ∫ −v∇ 2 u dx = ∫ −u∇ 2 v dx − ∫ v∂ N u − u∂ N v dS
U
∂U
U
= ∫ −u∇ 2 v dx = ⟨u, Lv⟩
U
for all u, v ∈ D.
Now consider the Neumann problem
−∇²u(x⃗) = f(x), x ∈ U,   ∂_N u(x) = 0, x ∈ ∂U.        (10.3)
Problem (10.3) can be expressed in terms of the following operator,
L_N u(x⃗) = −∇²u(x⃗)   for any u ∈ D_N = {u ∈ C²(Ū) : ∂_N u(x⃗) = 0 for x⃗ ∈ ∂U},
as:
find u ∈ D_N such that L_N u = f ∈ C(Ū).
Although the action of this operator, L_N u, is the same as that of the previously defined operator L, it is not the same operator since they have different domains. In particular, D_N contains all constant functions and these functions belong to the null space of L_N. Then L_N is not invertible. However, the same argument used above shows that L_N is symmetric. Then L_N u = f has no solution unless f satisfies ⟨f, v⟩ = 0 for all constant functions v. If this condition is satisfied, then any two solutions differ by a constant. This fact was already mentioned in the beginning of section 7 but now we see it in a new setting. It is just the analogue of the linear algebra result for singular matrices A.
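The same finite-dimensional picture applies to the Neumann operator. In the sketch below (an illustration, not from the text), the discrete Neumann analogue of −u″ on (0, 1) is singular with the constants in its null space; the system is solvable only when the data is orthogonal to the constants, and solutions then differ by a constant.

# Discrete Neumann analogue of -u'' on (0, 1): a singular, symmetric matrix
# whose null space is spanned by the constant vector.
import numpy as np

m = 50
h = 1.0 / (m - 1)
LN = (np.diag(2.0*np.ones(m)) - np.diag(np.ones(m-1), 1) - np.diag(np.ones(m-1), -1)) / h**2
LN[0, 0] = LN[-1, -1] = 1.0 / h**2          # reflecting (zero-flux) ends

ones = np.ones(m)
print(np.allclose(LN @ ones, 0.0))          # True: constants are in the null space
print(np.allclose(LN, LN.T))                # True: LN is symmetric

rng = np.random.default_rng(1)
f = rng.standard_normal(m)
f -= f.mean()                               # enforce the compatibility condition <f, 1> = 0

u, *_ = np.linalg.lstsq(LN, f, rcond=None)  # one particular solution
print(np.allclose(LN @ u, f))               # True: solvable once f is compatible
print(np.allclose(LN @ (u + 3.0), f))       # True: solutions differ by constants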